
SmallBurger Asset Home

  SmallBurger

Monday, May 12, 2025

DepthEdgeFlow — Geometry-Free Soft Edge FX for Unity URP

Simulate shoreline motion and fog boundaries using depth reconstruction and noise, without any vertex displacement, tessellation, or raymarching. Optimized for mobile platforms.

Simulating shoreline transitions in Unity often involves vertex displacement or mesh tessellation —
which can be expensive or even impractical on mobile platforms.

That’s what led me to design a screen-space alternative I now call **DepthEdgeFlow** —
a technique born from my ongoing effort to bring geometry-free solutions to real-time mobile graphics.

Let’s take a look at how it works, starting from a common shoreline scenario.


With a traditional ocean, the triangle count climbs even higher.

Let us first examine the traditional approach to simulating water surfaces.
This typically involves geometric tessellation to approximate wave motion.
A common implementation displaces vertex positions directly within the Vertex Shader, producing an oscillating surface that mimics wave dynamics.

However, this technique has inherent limitations:
When the mesh lacks sufficient vertex density, the surface deformation appears overly rigid, resulting in a “block-like” movement that resembles gelatin — visually unrealistic and lacking fine detail.

Conversely, increasing vertex density to achieve higher visual fidelity
can significantly impact performance on mobile platforms,
often causing frame rate degradation before the water surface even enters view.

These issues are further amplified in large-scale oceanic environments,
where the computational cost scales poorly with area coverage.

Let’s take a look at the top-down view scenario.


A traditional water surface in a top-down (god view) scenario.

From this perspective, the only area where water surface undulation is actually visible is the narrow zone near the shoreline. The large central area of the water shows little visual variation, and players typically won’t pay much attention to it either.

So we started asking ourselves:
Is there a way to preserve only the detailed changes near the shore,
while skipping unnecessary simulation for the rest of the water surface?

Let’s start by watching the video below.


You’ll notice subtle vertical motion along the shoreline —
the water appears to ripple as if it’s being displaced by actual geometry.
But here’s the twist: there’s no tessellation or vertex deformation involved.

If you’ve implemented water shaders before, you might be familiar with a common technique:
reconstructing world position from the depth texture,
then comparing it to the water level to control transparency near the shore.

In this case, we simply added a bit of noise to the reconstructed height,
introducing minor fluctuations that create the illusion of dynamic movement.
This adds depth and natural detail — with virtually no extra cost.

What you saw in the video is the result of this entire process.
I call this approach Depth Edge Flow.
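Before diving into the HLSL, here's a minimal CPU-side sketch of the core idea in Python. The sine stand-in for Perlin noise and every parameter value are my own illustrative assumptions, not the asset's actual implementation:

```python
import math

def saturate(x):
    return max(0.0, min(1.0, x))

def soft_edge_flow_weight(ground_y, water_y, t,
                          depth_range=1.0, depth_pow=2.0,
                          noise_amp=0.2, noise_speed=1.5,
                          x=0.0, z=0.0):
    # Stand-in for the shader's Perlin noise: a scrolling sine wave.
    noise = math.sin(x * 3.1 + z * 1.7 + t * noise_speed)
    # Jitter the water height, then compare against the (reconstructed)
    # ground height, exactly as the depth-fade technique does.
    surface_height = water_y + noise * noise_amp
    depth_ratio = saturate((surface_height - ground_y) / depth_range)
    return 1.0 - (1.0 - depth_ratio) ** depth_pow

print(soft_edge_flow_weight(ground_y=-5.0, water_y=0.0, t=0.0))  # deep water: 1.0
print(soft_edge_flow_weight(ground_y=0.0, water_y=0.0, t=0.0))   # shoreline: 0.0
```

Because the noise term scrolls with time, the weight flickers only where the ground is close to the water level — which is precisely the shoreline band.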

Next, let’s take a look at the implementation details of this approach.

In DepthEdgeFlow.hlsl, there's a function called CalculateSoftEdgeFlowWeight. From the Fragment Shader, we simply pass in the pixel position (positionCS.xy), positionWS, DepthParams, and PerlinNoiseParams to compute the SoftEdgeFlowWeight.

half CalculateSoftEdgeFlowWeight(float2 positionCS2, float3 positionWS,
                                 // x: range, y: power, z: offset of surfaceHeight
                                 float3 depthParams,
                                 // x: worldScale, y: noiseSpeed, z: rangeRatio
                                 float3 perlinNoiseParams)
{
    float2 offsetXZ = positionWS.xz * perlinNoiseParams.x +
                      _Time.y * perlinNoiseParams.y;
    float surfaceHeight = positionWS.y + Snoise(offsetXZ) *
                          perlinNoiseParams.z - depthParams.z;
    return GetSoftEdgeFlowWeight(positionCS2, surfaceHeight,
                                 depthParams.x, depthParams.y);
}

If you have your own noise implementation, you can also directly call GetSoftEdgeFlowWeight and pass in the computed height value.

half GetSoftEdgeFlowWeight(float2 positionCS2, float surfaceHeight,
                           float depthRange, float depthPow)
{
    float3 groundPosition = ComputeWorldSpacePositionFromDepth(positionCS2);
    float depthRatio = saturate((surfaceHeight - groundPosition.y) / depthRange);
    depthRatio = 1.0 - pow(1.0 - depthRatio, depthPow);
    return depthRatio;
}
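The `1.0 - pow(1.0 - depthRatio, depthPow)` line deserves a closer look: it remaps the linear depth ratio so the weight rises quickly near the shoreline and flattens out toward deep water. A quick Python sketch of just that curve (sample values are illustrative):

```python
def remap(depth_ratio, depth_pow):
    # Same curve as the shader: 1 - (1 - x)^p.
    # p = 1 leaves the ratio linear; larger p steepens the rise near 0.
    return 1.0 - (1.0 - depth_ratio) ** depth_pow

for p in (1.0, 2.0, 4.0):
    print(p, [round(remap(x, p), 4) for x in (0.0, 0.25, 0.5, 1.0)])
```

At depthPow = 2, a pixel only a quarter of the way into the depth range already gets a weight of about 0.44, so the soft edge hugs the shoreline instead of spreading evenly across the fade band.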

As for ComputeWorldSpacePositionFromDepth, it calls Unity's built-in ComputeWorldSpacePosition function, which already handles the inverse projection from depth back to world space. We just need to provide the screen-space UV, the depth value, and UNITY_MATRIX_I_VP.

float3 ComputeWorldSpacePositionFromDepth(float2 positionCS2)
{
#if UNITY_REVERSED_Z
    float depth = LoadSceneDepth(positionCS2.xy);
#else
    // Adjust z to match NDC for OpenGL
    float depth = lerp(UNITY_NEAR_CLIP_VALUE, 1,
                       LoadSceneDepth(positionCS2.xy));
#endif
    float2 positionSS = positionCS2 * _ScreenSize.zw;
    return ComputeWorldSpacePosition(positionSS, depth, UNITY_MATRIX_I_VP);
}
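One detail that often trips people up here is the platform branch: on reversed-Z platforms (D3D, Metal, Vulkan) the raw [0, 1] depth can be fed straight to the inverse projection, while on OpenGL it must first be remapped into the [-1, 1] NDC range via UNITY_NEAR_CLIP_VALUE. A small Python sketch of that lerp (the constants mirror Unity's conventions, but treat this as an illustration, not Unity source):

```python
def to_ndc_depth(raw_depth, reversed_z, near_clip_value):
    # Mirrors the #if UNITY_REVERSED_Z branch above: reversed-Z platforms
    # use the raw [0, 1] depth as-is; OpenGL remaps it into
    # [near_clip_value, 1], i.e. [-1, 1].
    if reversed_z:
        return raw_depth
    return near_clip_value + (1.0 - near_clip_value) * raw_depth  # lerp

print(to_ndc_depth(0.5, reversed_z=True, near_clip_value=1.0))    # 0.5, unchanged
print(to_ndc_depth(0.5, reversed_z=False, near_clip_value=-1.0))  # 0.0
```

Skipping this remap is a classic source of "the reconstruction works in the editor but breaks on Android GL" bugs.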

For an actual call site, I'll use the AreaFog Fragment Shader as an example. This fragment simply computes an alpha transition weight, softEdgeFlowWeight.

half4 FragmentProgram(VertexOutput input) : SV_Target
{
    real3 normalWS = real3(0.0, 1.0, 0.0);

    half softEdgeFlowWeight = CalculateSoftEdgeFlowWeight(
        input.positionCS.xy, input.positionWS,
        float3(_Depth, _DepthPow, _OffestHeight),
        float3(_PerlinWorldScale, _PerlinNoiseSpeed, _PerlinRnageRatio));
    half maskWeight = SAMPLE_TEXTURE2D(_MaskMap, sampler_MaskMap,
        input.baseUV).r;

#ifdef VIEW_SOFT_EDGE_FLOW
    return half4(softEdgeFlowWeight, 0.0, 0.0, 1.0);
#else
    return half4(_Color, softEdgeFlowWeight * maskWeight);
#endif
}


However, SoftEdgeFlowWeight doesn’t have to drive an alpha transition.
For a water shoreline, for example, what we want is a refraction distortion effect.
In that case, we use URP’s After Opaque pass and apply a depth-based, noise-driven distortion to blend with the background.
The water itself remains an opaque object.

half4 FragmentProgram(VertexOutput input) : SV_Target
{
    InputData inputData;
    half3 normalTS;
    InitializeInputData(input, inputData, normalTS);

    half softEdgeFlowWeight = CalculateSoftEdgeFlowWeight(
        input.positionCS.xy, input.positionWS,
        float3(_Depth, _DepthPow, _OffestHeight),
        float3(_PerlinWorldScale, _PerlinNoiseSpeed, _PerlinRnageRatio));

    half3 diffuse = lerp(_ShallowColor, _DeepColor, softEdgeFlowWeight);
    half4 specularGloss = half4(1.0, 1.0, 1.0, 1.0);
    half smoothness = 0.5;
    half3 emission = half3(0, 0, 0);
    half alpha = 1.0;

    half3 lightResult = UniversalFragmentBlinnPhong(inputData, diffuse,
        specularGloss, smoothness, emission, 1.0, normalTS).rgb;

    // Handle refraction blending
    half2 distortionScreenUV = input.positionSS.xy / input.positionSS.w +
        _RefractionDistortion * softEdgeFlowWeight * inputData.normalWS.xz;
    half3 sceneColor = SampleSceneColor(distortionScreenUV);
    return half4(lerp(sceneColor, lightResult, softEdgeFlowWeight), 1.0);
}
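To make the refraction blend concrete, here's a hedged Python sketch of the last three lines: the screen UV is nudged along the surface normal's XZ, scaled by the flow weight, and the distorted background is then cross-faded with the lit water colour. Here `scene_sample` is a stand-in for SampleSceneColor, and the distortion strength is an assumed value:

```python
def refract_and_blend(screen_uv, normal_xz, scene_sample, lit_color,
                      weight, distortion=0.05):
    # Offset the screen-space UV along the normal, scaled by the weight...
    u = screen_uv[0] + distortion * weight * normal_xz[0]
    v = screen_uv[1] + distortion * weight * normal_xz[1]
    scene = scene_sample(u, v)  # stand-in for SampleSceneColor
    # ...then lerp(sceneColor, lightResult, weight) per channel.
    return tuple(s + (l - s) * weight for s, l in zip(scene, lit_color))

flat_bg = lambda u, v: (0.1, 0.2, 0.3)
# weight = 0 (water's edge): pure undistorted background.
print(refract_and_blend((0.5, 0.5), (0.0, 1.0), flat_bg, (0.0, 0.3, 0.6), 0.0))
# weight = 1 (deep water): fully the lit water colour.
print(refract_and_blend((0.5, 0.5), (0.0, 1.0), flat_bg, (0.0, 0.3, 0.6), 1.0))
```

Because the distortion amount is multiplied by the same weight, the wobble fades out smoothly at the water's edge rather than cutting off at a hard boundary.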


Here's the related video:


DepthEdgeFlow is simple, efficient, and flexible —
ideal for situations where visual fidelity matters but resources are limited.

If you find ways to expand or adapt it for your own use case,
I’d love to see what you come up with.

### 🔗 GitHub Open Source
You can try DepthEdgeFlow yourself — the shader and sample scenes are open source:

Includes demo scenes, shader code, and URP integration steps.
