PixelMotionBlur Sample
Why is this sample interesting? Motion blur adds realism to a scene and creates the perceptual effect of high-speed motion. Instead of rendering the geometry multiple times with different alpha values to create a blur effect, this sample shows off a realistic image-based motion blur effect. While the scene is rendered, the shaders record each pixel's velocity relative to the previous frame. This per-pixel velocity is then used in a post-process pass to blur the pixels in the final image.
How does this sample work? In CMyD3DApplication::RestoreDeviceObjects(), the sample creates the resources the effect needs: the full-screen render target that receives the scene color, and the two D3DFMT_G16R16F render targets (CurFrameVelocityTexture and LastFrameVelocityTexture) that hold the current and the previous frame's per-pixel velocity.
In CMyD3DApplication::FrameMove(), the sample animates the scene and saves each frame's world * view * projection matrix, so that on the next frame the vertex shader has both the current and the previous matrix available to compute per-vertex velocity.
CMyD3DApplication::Render() essentially does two things: it renders the scene with the technique "WorldWithVelocity", writing color and per-pixel velocity to separate render targets, and it then runs a full-screen post-process pass with the technique "PostProcessMotionBlur" that blurs the image along each pixel's velocity.
In more detail, the first part of CMyD3DApplication::Render() uses the technique "WorldWithVelocity", which uses a vs_2_0 vertex shader called "WorldVertexShader". This vertex shader transforms the vertex position into screen space, performs a simple N dot L lighting equation, copies the texture coordinates, and computes the velocity of the vertex by transforming it by both the current and the previous world * view * projection matrix and taking the difference. That difference is the amount the vertex has moved in screen space since the last frame. This per-vertex velocity is passed to the pixel shader via TEXCOORD1. The technique "WorldWithVelocity" then uses a ps_2_0 pixel shader called "WorldPixelShader", which modulates the diffuse vertex color with the mesh's texture color and writes the result to COLOR0, and also writes the pixel's per-frame velocity to COLOR1. The render target bound to COLOR1 is a D3DFMT_G16R16F texture, a floating-point format with 16 bits per channel. The red channel records how far the pixel has moved in screen space since the last frame in the X direction, and the green channel records the Y delta. The interpolation from per-vertex velocity to per-pixel velocity happens automatically because the velocity is passed to the pixel shader through a texture coordinate.

The second part of CMyD3DApplication::Render() is where the motion blur actually happens. It uses the technique "PostProcessMotionBlur", which uses a ps_2_0 pixel shader called "PostProcessMotionBlurPS". This pixel shader does two texture lookups to get the per-pixel velocity of the current frame (from CurFrameVelocityTexture) and of the last frame (from LastFrameVelocityTexture). If it used only the current frame's pixel velocity, the blur would not cover the area the object occupied last frame, because the current velocity at those pixels is 0. Alternatively, a filter could check whether any neighbors have a non-zero velocity, but that requires many texture lookups, which are limited under ps_2_0, and it can still fail if the object moved too far. Instead, the shader uses either the current or the last frame's velocity, choosing whichever has the greater magnitude. Once it has chosen the pixel's velocity, it takes 12 samples along the direction of that velocity, accumulates the colors, and divides by the number of samples, outputting this average as the final color for the pixel.
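The scene pass's shaders might look roughly like the following HLSL. This is a minimal sketch, not the sample's actual effect file; the uniform, sampler, and struct names (g_mWorldViewProjection, g_mWorldViewProjectionLast, g_mWorld, g_vLightDir, MeshSampler) are assumptions.

```hlsl
// Sketch of the scene pass. Names and details are assumptions; see the
// sample's .fx file for the real implementation.

float4x4 g_mWorldViewProjection;      // current frame's world * view * projection
float4x4 g_mWorldViewProjectionLast;  // previous frame's world * view * projection
float4x4 g_mWorld;                    // world matrix, used for lighting
float3   g_vLightDir;                 // normalized direction toward the light
texture  g_txMeshTexture;

sampler MeshSampler = sampler_state { Texture = <g_txMeshTexture>; };

struct VS_OUTPUT
{
    float4 Pos      : POSITION;
    float4 Diffuse  : COLOR0;
    float2 Tex      : TEXCOORD0;
    float2 Velocity : TEXCOORD1;   // per-vertex screen-space velocity
};

VS_OUTPUT WorldVertexShader(float4 vPos    : POSITION,
                            float3 vNormal : NORMAL,
                            float2 vTex    : TEXCOORD0)
{
    VS_OUTPUT o;

    // Transform the vertex by the current and the previous frame's matrices.
    float4 posCur  = mul(vPos, g_mWorldViewProjection);
    float4 posLast = mul(vPos, g_mWorldViewProjectionLast);
    o.Pos = posCur;

    // Simple N dot L lighting.
    float3 normal = normalize(mul(vNormal, (float3x3)g_mWorld));
    float NdotL = saturate(dot(normal, g_vLightDir));
    o.Diffuse = float4(NdotL, NdotL, NdotL, 1.0f);

    // Copy the texture coordinates.
    o.Tex = vTex;

    // Screen-space delta since the last frame: divide by w to get
    // normalized device coordinates before taking the difference.
    o.Velocity = posCur.xy / posCur.w - posLast.xy / posLast.w;
    return o;
}

struct PS_OUTPUT
{
    float4 Color    : COLOR0;   // scene color render target
    float4 Velocity : COLOR1;   // D3DFMT_G16R16F velocity render target
};

PS_OUTPUT WorldPixelShader(VS_OUTPUT In)
{
    PS_OUTPUT o;
    // Modulate the diffuse vertex color with the mesh texture.
    o.Color = In.Diffuse * tex2D(MeshSampler, In.Tex);
    // X delta lands in the red channel, Y delta in the green channel.
    o.Velocity = float4(In.Velocity, 1.0f, 1.0f);
    return o;
}

technique WorldWithVelocity
{
    pass P0
    {
        VertexShader = compile vs_2_0 WorldVertexShader();
        PixelShader  = compile ps_2_0 WorldPixelShader();
    }
}
```

The post-process pass could be sketched like this, under the same caveat. The sampler names are assumptions, and the conversion of the clip-space velocity into texture-coordinate space (halving it and flipping Y) is glossed over; assume here that the velocity targets already store texture-space deltas.

```hlsl
// Sketch of the post-process motion blur pass; names are assumptions.

static const int g_iNumSamples = 12;

sampler SceneSampler             : register(s0);  // rendered scene color
sampler CurFrameVelocitySampler  : register(s1);  // this frame's velocity
sampler LastFrameVelocitySampler : register(s2);  // last frame's velocity

float4 PostProcessMotionBlurPS(float2 Tex : TEXCOORD0) : COLOR0
{
    // Fetch the per-pixel velocity for the current and the previous frame.
    float2 velCur  = tex2D(CurFrameVelocitySampler, Tex).rg;
    float2 velLast = tex2D(LastFrameVelocitySampler, Tex).rg;

    // Use whichever velocity has the greater magnitude, so pixels the
    // object has just vacated still get blurred.
    float2 velocity =
        (dot(velCur, velCur) > dot(velLast, velLast)) ? velCur : velLast;

    // Take 12 samples along the velocity direction and average them.
    float3 color = 0;
    float2 sampleStep = velocity / g_iNumSamples;
    float2 uv = Tex;
    for (int i = 0; i < g_iNumSamples; i++)
    {
        color += tex2D(SceneSampler, uv).rgb;
        uv += sampleStep;
    }
    return float4(color / g_iNumSamples, 1.0f);
}

technique PostProcessMotionBlur
{
    pass P0
    {
        PixelShader = compile ps_2_0 PostProcessMotionBlurPS();
    }
}
```

Choosing the larger of the two velocities is a cheap stand-in for a real neighborhood search: it costs one extra texture lookup per pixel instead of many, at the price of slightly over-blurring areas an object has just left.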
For example, here is an original unblurred scene:
And here's what a blurred scene looks like:
What are the pitfalls of this technique?
If the pixels in an object have moved too far since the last frame, the gaps between the individual samples become noticeable:
To correct this, you can either increase the framerate so that the pixels do not move as far between frames or reduce the blur factor.
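In the post-process sketch above, reducing the blur factor would amount to scaling the chosen velocity by a constant before the sampling loop; the constant's name here is hypothetical:

```hlsl
// Hypothetical blur-factor constant, settable by the app each frame.
float g_fBlurFactor = 0.5f;   // 1.0 = full one-frame streak, 0.0 = no blur

// In PostProcessMotionBlurPS, scale the chosen velocity before sampling,
// e.g.: velocity = ApplyBlurFactor(velocity);
float2 ApplyBlurFactor(float2 velocity)
{
    return velocity * g_fBlurFactor;
}
```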