HDRCubeMap Sample
User's Guide
Sample Overview
HDRCubeMap shows off two rendering techniques: cubic environment-mapping and high dynamic range lighting. Cubic environment-mapping is a technique in which the environment surrounding a 3D object is rendered into a cube texture map, so that the object can use the cube map to achieve complex lighting effects without expensive lighting calculations. High dynamic range lighting is a technique for rendering highly realistic lighting effects by using floating-point textures and high-intensity lights. Floating-point texture formats were introduced in DirectX 9.0. Unlike traditional integer-format textures, floating-point textures can store a wide range of color values. Because color values in floating-point textures are not clamped to [0, 1], much like light in the real world, these textures can be used to achieve greater realism.
The scene that HDRCubeMap renders is an environment-mapped mesh, along
with several other objects making up the environment around the mesh. The mesh
has a reflectivity setting adjustable from 0 to 1, where 0 means the mesh absorbs
light completely, and 1 means it absorbs no light and reflects it all. The lights
used in the scene consist of four point lights, whose intensity is adjustable
by the user.
The scene that the sample renders consists of the environment-mapped mesh, the objects that make up the surrounding environment, and spheres representing the light sources.
The camera used in the sample is a CModelViewerCamera. This camera always looks at the origin, where the environment-mapped mesh is. The user can move the camera position around the mesh and rotate the mesh with mouse control. The camera object computes the view and projection matrices for rendering, as well as the world matrix for the environment-mapped mesh.
The Rendering Code
RenderScene() is the function that does the actual work of rendering the scene onto the current render target. It is called both to render the scene onto the cube texture and to render it onto the device back buffer. It takes three parameters: a view matrix, a projection matrix, and a flag indicating whether it should render the environment-mapped mesh. The flag is needed because the environment-mapped mesh is not drawn when constructing the environment map. The rest of the function is straightforward: it first updates the effect object with the latest transformation matrices, as well as the light positions in view space, then renders every mesh object in the scene using the appropriate technique for each one.
RenderSceneIntoCubeMap()'s job is to render the entire scene (minus the environment-mapped mesh) onto the cube texture. First, it saves the current render target and depth-stencil buffer, and sets the depth-stencil surface for the cube texture as the device depth-stencil buffer. Next, the function iterates through the six faces of the cube texture. For each face, it sets the appropriate face surface as the render target. Then, it computes the view matrix to use for that particular face, with the camera at the origin looking out in the direction of the cube face. It then calls RenderScene() with bRenderEnvMappedMesh set to false, passing along the computed view matrix and a special projection matrix. This projection matrix has a 90-degree field of view and an aspect ratio of 1, since the render target is a square. After this process is complete for all six faces, the function restores the old render target and depth-stencil buffer. The environment map is now fully constructed for the frame.
Render() is the top-level rendering function called once per frame by the sample framework. It first calls FrameMove() for the camera object, so that the matrices managed by the camera can be updated. Next, it calls RenderSceneIntoCubeMap() to construct the cube texture that reflects the environment for that frame. After that, it renders the scene by calling RenderScene() with the view and projection matrices from the camera, and bRenderEnvMappedMesh set to true.
The Shaders
The RenderLight technique is used for rendering the spheres representing the light sources. The vertex shader does the usual world-view-projection transformation, then assigns the light intensity value to the output diffuse color. The pixel shader propagates the diffuse color to the pipeline.
The RenderHDREnvMap technique is used for rendering the environment-mapped mesh in the scene. Besides transforming the position from object space to screen space, the vertex shader computes the eye reflection vector (the reflection of the eye-to-vertex vector) in view space. This vector is passed to the pixel shader as a cubic texture coordinate. The pixel shader samples the cube texture, applies the reflectivity to the sampled value, and returns the result to the pipeline.
The RenderScene technique is used to render everything else. Objects rendered by this technique employ per-pixel diffuse lighting. The vertex shader transforms the position from object to screen space. Then, it computes the vertex position and normal in view space and passes them to the pixel shader as texture coordinates. The pixel shader uses this information to perform per-pixel lighting. A for-loop computes the amount of light received from each of the four light sources. The diffuse term of the pixel is the dot product of the normal and the unit vector from the pixel to the light. The attenuation term is the reciprocal of the square of the distance between the pixel and the light. These two terms are modulated to represent the amount of light the pixel receives from that light source. Once this is done for all lights, the values are summed and modulated with the light intensity and the texel to form the output.
High Dynamic Range Realism
When DirectX 9.0's new floating-point cube texture is used with lights whose intensity is higher than 1, the texture is capable of storing color values higher than 1.0, thus preserving true luminance even after the reflectivity multiplication. The figure below shows this.
Alternatives