Natalya Tatarchuk
3D Application Research Group, ATI Research

What's in It for You?
• Share our lessons of developing an extensive environment for ...
• Sample from the water membrane simulation using the current position
  – Use the (xz) world-space coordinates for the look-up
  – Scaled by an artist parameter to vary with the size of the membrane simulation (and the size of the ripples)
  – To reduce visual repetition, we rotate these coordinates by a pre-specified angle (e.g., 15°)
  – Angle is specified per-object
• No additional geometry is required for water puddles

We sample from the water membrane simulation using the object's current position in world space (the xz Cartesian coordinates) as the look-up texture coordinates into the computed ripple wave normal map. Since our system runs a single ripple simulation for all puddle surfaces due to memory considerations, we overcome this limitation by giving the artists control over the ripple sampling space. To reduce visual repetition of the resulting puddles, we provide a per-object scale parameter for the ripple waves and a rotation angle θo for the ripple look-up. The ripple simulation sample coordinates are rotated in texture space by the specified object angle θo. Note that no additional geometry is required for puddle integration. This approach also enables our system to turn puddle rendering on and off on demand by using a material parameter and the dynamic flow control features of the latest shader models. We also specify a puddle strength parameter to control how much the ripple normals perturb the original bump map, which allows us to create different water motion for various objects.

Puddle Placement and Depth
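The per-object ripple look-up described above can be sketched as follows. All names (gRippleScale, gRippleAngle, sRippleNormalMap) are illustrative, not the shipping ToyShop shader code; this is a minimal sketch assuming the ripple normal map stores normals in the usual [0,1] packed range.

```hlsl
sampler2D sRippleNormalMap;   // computed ripple wave normal map
float  gRippleScale;          // artist-specified per-object ripple scale
float  gRippleAngle;          // artist-specified per-object rotation angle (radians)

float3 SampleRippleNormal( float3 vWorldPos )
{
   // Use the world-space xz coordinates as the look-up position,
   // scaled to match the size of the membrane simulation
   float2 vUV = vWorldPos.xz * gRippleScale;

   // Rotate the look-up in texture space by the per-object angle
   // to reduce visible repetition across puddles
   float s, c;
   sincos( gRippleAngle, s, c );
   float2 vRotatedUV = float2( vUV.x * c - vUV.y * s,
                               vUV.x * s + vUV.y * c );

   // Unpack the normal from the [0,1] texture range to [-1,1]
   return tex2D( sRippleNormalMap, vRotatedUV ).xyz * 2.0f - 1.0f;
}
```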
• To render deep puddles, we use just the water puddle normal sampled as just described, along with the color / albedo attributes of the object
• However, in real environments, puddle depth varies greatly
• To simulate that, we use a puddle depth and location mask map
• Adding puddles with ripples to objects:
– Define the scale parameter and sample ripple normals
– Sample the puddle depth map
– Interpolate between the normal map for the object and the water surface normal based on the puddle depth value

In real environments, water puddle depth and locations differ significantly due to landscape details and rainfall accumulation.
Our system provides complete artistic control over puddle placement and depth with a puddle depth mask. This mask specifies both the location of each puddle in the environment and its depth variation. Adding puddles with dynamic ripples to objects is intuitive with this approach. During rendering, we first sample the puddle depth map for the current depth value d_i. Then the ripple normal map is sampled as described earlier.
We observe that a deep puddle's visual properties depend mainly on the color of the underlying material (for example, the asphalt on the street) and on the geometric properties of the water surface for illumination. As the light rays refract through the water surface, the viewer observes the color properties of the material. However, the actual micro-geometric structure of the surface under the puddle does not influence the appearance of the puddle. Therefore, to modify the apparent puddle depth, we can specify the influence of the water surface normal as compared to the object normal vector with a parameter p_i.
We interpolate between the object normal vector and the water surface normal based on d_i and an artist-specified puddle influence parameter p_i. Using this perturbed normal, we render the objects with water surfaces using the Fresnel equations [Jen01] for water-air refraction and reflection, as well as the material properties of the object as desired.

Creating Swirling Water Puddles
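The normal blending step above can be sketched as a small helper (illustrative names; the depth mask is assumed to store 0 for dry surface and 1 for the deepest puddle):

```hlsl
sampler2D sPuddleDepthMap;   // artist-painted puddle location / depth mask
float gPuddleInfluence;      // artist-specified puddle influence parameter p_i

float3 ComputeWetSurfaceNormal( float2 vUV, float3 vObjectNormal,
                                float3 vWaterNormal )
{
   // Sample the puddle depth mask for the current depth value d_i
   float fDepth = tex2D( sPuddleDepthMap, vUV ).r;

   // Interpolate toward the water surface normal as the puddle deepens;
   // the result is used for Fresnel water-air reflection and refraction
   float3 vN = lerp( vObjectNormal, vWaterNormal,
                     saturate( fDepth * gPuddleInfluence ) );
   return normalize( vN );
}
```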
• Create an impression of water swirling toward the drain, with ripples from the raindrops
• To create ripples from raindrops, use the same approach as for puddles on the street
• For water swirling toward the drain, we used several 'wake' normal maps
  – Swirling radially around the drain
  – Concentric circles converging toward the drain

Water Droplet Animation and Rendering on Glass Surfaces in Real-Time on the GPU

Water Droplets Trickling Down on Glass Surfaces
• Adapted an offline raindrop simulation system from [Kaneda99] to the GPU
  – Dynamically animate and render a large number of water droplets on glass surfaces on the GPU
  – The simulation is modified to use a gather pass in the pixel shader, rather than the original scatter-based particle system implementation
• The droplet shape and motion are influenced by the forces of gravity and the interfacial tension forces, as well as air resistance

We adapted an offline raindrop simulation system from [Kaneda99] to the GPU to dynamically animate and render a large number of water droplets and their streams trickling down glass planes in real time. We animate and render the droplets entirely on the GPU, with the simulation using a gather operation in the pixel shader rather than the original scatter-based particle system implementation. The shape and motion of water droplets are influenced by the forces of gravity and the interfacial tension force, as well as air resistance. We generate the quasi-random meandering of raindrops due to surface tension, and the wetting of the glass surfaces due to water trails left by droplets traveling on the surface. Our system produces a correctly lit droplet appearance, including refraction and reflection effects.
We run the more extensive droplet simulation on the main storefront windows, whereas for the other windows in the environment we use a version that only renders static droplets, with some texture tricks to make them appear to move.
We represent the glass surface as a discrete lattice of cells, each cell (i, j) storing the water mass M(i, j), the velocity v(i, j), and the droplet traversal quantity t(i, j) at that location.
Each lattice cell stores the affinity parameter ka(i, j) (artist-specified or assigned at random from a normal distribution) which describes the hydrophobic or hydrophilic properties of that surface location.
The droplet information is packed into a 16-bit-per-channel RGBA texture.
Droplet Movement Simulation
A droplet begins to trickle down the glass surface when the acting downward forces start to exceed the upward resisting forces on the droplet. The droplet movement direction is determined by external forces acting on the droplet; however, the meandering of the droplet path also depends on the surface properties of the glass (due to impurities and small scratches or grooves). Additionally, we can account for obstacles in the droplet's path, which can be encoded into the cell information.
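The gather-based cell update can be sketched as below. This is a deliberately simplified sketch: all names are illustrative, the force balance is folded into a single critical-mass threshold, and the meandering, affinity, and obstacle terms the text describes are omitted.

```hlsl
sampler2D sDropletState;   // RGBA16 droplet state from the previous frame
float2 gTexelSize;         // 1 / lattice dimensions
float  gCriticalMass;      // mass at which downward forces exceed resistance

float4 UpdateDropletCell( float2 vCellUV )
{
   float4 vState = tex2D( sDropletState, vCellUV );
   float  fMass  = vState.x;

   // Gather: read the cell directly above and check whether its droplet
   // is heavy enough to trickle down into this cell
   float4 vAbove = tex2D( sDropletState,
                          vCellUV - float2( 0, gTexelSize.y ) );
   if ( vAbove.x > gCriticalMass )
      fMass += vAbove.x;          // receive the droplet from above

   // If this cell's droplet now exceeds the threshold, it moves on,
   // leaving a small wetting trail behind (traversal quantity in .z)
   if ( fMass > gCriticalMass )
      return float4( 0, vState.y, vState.z + fMass, vState.w );

   return float4( fMass, vState.yzw );
}
```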
The scene is rendered first, and then we render the water droplet simulation on the window. This allows us to reflect and refract the scene through the individual water droplets. To do that, we use the water density for a given rendered pixel. If there is no water, we simply render the scene with its regular properties. However, if water is present, then we use the water mass as an offset to refract through that water droplet.
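A minimal sketch of this refraction look-up, assuming the scene was rendered to a texture before the glass pass (all names and the offset mapping are illustrative):

```hlsl
sampler2D sSceneTexture;    // the scene rendered before the glass pass
sampler2D sDropletState;    // droplet simulation state; water mass in .r
float gRefractionScale;     // artist-tuned refraction offset strength

float4 RenderGlassPixel( float2 vScreenUV, float2 vCellUV )
{
   float fMass = tex2D( sDropletState, vCellUV ).r;

   if ( fMass <= 0 )
   {
      // No water at this pixel: the plain scene shows through the glass
      return tex2D( sSceneTexture, vScreenUV );
   }

   // Water present: offset the scene look-up by the water mass to
   // refract the scene through the droplet
   float2 vOffset = fMass * gRefractionScale;
   return tex2D( sSceneTexture, vScreenUV + vOffset );
}
```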
Rain Effects - Windshield Wipers
We render bright objects into the half-size reflection buffer [1/4 size for the roof]. We render a variety of objects, for example, all of the scene lamps, telephone pole lights, neon signs, traffic lights, etc. We specify a 10:10:10:2 format for the reflection buffer as well, so that we can preserve most of the light dynamic range (brighter than 1); in our case the range is roughly 0–6.5. To save on draw calls, all street lamps are rendered as one big object (using skinning). We use the same technique as we did for blurring the glow buffer to streak the reflection buffer and simulate water mistiness. We sample from the reflection buffer using the screen-space projection of the input vertex coordinate. We also use the normal in tangent space to account for stretching of the reflection in view space, and warp the reflection based on the surface normal.
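Since a 10:10:10:2 fixed-point buffer only stores values in [0, 1], the 0–6.5 light range has to be rescaled on write and on read. A minimal sketch of that scaling (gReflectionRange and the function names are our own, not from the shipping shaders):

```hlsl
static const float gReflectionRange = 6.5f;   // upper bound of the stored light range

// When rendering a bright object into the 10:10:10:2 reflection buffer:
float3 EncodeReflectionColor( float3 vLitColor )
{
   return vLitColor / gReflectionRange;    // map [0, 6.5] into [0, 1]
}

// When sampling the reflection buffer during scene rendering:
float3 DecodeReflectionColor( float3 vBufferColor )
{
   return vBufferColor * gReflectionRange; // recover the HDR light range
}
```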
Rendering View-Dependent Reflections
• Render reflections of complex arbitrary objects by rendering their impostors
  – Approximating the geometry of the reflected scene
  – Render reflector impostors into a separate reflection buffer
• Render the reflector objects into billboard reflector impostors
  – For bright light sources and objects and for dark objects
  – Render the impostors fully lit with the simplified material shaders for accurate dynamic reflection appearance
  – Reflection material shaders saturate the dominant colors

We render reflections of complex arbitrary objects by using impostors that approximate the geometry of the reflected scene. We render the reflector objects into billboard reflector impostors (as described in [MS01]) both for the bright light sources and for the dark objects (such as the telephone poles). We render the impostors lit with the scene illumination using manually simplified material shaders to ensure accurate reflection appearance; however, the reflection material shaders strongly saturate the dominant colors. The dynamic lighting allows us to represent reflected animated light sources (such as a flickering neon light or blinking traffic lights in the streets) correctly. The reflections attenuate simultaneously with their corresponding reflector objects.
Reflection Impostor Rendering
The reflection impostors are dynamically stretched toward the viewer in a view-dependent manner in the vertex shader. The amount of stretching varies depending on the distance of the object to the viewer. The reflection buffer is down-scaled to half the size of the original rendering buffer, and we use HDR texture formats to preserve the range of the reflections. A post-processing blurring technique is used to dynamically streak the reflection buffer in the vertical direction to simulate warping due to raindrops striking the puddles. Note that this is done in separate passes from the regular scene post-processing. The downsampling of the reflection buffer provides additional blurring for the reflections. To render objects with the stretched reflections, we sample from the reflection buffer using the screen-space projection of the input vertex coordinate for each reflective object. We use the object's per-pixel normal in tangent space to account for stretching of the reflection in view space and distort the reflection based on the surface normal. The post-process-based blurring and further warping alleviate specular aliasing and excessive flickering from reflections, which would otherwise be highly distracting.
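The screen-space reflection look-up with normal-based warping can be sketched in a pixel shader as follows (illustrative names; vProjPos is assumed to be the clip-space position passed down from the vertex shader):

```hlsl
sampler2D sReflectionBuffer;   // blurred, vertically streaked reflection buffer
float gWarpScale;              // how strongly the normal distorts the look-up

float3 SampleStreakyReflection( float4 vProjPos, float3 vTangentNormal )
{
   // Project to screen space: clip coords [-1,1] -> texture coords [0,1]
   // (y is flipped to match texture addressing)
   float2 vScreenUV = vProjPos.xy / vProjPos.w * float2( 0.5f, -0.5f ) + 0.5f;

   // Warp the look-up by the per-pixel tangent-space normal so the
   // reflection stretches and wobbles with the surface detail
   vScreenUV += vTangentNormal.xy * gWarpScale;

   return tex2D( sReflectionBuffer, vScreenUV ).rgb;
}
```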
Incorporating Streaky Reflections
• During the final rendering of the scene, sample from the reflection buffer using the screen-space projection of the vertex
  – Use the object's per-pixel normal in tangent space to account for stretching of the reflection in view space
  – Warp the reflection based on the surface normal for the reflecting object
• To make the wet reflections appear blurry and streaky, we use the post-processing pipeline and apply a Kawase-style blur in the vertical direction only
  – This creates a feeling of misty reflections that are dynamically distorted by the raindrops
  – This is a separate pass from the regular scene post-processing
  – Additional benefit: this alleviates specular aliasing and excessive flickering of reflections
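One pass of the vertical-only streak blur might look like the sketch below. The tap count and spacing are illustrative; a Kawase-style blur widens the sample offset on each successive pass to grow the streak cheaply.

```hlsl
sampler2D sSource;     // reflection buffer from the previous blur pass
float gTexelHeight;    // 1 / buffer height
float gSpread;         // pass-dependent offset multiplier, grows each pass

float4 VerticalStreakBlur( float2 vUV )
{
   // Average vertically offset taps only, so the blur streaks the
   // reflections up and down without smearing them horizontally
   float fOffset = gTexelHeight * gSpread;
   float4 vSum = tex2D( sSource, vUV + float2( 0, -fOffset ) )
               + tex2D( sSource, vUV )
               + tex2D( sSource, vUV + float2( 0,  fOffset ) );
   return vSum / 3.0f;
}
```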
Results Discussion

Test System: ATI ToyShop
• Implemented in DirectX 9.0c
  – Using HLSL SM 3.0 shaders
  – Lua scripting language for rendering scripts
• Roughly 300 shaders for various rain-related components
  – ~500 individual shaders for the entire system
• Although our environment is a night-time scene, the algorithms can be applied in a variety of lighting environments
• Environment geometry, textures and related offscreen buffers used ~240 MB of video memory
  – High resolution textures were used to capture the extreme detail of the represented world
• Approximately 5K-20K particles were used for raindrops and rain splashes

Our test system consisted of rendering several city blocks in stormy weather, with animated vehicles and other objects in the scene. We used DirectX 9.0c with HLSL shaders to implement all of our effects, and the Lua scripting language to create the rendering scripts for the post-processing system and lightning integration. For rendering rain-related effects, nearly 300 unique shaders were used, with more than 500 used to render the entire complex environment in full.
The rain-related shaders included various object shaders for wet materials, dynamic water simulation shaders, view-dependent reflections, raindrops, rain splashes, misty halos around objects, composite rainfall layer rendering, water droplet rendering, and so on. Although our example video contains a rendering of a night scene, the approaches presented in this paper can be successfully used in a variety of lighting environments, including daytime renderings. The environment geometry, textures, and rain-related offscreen buffers used 240 MB of video memory. In order to create a realistic environment, high-resolution textures were used to capture the extreme detail of the represented world. For rendering individual falling raindrops and their splashes we used from 5,000 to 20,000 particles, depending on the particular scene.