Teapot War

This project was made in RenderMonkey, using HLSL (Shader Model 2.0 and 3.0).


1. Sky

For rendering the surrounding environment, a large cube is drawn. In the pixel shader, we compute
the vector that goes from the viewer to the given pixel (in world coordinates). The HLSL intrinsic
texCUBE is used to address the cube texture containing the environment and to retrieve the colour at the
point where this vector intersects the cube.
The cube should appear to be at an infinite distance from the observer, and there are two options: translate the
cube so that the observer is always at its centre, or make the cube very large. I chose to make the
cube large, as the first method did not look good when the cube intersected the terrain.
However, the cube was not made too large, so that it does not exceed the default far plane of the camera.
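A minimal sketch of the sky pixel shader described above, assuming the vertex shader passes the world-space position of the vertex and that the cube sampler and camera position are provided as uniforms (the names are illustrative, not necessarily the ones used in the workspace):

samplerCUBE envCube;    // cube texture containing the environment
float3 viewPosition;    // camera position in world space (assumed uniform)

float4 ps_sky(float3 worldPos : TEXCOORD0) : COLOR0
{
    // vector from the viewer towards the current pixel, in world coordinates
    float3 dir = normalize(worldPos - viewPosition);
    // texCUBE follows this direction and returns the environment colour
    return texCUBE(envCube, dir);
}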

Figure 1: left: cube mapping applied on a cube; the black rectangle indicates the edge of the cube. The edge
would not be noticeable if the cube were larger. Right: sphere mapping using the same technique.

2. Terrain

The terrain is bump mapped using a normal map, and the lighting is computed in
tangent space. The colour of the terrain is obtained by locally blending two textures.


Figure 2 Pass “Terrain”: one texture represents the sand, used below the height y0, and another texture is used for the peaks, above y0. In the interval [y0-delta, y0+delta] the colour of the terrain is obtained by smoothly interpolating between the two textures.
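A minimal sketch of the height-based blend from the caption above, assuming the two diffuse samplers and the blend parameters are uniforms (illustrative names):

sampler2D sandTex;   // texture used below y0
sampler2D rockTex;   // texture used for the peaks, above y0
float y0;            // blend height
float delta;         // half-width of the blend interval

float4 blendTerrain(float2 uv, float height)
{
    float4 sand = tex2D(sandTex, uv);
    float4 rock = tex2D(rockTex, uv);
    // 0 below y0 - delta, 1 above y0 + delta, smooth in between
    float k = smoothstep(y0 - delta, y0 + delta, height);
    return lerp(sand, rock, k);
}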

It would have been better to use a height map to generate the terrain, but vertex shader 3.0 has
limitations where texture sampling is concerned.

3. Burning teapot

Step 1: rendering the teapot with an animated texture
Figure 3 Teapot with animated texture, two different frames

The teapot is textured using a noise-animated texture. The initial idea was to render the teapot into
a texture that is blurred in two passes and then perturbed using a noise texture. That texture
could then have been placed behind the teapot (like a halo effect), giving the impression of a burning
teapot. However, better visual results were obtained by placing a textured quad in front of the
teapot, with a per-pixel fire perturbation. Therefore, the passes “RenderToTex”, “Blur1” and
“Blur2” have been discarded, but they are still included in the workspace.
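A minimal sketch of such a per-pixel fire perturbation, assuming a scrolling noise texture offsets the lookup into the fire texture (samplers, scroll speed and perturbation strength are illustrative):

sampler2D fireTex;    // fire colour texture
sampler2D noiseTex;   // tileable noise texture
float time;           // time variable provided by RenderMonkey

float4 ps_fire(float2 uv : TEXCOORD0) : COLOR0
{
    // scroll the noise upwards over time and centre it around zero
    float2 noise = tex2D(noiseTex, uv + float2(0.0, -time * 0.3)).rg - 0.5;
    // perturb the lookup into the fire texture with the noise
    return tex2D(fireTex, uv + noise * 0.08);
}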

Step 2: rendering the quad using the animated texture

Figure 4 Left: A quad with an animated texture is drawn in front of the teapot. Right: the same polygon is drawn with the reverse blending function. The position of the polygon and the transparent part of it can be noticed.

The quad is always perpendicular to the view direction and is placed at the teapot's position.
However, the polygon is displaced slightly towards the observer, so that it always stays in front of
the teapot as the viewer rotates around the object.
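A minimal sketch of building such a quad in the vertex shader, assuming the camera right/up vectors, the camera position and the teapot position are available as uniforms (all names and the push distance are illustrative):

float4x4 matViewProjection;
float3 teapotPos;      // world position of the teapot
float3 viewPosition;   // camera position in world space (assumed uniform)
float3 cameraRight;    // camera right vector in world space (assumed uniform)
float3 cameraUp;       // camera up vector in world space (assumed uniform)

float4 vs_fireQuad(float4 inPos : POSITION) : POSITION
{
    // inPos.xy holds the corner offset of the quad; the quad is built
    // perpendicular to the view direction, at the teapot position, and
    // pushed slightly towards the observer so it stays in front of the teapot
    float3 worldPos = teapotPos
                    + cameraRight * inPos.x
                    + cameraUp * inPos.y
                    + normalize(viewPosition - teapotPos) * 0.5;
    return mul(matViewProjection, float4(worldPos, 1.0));
}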

4. Robot-like teapot

Figure 5: Robot-like teapot

The robot is obtained using the vertex shader, by displacing the model’s vertices using specific
functions, as described in Appendix C.

5. Tank-like teapot


Figure 6: The diffuse and specular components of the light show that the teapot normals have been adjusted

The vertices and normals have been displaced as in the previous effect. The sides of the tank have
been flattened into planes, and the normals at the sides are set to the normals of those planes. The
tank is animated, so that it moves itself as well as its guns over time. Periodic functions like
time0_X and sin(k*time) have been used. However, discontinuities can occur when the time resets
to 0; this can be adjusted by changing the time cycle in RenderMonkey.
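A minimal sketch of flattening one side into a plane and assigning the plane normal, in the spirit of the transformation above (the plane x = xSide and the hard cut-off are illustrative; a smooth selection as in Appendix C could be used instead):

// project the vertices beyond xSide onto the plane x = xSide and
// give them the normal of that plane
void flattenSide(inout float3 pos, inout float3 normal, float xSide)
{
    if (pos.x > xSide)
    {
        pos.x = xSide;                  // the side becomes a plane
        normal = float3(1.0, 0.0, 0.0); // normal of the plane x = xSide
    }
}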

6. Flying-jet-like teapot

Figure 7

The plane has animated fire jets, which are two polygons with the burning texture (the same one used for the
burning teapot). The polygons are aligned so that they are parallel to the camera's up direction, while
their right direction is given by the plane's moving direction along the Ox axis.
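A minimal sketch of this alignment, assuming the camera up vector and the jet's attachment point are uniforms (illustrative names):

float4x4 matViewProjection;
float3 cameraUp;   // camera up vector in world space (assumed uniform)
float3 jetPos;     // attachment point of the jet on the plane

float4 vs_jetQuad(float4 inPos : POSITION) : POSITION
{
    // the quad's right direction follows the plane's motion along Ox,
    // while its up direction stays parallel to the camera up vector
    float3 right = float3(1.0, 0.0, 0.0);
    float3 worldPos = jetPos + right * inPos.x + cameraUp * inPos.y;
    return mul(matViewProjection, float4(worldPos, 1.0));
}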

7. Cruiser-like teapot

Figure 8 The cannon was obtained from the teapot's spout. The cannon's normals were obtained by
rotating the spout, approximating it with a cylinder, and rotating it back.

8. Eroded teapot
Figure 9 Eroded teapot. Smoke has been added in the second picture for a more realistic effect.

9. Fire (particle systems)

Figure 10 Particle system
Figure 11 Particle system that uses the same 3D fire particle. The application was written in C++ and Cg; see the Real Time GPU Graphics Engine Modules (previous posts). Compared with Figure 10, a different look can be noticed, given by the physics underneath the particle system.

Figure 12 The 8 successive frames from the 3D fire texture.

The system contains an array of quads that identify themselves through their z coordinate.
There is an emitter that represents the starting point of the particles. Their position is calculated
from the emitter, the time and their id, as detailed in the appendices. The quads are aligned to be
perpendicular to the camera. The colour of the particles changes according to a volume texture
that is applied to them. The texture is sampled using a parameter that goes from 0 to 1 while the
fire particle rises. However, there are limitations, as the effect cannot simulate the real physics
underneath a particle system. It would have been ideal to have an emitter that constantly
spawns particles with random lifetimes. The effect looks patterned because of the non-random
generation of particle positions.
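A minimal sketch of this scheme, assuming each particle's id is stored in the quad's z coordinate and the colour comes from the 3D fire texture (phases, speeds and heights are illustrative):

sampler3D fireVolume;   // 3D fire texture (the 8 successive frames)
float3 emitterPos;      // starting point of the particles
float time;             // time variable provided by RenderMonkey

// life in [0,1], derived from the time and the particle id (its z coordinate)
float particleLife(float id, float time)
{
    return frac(time * 0.25 + id * 0.13);   // per-particle phase shift
}

// the particle rises from the emitter while it lives
float3 particlePos(float life)
{
    return emitterPos + float3(0.0, life * 4.0, 0.0);
}

// the volume texture is sampled with a parameter that goes from 0 to 1
// while the fire particle goes up
float4 particleColour(float2 uv, float life)
{
    return tex3D(fireVolume, float3(uv, life));
}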

10. Fire (per pixel effect)
(see section 3)

11. Smoke

Figure 13 Two types of smoke. Left: smoke effect of applying a 3D texture and a mask texture. Right: the same 3D texture and mask, combined differently.

The effect is similar to the fire effect, but the particles have different trajectories.

12. Explosions

Figure 14 Left: First polygon, uses a 3D texture. Middle: Second polygon, combines two 3D textures. Right: combining the polygons, frame from the final explosion effect.

The explosion effect consists of rendering two textured polygons that grow in size over time. The
depth buffer test is disabled, so that no part of the explosion is hidden behind the terrain.
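A minimal sketch of the growth over time, assuming t in [0,1] is obtained as in Appendix B and the size range is illustrative:

float4x4 matViewProjection;
float3 explosionPos;   // centre of the explosion
float t;               // normalised explosion time in [0,1], see Appendix B

float4 vs_explosion(float4 inPos : POSITION) : POSITION
{
    // the polygon grows in size over time (the depth test is disabled elsewhere)
    float scale = lerp(0.5, 6.0, t);
    float3 worldPos = explosionPos + inPos.xyz * scale;
    return mul(matViewProjection, float4(worldPos, 1.0));
}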

13. Flags

For rendering the flags, a tessellated polygon was used. Its vertices are animated by adding three
sinusoidal waves (the same algorithm used for animating the water surface). For attaching them to the
flag pole, a smooth per-vertex deformation has been applied at the right-hand side of the flag. The
flag pole is a teapot transformed into a cylinder.
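A minimal sketch of the flag animation, assuming the displacement is the sum of three sine waves and a smoothstep factor keeps the edge near the pole fixed (amplitudes, frequencies and the pole side are illustrative):

float time;   // time variable provided by RenderMonkey

// displacement of a flag vertex: three sinusoidal waves, faded out
// towards the edge that is attached to the pole (assumed at x = 0)
float flagWave(float2 xz, float time)
{
    float h = 0.20 * sin(xz.x * 4.0 + time * 3.0)
            + 0.10 * sin(xz.y * 6.0 + time * 5.0)
            + 0.05 * sin((xz.x + xz.y) * 9.0 + time * 7.0);
    // no displacement at the pole, full displacement at the free edge
    return h * smoothstep(0.0, 1.0, xz.x);
}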

Figure 15 Left: animated flag. Right: animated eroded flag. The applied texture is rendered in two passes, in order to add a teapot to the pirate flag texture. A mask texture is used for erosion at the edges.



14. Animated Lake Water

The water surface is a tessellated plane whose vertices are displaced using three sinusoidal waves, as
described in [1]. The height is given by the y coordinate. After the vertices are displaced, the
normals are computed using the cross product between the tangent and the binormal at the given point.
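A minimal sketch of the displacement and of the normal computation, assuming the height is a sum of three sine waves and the tangent/binormal are taken from the analytic partial derivatives (the wave parameters are illustrative):

float time;   // time variable provided by RenderMonkey

// height of the water at (x, z): the sum of three sinusoidal waves
float waveHeight(float2 xz, float time)
{
    return 0.30 * sin(xz.x * 1.5 + time * 1.0)
         + 0.20 * sin(xz.y * 2.0 + time * 1.6)
         + 0.10 * sin((xz.x + xz.y) * 3.0 + time * 2.3);
}

// tangent = (1, dH/dx, 0), binormal = (0, dH/dz, 1);
// cross(binormal, tangent) gives (-dH/dx, 1, -dH/dz)
float3 waveNormal(float2 xz, float time)
{
    float dHdx = 0.30 * 1.5 * cos(xz.x * 1.5 + time * 1.0)
               + 0.10 * 3.0 * cos((xz.x + xz.y) * 3.0 + time * 2.3);
    float dHdz = 0.20 * 2.0 * cos(xz.y * 2.0 + time * 1.6)
               + 0.10 * 3.0 * cos((xz.x + xz.y) * 3.0 + time * 2.3);
    return normalize(float3(-dHdx, 1.0, -dHdz));
}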

Figure 16 Water surface with a floating object. The water surface reflects the distant environment (the
reflected ray is intersected with the cubemap). The refraction is just a blending with the objects drawn before in the frame buffer.

15. Animated fire jets

For the flying teapot:
The flying teapot has 2 fire jets made of 2 polygons aligned correspondingly. However,
the speed of the plane is greater than the noticeable update rate of the texture on the
polygons, which makes them look static. If the plane is slowed down (e.g. by setting TWait to 20),
the jet animation can be noticed.

For the plane's missile:
For rendering the missile's jet, a quad array is used. First, the missile position is
computed:
P = Pi + (Pf - Pi) * t;
Then the position of each particle is displaced along the direction of the
projectile, so that, as time goes by, the jet gets longer (the particles are
displaced more and more from the pivot point P):
pos.xyz -= displacement * dir * Input.Position.z * t;
t is a time parameter in [0,1]; its computation is described in Appendix B.
Figure 17: Top: a missile with a jet that has just been launched. Bottom: the animated jet has changed its shape since launch. The missile hits the ground and an explosion is spawned.

For the cruiser's bullet:
“Missle2” is a screen-aligned textured quad that represents a small bomb launched by
the cruiser. “MissleJet” draws an animated fire jet behind it. The position of the bomb
follows a parabolic trajectory and is the pivot point for the fire jet. Each particle of
the fire jet is displaced from the pivot point along the direction d = -df/dt (the opposite
of the tangent to the trajectory). There is also a position perturbation around this
direction.

Figure 18 Left: the bullet thrown by the war teapot ship.
Right: the animated missile tail.

16. Ray traced spheres

All the objects of the scene are drawn into a texture. This texture is used as a background image for
the raytracer. The raytraced objects are floating spheres, whose positions are computed in the
vertex shader. The pixel shader computes the colour of each pixel using raytracing. If the
ray does not hit any object, the pixel is coloured with the background colour, which is the
corresponding pixel from the scene texture. If the ray is a reflected ray, the background texture is
the sky cubemap; otherwise it is the scene texture.

The raytracer must take the z buffer into consideration. A first solution would be to draw the same
spheres in a separate pass, without affecting the colour buffer, but writing the stencil buffer. Then
the raytracer could draw only the pixels with stencil equal to 1. This method was not
implemented because the pixels of the raytraced spheres did not match the pixels of the usual
spheres. The method actually used samples the scene's depth buffer (only the terrain and the plank
were rendered to a depth texture). If the ray hits an object that is further away than the distance
stored in the depth buffer, the background colour is drawn instead. For water blending, another depth
texture was used: if the pixel is behind the water, its colour is blended with the water colour.
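A minimal sketch of the depth comparison when a ray hits a sphere, assuming the scene depth texture stores a linear distance comparable with the ray's hit distance (samplers and layout are illustrative):

sampler2D sceneTex;     // the scene that was already rendered into a texture
sampler2D sceneDepth;   // depth of the terrain and plank (linear distance assumed)

float4 shadeHit(float2 screenUV, float hitDistance, float4 sphereColour)
{
    float storedDepth = tex2D(sceneDepth, screenUV).r;
    // if the sphere hit is further away than what is already stored,
    // keep the background (the corresponding pixel of the scene texture)
    if (hitDistance > storedDepth)
        return tex2D(sceneTex, screenUV);
    return sphereColour;
}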

 Figure 19 Raytracing: The red sphere is transparent. The other spheres appear to be distorted at the borders of the refractive sphere because of the refraction index. The background color for the raytracer is the scene texture. The water depth buffer yields viewing artefacts at the intersection with the spheres.


 Figure 20 Left: Raytracing with all the spheres reflective but not refractive. The reflection of the cubemap can be noticed. Right: The red sphere is refractive and reflective; all the others are only reflective.

Figure 21 Depth buffer testing applied to the raytraced spheres.

However, the ray tracing algorithm has some issues: refracted rays that do not hit anything
do not intersect the background, and simply return the “non ray-traced” pixel. It would have been
better to render the world into a cubemap that surrounds the spheres, so that such rays could have
returned a colour from it. However, rendering all the objects into a cubemap would definitely not have
been real time, and the number of passes for each object would have been six times greater (a very
impractical thing to do in RenderMonkey).
Another problem is that a recursive algorithm does not suit the pixel shader. Raytracing on the GPU
should use different technologies (e.g. CUDA).

17. Blooming effect

Figure 22 The blooming effect can be noticed on the tail of the torpedo and the front side of the plane.
All the objects are drawn into a texture. Before drawing the fire effects, the texture is processed to
make the bright objects glow. A high-pass filter is applied to extract the bright objects into a texture
that is then blurred with a Gaussian filter. The processed texture is combined with the original at the end
(passes brightPass, blurHoriz1, blurVert1, blurHoriz2, blurVert2, applyBlooming, copyToMainScene).
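A minimal sketch of the bright pass (the high-pass filter), assuming the threshold and luminance weights are illustrative:

sampler2D sceneTex;      // the scene rendered into a texture
float brightThreshold;   // e.g. 0.7 (illustrative)

float4 ps_brightPass(float2 uv : TEXCOORD0) : COLOR0
{
    float4 colour = tex2D(sceneTex, uv);
    // keep only the bright pixels; the rest become black and do not glow
    float luminance = dot(colour.rgb, float3(0.299, 0.587, 0.114));
    return colour * smoothstep(brightThreshold - 0.1, brightThreshold + 0.1, luminance);
}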

18. Grass pack

Figure 23 Grass: A quad array with no animation, but a wide spread factor.

19. Plank

Figure 24 Bump-mapped plank using a normal map. The technique fails as the viewing angle gets small, which is noticeable at the far end of the plank.



Appendices

Appendix A
 Parametrizing a parabolic path
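The original derivation is not reproduced here. A minimal sketch of one possible parametrization, assuming the path goes from Pi to Pf with an apex height h above the straight segment (Pi, Pf and h are illustrative):

// position on a parabolic path, with t in [0,1] (see Appendix B for t);
// the vertical offset 4*t*(1-t) is 0 at both ends and 1 at t = 0.5
float3 parabolicPath(float3 Pi, float3 Pf, float h, float t)
{
    float3 P = lerp(Pi, Pf, t);
    P.y += h * 4.0 * t * (1.0 - t);
    return P;
}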


Appendix B
Determining a time interval parametrization

For animating the objects of the scene, a path has been described that is parametrized over [0,1].
RenderMonkey provides a time variable that goes from 0 to 1 in 120 seconds (the default cycle
time). In order not to change the default period, this variable has not been used. So, given the predefined
time in [0,120], we compute a t that goes from 0 to 1 in Tf seconds (Tf should be an integer).
The time interval of 120 seconds is divided into chunks of Tf seconds (inaccuracies appear when 120 is not
divisible by Tf), and at a given moment Ti we remove the finished Tf chunks, so that we obtain a
time that goes periodically from 0 to Tf. We can easily scale this value to [0,1] to obtain the desired
time value:
float timeD = time - deltaT;  // delay the starting time
int   x     = timeD / Tf;     // x = [timeD/Tf] (integer part)
float tt    = timeD - Tf * x; // tt = timeD - Tf*[timeD/Tf], tt goes from 0 to Tf
float T     = tt / Tf;        // T goes from 0 to 1 in Tf seconds

Appendix C
Selecting regions of vertices using the smoothstep function

For selecting some regions of the teapot more easily, without inserting lots of branches, two helper
functions have been introduced:
float smoothSelect4(float a, float b, float c, float d, float t){
    return smoothstep(a, b, t) - smoothstep(c, d, t);
}
float smoothSelectDelta(float a, float b, float delta, float t){
    return smoothstep(a, a + delta, t) - smoothstep(b - delta, b, t);
}
If a value needs to be smoothly scaled, it can easily be displaced by using either of the two helpers above, e.g.:
val *= 1 + smoothSelect4(..., Input.Position.x);
The above formula says that val will be modified only for those vertices whose x
value is in the range given as input. Moreover, val can be modified by selecting a
surface, or a volume of points:
smoothSelect4(..., pos.x) * smoothSelect4(..., pos.y) * smoothSelect4(..., pos.z)
For example, to make the robot's legs, the y coordinate has been scaled for a selection of
points that have x and z in the corresponding ranges. The body of the robot and the
antenna were also obtained by a smooth selection applied to y.
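A minimal sketch of such a selection for one leg, using the smoothSelect4 helper above; the ranges and the scaling factor are illustrative, not the exact values used for the robot:

// scale the y coordinate only for the vertices whose x, y and z fall in
// the (illustrative) ranges that correspond to one leg
void makeLeg(inout float3 pos)
{
    float sel = smoothSelect4( 0.4, 0.5, 0.8, 0.9, pos.x)
              * smoothSelect4( 0.4, 0.5, 0.8, 0.9, pos.z)
              * smoothSelect4(-2.0, -1.5, -0.5, 0.0, pos.y);
    pos.y *= 1.0 + 2.0 * sel;   // stretch the selected volume along y
}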

Appendix D
Adjusting the normals after perturbing the vertices

After displacing the vertices, the normals to the surface have to be recalculated. Without
using the geometry shader, we cannot access the adjacent vertices. Mathematically, if
the transformation is affine (say transformation T), we can work out the normal by
applying the transformation G = transpose(T^-1) (a proof of this formula can be found in [2]). So,
if a region of the teapot is only scaled or rotated, we compute G and work out the
normals. Moreover, if the transformation is only a rotation, then T^-1 = transpose(T), so G = T,
and we can apply the same transformation to the normals.

If the transformation is not affine, however, we need to know the exact function of the surface
to be able to compute the normals, which is very difficult to do while using built-in functions like
smoothstep. In those cases the normals have only been approximated by shapes that
are rather similar to the resulting ones. For instance, the legs of the robot are almost
cylinders, so their normals have been deduced by treating the leg as a cylinder whose symmetry
axis is the Oz axis. Another approximation can be done in the following way:
the deformed surface is translated to the origin, the normals are computed by
normalizing the position, and then the vertices are translated back again.

These three methods of computing the normals (the inverse transpose for affine transformations,
approximating with cylinders/spheres, and approximating with the position vector) have been applied
to all the deformed teapots. If the approximations are not good enough, the lighting algorithm
gives unpleasant artefacts; this can be adjusted by not taking the normal into consideration
at all for the diffuse light. That would severely affect the 3D look, but it would still
result in a colourful object. This was avoided, though, as the deformed teapots have a
reasonable approximation of the normals.
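A minimal sketch of two of these methods: for a rotation-only deformation the normal is rotated with the same matrix, and for an almost-cylindrical region (such as a robot leg) the normal is approximated radially around the Oz axis (the leg centre below is illustrative):

// rotation-only deformation: since T^-1 = transpose(T), G = T, so the
// normal can be rotated with the same matrix as the position
float3 rotateNormal(float3x3 rotation, float3 normal)
{
    return normalize(mul(rotation, normal));
}

// cylinder approximation: the normal points radially away from the
// symmetry axis (here Oz, passing through legCentre)
float3 cylinderNormal(float3 pos, float2 legCentre)
{
    return normalize(float3(pos.xy - legCentre, 0.0));
}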


References:

[1] Mark Finch, Effective Water Simulation from Physical Models, GPU Gems, Chapter 1, NVIDIA
Corporation, September 2007, available at
http://http.developer.nvidia.com/GPUGems/gpugems_ch01.html
[2] António Ramires Fernandes, GLSL Tutorial: The gl_NormalMatrix, available at
http://www.lighthouse3d.com/opengl/glsl/index.php?normalmatrix
[3] Jacco Bikker, Raytracing: Theory & Implementation Part 3, Refractions and Beer's Law,
06/10/2005, available at http://www.devmaster.net/articles/raytracing_series/part3.php
[4] RenderMonkey samples and documentation, available at
http://developer.amd.com/gpu/rendermonkey/Pages/default.aspx

Links for textures and 3D models used (apart from the ones available in Render Monkey):
http://www.filterforge.com/
http://www.turbosquid.com/
http://www.shutterstock.com/
http://www.humus.name/index.php?page=3D&&start=32