Saturday, February 5, 2011

Ray Casting (Project Update)

In addition to writing a ray tracer, I've also been developing a ray casting program that creates Wolfenstein 3D-style graphics by using multi-core CPUs and the GPU's texture processing to pipeline collision detection. Below is my first GPU-pipelined rendering. Instead of using the traditional libraries for rendering, my engine is optimized specifically for this scenario. The GPU is given two textures as input. Traditionally, a texture is a rasterized image that extends the geometric or shading properties of a 3D mesh. My engine, however, uses textures entirely differently.

The first texture handed to the GPU is the set of rays being cast out into the scene. The second texture is a map of the world into which the rays are cast. The GPU then writes out a third texture: a set of doubles recording where each collision occurred and how far it is from the camera. This information is processed to produce the final image.
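To make the three-texture idea concrete, here is a plain-Python sketch of what that pipeline computes, run on the CPU rather than in a shader. The world grid, the ray-marching scheme, and all names here are my own illustrative assumptions, not the engine's actual code: a "ray texture" of per-column angles and a "map texture" grid go in, and a buffer of collision distances (the role of the third texture) comes out.

```python
import math

# "Map texture": a small grid world, 1 = wall, 0 = empty space.
WORLD = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def cast_ray(px, py, angle, step=0.01, max_dist=10.0):
    """March one ray through the map and return the distance to the first wall."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if WORLD[int(y)][int(x)] == 1:
            return dist
        dist += step
    return max_dist

# "Ray texture": one angle per screen column. The resulting list plays the
# role of the third texture: a buffer of collision distances.
camera_x, camera_y, camera_angle, fov = 2.5, 2.5, 0.0, math.pi / 3
columns = 9
angles = [camera_angle + fov * (i / (columns - 1) - 0.5) for i in range(columns)]
distances = [cast_ray(camera_x, camera_y, a) for a in angles]
```

On the GPU, each column's march would run as an independent fragment, which is what makes packing the rays and the map into textures attractive.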

In this style of ray casting, there are a few assumptions that can be made, and I optimize for them. The ray casting itself actually takes place in 2D space: all calculations up to the final drawing step are 2D math. This means the map is flat and, more importantly, that the vertical center of every wall slice drawn lies on the horizontal center line of the image. I can use simple trigonometric properties to find a wall's distance from the camera (via the GPU) and then perform a few more operations to draw that slice of the wall.
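The 2D-to-screen step above can be sketched as follows. This is a minimal illustration under assumed constants (screen height, wall height, projection-plane distance), not the engine's actual drawing code: the perpendicular distance sets the height of each vertical wall slice, and every slice is centered on the middle row of the image.

```python
import math

SCREEN_H = 200      # image height in pixels (illustrative)
WALL_H = 1.0        # wall height in world units (illustrative)
PLANE_DIST = 1.0    # camera-to-projection-plane distance (illustrative)

def wall_slice(ray_dist, ray_angle, camera_angle):
    """Return the top and bottom pixel rows of one vertical wall slice."""
    # Multiplying by the cosine of the angle between the ray and the view
    # direction gives the perpendicular distance, which avoids the fisheye
    # distortion that using the raw ray distance would cause.
    perp = ray_dist * math.cos(ray_angle - camera_angle)
    height = int(SCREEN_H * WALL_H * PLANE_DIST / perp)
    # Every slice is centered on the middle of the image (the horizon).
    top = (SCREEN_H - height) // 2
    bottom = top + height
    return top, bottom

# A wall 2 units straight ahead spans half as many rows as one 1 unit away.
near = wall_slice(1.0, 0.0, 0.0)
far = wall_slice(2.0, 0.0, 0.0)
```

Because the slice height depends only on one distance per column, the per-pixel work after the GPU returns the distance buffer is trivial.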

Here is my first rendering done this way:
