Thomas Sampson

View Frustum Notes

The purpose of this blog post is to jot down a few handy tips and tricks worth remembering when working with view frustums, ranging from the blatantly obvious to the more obscure. I don’t spend as much time reviewing these posts as I would like, so as ever, please feel free to comment with any corrections or related tips.

Inverse View Matrix

A camera’s view matrix is most often represented as a homogeneous 4×4 matrix used to shift bases, transforming points from world space to the camera’s own three-dimensional co-ordinate system (‘camera space’ or ‘view space’). However, a camera’s inverse view matrix can also have its uses. The inverse view matrix represents the position and orientation of the camera itself within the world, and can therefore prove useful in the following scenarios:

  • When rendering game cameras within the world, perhaps useful when debugging or writing tools which deal with the positioning/manipulation of multiple cameras
  • Extracting the camera’s translation and orientation when these values are not immediately available/accessible
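As a minimal sketch of the second point, the snippet below recovers a camera’s world-space position by inverting its view matrix. It assumes a row-major, row-vector convention (as used by Direct3D), where rows 0–2 of the view matrix hold the rotation and row 3 holds the translation; conventions vary between APIs, so adjust accordingly. Because a view matrix is a rigid-body transform, no general 4×4 inversion is needed:

```python
def inverse_view(view):
    """Invert a rigid-body view matrix cheaply: the inverse rotation is the
    transpose of R, and the inverse translation is -t * R^T."""
    R  = [row[:3] for row in view[:3]]                    # 3x3 rotation block
    t  = view[3][:3]                                      # translation row
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]  # R transposed
    new_t = [-sum(t[k] * Rt[k][j] for k in range(3)) for j in range(3)]
    return [Rt[0] + [0.0], Rt[1] + [0.0], Rt[2] + [0.0], new_t + [1.0]]

# The camera's world position is the translation row of the inverse view
# matrix; its local right/up/forward axes are rows 0-2.
view = [[1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [-1.0, -2.0, -3.0, 1.0]]   # camera at (1, 2, 3), no rotation
print(inverse_view(view)[3][:3])   # -> [1.0, 2.0, 3.0]
```

For column-vector conventions (OpenGL-style) the rotation block and translation swap places, but the same trick applies.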

Finding View Frustum Vertices

Sometimes it can be useful to find the world space vertices which define the corners of the view frustum, perhaps to render the frustum, or to find the distance to a particular point or edge. Given the position, orientation, aspect ratio and FOV of a camera it is possible to manually calculate the frustum’s corners, however a much more elegant approach is at hand.

Start by setting up the vertices of a cube with its local extents representing the clip space of your graphics pipeline (OpenGL uses [-1,-1,-1]→[1,1,1] for clip space, whereas DirectX uses [-1,-1,0]→[1,1,1]). Next, transform this ‘clip space cube’ by the camera’s inverse projection matrix, remembering to divide each resulting point by its w component (the perspective divide); this reveals the corners of the view frustum in view space. Further transforming these points by the camera’s inverse view matrix will yield the world-space vertices representing the corners of the view frustum.
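The un-projection step can be sketched as follows. This assumes a DirectX-style left-handed projection with z mapped to [0, 1] and a row-vector convention; the helper names (`perspective_lh`, `frustum_corners_view`) are illustrative:

```python
import math

def perspective_lh(fov_y, aspect, zn, zf):
    """D3D-style left-handed perspective matrix (row-vector convention),
    mapping view-space z in [zn, zf] to clip-space z in [0, 1]."""
    h = 1.0 / math.tan(fov_y / 2.0)
    w = h / aspect
    q = zf / (zf - zn)
    return [[w, 0.0, 0.0, 0.0],
            [0.0, h, 0.0, 0.0],
            [0.0, 0.0, q, 1.0],
            [0.0, 0.0, -zn * q, 0.0]]

def inverse(m):
    """Generic 4x4 inverse via Gauss-Jordan elimination with partial pivoting."""
    n = 4
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [v / p for v in a[col]]
        for r in range(n):
            if r != col and a[r][col]:
                f = a[r][col]
                a[r] = [v - f * pv for v, pv in zip(a[r], a[col])]
    return [row[n:] for row in a]

def frustum_corners_view(inv_proj):
    """Un-project the 8 clip-space cube corners (DirectX: z in [0, 1]) into
    view space. The divide by w after the transform is essential."""
    corners = []
    for x in (-1.0, 1.0):
        for y in (-1.0, 1.0):
            for z in (0.0, 1.0):
                v = [x, y, z, 1.0]
                out = [sum(v[k] * inv_proj[k][j] for k in range(4))
                       for j in range(4)]
                corners.append([c / out[3] for c in out[:3]])
    return corners

# 90-degree vertical FOV, square aspect, near = 1, far = 10: the near-plane
# corners land at |x| = |y| = 1, z = 1 and the far corners at 10x that.
inv_p = inverse(perspective_lh(math.pi / 2.0, 1.0, 1.0, 10.0))
corners = frustum_corners_view(inv_p)
```

Transforming `corners` by the camera’s inverse view matrix (as above) would complete the trip to world space.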

Caching View Frustum Data

This one may seem pretty obvious, but calculating data such as frustum planes, corners and angles can be computationally expensive and need only be carried out when the camera is moved or re-oriented. If your camera class has Set methods for properties such as position, look-at target, FOV etc., it may be wise to check that the new value actually differs from the old one before setting it and re-calculating or storing any new data.
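A minimal sketch of this caching pattern, using a dirty flag so frustum data is only rebuilt when a property genuinely changes (the class and method names here are illustrative, not from any particular engine):

```python
class Camera:
    def __init__(self, fov):
        self._fov = fov
        self._dirty = True
        self._planes = None
        self.rebuild_count = 0   # instrumentation for this example only

    def set_fov(self, fov):
        # Only invalidate the cached frustum data if the value changed.
        if fov != self._fov:
            self._fov = fov
            self._dirty = True

    def frustum_planes(self):
        # Rebuild lazily, on first use after an invalidation.
        if self._dirty:
            self._planes = self._rebuild_frustum()
            self._dirty = False
        return self._planes

    def _rebuild_frustum(self):
        self.rebuild_count += 1
        return ("planes for fov", self._fov)  # stand-in for the real work

cam = Camera(fov=60.0)
cam.frustum_planes()
cam.set_fov(60.0)         # same value -> no invalidation
cam.frustum_planes()
print(cam.rebuild_count)  # -> 1
```

The same guard applies to position, orientation and aspect-ratio setters; rebuilding lazily in the getter also coalesces several property changes into a single recalculation.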

Storing View Frustum Planes / Normals

As planes can be represented using a pair of vectors (a position and a unit normal) or, better still, a single four-component vector holding the coefficients of ax + by + cz + d = 0, it might seem sensible to store a vector or vector pair per frustum plane. It is worth considering however that view frustums are almost always symmetrical, and therefore calculating and storing only the left/top planes may be beneficial in some cases. When the right/bottom planes are required they can be generated on the fly by reflecting the left/top planes about the camera’s local axes. If the view frustum is always parallel to a world axis this fact can be exploited by storing only the left/top frustum planes and simply inverting the appropriate component of the point/normal vectors to perform the reflection. Frustum planes can also be extracted directly from the camera’s view/projection matrix on the fly, see Fabian “ryg” Giesen’s excellent blog post for more information.
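To illustrate the symmetry trick: in view space the side planes of a symmetric frustum pass through the camera origin (d = 0), and the right plane is the left plane mirrored about the camera’s local YZ plane, so only the x component of the normal flips sign (similarly y for top/bottom). A small sketch, assuming inward-facing normals and a left-handed view space looking down +z:

```python
import math

def right_plane_from_left(left):
    """Mirror a view-space left plane (a, b, c, d) about the camera's local
    YZ plane to obtain the right plane: only the normal's x flips sign."""
    a, b, c, d = left
    return (-a, b, c, d)

# Example: 90-degree horizontal FOV. The left plane's inward-facing unit
# normal in view space is (cos 45, 0, sin 45) = (s, 0, s), and d = 0
# because the plane passes through the camera origin.
s = math.sqrt(0.5)
left  = (s, 0.0, s, 0.0)
right = right_plane_from_left(left)
print(right)  # the mirrored right plane
```

Storing four planes instead of six (or two instead of four, for the side planes) is a modest saving, so this is mainly worthwhile when many frustums are kept around at once.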


  • MSDN – Viewports and Clipping (Direct3D 9)
  • Frustum planes from the projection matrix
  • OpenGL Transformations

Debugging DirectX applications with PIX for Windows


After using Microsoft’s PIX tool numerous times over the past couple of years, on a number of projects (all Windows/DX9 based), I was surprised to find that many other students weren’t using PIX, and would often spend many hours close to the submission date getting to the bottom of the most tedious graphical bugs or rendering artefacts. I’m confident that the bugs in question could often have been identified and fixed in a matter of moments given the right tools. This brings me to “PIX for Windows”, Microsoft’s graphics debugger for DirectX. Don’t get me wrong, PIX isn’t the answer to all your problems, nor will it fix anything for you automatically out of the box. The purpose of this post is to provide a quick run-through of the essential prerequisites required to get up and running with PIX, followed by a brief explanation of some of the most useful features PIX has to offer. I also demonstrate how you can configure your C++ DirectX application to be “PIX aware”, communicating with PIX to make the debugging experience a little simpler and smarter. For further reference, please see the PIX for Windows documentation.

Installing PIX

PIX for Windows is a GUI tool distributed with the Microsoft DirectX SDK and can be found in the following location after install:


Configuring Your System

Before firing up PIX, first head to the DirectX Control Panel. This is a nice GUI utility which allows you to tweak the DirectX runtime by enabling/disabling certain features and components.

The DirectX Control Panel is also part of the Microsoft DirectX SDK and can be found in the following location after install:


Regardless of whether or not you choose to use PIX, it is handy to know about this utility, as it can be used to toggle between the debug/release DirectX DLLs and turn on useful compile/runtime feedback. This feedback ranges from efficiency warnings to runtime memory leak reports.

Figure 1

Figure 1 shows my DirectX Control Panel configuration, which I have tweaked for personal preference. Mirroring this configuration should ensure PIX operates correctly, although not all the options I have enabled are necessarily fundamental or related to PIX in any way. Play around with this configuration utility and find a configuration you are comfortable with. I often find myself tweaking the “Debug Output Level” slider based on the scenario, and disabling “Break on memory leaks” when I’m looking at someone else’s code and don’t care too much about memory leaks. Also, use “Software Only” mode judiciously, as this disables all hardware acceleration and forces everything to be rendered in software on the CPU (which can be painfully slow!).

Note: The “Render” and “Mesh” windows within PIX do not function correctly when “Maximum Validation” is disabled.


ATI Tessellation demo

ATI tessellation demo showing how the GPU can take a relatively simple triangular mesh and use additional normal/displacement data to generate a richer, more detailed mesh on the fly. The extra tessellation detail is generated dynamically and passes straight through the GPU, requiring no extra video memory for the additional vertex data.

Debugging full screen DirectX Applications

Today I was trying to step through some code for initialising a vertex buffer, as something was going drastically wrong along the way. However, my application needed to be tested in full-screen mode, which made debugging through Visual Studio impossible, as the application being debugged takes full control of the screen.

There seem to be two solutions in this scenario:

  1. Modify your code to run in windowed mode (easy enough to do, just change your window class registration and DirectX initialisation routines)
  2. Use multiple monitors → go to the Control Panel, choose DirectX, go to the DirectDraw tab, choose Advanced, and enable Multiple Monitor Debugging.


DirectX Vs OpenGL

I thought this debate would be a good place to start for the new Games Programming section. First of all I will attempt to explain why either of these are necessary and what their roles are in the grand scheme of things.

When programmers first ventured into games programming on the desktop PC, the main platform of choice was DOS. The reason for choosing DOS over a more complete operating system such as Windows was that DOS gave the programmer low-level access to the hardware. As hardware specs weren’t so impressive at the time, direct access to the hardware was essential, especially when creating games with lots of graphics and perhaps 3D objects in the scene. Trying to program on top of the operating system slowed things down, and the running of the operating system would often get in the way of the game code, making for slow frame rates and poor game-play, as well as restricting access to the video hardware. Most “in operating system” games would resemble Windows Pinball or a simple card game.

So DOS was the way forward? Well, not completely. The idea behind DOS (getting close to the hardware and maximising the potential of the graphics card) was brilliant in theory, but the problem lay in compatibility. Programmers would have to write their own “drivers” for each graphics card, in many cases having to manipulate hardware registers and write manually to memory and display buffers on the graphics card. This was fine as long as the drivers worked well and the card in your PC was compatible; otherwise the game would refuse to work at all. The first DOS game I remember playing was “Duke Nukem 3D”.

In the settings dialogue there would be a list of 4 or 5 mainstream chip-sets which were supported, from which you could select. Overall this was not much fun for the game programmers; more time was spent on tech than on game-play or game design, with every game having its own “in house” game engine and graphics pipeline. At this point it is also worth mentioning that all the problems mentioned with the portability of graphics code are also applicable to audio cards, with only a select few being supported by most DOS games. Clearly something was needed to remove this diversity of code and bring some standardisation to the emerging industry. Microsoft eventually saw the market potential for gaming on the PC and came up with DirectX, which is now on version 10, with its main competitor being the open standard alternative OpenGL (Open Graphics Library).

The idea of both these “libraries” was to bring the hardware manufacturers and the programmers together, to standardise the way things are done and provide a layer of abstraction on top of the hardware. The programmer uses a standard code library to accomplish what they want, and the calls to that library are translated into the hardware mechanisms used by the vendors of the graphics card, with the vendors making their drivers and chips conform to these libraries. This way the programmer can write efficient code and not worry about compatibility issues, as it is now the responsibility of the card manufacturers to comply with industry standards. Both libraries are optimised to run efficiently on top of a modern operating system, giving the programmer the benefit of programming in a controlled environment.

Although this post is titled “DirectX Vs OpenGL”, I don’t wish to debate this too much. The libraries are very similar and share a common goal (although DirectX is more hardware focused than OpenGL, which strives to be a 3D rendering system that may be hardware accelerated). To me the main consideration when picking either is the platform. DirectX is strictly for Windows platforms, the PC or the (Direct)Xbox 360, whereas due to the open nature of OpenGL it can be adapted to many hardware configurations including the PlayStation 2, PlayStation 3 and most Nintendo platforms. If you do wish to consider the specific comparables of each graphics API, I recommend the following link.

For now I have chosen to focus my efforts on DirectX, as I am informed that this is the better supported library, and it is also the API I will be programming with on my course next year.