
Mouse Picking Demystified

April 5, 2005

Introduction

There comes a time in every 3D game where the user needs to click on something in the scene. Maybe he needs to select a unit in an RTS, or open a door in an RPG, or delete some geometry in a level editing tool. This conceptually simple task is easy to screw up since there are so many little steps that can go wrong.

The problem is this: given the mouse's position in window coordinates, how can I determine what object in the scene the user has selected with a mouse click?

One method is to generate a ray using the mouse's location and then intersect it with the world geometry, finding the object nearest to the viewer. Alternatively, we can determine the actual 3D location the user has clicked on by sampling the depth buffer (giving us (x,y,z) in viewport space) and performing an inverse transformation. Technically there is a third approach, using a selection or object ID buffer, but it has numerous limitations that make it impractical for widespread use.

This article describes using the inverse transformation to derive world space coordinates from the mouse's position on screen.

Before we worry about the inverse transformation, we need to establish how the standard forward transformation works in a typical graphics pipeline.

The View Transformation

The standard view transformation pipeline takes a point in model space and transforms it all the way to viewport space (sometimes known as window coordinates) or, for systems without a window system, screen coordinates. It does this by transforming the original point through a series of coordinate systems:

model space -> world space -> view space -> clip space -> normalized device space -> viewport space

Not every step is discrete: OpenGL collapses the model-to-world and world-to-view transformations into a single GL_MODELVIEW matrix, M, which takes a point directly from model space to view space. View space is a right-handed coordinate system with +Y up, +X to the right, and -Z into the screen.

$p_{view} = M \cdot p_{model}$

Another matrix, GL_PROJECTION, then transforms the point from view space to homogeneous clipping space. Clip space is left-handed (+Z into the screen); because the coordinates are homogeneous, points inside the view frustum satisfy -w <= x, y, z <= +w, so that after the perspective divide they fall within the canonical clipping volume extending from (-1,-1,-1) to (+1,+1,+1):

$p_{clip} = P \cdot p_{view}$

After clipping is performed, the perspective divide transforms the homogeneous coordinate into a Cartesian point in normalized device space. Normalized device coordinates are left-handed, with w = 1, and are contained within the canonical view volume from (-1,-1,-1) to (+1,+1,+1):

$n = \left( \frac{x_{clip}}{w_{clip}},\; \frac{y_{clip}}{w_{clip}},\; \frac{z_{clip}}{w_{clip}},\; 1 \right)$

Finally there is the viewport scale and translation, which transforms the normalized device coordinate into viewport (window) coordinates. Another axis inversion occurs here: this time +Y goes down instead of up (some window systems place the origin elsewhere, such as the bottom left of the window, so this isn't always true). Viewport depth values are calculated by rescaling normalized device Z values from the range (-1,1) to (0,1), with 0 at the near clip plane and 1 at the far clip plane. Note: any user-specified depth bias may impact our calculations later.

$v_x = \frac{n_x + 1}{2} \, w + v_{x0}$

$v_y = \frac{1 - n_y}{2} \, h + v_{y0}$

$v_z = \frac{n_z + 1}{2}$

where $(v_{x0}, v_{y0})$ is the viewport origin and $w$ and $h$ are the viewport width and height.

This pipeline allows us to take a model space point, apply a series of transformations, and get a window space point at the end.
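As a point of reference, GLU wraps this entire forward pipeline in a single call, gluProject. Here's a minimal sketch, assuming a current OpenGL context and the fixed-function matrix stacks (the function name is my own). Note that gluProject returns window Y with the origin at the bottom left, OpenGL's convention rather than the mouse's:

    #include <GL/glu.h>

    /* Transform a model space point to window coordinates using the
       live GL_MODELVIEW, GL_PROJECTION, and viewport state. */
    void project_point(GLdouble px, GLdouble py, GLdouble pz,
                       GLdouble *winX, GLdouble *winY, GLdouble *winZ)
    {
        GLdouble model[16], proj[16];
        GLint view[4];

        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, view);

        gluProject(px, py, pz, model, proj, view, winX, winY, winZ);
    }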

Our ultimate goal is to transform the mouse position (in window space) all the way back to world space. Since we're not rendering a model, model space and world space are the same thing.

The Inverse View Transformation

To go from mouse coordinates to world coordinates we have to do the exact opposite of the view transformation:

viewport space -> normalized device space -> clip space -> view space -> world (model) space

That's a lot of steps, and getting even one of them slightly wrong is enough to blow everything apart.
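Before walking through the steps by hand, it's worth knowing that GLU packages this entire inverse chain in a single call, gluUnProject, which is useful both as a shortcut and as a reference to check your own math against. A minimal sketch, assuming a current context and the fixed-function matrix stacks (the function name is my own):

    #include <GL/glu.h>

    /* Unproject a window space point back to world (model) space.
       winY must already be flipped to OpenGL's bottom-left origin;
       winZ is a depth buffer value in (0,1), or 0.0 for a point on
       the near clip plane. Returns GL_TRUE on success. */
    int unproject_point(GLdouble winX, GLdouble winY, GLdouble winZ,
                        GLdouble *wx, GLdouble *wy, GLdouble *wz)
    {
        GLdouble model[16], proj[16];
        GLint view[4];

        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, view);

        return gluUnProject(winX, winY, winZ, model, proj, view,
                            wx, wy, wz);
    }

The rest of this article walks through what's happening inside that call.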

Viewport to NDC to Clip

The first step is to transform the viewport coordinates back into normalized device coordinates, and from there into clip coordinates. Recall that the forward viewport transformation takes a normalized device coordinate n and transforms it into a viewport coordinate v:

$v_x = \frac{n_x + 1}{2} \, w + v_{x0}$

$v_y = \frac{1 - n_y}{2} \, h + v_{y0}$

$v_z = \frac{n_z + 1}{2}$

So we need to do the inverse of this process by rearranging to solve for n:

$n_x = \frac{2 (v_x - v_{x0})}{w} - 1$

$n_y = 1 - \frac{2 (v_y - v_{y0})}{h}$

$n_z = 2 v_z - 1$

Okay, not so bad. The only real question is the z component of v. We can either calculate that value by reading it back from the depth buffer (a sketch follows below), or substitute 0, which places the point on the near clip plane; in that case we're really computing a ray from the eye through v that we'll then have to intersect with world geometry to find the corresponding point in 3-space.
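For the first option, sampling the depth buffer under the mouse is a one-liner with glReadPixels. A sketch, assuming the default glDepthRange of (0,1) and a window-sized viewport; windowHeight is my own parameter for the Y flip:

    #include <GL/gl.h>

    /* Read the depth buffer value under the mouse. mouseY is flipped
       because glReadPixels expects a bottom-left window origin. */
    GLfloat read_depth_at_mouse(int mouseX, int mouseY, int windowHeight)
    {
        GLfloat depth;

        glReadPixels(mouseX, windowHeight - mouseY - 1, 1, 1,
                     GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
        return depth;
    }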

From here we need to go to clip coordinates which, if you recall, are the homogeneous versions of the NDC coordinates (i.e. w != 1.0). We don't know the original w, but homogeneous coordinates that differ only by a scale represent the same point, so we're free to pick any w we like. Choosing w = 1.0 means this step can be skipped entirely, and our NDC coordinates double as our clip coordinates.
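Pulling this subsection together, here's a minimal sketch of the inverse viewport transform (the function name and parameters are my own, and it assumes the Y-down window convention described earlier):

    /* Inverse viewport transform: window coordinates (+Y down) to NDC.
       vx0, vy0 are the viewport origin; w, h are its width and height.
       winZ comes from the depth buffer, or pass 0.0 to get a point on
       the near clip plane for building a pick ray. */
    void ndc_from_window(double winX, double winY, double winZ,
                         double vx0, double vy0, double w, double h,
                         double ndc[3])
    {
        ndc[0] = 2.0 * (winX - vx0) / w - 1.0;
        ndc[1] = 1.0 - 2.0 * (winY - vy0) / h;
        ndc[2] = 2.0 * winZ - 1.0;
    }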

Clipping Space to View Space

A point in view space is transformed to clipping space with the GL_PROJECTION matrix:

$p_{clip} = P \cdot p_{view}$

Given this, we can do the opposite by multiplying the clipping space coordinate by the inverse of the GL_PROJECTION matrix. This isn't as bad as it sounds, since we can avoid computing a true 4x4 matrix inverse if we just construct the inverse projection matrix at the same time we build the projection matrix.
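However you obtain the inverse, one subtlety is easy to miss: multiplying an NDC point by the inverse projection yields a homogeneous result (w != 1), so you must divide through by w to get back to a Cartesian view space point. A minimal sketch, where mat4_mul_vec4 is a hypothetical stand-in for whatever matrix routine your code base provides:

    /* Hypothetical helper: column-major 4x4 matrix times column
       vector, matching OpenGL's matrix layout. */
    static void mat4_mul_vec4(double out[4], const double m[16],
                              const double in[4])
    {
        int i;
        for (i = 0; i < 4; ++i)
            out[i] = m[i]      * in[0] + m[i + 4]  * in[1] +
                     m[i + 8]  * in[2] + m[i + 12] * in[3];
    }

    /* NDC (treated as clip space with w = 1) back to view space:
       multiply by the inverse projection, then divide by w --
       forgetting this divide is a classic picking bug. */
    void view_from_ndc(const double invProjection[16],
                       const double ndc[4], /* (x, y, z, 1) */
                       double viewPt[3])
    {
        double v[4];

        mat4_mul_vec4(v, invProjection, ndc);
        viewPt[0] = v[0] / v[3];
        viewPt[1] = v[1] / v[3];
        viewPt[2] = v[2] / v[3];
    }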

A typical OpenGL projection matrix takes the form: