Changes between Version 7 and Version 8 of MousePicking

Author:
bhook (IP: 64.207.62.170)
Timestamp:
05/18/06 14:07:37 (13 years ago)


April 5, 2005

== Introduction ==
There comes a time in every 3D game where the user needs to click on something in the scene.  Maybe he needs to select a unit in an RTS, or open a door in an RPG, or delete some geometry in a level editing tool.  This conceptually simple task is easy to screw up since there are so many little steps that can go wrong.

Before we worry about the inverse transformation, we need to establish how the standard forward transformation works in a typical graphics pipeline.

== The View Transformation ==

The standard view transformation pipeline takes a point in model space and transforms it all the way to viewport space (sometimes known as window coordinates) or, for systems without a window system, screen coordinates.  It does this by transforming the original point through a series of coordinate systems:

{{{
model space
  -> world space
  -> view space
  -> clip space
  -> normalized device coordinates
  -> viewport (window) coordinates
}}}

Not every step is discrete.  OpenGL has the {{{GL_MODELVIEW}}} matrix, '''{{{M}}}''', which transforms a point from model space straight to view space (the model-to-world and world-to-view transforms are folded into one matrix), using a right-handed coordinate system with +Y up, +X to the right, and -Z into the screen.

{{{
#!latex-math-hook
\begin{eqnarray*}
{\boldsymbol M}{\boldsymbol p}&=&{\boldsymbol v} \\
\text{where} \\
{\boldsymbol M}&=&\text{modelview transformation matrix} \\
\boldsymbol{p} &=& \text{point in model space} \\
\boldsymbol{v} &=& \text{point in view space} \\
\end{eqnarray*}
}}}

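For illustration, fetching '''{{{M}}}''' from GL state and applying it by hand might look like the following sketch; {{{mul_mat4_vec4}}} and {{{model_to_view}}} are just illustrative names, and note that OpenGL returns matrices in column-major order.

{{{
#!c
#include <GL/gl.h>

/* Illustrative helper: out = m * in, with m stored column-major (OpenGL style)
 * and in/out laid out as (x, y, z, w). */
static void mul_mat4_vec4(const GLdouble m[16], const GLdouble in[4], GLdouble out[4])
{
    int r;
    for (r = 0; r < 4; ++r)
        out[r] = m[0*4 + r] * in[0] + m[1*4 + r] * in[1] +
                 m[2*4 + r] * in[2] + m[3*4 + r] * in[3];
}

/* Transform a model space point p into a view space point v: v = M * p */
static void model_to_view(const GLdouble p[4], GLdouble v[4])
{
    GLdouble modelview[16];
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    mul_mat4_vec4(modelview, p, v);
}
}}}
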
Another matrix, {{{GL_PROJECTION}}}, then transforms the point from view space to homogeneous clipping space.  Clip space is a left-handed coordinate system (+Z into the screen); clipping is performed against a canonical volume that, after the perspective divide, extends from {{{(-1,-1,-1)}}} to {{{(+1,+1,+1)}}}.

{{{
#!latex-math-hook
\begin{eqnarray*}
{\boldsymbol P}{\boldsymbol v}&=&{\boldsymbol c} \\
\text{where} \\
{\boldsymbol P}&=&\text{projection matrix} \\
\boldsymbol{v} &=& \text{point in view space} \\
\boldsymbol{c} &=& \text{point in clip space} \\
\end{eqnarray*}
}}}

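The projection step follows the same pattern as the sketch above, just with {{{GL_PROJECTION_MATRIX}}}; again, {{{view_to_clip}}} is only an illustrative name.

{{{
#!c
/* c = P * v, reusing the mul_mat4_vec4 helper from the previous sketch. */
static void view_to_clip(const GLdouble v[4], GLdouble c[4])
{
    GLdouble projection[16];
    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    mul_mat4_vec4(projection, v, c);
}
}}}
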
After clipping is performed, the perspective divide transforms the homogeneous coordinate into a Cartesian point in normalized device space.  Normalized device coordinates are left-handed, with ''w'' = 1, and are contained within the canonical view volume from {{{(-1,-1,-1)}}} to {{{(+1,+1,+1)}}}:

{{{
#!latex-math-hook
\begin{eqnarray*}
{\boldsymbol n}&=&\frac{{\boldsymbol c}}{{\boldsymbol c}_w} \\
\text{where} \\
{\boldsymbol n}&=&\text{point in normalized device coordinates} \\
{\boldsymbol c}&=&\text{clipped point in homogeneous clipping space} \\
\end{eqnarray*}
}}}

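In code the divide is trivial; the only thing worth noting is that a point that survived clipping satisfies {{{-c_w <= c_x, c_y, c_z <= c_w}}}, so every component of the result lands in {{{[-1, +1]}}}.  A sketch (illustrative name only):

{{{
#!c
/* n = c / c_w: homogeneous clip space point to normalized device coordinates. */
static void clip_to_ndc(const GLdouble c[4], GLdouble n[3])
{
    n[0] = c[0] / c[3];
    n[1] = c[1] / c[3];
    n[2] = c[2] / c[3];
}
}}}
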
Finally there is the viewport scale and translation, which transforms the normalized device coordinate into viewport (window) coordinates.  Another axis inversion occurs here; this time +Y goes down instead of up (some window systems may place the origin at another location, such as the bottom left of the window, so this isn't always true).  Viewport depth values are calculated by rescaling normalized device coordinate Z values from the range {{{(-1,1)}}} to {{{(0,1)}}}, with 0 at the near clip plane and 1 at the far clip plane.  Note: any user-specified depth bias may impact our calculations later.

{{{
#!latex-math-hook
\begin{eqnarray*}
{\boldsymbol V}{\boldsymbol n}&=&{\boldsymbol w} \\
\text{where} \\
{\boldsymbol V}&=&\text{viewport transformation matrix} \\
{\boldsymbol n}&=&\text{point in normalized device coordinates} \\
{\boldsymbol w}&=&\text{point in viewport/window coordinates} \\
\end{eqnarray*}
}}}

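As a sketch, the viewport step can be written directly from the viewport rectangle returned by {{{glGetIntegerv(GL_VIEWPORT, ...)}}}; the Y flip below assumes the top-left origin convention described above, and the depth mapping assumes the default {{{(0,1)}}} depth range.

{{{
#!c
/* NDC to viewport/window coordinates.  viewport[] is (x, y, width, height). */
static void ndc_to_window(const GLdouble n[3], const GLint viewport[4], GLdouble w[3])
{
    w[0] = viewport[0] + (0.5 * n[0] + 0.5) * viewport[2];  /* x: -1..+1 maps across the viewport */
    w[1] = viewport[1] + (0.5 - 0.5 * n[1]) * viewport[3];  /* y: flipped so +Y goes down         */
    w[2] = 0.5 * n[2] + 0.5;                                /* z: -1..+1 maps to 0..1 depth       */
}
}}}
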
This pipeline allows us to take a model space point, apply a series of transformations, and get a window space point at the end.

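Chaining the illustrative helpers from the sketches above gives the whole forward trip in one place.  (GLU wraps the same chain as {{{gluProject()}}}, though it uses OpenGL's bottom-left window origin.)

{{{
#!c
/* Model space point p (x, y, z, w=1) to window coordinates w, assuming the
 * point is inside the view frustum (i.e., it would not have been clipped). */
static void model_to_window(const GLdouble p[4], const GLint viewport[4], GLdouble w[3])
{
    GLdouble v[4], c[4], n[3];
    model_to_view(p, v);           /* v = M * p          */
    view_to_clip(v, c);            /* c = P * v          */
    clip_to_ndc(c, n);             /* perspective divide */
    ndc_to_window(n, viewport, w); /* viewport transform */
}
}}}
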
Our ultimate goal is to transform the mouse position (in window space) all the way ''back'' to world space.  Since we're not rendering a model, model space and world space are the same thing.

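As a preview of where we're headed, GLU already exposes this inverse mapping as {{{gluUnProject()}}}.  A minimal sketch of the call: window-space X/Y plus a depth in {{{(0,1)}}} go in, a world space point comes out (remember to flip the mouse Y first if your window system's origin is the top left).

{{{
#!c
#include <GL/gl.h>
#include <GL/glu.h>

/* Window coordinates (wx, wy, wz) back to world space, using the current
 * modelview, projection, and viewport state. */
static void window_to_world(GLdouble wx, GLdouble wy, GLdouble wz,
                            GLdouble *ox, GLdouble *oy, GLdouble *oz)
{
    GLdouble model[16], proj[16];
    GLint    viewport[4];

    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    gluUnProject(wx, wy, wz, model, proj, viewport, ox, oy, oz);
}
}}}
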
== The Inverse View Transformation ==
