{{{
#!latex-math-hook
\begin{eqnarray*}
{\boldsymbol P} = \begin{pmatrix}
a & 0 & 0 & 0 \\
0 & b & 0 & 0 \\
0 & 0 & c & d \\
0 & 0 & e & 0 \end{pmatrix}
\end{eqnarray*}
}}}

The specific coefficient values depend on the nature of the perspective projection matrix (for more information I recommend you look at the documentation for [http://www.opengl.org/documentation/specs/man_pages/hardcopy/GL/html/glu/perspective.html gluPerspective]). These coefficients scale and bias the ''x'', ''y'', and ''z'' components of a point while assigning -''z'' to ''w''.

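For reference, the coefficients of a standard {{{gluPerspective}}}-style matrix can be computed directly. The sketch below (plain C; the struct and function names are our own, not from any API) derives ''a'' through ''e'' from the field of view, aspect ratio, and clip planes, then checks that points on the near and far planes land on -1 and +1 in normalized device coordinates after the perspective divide:

{{{
#!c
#include <assert.h>
#include <math.h>

/* Coefficients of the standard gluPerspective projection matrix:
 * a = f/aspect, b = f, c = (zFar+zNear)/(zNear-zFar),
 * d = 2*zFar*zNear/(zNear-zFar), e = -1, where f = cot(fovy/2). */
typedef struct { double a, b, c, d, e; } ProjCoeffs;

ProjCoeffs perspective_coeffs(double fovy_deg, double aspect,
                              double zNear, double zFar)
{
    const double PI = 3.14159265358979323846;
    double f = 1.0 / tan(fovy_deg * PI / 360.0); /* cot(fovy/2), fovy in degrees */
    ProjCoeffs p;
    p.a = f / aspect;
    p.b = f;
    p.c = (zFar + zNear) / (zNear - zFar);
    p.d = 2.0 * zFar * zNear / (zNear - zFar);
    p.e = -1.0;
    return p;
}

int main(void)
{
    ProjCoeffs p = perspective_coeffs(60.0, 4.0 / 3.0, 1.0, 100.0);

    /* A view-space point on the near plane (z = -zNear) should map to
     * NDC depth -1 after the perspective divide; the far plane to +1. */
    double z_near_ndc = (p.c * -1.0   + p.d) / (p.e * -1.0);
    double z_far_ndc  = (p.c * -100.0 + p.d) / (p.e * -100.0);
    assert(fabs(z_near_ndc - -1.0) < 1e-9);
    assert(fabs(z_far_ndc  -  1.0) < 1e-9);
    return 0;
}
}}}
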
To transform from view coordinates to clip coordinates:

{{{
#!latex-math-hook
\begin{eqnarray*}
{\boldsymbol P}{\boldsymbol v}&=&
{\boldsymbol c}\\
&=&
\begin{pmatrix}
a{\boldsymbol v}_x\\b{\boldsymbol v}_y\\c{\boldsymbol v}_z + d{\boldsymbol v}_w\\e{\boldsymbol v}_z
\end{pmatrix}\\
\text{where}\\
{\boldsymbol P}&=&\text{projection matrix as described earlier}\\
{\boldsymbol v}&=&\text{point in view coordinates}\\
{\boldsymbol c}&=&\text{point in clipping coordinates}
\end{eqnarray*}
}}}

So solving for '''v''' we get:
{{{
#!latex-math-hook
\begin{equation*}
{\boldsymbol v} =
\begin{pmatrix}
\frac{{\boldsymbol c}_x}{a}\\
\frac{{\boldsymbol c}_y}{b}\\
\frac{{\boldsymbol c}_w}{e}\\
\frac{{\boldsymbol c}_z}{d}-\frac{c{\boldsymbol c}_w}{de}
\end{pmatrix}
\end{equation*}
}}}

Encoding the clipspace to viewspace transformation in a matrix yields the inverse projection matrix:

{{{
#!latex-math-hook
\begin{equation*}
{\boldsymbol P}^{-1} = \begin{pmatrix}
\frac{1}{a} & 0 & 0 & 0 \\
0 & \frac{1}{b} & 0 & 0 \\
0 & 0 & 0 & \frac{1}{e} \\
0 & 0 & \frac{1}{d} & -\frac{c}{de}
\end{pmatrix}
\end{equation*}
}}}

Computing the view coordinate from a clip coordinate is now:

{{{
#!latex-math-hook
\begin{equation*}
{\boldsymbol P}^{-1}{\boldsymbol c}={\boldsymbol v}
\end{equation*}
}}}

There's no guarantee that ''w'' will be 1, so we'll want to rescale appropriately:

{{{
#!latex-math-hook
\begin{equation*}
{\boldsymbol v}' = \frac {{\boldsymbol v}}{{\boldsymbol v}_w}
\end{equation*}
}}}

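Putting these two steps together, a minimal C sketch (the {{{Vec4}}} type and function name are illustrative, not from any API) applies the inverse mapping derived above and then performs the ''w'' rescale:

{{{
#!c
#include <assert.h>
#include <math.h>

typedef struct { double x, y, z, w; } Vec4;

/* Apply the inverse projection derived above to a clip-space point,
 * then divide through by w. Parameters a..e are the coefficients of
 * the projection matrix P (cc stands in for the coefficient c). */
Vec4 clip_to_view(Vec4 c, double a, double b, double cc, double d, double e)
{
    Vec4 v;
    v.x = c.x / a;
    v.y = c.y / b;
    v.z = c.w / e;
    v.w = c.z / d - cc * c.w / (d * e);
    /* There's no guarantee w == 1, so rescale. */
    v.x /= v.w; v.y /= v.w; v.z /= v.w; v.w = 1.0;
    return v;
}

int main(void)
{
    /* Round trip: project a known view-space point with P, then recover it. */
    double a = 1.5, b = 2.0, cc = -1.02, d = -2.02, e = -1.0;
    Vec4 p = { 1.0, 2.0, -5.0, 1.0 };                  /* view space */
    Vec4 clip = { a * p.x, b * p.y, cc * p.z + d * p.w, e * p.z };
    Vec4 back = clip_to_view(clip, a, b, cc, d, e);
    assert(fabs(back.x - p.x) < 1e-9);
    assert(fabs(back.y - p.y) < 1e-9);
    assert(fabs(back.z - p.z) < 1e-9);
    return 0;
}
}}}
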
== Viewspace to Worldspace ==

Finally we just need to go from view coordinates to world coordinates by multiplying the view coordinates by the inverse of the modelview matrix. Again we can avoid doing a true inverse if we logically break down what the modelview transform accomplishes when working with the camera: it is a translation (centering the universe around the camera) and then a rotation (to reflect the camera's orientation). The inverse of this is the reversed rotation (accomplished with a transpose, since the rotation submatrix is orthonormal) followed by a translation by the negation of the modelview matrix's translation component after that component has been rotated by the inverse rotation.

Given our initial modelview matrix '''M''', consisting of a 3x3 rotation submatrix '''R''' and a 3-element translation vector '''t''':

{{{
#!latex-math-hook
\begin{equation*}
{\boldsymbol M} =
\begin{pmatrix}
{\boldsymbol R}_{11} & {\boldsymbol R}_{12} & {\boldsymbol R}_{13} & {\boldsymbol t}_x\\
{\boldsymbol R}_{21} & {\boldsymbol R}_{22} & {\boldsymbol R}_{23} & {\boldsymbol t}_y\\
{\boldsymbol R}_{31} & {\boldsymbol R}_{32} & {\boldsymbol R}_{33} & {\boldsymbol t}_z\\
0 & 0 & 0 & 1
\end{pmatrix}
\end{equation*}
}}}

Then we can construct the inverse modelview using the transpose of the rotation submatrix and the camera's translation vector:

{{{
#!latex-math-hook
\begin{eqnarray*}
{\boldsymbol R}^T{\boldsymbol t}& = &{\boldsymbol t'}\\
{\boldsymbol M}^{-1} &=&
\begin{pmatrix}
{\boldsymbol R}^T_{11} & {\boldsymbol R}^T_{12} & {\boldsymbol R}^T_{13} & -{\boldsymbol t'}_x\\
{\boldsymbol R}^T_{21} & {\boldsymbol R}^T_{22} & {\boldsymbol R}^T_{23} & -{\boldsymbol t'}_y\\
{\boldsymbol R}^T_{31} & {\boldsymbol R}^T_{32} & {\boldsymbol R}^T_{33} & -{\boldsymbol t'}_z\\
0 & 0 & 0 & 1
\end{pmatrix}
\end{eqnarray*}
}}}

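As a sketch of that construction in plain C (assuming a row-major 4x4 layout for readability; OpenGL itself stores matrices column-major, so swap the indices accordingly), the transpose-and-negated-translation inverse looks like:

{{{
#!c
#include <assert.h>
#include <math.h>

/* Build the inverse of a rigid modelview matrix M = [R | t] using the
 * transpose of R (valid because R is orthonormal) and t' = R^T t. */
void invert_rigid(const double M[4][4], double inv[4][4])
{
    /* Transpose the rotation submatrix. */
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            inv[i][j] = M[j][i];
    /* Translation column: -(R^T t). */
    for (int i = 0; i < 3; i++)
        inv[i][3] = -(inv[i][0] * M[0][3] +
                      inv[i][1] * M[1][3] +
                      inv[i][2] * M[2][3]);
    inv[3][0] = inv[3][1] = inv[3][2] = 0.0;
    inv[3][3] = 1.0;
}

int main(void)
{
    /* Rotation of 30 degrees about z plus a translation. */
    double s = sin(0.5235987755982988), c = cos(0.5235987755982988);
    double M[4][4] = {
        { c, -s, 0,  4 },
        { s,  c, 0, -2 },
        { 0,  0, 1,  7 },
        { 0,  0, 0,  1 }
    };
    double inv[4][4];
    invert_rigid(M, inv);

    /* M * M^{-1} should be the identity. */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            double sum = 0.0;
            for (int k = 0; k < 4; k++) sum += M[i][k] * inv[k][j];
            assert(fabs(sum - (i == j ? 1.0 : 0.0)) < 1e-9);
        }
    return 0;
}
}}}
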
If you're specifying the modelview matrix directly, for example by calling {{{glLoadMatrix}}}, then you already have it lying around and you can build the inverse as described earlier. If, on the other hand, the modelview matrix is built dynamically using something like {{{gluLookAt}}} or a sequence of {{{glTranslate}}}, {{{glRotate}}}, and {{{glScale}}} calls, you can use {{{glGetFloatv}}} to retrieve the current modelview matrix.

Now that we have the inverse modelview matrix we can use it to transform our view coordinate into world space:

{{{
#!latex-math-hook
\begin{eqnarray*}
{\boldsymbol M}^{-1}{\boldsymbol v}&=&{\boldsymbol w}\\
\text{where}\\
{\boldsymbol M}^{-1}&=&\text{inverse of the modelview matrix}\\
{\boldsymbol v}&=&\text{point in viewspace}\\
{\boldsymbol w}&=&\text{point in worldspace}
\end{eqnarray*}
}}}

If the depth value under the mouse was used to construct the original viewport coordinate, then '''w''' should correspond to the point in 3-space where the user clicked. If the depth value was not read, then we have an arbitrary point in space with which we can construct a ray from the viewer's position:

{{{
#!latex-math-hook
\begin{eqnarray*}
\label{eqn:ray}
\overrightarrow{{\boldsymbol r}}&=&{\boldsymbol a} + t({\boldsymbol w}-{\boldsymbol a})\\
\text{where}\\
\overrightarrow{{\boldsymbol r}}&=&\text{ray}\\
{\boldsymbol a}&=&\text{viewer's position in worldspace}\\
{\boldsymbol w}&=&\text{point in worldspace}
\end{eqnarray*}
}}}
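
The ray construction is simple enough to sketch directly; the C code below (illustrative names, not from any API) evaluates the parametric form at a given ''t'', with ''t'' = 0 giving the viewer's position and ''t'' = 1 giving the unprojected point:

{{{
#!c
#include <assert.h>

typedef struct { double x, y, z; } Vec3;

/* Parametric ray r(t) = a + t*(w - a) from the viewer's position a
 * through the unprojected worldspace point w. */
Vec3 ray_point(Vec3 a, Vec3 w, double t)
{
    Vec3 r = { a.x + t * (w.x - a.x),
               a.y + t * (w.y - a.y),
               a.z + t * (w.z - a.z) };
    return r;
}

int main(void)
{
    Vec3 eye = { 0.0, 1.0, 5.0 };   /* viewer position (example values) */
    Vec3 w   = { 2.0, 0.0, -3.0 };  /* unprojected worldspace point */

    Vec3 r0 = ray_point(eye, w, 0.0);  /* t = 0: the viewer */
    Vec3 r1 = ray_point(eye, w, 1.0);  /* t = 1: the clicked point */
    assert(r0.x == eye.x && r0.y == eye.y && r0.z == eye.z);
    assert(r1.x == w.x && r1.y == w.y && r1.z == w.z);
    return 0;
}
}}}

Values of ''t'' greater than 1 walk further along the ray into the scene, which is what you'd iterate over when intersecting the ray against scene geometry.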