June 28, 2019
This post will cover the following scenario: you have the internal and external camera calibration parameters of a hypothetical or actual camera, defined in the OpenCV framework (or one similar to it), and you want to model this camera in OpenGL, possibly with respect to an object model.
The material is presented assuming that you are familiar with the concepts of a pinhole camera, which is the model used for camera calibration of typical cameras in the OpenCV library. For more details about camera models and projective geometry, the best explanation is from Richard Hartley and Andrew Zisserman’s book Multiple View Geometry in Computer Vision, especially chapter 6, ‘Camera Models’ (this is my extremely biased view). I abbreviate that book as the “H-Z book” for the remainder of this tutorial.
This tutorial is meant to be generally self-contained, so that people can print off pages to read and make notes offline. Of course, not all sections are critical depending on your background, so skip around as needed for your own understanding.
It is possible to set many of the camera parameters using OpenGL function calls, such as
glOrtho(). However, in this tutorial, I set the matrices directly, and these matrices are then sent to a shader. If these details are new to you, they will become clearer in the code examples, which are now other pages of the tutorial, starting here. Because of the direct use of matrices, this tutorial may also offer some clues for deriving OpenGL projection matrices for camera models similar to the pinhole model.
These are the two resources I used for figuring out this problem. Both are great resources. If you’re here, it is likely because you noticed that these two resources do not spell out how to convert OpenCV calibration matrices to OpenGL matrices, which is the goal of this tutorial.
If you aren’t familiar with modern OpenGL, the below set of tutorials is a good place to start:
Let’s get started with some definitions!
First, we’ll discuss all the ins and outs of the image coordinate systems of the two standards.
The cameras in the H-Z and OpenCV coordinate systems both assume that the principal axis is aligned with the positive $z$-axis. In other words, the positive $z$-axis points towards the field of view of the camera. On the other hand, in OpenGL, the principal axis is aligned with the negative $z$-axis in the image coordinate system. Because of these changes, the $y$-axis will also be rotated 180 degrees between the two representations.
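To make the axis relationship concrete, here is a small Python sketch (my own illustration, not from the tutorial's code) showing that a 180-degree rotation about the $x$-axis maps the OpenCV-style positive-$z$ principal axis to OpenGL's negative-$z$ principal axis, and flips the $y$-axis along the way:

```python
# Rotation of 180 degrees about the x-axis: diag(1, -1, -1).
R_x180 = [[1, 0, 0],
          [0, -1, 0],
          [0, 0, -1]]

def rotate3(R, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

principal_axis_cv = [0, 0, 1]   # OpenCV/H-Z: camera looks down +z
y_axis_cv = [0, 1, 0]

print(rotate3(R_x180, principal_axis_cv))  # [0, 0, -1]: OpenGL looks down -z
print(rotate3(R_x180, y_axis_cv))          # [0, -1, 0]: the y-axis is flipped
```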
Within the OpenCV/H-Z framework, there are three coordinate frames: an image coordinate frame, a camera coordinate frame, and a world coordinate frame.
Well, within the OpenGL frame, we have four: the image coordinate frame, camera coordinate frame, the world coordinate frame, and … normalized device coordinates, or NDC. I will illustrate how these work, along with the order of operations and some other needed preliminaries, by analogy with the OpenCV/H-Z framework.
Figure 1. An illustration of the relationships between three coordinate systems: World, Camera, and Image within the OpenCV framework. Many of the ideas and the basic sketch of the camera coordinate figure will be familiar for those who have well-thumbed copies of the H-Z book. The scene observed by the camera is in the direction of the positive $z$-axis of the camera coordinate system. In the image coordinate system, notice that the origin is in the lower left corner, and the $y$-axis goes up – the opposite direction of the data matrix layout. More on that in Figure 3. The principal point, a parameter that is found during camera calibration, is marked in the image coordinate system.
Since we are dealing with homogeneous coordinates, we need to normalize the image point $\mathbf{x}$ by dividing by its third element. Assuming that the indexing for the vectors is 0-based (in other words, the first item has an index of 0, so the third will have an index of 2):

$$\mathbf{x}_{normalized} = \begin{bmatrix} x_0 / x_2 & x_1 / x_2 & 1 \end{bmatrix}^T$$
Following this operation, the first two elements, $x_0$ and $x_1$, are the image coordinates.
If $x_0$ or $x_1$ is not within the image space, then that point is not drawn on the image. For instance, image coordinates less than zero are discarded, as are those that are larger than the image dimensions. A similar process happens in OpenGL – just with an extra dimension ($z$)!
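As a concrete sketch of the two steps above (in Python, with hypothetical values and image dimensions), normalizing a homogeneous image point and applying the in-image test might look like:

```python
def normalize_homogeneous(x):
    """Divide a homogeneous 3-vector by its last element (index 2)."""
    return [x[0] / x[2], x[1] / x[2], 1.0]

def in_image(x, cols, rows):
    """True if a normalized image point falls inside a cols x rows image."""
    return 0 <= x[0] < cols and 0 <= x[1] < rows

x = [640.0, 240.0, 2.0]          # homogeneous image point
xn = normalize_homogeneous(x)    # [320.0, 120.0, 1.0]
print(in_image(xn, cols=640, rows=480))                   # True: drawn
print(in_image([700.0, 120.0, 1.0], cols=640, rows=480))  # False: discarded
```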
Figure 2. Importantly, note that this figure is produced for the highly specific conversion from OpenCV conventions to the OpenGL convention. Usually, the camera coordinate system has the negative $z$-axis as the principal axis. If this is the case for your calibration matrices, stop, and consult other guides.
Given that we’re assuming calibration matrices where the principal axis is the positive $z$-axis in the camera coordinate system, as in the OpenCV context, the pipeline starts by transforming world points by a rotation and translation – exactly like in OpenCV, except with a row added to make the matrix square. Then, the camera coordinate system is slightly different; in OpenGL there is the notion of near and far planes, which are parameters defined by the user. The points in the camera coordinate system are transformed to the next space – I’ll call it the cuboid space – by a matrix that is not a proper rotation and translation, but instead a reflection into a left-handed coordinate system. Then, a second transformation maps the cuboid space into a cube with corners at $(\pm 1, \pm 1, \pm 1)$ – normalized device coordinates. This concludes all of the transformations the user has to specify – once the coordinates are in the left-handed normalized device coordinates, OpenGL will transform those coordinates into image coordinates. To troubleshoot and transform them yourself, the equations are in Conversion Corner 1.
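The cuboid-to-cube step at the end of this pipeline is an orthographic mapping. As an illustration only (the Conversion Corners give the exact matrices used here), this Python sketch builds the standard glOrtho matrix – which performs exactly this kind of cuboid-to-cube mapping for right-handed eye coordinates looking down the negative $z$-axis – with hypothetical near/far values and image dimensions, and checks that two opposite corners of the cuboid land on corners of the $[-1, 1]^3$ cube:

```python
def ortho(l, r, b, t, n, f):
    """The matrix documented for glOrtho: maps the viewing cuboid
    x in [l, r], y in [b, t], z in [-n, -f] to the [-1, 1]^3 NDC cube."""
    return [[2.0 / (r - l), 0.0, 0.0, -(r + l) / (r - l)],
            [0.0, 2.0 / (t - b), 0.0, -(t + b) / (t - b)],
            [0.0, 0.0, -2.0 / (f - n), -(f + n) / (f - n)],
            [0.0, 0.0, 0.0, 1.0]]

def mat_vec(M, v):
    """Multiply a 4x4 matrix by a 4-vector."""
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# Hypothetical image-sized cuboid with near = 0.1 and far = 100.
M = ortho(0.0, 640.0, 0.0, 480.0, 0.1, 100.0)
print(mat_vec(M, [0.0, 0.0, -0.1, 1.0]))      # ~[-1, -1, -1, 1]
print(mat_vec(M, [640.0, 480.0, -100.0, 1.0]))  # ~[1, 1, 1, 1]
```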
For now, I am not going to specify these matrices, but instead describe how all the coordinate systems work in OpenGL. But don’t worry, we’ll get to specifying all of these items.
First, in OpenGL there is the notion of clipping points/objects that are not in between the near and far planes. While in the OpenCV framework we consider any points between the principal plane and +infinity viewable, this is not the case in OpenGL. To account for these planes – whereas the clipping in image space (in the previous section) is quite intuitive in OpenCV: if the point is not in the image (defined as $0 \le x_0 < \text{cols}$, $0 \le x_1 < \text{rows}$), don’t draw it – OpenGL uses 4-element homogeneous vectors to accomplish similar aims.
I’ll denote the OpenGL NDC coordinate as $\mathbf{x}'$; it is a column vector with 4 elements. It is also a homogeneous vector, and its last element is frequently given the letter $w$. Like in the OpenCV representation, we normalize the point by dividing by the last element, $w$ (index 3, again assuming 0-based indexing); I will say that a 4-element vector is normalized when the last element is equal to 1:

$$\mathbf{x}'_{normalized} = \begin{bmatrix} x'_0 / w & x'_1 / w & x'_2 / w & 1 \end{bmatrix}^T$$
And like before, we have a similar result: after normalization, the first three elements give the point’s position in NDC space.
These coordinates are not necessarily image coordinates. The $z$ values are needed so that OpenGL can compute the drawing order for objects. The NDC space is a cube of length 2 on each side, with dimensions $[-1, 1] \times [-1, 1] \times [-1, 1]$. Song Ho’s site has some good illustrations of the NDC space.
If any coordinate of $\mathbf{x}'$ lies outside $[-1, 1]$, then the point is not drawn (or, the edge with that coordinate on the end is clipped). In other words, if any coordinate is less than -1, or greater than 1, it is outside of the NDC space.
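A parallel Python sketch for the NDC case (values again hypothetical): normalize the homogeneous 4-vector by $w$, then check that every coordinate lies in $[-1, 1]$:

```python
def normalize_ndc(x):
    """Divide a homogeneous 4-vector by its last element, w (index 3)."""
    w = x[3]
    return [x[0] / w, x[1] / w, x[2] / w, 1.0]

def in_ndc_cube(x):
    """True if the normalized point lies inside the [-1, 1]^3 NDC cube."""
    return all(-1.0 <= c <= 1.0 for c in x[:3])

p = normalize_ndc([0.5, -0.25, 1.5, 2.0])   # [0.25, -0.125, 0.75, 1.0]
print(in_ndc_cube(p))                        # True: inside the cube
print(in_ndc_cube([0.0, 0.0, 1.2, 1.0]))     # False: beyond the far plane
```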
You may have noticed that the output of the OpenGL operation is not truly an image coordinate in the sense we’re used to working with in OpenCV – in other words, a coordinate in a data matrix – and you’re right. OpenGL takes care of the conversions to image space, but it is useful to know how those work. So for troubleshooting purposes, see the box below for conversion formulae.
To convert from the OpenGL NDC coordinates to OpenGL image coordinates $(u_{gl}, v_{gl})$, where $\mathbf{x}'$ has been normalized ($w = 1$), and assuming the viewport is at the origin with dimensions cols $\times$ rows:

$$u_{gl} = \frac{x'_0 + 1}{2}\,\text{cols}, \qquad v_{gl} = \frac{x'_1 + 1}{2}\,\text{rows}$$
Note that since the image coordinate system in OpenGL is defined differently than it is in OpenCV (see Figure 3), a further conversion – a vertical flip – is needed to convert these coordinates to the OpenCV coordinates:

$$u_{cv} = u_{gl}, \qquad v_{cv} = \text{rows} - v_{gl}$$
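For troubleshooting, both conversions can be sketched in a few lines of Python (my own illustration; the viewport-at-the-origin assumption and the image dimensions are hypothetical, so check them against your glViewport call):

```python
def ndc_to_gl_image(x_ndc, cols, rows):
    """Map normalized device coordinates to OpenGL image coordinates,
    assuming a viewport at the origin of size cols x rows."""
    u = (x_ndc[0] + 1.0) / 2.0 * cols
    v = (x_ndc[1] + 1.0) / 2.0 * rows   # origin at the lower-left corner
    return [u, v]

def gl_to_cv_image(uv, rows):
    """Flip vertically: OpenCV's origin is the top-left corner."""
    return [uv[0], rows - uv[1]]

uv_gl = ndc_to_gl_image([0.0, 0.5, 0.2, 1.0], cols=640, rows=480)
print(uv_gl)                       # [320.0, 360.0]
print(gl_to_cv_image(uv_gl, 480))  # [320.0, 120.0]
```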
First, use the $[R \mid t]$ extrinsic matrix from the OpenCV context, but add a row to make it square. Such as:

$$\begin{bmatrix} R & t \\ \mathbf{0}^T & 1 \end{bmatrix}$$
And assuming that you have an intrinsic camera calibration matrix from the OpenCV context, $K$, in the following form:

$$K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$
Then, we’ll use this to modify the OpenCV matrix and create the corresponding OpenGL perspective projection matrix. Note: it is highly likely that the skew parameter $s$ in the first row, second column of the OpenCV intrinsic camera calibration matrix could also be modeled in OpenGL, by negating this parameter, similarly as described in Kyle Simek’s guide. However, I have not tested this and tend to set the skew parameter to zero for my calibrations, so I leave it to you to test!
Given those preliminaries, and with the width and height of the image as the dimensions, we define two new variables and the new intrinsic camera calibration matrix in the OpenGL context.
Now, taking a closer look at Figure 2 and these matrices, you might be thinking, “holy smokes, why the heck would one switch back and forth, positive to negative, right-handed coordinate system to left-handed, etc., etc. Isn’t this a drag?” To which I answer: “yes.” But a couple of caveats: I am presenting the OpenGL pipeline from the perspective of someone in computer vision who loves the H-Z book and OpenCV, with a fair bit of hand-waving. In reality, and to add to the confusion, OpenGL’s camera coordinate system has as its principal axis the negative $z$-axis. I’ll say it again in case you’ve gotten this far without seeing it – if you have matrices where the calibration assumes a negative $z$-axis as the principal axis, check out other resources. I have done a lot of testing to confirm that this works.
How can you test your camera models before getting in too deep? This is easiest with a scripting language like Matlab or Octave (free); you could also do it with C++ and Eigen, Python, or any other language with which you are comfortable.
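For example, here is a minimal pure-Python check (with hypothetical calibration values, not from any real calibration) that projects a world point through the OpenCV-style pinhole model $\mathbf{x} = K(R\mathbf{X} + t)$; if your OpenGL pipeline is set up correctly, rendering a marker at the same world point should land on the matching pixel:

```python
def project_opencv(K, R, t, X):
    """Project a 3D world point with the pinhole model: x = K (R X + t)."""
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    x = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    return [x[0] / x[2], x[1] / x[2]]   # normalize by the third element

# Hypothetical intrinsics: fx = fy = 500, principal point (320, 240), no skew.
K = [[500.0, 0.0, 320.0],
     [0.0, 500.0, 240.0],
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity rotation
t = [0.0, 0.0, 5.0]                                       # scene 5 units ahead

print(project_opencv(K, R, t, [0.0, 0.0, 0.0]))   # [320.0, 240.0]: center
print(project_opencv(K, R, t, [1.0, 0.0, 0.0]))   # [420.0, 240.0]
```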
Figure 3. The top sub-figure illustrates the image coordinate system for the OpenCV and OpenGL contexts. The lower left corner shows the definition of the row and column indices for the OpenGL coordinate system; the OpenGL data matrix layout and image coordinate system are the same. The origin for matrices in the OpenCV context is the top left corner, requiring a vertical flip of the images grabbed from OpenGL with glReadPixels().
Finally, I’ll end with a pedantic point concerning the layout of data matrices – in other words, the indexing of the rows and columns containing the pixels of data – and the non-relation of that layout to the image coordinate system. OpenGL has a layout different from OpenCV, which I alluded to in the Conversion Corner and the details of which are illustrated and described in the caption of Figure 3.
My code (here) currently renders the scene with OpenGL, with the correct orientation. Then, the buffer is grabbed with
glReadPixels() and written to an OpenCV
Mat image structure – upside down, so that it will turn out right-side up. The details are on Page 2.
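The flip itself amounts to reversing the row order. In Python terms (a sketch of the idea, not the C++ code from the tutorial), with a hypothetical three-row buffer:

```python
# OpenGL's buffer rows run bottom-to-top; OpenCV's Mat rows run top-to-bottom.
# Reversing the row order converts between the two layouts.
gl_rows = [[10, 11], [20, 21], [30, 31]]  # hypothetical 3-row buffer
cv_rows = gl_rows[::-1]
print(cv_rows)   # [[30, 31], [20, 21], [10, 11]]
```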
© Amy Tabb 2019-2020. All rights reserved. The contents of this site reflect my personal perspectives and not those of any other entity.