Some thoughts on the IPD discrepancy

Following several posts about IPD settings and complaints about the mismatch between the measured distance between the lens centers and the IPD reported by the headset, I tried to figure out where the problem comes from and whether it is even a problem.

I have recently been running some tests on ED, and during those tests I realized that it is relatively easy to play with OpenVR (using pyopenvr). For example, I could get the recommended render target resolution, which proved that OpenVR applies an additional limit on the size that is not visible in the SteamVR UI (Elite Dangerous Setup Guide - Lets make it Dark Again :))

Another interesting piece of info one can get this way is what OpenVR calls the eye-to-head transform matrix, obtained by calling IVRSystem::GetEyeToHeadTransform.

This transformation matrix defines how the “eye coordinates” transform to the “head coordinates”, and it is used to split the original view (which the pancake version of the game would use) into the two views needed for stereo rendering in VR.
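For reference, reading these matrices with pyopenvr might look roughly like the sketch below. `to_numpy` and `read_eye_to_head` are my own helper names, the hardware query needs a running SteamVR session, and the exact way the 3x4 struct is indexed may differ slightly between pyopenvr versions:

```python
import numpy as np

def to_numpy(m):
    """Convert a 3x4 row-major matrix (nested, indexable) to a numpy array."""
    return np.array([[m[i][j] for j in range(4)] for i in range(3)])

def read_eye_to_head():
    """Query both eye-to-head transforms from a running SteamVR session."""
    import openvr  # pip install openvr (pyopenvr)
    openvr.init(openvr.VRApplication_Background)
    try:
        system = openvr.VRSystem()
        # HmdMatrix34_t exposes its rows through the .m field
        left = to_numpy(system.getEyeToHeadTransform(openvr.Eye_Left).m)
        right = to_numpy(system.getEyeToHeadTransform(openvr.Eye_Right).m)
        return left, right
    finally:
        openvr.shutdown()
```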

When using PiTool in “Compatible with parallel projection” mode, the corresponding matrices read:

Left Eye:

[[ 1.        ,  0.        ,  0.        , -0.03486158],
 [ 0.        ,  1.        ,  0.        ,  0.        ],
 [ 0.        ,  0.        ,  1.        ,  0.        ]]

Right Eye:

[[1.        , 0.        , 0.        , 0.03486158],
 [0.        , 1.        , 0.        , 0.        ],
 [0.        , 0.        , 1.        , 0.        ]]

They look almost like an identity matrix, except for the last column, which defines the translation and corresponds to the left and right eye offsets respectively. In this case it means I have my IPD set to ~70 mm, split evenly between both eyes. (It is a translation along the X axis of the view space.)
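The IPD can be recovered directly from those translation columns; a quick numpy check using the values above:

```python
import numpy as np

m_left = np.array([[1.0, 0.0, 0.0, -0.03486158],
                   [0.0, 1.0, 0.0,  0.0],
                   [0.0, 0.0, 1.0,  0.0]])
m_right = np.array([[1.0, 0.0, 0.0,  0.03486158],
                    [0.0, 1.0, 0.0,  0.0],
                    [0.0, 0.0, 1.0,  0.0]])

# The last column is the eye offset in head space (metres);
# the distance between the two offsets along X is the IPD.
ipd_mm = (m_right[0, 3] - m_left[0, 3]) * 1000
print(round(ipd_mm, 2))  # ~69.72 mm
```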

So far nothing new here: with parallel projection enabled we are rendering two coplanar views which differ only by the shift along the X axis, the same way as for the Rift or Vive.

Then I turned off “parallel projection” and read the matrices again in the native mode. The results got more interesting:

mLeft =  [[ 0.9848078 ,  0.        ,  0.17364816, -0.03486158],
          [ 0.        ,  1.        ,  0.        ,  0.        ],
          [-0.17364816,  0.        ,  0.9848078 ,  0.        ]]

mRight = [[ 0.9848078 , -0.        , -0.17364816,  0.03486158],
          [ 0.        ,  1.        , -0.        ,  0.        ],
          [ 0.17364816,  0.        ,  0.9848078 ,  0.        ]]

This shows not only a translation along the X axis (which is the same), but also a rotation of the views. To demonstrate how to figure out the angle, I use the following direction vector

look = [0.0, 0.0, -1.0, 0.0]

in the eye space of the left eye. The first three numbers are the (X, Y, Z) coordinates and the last one is zero, because I am interested in transforming a direction, not a position (putting 0 there omits the translation; putting 1 there includes it). Calculating the new direction vector in the head coordinate system gives:

mLeft x look -> [-0.17364816,  0.        , -0.9848078 ]

Since I know that the original vector had length 1, I can use the change of the X coordinate and basic trigonometry to calculate the angle by which the view is rotated about the Y axis (the Y coordinate does not change).

degrees(asin(0.17364816)) -> 9.999998972144015° ~ 10°
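The whole derivation can be replayed in a few lines of numpy, with the values copied from the matrices above:

```python
import numpy as np

m_left = np.array([[ 0.9848078 , 0.0,  0.17364816, -0.03486158],
                   [ 0.0       , 1.0,  0.0       ,  0.0       ],
                   [-0.17364816, 0.0,  0.9848078 ,  0.0       ]])

# w = 0: transform a direction, so the translation column is ignored.
look = np.array([0.0, 0.0, -1.0, 0.0])
head_dir = m_left @ look
print(head_dir)  # [-0.17364816  0.         -0.9848078 ]

# The look vector has length 1, so the X component is sin(angle).
angle = np.degrees(np.arcsin(-head_dir[0]))
print(round(angle, 4))  # ~10.0
```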

This means that in the native “canted” mode, the views the headset reports to OpenVR are not parallel but divergent by 10° on each side.
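Conversely, the reported matrix can be reconstructed from scratch as a pure rotation about the Y axis plus the eye offset; a small sketch (the function name is mine):

```python
import numpy as np

def eye_to_head(cant_deg, x_offset_m):
    """Build a 3x4 eye-to-head matrix: rotation about Y by cant_deg,
    plus a translation along X (half the IPD, signed per eye)."""
    a = np.radians(cant_deg)
    rot = np.array([[ np.cos(a), 0.0, np.sin(a)],
                    [ 0.0      , 1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return np.hstack([rot, np.array([[x_offset_m], [0.0], [0.0]])])

# Left eye: canted outward by +10 degrees, offset by -half IPD.
m = eye_to_head(10.0, -0.03486158)
print(np.round(m, 7))  # matches mLeft above
```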

The visualization above is scale-accurate for the eye size, the IPD and the angle, but the placement of the lenses and panels is arbitrary, just to give an idea of the impact of the divergence, notably how the lens centers cannot be at the same distance apart as the eyes.

I also made the assumption that the lenses are perpendicular to the optical axis (which corresponds to the view axis reported by OpenVR), and that, for the sake of predictable behavior, the lenses are positioned so that when the eye looks in the view direction defined by the headset, its optical axis coincides with the optical axis of the system.

Now it should be easier to imagine what happens when you set the IPD lower than your real one. The optical axis gets offset, but the angle remains the same, so you get the lens centers closer to the IPD distance, at the cost of getting the angular offset slightly off.

Another consequence of this design is that, because of the divergence of the views, the sweet spot becomes dependent on the distance from the face; it is no longer invariant as in the parallel projection. In parallel projection, if you set your IPD right, you can move the headset further or closer and it will only change the FOV. In the “canted” projection, moving the headset further or closer also changes the position of the sweet spot, as it may end up in front of or behind the eye pupil.


Yes, I raised the problem in fewer words and less elegantly in the thread about vision fatigue. I also think this crossed-axis position of the eyes in the horizontal plane is anatomically stressful, because it does not occur in regular vision.

In real life your eyes are not positioned like that. Also, the lenses are not tilted; a lens can correct a display that is angled so that it looks flat going into your eye.


The lenses of the 5k+ are clearly at a small angle relative to the normal of the user’s face.

If the Fresnel lenses were compensating for this angle, the concentric ridges would be asymmetrical, or the lens would be hybrid on the other side, for which we have no clear indication. From memory, when SweViver removed the lenses they were flat… Without the exact specification I’m kind of skeptical.


You are right. In real life it is quite difficult (but possible) to make your eyes diverge. It would not happen in normal life, but you can force your eyes into this configuration by looking at, for example, two dots which are further apart than your IPD, and trying to look past them until they align into one in your view. It causes some strain and an uncomfortable feeling, but it is possible.

In normal life (and we want that in VR too) the gaze has to be convergent, because it is the natural way we look at things, and it also helps the brain estimate distance (bigger convergence = shorter distance).

I have sketched a modified version of one of the Pimax configuration pictures, where the eyes are focused at a point 1.5 m away (which might be around the focal distance of the Pimax).
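For a rough sense of scale: with an IPD of ~70 mm and a focus point 1.5 m away (both assumed), each eye converges inward by only about 1.3°, while the views are canted outward by 10°:

```python
import math

ipd_m = 0.070       # assumed IPD (~70 mm, as in the matrices above)
distance_m = 1.5    # assumed focus distance

# Each eye rotates inward by atan(half-IPD / distance) to converge on the point.
converge_deg = math.degrees(math.atan((ipd_m / 2) / distance_m))
print(round(converge_deg, 2))  # ~1.34 degrees per eye
```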

You can notice that while this is a perfectly natural eye orientation (and focus), it is far from optimal when considering the optical geometry of the Pimax lenses, as both optical axes pass through the lenses off-center and therefore (most likely) incur distortion from being off-center and non-perpendicular.

It also shows that when people set the IPD according to their real IPD (and thus have the lens centers spaced further apart than their eyes), the popular method “focus one eye, then focus the other eye on the same object, and set the IPD so that both are sharp” will never work, because the other eye will always be affected by the effect described above, as the first eye automatically becomes dominant in this test.

Setting the IPD so that the images in both eyes are clear (or so that the lens center spacing matches the IPD) leads to a skewed angular perception of both views, because they are no longer observed at the angle at which they are rendered.

So there is no clear way out.


Wait… Are we talking a python library?

Thanks for your research, calculations, reasoning, and sketches! Had been thinking of drawing a similar sketch, but only to demonstrate how with the canted lenses, unlike with in-line ones, n degrees to the side from a gaze centre line, looking straight ahead at something infinitely distant, is radically different to-and-between the left and right, as far as what part of the lens the section of image is coming through, and at what angle. You saved me that work, and then some. :slight_smile:


You can still draw it :wink: I also had it in mind, but I had a feeling I had already overloaded the forum with my sketches.


Keep them coming, says I. Much appreciated! :smiley:


Only the highest quality of sketches on this forum.


That’ll be $20.


Can I pay you in turnips?

The important point, by the way, would be the difference, in lens intersection, where you have object convergence, so between something that is, e.g. to the left, for both eyes – the opposite of the mirroring. :9

Unfortunately I only accept greenbacks (aka pickles and other green vegetables)

Is this better?


I can do kale… Would you prefer a sack of raw leaves, or the stewed-with-cream-and-treacle dish we traditionally have on the Christmas table, around here?

I’m disoriented now. This is a paradigm shift. We may need to pivot, to properly consolidate such innovation, and integrate it into a new holistic vision. :dizzy_face:


Unfortunately, I can’t do that… because kale is gross. :nauseated_face:

Are the X-eyes because your eyes were physically removed from your head and stuck on the wrong side of the display? :laughing:


Gherkins it is, then! I’ve got the jar right here; You go out with a pillow, while I prepare the trebuchet.

This is quite the attractive PCB, though!


When they say “eye-tracking” they actually mean that the pair of eyes in the HMD is tracking YOU!


Death to the demoness Allegra Zuckerberg… or something… :stuck_out_tongue:


Thanks Risa2000 for the explanations. Does that mean the angles I marked on the picture are 10° on each side, or is it not this angle? I am trying to understand where this angle is on your picture, so I can calculate the theoretically optimal distance between my eyes and the lenses.


Your marks are correct.


Thanks @risa2000.
Please correct me if I’m wrong. If I want to set the IPD correctly, this is what I have in mind.

I should have some FOV meter in front of me, like this one from Miami Mike:

I should point my nose at 0°, close my left eye, and look with my right eye at the 10° mark (without moving my head at all, just the right eye). Then I should adjust the IPD using the hardware adjuster so that the picture is as sharp/clear as possible (with the right eye directed at 10° the whole time) and I see as large a sweet spot as possible. When I find the best IPD for the right eye, I should repeat the test with the left eye: close the right eye, open the left one, direct it at -10°, and check whether adjusting the IPD for the left eye gives me similar results (my head not moving at all, nose pointed at 0°). I am assuming in this case that both eyes are healthy, there is no dominant eye, and my face is symmetrical. The best IPD should be the same for both eyes.
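As a side note, where the ±10° marks would sit on a flat chart depends on the viewing distance; a back-of-envelope sketch (chart distance d = 1 m is my assumption):

```python
import math

# Hypothetical flat FOV chart placed d metres in front of the viewer:
# the +/-10 degree marks sit at d * tan(10 deg) from the 0 degree centre mark.
d_m = 1.0
offset_m = d_m * math.tan(math.radians(10.0))
print(round(offset_m, 3))  # ~0.176 m
```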

Is this method correct?
