Fixed Foveated Rendering not working in iRacing

Hi there,

I just got my Pimax 5k+ recently, and I’m still figuring out the best settings. I mainly use it for iRacing and Assetto Corsa Competizione.

I’m currently running the latest NVIDIA drivers (442.19), the PiTool Alpha 254 and Firmware 255 on my 5k+ (P2, SN:2036…).

Now with the PiTool at Render Quality 1.0 and SteamVR at 100%, I get 3852x3290 pixels at 90Hz/normal-FOV and a ridiculous 6950x3290 at 90Hz/large-FOV.

My 2080Ti can handle lower resolutions at normal FOV, but the large FOV setting is just too much. I figure that FFR at the aggressive setting could greatly reduce the number of pixels, but it doesn’t seem to do anything: the GPU frametime in fpsVR stays the same. It did seem to be working in Assetto Corsa Competizione, though.

Is there another PiTool version or some hack that will get FFR working in iRacing? What about the “Variable Rate Super Sampling” setting in the NVIDIA control panel? Does that have to be enabled for iRacing?


Welcome to the OpenMR Forum!

There are a few iRacing members here. I think @MikeJeffries might be one. Also @SweViver and @mirage335 may have some setting recommendations.


I haven’t used the 5k+ much since I got the Valve Index, but I never used FFR. I think there’s an issue where it doesn’t work for iRacing. Also, I didn’t like how it looks. I used Normal FOV, 100% SteamVR SS, and I changed the SS through iRacing’s OpenVR settings in their ini file. I put it up a bit, to around 120-130 I believe. With a 9900KS and 2080Ti it looked perfectly fine to me. PiTool I kept at 1.0 as well.

I haven’t used it in iRacing for at least 6 months though, so keep that in mind.


At 3290v (vertical pixels), that is already 2.28x Total SR (combined super sampling, 3290/1440).

This is too much.

Ignore the ‘100%’ number in SteamVR, it is bogus. Technically, ‘50%’ is often ‘1.0x’ supersampling, but that is not reliable.

Change PiTool Render Quality to 1.25, then adjust SteamVR until it reads out above 2880v (vertical pixels). At 2x Total SR, that will be visually as high-quality as your Pimax 5k+ is going to get.

If that still does not work well, adjust a bit lower still, down to 2520v, at a Total SR of 1.75x. You are not likely to notice the quality difference.

As a minimum, consider 2160v, a Total SR of 1.5x, and the same number of pixels rendered as the 8kX at native (though it won’t look as clear as that of course).

Keep in mind, every time you reduce the vertical pixel count, the total number of pixels your GPU has to render falls with the square of that reduction, since the horizontal resolution scales along with the vertical.

This is exactly what I created spreadsheets to calculate.
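For anyone who wants to reproduce that spreadsheet math, here is a minimal sketch in Python. It assumes the 5k+’s native 1440 vertical pixels per eye and ignores any distortion-correction discount, as discussed above:

```python
# Rough sketch of the vertical-pixel / Total SR math from the post above.
# Assumes a 1440p-per-eye panel (Pimax 5k+) and no distortion-correction
# discount on vertical pixels.

NATIVE_V = 1440  # native vertical pixels per eye

def total_sr(rendered_v: int) -> float:
    """Total SR = rendered vertical pixels / native vertical pixels."""
    return rendered_v / NATIVE_V

def relative_load(rendered_v: int, baseline_v: int) -> float:
    """Pixel load scales with the square of the vertical resolution,
    since horizontal resolution scales along with it."""
    return (rendered_v / baseline_v) ** 2

for v in (3290, 2880, 2520, 2160):
    print(f"{v}v -> Total SR {total_sr(v):.2f}x")

print(f"Dropping 3290v to 2880v leaves "
      f"{relative_load(2880, 3290):.0%} of the original pixel load")
```

Running it gives the same ladder as above: 2.28x, 2.00x, 1.75x, 1.50x, and shows that the drop from 3290v to 2880v alone cuts the pixel load to about 77% of the original.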

FFR is kind of quirky. Sometimes it works, sometimes not.


At 3290v (vertical pixels), that is already 2.28x Total SR (combined super sampling, 3290/1440).

I’m not sure it works like this. It would only apply if we were looking directly at the LCD screens, with no lenses deforming the image. The image is rendered and then probably warped to a different size to account for the lens distortion. So rendering at 3290p will probably result in a smaller vertical resolution after the warping. Only after that has happened will the image again be downsampled and/or cropped to 1440p. So the real question is: what render resolution will be warped to 1440p in the sweet spot area? This would then be our 1:1 or 100% resolution.

Ideally, Pimax would tell us what render resolution that is, but I haven’t been able to find any info on this. I suppose they change it all the time with new firmware and PiTool versions, when they try to improve on the lens distortion.

Anyhow, I totally agree that PiTool at 1.0 and SteamVR at 100% is most likely overkill. I just want the large FOV so badly, and I need some values to start with. Once I’ve gotten FFR going, I can lower SteamVR until I get a steady 90fps.

But not using FFR seems like such a waste. I’ve been running triple screens for years and rendering the far edges of the outer monitors at 100% resolution was just overkill. The area at the corner of the eye makes for a great sense of speed, but it sure can be blurry over there. I would have been very happy if there had been something like FFR for this.

In vertical pixels, it works exactly like that. Adjusting PiTool to 1.0x, and SteamVR to something that makes sense like 50% or 100%, does result in exactly the vertical resolution of the headset. And changing things like PiTool to 1.25x does result in exactly the change in resolution that would be expected.

Horizontal pixels is where things get a lot more uncertain, which is why I standardize on vertical pixels for all of my calculations.

Not using FFR is indeed a waste. However, not as much of one as might be expected. Much of the rendering pipeline is not bypassed simply by reducing the number of pixels at the edges - objects and textures out there must still be processed by the GPU. In the end, the theoretical maximum benefit can be something like 50%, and the actual benefit can be as high as something like 30%, aside from any gains due to bypassing inefficient code.

I did extensive SS testing a few months ago in rFactor 2 and I could not see any image degradation down to 70% SS in SteamVR with PiTool at 1.0 and Normal FOV. I did these tests by finding the distance at which a brake marker sign became unreadable while backing up slowly. At 60%, I was definitely seeing a difference.

Now I use Large FOV at 72Hz in iRacing and continue to use this rule: PiTool at 1.0 and SteamVR at 70% or more. I only increase SS to keep GPU utilization in the 70% range to avoid stuttering. For some reason I get stuttering when the GPU is not busy enough. (RTX 2080)

I haven’t tried FFR in iRacing because I look on the sides a lot and find the reduced resolution annoying. Additionally, I had seen no performance improvement when I tried FFR in RFactor 2 a few months ago.


Hi. If you raise the resolution, FFR is much less annoying. The lower the resolution you use, the easier it is to notice the low resolution in the outer parts.


Sorry, but I highly doubt that. Why would it make a difference horizontally or vertically?

I used to own the Rift DK2, on which you could remove the lenses and look directly at the screen. There was barrel distortion all over. I would guess current HMDs will work just like that.

Pimax headsets correct much more optical (and apparently chromatic) distortion towards the far outer horizontal edges than anywhere else. Probably this is due to the larger lenses imposing a larger focal length and a more rectangular profile.

Now please don’t take this the wrong way, but it really does seem you are not putting enough thought into this.

First you make a statement ‘I highly doubt that’, then you ask a question. As context for your perhaps rhetorical question about a wide FOV headset, you reference previous experiments with an obsolete narrow FOV headset. Last, you conclude with a guess that all current headsets will work the same.

Let me be clear.

Native vertical pixels, times the Total SR (all supersampling in the whole pipeline), will always equal rendered pixels, unless vertical pixels are discounted for some reason (i.e. distortion correction).

Pimax wide-FOV headsets are not doing this, as indicated by vertical pixel numbers consistently changing exactly as would be expected when PiTool Render Quality is increased by 25%.

Other wide-FOV headsets are probably not doing this either, as their optics would be similar or better.

Narrow FOV headsets might be doing this, but not by more than maybe 20% in the worst imaginable scenario. Still, that would be accurate enough to say 2.28x Total SR is at least 128% of the rendered pixels at the point of diminishing visual quality for a 1440v headset, which works out to roughly 64% excess load (1.28² ≈ 1.64).

Since you are not working with a non-Pimax or narrow FOV headset now, all you need to consider is that the math clearly indicates you have been rendering way more pixels than is an ideal tradeoff between performance and visual quality.

The Pimax and StarVR use a “fried egg” style hybrid lens. The part near the nose is the yolk; this area represents your traditional narrow-FOV gen-1 headsets with round lenses (though this area of the FOV is larger on Pimax). The egg white is a flat area of the lens with minimal magnification.

But unless you’re the optics designer, as these are completely custom hybrid lenses, most of this is guesswork.

To gain a bit of an idea of how they work, you could study the Wearality Sky document to some extent, though their 150° wide FOV does not use canted displays.

@Neoskynet knows a lot about Optic designs.


I was thinking of two different things, when I was using the term lens distortion before. The fresnel lenses in the HMD and the “lens distortion” that every 3D-game engine produces. I probably haven’t phrased that properly, so I’ll try to elaborate. I’m not trying to lecture or offend anyone, so please bear with me:

When we look at a game on a monitor, we see a distorted image. It is a spherical image projected onto a flat screen and looks like a high-FOV photo taken with a rectilinear/wide-angle lens.

If we were to put that on a VR screen everybody would get motion sickness, so the image is rectified, which produces an image with 4 rounded edges. If you just put that on the screen of an HMD (like the Rift DK2 did iirc) you waste tons of precious screen space. So what do you do? You crop the rounded edges and magnify to the size of the screen. The result is an image that has a lower resolution than the screen.

When I look through the lenses of the Pimax 5k+, I see no black rounded edges on the top and lower end of the screen, so I can only assume that they crop and zoom. If you were to raise the render resolution by 25%, the cropped area would grow linearly.

But all this is beside my original point. In order to get a crisp image, I need something like 2000 pixels vertically, and the wide FOV just produces a ridiculous number of pixels with that (something like 18 megapixels). So I can either live with a choppy framerate, or dial it down to a blurry image. Or I just switch to the normal FOV, like everybody else. :confused:
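As a rough sanity check of that figure, here is a small Python sketch. It assumes the 6950x3290 large-FOV readout quoted at the top of the thread is per eye, so its aspect ratio carries over to a 2000v render, and it counts both eyes:

```python
# Hypothetical check of the "~18 megapixels" estimate above.
# Assumption: the 6950x3290 large-FOV render resolution quoted earlier
# is per eye, and its aspect ratio holds at other vertical resolutions.

large_fov_aspect = 6950 / 3290  # ~2.11 horizontal : vertical

def total_megapixels(vertical: int, eyes: int = 2) -> float:
    """Total rendered pixels (both eyes) at a given vertical resolution."""
    horizontal = vertical * large_fov_aspect
    return horizontal * vertical * eyes / 1e6

print(f"{total_megapixels(2000):.1f} MP")
```

This prints roughly 16.9 MP for both eyes combined, in the same ballpark as the ~18 MP estimate.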

FFR is my only hope to be able to utilize the wide fov in more demanding games/sims.


Thank you for clearly illustrating the effect you were describing.

If such crop and zoom is occurring, and this is not already optimized out some other way, then this seems to happen before reaching the SteamVR compositor.

Even if such crop and zoom is occurring, it would not need to account for more than maybe 20% of the vertical pixels in the worst imaginable case.

Therefore, it is still possible to at least roughly if not exactly determine how many pixels are supersampled into each actually displayed pixel - it is possible to calculate Total SR.

Your Total SR of 2.28 is still way too high. Even more so if rendered pixels at the edges are dropped.

There are definitely techniques other than FFR, which in some cases reduce the performance impact of off-screen pixels. All of these techniques show marginal performance gains at best, which is only helpful for certain applications. Other trivial inefficiencies, such as Parallel Projections, are often at least as severe.

Remember, foveated rendering of any kind can only optimize part of the pipeline.

FFR is not your only hope to utilize the wide FOV in more demanding games/sims. Proper optimization is, which only sometimes includes use of FFR.

In early PiPlay (1.1.92), a user found that you could take a screenshot from desktop SteamVR that would give you three screenshots from the rendering pipeline: unmodified, cropped, then lens-warped.

But this stopped working in later PiPlay, either due to changes in Pimax rendering and/or changes in SteamVR.


Regarding the distortion/supersampling question, I just came across an interesting review on the pimax website:

Resolution of existing headsets at SteamVR SS 100%:
HTC Vive 1512x1680 2.54 MP
Oculus 1344x1600 2.15 MP
Vive Pro 2016x2240 4.5 MP
Odyssey 1427x1776 2.54 MP
Lenovo Explorer 1593x1593 2.54 MP
If HTC Vive and HTC Vive Pro use a coefficient of 1.4 to compensate for distortion, Pimax 5k+ should use a coefficient of about 1.563 (in my measurement) and the following values should be used for the 1440p display:
Large FOV 4751x2250 10.69 MP
Normal FOV 2633x2250 5.92 MP
Small FOV 1853x2250 4.17 MP
These values are obtained by setting SS=47% at PiTool Quality=1.0, and they should be considered the baseline.
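The arithmetic in that quote does check out; here is a quick Python sketch verifying the coefficient against the 1440p panel and the quoted megapixel counts (all values taken from the quote, nothing else assumed):

```python
# Verify the quoted 1.563 distortion coefficient against the 1440p panel,
# and the per-eye megapixel counts for each FOV mode from the review.

COEFF = 1.563
PANEL_V = 1440  # native vertical pixels per eye

print(f"{PANEL_V * COEFF:.0f}v")  # ~2251, matching the quoted 2250 vertical

for name, (w, h) in {"Large": (4751, 2250),
                     "Normal": (2633, 2250),
                     "Small": (1853, 2250)}.items():
    print(f"{name} FOV: {w * h / 1e6:.2f} MP")
```

The loop reproduces the quoted 10.69, 5.92, and 4.17 MP figures.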


To be fair, the guy doesn’t explain how he ended up with a 1.563 coefficient.
It is well known that the Vive uses a 1.4x resolution before warping, in order to have a distorted image fitting most of the panel (with the exception of the corners).

But is going by a rule of three for a wider FOV the right way to compute that coefficient?
I’m just throwing the idea out there, but doesn’t the distortion increase the wider the FOV gets? It also depends on how the lenses distort the image (which doesn’t seem proportional the farther we get from the sweet spot).

And the pixel density computation done in this review is wrong (or at least was, I don’t know if Pimax has done something since then), since the lower the FOV gets, the less of the panel surface is used.
SweViver illustrated this when he reviewed the XTAL, if I remember correctly.

I’m not saying the native PiTool resolution is not overkill, but I highly doubt the engineers at Pimax added extra supersampling on purpose, considering it would make their headset more difficult to run.
That’s probably the resolution needed for the image, once distorted, to fit the panel, but under-sampling can bring performance gains that overshadow a loss in quality that is sometimes not even perceivable.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.