Recommended render target resolution is just that - recommended

Inspired by some recent posts about the PiTool, SteamVR and application settings for rendering quality (supersampling), I decided to have a closer look at the mechanics at play, while (ab)using some common applications - SteamVR Home and Bigscreen Beta. I did not expect what I found.

First let me state the config, even though I believe for this particular exercise it is not that important:
PiTool v1.0.1.91 (same old), headset firmware v181
PiTool rendering quality 1.0, Hidden area mask enabled.
FOV: Normal
SteamVR global manual override at 100%
OpenVR limit on max. target res raised to 8192.

With this config, SteamVR showed the recommended target resolution as:
Parallel projection off: 3202x2633
Parallel projection on: 3851x3291

SteamVR Home

With Parallel projection turned off I had a look at the image output of the application and found this:

which looked interesting. The resolution of the texture resource is 5216x4289, which is approx. 1.62x (in each dimension) the recommended res (PP off -> 3202x2633). The complete image in the top left corner has the resolution 3202x2633. This, however, seems to be rather a coincidence, which I will explain later.
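Those factors can be sanity-checked with a bit of arithmetic (the resolutions are the ones from the capture above; note that a 1.62x per-axis scale means roughly 2.6x the pixel count):

```python
# Quick check of the scale factors observed in the capture above
# (pure arithmetic, no OpenVR involved).
recommended = (3202, 2633)   # SteamVR recommended target, PP off
observed    = (5216, 4289)   # texture resource actually allocated

scale_w = observed[0] / recommended[0]
scale_h = observed[1] / recommended[1]
pixel_factor = scale_w * scale_h

print(f"per-axis scale: {scale_w:.2f} x {scale_h:.2f}")  # ~1.63 on each axis
print(f"pixel-count factor: {pixel_factor:.2f}")         # ~2.65x the pixels
```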

Why are there several images of different sizes in the resource buffer? I believe this is because SteamVR Home tries to adjust the render target resolution to the GPU+CPU performance: it renders several test frames and, once the rendering time hits the target, keeps that resolution. So the bigger images are leftovers from the testing procedure.
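If that guess is right, the probing could look roughly like the sketch below: start at a high scale and step down until the measured frame time fits the 90 Hz budget. The function name, step factor and cost model are my assumptions for illustration, not anything taken from SteamVR Home:

```python
# Hypothetical sketch of the adaptive-resolution probing described above.
FRAME_BUDGET_MS = 11.1  # one frame at 90 Hz

def settle_scale(measure_frame_ms, start_scale=1.62, step=0.9, min_scale=0.5):
    """Step the render scale down until a test frame fits the budget.
    Each rejected scale would leave one of those 'stacked' test images
    behind in the resource buffer."""
    scale = start_scale
    while scale > min_scale and measure_frame_ms(scale) > FRAME_BUDGET_MS:
        scale *= step
    return scale

# Toy cost model: frame time grows with pixel count, i.e. scale squared.
settled = settle_scale(lambda s: 9.0 * s * s)
```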

Then I turned parallel projection on, which raised the recommended render target resolution to 3851x3291 and again examined the output image:

Now the image resolution is 6274x5361 (which is again 1.62x the dimensions of the originally recommended res) and the smallest image inside is 3128x2662, which is less than the recommended resolution.

Bigscreen Beta

EDIT: As pointed out by @dragn09 below, Bigscreen Beta has a default 1.5x supersampling option set, so the following text does not really prove the same point as the one above, and I therefore leave it here just for the record. The conclusion at the end, however, remains.

In BSB I did not observe “stacked” images of different res in the output, so I assume that behavior was strictly application specific (i.e. limited to SteamVR Home), but what I did observe was a rendering resolution different from the one suggested (and reported) by SteamVR.

With parallel projection off and the recommended target res being 3202x2633, BSB rendered the output image at 4803x3950:

Which seems to be 1.5x the dimensions of the recommended res. With parallel projection on, the output image has the res 5778x4937 (again 1.5x the dimensions of the original res).
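The 1.5x claim checks out against both pairs of numbers reported above:

```python
# Observed BSB output vs. SteamVR recommended target, from the text above.
pairs = [((3202, 2633), (4803, 3950)),   # parallel projection off
         ((3851, 3291), (5778, 4937))]   # parallel projection on

scales = [(obs[0] / rec[0], obs[1] / rec[1]) for rec, obs in pairs]
for sw, sh in scales:
    print(f"{sw:.3f} x {sh:.3f}")   # both very close to 1.5 per axis
```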


The important conclusion of this observation is that regardless of the recommended resolution (i.e. the number SteamVR shows in the UI and OpenVR suggests to the application), the application is free to choose a higher one (I also saw examples where the res was higher than the recommended one) or a lower one, as it deems fit.

Without explicitly checking the output images, one cannot be sure which rendering resolution the application uses, and from this point of view, the numbers reported by SteamVR have to be taken for what they are - recommended.


I think Bigscreen has a default setting of 1.5x resolution in its settings


This old table might be of interest:


EDIT: “Radial Density Masking” refers to checkerboarding part of the view, toward the periphery, by the way. Fixed foveated rendering by another name. :7


I have probably already passed today’s wit peak, so I might be a bit dense, but I cannot really see how this is relevant to my post. The table uses “the old” scale (i.e. by dimension, instead of by pixel count) and the other columns are confusing too, as there is currently no Quality Level in SteamVR, nor an MSAA reference.

OpenVR currently determines the recommended supersampling factor by measuring GPU+CPU performance factored by the recommended resolution from the HMD. What are you trying to say? :thinking:

What you see there is an example of Valve’s from the get-go recommending that developers implement dynamically adaptive quality in their games, in order to maintain 90 (true) fps on most machines. At the time, this is what they were using for their new “Robot Repair” experience, and I believe also one of the features of an example Unity package they released (…and yes: Back then they still used the “correct” way to factor scale ;9 ).

So this should account for the stepping up and down, that you observe happening in Valve’s own stuff (still), even while almost no third party ever ended up making the same effort. :7

This was of course before SteamVR had either the automatic baseline supersampling recommendation or per-app settings. It should nonetheless still simply apply app-level, automagically, without user input, to the recommended rendertarget size acquired from SteamVR: The engine adjusts quality level (comprising the AA and resolution scale parameters, as well as the checkerboarding, as a last resort), on the fly, to suit the performance of any given moment. (EDIT: …so turn, from your location at the edge of the game map, from looking out past the border, to looking in, where almost all the stuff is, and watch the graphics degrade in quality, instead of beginning to drop frames; Turn back, and see fidelity return.)

EDIT2: TLDR: Those quality levels, and their associated values, are not user settings, but (…examples of…) predefined discrete quality levels for the game to switch between on its own, as needed. (EDIT3: There are also, in that particular example table, four different rendertarget multipliers in the six quality levels above 0, which is the same number as in your first picture, but here I’m drawing some reeeeally sketchy conclusions. :P)
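The discrete-level scheme described above could be sketched like this. The level table, thresholds and hysteresis are illustrative assumptions, not Valve’s actual values:

```python
# Minimal sketch of discrete adaptive quality: predefined levels the game
# steps between on its own, based on measured GPU frame time.
QUALITY_LEVELS = [
    # (msaa, render_scale, radial_density_mask)
    (0, 0.7, True),    # level 0: last resort, checkerboard the periphery
    (2, 0.8, False),
    (4, 0.9, False),
    (4, 1.0, False),
    (8, 1.1, False),
    (8, 1.4, False),   # headroom levels above the baseline
]

def adjust_level(level, gpu_frame_ms, budget_ms=11.1):
    """Step down one level when over budget, up when there is clear headroom."""
    if gpu_frame_ms > budget_ms and level > 0:
        return level - 1
    if gpu_frame_ms < 0.8 * budget_ms and level < len(QUALITY_LEVELS) - 1:
        return level + 1
    return level
```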

Now it is clearer :slight_smile:.

If my assumption is correct, the character of the render buffer suggests that SteamVR Home first tried to render at 1.62 “scale” and then scaled it down until it reached an acceptable render time (which BTW was around 9 ms at the time of running). Sometimes I saw only like 3 iterations; the one I posted above for PP ON is an extreme; once I saw only 2. So it seems to be really dynamic. I assume the fact that I ran the NVidia debugger at the same time could distort the results a bit, but it should not impact the conclusion.

The big fluctuation may explain why some people complain about a blurry image in SteamVR Home when the recommended resolution reported by SteamVR suggests that it should be OK. Basically it means that while in theory they should not have a problem, in reality something is holding their system down.

Sounds reasonable. Maybe non-discrete adjustments would be less jarring - dunno.

I suppose the automatic scaling could conceivably make it a little less easy to notice when one has something bottlenecking one’s computer; stuttering tends to raise irk faster than slightly blurry visuals… :7

Interesting post, need to digest it.
Subjectively, I noticed I get the best visual quality possible by setting PiTool to 2.0 and leaving SVR at 100%. This makes a very big difference in rF2.

I wonder what the PiTool multiplier really is and why it matters so much.

Would be nice, though, to not have the OpenVR compositor auto-adjusting the Render Target Resolution on us.

  1. Because, as now proven with the Pimax 5K and 8K, 90 fps is not required for a good VR experience if the headset is capable of decent image display.

  2. We are grown men and women and can deal with the fact that we might have dialed in a setting that is beyond our hardware’s ability to do a reasonable job with, and subsequently we can dial it back.

  3. It makes it annoying to find what settings work with what we have when the software is fighting us and giving us inconsistent results as we experiment with our settings. Did we ever have such an experience in previous viewing-device software subsystems? No, usually we get the direct result of our over-ambitious settings, and we then know to dial it back to what is a better experience for ourselves. We do not need babying, and this is why we use computers for our entertainment instead of consoles or mobile devices.

If there is a way for us to stop this auto setting of the Render Target Resolution - I am all ears.


I have made several posts on the subject:


If you refer to my post, it is not OpenVR which auto-adjusts the resolution, it is SteamVR Home, i.e. the application itself, without any user input or control.

OpenVR calculates the recommended SS factor from the info the headset provides (i.e. the headset-recommended render target resolution and the hidden area mask percentage) so that the final resolution (which is shown in SteamVR) corresponds to the refresh constraint (e.g. 90Hz) and the GPU+CPU performance.
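A rough model of how such a factor could fall out of those inputs (the function name, the mask fraction and the fill-rate budget are all assumptions for illustration, not OpenVR’s actual formula):

```python
import math

# Hedged sketch: derive an automatic supersampling factor from the pieces
# named above - the HMD-recommended per-eye resolution, the hidden-area-mask
# coverage, and a measured per-frame GPU fill budget (in megapixels).
def auto_ss_factor(hmd_res, hidden_mask_fraction, gpu_mpix_per_frame):
    """Return a pixel-count SS factor so the *visible* pixel load
    fits what the GPU can fill in one 90 Hz frame."""
    base_pixels = hmd_res[0] * hmd_res[1] * 2             # both eyes
    visible = base_pixels * (1.0 - hidden_mask_fraction)  # mask is culled
    return (gpu_mpix_per_frame * 1e6) / visible

factor = auto_ss_factor((3202, 2633), 0.17, 14.0)  # illustrative inputs
scale_per_axis = math.sqrt(factor)
```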

If you choose to manually override the supersampling factor in SteamVR, then the recommended resolution the application gets from OpenVR is the one you set by your manual override.

The “problem” I wanted to point out is that there are applications (SteamVR Home) which do not necessarily render the image at the recommended resolution, without giving any clue to the user. This may lead to some confusion when the observed image quality does not seem to correspond to the set image quality (recommended render target res).

Technically, having PiTool at 2.0 and SVR at 100% should be equivalent to PiTool at 1.0 and SVR at 400% (as per my posts above), as both combinations should lead to the same recommended render target resolution.
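The equivalence follows from the SteamVR percentage scaling pixel count while PiTool quality scales each axis; assuming that reading of both sliders, the per-axis scale works out to `pitool * sqrt(svr_percent / 100)`:

```python
import math

# Sketch of the combined multiplier (the interpretation of the two sliders
# is the assumption stated in the lead-in, not documented behavior).
def target_res(base, pitool, svr_percent):
    axis = pitool * math.sqrt(svr_percent / 100.0)
    return (round(base[0] * axis), round(base[1] * axis))

base = (3202, 2633)  # recommended at PiTool 1.0 / SVR 100%, PP off
a = target_res(base, 2.0, 100)
b = target_res(base, 1.0, 400)
# both combinations land on the same target, as argued above
```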


I have found there to be some weird stuff going on with render targets as set within SteamVR and applied to apps that rely on SteamVR for their VR implementation. Inconsistent render resolutions when pushing the pixels: sometimes the app will render at the requested target; other times it will kick down to a very poor render resolution for the same setting.

SteamVR Home is not the only instance of this behaviour.

Thanks, it now makes sense :slight_smile: Will test 1.0/400% to see if it looks identical to 2.0/100%.

Then the next question would be: at which point does increasing the resolution no longer improve quality? With the CV1 it was somewhere around 180%-200%. Man, all those multipliers complicate things too much :smiley:

What’s the point of having two separate multipliers, Pitool and SteamVR? Using more than one headset on the same PC?

BTW many games ignore the recommended render target; in order to test it I usually set SteamVR to 10% from fpsVR - if the image becomes really ugly, then I know that it works.

Not to mention those not ignoring it, but adding their own user-selectable internal value (Elite, Project Cars 2, … )


I do not know; normally I would assume that for any HMD which uses OpenVR, the scale in SteamVR should be sufficient, but I do not know how things work for e.g. Oculus games, which I guess do not use SteamVR/OpenVR.

Well, technically, since it is really just recommended, there is nothing wrong with not using it. What I consider confusing (and what was the point of my post) is when it happens without user intervention and/or knowledge (as in SteamVR Home). Having an in-game option to further modify the supersampling, as for example in Elite, is perfectly fine by me.


I’ve found that using the recommended SS setting in SteamVR yields a good framerate in games. If I increase the amount, the framerate is lower, but the image quality is better. I’m currently setting the PiTool “quality” at 2.0 and SteamVR on automatic (not manual). I’m quite happy with these settings in Elite Dangerous.

What if you set PiTool to 1.0 and Steam on auto - any difference, quality- or performance-wise?


I set PiTool to 1.5 and SteamVR to auto; the quality is improved over 1.0, and my frame rate is lower than 90 but very acceptable in Fallout 4 VR with Smart Smoothing off. It seems that Pimax might be doing some other technique to make lower frame rates tolerable, or it could be, as other people have suggested, that the higher resolution of the HMD makes them more tolerable. I do not know what it is, but it’s definitely better than the Vive or Rift at lower frame rates.


PimaxUSA once said that PiTool “Did More”, so I tried PiTool at 0.5 and 1.25. When I did this I happened to be on the Steam beta, where they were trying out a “simplified” UI that just had a slider and the resolution but did not display the %SS.

So on PiTool .103 I had PiTool at 0.5 and Steam at ??% (400?), and PiTool at 1.25 and 64%, with Parallel Projections on. These settings yielded the exact same resolution in Steam - which also happens to be 1 pixel off from PiTool 1.0 and Steam 100%.
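Those two combinations colliding makes sense arithmetically (taking the poster’s own guess of ~400% for the first one) if PiTool scales each axis while the Steam percentage scales pixel count:

```python
import math

# Per-axis scale = pitool * sqrt(svr_percent / 100); the slider
# interpretation is the assumption stated in the lead-in.
def axis_scale(pitool, svr_percent):
    return pitool * math.sqrt(svr_percent / 100.0)

s1 = axis_scale(0.5, 400)   # 0.5 * 2.0  = 1.0
s2 = axis_scale(1.25, 64)   # 1.25 * 0.8 = 1.0
s3 = axis_scale(1.0, 100)   # the PiTool 1.0 / 100% baseline
# all three land on the same scale, matching the observation
```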

PiTool at 0.5 was noticeably more blurry in the headset than 1.25, despite being the exact same resolution.

Not sure why that would be, unless PiTool is internally up/down sampling the output from SteamVR when the output resolution doesn’t match its requested resolution. Or it could be something specific to having below 1.0 in PiTool.

Though I prefer 1.25 at 64% to 1.0 at 100% in Elite Dangerous. The small text seems very slightly better, but the difference is small, so maybe I’m wrong.

However, 0.5 and 1.25 were definitely different despite Steam rendering at the same resolution. This was on PiTool 103 and on a 5K+.
