5K+ shaving ~40 FPS :( optimization tips please

I am losing ~40 FPS after switching to my 5K+ with X-Plane 11. Is there any way to reduce this number? I have a 1080 Ti OC edition oc’ed @2025, an 8086K oc’ed @5.2GHz, and very fast M.2 SSD drives. I noticed thread 0 being maxed out all the time, although the GPU is <30% utilized.

I minimized the in-game settings to the bare minimum and there’s still a ~40 FPS hit from the headset. Is there anything I can do in Pimax’s PiTool and the SteamVR settings to get optimal FPS performance?


I am not familiar with X-Plane, but if your GPU utilization is <30%, it means the FPS is not GPU limited; in other words, you are CPU limited. You suggest that you had 40 FPS more before switching to the 5K+. When was that? (Or which headset did you use before?)


Hi Risa2000, thanks for the response. I was referring to switching from the monitor to my 5K+ headset. I know the CPU is my bottleneck, but we’re talking about one of the fastest single-thread-performance CPUs out there, at 5.2GHz. X-Plane uses a single thread for a ton of flight dynamics calculations, amongst other CPU-intensive work. The whole point is: why does FPS drop dramatically in VR, which should be mostly GPU intensive? Is it because X-Plane is not stressing the GPU enough due to its CPU bottleneck?
More importantly, is there anything I can do to get the last bit of optimization from the PiTool?

Try to go back to stock settings on the CPU/mem and retest. It might be throttling.

1 Like

If you have Smart Smoothing enabled in PiTool, try disabling it. SS uses the CPU to reduce the load on the GPU; since you are CPU-limited, that makes no sense in this particular case.


If X-Plane is primarily using one core, you could try setting CPU affinity so that the game and the Pimax services run on different cores.
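A rough sketch of what that affinity split amounts to under the hood: Windows represents affinity as a bitmask with one bit per logical CPU. The process names and the particular core split below are made up for illustration; in practice you would apply the mask with Process Lasso, Task Manager, or `start /affinity`.

```python
# Sketch: building CPU affinity masks so the game and the Pimax
# services land on different cores. Core assignments are assumptions.

def affinity_mask(cores):
    """Return the bitmask Windows expects, one bit per logical CPU."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return mask

# e.g. give X-Plane cores 0-1 and push the Pimax services to cores 2-3
game_mask = affinity_mask([0, 1])   # binary 0011 -> 3
pimax_mask = affinity_mask([2, 3])  # binary 1100 -> 12

# From a command prompt you could then launch with something like:
#   start /affinity 3 X-Plane.exe
# (the /affinity argument is the mask in hexadecimal)
print(hex(game_mask), hex(pimax_mask))
```

Note that on a CPU with Hyper-Threading, two logical CPUs share one physical core, so putting the game on bit 0 and a background service on bit 1 may still leave them fighting over the same core.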



This FPS hit seems to be normal with X-Plane. I fly XP11 in VR using the Pimax 5K+ or the Valve Index, and the FPS hit is about the same. It was similar when I tried the HP Reverb. PiTool can do very little about this. Even with very low FPS, there is remarkably little stuttering with the 5K+ compared to the other headsets.
If you read the X-Plane.org forum, you will find that all VR users there experience this. Maybe the upcoming switch from OpenGL to Vulkan will bring some improvements in X-Plane, but I would not get my hopes up too high when it comes to VR.
X-Plane is just a very demanding program.


Thanks for the feedback, I will try both disabling SS and affinity.
I can’t wait until the Vulkan switch happens. I’m hoping it will have an impact on Nvidia GPUs as well, although XP said it will be much smaller than that for AMD GPUs.

1 Like

I hate to see all these games sticking to a single core… it used to upset me, but it’s moving to pissed… it’s been, what, 10 years of consumer multi-core CPUs, and there’s still very little support actually out there.
Sorry - derailing.


I can’t agree more, MReis. At least for flight simulation, Microsoft’s new sim (MSFS 2020) should be much more efficient. Can’t wait until it is released next year.


From the other replies here, I have the feeling that the only ones to blame are the X-Plane developers. Plus, it seems they also run something related to rendering on the CPU, and I would even question the engine’s capability for stereo rendering. Considering the performance hit, they may as well be rendering each scene twice with the full scene-setup overhead.

And you are right, multi-core CPUs have been around for 10 years; there is no excuse today for not supporting them. And before @neal_white_iii comes in saying that parallelization is hard (which it is), I would say that when someone chooses to write an airplane simulator, he has already chosen the hard way.


Unfortunately, it can be quite difficult to update a single-core program to use multiple cores. I’m a programmer and have actually done this; it took us over 5 years to fully complete the task for the program suite I work on. In some cases, a near-total rewrite may be necessary.

That’s not an excuse; it’s an explanation as to why it can take so long.

You know me too well! :laughing: I should probably read to the end of a thread, before posting a reply.

Yes, the developers have chosen a difficult task, which is why significant changes can completely disrupt a code base.


I know - I am an M.C.Sc. myself, and even back at university in the 2000s we were already covering those techniques and the theory to apply them. Sure, starting out right is different than transforming an existing code base, but many games have milked the cow so often that they should have thought about this long ago…

  1. I use Process Lasso.

  2. Use fpsVR and pay attention to frame timing, not just FPS.

  3. Download a whole bunch of different PiTool versions and find the one that works fastest for you.

  4. Turn Smart Smoothing off while measuring FPS.

  5. Isolate the game to a clean core (not just a thread) and keep all other processes off it.

  6. In Process Lasso, set the game’s I/O priority to high, set its CPU priority to very high, and classify the game process as a game.

  7. Disable virus protection. Those programs turn processes into a DMV line.

  8. Unpark cores.

  9. Shadows and post-processing do the most damage to time-to-render per frame.

  10. Note that for whatever reason, despite a 90 Hz refresh rate, 11 ms per frame is not enough; you need to get down to just above 7 ms. Not sure if this is because of what PiTool is doing as an intermediary.

  11. Disable all PiTool enhancements: IPD offset, vertical offset, color contrast, etc.

  12. Download Nvidia Profile Inspector. People say the opposite ALL.THE.TIME, but in my experience, non-vanilla Unity or Unreal games (and other engines) vastly even out render spikes when you turn up the max frames rendered ahead by the CPU. You are usually locked to 4 frames in the Nvidia control panel, but Profile Inspector will let you go up to 8. For example, this wildly improves frame evenness in Fallout 4 and Skyrim.

  13. Set your supersampling expectations way low until you get your FPS headroom. It’s a real bummer to see the game absurdly sharp but super laggy; better to start at the low end and work up to a nice setting.

  14. Research your game and whether there are potato-mode hacks, ini settings, or potato-mode mods. Use those, then slowly apply post-processing on top if you can.
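The frame-budget arithmetic behind tip 10 is worth spelling out: at 90 Hz the raw budget is 1000/90 ≈ 11.1 ms per frame, yet the poster observes that frame times need to be near 7 ms before things run smoothly. The ~7 ms figure is the poster’s observation, not an official PiTool number; the sketch just shows what that implies about pipeline overhead.

```python
# Frame-time budget at a given refresh rate, and how much of it the
# compositor/PiTool pipeline appears to consume. The 7 ms headroom
# figure comes from the tip above, not from any official source.

def frame_budget_ms(refresh_hz):
    """Raw time available to produce one frame, in milliseconds."""
    return 1000.0 / refresh_hz

budget = frame_budget_ms(90)           # ~11.1 ms at 90 Hz
observed_target = 7.0                  # frame time that reportedly works

# Whatever the compositor takes, the app's render time must fit in
# the remainder or frames start dropping / reprojecting.
apparent_overhead = budget - observed_target   # ~4.1 ms lost in the pipeline
print(round(budget, 1), round(apparent_overhead, 1))
```

This is also why fpsVR’s frame-time graph (tip 2) is more useful than an FPS counter: a 90 FPS average can hide individual frames that blow past the budget.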

1 Like

Thanks for the tips aesoptabled;
1- I have Lasso as well, and I tried to move processes away from thread 0 and leave it to X-Plane only and maybe the Pimax software, but no noticeable improvement there.
2- makes sense, I noticed my CPU time is the worst :frowning:
3- the latest worked best
4- I will try this; I think it will help some, as I’ve never been above 30 FPS to date, defeating the need for SS, so it just becomes another overhead.
5- I tried this (process lasso),
6- I will try this and report back.
7- X-Plane folder containing all processes is in the Defender’s exclusion list.
8- How do I “un-park” cores? is it the same as what I did in step 1 above?
9- I will try this. I have a list/map of which settings are CPU bound vs GPU bound, and I guess I have to wind my CPU-bound settings further down, although lighting and shadows are GPU bound.
10- I really hope future PiTool optimizations bring fps ms time further down below 11ms.
11- I will try this as well.
12- I will try it, but my problem is CPU bound and this sounds like putting more stress on the CPU vs the other way around. My GPU is almost idle waiting for the CPU, so I am looking for passing processing more to the GPU.
13- I did, I just need to tweak them more and do more research I guess.

Thanks again for a very informative and comprehensive list. Much appreciated.



For all that Pimax can surely do and optimize - imho a lot of companies just need to get their engines up to date. You can’t push a cow to run 200 mph.

If you take a look at games like Serious Sam or Strange Brigade (neither really my style), they have updated their engines and hit awesome frame rates - also in SLI/CrossFire.

All the others keep saying it’s not worth it - sure, it’s cheaper for them not to update - but it’s not that the technology is at fault or not worth it.

Sorry - it just always gets me going that so often they don’t use all the power we have today: multi-core, specialized hardware features, and so on. It’s the same with RTX, imho.
I can understand that an indie game can’t do all that - OK - but the big players? That is just a shame.

1 Like

X-Plane is the main issue here. It seems obvious that a more powerful graphics card would be able to generate frames quicker; if my GPU is idling most of the time, it should have plenty of capacity to generate twice the number of frames. But this is only a very small part of the story.

In reality, the GPU still needs to be told by the CPU what to draw and where. On every drawing loop, the CPU needs to tell the graphics engine where every object in the scene needs to be drawn. This is why you get such a hit when going to VR: we’re effectively attempting to draw each scene twice, and we’re having to wait on the CPU to do it, even though the GPU could probably handle it fine. Also, VR was added to the existing framework rather than the framework being designed for VR - there are better ways of adding stereoscopic rendering to the render pipeline, but those certainly wouldn’t be possible with the engine X-Plane has.
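That CPU-side doubling can be sketched with a toy counter. The object count and the split between scene setup and draw submission are made-up numbers; the point is only that re-running full scene setup per eye doubles CPU work, while an engine with a shared-setup stereo path pays the setup cost once:

```python
# Toy model of CPU-side work per frame. Not measured from X-Plane;
# counts are illustrative only.

def render_frame(objects, eyes, shared_setup=False):
    """Count CPU-side operations: scene setup passes + draw submissions."""
    ops = 0
    setup_passes = 1 if shared_setup else len(eyes)
    for _ in range(setup_passes):
        ops += len(objects)   # traversal, culling, state setup per pass
    for _ in eyes:
        ops += len(objects)   # per-eye draw call submission
    return ops

scene = list(range(5000))                        # pretend 5000 draw calls/view
mono = render_frame(scene, eyes=["center"])      # monitor: one setup, one view
naive_vr = render_frame(scene, eyes=["L", "R"])  # setup repeated per eye
smart_vr = render_frame(scene, eyes=["L", "R"], shared_setup=True)

print(mono, naive_vr, smart_vr)   # naive VR costs 2x mono on the CPU
```

If the GPU is mostly idle, the `naive_vr` path is where the frame time goes, which matches the thread-0-maxed-out observation at the top of the thread.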

This is compounded by the fact that X-Plane’s game engine does almost everything in the same single thread.

Part of the problem is that the graphics engine is based on OpenGL, whose calls are then converted into something the graphics card can actually process.

The good news is that X-Plane is moving to the much more modern Vulkan/Metal (for Mac) graphics APIs, which are a lot more efficient - the same scene can be rendered with about 50% of the function calls compared with OpenGL. (https://developer.x-plane.com/2019/12/vulkan-and-metal-testing-and-bug-fighting/ and https://developer.x-plane.com/2019/12/the-vulkan-metal-public-beta-will-be-in-2020/)
The second part of the good news is that the move to the new engine also opens up the possibility of moving some other parts of the X-Plane engine to separate threads, which would also help.

One downside for Nvidia owners is that, from the testing they did earlier in the year, the increase in performance just from the switch to Vulkan was about 5-10%. For AMD owners it was more like 40% (from memory). This suggests that the Nvidia drivers were actually really well optimized for processing OpenGL requests, and the AMD ones were not.
Still, an increase of 5-10% is good, and that’s without moving any of the other processes to other threads; maybe they have been able to optimize further since then. We should know soon, because it looks like a public beta is coming some time soon(ish).

As for the future, we shall see what MSFS 2020 has in store. Although it doesn’t look like they will support VR at launch (though they keep surprising us with new announcements), it is practically a new engine (very loosely based on FSX, but it sounds like large chunks have been rewritten), and it’s coming from Asobo, a studio with a history of working on AR/VR. Originally VR wasn’t really on their radar, but after they invited a number of people from the YouTube/flight-sim community to a preview event back in September, it moved up their priority list because most people were requesting it.

2020 (the year) is going to be exciting for taking to the skies…


Thanks for the insight, GramboStorm. Recently I got very frustrated when I reached 9 FPS on an add-on airport with weather (ASN); understanding that both shave off some FPS, I tried limiting the settings for both and got a little bump to 15 FPS. Today I tried my old Samsung Odyssey+ with all other variables the same, and my FPS went up from 15 to 28. Is WMR that much more optimized than PiTool, although both still use SteamVR?

We are drifting a bit between facts and good guesses…
Well, WMR is nowhere near the FOV and resolution of the 5K+ or 8K. There is a lot less to draw, and less to consider when preparing to draw - hard to make a simple FPS comparison.
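To put rough numbers on that, here is a back-of-the-envelope pixel comparison. The per-eye panel resolutions below are from memory and may be off, and the actual SteamVR render targets are larger still because of distortion correction and supersampling - so treat this as a lower bound on the difference:

```python
# Approximate per-eye panel resolutions (from memory, unverified):
#   Samsung Odyssey+ : 1440 x 1600 per eye
#   Pimax 5K+        : 2560 x 1440 per eye

def pixels_per_frame(width, height, eyes=2):
    """Total panel pixels that must be filled each frame."""
    return width * height * eyes

odyssey = pixels_per_frame(1440, 1600)
pimax5k = pixels_per_frame(2560, 1440)

print(pimax5k / odyssey)   # roughly 1.6x more panel pixels per frame
```

And that is before accounting for the wider FOV, which also pulls more scenery objects into the frustum for the CPU to set up.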

Normally the program itself runs on the CPU, doing its calculations for all kinds of stuff - simply put, figuring out what is where and when. That can be done on multiple threads and CPU cores, but rarely is. So it’s down to the old single-core era again, and only GHz counts to make it faster.
Actually, the scene only needs one calculation of where everything is, and then it must be drawn from different views, one for each eye. That is the performance hit we should have to take for VR.
Beyond that point I would be guessing, because I have little knowledge about compositors or that part of the pipeline. Nvidia has some info on that on their VR pages.