Recently I ventured into games development and bought a PC laptop with a dedicated graphics card (GPU) – an Nvidia GTX 1660 Ti. What a headache it is to understand what is really going on under the hood!
Out of the box, if you leave the settings up to the manufacturer, apps that use all of the CPU cores push the CPU to 100°C within seconds or minutes, and the machine’s performance throttles back noticeably. The GPU – which produces its own heat and also sits next to the flaming CPU – throttles back or crashes too, which often ends in tears with a ‘Blue Screen Of Death’ (BSOD), an unhelpful ‘Critical Process Died’ message, and a hung machine… OSX this is not!
The problem is that these ‘gaming’ laptops are designed to do two things which are incompatible with each other: ‘gaming’ and ‘office’. Single-core, single-threaded office stuff such as Excel allows the CPU to run at its maximum ‘overclocked’ speed (a claimed ‘official’ 4.1 GHz) rather than at the standard speed, 2.2 GHz in my case. However, in my tests the CPU maxes out at 3.5 GHz due to lack of power (Power Limit Throttling). No problem – nobody needs Excel spreadsheets to run at the speed of light. Note that the GPU’s heat output is not even a factor here: it isn’t being used in such a scenario, so its power demand and heat output are negligible.
However, when multiple cores of the CPU are running, as happens in games, the CPU produces far more heat. Combined with the GPU, which is now also producing a fair amount of heat itself – about 75–80°C when gaming (and only about 40°C when idle) – thermal and power-limit throttling begin to slow the computer down considerably and even crash it.
Nowhere in any official literature are these ‘truths’ mentioned. Companies think that by overclocking chips they can give impressive stats to would-be purchasers – stats which are not real-world usable or feasible, just more marketing BS!
If you have a desktop rig with adequate cooling and an adequate power supply then you can overclock CPUs and GPUs and achieve incredible performance – but that is a different kettle of fish altogether.
You would think that setting a maximum CPU clock speed of 3.5 GHz would make gaming better (i.e. a faster frame rate and higher-quality rendering) than running at the standard 2.2 GHz. The opposite is true. I tested this effect many times in No Man’s Sky, and I get a far better frame rate, higher-quality rendering, no stuttering and no crashes when the CPU is capped at its standard lower clock speed of 2.2 GHz by disabling Turbo Boost in ThrottleStop!
So what’s going on, given that most of the processing work is being done on the GPU and not the CPU? The GPU is being thermally throttled by the CPU’s heat output and power-limit throttled by the CPU’s demand for more watts. On top of that, the fans have to spin up and run fast to keep the unit cool (more wattage lost to the fans, and a lot more noise!). Two things seem to be at play:
1. Maybe the CPU, at high clock speeds, is simply too fast to communicate efficiently with the GPU, which itself runs at around 1.7 to 2.0 GHz. The lower CPU clock speed of 2.2 GHz intuitively seems more in step with the GPU.
2. With the CPU capped at the standard 2.2 GHz (Turbo Boost set to Disabled in ThrottleStop), thermal and power-limit throttling are no longer an issue. The CPU now runs at least 20°C cooler and the fans are either off or only lightly spinning. There is a lot more thermal and power headroom for the GPU to max itself out and perform as it should: unrestricted.
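If you would rather not install ThrottleStop, roughly the same cap can be applied with Windows’ built-in power plan settings: setting the maximum processor state below 100% stops the CPU from entering Turbo Boost at all. A sketch, run from an elevated Command Prompt – the aliases `SCHEME_CURRENT`, `SUB_PROCESSOR` and `PROCTHROTTLEMAX` are built into `powercfg`; the 99% figure is the usual rule of thumb for disabling Turbo on Intel chips:

```shell
REM Cap the maximum processor state at 99% on AC power,
REM which keeps an Intel CPU at its base clock (no Turbo Boost)
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 99

REM Apply the change to the active power plan
powercfg /setactive SCHEME_CURRENT
```

Setting it back to 100 restores Turbo Boost, so it is easy to flip between ‘office’ and ‘gaming’ behaviour. Note this is a blunter tool than ThrottleStop, which also lets you undervolt.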
(Tested kit: Alienware m15, Intel i7-8750H CPU, 16 GB RAM, Nvidia GTX 1660 Ti GPU, Samsung 960 EVO NVMe M.2 drive.)