Dispelling the myth of DirectX 12

There are lots of articles out there detailing the merits of the new DirectX, but I think they all raise expectations for the end user that will never materialize.

The biggest claim is that DX12 is “more efficient”, and so offers free performance. Since it’s compatible with older hardware, the same engine on the same hardware should run better, especially by lowering the load on the CPU side. All this leads up to the myth that DX12 will extend the life cycle of current hardware.

My opinion is that the opposite will happen: DX12 is a way to push people, once again, to buy new video cards and new CPUs. As has always happened.

A couple of days ago Eurogamer published an article about the first somewhat relevant DX12 benchmark:
http://www.eurogamer.net/articles/digitalfoundry-2015-ashes-of-the-singularity-dx12-benchmark-tested

The most important aspect is that on a fast CPU with an Nvidia card, DX12 is SLOWER than the ancient DX11 technology. It’s just one case, and it proves nothing on its own beyond showing that this can actually happen: DX12 isn’t a guaranteed improvement. It could just as well push things backward instead of forward. It’s not unambiguously “better”.

Here’s what I wrote about what might have happened (the beginning is a reply to someone claiming Nvidia’s DX12 drivers aren’t optimized yet):

Part 1: at the bottom level, the driver’s activity isn’t very different from what DX11 does. Talking to DX12 at a very basic level means dealing with basic instructions that DX11 drivers already perfected. So there isn’t anything intrinsic in DX12 that makes for a “tricky to develop” driver. Compared to the DX11 driver, the DX12 driver does less, at an even more basic level. I’d assume it’s much easier for an engineer to write (and there’s less to work with when it’s time to squeeze out more performance). So the first reason why DX11 might be FASTER is that Nvidia’s engineers know how to make something faster *in the driver*, whereas the people who wrote the DX12 code didn’t know as many tricks. Hence DX11 is faster because it ends up running better custom code, written by Nvidia.
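
To make the “driver does less” point concrete, here’s a minimal conceptual sketch in C++. Everything in it is made up for illustration (a toy `Texture` type and hypothetical `barrier`/`draw` stand-ins, nothing resembling the real Direct3D headers); the only thing it tries to capture is where hazard tracking lives in the two models:

```cpp
// Conceptual sketch only: toy types and made-up function names, not the
// real Direct3D API. The contrast is where hazard tracking lives.
#include <cstdio>

struct Texture { bool writable = true; };

// DX11-style model: the driver resolves hazards on every call. This is
// the layer where vendor engineers hide years of per-game tricks.
namespace dx11_style {
void draw(Texture& t) {
    if (t.writable) t.writable = false;  // the driver flips the state itself
    std::printf("draw (driver handled the transition)\n");
}
}  // namespace dx11_style

// DX12-style model: the driver just records commands; the application
// must request the transition explicitly, or the behavior is undefined.
namespace dx12_style {
void barrier(Texture& t) { t.writable = false; }  // now the app's job
void draw(const Texture&) { std::printf("draw (app handled the transition)\n"); }
}  // namespace dx12_style

int main() {
    Texture a, b;
    dx11_style::draw(a);     // one call, the driver does the bookkeeping
    dx12_style::barrier(b);  // forget this and you get corruption, not a driver fix
    dx12_style::draw(b);
}
```

The point isn’t the code, it’s the division of labor: in the second model there’s simply less driver left for Nvidia to speed up, and more application code whose quality depends entirely on the game developer.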

Part 2: better multi-threading in DX12 still brings overhead. That’s why Nvidia’s backwards performance ONLY HAPPENS with 4+ cores and higher CPU frequencies. If the DX11 render thread can keep up (meaning it doesn’t completely fill one core), then DX11 is FASTER than DX12, because single-threaded code is faster and because it leaves even more room on the remaining cores for the rest of the game logic. If instead you hit the CPU cap on that single thread, THEN DX12 should ideally be faster, because you can spread the load better across the other cores.
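
Here’s a minimal toy benchmark of that trade-off, in plain C++ with standard threads. Nothing in it is DX-specific, and `submit_draw` is a made-up stand-in for per-draw-call CPU work; it just shows that splitting a fixed workload across threads pays for thread startup, joining, and contention on shared state:

```cpp
// Toy illustration (not D3D code): splitting a fixed amount of "command
// submission" work across threads adds spawn/join and synchronization
// overhead. If one core can already keep up, the single-threaded path wins.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-in for the CPU cost of submitting one draw call.
static void submit_draw(std::atomic<long>& sink) {
    sink.fetch_add(1, std::memory_order_relaxed);  // contended shared state
}

static double run(int threads, long draws) {
    std::atomic<long> sink{0};
    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (int i = 0; i < threads; ++i) {
        pool.emplace_back([&, i] {
            for (long d = i; d < draws; d += threads) submit_draw(sink);
        });
    }
    for (auto& t : pool) t.join();  // join overhead is part of the bill
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    const long draws = 200'000;
    std::printf("1 thread : %.2f ms\n", run(1, draws));
    std::printf("4 threads: %.2f ms\n", run(4, draws));
}
```

On a typical machine the four-thread run is no faster, and often slower, because the coordination overhead eats the parallel gain; spreading the work out only starts to pay off once the per-item work is heavy enough to saturate a core.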

The reason why the Final Fantasy 14 benchmark runs faster on DX9 than DX11 is somewhat similar. You can have fast single-threaded code, or slower multi-threaded code. If you add up the load of the multi-threaded code, the cumulative total ends up higher (so slower) than the single-threaded code: say a render thread needs 10 ms of CPU per frame; split across four threads it might finish sooner in wall-clock time, but burn 14 ms of total CPU once you pay for the coordination. The same happens with 64 bits vs 32 bits. 64-bit is marginally slower, but it allows you to tap into more resources.


Those are aspects that might explain why DX11 ends up being actually faster than DX12. But the myth is that the ideal, better performance of an engine will become better performance for the end user too. I think that’s false, and it comes from a false perception of how game development works.

I’ll try to explain again why DX12 expectations may be overblown, as always happens when you focus on the technical aspects instead of the practical ones.

Optimizing a game is a never-ending process that takes development time. Development time = money.

For a game company the first priority is to do things QUICKLY, because doing things fast turns into money saved. That’s why the Batman game tanked: they didn’t want to allocate enough time to it. They wanted it done FAST, because the PC isn’t worth long development times.

Time spent on optimization and the actual performance the end user gets sit on the same axis. That means that in a lot of cases the hypothetical speed of DX12 WILL NOT be translated into faster FPS for end users, but into shorter optimization phases for the developer.

So, DX12 = the same performance as DX11, delivered (eventually) in a shorter development time, at a lower cost for the developer.

That’s how it works. The speed of an engine isn’t due solely to technology, but also to the time spent on it. In practice, TIME is a more important variable for the developer than performance is for the end user.

That means, again, that in practice DX12 will end up producing just about the same performance you see now in DX11. Every improvement in tech, in the HISTORY OF THE PC, has always been eaten very quickly by rising requirements. Always, and without exception. The moment you give developers some gains, they absorb them on their side by cutting down the time.

That’s not even the whole picture. As everyone knows, video drivers are increasingly complex and optimized only for the newest cards. See Witcher 3 performing badly on 7xx cards. That means that even if DX12 theoretically brings benefits to ALL cards, as time passes the engineers writing drivers will only have the time (and the motivation) to optimize them well for newer hardware. And that’s without even considering the developers who write engines, who will never waste weeks and months writing specific optimizations for older hardware.

That means that whatever gains DX12 might bring will be used to push new hardware, not to make your current hardware live longer. For the vendors it means less engineering effort to develop new cards while showing bigger performance gaps. Smoke & mirrors.

This is how things work in practice, since the world isn’t simply run on theoretical technology. What you expect from DX12 just WON’T HAPPEN. DX12 performance improvements are oversold, as they ALWAYS have been with new technology, and always will be.
