I found this comment on Diablo-style loot. You know, Blizzard’s secret sauce.

Because it’s more exciting and you always have the feeling that the next item will be better. As opposed to getting a Longsword +3 and knowing that no matter how many enemies you kill, you will never find a better item because that’s the limit of the system.

Also, finding that one perfect (or near-perfect) sword or armor feels more fun than finding just another sword you’ve seen 10 times already, with the exact same stats, name and appearance.

Makes sense, right?

Then I read this reply:

In these types of games I usually feel the opposite, actually. When I get a decent weapon, I feel that the next 100 or so weapons I find will be crappy vendor trash. And when I actually find one that is better, it’s usually only a slight improvement that doesn’t excite me at all. Maybe this is why I don’t get the appeal of these games. I just don’t feel it.

I’ve always felt that the more discrete weapon systems in normal RPGs make each weapon much more meaningful than the ‘random gear everywhere’ system that loot-based games use.

So I was thinking: is it that games are like art, making us better?

Or is it that games just exploit our fallacies, our weaknesses?

Feels good, man. As long as you don’t give it too much thought.


(Maybe these aren’t different players liking different things, but just different levels of player awareness? Here’s a little insight that probably everyone else forgot: during the World of Warcraft beta, Blizzard changed the armor system. From one patch to the next they made the numbers much bigger, without changing their effectiveness. One dev even explained this in a forum post. I remember it because it always sounded like a sort of “fraud”, and I’ve never accepted how that explanation could be acceptable. The logic was that in the old system players would keep a single piece of equipment for a few levels before finding an actual upgrade. You’d find loot, but it was just about the same as what you had equipped already. By scaling up the numbers they obtained much more granularity in the system, which meant players would find upgrades, albeit smaller ones, a lot more frequently. You’d find a belt with 107 armor and replace it with one with 110. But the hidden truth was that while the old loot numbers were set in a way that was pertinent to the formulas, in the new system those tiny upgrades literally MADE NO DIFFERENCE. They were lost in the formulas due to how the approximations worked. Those upgrades were nothing but misled player perception. Manipulation.

The Blizzard guy who came up with this must have felt like a real trickster.)
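To make the trick concrete, here’s a toy sketch, with a completely made-up mitigation formula and constant (NOT Blizzard’s actual math), showing how a tiny armor “upgrade” can vanish entirely once the result is rounded:

```python
# Toy mitigation formula; the formula and the constant k are invented
# for illustration, not WoW's real math.
def mitigated_damage(raw, armor, k=5000):
    reduction = armor / (armor + k)   # more armor -> more reduction, asymptotically
    return round(raw * (1 - reduction))

# The "upgrade" from 107 to 110 armor changes nothing after rounding:
print(mitigated_damage(100, 107))  # 98
print(mitigated_damage(100, 110))  # 98
```

The belt with 110 armor is strictly “better” on the character sheet, yet the damage you actually take is identical.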

World of Warcraft and its paid game designers

I suppose the quotes speak for themselves. I’m linking what I was writing on forums in 2014 (but also long before that, I just don’t care enough to dig deeper), and Blizzard, in 2016, finally got that kind of trickle-down insight too.


2014 forum discussion.

The faster leveling means that all the quest progression is completely broken. I couldn’t even advance a SINGLE quest line without outleveling it. And if I dared do a dungeon run I’d basically have to skip to an entirely different zone.

Racing through content may be good on paper, but it completely destroys the experience.

Fine, but then don’t say the game loses subs because it’s “old”. It loses subs because it systematically destroyed all the good things it had, without even introducing something new and appealing.

Pre-Cataclysm WoW had an excellent balance between quest progression and leveling. Post-Cataclysm this balance was carelessly destroyed in the name of SPEED, NOW, MORE LEVELS. FAST FOOD.

But if they knew they were going to cut the leveling times so much, then they should have rebalanced the quests accordingly.

Instead it seems the speed-up was an afterthought, and no one cared that they broke the perfectly crafted balance of one of the major features of the game. To me it feels like they handed a perfectly crafted thing to some new guy, and this new guy didn’t even remotely understand why the thing worked so well in the first place.

It’s not up to the player to balance this. If the game even lets you then it means there’s something fundamentally broken.

The point here is that pre-Cataclysm WoW was perfectly balanced, and that, imo, is the real BIG reason why it became hugely successful: WoW’s secret sauce was that the quest flow removed the feel of the grind you’d get in EVERY other MMO in those days. But by speeding up the leveling process so much and disrupting all the quest chains and normal progression, they simply destroyed their main feature. They TURNED the game BACK into a grind, with most players just burning through content without even looking at quest text (or simply doing dungeons and bypassing all that).

WotLK was the last good expansion, and this is not my personal opinion: it’s just what pretty much everyone agrees with. Game design has taken a nosedive (and this IS my opinion), WoW became just an affair of raids, and we know what happens when you cater to hardcore players while leaving everything else behind. WoW’s leveling pace is now lightning fast, and the experience is bland and shallow, because it’s all just at the service of the raiding game.

Blizzard 2016. Paid jobs.

Basically, low-level players now plough through the game, killing everything easily in unsatisfying combat, so they spend comparatively far more time simply running between objectives.

Some of this is down to changes made with the end-game in mind.

“There have been a lot of trickle-down effects from balances changes made to the max-level game. Things that used to be talents we now bake in as passives, we buff abilities, we move things that used to be high-level abilities down to make them available at level 10…”

“We made levelling through the prior expansions a bit faster, and a bit faster, and a bit faster, because we didn’t want levelling to be such a barrier to entry.”

“You shouldn’t be out-levelling zones before you’ve finished their story. You shouldn’t be doing one dungeon and finding that the zone you’re in is no longer relevant to you at all.”

The levelling-up experience through older zones at lower levels is “pretty broken right now. It’s not really very well tuned.” He added, “It’s not even about difficulty; it’s about pacing.”

“But as the Warcraft development team focused on the live game of World of Warcraft, it definitely has shone a light on some deficiencies and areas where the game has been lacking recently, and that’s something we want to do something about.”

Good job? Round of applause?

Building 2D worlds

Nathan Jerpe, the guy who made the astounding Legerdemain roguelike-like (because it’s not randomly generated and there’s no permadeath), sent me ALL the maps that make up the whole game, in native resolution.

I cannot believe my eyes. I’m not a young lad and I’ve seen a lot in gaming, especially ambitious stuff. But this is certainly one of the most impressive attempts at pure worldbuilding I’ve ever seen. It’s magnificent and beautiful (and ASCII can be so pretty when you know how to use it).

For the time being I cannot share a thing, though. He asked me not to share those images because he still wants players to discover the game on their own, and exploration is a major factor of this game. I’ll respect that, of course.

But behind the greatness you can spot in games there’s always the seed that creates the desire for “more”. That’s what fuels my idea for the crazy roguelike I’m experimenting with. So I have this very remote idea of “remixing” the stuff here and blending it with some other concepts. For sure it will be a source of endless inspiration.

One aspect I want to bring up is, again, the idea of the flat, two-dimensional world. I explained how in Dwarf Fortress the evolution to a 3D world with z-levels fundamentally changed the concept and removed that abstraction. What’s important to understand is that the abstraction has its uses and purpose, even when technology would let you have more.

Exactly the same happened with Doom and the games that followed. Doom still has a unique charm today that will never be replaced, and, more importantly, it has nothing to do with “nostalgia”. Of course the gameplay in Doom is much better than in Quake, but this is only indirectly tied to the fact that Doom is 2D while Quake is 3D. Indirectly, because the limits of a 3D world didn’t allow Quake to be as expansive as Doom. The same happened with Doom 3: huge improvements in graphical fidelity didn’t allow the same scale to be maintained. This made Quake a game far inferior to Doom in pure gameplay and action, but so much better in environment exploration (and the reason why both Doom and Quake are extremely relevant today and do not overlap).

But this still leaves the 2D abstraction of Doom as a unique style with its own merits, one that cannot be improved or replaced. Because it’s an abstraction that works great. Doom levels are 2D. This means you can bring up a map and it’s a perfect representation of all there is to see: a 3D world, projected in two dimensions, with no loss. The lack of an actual third dimension means you are UNCHAINED in what you can do with the remaining two. It means removing the complexity of one dimension so you can add that complexity back to the rest. It means compressing reality so you can expand outwardly what you can do. Faster, more easily:


You can reach enormous complexity that would otherwise be unwieldy. It’s a deliberate renunciation, not a matter of building levels in Doom instead of a newer game just out of nostalgia. The point is: no modern game out there can come even close to what Doom does today. Doom 4 will be shamed by this.






Doom, Dwarf Fortress before z-levels, and roguelikes in modern times all share the deliberate choice of removing one dimension (and often graphics entirely) to stick with 2D. Again, not for nostalgia, but because this choice EMPOWERS worldbuilding, pushing it to levels that are unmatched even in AAA commercial products with huge budgets (it’s also interesting to consider that GTA V achieved prettiness by sacrificing quite a bit of complexity compared to IV).

So let’s return to Legerdemain and the like. The game world is visually impressive in a way not unlike those Doom screenshots: elegant complexity that pushes worldbuilding. I have some gaming myths that I carry with me. One is an RPG called “Fate: Gates of Dawn”. It’s one of the most ambitious and complex classic RPGs ever made. The world is HUGE and reportedly takes more than 150 hours to complete. This is its game-world:


It’s an actual gameworld, not an abstracted map. Pixel-accurate, 1:1. The game is built as a first-person dungeon crawler, so you move cell by cell, and every single pixel there represents an actual location. If you moved North once, turned East and moved forward again, you would have moved two pixels on that map. Of course cities and dungeons are separate, but it still means this gameworld is built of 640×400 cells, for a total of 256,000 locations. Essentially half of it is water, but it’s HUGE nonetheless.

Another impressive attempt at worldbuilding is Wizardry 7, another reportedly huge game, which pushed the idea of linking separate maps into an “open world” meant to be explored non-linearly. The wilderness in that game is very big, especially compared to other dungeon crawlers, but we’re dealing with an overall grid close to 200×200 (plus, apparently, another just as big comprising all the dungeons and similar locations). So it’s 40,000 cells overall, and you can see from the map that only a small minority are actually explorable.

The transition to 3D with Wizardry 8 obviously killed the game. They tried not to downsize the map too much, but the game is still extremely ugly and they didn’t do very much with the 3D itself. The point I’m trying to make is the same: deliberately losing one dimension allows complexity to escalate. It’s a renunciation that empowers the worldbuilder to go beyond.

Now Legerdemain. Consider just one set of six dungeons. Each is built on a grid of 189×105, so each is ideally 19,845 cells, and the total for all six is 119,070. That’s ONE dungeon set. The collection has a total of 68 maps, and all locations range from 15,000 to 30,000 cells. Even here, looking at dungeons only, a fraction of the space is actually explorable, but you can still see how this world isn’t just huge, it’s humongous. Unprecedented (and beautifully built, as I’ve already said). It took me a number of hours to explore two of them, and not even completely, since a few doors are locked (and now I can see that one of those doors also opens access to another level bigger than the other two).
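The counts above are easy to verify (the Wizardry grid size is the rough figure quoted, not an exact one):

```python
fate_cells = 640 * 400        # Fate: Gates of Dawn, 1:1 pixel gameworld
wiz7_cells = 200 * 200        # Wizardry 7 overworld, rough figure
leger_map = 189 * 105         # one Legerdemain dungeon map
leger_set = 6 * leger_map     # the whole six-dungeon set

print(fate_cells, wiz7_cells, leger_map, leger_set)
# 256000 40000 19845 119070
```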

I can imagine that Legerdemain’s world might be fairly empty to explore. When you move through the wilderness you move between areas, through forests, hills, mountains, bridges and so on, all beautifully drawn in ASCII or tiles. But cell by cell there’s not much specific to see or find. This is an aspect I’m studying, as a cell in a first-person dungeon crawler isn’t the same as a cell in a top-down roguelike. But why? The answer to that question is what my own game experiment should provide.

Legerdemain achieves at least some of my ideal goal. In the dungeons you sometimes find rooms that contain a “?”. When you step over it, a text message pops up and gives you “flavor text”, for example a more detailed description of the room you’re entering. This creates the meaningful distinction. In both first-person dungeon crawlers and top-down roguelikes you still have a “tileset”: some basic building blocks with which you build the world. So you look at a map and you know those rooms are all virtually alike. A maze. They might contain some objects and monsters, traps, doors, but in the end it’s space that contains a variable mix of objects. In first-person dungeon crawlers the zoomed-in perspective and the high density of encounters lead to smaller worlds that are more packed with stuff. In top-down roguelikes you instead have a more expansive, but emptier, environment that you cut through at a much faster speed. More forgettable? My goal is to find a formula and put classic roleplay flavor back into that top-down perspective. A slower pace where each room is unique, with complex textual descriptions that aren’t used simply to add flavor, but that offer various forms of manipulation. Through text. Doing for rooms what Torment did with dialogue: not just dialogue text, but description and depth of interaction that do far more than an engine made of sprites on a fixed 2D background would allow. It’s again the deliberate renunciation of a dimension, to allow for far more.
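A minimal sketch of what I mean by rooms carrying their own text, roughly in the spirit of Legerdemain’s “?” triggers. Everything here (the names, the structure, the sample text) is invented for illustration, not taken from the actual game:

```python
# Hypothetical room model: a grid cell carries long-form description text
# plus text-driven interactions, instead of being just another maze tile.
class Room:
    def __init__(self, description, interactions=None):
        self.description = description          # flavor text shown on entry
        self.interactions = interactions or {}  # verb -> outcome text

world = {
    (3, 7): Room(
        "A collapsed archive. Shelves lean against each other; one drawer is intact.",
        interactions={"search": "Inside the drawer: a brass key, cold to the touch."},
    ),
}

def step_on(pos):
    """Return the room's flavor text when the player enters the cell, if any."""
    room = world.get(pos)
    return room.description if room else None

print(step_on((3, 7)))  # prints the archive description
```

The point of the design is that the interaction table makes the text manipulable, not just decorative: “search” here does something dialogue-like, the way Torment used text.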

Because in the end I believe we don’t have to simply live in our time. If we want, we can try traveling through time to rediscover and rebuild what was great, to achieve even greater things. We can put aside nostalgia and retrieve what was actually good. Because, again, modernity doesn’t have to make up 100% of what we like, rendering obsolete even what isn’t.

Well, map attempt failed

I started to play a roguelike-like called Legerdemain.

Not only because it seems great but also because I wanted to try to map the whole thing onto a huge, flat plane. I think the partial “collage” I posted before looks amazing and I want so much to have it complete and use it as a source of endless inspiration.

The problem is that the game is restricted to a very small window, and taking screenshots to then assemble into a big map is really complicated and time-consuming. This is just one dungeon level. Actually it’s the TUTORIAL dungeon, and it’s one level of three in total (actually just two, I think; my bad, I started a while ago). There’s not much to find here, but it still takes a whole lot of time to explore fully, there are still a couple of areas locked behind a door, and I’m not entirely sure there’s a way to clear the rubble that walls off certain other areas…


I’m also not playing fair: I’m save-scumming like crazy. I might play a bit recklessly, but I’ve died hundreds of times already and I have no idea how one would be able to play properly, restarting every time from scratch. I know the game actually has a save system, but I haven’t found it yet. Monsters aren’t very strong, but at the beginning combat is very random, so depending on how the rolls go I can take no damage at all or risk death if I push my luck too much. While also needing to keep an eye on consumables like food and torches.

This game does a lot of the things my ideal game would be based on too. The levels might be a little wasteful and the combat bland, but that’s also a good reason why this game exists while mine won’t even get close to plausibility.

I’ll keep playing, but it’s clearly not practical to keep taking screenshots and assembling them. This game defies manual mapping, even if the result would look amazing.

Let’s make a world


I just saw this posted on Twitter. Seen very small, it might look like a weird alphabet for some very strange language, but it’s just a collection of the levels in Lode Runner.

Every one of those hides complexity of gameplay. Sets of rules and patterns to solve. Small worlds of sub-creation. Maps and geographies.

What happens if we link them together? We obtain an “open-world”.
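The linking itself is almost trivial, which is part of the appeal. A toy sketch, with invented placeholder levels (not real Lode Runner data):

```python
# Stitch a row of same-height fixed levels into one continuous 2D map.
level_a = ["####",
           "#..#",
           "####"]
level_b = ["....",
           ".##.",
           "...."]

def stitch_row(*levels):
    """Concatenate same-height levels side by side into one wide map."""
    height = len(levels[0])
    assert all(len(lv) == height for lv in levels)
    return ["".join(lv[y] for lv in levels) for y in range(height)]

big_map = stitch_row(level_a, level_b)
for row in big_map:
    print(row)
# ####....
# #..#.##.
# ####....
```

Each small level keeps its own rules and patterns; the stitched grid is what turns a collection of discrete puzzles into a single continuous geography.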

When Dwarf Fortress was in one of its earlier configurations there were no z-levels; the whole game was played on a single surface. That was a wonderful feature, lost in the quest for complexity. In that earlier version, building a fortress was like creating a painting. Every fortress was its own story at a glance, its own unique style, one picture that captured and contained everything. A four-dimensional world that included TIME (as progress was measured from left to right).

One of the ideas I have for my pie-in-the-sky roguelike is that it will have a “world” that exists on a flat surface, with elements of an open-world (but more Dark Souls than Skyrim, as nothing is dynamic or random).

A space to explore and conquer.


(this last image is from this game I’m currently playing)

Nvidia and the bleaker future of GPUs

I should probably spend time doing more worthwhile things rather than writing this. But it seems no one else does it.

As usual when I deal with this stuff, I will be imprecise and simplify A LOT. But in general what I say is going to be practically correct: the big picture is the one I’m describing, without getting lost in the technical details.

The situation is this: in the last couple of GPU generations, namely the 7xx and the latest 9xx, Nvidia has won the market. They won with hardware that, at the same price level, outputs better performance AND consistently better energy efficiency. So it’s a total win, where Nvidia beats AMD in every way you can measure.

The problem is that this turns out to have been achieved by removing certain scheduling hardware from the chips, a process that started with the 7xx class and continued with the 9xx. Putting it in the most simplistic way possible: there’s less “stuff” on the chip, and because of that the chip requires less power to run. Nvidia found they could improve performance by moving that specific logic out of the hardware and dealing with it in “software” instead, meaning the drivers. Stripping down and simplifying the hardware allowed Nvidia to create these energy-efficient GPUs, while also drastically reducing production costs. That’s how they won.

But this summer the first DirectX 12 benchmarks came out, and they showed not only that AMD performed a lot better than Nvidia, but that in a few cases Nvidia hardware performed WORSE in DX12 than in DX11. It turns out that DX12 implementations rely much more directly on the hardware scheduling that, guess what, is not physically present in recent Nvidia hardware.

What this reveals is important for both DX11 and DX12 games to come, and the likely scenario is that the current 970 and 980 video cards will age VERY quickly and very poorly. The current excellent performance of these GPUs depends critically on Nvidia writing game-specific schedulers in the drivers. That means critical optimization is done directly by Nvidia engineers at the compiler and driver level. Game programmers have NO ACCESS to this level of source code, so they can do nothing besides calling Nvidia and hoping they care enough to allocate engineer hours to fix certain issues. Right now the 970s and 980s show excellent performance because they have full support from those engineers, who write custom schedulers for every big game coming out. These GPUs are crucially dependent on driver optimization because the driver is doing a job that, in AMD’s case, is done at the hardware level; Nvidia stripped that hardware from the new chips, so the job falls to the drivers. That’s also why new games keep coming out that show very poor performance on 7xx chips compared to 9xx ones: Nvidia engineers focus more and more on the newer cards, and less and less effort goes into optimizing and writing drivers for older hardware. The gap widens over time.
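My understanding of the situation, reduced to a toy model. All the numbers and game names here are invented, and the real driver logic is obviously far more complex; the point is only the shape of the dependency:

```python
# Toy model: with software scheduling, a game runs at full speed only if the
# driver team wrote a tuned path for it; hardware scheduling has no such
# per-game dependency. The 30% penalty is an invented illustrative figure.
driver_tuned_games = {"BigAAATitle2015"}  # hypothetical hand-tuned titles

def effective_fps(base_fps, game, scheduling="software"):
    if scheduling == "hardware":
        return base_fps  # scheduling is fixed in silicon, same for every game
    # software scheduling: untuned games fall back to a slower generic path
    return base_fps if game in driver_tuned_games else base_fps * 0.7

print(effective_fps(60, "BigAAATitle2015"))             # 60 -> tuned driver path
print(effective_fps(60, "SomeUntunedGame"))             # slower generic path
print(effective_fps(60, "SomeUntunedGame", "hardware")) # 60 -> no driver dependency
```

The cards look great exactly as long as someone keeps populating that “tuned” set. The moment engineering attention moves to the next generation, every new game lands on the generic path.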

What happens when Nvidia releases new hardware next year, with proper hardware support for the DX12 features? Everything changes. Nvidia engineers will focus their optimization on the newer cards, because Nvidia’s job is to sell you new hardware. And because current GPU performance is so dependent on active driver optimization, more than it ever was, since the schedulers are written in software, once Nvidia engineers stop putting all their work into that optimization, the performance of the current cards will plummet.

So while the 970s and 980s are, by far, the best cards on the market right now, in the coming months and years we’ll see the scenario completely rewritten. Current cards are going to perform very badly, and upgrades will be mandatory if you want to keep up with newer games. There’s going to be a significant step up in hardware requirements, way steeper than what we’ve seen in the last few years.

Yet it’s also not possible to say whether Nvidia has already lost the market battle. Right now AMD hardware is much more future-proof than Nvidia’s, so AMD is better positioned strategically. But next year marks a shift in technology, a new beginning, and it’s probable that Nvidia will put the schedulers back in hardware, with proper DX12 support instead of emulation. It’s a new beginning only for Nvidia and for whoever is ready to buy brand-new hardware, though. For everyone else sticking with Nvidia’s current generation, it will only mean that this hardware is quickly rendered obsolete.


A note on MMORPGs business models

First, remember that MOST people can only see what happened after it happened, whereas some people learn enough to have a vision of what is going to happen. In a similar way, there are games created for an existing market and audience, and games that deliberately create a market that wasn’t there before, one that suddenly becomes canon and that everything else has to conform to from that point onward. A vision can open new paths, and those new paths become the foundation on which everything else is built.

That said, there’s this widespread myth that free to play has to “replace” subscription models, that it’s some unavoidable destination. The discussion gets conditioned by the idea of a new model replacing an obsolete one, instead of dealing with a game’s own merits. As if a game failed or succeeded purely because of its business model.

The truth of free to play versus subscriptions is fairly simple, and slightly different from the debates I usually see. The point is that a subscription model is more directly competitive, and so riskier. It is not a case of “new” versus “old”, or of a model that is now obsolete. The rise of free to play is motivated by the fact that the market is so competitive that no one would survive on a subscription model. Free to play is a way to virtually enlarge the pie. Understood?

The reason is also simple. Players can and will buy different games; they can shift their focus from one to another. A subscription model instead leads to a situation where a player decides on ONE game to play, since it’s very unlikely that a player will maintain multiple subscriptions. The result is that subscription-based games compete much more directly with each other, and only the “King of the Hill” survives and does well under these conditions. Every other title falls short and struggles, which is the very simple reason why World of Warcraft has dominated all these years. Or the reason why Elder Scrolls Online had to move to a subscription-free model: not because subscriptions are a “bad business model”, but merely because the title isn’t strong enough to face the competition a subscription model implies. It’s a two-tiered market where a couple of games can compete at the top and do well with subscriptions, while lesser competitors have to find a way to co-exist with less belligerent business models.

Again, a subscription model is still “ideally” the more appropriate business model for a long-term MMORPG that wants to grow as a virtual world. But for the practical needs of a market, a market where you want to survive, the free to play model offers a way to squeeze more space out of that highly competitive, merciless environment.

Rocksteady engineers trying to do the impossible

Gaming news is a joke these days.


Actual quote:

Warner said the above list is the priority, but it’s still working on the following:

– Skipping the boot up splash screens

Man, their most talented engineers are all hard at work making splash screens skippable, but you have to accept when a task is simply beyond your skills.

Please desist, WB and Rocksteady. We will forgive you if you can’t fix what’s honestly way too complex to accomplish realistically.

I feel we should start a campaign to send them advice, and maybe help them with this cyclopean task. Something like: “try moving the splash screen video files out of their directory, because it just might work.”

Dispelling the myth of DirectX 12

There are lots of articles out there detailing the merits of the new DirectX, but I think they all create expectations for the end user that will never materialize.

The biggest claim is that DX12 is “more efficient”, and so offers free performance. Being compatible with older hardware, the same engine on the same hardware should run better, especially by lowering the load on the CPU side. All of this feeds the myth that DX12 will extend the life cycle of current hardware.

My opinion is that the opposite will happen: DX12 is a way to push people, once again, to buy new video cards and new CPUs. As has always happened.

A couple of days ago Eurogamer published an article about the first somewhat relevant DX12 benchmark:

The most important aspect is that on a fast CPU with an Nvidia card, DX12 is SLOWER than ancient DX11 technology. It’s just one case, and it proves nothing except that this can actually happen: DX12 isn’t a sure improvement. It could just as well push things backward instead of forward. It’s not unambiguously “better”.

Here’s what I wrote about what might have happened (the beginning is an answer to someone claiming Nvidia’s DX12 drivers aren’t optimized yet):

Part 1: at the bottom level, the activity of the driver isn’t very different from what DX11 does. If we are talking about a very basic level in DX12, it means dealing with basic instructions that DX11 has already perfected. So there isn’t something intrinsic in DX12 that makes for a “tricky to develop” driver. The DX12 driver, compared to the DX11 one, is a driver that does less, at an even more basic level. I’d assume that for an engineer it’s much easier to write that driver (and there’s less to work with when it’s time to squeeze out more performance). So the first reason why DX11 might be FASTER is that Nvidia engineers know how to make things faster *in the driver*, whereas the developers who wrote the DX12 code didn’t know as many tricks. Hence DX11 is faster because it ends up running better custom code written by Nvidia.

Part 2: the better multi-threading in DX12 still brings overhead. That’s why Nvidia’s performance regression ONLY HAPPENS on 4+ cores at higher CPU frequencies. If the DX11 renderer can keep up (meaning it doesn’t completely fill one core), then DX11 is FASTER than DX12, because single-threaded code is faster and because it leaves more room on the remaining cores for the rest of the game logic. If instead you hit your CPU cap on the single thread, THEN DX12 should ideally be faster, because you can spread the load better across other cores.

The reason the Final Fantasy 14 benchmark runs faster on DX9 than DX11 is somewhat similar. You can have fast single-threaded code, or slower multi-threaded code. If you add up the load of the multi-threaded code, it ends up cumulatively higher (so slower) than the single-threaded code. The same happens with 64-bit versus 32-bit: 64-bit is marginally slower, but it lets you tap into more resources.
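The trade-off can be shown with back-of-envelope arithmetic. All the numbers here are invented; the point is only the shape of the curve: threading adds coordination overhead, so it loses on light workloads and wins on heavy ones.

```python
# Toy model: each extra thread adds coordination overhead, so total CPU work
# grows with thread count even when wall-clock time shrinks.
def wall_time(work_ms, threads, overhead_ms=4.0):
    total_work = work_ms + overhead_ms * (threads - 1) * threads
    return total_work / threads

light = 10.0  # render work that easily fits in one core
heavy = 80.0  # render work that saturates a single core

print(wall_time(light, 1), wall_time(light, 4))  # 10.0 vs 14.5 -> threading LOSES
print(wall_time(heavy, 1), wall_time(heavy, 4))  # 80.0 vs 32.0 -> threading wins
```

In the “light” case the overhead of spreading the work swamps the gain, which is the DX11-beats-DX12-on-a-fast-CPU scenario; in the “heavy” case the single core is the bottleneck and splitting pays off.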

Those are aspects that might explain why DX11 ends up actually faster than DX12. But the myth is that the ideal, better performance of an engine will become better performance for the end user too. I think that’s false, because it comes from a false perception of how game development works.

I’ll try to explain again why DX12 expectations may be overblown, as always happens when you focus on the technical aspects and not on the practical ones.

Optimizing a game is a never-ending process that takes development time. Development time = money.

For a game company the first priority is to do things QUICKLY, because doing things fast turns into money saved. That’s why the Batman port tanked: they didn’t want to allocate enough time to it. They wanted it done FAST, because PC isn’t worth long development times.

Time spent on optimization and actual game performance for the end user sit on the same axis. That means that in a lot of cases the hypothetical speed of DX12 WILL NOT be translated into faster FPS for end users, but into shorter optimization phases for the developer.

So: DX12 = (eventually) the same performance as DX11 with shorter development time, at a lower cost for the developer.

That’s how it works. The speed of an engine isn’t due solely to technology, but also to the time spent on it. In practice, TIME is a more important variable for the developer than performance is for the end user.

That means, again, that in practice DX12 will end up producing just about the same performance you see now in DX11. Every improvement in tech in the HISTORY OF THE PC has been eaten up very quickly by rising requirements. Always, without exception. The moment you give developers some gains, they absorb them on their side by cutting down the time.

That’s not even the whole picture. As everyone knows, video drivers are increasingly complex and optimized only for the newest cards; see Witcher 3 performing badly on 7xx cards. That means that even if DX12 theoretically brings benefits to ALL cards, as time passes the engineers writing drivers will only have the time (and motivation) to optimize them well for newer hardware. To say nothing of the developers writing engines, who will never spend weeks and months on specific optimizations for older hardware.

That means that whatever gains DX12 brings will be used to push new hardware, not to make your current hardware live longer. It will mean less engineering effort to develop new cards while showing bigger performance gaps. Smoke & mirrors.

This is how things work in practice, since the world isn’t simply run by theoretical technology. What you expect from DX12 just WON’T HAPPEN. DX12 performance improvements are oversold, as has ALWAYS happened and will continue to happen with new technology.