A note on MMORPG business models

First, remember that MOST people can only see what happened after it happened, whereas other people learn enough to have a vision of what is going to happen. In a similar way, there are games created for an existing market and audience, and games that deliberately create a market that wasn’t there before, one that suddenly becomes canon and that everything else has to conform to from that point onward. A vision can open new paths, and those new paths become the foundation on which everything else is built.

That said, there’s this widespread myth that free to play has to “replace” subscription models, that it is some unavoidable destination. The discussion is conditioned by the idea of a new model replacing an obsolete one, instead of judging each game on its own merits, as if a game could fail or succeed purely because of its business model.

The truth of free to play versus subscription models is fairly simple, and slightly different from the debates I usually see. The point is that a subscription model is more directly competitive, and so riskier. But it is not a case of “new” versus “old”, or of a model that is now obsolete. The rise of free to play is driven by the fact that the market is so competitive that almost no one would survive on a subscription model. Free to play is a way to virtually enlarge the pie. Understood?

The reason is also simple. Players out there can and will buy different games, and they can shift their focus from one to the other. A subscription model, instead, leads to a situation where a player has to decide on a single game to play. It’s very unlikely that a player will maintain multiple subscriptions. The result is that subscription-based games compete with each other much more directly, and only the “King of the Hill” will survive and do well under these conditions. Every other title will fall short and struggle, which is the very simple reason why World of Warcraft dominated all these years. It’s also the reason why Elder Scrolls Online had to move to a subscription-free model: not because subscriptions are a “bad business model”, but merely because the title isn’t strong enough to face the competition under a subscription model. It’s like a two-tiered market where a couple of games can compete at the top and do well with a subscription, while lesser competitors have to find a way to co-exist through less belligerent business models.

Again, subscription models are still “ideally” the more appropriate business model for a long-term MMORPG that wants to grow as a virtual world. But for the practical needs of the market, and of a market where you want to survive, the free to play model offers a way to squeeze more space out of that highly competitive, merciless market.

Rocksteady engineers trying to do the impossible

Gaming news these days is a joke.

http://www.eurogamer.net/articles/2015-08-21-batman-arkham-knights-interim-patch-due-in-the-next-few-weeks

Actual quote:

Warner said the above list is the priority, but it’s still working on the following:

– Skipping the boot up splash screens

Man, their most talented engineers are all hard at work making splash screens skippable, but sometimes you have to accept that a task is simply above your skills.

Please desist, WB and Rocksteady: we will forgive you if you can’t fix what is honestly way too complex to realistically accomplish.

I feel we should start a campaign to send them advice and maybe help them with this cyclopean task. Something like: “try moving the splash screen video files out of their directory, because it just might work.”
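In the spirit of that campaign, here’s a tongue-in-cheek sketch of the advice in Python. The folder path and the .bik extension are guesses on my part, so check where your copy actually keeps its intro videos:

```python
# Tongue-in-cheek sketch: park the splash videos somewhere the game won't find them.
# Both the install path and the .bik extension below are hypothetical guesses.
from pathlib import Path
import shutil

movies_dir = Path(r"C:\Games\Arkham Knight\Movies")  # hypothetical location
parking_lot = movies_dir / "disabled"
parking_lot.mkdir(exist_ok=True)

for video in movies_dir.glob("*.bik"):  # assuming Bink video files
    shutil.move(str(video), str(parking_lot / video.name))
    print(f"Moved {video.name} out of the way")
```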

Dispelling the myth of DirectX 12

There are lots of articles out there detailing the merits of the new DirectX, but I think they all raise expectations for the end user that will never materialize.

The biggest claim is that DX12 is “more efficient”, and so offers free performance. Since it’s compatible with older hardware, the same engine on the same hardware should run better, especially by lowering the load on the CPU side. All of this leads up to the myth that DX12 will extend the life cycle of current hardware.

My opinion is that the opposite will happen: DX12 is a way to push people, once again, to buy new video cards and new CPUs. As has always happened.

A couple of days ago Eurogamer published an article about the first somewhat relevant DX12 benchmark:
http://www.eurogamer.net/articles/digitalfoundry-2015-ashes-of-the-singularity-dx12-benchmark-tested

The most important aspect is that on a fast CPU with an Nvidia card, DX12 is SLOWER than ancient DX11 technology. It’s just one case, and it means nothing beyond showing that it can actually happen: DX12 isn’t a sure improvement. It could just as well push things backward instead of forward. It’s not unambiguously “better”.

Here’s what I wrote about what might have happened (the beginning is an answer to someone claiming Nvidia DX12 drivers aren’t optimized yet):

Part 1: if we are at the bottom level, the activity of the driver isn’t very different from what it does under DX11. If we are talking about a very basic level in DX12, it means dealing with basic instructions that DX11 already perfected. So there isn’t anything intrinsic in DX12 that makes for a “tricky to develop” driver. The DX12 driver, compared to the DX11 one, is a driver that does less, at an even more basic level. So I’d assume that for an engineer it’s much easier to write that driver (and there’s less to work with when it’s time to squeeze out more performance). The first reason why DX11 might be FASTER is that Nvidia engineers know how to make something faster *in the driver*, whereas the developers who wrote this DX12 code don’t know as many tricks. Hence, DX11 is faster because it ends up running better custom code written by Nvidia.

Part 2: the better multi-threading in DX12 still brings overhead. That’s why the regression on Nvidia ONLY HAPPENS with 4+ cores and higher CPU frequencies. If the DX11 render thread can keep up (meaning it doesn’t completely fill one core), then DX11 is FASTER than DX12, because single-threaded code is faster and because it leaves even more room on the remaining cores for the rest of the game logic. If instead you hit the CPU cap on that single thread, THEN DX12 should ideally be faster, because you can spread the load better across the other cores.
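To make that trade-off concrete, here’s a minimal Python sketch, a toy stand-in that has nothing to do with real DX11/DX12 driver code: splitting CPU-bound work across workers has a fixed coordination overhead, so it only wins once a single core is actually the bottleneck.

```python
# Toy stand-in for the render-thread trade-off described above; not real
# graphics code, just CPU-bound busy work split across worker processes.
import time
from concurrent.futures import ProcessPoolExecutor


def burn(n):
    # CPU-bound busy work standing in for the cost of preparing a frame.
    total = 0
    for i in range(n):
        total += i * i
    return total


def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.3f}s")


if __name__ == "__main__":
    small, large = 10_000, 20_000_000
    with ProcessPoolExecutor(max_workers=4) as pool:
        # Light load: a single core keeps up, so the pool's coordination
        # overhead makes the parallel version slower.
        timed("small job, single thread", lambda: burn(small))
        timed("small job, 4 workers    ", lambda: list(pool.map(burn, [small // 4] * 4)))
        # Heavy load: the single core is the bottleneck, so spreading wins.
        timed("large job, single thread", lambda: burn(large))
        timed("large job, 4 workers    ", lambda: list(pool.map(burn, [large // 4] * 4)))
```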

The reason why the Final Fantasy 14 benchmark runs faster in DX9 than in DX11 is somewhat similar. You can have fast single-threaded code, or slower multi-threaded code. In the end, if you add up the load of the multi-threaded code, it comes out cumulatively higher (so slower) than the single-threaded code. The same happens with 64-bit versus 32-bit: 64-bit is marginally slower, but it lets you tap into more resources.

Those are aspects that might explain why DX11 ends up being actually faster than DX12. But the myth is that the theoretically better performance of the engine will become better performance for the end user too. I think that’s false, and it comes from a false perception of how game development works.

I’ll try to explain again why DX12 expectations may be overblown, as always happens when you focus on the technical aspects and not on the practical ones.

Optimizing a game is a never-ending process that takes development time. Development time = money.

For a game company the first priority is to do things QUICKLY, because doing things fast turns into money saved. That’s why the Batman port tanked: they didn’t want to allocate enough time to it. They wanted it done FAST because the PC version isn’t worth long development times.

Time spent on optimization and actual game performance for the end user belong to the same axis. That means that in a lot of cases the hypothetical speed of DX12 WILL NOT be translated into faster FPS for the end users, but into shorter optimization phases for the developer.

So, DX12 = the same performance as DX11, with shorter development time (eventually) and so at a lower cost for the developer.

That’s how it works. The speed of an engine isn’t solely due to technology, but also to the time spent on it. In practice, TIME is a more important variable for the developer than performance is for the end user.

That means, again, that in practice DX12 will end up producing just about the same performance you see now with DX11. Every improvement in tech, in the HISTORY OF THE PC, has always been eaten up very quickly by rising requirements. Always and without exception. The moment you give developers some gains, they absorb them on their side by cutting down development time.

That’s not even the whole picture. As everyone knows, video drivers are increasingly complex and optimized only for the newest cards. See The Witcher 3 performing badly on 7xx cards. That means that even if DX12 theoretically brings benefits to ALL cards, as time passes the engineers writing drivers will only have the time (and the motivation) to optimize them well for newer hardware. And that’s without even considering the developers who write engines, who will never waste weeks and months writing specific optimizations for older hardware.

That means that all gains that DX12 might bring will be used to push new hardware, and not to make your current hardware live longer. It will mean less engineering effort to develop new cards while showing bigger performance gaps. Smoke & mirrors.

This is how things work in practice, since the world isn’t simply run by theoretical technology. What you expect from DX12 just WON’T HAPPEN. DX12 performance improvements are oversold, as has ALWAYS happened and will continue to happen with new technology.

<3 JRPG pixels

I’m fiddling a bit with PlayStation 1 emulators.

Almost everyone uses ePSXe, because you can scale up the resolution and prettify the graphics, though the result is never as good as people think it is. The images below might be dark, so increase your brightness to compare them better, and maybe open them in separate browser tabs on a black background.

This is just scenery, taken from Vagrant Story, a game that looks immensely pretty when properly pixelated.

The first image is ePSXe with scaled-up resolution and textures. Notice that the polygons are much better defined, but there’s a complete lack of dithering, so the surfaces are very smooth and plain, giving a washed-out, bland look.

The one below is again ePSXe, but with the “software” mode plugin. Both the pixels and the dithering are back, but it doesn’t look so great. Still, I prefer this to the smoothness of the first image.

Now we change emulator. This is the one I’ve always used: a fairly obscure Japanese emulator named “Xebra”, with no configurable plugins and no option to “scale up” resolution and textures. It just emulates the original hardware as accurately as possible. So the following image is vanilla Xebra. Notice also that Xebra produces slightly more vibrant colors compared to ePSXe. Even with big pixels, the image blends well and looks natural. The dithering makes the surfaces richer, with more depth.
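As a side note, that dithering is an ordered, Bayer-style pattern the PS1 GPU applies when it reduces colors down to the 15-bit framebuffer. Here’s a rough Python/NumPy approximation of the idea, using a generic 4x4 Bayer matrix rather than the exact PS1 pattern:

```python
# Rough approximation of ordered (Bayer) dithering down to 5 bits per channel,
# which is what breaks flat color bands into that fine, "richer" texture.
# Generic 4x4 Bayer matrix, not the exact pattern the PS1 GPU uses.
import numpy as np

BAYER_4X4 = np.array([
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
], dtype=np.float32)


def dither_channel(channel_8bit):
    """Quantize one 8-bit channel (2D uint8 array) down to 5 bits, adding a
    tiled threshold pattern first so the banding breaks up into texture."""
    h, w = channel_8bit.shape
    tiled = np.tile(BAYER_4X4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    # Scale the 0..15 thresholds to the 8-unit step of 5-bit quantization.
    biased = channel_8bit.astype(np.float32) + (tiled / 16.0 - 0.5) * 8.0
    return (np.clip(biased, 0, 255) // 8 * 8).astype(np.uint8)
```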

Now, I noticed that if you disable OpenGL the image becomes sharper and looks way better, except that mode causes some graphical problems and is a lot slower. But then I realized that SweetFX shaders could work on top of the emulator, so I could sharpen the image and get the same result with OpenGL. The following image is the result of me playing with these shaders. I actually like the result a lot; I toyed with various intensity values, and the image is sharp. The downside is that, if you look at the bottom of the image, the effect enhances all these tiny squares, which become so sharp that they stand out too much and cover the actual detail.

The last image is again Xebra, with default settings and just a sharpen shader pushed to its maximum value. I think this produces the best effect. The beautiful dithering is there, and the tiny squares at the bottom blend much better with the background.
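For the curious, a sharpen pass of this kind is conceptually close to an unsharp mask. Here’s a minimal Python sketch with Pillow that does something comparable on a saved screenshot; it’s not the actual SweetFX sharpen pass (which runs as a post-process shader on the live frame), and the filename is made up:

```python
# Minimal stand-in for the sharpen pass, applied to a saved screenshot
# instead of the live frame. The filename below is hypothetical.
from PIL import Image, ImageFilter

img = Image.open("xebra_screenshot.png")  # hypothetical screenshot
# radius: size of the blur used to find edges; percent: sharpening strength.
sharp = img.filter(ImageFilter.UnsharpMask(radius=1, percent=300, threshold=0))
sharp.save("xebra_screenshot_sharp.png")
```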

Now another comparison (open in a new window for full size). The first quadrant is ePSXe in software mode. It’s pretty bad. Then there’s ePSXe at higher resolution. The textures are smoother but, if you look closely, the higher resolution also ends up exposing the problems of the source: the face looks a bit unnatural with its pointy nose and chin, and the lines are too sharp. That’s a common side effect when you scale up games that weren’t made for that level of detail. And, due to the lack of dithering, the shoulder shows only diagonal bands of color that look quite bad. The next quadrant is Xebra with my sharpen shader, to compare with the last quadrant, which is vanilla Xebra. In this case the last one produces the best results; the image is softer and more natural, suggesting that maybe a compromise between the last two is ideal.

But again, the important point I’m trying to prove is that the smooth, high-resolution option most people use is far from being the best. Pixels are beautiful.
