loopingworld.com

Books/mythology/stuff discussions moved to: loopingworld.com

This means that the site here won’t (usually) be updated and I’ll eventually copy all of the book-related posts over there. The rest of the stuff will stay here for as long as the site stays up (I’m not planning on pulling it down for the foreseeable future).

UPDATE: I’ll still post here sporadically, but only to write about roguelike development, track my own (lack of) progress, or cover other quirky gaming things.

Posted in: Uncategorized | Tagged:

Building 2D worlds

Nathan Jerpe, the guy who made the astounding Legerdemain, a roguelike-like (no random generation, no permadeath), sent me ALL the maps that make up the whole game, in native resolution.

I cannot believe my eyes. I’m not a young lad and I’ve seen a lot in gaming, especially ambitious stuff. But this is certainly one of the most impressive attempts at pure worldbuilding I’ve ever seen. It’s magnificent and beautiful (and ASCII can be so pretty when you know how to use it).

For the time being I cannot share a thing, though. He asked me not to share those images because he still wants players to discover the game on their own, and exploration is a major factor of this game. I will respect that, of course.

But behind the greatness you can spot in games there’s always the seed that creates the desire for “more”. That’s what fuels my idea for the crazy roguelike I’m experimenting with. So I have this very remote idea of “remixing” the stuff here and blending it with some other concepts. For sure it will be a source of endless inspiration.

One aspect I want to bring up again is the idea of the flat, two-dimensional world. I explained how in Dwarf Fortress the evolution to a 3D world with z-levels fundamentally changed the concept and removed that abstraction. What’s important to understand is that the abstraction has its uses and purpose, even if technology would let you have more.

Exactly the same happened with Doom and the games that followed. Doom still has a unique charm today that will never be replaced, and, more importantly, it has nothing to do with “nostalgia”. Of course the gameplay in Doom is much better than in Quake, but this is only indirectly tied to Doom being 2D versus Quake being 3D: indirectly, because the demands of a 3D world didn’t allow Quake to be as expansive as Doom. The same happened with Doom 3: the huge improvements in graphical fidelity didn’t allow the same scale to be maintained. This made Quake a game far inferior to Doom in pure gameplay and action, but so much better in environment exploration (which is why both Doom and Quake are extremely relevant today and do not overlap).

But this still leaves the 2D abstraction of Doom as a unique style with its own merits, one that cannot be improved on or replaced, because it’s an abstraction that works great. Doom levels are 2D. This means you can bring up a map and it’s a perfect representation of all there is to see. It’s a 3D world, projected in two dimensions, at no loss. This lack of an actual dimension means you are UNCHAINED in what you can do with just two. It means removing the complexity of one dimension so you can add that complexity back to the rest. It means compressing reality so that you can expand outward what you can do. Faster, more easily:

doom4

You can reach enormous complexity that would otherwise be unwieldy. It’s a deliberate renunciation, not a matter of building levels in Doom instead of a newer game just for nostalgia. The point is: no modern game out there can come even close to what Doom does today. Doom 4 will be shamed by this.

doom6

doom8

doom3

doom2

doom1
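Seen as data, the abstraction above is simple. The sketch below is my own toy illustration (the names and structure are mine, not from the actual Doom source): a level is 2D polygons plus per-sector floor and ceiling heights, so a top-down projection loses nothing.

```python
from dataclasses import dataclass

@dataclass
class Sector:
    outline: list        # 2D (x, y) polygon vertices
    floor_h: int         # floor height
    ceil_h: int          # ceiling height

def top_down_map(sectors):
    """Project the level to 2D: just collect every outline.
    Nothing is hidden, because sectors never stack vertically."""
    return [s.outline for s in sectors]

level = [
    Sector([(0, 0), (10, 0), (10, 10), (0, 10)], floor_h=0, ceil_h=8),
    Sector([(10, 0), (20, 0), (20, 10), (10, 10)], floor_h=2, ceil_h=8),
]
# The 2D projection is a complete, lossless description of the layout.
print(top_down_map(level))
```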

Doom, Dwarf Fortress before z-levels, and roguelikes in modern times all share the deliberate choice of removing one dimension (and often graphics entirely) to stick with 2D. Again, not for nostalgia, but because this choice EMPOWERS worldbuilding, pushing it to levels that are unmatched even in AAA commercial products with huge budgets (it’s also interesting to consider that GTA V achieved prettiness by sacrificing quite a bit of complexity compared to IV).

So let’s return to Legerdemain and similar games. The game world is visually impressive in a way not unlike those Doom screenshots: elegant complexity that pushes worldbuilding. I have some gaming myths that I carry with me. One is an RPG called “Fate: Gates of Dawn”. It’s one of the most ambitious and complex classic RPGs ever made. The world is HUGE and reportedly it takes more than 150 hours to complete. This is its game world:

FateWorld

It’s an actual game world, not an abstracted map. Pixel accurate, 1:1. This is a game built as a first-person dungeon crawler, so you move cell by cell, and every single pixel there represents an actual location. If you move north once, then turn east and move forward again, you have moved two pixels on that map. Of course cities and dungeons are separate, but it still means this game world is built on a grid of 640×400 cells, for a total of 256,000 locations. Essentially half of it is water, but it’s HUGE nonetheless.
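The 1:1 cell-to-pixel mapping can be sketched in a few lines (my own toy code, not anything from the actual game): each grid move shifts your position by exactly one pixel on the world map.

```python
# One grid step = one pixel on the world map (1:1 mapping).
MOVES = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}

def walk(start, steps):
    x, y = start
    for step in steps:
        dx, dy = MOVES[step]
        x, y = x + dx, y + dy
    return x, y

# Move north once, then east once: two pixels away from the start.
print(walk((100, 100), ["N", "E"]))   # -> (101, 99)

# The whole 640x400 world map, one cell per pixel:
print(640 * 400)                      # -> 256000
```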

Another impressive attempt at worldbuilding is Wizardry 7, another reportedly huge game that pushed the idea of linking separate maps into an “open world” meant to be explored non-linearly. The wilderness in that game is very big, especially compared to other dungeon crawlers, but we’re dealing with an overall grid close to 200×200 (plus, apparently, another one just as big comprising all the dungeons and similar locations). So it’s 40,000 cells overall, and you can see from the map that only a small minority are actually explorable.

The transition to 3D with Wizardry 8 obviously killed the game. They tried not to downsize the map too much, but the game is still extremely ugly and they didn’t do very much with the 3D itself. The point I’m trying to make is the same: deliberately losing one dimension lets complexity escalate. It’s a renunciation that empowers the worldbuilder to go beyond.

Now Legerdemain. Consider just one set of six dungeons. Each is built on a grid of 189×105, so each is ideally 19,845 cells, and the total for all six is 119,070. That’s ONE dungeon set. This collection has a total of 68 maps, and locations range from 15,000 to 30,000 cells each. Even here, looking at the dungeons, only a fraction of the space is actually explorable, but you can still see how this world isn’t just huge, it’s humongous. Unprecedented (and beautifully built, as I’ve already said). It took me a number of hours to explore two of them, and not even completely, since a few doors are locked (and now I can see that one of those doors also opens access to another level bigger than the other two).
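A quick back-of-the-envelope comparison of the grid sizes quoted above puts the scale in perspective:

```python
# Cell counts for the three worlds discussed above.
worlds = {
    "Fate: Gates of Dawn (overworld)": 640 * 400,
    "Wizardry 7 (wilderness grid)":    200 * 200,
    "Legerdemain (one 6-dungeon set)": 189 * 105 * 6,
}
for name, cells in worlds.items():
    print(f"{name}: {cells:,} cells")
```

One six-dungeon set of Legerdemain alone is nearly three times the Wizardry 7 wilderness, and the game has 68 maps.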

I can imagine that Legerdemain’s world might be fairly empty to explore. When you move through the wilderness you move between areas, through forests, hills, mountains, bridges and so on, all beautifully drawn in ASCII or tiles. But cell by cell there isn’t much specific to see or find. This is an aspect I’m studying, since a cell in a first-person dungeon crawler isn’t the same as a cell in a top-down roguelike. But why? The answer to that question is what my own game experiment should provide.

Legerdemain does at least some of what my ideal goal is. In the dungeons you sometimes find rooms that contain a “?”. When you step over it, a text message pops up and gives you “flavor text”, for example a more detailed description of the room you’re entering. This creates the meaningful distinction.

In both first-person dungeon crawlers and top-down roguelikes you still have a “tileset”: some basic building blocks with which you build the world. So you look at a map and you know those rooms are all virtually alike. A maze. They might contain some objects and monsters, traps, doors, but in the end it’s space that contains a variable mix of objects. In first-person dungeon crawlers the zoomed-in perspective and the high density of encounters lead to smaller worlds that are more packed with stuff. In top-down roguelikes you instead have a more expansive but emptier environment that you cut through at a much faster speed. More forgettable?

My goal is to find a formula to put classic roleplay flavor back into that top-down perspective. A slower pace where the room is unique, with complex textual descriptions that aren’t used simply to add flavor, but that offer various forms of manipulation. Through text. Doing for rooms what Torment did with dialogue: not just dialogue text, but description and a depth of interaction that goes far beyond what an engine made of sprites on a 2D fixed background would allow. It’s again the deliberate renunciation of a dimension, to allow for far more.
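The “?” mechanic is tiny to implement; here’s a minimal sketch of the idea (entirely hypothetical names and text, not Legerdemain’s actual code): certain cells carry a description that fires when the player steps on them, layering unique flavor onto otherwise generic tiles.

```python
# Map a handful of special cells to their flavor text. Everything
# else stays a plain, interchangeable maze tile.
flavor = {
    (4, 7): "The vaulted ceiling drips; old runes circle the door.",
    (9, 2): "A collapsed shelf. Someone searched this room in a hurry.",
}

def step(player_pos):
    """Called after each move; pop up flavor text if this cell has any."""
    text = flavor.get(player_pos)
    if text:
        print(text)   # the room stops being "just another cell"
    return text

step((4, 7))
```

The interesting part, of course, isn’t showing the text but hanging interactions off it, which is the direction I want to push.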

Because in the end I believe we don’t have to simply live in our time. If we want, we can try traveling through time to rediscover and rebuild what was great, to achieve even greater things. We can put aside nostalgia to retrieve what was actually good. Because, again, modernity doesn’t have to make up 100% of what we like, rendering obsolete even what isn’t.

Well, map attempt failed

I started to play a roguelike-like called Legerdemain.

Not only because it seems great, but also because I wanted to try mapping the whole thing onto a huge, flat plane. I think the partial “collage” I posted before looks amazing, and I badly want to have it complete, to use as a source of endless inspiration.

The problem is that the game is restricted to a very small window, and taking screenshots to then assemble into a big map is really complicated and time-consuming. This is just one dungeon level. Actually, it’s the TUTORIAL dungeon, and this is one level of three total (actually just two, I think; my bad, I started a while ago). There isn’t much to find here, but it still takes a whole lot of time to explore fully, there are still a couple of areas locked behind a door, and I’m not entirely sure if there’s a way to clear the rubble that walls off certain other areas…

lmaps

I’m also not playing fair: I’m save-scumming like crazy. I might play a bit recklessly, but I’ve died hundreds of times already and I have no idea how one would be able to play properly and restart every time from scratch. I know the game actually has a save system, but I haven’t found it yet. Monsters aren’t very strong, but at the beginning combat is very random, so depending on how the rolls go I can take no damage at all or risk death if I push my luck too much. All while needing to keep an eye on consumables like food and torches.

This game does a lot of the things my ideal game would be based on too. The levels might be a little wasteful and the combat bland, but that’s a good reason why this game exists while mine won’t even get close to plausibility.

I’ll keep playing, but it’s clearly not feasible to keep taking screenshots and assembling them. This game defies manual mapping, even if the result would look amazing.

Let’s make a world

CaPG2DhUUAAAMj9.jpg large

I just saw this posted on Twitter. Seen very small it might look like a weird alphabet for some very strange language, but it’s just a collection of the levels in Lode Runner.

Every one of those hides complexity of gameplay. Sets of rules and patterns to solve. Small worlds of sub-creation. Maps and geographies.

What happens if we link them together? We obtain an “open-world”.

When Dwarf Fortress was in one of its earlier configurations there were no z-levels; the whole game was played on one surface. That was a wonderful feature, lost in the quest for complexity. In that earlier version, building a fortress was like creating a painting. Every fortress its own story at a glance. Its unique style. One picture that captured and contained everything. A four-dimensional world that included TIME (as progress was measured from left to right).

One of the ideas I have for my pie-in-the-sky roguelike is that it will have a “world” that exists on a flat surface, with elements of an open-world (but more Dark Souls than Skyrim, as nothing is dynamic or random).

A space to explore and conquer.

level-collage

(this last image is from this game I’m currently playing)

Nvidia and the bleaker future of GPUs

I should probably spend my time doing more worthwhile things than writing this. But it seems no one else is going to write it.

As usual when I deal with this stuff, I will be imprecise and simplify A LOT. But in general what I say is going to be practically correct: the big picture is the one I’m describing, without getting lost in the technical details.

The situation is this: in the last couple of GPU generations, namely the 7xx and the latest 9xx, Nvidia has won the market. They won with hardware that, at the same price level, delivers better performance AND consistently better energy efficiency. So it’s a total win for Nvidia, beating AMD on every metric you can measure.

The problem is that it turns out this was achieved by removing certain scheduling hardware from the chips, a process that started with the 7xx class and continued with the 9xx. Putting it in the most simplistic way possible: there’s less “stuff” on the chip, and because of that the chip requires less power to run. Nvidia found they could improve performance by moving that specific logic out of the hardware and dealing with it in “software” instead, meaning the drivers. Stripping down and simplifying the hardware allowed Nvidia to create these energy-efficient GPUs, while also drastically reducing production costs. That’s how they won.

But this summer the first DirectX 12 benchmarks came out, and they showed not only that ATI performed a lot better than Nvidia, but that in a few cases Nvidia hardware performed WORSE in DX12 than in DX11. It turns out DX12 implementations rely much more directly on the hardware scheduling that, guess what, is not physically present in recent Nvidia hardware.

What this reveals is important for both future DX11 and DX12 games, and the likely scenario is that the current 970 and 980 video cards will age VERY quickly and very poorly. The current excellent performance of these GPUs depends critically on Nvidia writing game-specific schedulers in the drivers: critical optimization is done directly by Nvidia engineers at the compiler and driver level. Game programmers have NO ACCESS to this level of source code, so they can’t do anything besides calling Nvidia and hoping they care enough to allocate engineer hours to fix certain issues.

Right now the 970s and 980s show excellent performance because they have full support directly from those engineers, who write these custom schedulers for every big game coming out. These GPUs are crucially dependent on driver optimization because the driver is doing a job that, in ATI’s case, is done at the hardware level; Nvidia stripped that hardware from the new chips, so the job falls to the drivers. That’s also why new games keep coming out that show very poor performance on the 7xx chips compared to the 9xx ones: Nvidia engineers focus more and more on the newer cards, and less and less effort goes into optimizing and writing drivers for older hardware. The gap widens over time.

What happens when Nvidia releases new hardware next year, with proper hardware support for the DX12 features? Everything changes. Nvidia engineers will focus on optimizing for the newer cards, because Nvidia’s job is to sell you new hardware. And because current GPU performance is so dependent on active driver optimization, more than it ever was, since the schedulers are written in software, the moment Nvidia engineers stop pouring work into that optimization the performance of the current cards will plummet.

The scenario is that while the 970s and 980s are by far the best cards on the market right now, in the coming months and years we’ll see the picture completely rewritten. Current cards are going to perform very badly and upgrades will be mandatory if you want to keep up with newer games. There’s going to be a significant step up in hardware requirements, way steeper than what we’ve seen in the last few years.

Yet it’s also not possible to say whether Nvidia has already lost the market battle. Right now ATI hardware is much more future-proof than Nvidia’s, so ATI is better positioned strategically. But next year marks a shift in technology, a new beginning, and it’s probable that Nvidia will put the schedulers back in hardware, with proper DX12 support instead of emulation. It’s a new beginning only for Nvidia and whoever is ready to buy brand new hardware, though. For everyone else sticking with Nvidia’s current generation, it only means this hardware will quickly be rendered obsolete.


A note on MMORPGs business models

First, remember that MOST people can only see what happened after it happened, whereas some people learn enough to have a vision of what is going to happen. In a similar way, there are games created for an existing market and audience, and games that deliberately create a market that wasn’t there before, one that suddenly becomes canon and that everything else has to conform to from that point onward. A vision can open new paths, and those new paths become the foundation on which everything else is built.

That said, there’s a widespread myth that free-to-play has to “replace” subscription models, that it is some unavoidable destination. The discussion gets framed as a new model replacing an obsolete one, instead of focusing on a game’s own merits, as if a game could fail or succeed purely because of its business model.

The truth of free-to-play versus subscription models is fairly simple and slightly different from the debates I usually see. The point is that a subscription model is more directly competitive, and so riskier. It is not a case of “new” versus “old”, or of a model that is now obsolete. The rise of free-to-play is motivated by the fact that the market is so competitive no one would survive on a subscription model. Free-to-play is a way to virtually enlarge the pie. Understood?

The reason is also simple. Players out there can and will buy different games; they can shift their focus from one to the other. A subscription model instead leads to a situation where a player decides on ONE game to play, since it’s very unlikely a player will maintain multiple subscriptions. The result is that subscription-based games compete much more directly with each other, and only the “King of the Hill” survives and does well under these conditions. Every other title falls short and struggles, which is the very simple reason why World of Warcraft dominated all these years. It’s also the reason why Elder Scrolls Online has to move to a subscription-free model: not because subscriptions are a “bad business model”, but merely because the title isn’t strong enough to face the competition a subscription model implies. It’s like a two-tiered market where a couple of games can compete at the top and do well with a subscription, while lesser competitors have to find a way to co-exist with less belligerent business models.

Again, the subscription model is still “ideally” the more appropriate business model for a long-term MMORPG that wants to grow as a virtual world. But for the practical needs of a market, a market where you want to survive, the free-to-play model offers a way to squeeze more space out of that highly competitive, merciless market.

Rocksteady engineers trying to do the impossible

Gaming news these days is a joke.

http://www.eurogamer.net/articles/2015-08-21-batman-arkham-knights-interim-patch-due-in-the-next-few-weeks

Actual quote:

Warner said the above list is the priority, but it’s still working on the following:

– Skipping the boot up splash screens

Man, their most talented engineers are all hard at work making splash screens skippable, but you have to accept when a task is simply above your skills.

Please desist, WB and Rocksteady, we will forgive you if you can’t fix what’s honestly way too complex to accomplish realistically.

I feel we should start a campaign to send them advice and maybe help them with this cyclopean task. Something like: “try moving the splash screen video files out of their directory, because it just might work.”
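For the record, the “advice” above is a one-liner. The directory and file names below are pure guesses on my part (I don’t know the game’s real folder layout), so this just demonstrates the idea on dummy files:

```shell
# Hypothetical layout: park the intro videos in a backup folder so
# the engine finds nothing to play at boot. Paths are made up.
mkdir -p ArkhamKnight/Movies ArkhamKnight/Movies_backup
touch ArkhamKnight/Movies/WBLogo.mp4      # stand-in splash video
mv ArkhamKnight/Movies/*.mp4 ArkhamKnight/Movies_backup/
ls ArkhamKnight/Movies                    # now empty: nothing to play
```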

Dispelling the myth of DirectX 12

There are lots of articles out there detailing the merits of the new DirectX, but I think they all raise expectations for the end user that will never materialize.

The biggest claim is that DX12 is “more efficient”, and therefore free performance. Being compatible with older hardware would mean the same engine on the same hardware runs better, especially with a lower load on the CPU side. All of this leads up to the myth that DX12 will extend the life cycle of current hardware.

My opinion is that the opposite will happen: DX12 is a way to push people, once again, to buy new video cards and new CPUs. As has always happened.

A couple of days ago Eurogamer published an article about the first somewhat relevant DX12 benchmark:
http://www.eurogamer.net/articles/digitalfoundry-2015-ashes-of-the-singularity-dx12-benchmark-tested

The most important aspect is that on a fast CPU with an Nvidia card, DX12 is SLOWER than the ancient DX11 technology. This alone is the proof; one case means nothing beyond showing it can actually happen: DX12 isn’t a guaranteed improvement. It could just as well push things backward instead of forward. It’s not unambiguously “better”.

Here’s what I wrote about what might have happened (the beginning is an answer to someone claiming Nvidia DX12 drivers aren’t optimized yet):

Part 1: if we are at the bottom level, the activity of the driver isn’t very different from what DX11 does. Talking at a very basic level, DX12 means dealing with basic instructions that DX11 already perfected. So there isn’t something intrinsic to DX12 that makes for a “tricky to develop” driver. The DX12 driver, compared to the DX11 one, is a driver that does less, at an even more basic level. I’d assume that for an engineer it’s much easier to write (and there’s less to work with when it’s time to squeeze out more performance). So the first reason why DX11 might be FASTER is that Nvidia engineers know how to make something faster *in the driver*, whereas the guys who wrote the DX12 code didn’t know as many tricks. Hence, DX11 is faster because it ends up with better custom code written by Nvidia.

Part 2: the better multi-threading in DX12 still brings overhead. That’s why Nvidia’s performance regression ONLY HAPPENS on 4+ cores and higher CPU frequencies. If the DX11 renderer can keep up (meaning it doesn’t completely fill one core), then DX11 is FASTER than DX12, because single-threaded code is faster and because it leaves even more room on the remaining cores for the rest of the game logic. If instead you hit the CPU cap on that single thread, THEN DX12 should ideally be faster, because you can spread the load better across the other cores.

The reason why the Final Fantasy 14 benchmark runs faster on DX9 than DX11 is somewhat similar. You can have fast single-threaded code, or slower multi-threaded code. If you add up the load of the multi-threaded code, it ends up cumulatively higher (so slower) than the single-threaded code. The same happens with 64-bit versus 32-bit: 64 is marginally slower, but it lets you tap into more resources.
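The threading trade-off I’m describing can be sketched with a toy cost model (the numbers below are mine and purely illustrative, not measurements): splitting work across cores adds a fixed synchronization cost, so it only wins once a single core is saturated.

```python
def frame_time(work_ms, cores, sync_overhead_ms):
    """Toy model: render work split over N cores pays a fixed
    per-frame synchronization cost. Illustrative numbers only."""
    if cores == 1:
        return work_ms
    return work_ms / cores + sync_overhead_ms

light, heavy = 4.0, 40.0   # ms of render work per frame
# Light load: one core keeps up, so the single-thread path wins.
print(frame_time(light, 1, 3.5), frame_time(light, 4, 3.5))   # 4.0 vs 4.5
# Heavy load: the single core is saturated, so spreading wins.
print(frame_time(heavy, 1, 3.5), frame_time(heavy, 4, 3.5))   # 40.0 vs 13.5
```

Same shape as the DX11/DX12 benchmark: on a fast CPU that isn’t saturated, the extra coordination is pure loss.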


Those are aspects that might explain why DX11 ends up actually faster than DX12. But the myth is that the ideal, better performance of an engine will turn into better performance for the end user too. I think that’s false, and it comes from a false perception of how game development works.

I’ll try to explain again why DX12 expectations are overblown, as always happens when you focus on the technical aspects and not the practical ones.

Optimizing a game is a never-ending process that takes development time. Development time = money.

For a game company the first priority is to do things QUICKLY, because doing things fast turns into money saved. That’s why the Batman port tanked: they didn’t want to allocate enough time to it. They wanted it done FAST, because the PC isn’t worth long development times.

Time spent on optimization and actual game performance for the end user sit on the same axis. That means in a lot of cases the hypothetical speed of DX12 WILL NOT be translated into faster FPS for end users, but into shorter optimization phases for the developer.

So: DX12 = the same performance as DX11, with shorter development time (eventually), at a lower cost for the developer.

That’s how it works. The speed of an engine isn’t solely due to technology, but also to the time spent on it. In practice, TIME is a more important variable for the developer than performance is for the end user.

That means, again, that in practice DX12 will end up producing just about the same performance you see now in DX11. Every improvement in tech, in the HISTORY OF THE PC, has always been eaten very quickly by rising requirements. Always, without exception. The moment you give developers some gains, they fill them up on their side by cutting down the time.

And that’s not even the whole picture. As everyone knows, video drivers are increasingly complex and optimized only for the newest cards; see Witcher 3 performing badly on 7xx cards. Even if DX12 theoretically brings benefits to ALL cards, as time passes the engineers writing drivers will only have the time (and motivation) to optimize them well for newer hardware. Never mind the developers writing engines, who will never spend weeks and months on specific optimizations for older hardware.

That means all the gains DX12 might bring will be used to push new hardware, not to make your current hardware live longer. It will mean less engineering effort to develop new cards while showing bigger performance gaps. Smoke and mirrors.

This is how things work in practice, since the world isn’t run simply by theoretical technology. What you expect from DX12 just WON’T HAPPEN. DX12 performance improvements are oversold, as has ALWAYS happened and will continue to happen with new technology.

<3 JRPG pixels

I’m fiddling a bit with Playstation 1 emulators.

Almost everyone uses ePSXe, because you can scale up the resolution and prettify the graphics. Though the result is never as good as people think it is. The images below might be dark, so increase your brightness to compare them better, and maybe open them in separate browser tabs, on a black background.

This is just scenery, taken from Vagrant Story, a game that looks immensely pretty when properly pixelated.

The first image is ePSXe with scaled-up resolution and textures. Notice that the polygons are much better defined, but there’s a complete lack of dithering, so the surfaces are very smooth and plain, giving a washed-out, bland look.

The one below is again ePSXe, but with the “software” mode plugin. Both pixels and dithering are back, but it doesn’t look so great. Still, I prefer this to the smoothness of the first image.

Now we change emulator. This is the one I’ve always used, a fairly obscure Japanese emulator named “Xebra” with no configurable plugins and no way to “scale up” resolution and textures. It just emulates the original hardware with the maximum possible accuracy. The following image is vanilla Xebra. Notice that Xebra produces slightly more vibrant colors than ePSXe. Even with big pixels, the image blends well and has a natural look. The dithering makes surfaces richer, with more depth.

Then I noticed that if you disable OpenGL the image becomes sharper and looks way better; the catch is that it causes some graphical problems and runs a lot slower in that mode. But then I realized that SweetFX shaders could work on top of the emulator, so I could sharpen the image and get the same results on OpenGL. The following image is the result of me playing with those shaders. I toyed with various intensity values, and I actually like the result a lot: the image is sharp. The downside is that, if you look at the bottom of the image, the effect enhances all those tiny squares, which become so sharp they end up covering the underlying detail by standing out too much.

The last image is again Xebra, with default settings and just a sharpen shader pushed to its maximum value. I think this produces the best effect: the beautiful dithering is there, and the tiny squares at the bottom blend much better with the background.
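Conceptually, a sharpen filter like the one I’m stacking on top is just a tiny neighborhood operation. This is my own toy version (not SweetFX’s actual shader code): push each pixel away from the average of its neighbors, which is exactly why dithering patterns “pop” more as you raise the intensity.

```python
def sharpen(img, amount=1.0):
    """Toy sharpen on a 2D grid of brightness values: boost each
    interior pixel against the average of its 4 neighbors."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbors = (img[y - 1][x] + img[y + 1][x] +
                         img[y][x - 1] + img[y][x + 1]) / 4.0
            # push the pixel away from its local average
            out[y][x] = img[y][x] + amount * (img[y][x] - neighbors)
    return out

# A lone bright pixel on a dark field gets exaggerated further.
img = [[0, 0, 0], [0, 10, 0], [0, 0, 0]]
print(sharpen(img, amount=1.0))
```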

Now another comparison (open in a new window for full size). The first quadrant is ePSXe in software mode; it’s pretty bad. Then there’s ePSXe at higher resolution. The textures are smoother but, if you look closely, the better resolution also ends up exposing the problems of the source: the face looks a bit unnatural with its pointy nose and chin, and the lines are too sharp. That’s a common side effect of scaling up games that weren’t made for that kind of detail. And due to the lack of dithering, the shoulder shows only diagonal bands of color that look quite bad. The next quadrant is Xebra with my sharpen shader, to compare with the last quadrant, which is vanilla Xebra. In this case the last one produces the best results: the image is softer and more natural, suggesting that maybe a compromise between the last two is ideal.

But again, the important point I’m trying to prove is that the smooth, high-resolution option most people use is far from being the best. Pixels are beautiful.


This modern counter-bias

So, it looks like the fantasy side of tabletop Warhammer joins the list of things that got a “reboot”. It shouldn’t surprise anyone that it turned into shit.

It doesn’t take long, looking around the internet, to see that the response to this reboot has been almost universally negative. The Warhammer fantasy universe has been reset, so all established lore has been canceled; it was also an opportunity to rewrite the rules and, guess what, make them more “casual”.

The main differences are the focus on a smaller number of units and more importance given to heroes with special abilities: a smaller scale to manage, where single units make the difference. Besides that, everyone complains that the removal of army points makes battles simply impossible to balance. It sounds like a gaping hole of an oversight, however you look at it.

It should be evident that they now want a toy, and not a wargame.

But I’m pointing this out to underline two basic trends. One is these “reboots” that systematically alienate the current players while gaining absolutely no one new. The point here is that it doesn’t take any careful analysis to realize these plans are always terrible.

The second trend: I was reading this article, which does a good job explaining the situation:
http://www.terminally-incoherent.com/blog/2015/07/04/age-of-sigmar-and-the-end-of-warhammer/

The problem is that it falls into the trend of the “Social Justice Warrior” angle being forced onto everything, which only has the effect of undermining perfectly reasonable complaints. As I said, the article makes very good points, so it really wasn’t necessary to pile on that silly angle. I’m linking it because it reads like a parody of those same issues.

One of the things the new rules seem to do is try to break the fictional layer of the game and engage THE PLAYER directly as a game mechanic. In some kind of parody game it could even be a good, goofy idea, but in actual Warhammer? It’s beyond stupid.

But I find it even funnier that while on one side the game rules themselves break the fictional layer, on the other side the guy writing that article pushes a political agenda onto a fictional game/product. So I guess two wrongs make a right. The result is that perfectly reasonable complaints about a very goofy ruleset turn into very goofy complaints, in a kind of circular way.

And so the accusation:
“encouraging players to straight up mock people who suffer from mental illness”

About this rule:
“if, during your hero phase, you pretend to ride an imaginary horse, you can re-roll failed hit rolls”

Uh-oh. So very offensive. Worthy of a crusade. GRAB THE WARHAMMERS!

P.S.
On a more serious note, this way of thinking is dangerous. It’s a weaponized argument, and it is now pervasive in our culture in plenty of more subtle ways: blaming people for imaginary intentions. If you ride an imaginary horse while playing a game, your INTENTION in doing so must be to “mock people who suffer from mental illness”. And of course you cannot even defend yourself from the accusation, because the accusation pretends to reveal a HIDDEN purpose, so no defense will be admitted. Like a dialectic bullet of entitlement. Beware, because this way of thinking is spreading.