Revisiting Collectible Card Game RPGs

I remember playing the Pokemon Trading Card Game for the Game Boy Color as a child. It is a game where you walk around to different gyms and battle trainers and gym leaders using Pokemon trading cards. Opponents give you booster packs when you defeat them, and you use the new cards from those packs to build and improve your decks. There is a storyline, but it is very thin; the game is mostly focused on battles.

These days, there are plenty of digital Collectible Card Games (CCGs) out there. CCGs are not the same as the deckbuilding games that are popular nowadays; in a CCG, you can freely assemble and modify any number of custom decks from your ever-growing collection, whereas deckbuilders make assembling your deck part of the gameplay itself and greatly restrict how you can modify it.

Most digital CCGs are built around an online PvP structure, supplemented by a limited challenge mode where you compete against an AI. Microtransactions to buy new cards or booster packs are also common. Presumably the thought is that players will compete against their friends and be driven to improve their decks, occasionally shelling out some money for better cards.

This is a terrible idea.

CCGs that rely on PvP inherently suffer from the problem of network effects, the same as any other multiplayer game. If many people play the game then the game is fun. If few people play the game, then it isn’t fun, even if the basic mechanics are sound. Having a thriving PvP scene is a nice bonus, but it absolutely cannot carry the game.

Why don’t we see more digital CCGs like the old Pokemon game? Thanks to its (admittedly weak) RPG structure, it was a lot of fun even if you never played against other human players (though that was also supported via a Game Boy link cable). CCGs actually form a very strong basis for an RPG. Both opponents are playing with the same ruleset, so it is easy to understand what your opponent might do. Booster packs work as natural loot for winning a battle, and it is easy to draw a connection between the opponent and the loot; just brand the booster pack accordingly. The player can also expect to find some of the cards that the opponent used against them in the booster pack, which is a very effective incentive to challenge the same foe again and again.

RPG combat, at least in traditional JRPGs, can often feel repetitive and stale, with the player just finding the optimal attack and hammering it. Even looking at the Pokemon games, I find that the RPG combat is much less interesting than the CCG combat – there are so many moves, like Leer and Growl, that are never worth using because the more powerful moves will take out the opponent instantly. The card game, by contrast, is structured so that almost every move is useful in some situations, often because the player lacks the energy for the move they really want to use.

I think CCG RPGs have a lot of unexplored potential. The CCG elements fix the problem of stale combat and progression mechanics, and the RPG elements ensure that you don’t need a vibrant online player base to enjoy the game. I’m definitely considering trying my hand at one at some point.

Designer Diary: Corrupted by Ruin #3

I think I’ve figured out a workable system for combat in Corrupted by Ruin. The biggest constraint is that it must be fast and uncomplicated since each hero incursion potentially initiates multiple encounters. Forcing players to go through the same actions many times will get repetitive. The natural model for quick, simple combat is an idle system where the player sets up the battle and lets it play out. Just think of Loop Hero, where the player speeds through hundreds of fights in a run. So I decided to have a similar system, where combatants have health bars and cooldown timers and attack random targets.

Idle combat comes with its flaws. My chief concern was that it would muddy the differences between different monsters, which happens in “Hero’s Hour.” I think I can get around this by limiting the number of combatants on each side to eight, but this still leaves the question of how to distinguish monsters from each other without the ability to select actions. And I think I have found the answer – a variant of my idea to incorporate Spirit Island threshold mechanics, but one that treats monsters and heroes as asynchronous element converters.

Here is how my new system works. Each creature has two or more actions, each with a priority. It goes through its list in order until it finds something it is capable of doing. Each ability may have an elemental cost and might produce some elements. However, these are available to any creature on either side of the fight – for example, the heroic cryomancer might use the water element your slime produced to boost a spell, or your salamander might benefit from the enemy flame knight’s fire generation. Each battle becomes a converter-based economic game.
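
To make that concrete, here is a minimal sketch of the resolution loop in Python. None of these names or numbers come from the actual project; the point is just that each creature walks its priority list and performs the first ability it can pay for out of a single element pool shared by both sides.

```python
from dataclasses import dataclass

@dataclass
class Ability:
    name: str
    cost: dict       # elements consumed, e.g. {"fire": 1}
    produces: dict   # elements generated, e.g. {"water": 2}

@dataclass
class Creature:
    name: str
    speed: int
    abilities: list  # ordered from highest to lowest priority

def affordable(cost, pool):
    return all(pool.get(element, 0) >= amount for element, amount in cost.items())

def act(creature, pool):
    # Walk the priority list and use the first ability the creature can pay for.
    for ability in creature.abilities:
        if affordable(ability.cost, pool):
            for element, amount in ability.cost.items():
                pool[element] -= amount
            for element, amount in ability.produces.items():
                pool[element] = pool.get(element, 0) + amount
            return ability.name
    return None  # nothing affordable, so the creature idles this turn

def resolve_turn(creatures, pool):
    # One tick: faster creatures act first, and both sides of the fight
    # draw from (and feed) the same shared element pool.
    return [(c.name, act(c, pool)) for c in sorted(creatures, key=lambda c: -c.speed)]
```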

One implication of this system is that speed matters a lot. If you see a slow hero that needs fire to be effective, you can counter it with a faster monster that also uses fire because it is more likely to get the element first. Or, if there is a quick hero that consumes water, you might avoid using water monsters.

Another neat thing that I can do with this system is to change the behavior of creatures depending on what is available. For example, maybe there is a troll that typically guards other monsters, but if you flood the room with psychic energy, it goes berserk and starts attacking. The player can then use elements to choose a strategy for their side and manipulate their enemies.

One concern with any combat system based around synergies is that players will get locked into a pattern. If a combination of monsters is good, why not just use it every time? I predict that my elemental conversion system will avoid this because even the same team of monsters will behave very differently depending on the enemy heroes they face. Furthermore, even though you fight the same heroes in multiple rooms within a single incursion, the rooms can also consume and produce elements, which should be enough to shake things up.

Designer Diary: Corrupted by Ruin #2

I spent the last week at a game design retreat, and among other things, one game I tested was “Corrupted by Ruin.” I don’t have a digital prototype yet, so I printed out some tetrominoes and monster cards and had people play it as though it were a board game.

There are some pretty significant differences between digital games and board games. Some games that work great in the digital realm fall flat in the real world, and some things don’t make the transition in the other direction very well either. Still, paper prototyping is a great way to validate an idea, and it is a good sign if something works well even without a machine.

While playtesting, I realized that the triggering mechanism for an incursion makes a big difference in how it feels. Traditionally in tower defense games, the player alternates between constructing defenses and operating them (actively or passively) against regularly scheduled waves of intruders. For example, in “The Last Spell,” the player builds structures and upgrades their heroes during the day and then deploys them against the monsters that attack every night. I initially approached my design this way since it is about defending a dungeon against heroes.

The problem is that this leaves the player very little control over where the heroes enter the dungeon. A big part of the game is engineering encounters to maximize the effectiveness of the monster and room combinations. If the heroes can enter the dungeon anywhere, the player has little control over their path. Seeing where the heroes enter ahead of time doesn’t help much either because the player can’t rearrange their rooms to account for it.

So instead, I tried letting players trigger the incursions. I added “staircase” tiles around the map, and whenever the player’s dungeon connected to a staircase tile, heroes would invade. The player’s incentive to go after staircases was that defeating heroes was necessary to win. The resulting game felt much more interesting to me. The player is still playing defense, but now they control when and where they fight the heroes.

Roguelike games tend to have a proactive aesthetic, where the player is constantly advancing spatially towards a goal. Reactive gameplay clashes with this. I tried out “Tower Tactics: Liberation,” a tower defense roguelike, and found this contradiction to stand out. You move from place to place, and in each location, you set up towers and defend against waves of enemies, but the contrast between the map movement and the lane defense is a little jarring. By flipping the script so that the heroes react to the player’s incursion, I hope to align the high-level gameplay aesthetic of conquering land after land with the lower-level conquest of a single kingdom.

Giving the players complete control of hero incursions also has implications for what happens when the heroes invade. Deterministic combat allows players to calculate the exact results of triggering a battle rather than relying on their heuristics. I want to avoid this because it is tedious for the player. Also, with more control over the path the heroes take comes the obligation to make the associated decisions more interesting. I think my current combat system is too plain to justify this, so I am starting to think about more interesting mechanics for combat. I will most likely want a puzzle of some sort.

Another problem that I solved through paper playtesting was how to handle monster recruitment. My previous system involved buying monsters from a pool of available ones, with new monsters unlocked by building specific types of rooms. That has three problems: First, it forces me to design a matching room for every monster (and vice versa) even when it doesn’t necessarily make sense. Second, players already draft rooms, and doing the same for monsters feels repetitive. Third, it doesn’t interact very much with the tile-placement mechanics that are central to the game.

The solution turned out to be very simple – monsters, like gold, start each level embedded in the map. When you cover them with a tile, they join your dungeon. Now I can design monsters independently from rooms and even associate them with map types, and monster drafting interacts with room placement to enhance both.

Having to “mine” monsters out of the ground implies that monsters are a finite resource. Limited upgrades for monsters also make sense, since mined monsters feel less generic than they would under a drafting system.

Designer Diary: Corrupted by Ruin #1

I’ve started working on a new computer game. I initially envisioned it as a cross between Dungeon Keeper 2 and FTL; you build a dungeon, gain monsters, and move them around your dungeon to respond to adventurers that try to invade. By surviving over several rounds, you gradually corrupt and take control of the land you are in, and a run involves corrupting several such lands.

In contrast to Icewords, for which development was very ad-hoc, I’ve decided to plan this game out in advance. I am making a game design document in as much detail as possible to have a clear picture of where I am going. I am also experimenting with test-driven development, using the Godot unit testing framework WAT.

Understanding the genre expectations of players is crucial when making a game. You don’t have to adhere to all of them, but it is easier for players to understand a game when they are already familiar with aspects of it. Dungeon Keeper 2 is the best example of the genre I am targeting. Some of its features that I see as essential are building a dungeon out of different types of rooms, attracting a variety of monsters to your dungeon, and using those monsters to fight off heroes that try to invade. Mining for gold while building tunnels is another iconic mechanic that would be good to replicate.

I decided to start my brainstorming with the concept of rooms. A large part of the progression that I envision in my game comes from unlocking, building, and upgrading many different types of rooms. Each room has a different effect and spawns a unique minion type. Some rooms can also transform when connected to others, an idea I got from Loop Hero. For example, placing a Library next to a Graveyard transforms it into a Haunted Library. Players like having little secrets to discover.

My initial vision for the rooms was as square tiles placed in a grid, like in Galaxy Trucker. When designing any spatial game about building something, I always ask: “Why does it matter where I place things?” In Galaxy Trucker, many threats come from a predictable direction and are more likely to occur in some rows or columns than others. While this makes sense in space, I didn’t see an obvious way to do something similar underground.

There are all sorts of ways you can make positioning matter based on the abilities of certain rooms. I object to relying exclusively on room abilities, however. It places a burden on new players: to make intelligent decisions about placement, they need to learn the nuances of each room, and their placement heuristics don’t transfer when they encounter new room types. I wanted to provide a reason to put a room in a particular location independent of what the room did.

I decided to try using tetrominoes instead of uniform square tiles. Most games that deal with tetrominoes ask questions about how efficiently they tile the board and what is on the spaces they cover up. The latter focus is perfect for a game about building a dungeon because it is naturally suited to mining mechanics: you can print resources on the board for the player to harvest by covering them up.

Cover-up mining is probably enough to justify tetromino placement, but I wanted to make efficient tiling a focus too. In Dungeon Keeper 2, you gain a mana resource based on the area of your dungeon. In thinking about implementing a similar system, it occurred to me that rewarding the player for dungeon area and penalizing them for dungeon perimeter naturally incentivizes efficient tiling. Mechanically, the player gains energy for each tile in their dungeon and loses it for each adjacent empty tile. Thematically, rooms generate mana and radiate it off into the environment. Mana thus obeys similar rules to real-world heat, which I like.
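
As a quick sanity check on that rule, here is a tiny sketch of the income calculation on a square grid. The one-to-one constants are placeholders rather than tuned values; what matters is that the perimeter penalty falls out of a simple count of exposed edges.

```python
MANA_PER_TILE = 1          # energy gained for each tile in the dungeon
LOSS_PER_EXPOSED_EDGE = 1  # energy radiated away for each adjacent empty tile

def mana_income(dungeon):
    # `dungeon` is a set of (x, y) cells covered by rooms.
    income = 0
    for (x, y) in dungeon:
        income += MANA_PER_TILE
        for neighbor in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if neighbor not in dungeon:
                income -= LOSS_PER_EXPOSED_EDGE
    return income

# The same four tiles, arranged compactly versus stretched out:
print(mana_income({(0, 0), (1, 0), (0, 1), (1, 1)}))  # 2x2 block: 4 - 8 = -4
print(mana_income({(0, 0), (1, 0), (2, 0), (3, 0)}))  # 1x4 line: 4 - 10 = -6
```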

Such a system gives players two contradictory guiding reasons for tile placement. They want to spread out to harvest resources on the map, but they want to stay compact to minimize mana radiation loss. On top of this, they need to consider room evolution and any room abilities that care about proximity or adjacency. Overall, I think this gives enough reasons to care about where they place their rooms to make tile positioning meaningful.

At this point, I decided to try out unit testing in Godot while implementing classes to handle tetromino rooms and maps. I tried using GUT first but didn’t like that I couldn’t use arguments in my constructors with it, so I switched to WAT. I also prefer WAT’s integration with the editor over GUT’s requirement that you launch a scene to run your tests.

The biggest unexpected thing I learned is that unit testing is great for giving you feedback without having to implement example scenes. Previously, when making games in Godot, I would rush through a slapdash setup to get something I could play, leading to messy code that I would have to refactor. Psychologically, we want to see results for our work, and it is hard waiting for thousands of lines of code before you see anything happen – plus, you are almost sure to have an error somewhere. But being able to write a unit test and then immediately see results reduced this need a lot. Unit testing doesn’t just help you catch errors; it also helps you stay motivated.

Combat is the next consideration, which I am still debating. The planned structure of the game is that you cycle through the phases of building rooms, preparing for the heroes, and fighting the heroes. At first, I imagined this working like FTL boarding mechanics where monsters and heroes in the same room will automatically fight each other, and the challenge is allocating your monsters. Heroes would enter the dungeon from doors that lead out, so the layout would influence where threats appear. Battles would happen in real-time, perhaps pausing to cast spells or issue orders.

However, I’m not sure how interesting I can make that without it becoming inaccessible. Crew combat is simple in FTL because you have to manage other things simultaneously, like weapons and power allocation. But if the focus is on fighting alone, I’m not sure it would be interesting enough to justify the phase. The issue with idle combat is that aside from the choice of where to allocate forces, the player has very little to do. Also, the idea of independent heroes invading from many different points feels off; capturing the traditional feel of a party venturing through a sequence of encounters would be better.

Another problem with real-time combat, in general, is that it severely limits the feedback you can give the player about differences between units. For instance, I recently played a newly released game called Hero’s Hour, where you recruit armies comprising many different types of units. However, combat involves a real-time battle where your entire army fights the enemy army, and it is difficult in the chaos to tell how each type of unit differs from the rest. I have observed this with RTS games as well; real-time combat smooths over the unique properties of each unit. I want the different monster types to have personalities, not blur together into generic minions. Therefore, I realized that I needed to have turn-based combat.

Currently, I am considering a system where a single party of adventurers encounters one room of your dungeon at a time, and you deploy monsters (and traps and spells) to that room as though you are playing cards. I was thinking about a turn-based idle system where heroes and monsters alternate dealing their damage, but that adds a lot of time to the game, and I’m not sure how your choices would change between combat rounds. If a spell is good to play, wouldn’t you spam it? And if you can’t, then why have multiple turns? Instead, my current vision is that there is one round of combat in which you play your cards and hit resolve, and then the heroes progress to the next room.

I’ve been playing a lot of Spirit Island lately, so it occurred to me that the power thresholds mechanic would work well with such a combat system. Each room and monster in the room provides elements, and each room, monster, trap, and spell has two effects – a weaker default effect and a better effect that applies when used in a room with the matching elements. I still need to figure out what makes the most sense thematically for how traps work.
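
As a rough illustration of how such a threshold check could resolve a played card – the data layout, numbers, and example card here are made up for the sake of the sketch, not taken from the design:

```python
def elements_in_room(room, monsters):
    # Total up the elements provided by the room itself and the monsters in it.
    available = dict(room.get("elements", {}))
    for monster in monsters:
        for element, amount in monster.get("elements", {}).items():
            available[element] = available.get(element, 0) + amount
    return available

def resolve_card(card, room, monsters):
    # Use the boosted effect only if the room meets the card's element threshold.
    available = elements_in_room(room, monsters)
    met = all(available.get(element, 0) >= amount
              for element, amount in card["threshold"].items())
    return card["boosted_effect"] if met else card["default_effect"]

# Example: a fire trap played into a lava room guarded by two salamanders.
lava_room = {"elements": {"fire": 1}}
salamanders = [{"elements": {"fire": 1}}, {"elements": {"fire": 1}}]
fire_trap = {
    "threshold": {"fire": 3},
    "default_effect": "deal 1 damage to one hero",
    "boosted_effect": "deal 2 damage to every hero",
}
print(resolve_card(fire_trap, lava_room, salamanders))  # boosted effect
```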

I love reading or listening to post-mortems and designer diaries, but my games often have few records about why I made the choices I did or what issues I encountered. I think recording my reasoning as I go is valuable not just for looking back but also to help understand my design choices as I make them, so I will make more of an effort to post about it here.

Icewords: a Post-Mortem

In January, I published Icewords on Steam. Icewords is an adaptation of Spelling Brawl in which the player moves around on a hexagonal grid, spelling words and pushing winter-themed enemies into the water. Commercially, the game wasn’t much of a success, but I learned a lot from putting it up for sale.

Content is King

Icewords relies on randomly generated word-search puzzles to be engaging. There are only five enemy types, so replayability is limited. Score attack is not enough to guarantee replayability by itself and cannot be the main feature of the game; you need to provide content for players.

“Content” doesn’t necessarily mean handcrafted levels, though those would probably help. It just means you need to give players things to discover. The most common method uses unlockable game features, like new enemies, abilities, or characters. However, these require more art and music, so a campaign with non-randomized maps might be a more economical solution.

You don’t necessarily need to gate content off from players at the start of the game. I’ve been playing a lot of Rift Wizard lately, and it has no unlocks. However, the game has so many spells and skills and ways to combine them that it is endlessly replayable. If you are confident in your replayability, you don’t need to lock content away. Giving the player everything at the start also leaves no indication of when they have seen everything there is to see, which may enhance replayability. Rift Wizard demonstrates well how to leverage your content to keep players coming back for more.

Don’t Rely on “Influencers”

When I published Icewords, I was unaware of the Steam “Influencer” subculture. People cold-email you to ask for keys, with the promise that they can promote you on their twitch channels. After reading up on the subject, I learned that most of these are actually “collectors” wanting to get Steam games for free, not scammers reselling keys.

The astounding thing to me was how much effort some of them will put in. The retail price of Icewords is $2.99. Some of the “influencers” exchanged multiple emails with me begging for a key; do they not value their time? Or maybe there are bragging rights associated with hoodwinking a game dev?

Ultimately, to the extent that Influencers can help you, you might as well wait for them to find your game organically. Anybody begging for a key is either not successful enough to be worth your time or has no intention of actually streaming your game; the real influencers will buy your game themselves. Of course, collectors would probably not buy your game anyway, so giving them a copy doesn’t cost you sales, but it does cost valuable time. My new policy is to ignore all requests for a free game.

Build for Multiplayer from the Start

After publishing Icewords, I decided to implement a multiplayer feature to enhance the game’s limited replayability. I did not make Icewords with multiplayer in mind, and the changes I had to make felt very kludgy. I would have set up the game very differently from the start if I had to make it again.

It is much harder to add features you never planned for, which isn’t news to any programmer. I don’t feel too bad about it because I would change much of what I did if I were to do it again.

Recruit Your Playtesters

Steam has a “Playtest” feature that lets you release an early version of the game to random playtesters that sign up. I tried this and got zero feedback. In retrospect, it isn’t surprising. Self-selected playtesters have no obligation to give you advice and probably won’t. After trying the early version, they might even form a negative opinion, losing you potential customers.

Cut Your Losses

I spent a lot more time than I should have working on multiplayer for a game with so few users. I think it makes sense to spend time post-release on very successful games. If nobody is buying, it is better to focus on the next project. I don’t regret adding the multiplayer feature because it taught me how to use Steam’s multiplayer API. If it weren’t for that, I would have been better off just leaving the game as it was.

While Icewords hasn’t been a success so far, I feel more confident about my next computer game and can hopefully avoid making the same mistakes.

Icewords is available for purchase here:

https://store.steampowered.com/app/1661760/Icewords/

One Night Ultimate Werewolf versus Blood on the Clocktower

I loved the social deduction game Werewolf in high school. On various school or club trips, I would try to get other people to play it in our free time. My interest in Werewolf began to taper off in college and disappeared when I discovered One Night Ultimate Werewolf (ONUW). I’ve tried numerous social deduction games – Two Rooms and a Boom, Secret Hitler, Coup, etc. – but none came close to dethroning ONUW.

The biggest reason I liked ONUW is that even good-aligned players have incentives to lie. Since your team can change without your knowledge, part of the game involves figuring out your team, and if you always tell the truth, you may find yourself without an alibi when the Troublemaker reveals that they swapped you.

Another reason I liked ONUW is how short the games are. A game of Werewolf feels heavy, while ONUW feels light-hearted. Because each game is such a small time investment, you can try all sorts of silly strategies without worrying that you will spoil the experience of other players. If you try a crazy gambit in Werewolf and it doesn’t work out, your team may feel like you are wasting their time.

Of course, the elephant in the room is player elimination, which is a weakness of Werewolf and noticeably absent from ONUW. Sitting around to watch because you are dead is often not much fun. The penalty of dying also skews incentives by making players less willing to die for the good of their team, which limits the sort of strategies they will try.

Recently, I encountered Blood on the Clocktower (BotC) by watching Let’s Play videos on the No Rolls Barred Youtube channel, which was recommended to me by a friend. It is very entertaining to watch, and I was curious to see how it would play. Mechanically, it is a lot like Werewolf but with a few twists that make a big difference.

The most visible change is that dead players still participate in conversations and get a “ghost vote” that they can use once per game. Dead players usually save this vote until the last day of the game, when it comes down to choosing between the “final three” players alive. One problem this solves is giving the good team a way to win even when its living players are outnumbered.

A more innovative change is the storyteller’s role in the game. In most Werewolf variants, the moderator resolves night actions mindlessly. You could replace them with a machine and see no difference in the game. In BotC, the storyteller chooses what information to give players and has the explicit agenda of providing a balanced game (which in practice means making sure the game reaches the final three players).

The game has numerous other clever ideas, such as madness and its notion of scripts, but dead votes and the storyteller’s role are the most defining features. After watching lots of games of BotC on Youtube, I decided to try it out myself. I joined the unofficial discord channel, learned the ropes, and played three games.

In my first game, I was the Butler, a good-aligned character that must choose a master and can only vote when their master does. It was a relaxing role because it simplified things a lot; I didn’t have to think too hard about who to vote for and could focus on trying to solve the puzzle that the game presents.

In my second game, I was the Godfather, an evil-aligned character that knows which “outsiders” are in play (outsiders are good characters with harmful abilities) and gets to kill whenever one dies. I had a lot of fun with that role and ended up winning.

My third game made me the demon (the evil character the good team needs to kill to win) and was a bit of a train wreck. My team lost early on, but the game dragged on due to the Good team’s paranoia. Playing as the demon was nerve-wracking in a mildly unpleasant way, and it made me feel bad because it felt like my misplays were responsible for my team’s loss.

I have mixed feelings about BotC. What I enjoy most about social deduction games is the feeling of solving a puzzle when not all of the information you have is reliable, and BotC delivers on that. However, the night phase can drag on a bit, and the game length overall is a bit much. The one thing that BotC does brilliantly is to make the storyteller role fun rather than rote. It is so popular that the Discord server I played on has implemented queues for people wanting to story-tell.

Compared to its most immediate competition in the sub-genre of social deduction games where people die, BotC is the best game I have played, and I will happily play it when given the opportunity. However, ONUW is still my favorite social deduction game overall. Both ONUW and BotC beat out the competition by providing logical puzzles to solve rather than just relying on social cues.

The one shared element of ONUW and BotC that I dislike is how long the night phase takes. It is more tolerable in ONUW because it only happens once, but it is still something I wish could be shortened. As a game designer, this suggests unexplored design space in finding ways to eliminate the night phase while preserving role-based information.

Randomized Victory and Alliances of Doubt

In Oath, there is a mechanic in which at the end of rounds five through seven out of eight, the Chancellor player rolls a die to see if they win, provided they have fulfilled their victory condition. The probability of victory starts at 1/6 and doubles each round up to 2/3.

When I first encountered this rule, it felt out of place – clunky, even. After playing a game as a Citizen and another game observing the behavior of a Citizen, I think I understand why it is there; it is key to the entire dynamic between Citizen and Chancellor.

Citizens, in Oath, are players that are ostensibly allies of the Chancellor. However, unlike other games such as Dune or Eclipse with official alliances, they do not win together. Instead, the Citizens have an additional victory condition. If they have achieved their goal when the Chancellor wins, the Citizen wins instead; otherwise, they lose.

In practice, this can lead to strange conflicts between allies. If it looks like the Citizen will meet their condition, the Chancellor might attack them or even self-sabotage to prevent the game from ending. If the Citizen has not achieved their personal goal, they might attack the Chancellor to do the same, as I did in the game where I was a Citizen.

Enter the victory die. Both the Citizen and Chancellor need the Chancellor’s goal to be met for either to have a chance of winning, but if one of them knew for sure that they would lose the moment it was achieved, they would have no choice but to self-sabotage. Because neither the Chancellor nor the Citizen is guaranteed to win at the end of rounds five, six, or seven, there is enough doubt that they can work together even though they know who would win if the game ended early. Their alliance works because the losing member thinks the game might not end until they can achieve their goal.
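
To put a rough number on that hope, using only the 1/6, 1/3, and 2/3 probabilities quoted above:

```python
from fractions import Fraction

# Per-round odds as quoted above: 1/6, then 1/3, then 2/3. This assumes the
# Chancellor has fulfilled their victory condition in every one of those rounds.
per_round = {5: Fraction(1, 6), 6: Fraction(1, 3), 7: Fraction(2, 3)}

still_going = Fraction(1)
for round_number, p in per_round.items():
    still_going *= 1 - p
    print(f"chance the game survives round {round_number}: {still_going} ≈ {float(still_going):.2f}")

# chance the game survives round 5: 5/6 ≈ 0.83
# chance the game survives round 6: 5/9 ≈ 0.56
# chance the game survives round 7: 5/27 ≈ 0.19
```

So even when the Chancellor’s condition is met every round, there is a five-in-six chance the game survives round five and roughly a one-in-five chance it reaches the final round – enough doubt for a trailing ally to keep cooperating.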

The illusion of hope is crucial to designing games because knowing they will lose effectively eliminates a player. Traditionally, official alliances require shared victory because who would cooperate with somebody they know will win when the game ends? Oath demonstrates a different approach by adding enough uncertainty to enable teamwork between technical enemies. I still think the mechanic is clunky, but I also think it is needed.

I can already think of mechanics to support a similar dynamic for other designs. Instead of randomizing the end of the game, what if we randomized the winner of the alliance instead? For example, suppose you have a system where official allies put tokens into a bag according to their contributions. Then, at the end of the game, the winning alliance draws to indicate which one wins.

Or another option: allies could gain hidden victory points, and if their team wins, then the player with the most points is the sole winner. Rex: Final Days of an Empire does something a little like this with its betrayal cards, where each player has a secret condition which, if fulfilled, steals the win from their team. The problem with its system is that the default is shared victory, so stealing it feels petty and spiteful. If only one player can ever win, this goes away.

I don’t think shared victory is a bad thing, though some people do. It can introduce problems like freeloaders or power imbalances, but there are solutions to these. However, as Oath shows, there is a viable alternative for games with official alliances; you need to make the sole winner uncertain enough that allies keep hoping that it will be them.

On-Play Effects and Subtypes for Attention Savings

One of the many things I like about Oath is how it associates abilities with areas. The areas in area control games often feel dull, and gaining new powers for control – or simply for the presence of your pawn, as in Oath – makes them feel much more alive. This system does mean that there are a lot of text effects in the game, and text effects cost a lot of player attention to support.

Suppose you have a bunch of objects – probably cards, but they could be tokens as well – and each object has an effect and a collection of subtypes associated with it. By subtypes, I mean tags that identify something when determining which abilities apply to it. For example, in Imperial Settlers, each card has one to three colors that function as subtypes, and certain cards reward you when you play a card of a particular color.

The nice thing about the effect-subtype pairing, and the reason we see it in so many games, is that it makes it very easy to incorporate interesting side effects into your cards. Side effects – beneficial outcomes of a player choice that were not the reason the player took that action – are good because they challenge the player to find a use for the unintended bonus. Subtypes are a fantastic side effect because they are practically free in attention costs – players can see at a glance which subtypes they have, assuming the graphic design is decent.

For example, consider the card “Longbows” in Oath. It lets you add or subtract an attack die from any battle, which is a fantastic bonus that justifies playing it. However, it also has the subtype “Order”. If you rely heavily on Longbows, you will likely invest in cards that benefit from control of Order cards, such as Order advisors.

The one problem with this is that while subtypes are cheap, effects are expensive. Passive abilities that trigger automatically when some event occurs are the worst, but even new actions the player must choose to use are a burden. When you have ten different abilities to keep track of, you are bound to start forgetting to use some of them. But there is one type of effect that costs almost nothing in ongoing attention: the “on-play” effect.

When I talk about cards with on-play effects, I do not mean action/event cards that you resolve and discard. I refer specifically to cards that do something when you play them and then remain out, effectively blank aside from their subtypes. For example, when you play “A Small Favor”, you gain four warbands – and that’s it. You don’t need to look out for something to be triggered later, or remember a modifier to another ability, or consider an extra action available to you. For the rest of the game, the card is a blank card with the “Discord” subtype.

As a game designer, this pattern of pairing on-play effects with subtypes is a fantastic tool for cutting attention costs for players. Going one step further, I can imagine structuring a card pool so that the more complex effects are on-play, while new actions, passive modifiers, and triggered abilities are simpler.
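
As a sketch of what this pattern looks like structurally – everything below is illustrative, not code from any real game – an on-play card resolves once and then the only state it leaves behind is its subtype tags, while a card with an ongoing ability keeps demanding attention every turn:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Card:
    name: str
    subtypes: frozenset                 # cheap to read at a glance, e.g. {"Discord"}
    on_play: Optional[Callable] = None  # resolves once, then can be forgotten
    ongoing: Optional[Callable] = None  # must be remembered for the rest of the game

def play(card, state):
    if card.on_play:
        card.on_play(state)             # the attention cost is paid exactly once
    state["in_play"].append(card)

def cards_demanding_attention(state):
    # Only cards with ongoing abilities keep costing attention after being played.
    return [c for c in state["in_play"] if c.ongoing]

state = {"warbands": 0, "in_play": []}
small_favor = Card("A Small Favor", frozenset({"Discord"}),
                   on_play=lambda s: s.update(warbands=s["warbands"] + 4))
longbows = Card("Longbows", frozenset({"Order"}),
                ongoing=lambda s: s)    # stand-in for the add/subtract-a-die modifier
play(small_favor, state)
play(longbows, state)
print(state["warbands"])                                   # 4
print([c.name for c in cards_demanding_attention(state)])  # ['Longbows']
```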

Of course, you don’t want all the cards in your game to have on-play effects; that defeats the purpose of persistent cards. So the natural question is: what percentage of cards should be on-play? I have no idea, so I decided to look at some of the games in my collection.

First, Oath. The on-play cards in Oath are easy to identify because of the graphical design. Oath has a total of 198 denizen cards; of those, 27 are on-play, for a total of 14% on-play cards.

Next, I looked at 7 Wonders. The on-play cards in that game are the blue and yellow cards; I decided not to classify red or green cards as on-play because they require some monitoring of other players to see what strategy they are pursuing. I found that 31% of the cards were on-play, at least according to my classification.

Wingspan was next, and it is another game where on-play effects have specific graphics identifying them, which made things easy. I decided to include cards without any effect at all in my count. Out of 170 cards, I found that 25% were on-play.

I also looked at 7 Wonders Duel, which contrasts with 7 Wonders in how red cards work – when you draft one, you move the Conflict Pawn toward your opponent’s city, and then the card does nothing. I wound up with a count of 56%, which is the highest that I saw.

Finally, I decided to look at technology tokens in Eclipse because I wanted to find an example that doesn’t rely on cards. Tech tokens in Eclipse come with three subtypes and reduce the cost of subsequent research in their subtype. Out of 24 technologies, only 3 (13%) did not have ongoing effects – Quantum Grid, Artifact Key, and Advanced Robotics.

The above is not a rigorous study; the sample size is tiny, and there is some subjectivity in what I classify as on-play effects, so take my conclusions with a grain of salt. I think it would be interesting to survey more examples. Looking at what I have, I noticed that the games with the lowest proportion of on-play effects are also the heaviest ones. Many factors go into the weight of a game, and I’m not saying that attention costs are even the most important, but they do matter.

One game in my collection that caught my eye is Innovation, because of the conspicuous absence of on-play effects. It fits the other criteria perfectly; you have cards with symbols (subtypes) that let you take advantage of other cards. Yet every single card effect is available at all times. It is hard enough to track what you can do, let alone other players, leading to massive analysis paralysis. Now that I see the pattern, I can’t help but think that the game would be better if 20%-30% of the cards had effects that triggered only when played. Such a change would require adding some rules, but the attention savings would more than compensate.

Granularity and Combat Mechanics in Tactical CCGs

I have never played Hearthstone, but I have played some of the games it inspired. In particular, I remember enjoying Duelyst a lot, back when its servers were still active. Duelyst was an online CCG where players drew creatures, items, and spells and played them to a grid. Essentially, Hearthstone but with spatial positioning. While I don’t often care much about graphics, I found its pixel art style one of its most compelling features.

Recently, partially out of nostalgia for Duelyst, I tried out a similar game called Cards and Castles 2. I stopped playing after only a few minutes because I found the granularity of the combat mechanics to be unbearably high. Just as with Duelyst, each creature has an attack number and a health number. Attacking lowers your target’s health number and also prompts them to counterattack. However, the range of numbers used commonly goes into the forties and fifties. Worse, to see the current attack and health, you have to mouse over a unit, making it very difficult to understand each creature on the board at a glance.

Funnily enough, Battle for Wesnoth has similar combat mechanics and granularity, but I don’t mind it in that game. I think this is due to expectations. Battle for Wesnoth is a turn-based strategy game often played against the computer, so you expect to consider each move carefully. Cards and Castles 2 presents as a CCG suitable for quick matches against other people, so the amount of processing required to understand each unit is more noticeable. Duelyst, by contrast, keeps most numbers under ten and displays them clearly beneath the creatures.

The moral of the story is that large numbers make it harder for players to grasp the game state because they make the arithmetic harder and less automatic. Another related issue with Cards and Castles 2 is that the abilities of the cards use percentages; “this unit gets 20% damage resistance” or “this unit takes 70% less damage when attacked from the front.” A percent value works for probabilities but serves as a barrier to understanding when it requires actual multiplication. The only percentages that most players can multiply without effort are 50%, multiples of 100%, and (to a lesser degree) 10%. Abilities phrased in terms of small integers are a lot easier to grasp.

In contrast to board game design, I think it is tempting when making computer games to assume that complex calculations carry no cost because the computer is performing them. But this isn’t entirely true – even though the computer can crunch the numbers, the player may still want to understand what they mean.

After my disappointment with Cards and Castles 2, I still felt nostalgic for Duelyst, so I tried another similar game: Stormbound. I was pleasantly surprised. The game takes place on a four-by-five grid where the objective is to damage your opponent by marching a certain number of units to their side of the board. Unlike other tactical CCGs, it is an auto-battler – you cannot issue commands to your pieces once placed; they move forward by themselves every turn.

Another unusual feature of Stormbound is the deck size – 12 cards. Most CCGs have decks of 30-45 cards, but Stormbound instead recycles cards so that you have no discard pile. You can have exactly one copy of each card in your deck and see all of them several times each game. Lucid works the same way, so I am well acquainted with its advantages. Among other things, this makes constructing a deck much less daunting since you only have twelve choices to make and don’t have to worry about how many copies of each card to include.

The most novel feature of Stormbound for me was the stats of its armies. Instead of the typical Attack/Health, each card has Strength/Movement. The first number is how many units you gain when playing the card; the second is their number of immediate moves.

When two opposing armies fight, they both lose an equal number of units such that only one remains. I have not seen combat mechanics of this mutually-destructive sort before; the closest thing I can think of is combat in Neptune’s Pride. My natural inclination before seeing Stormbound was that it wouldn’t work because it eliminates the possibility of one side gaining an advantage through combat – for each unit you destroy, you have to sacrifice one of your own, so what is the point?
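
Here is that rule as I read it, reduced to a few lines (my own interpretation, not anything official): both armies lose units equal to the weaker side’s strength, so at most one side survives, keeping only the difference.

```python
def resolve_fight(attacker_strength, defender_strength):
    # Both armies lose units equal to the weaker side's strength,
    # so at most one side survives, keeping only the difference.
    casualties = min(attacker_strength, defender_strength)
    return attacker_strength - casualties, defender_strength - casualties

print(resolve_fight(5, 3))  # (2, 0): the attacker survives with 2 units
print(resolve_fight(2, 2))  # (0, 0): equal armies wipe each other out
print(resolve_fight(1, 4))  # (0, 3): the attacker trades itself for one enemy unit
```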

I think I understand how Stormbound makes it work, however. Most units created are just a byproduct of effects that occur when you play cards. For example, playing a card might deal one damage to every unit in a line AND create a two-strength army at the origin of the line. Some units do have persistent special effects, but most do not. It doesn’t matter that your units mutually annihilate in combat because the game is all about where you play your cards.

The idea of having one number combine attack and health doesn’t seem quite so radical to me anymore. In Duelyst, combat hurts both parties as well; the only difference is that both might survive. It might be different in a multiplayer game where both suffer to the benefit of the other players; then again, this is already what happens with more than two players.

I don’t recommend Stormbound as a game. It allows players to level up their cards to make them stronger, which means that a player who has spent more money might have a better deck than a new player even with the same cards. But it does have some unusual mechanics worth checking out that challenge the orthodoxy on digital CCGs.

How to make randomized victory points work

My opinion of Eclipse has deteriorated since I first learned about it in college. Back then, it had just come out, and I was excited at the prospect of a shorter Twilight Imperium with more streamlined mechanics. The part that most intrigued my inner designer was how it dealt with income – you removed cubes from a track to place them on a board, and the number uncovered was how much you got to collect. This practice is commonplace nowadays, but it was innovative at the time. I bought the game along with its expansion and played it extensively with my friends.

I revisited Eclipse a few years ago and found that it did not live up to my memories of it. In particular, I found myself much more aware of how arbitrary so much of it is. Exploring is costly, and the quality of the systems you encounter varies widely. Combat is a dice fest. But the most obvious way in which the game feels random is the reputation tile mechanics.

When you fight in Eclipse, whether you win or lose, you are rewarded with randomly drawn “reputation tiles” – which my friends and I abbreviated to “reptiles” to the frequent confusion of new players. These are worth some number of victory points between one and four, and the value of the tiles you own is secret. You can only hold a limited number of tiles and eventually discard lower-value tiles to make room for more valuable ones.

The reasons behind this system make sense. Because there are only a limited number of high-value tiles and players are trading up to them, players want to fight early. Because the value of the tiles is secret, players can’t be sure what anybody’s total score is, obfuscating the current leader. Succeeding in battle is also incentivized because you draw more tiles and choose one to keep.

Statistically, reputation tiles work. In practice, they lead to feel-bad moments where you draw a low tile by chance and feel cheated. The problem isn’t just with Eclipse, either. In general, when you distribute random amounts of victory points to players for the same actions, you make it easier for players to attribute their successes or failures to luck rather than their hard work.

The risk of unfairness may be why randomized point distribution of this sort is uncommon in board games. But such mechanics have undeniable benefits. We do not want players to know who has won the game until the end, and hidden victory points that nobody else knows about are great at confusing the issue. So how do we give players random victory points without making success feel arbitrary?

One of the principles I hold to in game design is that when bad things happen to a player, there should be a silver lining for them to see. My favorite example to point to is Imperial Settlers, where whenever another player destroys one of your buildings, you get one wood and a foundation that you can sacrifice to build something else. The foundation isn’t technically a benefit – after all, you could have used the building you had before the attack as a foundation already. But psychologically, it feels like a silver lining because it makes one of your choices – which card to give up – easy to make.

We can use this same principle to fix the reputation tile problem. What if you could spend reputation tiles and the value of the tile didn’t matter? For example, imagine a game where you transport cargo drawn at random from four different point values. However, during the game, you can burn unwanted tokens to move faster. The amount of the boost is independent of the point value of what you spent. (Note: This is basically how the Reactor Furnace in Galaxy Trucker works, though there the cargo isn’t drawn randomly)

In this scenario, drawing low-point tokens does not feel unfair. It is sometimes even a relief because it simplifies the player’s decision of what to do. Draw some worthless scrap? Burn it! Yet hidden tokens still serve to obscure a player’s point total. A player might choose not to burn cargo because it is valuable, or they might think they can win without doing so.

I think hidden randomized victory points can work without feeling arbitrary, and I think the key is giving players ways to convert the less valuable ones into something useful.