Designer Diary: Corrupted by Ruin #3

I think I’ve figured out a workable system for combat in Corrupted by Ruin. The biggest constraint is that it must be fast and uncomplicated since each hero incursion potentially initiates multiple encounters. Forcing players to go through the same actions many times will get repetitive. The natural model for quick, simple combat is an idle system where the player sets up the battle and lets it play out. Just think of Loop Hero, where the player speeds through hundreds of fights in a run. So I decided to have a similar system, where combatants have health bars and cooldown timers and attack random targets.

Idle combat has its flaws. My chief concern was that it would muddy the differences between monsters, which is what happens in “Hero’s Hour.” I think I can get around this by limiting the number of combatants on each side to eight, but that still leaves the question of how to distinguish monsters from each other when the player can’t select their actions. I think I have found the answer – a variant of my idea to incorporate Spirit Island threshold mechanics, but one that treats monsters and heroes as asynchronous element converters.

Here is how my new system works. Each creature has two or more actions, each with a priority, and it goes through its list in order until it finds one it can perform. An action may have an elemental cost and may produce elements, and produced elements go into a shared pool available to any creature on either side of the fight – for example, the heroic cryomancer might use the water element your slime produced to boost a spell, or your salamander might benefit from the enemy flame knight’s fire generation. Each battle becomes a converter-based economic game.
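To make that concrete, here is a minimal Python sketch of the priority-list loop with a shared element pool. It is my own illustration, not code from the game, and the creature and element names are placeholders.

```python
from collections import Counter

class Action:
    def __init__(self, name, cost=None, produces=None, damage=0):
        self.name = name
        self.cost = Counter(cost or {})          # elements this action consumes
        self.produces = Counter(produces or {})  # elements it adds to the shared pool
        self.damage = damage

class Creature:
    def __init__(self, name, actions):
        self.name = name
        self.actions = actions  # ordered by priority, first entry tried first

    def act(self, pool):
        """Walk the priority list and perform the first affordable action."""
        for action in self.actions:
            if all(pool[e] >= n for e, n in action.cost.items()):
                pool -= action.cost      # pay the elemental cost
                pool += action.produces  # output is available to BOTH sides
                return action
        return None  # nothing affordable this tick

# Hypothetical example: the slime unintentionally feeds the enemy cryomancer.
pool = Counter()
slime = Creature("Slime", [Action("Splash", produces={"water": 1}, damage=1)])
cryomancer = Creature("Cryomancer", [
    Action("Ice Lance", cost={"water": 1}, damage=4),  # preferred, needs water
    Action("Staff Bonk", damage=1),                    # fallback
])

print(slime.act(pool).name)       # "Splash" -> adds water to the shared pool
print(cryomancer.act(pool).name)  # "Ice Lance" -> consumes the slime's water
```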

One implication of this system is that speed matters a lot. If you see a slow hero that needs fire to be effective, you can counter it with a faster monster that also uses fire because it is more likely to get the element first. Or, if there is a quick hero that consumes water, you might avoid using water monsters.

Another neat thing that I can do with this system is to change the behavior of creatures depending on what is available. For example, maybe there is a troll that typically guards other monsters, but if you flood the room with psychic energy, it goes berserk and starts attacking. The player can then use elements to choose a strategy for their side and manipulate their enemies.

One concern with any combat system based around synergies is that players will get locked into a pattern. If a combination of monsters is good, why not just use it every time? I predict that my elemental conversion system will avoid this because even the same team of monsters will behave very differently depending on the enemy heroes they face. Furthermore, even though you fight the same heroes in multiple rooms within a single incursion, the rooms can also consume and produce elements, which should be enough to shake things up.

Designer Diary: Corrupted by Ruin #2

I spent the last week at a game design retreat, and among other things, one game I tested was “Corrupted by Ruin.” I don’t have a digital prototype yet, so I printed out some tetrominoes and monster cards and had people play it as though it were a board game.

There are some pretty significant differences between digital games and board games. Some games that work great in the digital realm fall flat in the real world, and some things don’t make the transition in the other direction very well either. Still, paper prototyping is a great way to validate an idea, and it is a good sign if something works well even without a machine.

While playtesting, I realized that the triggering mechanism for an incursion makes a big difference in how it feels. Traditionally in tower defense games, the player alternates between constructing defenses and operating them (actively or passively) against regularly scheduled waves of intruders. For example, in “The Last Spell,” the player builds structures and upgrades their heroes during the day and then deploys them against the monsters that attack every night. I initially approached my design this way since it is about defending a dungeon against heroes.

The problem is that this leaves the player very little control over where the heroes enter the dungeon. A big part of the game is engineering encounters to maximize the effectiveness of the monster and room combinations. If the heroes can enter the dungeon anywhere, the player has little control over their path. Seeing where the heroes enter ahead of time doesn’t help much either because the player can’t rearrange their rooms to account for it.

So instead, I tried letting players trigger the incursions. I added “staircase” tiles around the map, and whenever a player connected their dungeon to a staircase tile, heroes would invade. The player’s incentive to go after staircases was that defeating heroes was necessary to win. The resulting game felt much more interesting to me. The player is still playing defense, but now they control when and where they fight the heroes.

Roguelike games tend to have a proactive aesthetic, where the player is constantly advancing spatially towards a goal. Reactive gameplay clashes with this. I tried out “Tower Tactics: Liberation,” a tower defense roguelike, and this contradiction stood out to me. You move from place to place, and in each location, you set up towers and defend against waves of enemies, but the contrast between the map movement and the lane defense is a little jarring. By flipping the script so that the heroes react to the player’s incursion, I hope to align the high-level gameplay aesthetic of conquering land after land with the lower-level conquest of a single kingdom.

Giving the players complete control of hero incursions also has implications for what happens when the heroes invade. Deterministic combat allows players to calculate the exact results of triggering a battle rather than relying on their heuristics. I want to avoid this because it is tedious for the player. Also, with more control over the path the heroes take comes the obligation to make the associated decisions more interesting. I think my current combat system is too plain to justify this, so I am starting to think about more interesting mechanics for combat. I will most likely want a puzzle of some sort.

Another problem that I solved through paper playtesting was how to handle monster recruitment. My previous system involved buying monsters from a pool of available ones, with new monsters unlocked by building specific types of rooms. That has three problems: First, it forces me to design a matching room for every monster (and vice versa) even when the pairing doesn’t make sense. Second, players already draft rooms, and doing the same for monsters feels repetitive. Third, it doesn’t interact very much with the tile-placement mechanics that are central to the game.

The solution turned out to be very simple – monsters, like gold, start each level embedded in the map. When you cover them with a tile, they join your dungeon. Now I can design monsters independently from rooms and even associate them with map types, and monster drafting interacts with room placement to enhance both.

“Mining” monsters out of the ground implies that monsters are a finite resource. Limited upgrades for monsters also make sense, since mined monsters feel less generic than they would under a drafting system.

Designer Diary: Corrupted by Ruin #1

I’ve started working on a new computer game. I initially envisioned it as a cross between Dungeon Keeper 2 and FTL; you build a dungeon, gain monsters, and move them around your dungeon to respond to adventurers that try to invade. By surviving over several rounds, you gradually corrupt and take control of the land you are in, and a run involves corrupting several such lands.

In contrast to Icewords, for which development was very ad-hoc, I’ve decided to plan this game out in advance. I am making a game design document in as much detail as possible to have a clear picture of where I am going. I am also experimenting with test-driven development, using the Godot unit testing framework WAT.

Understanding the genre expectations of players is crucial when making a game. You don’t have to adhere to all of them, but it is easier for players to understand a game when they are already familiar with aspects of it. Dungeon Keeper 2 is the best example of the genre I am targeting. Some of its features that I see as essential are building a dungeon out of different types of rooms, attracting a variety of monsters to your dungeon, and using those monsters to fight off heroes that try to invade. Mining for gold while building tunnels is another iconic mechanic that would be good to replicate.

I decided to start my brainstorming with the concept of rooms. A large part of the progression that I envision in my game comes from unlocking, building, and upgrading many different types of rooms. Each room has a different effect and spawns a unique minion type. Some rooms can also transform when connected to others, an idea I got from Loop Hero. For example, placing a Library next to a Graveyard transforms the Library into a Haunted Library. Players like having little secrets to discover.

My initial vision for the rooms was as square tiles placed in a grid, like in Galaxy Trucker. When designing any spatial game about building something, I always ask: “Why does it matter where I place things?” In Galaxy Trucker, many threats come from a predictable direction and are more likely to occur in some rows or columns than others. While this makes sense in space, I didn’t see an obvious way to do something similar underground.

There are all sorts of ways to make positioning matter based on the abilities of certain rooms. I object to relying exclusively on room abilities, however. It places a burden on new players: to make intelligent decisions about placement, they need to learn the nuances of each room, and the heuristics they develop don’t transfer to new room types. I wanted to provide a reason to put a room in a particular location independent of what the room does.

I decided to try using tetrominoes instead of uniform square tiles. Most games that deal with tetrominoes ask questions about how efficiently they tile the board and what is on the spaces they cover up. The latter focus is perfect for a game about building a dungeon because it is naturally suited to mining mechanics: you can print resources on the board for the player to harvest by covering them up.

Cover-up mining is probably enough to justify tetromino placement, but I wanted to make efficient tiling a focus too. In Dungeon Keeper 2, you gain a mana resource based on the area of your dungeon. In thinking about implementing a similar system, it occurred to me that rewarding the player for dungeon area and penalizing them for dungeon perimeter naturally incentivizes efficient tiling. Mechanically, the player gains energy for each tile in their dungeon and loses it for each adjacent empty tile. Thematically, rooms generate mana and radiate it off into the environment. Mana thus obeys similar rules to real-world heat, which I like.
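A quick sanity check of the area-versus-perimeter incentive, as a throwaway Python sketch (my own illustration, not project code): score +1 energy per occupied tile and -1 per exposed edge.

```python
def mana(occupied):
    """occupied: set of (x, y) cells covered by dungeon tiles."""
    gain = len(occupied)  # +1 energy for each tile in the dungeon
    loss = 0
    for x, y in occupied:
        for neighbor in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if neighbor not in occupied:
                loss += 1  # this edge radiates mana into an empty cell
    return gain - loss

# Two dungeons with the same area but different shapes:
square = {(x, y) for x in range(2) for y in range(2)}  # compact 2x2 block
strip = {(x, 0) for x in range(4)}                     # sprawling 1x4 strip
print(mana(square))  # 4 - 8  = -4
print(mana(strip))   # 4 - 10 = -6, so compact shapes hold their mana better
```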

Such a system gives players two contradictory guiding reasons for tile placement. They want to spread out to harvest resources on the map, but they want to stay compact to minimize mana radiation loss. On top of this, they need to consider room evolution and any room abilities that care about proximity or adjacency. Overall, I think this gives enough reasons to care about where they place their rooms to make tile positioning meaningful.

At this point, I decided to try out unit testing in Godot while implementing classes to handle tetromino rooms and maps. I tried using GUT first but didn’t like that I couldn’t use arguments in my constructors with it, so I switched to WAT. I also prefer WAT’s integration with the editor over GUT’s requirement that you launch a scene to run your tests.

The biggest unexpected thing I learned is that unit testing is great for giving you feedback without having to implement example scenes. Previously, when making games in Godot, I would rush through a slapdash setup to get something I could play, leading to messy code that I would have to refactor. Psychologically, we want to see results for our work, and it is hard to write thousands of lines of code before seeing anything happen – plus, you are almost sure to have an error somewhere. But being able to write a unit test and immediately see results reduced this need a lot. Unit testing doesn’t just help you catch errors; it also helps you stay motivated.

Combat is the next consideration, which I am still debating. The planned structure of the game is that you cycle through the phases of building rooms, preparing for the heroes, and fighting the heroes. At first, I imagined this working like FTL’s boarding mechanics, where monsters and heroes in the same room would automatically fight each other and the challenge is allocating your monsters. Heroes would enter the dungeon from doors that lead out, so the layout would influence where threats appear. Battles would happen in real time, perhaps pausing to cast spells or issue orders.

However, I’m not sure how interesting I can make that without it becoming inaccessible. Crew combat can afford to be simple in FTL because you have to manage other things simultaneously, like weapons and power allocation. But if the focus is on fighting alone, I’m not sure it would be interesting enough to justify the phase. The issue with idle combat is that aside from the choice of where to allocate forces, the player has very little to do. Also, the idea of independent heroes invading from many different points feels off; capturing the traditional feel of a party venturing through a sequence of encounters would be better.

Another problem with real-time combat, in general, is that it severely limits the feedback you can give the player about differences between units. For instance, I recently played a newly released game called Hero’s Hour, where you recruit armies comprising many different types of units. However, combat involves a real-time battle where your entire army fights the enemy army, and it is difficult in the chaos to tell how each type of unit differs from the rest. I have observed this with RTS games as well; real-time combat smooths over the unique properties of each unit. I want the different monster types to have personalities, not blur together into generic minions. Therefore, I realized that I needed to have turn-based combat.

Currently, I am considering a system where a single party of adventurers encounters one room of your dungeon at a time, and you deploy monsters (and traps and spells) to that room as though you are playing cards. I was thinking about a turn-based idle system where heroes and monsters alternate dealing their damage, but that adds a lot of time to the game, and I’m not sure how your choices would change between combat rounds. If a spell is good to play, wouldn’t you spam it? And if you can’t, then why have multiple turns? Instead, my current vision is that there is one round of combat in which you play your cards and hit resolve, and then the heroes progress to the next room.

I’ve been playing a lot of Spirit Island lately, so it occurred to me that the power thresholds mechanic would work well with such a combat system. Each room and monster in the room provides elements, and each room, monster, trap, and spell has two effects – a weaker default effect and a better effect that applies when used in a room with the matching elements. I still need to figure out what makes the most sense thematically for how traps work.
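A rough sketch of how such a threshold check might resolve, written as Python for illustration only; the element names and effect text are placeholders, not actual content from the design.

```python
def resolve(card, room_elements):
    """Use the stronger effect only if the room provides the card's elements."""
    needed = card["threshold"]
    met = all(room_elements.get(e, 0) >= n for e, n in needed.items())
    return card["strong_effect"] if met else card["default_effect"]

fire_trap = {
    "threshold": {"fire": 2},
    "default_effect": "deal 1 damage",
    "strong_effect": "deal 3 damage and burn",
}
print(resolve(fire_trap, {"fire": 2, "earth": 1}))  # the strong effect applies
print(resolve(fire_trap, {"fire": 1}))              # falls back to the default
```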

I love reading or listening to post-mortems and designer diaries, but my games often have few records about why I made the choices I did or what issues I encountered. I think recording my reasoning as I go is valuable not just for looking back but also to help understand my design choices as I make them, so I will make more of an effort to post about it here.

One Night Ultimate Werewolf versus Blood on the Clocktower

I loved the social deduction game Werewolf in high school. On various school or club trips, I would try to get other people to play it in our free time. My interest in Werewolf began to taper off in college and disappeared when I discovered One Night Ultimate Werewolf (ONUW). I’ve tried numerous social deduction games – Two Rooms and a Boom, Secret Hitler, Coup, etc. – but none came close to dethroning ONUW.

The biggest reason I liked ONUW is that even good-aligned players have incentives to lie. Since your team can change without your knowledge, part of the game involves figuring out your team, and if you always tell the truth, you may find yourself without an alibi when the Troublemaker reveals that they swapped you.

Another reason I liked ONUW is how short the games are. A game of Werewolf feels heavy, while ONUW feels light-hearted. Because each game is such a small time investment, you can try all sorts of silly strategies without worrying that you will spoil the experience of other players. If you try a crazy gambit in Werewolf and it doesn’t work out, your team may feel like you are wasting their time.

Of course, the elephant in the room is player elimination, which is a weakness of Werewolf and noticeably absent from ONUW. Sitting around to watch because you are dead is often not much fun. The penalty of dying also skews incentives by making players less willing to die for the good of their team, which limits the sort of strategies they will try.

Recently, I encountered Blood on the Clocktower (BotC) by watching Let’s Play videos on the No Rolls Barred YouTube channel, which a friend recommended to me. It is very entertaining to watch, and I was curious to see how it would play. Mechanically, it is a lot like Werewolf but with a few twists that make a big difference.

The most visible change is that dead players still participate in conversations and get a “ghost vote” that they can use once per game. Dead players usually save this vote until the last day of the game, when it comes down to choosing between the “final three” players alive. One problem this solves is giving living good players a way to win even when outnumbered.

A more innovative change is the storyteller’s role in the game. In most Werewolf variants, the moderator resolves night actions mindlessly. You could replace them with a machine and see no difference in the game. In BotC, the storyteller chooses what information to give players and has the explicit agenda of providing a balanced game (which in practice means making sure the game reaches the final three players).

The game has numerous other clever ideas, such as madness and its notion of scripts, but dead votes and the storyteller’s role are the most defining features. After watching lots of games of BotC on YouTube, I decided to try it out myself. I joined the unofficial Discord server, learned the ropes, and played three games.

In my first game, I was the Butler, a good-aligned character that must choose a master and can only vote when their master does. It was a relaxing role because it simplified things a lot; I didn’t have to think too hard about who to vote for and could focus on trying to solve the puzzle that the game presents.

In my second game, I was the Godfather, an evil-aligned character that knows which “outsiders” are in play (outsiders are good characters with harmful abilities) and gets to kill whenever one dies. I had a lot of fun with that role and ended up winning.

In my third game, I was the demon (the evil character the good team needs to kill to win), and it was a bit of a train wreck. My team had effectively lost early on, but the game dragged on due to the good team’s paranoia. Playing as the demon was nerve-wracking in a mildly unpleasant way, and I felt bad because it seemed like my misplays were responsible for my team’s loss.

I have mixed feelings about BotC. What I enjoy most about social deduction games is the feeling of solving a puzzle when not all of the information you have is reliable, and BotC delivers on that. However, the night phase can drag on a bit, and the game length overall is a bit much. The one thing that BotC does brilliantly is to make the storyteller role fun rather than rote. It is so popular that the Discord server I played on has implemented queues for people wanting to story-tell.

Compared to its most immediate competition in the sub-genre of social deduction games where people die, BotC is the best game I have played, and I will happily play it when given the opportunity. However, ONUW is still my favorite social deduction game overall. Both ONUW and BotC beat out the competition by providing logical puzzles to solve rather than just relying on social cues.

The one shared element of ONUW and BotC that I dislike is how long the night phase takes. It is more tolerable in ONUW because it only happens once, but it is still something I wish could be shortened. As a game designer, this suggests unexplored design space in finding ways to eliminate the night phase while preserving role-based information.

Randomized Victory and Alliances of Doubt

In Oath, there is a mechanic in which at the end of rounds five through seven out of eight, the Chancellor player rolls a die to see if they win, provided they have fulfilled their victory condition. The probability of victory starts at 1/6 and doubles each round up to 2/3.

When I first encountered this rule, it felt out of place – clunky, even. After playing a game as a Citizen and another game observing the behavior of a Citizen, I think I understand why it is there; it is key to the entire dynamic between Citizen and Chancellor.

Citizens, in Oath, are players that are ostensibly allies of the Chancellor. However, unlike other games such as Dune or Eclipse with official alliances, they do not win together. Instead, the Citizens have an additional victory condition. If they have achieved their goal when the Chancellor wins, the Citizen wins instead; otherwise, they lose.

In practice, this can lead to strange conflicts between allies. If it looks like the Citizen will meet their condition, the Chancellor might attack them or even self-sabotage to prevent the game from ending. If the Citizen has not achieved their personal goal, they might attack the Chancellor to do the same, as I did in the game where I was a Citizen.

Enter the victory die. Both the Citizen and the Chancellor need the Chancellor’s goal to be met for either to have a chance of winning, but if one of them knew for certain that they would lose should the game end that round, they would have no choice but to self-sabotage. Because the die is not guaranteed to end the game at rounds five, six, or seven, there is enough doubt that they can keep working together even though they know who would win if it did end early. Their alliance works because the losing member thinks the game might not end until they can achieve their own goal.
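To put rough numbers on that doubt (my own back-of-the-envelope math, assuming the Chancellor qualifies at each of the three checks):

```python
from fractions import Fraction

# Per-round victory chances as described above: rounds five, six, and seven.
chances = [Fraction(1, 6), Fraction(1, 3), Fraction(2, 3)]

still_going = Fraction(1)
for p in chances:
    still_going *= 1 - p  # probability the game survives this roll

print(1 - still_going)         # 22/27 - chance of an early end across the three rolls
print(float(1 - still_going))  # ~0.81, leaving a real chance the game runs to round eight
```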

The illusion of hope is crucial in game design because a player who knows they will lose is effectively eliminated. Traditionally, official alliances require shared victory – who would cooperate with somebody they know will win when the game ends? Oath demonstrates a different approach by adding enough uncertainty to enable teamwork between technical enemies. I still think the mechanic is clunky, but I also think it is needed.

I can already think of mechanics to support a similar dynamic in other designs. Instead of randomizing the end of the game, what if we randomized which member of the alliance wins? For example, suppose you have a system where official allies put tokens into a bag according to their contributions. Then, at the end of the game, the winning alliance draws a token from the bag to determine which of its members wins.
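A tiny sketch of that token-bag draw (entirely hypothetical names and counts): each ally’s chance of being the sole winner is proportional to the tokens they contributed.

```python
import random

# Tokens each ally added to the bag over the game (hypothetical values).
bag = {"Alice": 7, "Bob": 4, "Carol": 2}

# The alliance has won; now draw one token to crown the sole winner.
sole_winner = random.choices(list(bag), weights=list(bag.values()), k=1)[0]
print(f"{sole_winner} takes the victory for the alliance.")
```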

Or another option: allies could gain hidden victory points, and if their team wins, then the player with the most points is the sole winner. Rex: Final Days of an Empire does something a little like this with its betrayal cards, where each player has a secret condition which, if fulfilled, steals the win from their team. The problem with its system is that the default is shared victory, so stealing it feels petty and spiteful. If only one player can ever win, this goes away.

I don’t think shared victory is a bad thing, though some people do. It can introduce problems like freeloaders or power imbalances, but there are solutions to these. However, as Oath shows, there is a viable alternative for games with official alliances; you need to make the sole winner uncertain enough that allies keep hoping that it will be them.

On-Play Effects and Subtypes for Attention Savings

One of the many things I like about Oath is how it associates abilities with areas. The areas in area control games often feel dull, and gaining new powers for control – or simply for the presence of your pawn, as in Oath – makes them feel much more alive. This system does mean that there are a lot of text effects in the game, and text effects cost a lot of player attention to support.

Suppose you have a bunch of objects – probably cards, but they could be tokens as well – and each object has an effect and a collection of subtypes associated with it. By subtypes, I mean tags that identify something when determining which abilities apply to it. For example, in Imperial Settlers, each card has one to three colors that function as subtypes, and certain cards reward you when you play a card of a particular color.

The nice thing about the effect-subtype pairing, and the reason we see it in so many games, is that it makes it very easy to incorporate interesting side effects into your cards. Side effects – beneficial outcomes of a player choice that were not the reason the player took that action – are good because they challenge the player to find a use for the unintended bonus. Subtypes are a fantastic side effect because they are practically free in attention costs – players can see at a glance which subtypes they have, assuming the graphic design is decent.

For example, consider the card “Longbows” in Oath. It lets you add or subtract an attack die from any battle, which is a fantastic bonus that justifies playing it. However, it also has the subtype “Order”. If you rely heavily on Longbows, you will likely invest in cards that benefit from control of Order cards, such as Order advisors.

The one problem with this is that while subtypes are cheap, effects are expensive. Passive abilities that trigger automatically when some event occurs are the worst, but even new actions the player must choose to use are a burden. When you have ten different abilities to keep track of, you are bound to start forgetting to use some of them. But there is one type of effect that costs almost nothing in ongoing attention: the “on-play” effect.

When I talk about cards with on-play effects, I do not mean action/event cards that you resolve and discard. I refer specifically to cards that do something when you play them and then remain out, effectively blank aside from their subtypes. For example, when you play “A Small Favor”, you gain four warbands – and that’s it. You don’t need to look out for something to be triggered later, or remember a modifier to another ability, or consider an extra action available to you. For the rest of the game, the card is a blank card with the “Discord” subtype.

As a game designer, this pattern of pairing on-play effects with subtypes is a fantastic tool for cutting attention costs for players. Going one step further, I can imagine structuring a card pool so that the more complex effects are on-play, while new actions, passive modifiers, and triggered abilities are simpler.

Of course, you don’t want all the cards in your game to have on-play effects; that defeats the purpose of persistent cards. So the natural question is: what percentage of cards should be on-play? I have no idea, so I decided to look at some of the games in my collection.

First, Oath. The on-play cards in Oath are easy to identify because of the graphic design. Oath has a total of 198 denizen cards; of those, 27 are on-play, which comes to 14%.

Next, I looked at 7 Wonders. The on-play cards in that game are the blue and yellow cards; I decided not to classify red or green cards as on-play because they require some monitoring of other players to see what strategy they are pursuing. I found that 31% of the cards were on-play, at least according to my classification.

Wingspan was next, and it is another game where on-play effects have specific graphics identifying them, which made things easy. I decided to include cards without any effect at all in my count. Out of 170 cards, I found that 25% were on-play.

I also looked at 7 Wonders Duel, which contrasts with 7 Wonders in how red cards work – when you draft one, you move the Conflict Pawn toward your opponent’s city, and after that the card does nothing. I wound up with a count of 56%, the highest that I saw.

Finally, I decided to look at technology tokens in Eclipse because I wanted to find an example that doesn’t rely on cards. Tech tokens in Eclipse come with three subtypes and reduce the cost of subsequent research in their subtype. Out of 24 technologies, only 3 (13%) did not have ongoing effects – Quantum Grid, Artifact Key, and Advanced Robotics.

The above is not a rigorous study; the sample size is tiny, and there is some subjectivity in what I classify as on-play effects, so take my conclusions with a grain of salt. I think it would be interesting to survey more examples. Looking at what I have, I noticed that the games with the lowest proportion of on-play effects are also the heaviest ones. Many factors go into the weight of a game, and I’m not saying that attention costs are even the most important, but they do matter.

One game in my collection that caught my eye is Innovation, because of the conspicuous absence of on-play effects. It fits the other criteria perfectly; you have cards with symbols (subtypes) that let you take advantage of other cards. Yet every single card effect is available at all times. It is hard enough to track what you can do, let alone what other players can do, which leads to massive analysis paralysis. Now that I see the pattern, I can’t help but think that the game would be better if 20%-30% of the cards had effects that triggered only when played. Such a change would require adding some rules, but the attention savings would more than compensate.

How to make randomized victory points work

My opinion of Eclipse has deteriorated since I first learned about it in college. Back then, it had just come out, and I was excited at the prospect of a shorter Twilight Imperium with more streamlined mechanics. The part that most intrigued my inner designer was how it dealt with income – you removed cubes from a track to place them on a board, and the number uncovered was how much you got to collect. This practice is commonplace nowadays, but it was innovative at the time. I bought the game along with its expansion and played it extensively with my friends.

I revisited Eclipse a few years ago and found that it did not live up to my memories of it. In particular, I found myself much more aware of how arbitrary so much of it is. Exploring is costly, and the quality of the systems you encounter varies widely. Combat is a dice fest. But the most obvious way in which the game feels random is the reputation tile mechanics.

When you fight in Eclipse, whether you win or lose, you are rewarded with randomly drawn “reputation tiles” – which my friends and I abbreviated to “reptiles” to the frequent confusion of new players. These are worth some number of victory points between one and four, and the value of the tiles you own is secret. You can only hold a limited number of tiles and eventually discard lower-value tiles to make room for more valuable ones.

The reasons behind this system make sense. Because there are only a limited number of high-value tiles and players are trading up to them, players want to fight early. Because the value of the tiles is secret, players can’t be sure what anybody’s total score is, obfuscating the current leader. Succeeding in battle is also incentivized because you draw more tiles and choose one to keep.

Statistically, reputation tiles work. In practice, they lead to feel-bad moments where you draw a low tile by chance and feel cheated. The problem isn’t just with Eclipse, either. In general, when you distribute random amounts of victory points to players for the same actions, you make it easier for players to attribute their successes or failures to luck rather than their hard work.

The risk of unfairness may be why randomized point distribution of this sort is uncommon in board games. But such mechanics have undeniable benefits. We do not want players to know who has won the game until the end, and hidden victory points that nobody else knows about are great at confusing the issue. So how do we give players random victory points without making success feel arbitrary?

One of the principles I hold to in game design is that when bad things happen to a player, there should be a silver lining for them to see. My favorite example to point to is Imperial Settlers, where whenever another player destroys one of your buildings, you get one wood and a foundation that you can sacrifice to build something else. The foundation isn’t technically a benefit – after all, you could have used the building you had before the attack as a foundation already. But psychologically, it feels like a silver lining because it makes one of your choices – which card to give up – easy to make.

We can use this same principle to fix the reputation tile problem. What if you could spend reputation tiles and the value of the tile didn’t matter? For example, imagine a game where you transport cargo drawn at random from four different point values. However, during the game, you can burn unwanted tokens to move faster. The amount of the boost is independent of the point value of what you spent. (Note: This is basically how the Reactor Furnace in Galaxy Trucker works, though there the cargo isn’t drawn randomly)

In this scenario, drawing low-point tokens does not feel unfair. It is sometimes even a relief because it simplifies the player’s decision about what to do. Draw some worthless scrap? Burn it! Yet hidden tokens still serve to obscure a player’s point total. A player might choose not to burn cargo because it is valuable, or they might think they can win without doing so.
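Here is a toy version of that cargo rule (all names and numbers invented) to show that the burn value is decoupled from the point value:

```python
import random

POINT_VALUES = (1, 2, 3, 4)
BURN_BOOST = 2  # burning any token moves you two spaces, regardless of its points

hold = [random.choice(POINT_VALUES) for _ in range(5)]  # hidden cargo tokens

def burn_for_speed(hold):
    """Spend the least valuable token; the boost is the same no matter its points."""
    hold.remove(min(hold))
    return BURN_BOOST

speed_bonus = burn_for_speed(hold)  # low draws become fuel instead of feel-bad moments
final_score = sum(hold)             # only the tokens you kept count at the end
```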

I think hidden randomized victory points can work without feeling arbitrary, and I think the key is giving players ways to convert the less valuable ones into something useful.

What I Learned from Slipways

Slipways is an excellent take on the 4X genre, streamlining the formula to create a game that you can play in 1-2 hours. I used to like games like Civilization, Master of Orion, and Alpha Centauri, but I no longer enjoy them because they take too long and require too much micromanagement. After dozens and dozens of playthroughs of Slipways, here are my biggest takeaways as a game designer.

Debt creates goals

The most innovative mechanic in Slipways for me is the way it handles converters. “Converters” are things that accept resource inputs and produce other resources for the player. The brilliant thing about converters (colonies) in Slipways is that they yield their first outputs as soon as you build them, before they have received any inputs. If you urgently need a resource, you can set up a colony to produce it immediately.

While a colony will produce resources without receiving its inputs, it does so at an escalating cost to happiness, an important part of the scoring formula. Satisfying its needs eliminates the unrest, but taking too long to do so results in a penalty that stays forever.

What makes this work so well is that each planet both solves an existing problem and provides the player with a new goal to pursue (and a time limit to complete that goal). Each colony you build is a loan that you will need to pay back, and the neverending quest to pay them all back is the primary driver of the gameplay.

The summary of this game design pattern is: 

  • To provide a goal, give players an immediate reward tied to a penalty. Allow them to eliminate the penalty later by accomplishing some specific (but optional) task.
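A minimal Python sketch of this loan-style converter (the numbers are invented, not Slipways’ actual values):

```python
class Colony:
    """Pays out immediately, then leaves a debt the player must work off."""
    GRACE_TURNS = 5  # invented deadline before the penalty becomes permanent

    def __init__(self, outputs):
        self.outputs = outputs
        self.unrest = 1            # the immediate happiness cost: the "loan"
        self.turns_unpaid = 0
        self.permanent_penalty = False

    def on_build(self):
        return self.outputs        # first outputs arrive before any inputs do

    def end_turn(self, inputs_supplied):
        if inputs_supplied:
            self.unrest = 0        # paying the debt clears the unrest...
        else:
            self.turns_unpaid += 1
            self.unrest += 1       # ...otherwise the cost keeps escalating
            if self.turns_unpaid > self.GRACE_TURNS:
                self.permanent_penalty = True  # waited too long: the scar stays
```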

Upgrade converters when used to encourage interaction

Providing inputs to a colony does more than just remove the happiness penalty; it also upgrades the planet, creating new needs. Each level results in both more outputs and new challenges, ranging from finding markets for the exports to improving nearby planets. In this way, the player receives a natural-feeling stream of goals.

A big problem that converters often have is that they feel too one-dimensional. In many games, it is common to see converters sitting idle when their outputs are not needed. Upgrading a converter when used is a brilliant side effect that gives the player a sense of progression. Maybe you don’t need any wood right now, but wouldn’t you rather have a logging camp instead of that pitiful little forester?

The summary of this game design pattern is: 

  • To provide player progression and encourage players to use converters, include a side effect in your converter designs. After using a converter a certain number of times, it upgrades.
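And a matching sketch of the use-to-upgrade side effect (again with invented numbers):

```python
class Converter:
    USES_PER_LEVEL = 3  # invented threshold

    def __init__(self):
        self.level = 1
        self.uses = 0

    def convert(self, inputs):
        self.uses += 1
        if self.uses % self.USES_PER_LEVEL == 0:
            self.level += 1               # side effect: feeding it upgrades it
        return len(inputs) * self.level   # higher level, more output per input
```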

Discourage completionism by gating off low-level options

Slipways does something interesting with its tech trees that I haven’t seen before. At any given time, you may research technologies from your current tech tier or the previous one. Completing research causes your tech level to advance, providing access to new technologies while also cutting off access to old ones that you never got around to researching. Thematically, the justification is that your scientists have moved on to more exciting projects.

The effect of this design decision is that players must think carefully about which techs they need from each tier because they can’t take them all. This discourages degenerate tendencies towards buying obsolete tech simply because it is cheap relative to the player’s current science production.

For the game designer, this makes balancing technologies a lot easier. Even early game tech can be impactful because you don’t have to worry about the player picking it up at no opportunity cost later. It also improves replayability because the player can’t just always take all of the early technologies.

I think this principle applies to any system where the player chooses from options across several tiers with escalating costs. By locking early options as you unlock later options, each item the player chooses becomes more meaningful.

The summary of this game design pattern is:

  • To enhance replayability in games where the player buys new abilities from a tiered list, lock earlier tiers as the player advances.
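A sketch of that gating rule as I understand it, with made-up tech names: only the current tier and the one below are on offer, and advancing locks everything older.

```python
# Hypothetical tech tree: tier number -> techs not yet researched.
tiers = {
    1: {"Scanners", "Cheap Hulls"},
    2: {"Relay Boosters", "Orbital Labs"},
    3: {"Terraforming", "Wormholes"},
}

def available(tiers, level):
    """Only the current tier and the previous one are open for research."""
    return {tech for t in (level, level - 1) for tech in tiers.get(t, set())}

def research(tech, tiers, level):
    """Completing a research advances the tech level, locking older tiers for good."""
    assert tech in available(tiers, level), "that tech is locked or already taken"
    for techs in tiers.values():
        techs.discard(tech)
    return level + 1

level = 2
print(available(tiers, level))            # tiers 1 and 2 are both open
level = research("Orbital Labs", tiers, level)
print(available(tiers, level))            # tier 1 is now gone forever
```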

Use marginal increases in upkeep rates

The most jarring and unpleasant part of Slipways for me relates to administrative upkeep costs. As you add more planets to your network, there are thresholds at which your empire “size” increases. Each time this happens, the number of credits you pay per planet in upkeep increases by 1. I am often shocked when this happens because building a single new colony causes a massive drop in income.

I found that such abrupt changes in income felt artificial because it didn’t make sense that adding one planet would suddenly make the rest of them cost more. This led to situations where I didn’t want to build a colony because it would drastically change my costs. I prefer marginal upkeep systems like the one in Eclipse where each colony costs progressively more to maintain than the last one.
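To see why the threshold version feels so abrupt, here is a comparison with made-up numbers (not Slipways’ actual costs): per-planet rates that step up at size thresholds versus a marginal scheme where each new colony simply costs one more than the last.

```python
def threshold_upkeep(n_planets, threshold=10):
    """Every planet pays the same rate, and the rate steps up at size thresholds."""
    rate = 1 + n_planets // threshold
    return n_planets * rate

def marginal_upkeep(n_planets):
    """The k-th colony costs k credits; earlier colonies never get repriced."""
    return n_planets * (n_planets + 1) // 2

for n in range(8, 12):
    extra_threshold = threshold_upkeep(n) - threshold_upkeep(n - 1)
    extra_marginal = marginal_upkeep(n) - marginal_upkeep(n - 1)
    print(n, extra_threshold, extra_marginal)
# Added cost of each new planet: threshold 1, 1, 11, 2  vs  marginal 8, 9, 10, 11.
# Crossing the size-10 threshold reprices the whole empire at once.
```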

My main takeaway from this aspect of Slipways is to avoid springing massive upkeep changes on players just because they crossed some threshold. There’s no design pattern here, just a cautionary tale.

4X games don’t need combat

Slipways has no combat, a departure from the genre (the fourth X stands for “exhale” instead of “exterminate”). While it is a single-player game, its focus on trade would work very well in a multiplayer board game, since trade provides healthier player interaction than war. The emphasis on commerce over combat is one of the things I like a lot about Sidereal Confluence, and Slipways demonstrates that you don’t need to abstract everything else away to get it.

The difference between cards and dice

Dice have been a looming problem in Dungeon Rancher for a while. Every monster tracks its level with dice – the number of dice on its card is its level. On top of that, players must roll enough dice to feel secure in feeding their monsters. Given four colors of dice, even the most conservative estimate puts the total dice required at 120. That is prohibitive on cost alone, but dice are also inconvenient for tracking state because it is easy to knock them over and lose information.

For these reasons, we’ve decided to try switching to using cards to track monster levels instead of dice. Each monster has a stack of cards, and to tend to it you must play a card equal to or greater than the top card. Instead of four bags of dice, there are now four decks of cards. This has major implications for the game.

When we used dice to represent levels, the highest die determined the difficulty of tending to the monster. It is inconvenient for players to search a pile of cards, so now only the top card matters. Since it would be too powerful to train a monster by changing just one card, we decided to remove the training mechanic altogether. Rerolls became the ability to discard a card and redraw from that deck.

Fortunately, we have space for the card piles above or below each monster because we limit rooms to two monsters. That space is needed because the new resource cards must be the same size as the draftable cards, since some of them appear in the draft. The change also makes it much easier to theme the resources because we can use thematic icons rather than pictures of colored dice.

Players naturally hold all the cards they gain from different sources in their hands – both the drafted cards and the produced ones. Therefore we had to modify the rules about discarding your hand at the end of your turn. Now players only discard down to their hand size, and they may keep any mix of drafted and non-drafted cards. Their hand size is equal to the number of rooms they control, making room building a little more interesting.

Finally, we decided that all monsters should start at level 1. To level them up faster, players may spend a new type of token to tend to them again.

Using a different type of component to represent information always has consequences. I find it is best to go with whatever changes are suggested by the new medium rather than forcing it to work the same way as the old one. The same is true when working under component restrictions. For example, if you are involved in an 18-card contest, don’t try to cram a bunch of state information into the cards. Just use them the way cards are typically used and make a good game within those bounds.

Evolution of Dungeon Rancher Monsters

Early on, we knew that the monsters in Dungeon Rancher would have abilities if only to differentiate them from each other. There are four resources players use to feed monsters. Since each monster uses two resources, this implies a total of 4 choose 2 = 6 monsters. We gave the Dragon and the Golem simple resource production abilities to start with while testing other aspects of the game.

It wasn’t too long before we added abilities for the rest of the monsters as well. One useful principle we discovered is that abilities that don’t require keeping the monster around are unthematic because they make the monster feel more like a resource card than like something to raise. For instance, one version of the Golem’s ability allowed players to discard it to build a room for free, so players treated it like a “free room” card and always used it immediately.

We settled eventually on the concept that each monster would have a different scoring ability. After some experimentation, we further determined that such scoring abilities should always force the player to take on risk for more points. For example, the Dragon scoring ability increases its value for every ‘6’ on it, making it much more difficult to tend. Under this system, creatures represent different ways for the players to challenge themselves to increase their score.

The newest addition to the monster mechanics rewards players for placing monsters with matching “personalities” in adjacent rooms. For example, if a creature with a food symbol on its right edge sits in the room to the left of one with a matching food symbol on its left edge, the pair produces a single food resource – which is automatically useful, since you have two monsters that require it. The objective was to make monster placement more interesting. Adding this mechanic costs very little because it echoes the existing mechanic in which adjacent matching rooms produce magic resources.