Randomized Victory and Alliances of Doubt

In Oath, there is a mechanic in which, at the end of rounds five, six, and seven (of a maximum of eight), the Chancellor rolls a die to see whether they win, provided they have fulfilled their victory condition. The probability of victory starts at 1/6 in round five and doubles each round, reaching 2/3 in round seven.

When I first encountered this rule, it felt out of place – clunky, even. After playing one game as a Citizen and observing a Citizen in another, I think I understand why it is there: it is key to the entire dynamic between Citizen and Chancellor.

Citizens, in Oath, are players who are ostensibly allies of the Chancellor. However, unlike in games with official alliances such as Dune or Eclipse, they do not win together. Instead, each Citizen has an additional victory condition: if they have achieved their own goal at the moment the Chancellor wins, the Citizen wins instead; otherwise, they lose.

In practice, this can lead to strange conflicts between allies. If it looks like the Citizen will meet their condition, the Chancellor might attack them or even self-sabotage to prevent the game from ending. If the Citizen has not achieved their personal goal, they might attack the Chancellor to delay the end of the game, as I did in the game where I was a Citizen.

Enter the victory die. Both the Citizen and the Chancellor need the Chancellor’s goal to be met for either to have a chance of winning, but if one of them knew for certain that they would lose the moment it was achieved, they would have no choice but to self-sabotage. Because neither the Chancellor nor the Citizen is guaranteed to win at the end of rounds five, six, or seven, there is enough doubt that they can work together even though they know who would win if the game did end early. Their alliance works because the losing member believes the game might not end until they can achieve their own goal.
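To put numbers on that doubt, here is a quick check using the probabilities stated above, assuming the Chancellor’s victory condition is met every round:

```python
from fractions import Fraction

# Chance the victory die ends the game at the end of each round,
# per the rule described above: 1/6, doubling up to 2/3.
end_chance = {5: Fraction(1, 6), 6: Fraction(1, 3), 7: Fraction(2, 3)}

alive = Fraction(1)  # probability the game is still going
for rnd, p in end_chance.items():
    alive *= 1 - p
    print(f"Chance the game continues past round {rnd}: {float(alive):.0%}")
# past round 5: 83%, past round 6: 56%, past round 7: 19%
```

A Citizen who would lose to an early end still has better-than-even odds of seeing round seven and roughly a one-in-five chance of reaching the final round – enough hope to keep cooperating.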

The illusion of hope is crucial in game design because a player who knows they will lose is effectively eliminated. Traditionally, official alliances require shared victory – who would cooperate with somebody they know will take the win when the game ends? Oath demonstrates a different approach by adding enough uncertainty to enable teamwork between technical enemies. I still think the mechanic is clunky, but I also think it is needed.

I can already think of mechanics to support a similar dynamic in other designs. Instead of randomizing the end of the game, what if we randomized the winner of the alliance instead? For example, suppose you have a system where official allies put tokens into a bag according to their contributions. Then, at the end of the game, the winning alliance draws a single token to determine which member wins.
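As a minimal sketch of that bag draw – the player names and token counts here are invented:

```python
import random

# One token added per contribution over the course of the game.
bag = ["Avery"] * 5 + ["Blair"] * 3 + ["Casey"] * 2

# If the alliance fulfills its victory condition, draw once:
# each ally's chance of being the sole winner is proportional
# to their contributions (here 50% / 30% / 20%).
sole_winner = random.choice(bag)
print(f"{sole_winner} takes the sole victory")
```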

Or another option: allies could gain hidden victory points, and if their team wins, the player with the most points is the sole winner. Rex: Final Days of an Empire does something a little like this with its betrayal cards, where each player has a secret condition which, if fulfilled, steals the win from their team. The problem with that system is that the default is shared victory, so stealing it feels petty and spiteful. If only one player can ever win, that sting goes away.

I don’t think shared victory is a bad thing, though some people do. It can introduce problems like freeloading or power imbalances, but there are solutions to those. However, as Oath shows, there is a viable alternative for games with official alliances: make the sole winner uncertain enough that allies keep hoping it will be them.

On-Play Effects and Subtypes for Attention Savings

One of the many things I like about Oath is how it associates abilities with areas. The areas in area control games often feel dull, and gaining new powers for controlling them – or simply for having your pawn present, as in Oath – makes them feel much more alive. This system does mean that there are a lot of text effects in the game, and text effects cost a lot of player attention to support.

Suppose you have a bunch of objects – probably cards, but they could be tokens as well – and each object has an effect and a collection of subtypes. By subtypes, I mean tags that other abilities reference to determine what they apply to. For example, in Imperial Settlers, each card has one to three colors that function as subtypes, and certain cards reward you when you play a card of a particular color.

The nice thing about the effect-subtype pairing, and the reason we see it in so many games, is that it makes it very easy to incorporate interesting side effects into your cards. Side effects – beneficial outcomes of a player choice that were not the reason the player took that action – are good because they challenge the player to find a use for the unintended bonus. Subtypes are a fantastic side effect because they are practically free in attention costs – players can see at a glance which subtypes they have, assuming the graphic design is decent.

For example, consider the card “Longbows” in Oath. It lets you add or subtract an attack die from any battle, which is a fantastic bonus that justifies playing it. However, it also has the subtype “Order”. If you rely heavily on Longbows, you will likely invest in cards that benefit from control of Order cards, such as Order advisors.

The one problem with this is that while subtypes are cheap, effects are expensive. Passive abilities that trigger automatically when some event occurs are the worst, but even new actions the player must choose to use are a burden. When you have ten different abilities to keep track of, you are bound to start forgetting to use some of them. But there is one type of effect that costs almost nothing in ongoing attention: the “on-play” effect.

When I talk about cards with on-play effects, I do not mean action/event cards that you resolve and discard. I refer specifically to cards that do something when you play them and then remain in play, effectively blank aside from their subtypes. For example, when you play “A Small Favor”, you gain four warbands – and that’s it. You don’t need to watch for a later trigger, remember a modifier to another ability, or consider an extra action available to you. For the rest of the game, the card is a blank card with the “Discord” subtype.
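Here is a rough sketch of that structure in code; the card name and subtype come from Oath, while the classes and numbers are my own:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Card:
    name: str
    subtypes: frozenset                  # cheap: visible at a glance
    on_play: Optional[Callable] = None   # fires exactly once, when played

@dataclass
class Player:
    warbands: int = 0
    in_play: list = field(default_factory=list)

    def play(self, card: Card) -> None:
        if card.on_play:
            card.on_play(self)     # the only moment the card demands attention
        self.in_play.append(card)  # afterwards it is just a bag of subtypes

def gain_warbands(n: int) -> Callable:
    def effect(player: Player) -> None:
        player.warbands += n
    return effect

small_favor = Card("A Small Favor", frozenset({"Discord"}), gain_warbands(4))

p = Player()
p.play(small_favor)
print(p.warbands)  # 4 -- and the card never needs another glance
```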

As a game designer, this pattern of pairing on-play effects with subtypes is a fantastic tool for cutting attention costs for players. Going one step further, I can imagine structuring a card pool so that the more complex effects are on-play, while new actions, passive modifiers, and triggered abilities are simpler.

Of course, you don’t want all the cards in your game to have on-play effects; that defeats the purpose of persistent cards. So the natural question is: what percentage of cards should be on-play? I have no idea, so I decided to look at some of the games in my collection.

First, Oath. The on-play cards in Oath are easy to identify because of the graphic design. Oath has a total of 198 denizen cards; of those, 27 are on-play, or about 14%.

Next, I looked at 7 Wonders. The on-play cards in that game are the blue and yellow cards; I decided not to classify red or green cards as on-play because they require some monitoring of other players to see what strategy they are pursuing. I found that 31% of the cards were on-play, at least according to my classification.

Wingspan was next; it is another game where on-play effects have specific graphics identifying them, which made things easy. I decided to count cards with no effect at all as on-play, since they demand no ongoing attention either. Out of 170 cards, I found that 25% were on-play.

I also looked at 7 Wonders Duel, which contrasts with 7 Wonders in how red cards work – when you draft one, you move the Conflict Pawn toward your opponent’s city, and then the card does nothing. I wound up with a count of 56%, the highest that I saw.

Finally, I looked at the technology tokens in Eclipse because I wanted an example that doesn’t rely on cards. Each tech token in Eclipse belongs to one of three categories – its subtype – and reduces the cost of subsequent research in that category. Out of 24 technologies, only 3 (about 13%) had no ongoing effect – Quantum Grid, Artifact Key, and Advanced Robotics.

The above is not a rigorous study; the sample size is tiny, and there is some subjectivity in what I classify as on-play effects, so take my conclusions with a grain of salt. I think it would be interesting to survey more examples. Looking at what I have, I noticed that the games with the lowest proportion of on-play effects are also the heaviest ones. Many factors go into the weight of a game, and I’m not saying that attention costs are even the most important, but they do matter.

One game in my collection that caught my eye is Innovation, because of the conspicuous absence of on-play effects. It fits the other criteria perfectly: you have cards with symbols (subtypes) that let you take advantage of other cards. Yet every single card effect is available at all times. It is hard enough to track what you can do, let alone what other players can do, and the result is massive analysis paralysis. Now that I see the pattern, I can’t help but think that the game would be better if 20-30% of the cards had effects that triggered only when played. Such a change would require adding some rules, but the attention savings would more than compensate.

How to Make Randomized Victory Points Work

My opinion of Eclipse has deteriorated since I first learned about it in college. Back then, it had just come out, and I was excited at the prospect of a shorter Twilight Imperium with more streamlined mechanics. The part that most intrigued my inner designer was how it dealt with income – you removed cubes from a track and placed them on the board, and the number the cubes uncovered on the track was how much you collected. This practice is commonplace nowadays, but it was innovative at the time. I bought the game along with its expansion and played it extensively with my friends.

I revisited Eclipse a few years ago and found that it did not live up to my memories of it. In particular, I found myself much more aware of how arbitrary so much of it is. Exploring is costly, and the quality of the systems you encounter varies widely. Combat is a dice fest. But the most obvious way in which the game feels random is the reputation tile mechanics.

When you fight in Eclipse, whether you win or lose, you are rewarded with randomly drawn “reputation tiles” – which my friends and I abbreviated to “reptiles” to the frequent confusion of new players. These are worth some number of victory points between one and four, and the value of the tiles you own is secret. You can only hold a limited number of tiles and eventually discard lower-value tiles to make room for more valuable ones.

The reasons behind this system make sense. Because there are only a limited number of high-value tiles and players are trading up to them, players want to fight early. Because the value of the tiles is secret, players can’t be sure what anybody’s total score is, obfuscating the current leader. Succeeding in battle is also incentivized because you draw more tiles and choose one to keep.

Statistically, reputation tiles work. In practice, they lead to feel-bad moments where you draw a low tile by chance and feel cheated. The problem isn’t just with Eclipse, either. In general, when you distribute random amounts of victory points to players for the same actions, you make it easier for players to attribute their successes or failures to luck rather than their hard work.

The risk of unfairness may be why randomized point distribution of this sort is uncommon in board games. But such mechanics have undeniable benefits. We do not want players to know who has won the game until the end, and hidden victory points are great at muddying the issue. So how do we give players random victory points without making success feel arbitrary?

One of the principles I hold to in game design is that when bad things happen to a player, there should be a silver lining for them to see. My favorite example to point to is Imperial Settlers, where whenever another player destroys one of your buildings, you get one wood and a foundation that you can sacrifice to build something else. The foundation isn’t technically a benefit – after all, you could have used the building you had before the attack as a foundation already. But psychologically, it feels like a silver lining because it makes one of your choices – which card to give up – easy to make.

We can use this same principle to fix the reputation tile problem. What if you could spend reputation tiles, with the value of the tile not mattering at all? For example, imagine a game where you transport cargo tokens drawn at random from four different point values, and during the game you can burn unwanted tokens to move faster. The size of the boost is independent of the point value of what you burned. (Note: this is basically how the Reactor Furnace in Galaxy Trucker works, though there the cargo isn’t drawn randomly.)
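Here is a small sketch of that cargo game; every name and number in it is invented:

```python
import random

TOKEN_VALUES = (1, 2, 3, 4)  # hidden victory-point values of cargo tokens
BURN_BONUS = 2               # flat movement boost, regardless of token value

class Ship:
    def __init__(self) -> None:
        self.cargo: list[int] = []  # kept secret from the other players
        self.base_speed = 3

    def load_cargo(self) -> None:
        self.cargo.append(random.choice(TOKEN_VALUES))

    def move(self, burn: bool) -> int:
        if burn and self.cargo:
            # Burning a 1-point token gives the same boost as a 4-point one,
            # so a low draw is fuel, not a feel-bad moment.
            self.cargo.remove(min(self.cargo))
            return self.base_speed + BURN_BONUS
        return self.base_speed
```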

In this scenario, drawing low-point tokens does not feel unfair. It is sometimes even a relief because it simplifies the player’s decision of what to do. Draw some worthless scrap? Burn it! Yet hidden tokens still serve to obscure a player’s point total. A player might choose not to burn cargo because it is valuable, or they might think they can win without doing so.

I think hidden randomized victory points can work without feeling arbitrary, and I think the key is giving players ways to convert the less valuable ones into something useful.

What I Learned from Slipways

Slipways is an excellent take on the 4X genre, streamlining the formula into a game that you can play in 1-2 hours. I used to like games like Civilization, Master of Orion, and Alpha Centauri, but I no longer enjoy them because they take too long and require too much micromanagement. After dozens and dozens of playthroughs of Slipways, these are my biggest takeaways as a game designer.

Debt creates goals

The most innovative mechanic in Slipways, for me, is the way it handles converters. “Converters” are things that accept resource inputs and produce other resources for the player. The brilliant thing about the converters in Slipways – its colonies – is that they yield their first outputs as soon as you build them, before receiving any inputs. If you urgently need a resource, you can set up a colony to produce it immediately.

A colony will keep producing without its inputs, but it does so at an escalating cost to happiness, an important part of the scoring formula. Satisfying its needs eliminates the unrest, but taking too long to do so results in a penalty that stays forever.

What makes this work so well is that each planet both solves an existing problem and hands the player a new goal to pursue (along with a time limit for completing it). Each colony you build is a loan that you will need to pay back, and the never-ending quest to pay them all back is the primary driver of the gameplay.

The summary of this game design pattern is: 

  • To provide a goal, give players an immediate reward tied to a penalty. Allow them to eliminate the penalty later by accomplishing some specific (but optional) task.
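A bare-bones sketch of this loan structure, based on my reading of the Slipways rules (the grace period and penalty sizes are invented):

```python
class Colony:
    """A converter that pays out on build and charges you until it is fed."""
    GRACE_TURNS = 4        # invented: turns before the unrest becomes permanent
    PERMANENT_PENALTY = 2  # invented: the cost of never paying the loan back

    def __init__(self) -> None:
        self.turns_unfed = 0
        self.scarred = False
        self.produce()  # first outputs arrive immediately, before any inputs

    def produce(self) -> None:
        print("outputs delivered")  # the immediate reward

    def end_of_turn(self, needs_met: bool) -> int:
        """Return this colony's happiness penalty for the turn."""
        if needs_met:
            self.turns_unfed = 0      # paying the loan back clears the unrest
        else:
            self.turns_unfed += 1     # the escalating cost of the debt
            if self.turns_unfed > self.GRACE_TURNS:
                self.scarred = True   # waited too long; this never heals
        return self.turns_unfed + (self.PERMANENT_PENALTY if self.scarred else 0)
```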

Upgrade converters when used to encourage interaction

Providing input to a colony does more than just remove the happiness penalty; it also upgrades the planet, creating new needs. Each level brings both more outputs and new challenges, ranging from finding markets for the exports to improving nearby planets. In this way, the player receives a natural-feeling stream of goals.

A big problem that converters often have is that they feel too one-dimensional. In many games, it is common to see converters sitting idle when their outputs are not needed. Upgrading a converter when used is a brilliant side effect that gives the player a sense of progression. Maybe you don’t need any wood right now, but wouldn’t you rather have a logging camp instead of that pitiful little forester?

The summary of this game design pattern is: 

  • To provide player progression and encourage players to use converters, include a side effect in your converter designs. After using a converter a certain number of times, it upgrades.
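As a sketch, with thresholds and outputs invented:

```python
class Converter:
    # Invented thresholds: total uses required to reach each level.
    UPGRADE_AT = {2: 3, 3: 8}

    def __init__(self, name: str) -> None:
        self.name = name
        self.level = 1
        self.uses = 0

    def convert(self, inputs: int) -> int:
        self.uses += 1
        # The side effect: running the converter is also how you grow it,
        # so feeding it is tempting even when you don't need the output.
        for level, needed in self.UPGRADE_AT.items():
            if self.uses >= needed:
                self.level = max(self.level, level)
        return inputs * self.level  # higher levels produce more

forester = Converter("forester")
print([forester.convert(2) for _ in range(4)])  # [2, 2, 4, 4]
```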

Discourage completionism by gating off low-level options

Slipways does something interesting with its tech trees that I haven’t seen before. At any given time, you may research technologies from your current tech tier or the previous one. Completing research causes your tech level to advance, providing access to new technologies while also cutting off access to old ones that you never got around to researching. Thematically, the justification is that your scientists have moved on to more exciting projects.

The effect of this design decision is that players must think carefully about which techs they need from each tier because they can’t take them all. This discourages degenerate tendencies towards buying obsolete tech simply because it is cheap relative to the player’s current science production.

For the game designer, this makes balancing technologies a lot easier. Even early-game tech can be impactful because you don’t have to worry about the player picking it up at no opportunity cost later. It also improves replayability because the player can’t simply take all of the early technologies every game.

I think this principle applies to any system where the player chooses from options across several tiers with escalating costs. By locking early options as you unlock later options, each item the player chooses becomes more meaningful.

The summary of this game design pattern is:

  • To enhance replayability in games where the player buys new abilities from a tiered list, lock earlier tiers as the player advances.
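A compact sketch of the gating rule as I understand it, with invented tier contents:

```python
class TechTree:
    def __init__(self, tiers: list[set[str]]) -> None:
        self.tiers = tiers
        self.current = 0  # index of the current tech tier

    def available(self) -> set[str]:
        # Only the current tier and the one just below it are open;
        # everything older is gone -- your scientists have moved on.
        open_tiers = self.tiers[max(0, self.current - 1) : self.current + 1]
        return set().union(*open_tiers)

    def research(self, tech: str) -> None:
        assert tech in self.available(), f"{tech} is locked or unknown"
        for tier in self.tiers:
            tier.discard(tech)
        # Completing research advances your tech level.
        self.current = min(self.current + 1, len(self.tiers) - 1)

tree = TechTree([{"pottery", "sailing"}, {"currency"}, {"engineering"}])
tree.research("sailing")   # tier 0 remains reachable afterwards
tree.research("currency")  # now "pottery" is locked away forever
print(tree.available())    # {'engineering'}
```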

Use marginal increases in upkeep rates

The most jarring and unpleasant part of Slipways for me relates to administrative upkeep costs. As you add more planets to your network, there are thresholds at which your empire “size” increases. Each time this happens, the number of credits you pay per planet in upkeep increases by 1. I am often shocked when this happens because building a single new colony causes a massive drop in income.

Such abrupt changes in income feel artificial: it makes no sense that adding one planet suddenly makes all the others cost more. This led to situations where I didn’t want to build a colony because it would drastically change my costs. I prefer marginal upkeep systems like the one in Eclipse, where each colony costs progressively more to maintain than the last.
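The contrast is easy to see with invented numbers:

```python
def threshold_upkeep(planets: int) -> int:
    # Slipways-style: the per-planet rate jumps at empire-size thresholds
    # (invented here: the rate rises by 1 for every 5 planets).
    rate = 1 + planets // 5
    return planets * rate

def marginal_upkeep(planets: int) -> int:
    # Eclipse-style: each successive planet costs one more than the last,
    # so the nth planet always adds exactly n to your upkeep.
    return planets * (planets + 1) // 2

for n in range(8, 12):
    print(n, threshold_upkeep(n), marginal_upkeep(n))
# The threshold scheme creeps by +2, lurches by +12 when the tenth
# planet crosses the size boundary, then creeps again; the marginal
# scheme grows steadily (+9, +10, +11) and never surprises anyone.
```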

My main takeaway from this aspect of Slipways is to avoid springing massive upkeep changes on players just because they crossed some threshold. There’s no design pattern here, just a cautionary tale.

4X games don’t need combat

Slipways has no combat, a departure from the genre (the fourth X stands for “exhale” instead of “exterminate”). While it is a single-player game, its focus on trade would work very well in a multiplayer board game, since trade provides healthier player interaction than war. The emphasis on commerce over combat is one of the things I like a lot about Sidereal Confluence, and Slipways demonstrates that you don’t need to abstract everything else away to get it.

Could fixed prices solve lengthy trade negotiations?

Trading mechanics are a powerful way to introduce player interaction into a game. However, freeform trading can massively slow down the game depending on how much players negotiate. If players are allowed to trade anything, they might argue over the proper exchange rate. Even in Sidereal Confluence, where exchange rates are explicitly defined, it is unclear how much the loan of a converter card is worth.

I’ve been thinking about this lately after the first playtest of Dungeon Rancher. Trade in Dungeon Rancher serves the vital role of smoothing over randomness by allowing players to buy from other players the things they could not mine themselves. However, we aim for the game to last 30-45 minutes with six rounds, which means there is little space for lengthy negotiations. We previously let players trade monsters, dice, and rooms, but now we restrict it to dice, the most comparable resource. I don’t know whether this is enough, though.

Most games with trading mechanics don’t restrict what players can trade. In Dune, you can even exchange binding promises about future actions (something most modern games no longer allow). In a game like Sidereal Confluence, where everybody trades simultaneously, this isn’t too bad – as long as everybody is engaged, the only effect is to increase the game length. But in a game like Catan, where a player can only trade on their turn, it can lead to significant downtime in the wrong playgroup. For this reason, I think freeform trade should seldom be turn-restricted.

But some games have an elegant alternative – fixed prices. In Imperial Settlers, players can build “Open Production” buildings that produce a resource and allow other players to gain that same resource (from the bank) by giving the owner a worker. Setting aside the question of whether this counts as trade, this is quick because there is no negotiation – the owner cannot even refuse the offer, and the rate is always one-for-one.

Root: The Riverfolk Expansion does something similar with its otter faction, which has three different types of goods and services on offer; in this case, however, the Riverfolk player sets the prices at the end of their turn. They have more control over their sales but still cannot refuse a trade.

In both Imperial Settlers and Root, trade is indirect; the seller puts out an item at a price on their turn, and the buyer chooses whether to buy it on theirs. This can feel less personal than direct trading between players. A mechanic that I think could strike a middle ground is at-will trading with fixed prices. Say I want to exchange wood for stone; I look up the trade ratio for the two (1:1 for simple games) and offer another player the trade. There is no room for negotiation here – they must accept or reject my offer, avoiding a bartering session. Since the rules endorse the exchange rate, it is less likely that a player will feel ripped off.
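Here is a sketch of how at-will, fixed-price trades might look; the ratio table and the acceptance rule are invented stand-ins:

```python
# Exchange rates endorsed by the rules; there is nothing to haggle over.
RATIOS = {("wood", "stone"): (1, 1), ("wood", "gold"): (3, 1)}

def propose_trade(seller, buyer, give, get, accepts) -> bool:
    """Offer `give` for `get` at the fixed ratio; the buyer may only
    accept or reject -- there is no counter-offer step."""
    cost, gain = RATIOS[(give, get)]
    if seller.get(give, 0) < cost or buyer.get(get, 0) < gain:
        return False                    # impossible offer, auto-rejected
    if not accepts(buyer, give, get):
        return False                    # rejected; no bartering follows
    seller[give] -= cost
    buyer[give] = buyer.get(give, 0) + cost
    buyer[get] -= gain
    seller[get] = seller.get(get, 0) + gain
    return True

def hoarder(buyer, give, get):
    # A stand-in for a human decision: trade away stone only when plentiful.
    return buyer.get(get, 0) > 2

me, you = {"wood": 4}, {"stone": 5}
print(propose_trade(me, you, "wood", "stone", hoarder))  # True
print(me, you)  # {'wood': 3, 'stone': 1} {'stone': 4, 'wood': 1}
```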

I don’t recall seeing this mechanic in any published game. Fixed trade ratios are usually reserved for trading with the supply, not directly with other players. But even when the rules set the terms of exchange, trades between players are interesting: they compensate for bad luck and make specialization viable while also promoting player interaction. If all that is possible without exploding the game length, I think it is worth trying.