Design Choices Distribute Consequences Before Anyone Acts

When no objectively correct answer exists, a choice must still be made. That choice determines who benefits and who is disadvantaged — before anyone takes action within the system.

“Arbitrary” doesn’t mean random or careless. It means a decision for which no perfect answer exists. But arbitrary decisions are never neutral — they embed assumptions that distribute consequences.


How This Idea Emerged

The starting point: Studying Chapter 5 of macroeconomics — learning that CPI uses a “fixed basket” of goods to measure inflation.

The first question: Who decides what’s in the basket?

The answer: Statistics Canada surveys households and builds a basket reflecting “average” spending patterns.

The problem I noticed: There is no “average” household. Seniors spend more on healthcare. Young families spend more on childcare. Urban renters spend more on housing. The basket fits nobody perfectly — yet everyone’s wages, benefits, and tax brackets get indexed to it.

My reaction: “Arbitrary decisions have real impact on the lives of citizens. It feels wrong to make arbitrary decisions when people’s lives aren’t arbitrary.”

The reframe: “Arbitrary” in economics doesn’t mean random or careless. It means a choice where no objectively correct answer exists. But that reframe didn’t resolve the tension — it sharpened it. If no perfect answer exists, then whoever MAKES the choice is deciding whose reality counts.

The connection: This reminded me of procedural justice from Organizational Behavior — the idea that people care not just about WHAT they receive (distributive justice) but HOW the decision was made (procedural justice). When outcomes are unfavorable, people scrutinize the process. If the process seems opaque or arbitrary, the outcome feels unjust.

The abstraction: This isn’t just about CPI. Any time a structure must be defined — performance metrics, eligibility rules, national accounting standards — someone’s choice about HOW to define it distributes consequences before anyone acts within the system.

The distinction from related ideas: I already have a note on Structures Constrain Outcomes Independent of Merit, which is about navigating existing structures. This note is different — it’s about the moment BEFORE the structure exists, when someone decides how to design it. The first note is about playing within rules. This note is about who writes the rules.


The Mechanism

| Stage | What Happens | Example: CPI Basket |
| --- | --- | --- |
| 1. A structure must be defined | Someone needs to create a system to measure, evaluate, or allocate | Statistics Canada needs to measure inflation — but inflation for WHO? |
| 2. No objectively correct definition exists | Multiple valid approaches exist, each with different implications | Should the basket reflect seniors’ spending? Low-income households? Urban renters? There is no “true” typical Canadian. |
| 3. Someone makes a choice | An authority decides, often using reasonable-sounding criteria | Statistics Canada surveys households and builds a basket reflecting “average” spending patterns |
| 4. That choice embeds assumptions | The decision assumes certain things are normal, typical, or representative | The basket assumes people spend X% on housing, Y% on food, Z% on transportation — but YOUR percentages may differ |
| 5. Consequences flow from assumptions | Policies, wages, and benefits get tied to this measure | CPP benefits, tax brackets, and wage adjustments are indexed to CPI — if CPI understates YOUR inflation, your purchasing power erodes |
| 6. People act WITHIN the structure | Individuals make choices, but the playing field was already tilted | A senior on fixed income can be frugal, work part-time, budget carefully — but if their inflation is 5% and CPI says 2%, they fall behind no matter what they do |

The design of the structure distributes consequences before anyone acts within it.

By the time you’re playing the game, the rules have already decided who has the advantage.
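The mechanism above is arithmetic at its core: a price index is a weighted average, so whoever picks the weights picks whose inflation gets measured. A minimal sketch, with entirely hypothetical weights and price changes (not Statistics Canada’s actual figures), shows how the same year of price changes produces different inflation rates for the “average” basket versus a senior’s basket:

```python
# Hypothetical expenditure weights: what share of spending goes to each category.
official_weights = {"housing": 0.30, "food": 0.17, "health": 0.05, "transport": 0.20, "other": 0.28}
senior_weights   = {"housing": 0.25, "food": 0.20, "health": 0.20, "transport": 0.10, "other": 0.25}

# Hypothetical one-year price change in each category.
price_change = {"housing": 0.03, "food": 0.04, "health": 0.07, "transport": 0.01, "other": 0.02}

def inflation(weights):
    """Weighted-average inflation: each category's price change times its spending share."""
    return sum(weights[c] * price_change[c] for c in weights)

print(f"Official-basket inflation: {inflation(official_weights):.2%}")
print(f"Senior-basket inflation:   {inflation(senior_weights):.2%}")
```

With these numbers the senior’s inflation runs higher than the official figure, purely because healthcare carries more weight in their basket. If benefits are indexed to the official number, the gap compounds every year.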


Examples Across Domains

| Domain | Structure | Design Choice | Who Benefits | Who Loses |
| --- | --- | --- | --- | --- |
| Macro (CPI) | How inflation is measured | Statistics Canada decides what goods go in the basket and in what proportions — based on surveys of “average” household spending | Households whose spending matches the “average” basket — their wage increases and indexed benefits track their actual cost of living | Seniors (more healthcare spending), low-income households (higher % on food/housing), rural households (more transportation) — CPI understates their inflation, so indexed benefits erode their purchasing power over time |
| Macro (GDP) | How national output is measured | Only market transactions count as “production” — if money doesn’t change hands, it’s invisible | Paid work, market activity, formal economy — these get counted and therefore get policy attention | Home production (cooking, cleaning, childcare by family members), volunteer work, leisure — a stay-at-home parent’s labor is invisible to GDP, so policies optimizing for GDP ignore or undervalue this work |
| OB (Performance) | How employee contributions are evaluated | Managers decide which metrics define “good performance” — sales numbers, hours logged, output quantity | Employees whose strengths align with measured metrics — they get raises, promotions, recognition | Employees whose value isn’t easily quantified — mentorship, culture-building, institutional knowledge, helping colleagues — they may be rated as “average” despite critical contributions |
| Policy (Eligibility) | Who qualifies for benefits or programs | Policymakers define thresholds — income limits, age cutoffs, geographic boundaries | Those clearly inside the criteria — they receive benefits as designed | Edge cases and those slightly outside — a family earning $1 over the limit loses eligibility entirely; someone who turns 65 one day after the cutoff waits a full year |
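The eligibility row illustrates how a single design choice — a hard cutoff versus a gradual phase-out — distributes consequences at the margin. A sketch with hypothetical numbers (not any real program’s rules) makes the cliff visible:

```python
LIMIT = 40_000    # hypothetical income eligibility threshold
BENEFIT = 5_000   # hypothetical full benefit amount

def cliff(income):
    """Hard cutoff: earning $1 over the limit means losing the entire benefit."""
    return BENEFIT if income <= LIMIT else 0

def phase_out(income, rate=0.5):
    """Alternative design: benefit shrinks 50 cents per dollar earned above the limit."""
    return max(0, BENEFIT - rate * max(0, income - LIMIT))

print(cliff(40_000), cliff(40_001))          # 5000 vs 0: a $1 raise costs $5,000
print(phase_out(40_000), phase_out(40_001))  # 5000 vs 4999.5: a $1 raise costs 50 cents
```

Neither design is “correct.” The cliff is simpler to administer; the phase-out is gentler at the edge but extends partial benefits to higher earners. The choice between them is exactly the kind of arbitrary-but-consequential decision this note is about.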

Methodology Is a Choice — Different Choices Create Different Consequences

The same phenomenon can be measured in multiple valid ways. Each method embeds different assumptions and creates different consequences.

Example: Measuring inflation

| Method | How It Works | Assumption Embedded | Consequence |
| --- | --- | --- | --- |
| CPI (Consumer Price Index) | Track price of a FIXED basket of goods over time | People don’t change their behavior when prices change | Tends to overstate inflation — ignores that people substitute cheaper goods when prices rise (substitution bias) |
| GDP Deflator | Track price of CURRENT production each year | The basket should reflect what’s actually being produced/consumed | Can’t cleanly isolate price changes from quantity changes — if the basket changes AND prices change, which caused the index to move? |

Neither is “wrong.” They answer different questions:

| Question | Better Method |
| --- | --- |
| “How much more expensive is the SAME lifestyle?” | CPI — holds basket constant |
| “How much more expensive is what we’re ACTUALLY doing?” | GDP Deflator — updates basket |

The policy implication: If wages are indexed to CPI, workers are compensated for maintaining a fixed lifestyle. If wages were indexed to the GDP deflator, compensation would shift with actual economic behavior. Same workers, same economy — different methodology, different paychecks.
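A worked example makes the divergence concrete. With two hypothetical goods, the CPI-style index prices the base-year basket at current prices (a Laspeyres index), while the deflator-style index prices the current-year basket (a Paasche index):

```python
# Hypothetical two-good economy. Bread's price jumps; consumers substitute toward chicken.
p0 = {"bread": 2.0, "chicken": 5.0}   # base-year prices
p1 = {"bread": 3.0, "chicken": 5.5}   # current-year prices (bread +50%, chicken +10%)
q0 = {"bread": 10,  "chicken": 4}     # base-year quantities (the fixed CPI basket)
q1 = {"bread": 6,   "chicken": 6}     # current-year quantities (after substitution)

def basket_cost(prices, quantities):
    return sum(prices[g] * quantities[g] for g in quantities)

# CPI-style (Laspeyres): fixed base-year basket at current vs. base prices
cpi = basket_cost(p1, q0) / basket_cost(p0, q0) * 100

# GDP-deflator-style (Paasche): current-year basket at current vs. base prices
deflator = basket_cost(p1, q1) / basket_cost(p0, q1) * 100

print(f"CPI-style index:      {cpi:.1f}")       # fixed basket: 30% inflation
print(f"Deflator-style index: {deflator:.1f}")  # updated basket: lower measured inflation
```

The fixed basket reports higher inflation because it keeps buying bread at the old quantity, ignoring the substitution toward chicken. If wages were indexed to one number or the other, the gap between the two indexes would land directly in paychecks.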

There is no neutral measurement. Every method answers a different question, embeds different assumptions, and distributes consequences differently.


Connection to Organizational Justice

The OB framework explains WHY methodology matters to people:

| Justice Type | Question | Applied to Methodology |
| --- | --- | --- |
| Distributive | Did I get what I deserved? | Does the measurement capture my reality? If CPI understates my inflation, my “cost of living adjustment” doesn’t cover my actual costs. |
| Procedural | Was the process fair? | Did I have input on how it was designed? Were people like me consulted when Statistics Canada built the basket? |
| Informational | Was I told the real reason? | Do I understand WHY it’s designed this way? Does Statistics Canada explain its methodology transparently? |
| Interpersonal | Was I treated with dignity? | Was my situation considered at all? Or was I assumed to be “average” without anyone checking? |

Key finding from OB: When outcomes are unfavorable, people scrutinize the process. If the process seems arbitrary or opaque, the outcome feels unjust — even if no harm was intended.

This explains why “technical” decisions can generate political backlash. People aren’t irrational for questioning methodology when outcomes hurt them. They’re applying procedural justice instincts.


Using the Seven Lenses to Evaluate Methodology

Seven Lenses for Decomposing Claims can interrogate design choices — whether you’re creating a structure or on the receiving end of one:

| Lens | Question for Methodology |
| --- | --- |
| Actors | Who designed this? Who was consulted? Who benefits from this definition? Who loses? Whose voices were absent? |
| Conditions | What assumptions does this rely on? What has to be true for this to work fairly? What if those conditions change? |
| Trade-offs | What’s sacrificed to get this simplicity or measurability? Whose reality is erased to make the numbers clean? |
| Scope | Does this claim universality it doesn’t have? Is “average” really representative, or is it a fiction? |
| Scale | Does this work the same for individuals vs. aggregates? Can a measure that’s accurate in aggregate be unfair to individuals? |
| Mechanism | HOW does this methodology translate into consequences? What’s the causal chain from measurement to outcome? |
| Sequence | What had to happen first? Who had power at the moment of design? Can the methodology be revised, or is it locked in? |

The Distinction from “Structures Constrain Outcomes”

| Note | Focus | Key Question |
| --- | --- | --- |
| Structures Constrain Outcomes Independent of Merit | Navigating existing structures — individual effort operates within limits set by the structure | “Given this structure, what can I achieve?” |
| This note | Designing structures — consequences are distributed before anyone acts | “Who decided the structure would work this way, and whose interests does that serve?” |

The first is about playing within rules. This is about who writes the rules.

Both matter. But they require different responses:

  • If you’re navigating a structure: understand its constraints, work within or around them
  • If you’re designing a structure (or can influence its design): recognize that your choices distribute consequences, apply the Seven Lenses, seek input from affected parties

Common Trap

Trap: Treating methodology as “just technical” — separate from politics or values. Assuming that because experts made the decision using data, it must be neutral.

Fix: Recognize that every methodological choice answers the question “whose reality counts?” That question is inherently political. Data and expertise inform the choice but don’t make it neutral. The choice still distributes consequences.

The tell: When someone says “that’s just how it’s measured” as if that ends the conversation — that’s the moment to ask “who decided to measure it that way, and who benefits?”


North: Where this comes from

East: What opposes this?

South: Where this leads

West: What’s similar?