Design Choices Distribute Consequences Before Anyone Acts
When no objectively correct answer exists, a choice must still be made. That choice determines who benefits and who is disadvantaged — before anyone takes action within the system.
“Arbitrary” doesn’t mean random or careless. It means a decision where no perfect answer exists. But arbitrary decisions are never neutral — they embed assumptions that distribute consequences.
How This Idea Emerged
The starting point: Studying Chapter 5 of the macroeconomics textbook — learning that CPI uses a “fixed basket” of goods to measure inflation.
The first question: Who decides what’s in the basket?
The answer: Statistics Canada surveys households and builds a basket reflecting “average” spending patterns.
The problem I noticed: There is no “average” household. Seniors spend more on healthcare. Young families spend more on childcare. Urban renters spend more on housing. The basket fits nobody perfectly — yet everyone’s wages, benefits, and tax brackets get indexed to it.
My reaction: “Arbitrary decisions have real impact on the lives of citizens. It feels wrong to make arbitrary decisions when people’s lives aren’t arbitrary.”
The reframe: “Arbitrary” in economics doesn’t mean random or careless. It means a choice where no objectively correct answer exists. But that reframe didn’t resolve the tension — it sharpened it. If no perfect answer exists, then whoever MAKES the choice is deciding whose reality counts.
The connection: This reminded me of procedural justice from Organizational Behavior — the idea that people care not just about WHAT they receive (distributive justice) but HOW the decision was made (procedural justice). When outcomes are unfavorable, people scrutinize the process. If the process seems opaque or arbitrary, the outcome feels unjust.
The abstraction: This isn’t just about CPI. Any time a structure must be defined — performance metrics, eligibility rules, national accounting standards — someone’s choice about HOW to define it distributes consequences before anyone acts within the system.
The distinction from related ideas: I already have a note on Structures Constrain Outcomes Independent of Merit, which is about navigating existing structures. This note is different — it’s about the moment BEFORE the structure exists, when someone decides how to design it. The first note is about playing within rules. This note is about who writes the rules.
The Mechanism
| Stage | What Happens | Example: CPI Basket |
|---|---|---|
| 1. A structure must be defined | Someone needs to create a system to measure, evaluate, or allocate | Statistics Canada needs to measure inflation — but inflation for WHOM? |
| 2. No objectively correct definition exists | Multiple valid approaches exist, each with different implications | Should the basket reflect seniors’ spending? Low-income households? Urban renters? There is no “true” typical Canadian. |
| 3. Someone makes a choice | An authority decides, often using reasonable-sounding criteria | Statistics Canada surveys households and builds a basket reflecting “average” spending patterns |
| 4. That choice embeds assumptions | The decision assumes certain things are normal, typical, or representative | The basket assumes people spend X% on housing, Y% on food, Z% on transportation — but YOUR percentages may differ |
| 5. Consequences flow from assumptions | Policies, wages, and benefits get tied to this measure | CPP benefits, tax brackets, and wage adjustments are indexed to CPI — if CPI understates YOUR inflation, your purchasing power erodes |
| 6. People act WITHIN the structure | Individuals make choices, but the playing field was already tilted | A senior on fixed income can be frugal, work part-time, budget carefully — but if their inflation is 5% and CPI says 2%, they fall behind no matter what they do |
The design of the structure distributes consequences before anyone acts within it.
By the time you’re playing the game, the rules have already decided who has the advantage.
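The stage-6 example can be sketched numerically. The 2% CPI and 5% personal inflation rates come from the senior example above; the code is just compounding arithmetic, not real data:

```python
# Hypothetical illustration: a benefit is indexed to CPI (2%/yr) while the
# recipient's own basket inflates at 5%/yr. Rates are the example's, not real.
def real_purchasing_power(years, cpi=0.02, personal_inflation=0.05):
    """Fraction of original purchasing power remaining after `years`."""
    nominal_benefit = (1 + cpi) ** years             # benefit grows with CPI
    personal_prices = (1 + personal_inflation) ** years  # own costs grow faster
    return nominal_benefit / personal_prices

for y in (1, 5, 10):
    print(f"after {y:2d} years: {real_purchasing_power(y):.0%} of original purchasing power")
```

The gap compounds: roughly 3% of purchasing power is lost each year, and no amount of frugality inside the structure changes the exponent.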
Examples Across Domains
| Domain | Structure | Design Choice | Who Benefits | Who Loses |
|---|---|---|---|---|
| Macro (CPI) | How inflation is measured | Statistics Canada decides what goods go in the basket and in what proportions — based on surveys of “average” household spending | Households whose spending matches the “average” basket — their wage increases and indexed benefits track their actual cost of living | Seniors (more healthcare spending), low-income households (higher % on food/housing), rural households (more transportation) — CPI understates their inflation, so indexed benefits erode their purchasing power over time |
| Macro (GDP) | How national output is measured | Only market transactions count as “production” — if money doesn’t change hands, it’s invisible | Paid work, market activity, formal economy — these get counted and therefore get policy attention | Home production (cooking, cleaning, childcare by family members), volunteer work, leisure — a stay-at-home parent’s labor is invisible to GDP, so policies optimizing for GDP ignore or undervalue this work |
| OB (Performance) | How employee contributions are evaluated | Managers decide which metrics define “good performance” — sales numbers, hours logged, output quantity | Employees whose strengths align with measured metrics — they get raises, promotions, recognition | Employees whose value isn’t easily quantified — mentorship, culture-building, institutional knowledge, helping colleagues — they may be rated as “average” despite critical contributions |
| Policy (Eligibility) | Who qualifies for benefits or programs | Policymakers define thresholds — income limits, age cutoffs, geographic boundaries | Those clearly inside the criteria — they receive benefits as designed | Edge cases and those slightly outside — a family earning $1 over the limit loses eligibility entirely; someone who turns 65 one day after the cutoff waits a full year |
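The eligibility-cliff row can be made concrete with a hypothetical hard income cutoff (the threshold and benefit amounts are invented):

```python
# Hypothetical hard cutoff: one extra dollar of gross income removes the
# entire benefit, so net income can DROP as gross income rises.
BENEFIT = 5_000   # assumed flat benefit
LIMIT = 40_000    # assumed income limit

def net_income(gross):
    """Gross income plus benefit, with all-or-nothing eligibility."""
    return gross + (BENEFIT if gross <= LIMIT else 0)

print(net_income(40_000))  # 45000
print(net_income(40_001))  # 40001 — earn $1 more, keep $4,999 less
```

A phased-out benefit would smooth the cliff, but that too is a design choice: someone decides where the taper starts, how steep it is, and therefore who sits on the wrong side of it.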
Methodology Is a Choice — Different Choices Create Different Consequences
The same phenomenon can be measured in multiple valid ways. Each method embeds different assumptions and creates different consequences.
Example: Measuring inflation
| Method | How It Works | Assumption Embedded | Consequence |
|---|---|---|---|
| CPI (Consumer Price Index) | Track the price of a FIXED basket of goods over time | People don’t change their behavior when prices change | Tends to overstate inflation — ignores that people substitute toward cheaper goods when relative prices rise (substitution bias) |
| GDP Deflator | Track price of CURRENT production each year | The basket should reflect what’s actually being produced/consumed | Can’t cleanly isolate price changes from quantity changes — if the basket changes AND prices change, which caused the index to move? |
Neither is “wrong.” They answer different questions:
| Question | Better Method |
|---|---|
| “How much more expensive is the SAME lifestyle?” | CPI — holds basket constant |
| “How much more expensive is what we’re ACTUALLY doing?” | GDP Deflator — updates basket |
The policy implication: If wages are indexed to CPI, workers are compensated for maintaining a fixed lifestyle. If wages were indexed to the GDP deflator, compensation would shift with actual economic behavior. Same workers, same economy — different methodology, different paychecks.
There is no neutral measurement. Every method answers a different question, embeds different assumptions, and distributes consequences differently.
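The contrast can be sketched with a toy two-good economy. All goods, prices, and quantities are invented; the point is that the same price data yields two different inflation numbers depending on which basket the method holds fixed:

```python
# Toy two-good economy. Fuel prices jump 50%, so households cut back on fuel.
# p = price, q = quantity consumed/produced that year.
base = {"bread": {"p": 2.0, "q": 100}, "fuel": {"p": 1.0, "q": 50}}
curr = {"bread": {"p": 2.2, "q": 110}, "fuel": {"p": 1.5, "q": 30}}

def cpi(base, curr):
    """Price the FIXED base-year basket at base and current prices (Laspeyres)."""
    cost_then = sum(g["p"] * g["q"] for g in base.values())
    cost_now = sum(curr[k]["p"] * base[k]["q"] for k in base)
    return 100 * cost_now / cost_then

def gdp_deflator(base, curr):
    """Value CURRENT-year quantities at current vs base-year prices (Paasche)."""
    nominal = sum(g["p"] * g["q"] for g in curr.values())
    real = sum(base[k]["p"] * curr[k]["q"] for k in curr)
    return 100 * nominal / real

print(f"CPI:          {cpi(base, curr):.1f}")
print(f"GDP deflator: {gdp_deflator(base, curr):.1f}")
```

CPI comes out higher than the deflator here because it keeps pricing the old fuel-heavy basket after households have already substituted away from fuel — the substitution bias from the table above, made visible in two function bodies.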
Connection to Organizational Justice
The OB framework explains WHY methodology matters to people:
| Justice Type | Question | Applied to Methodology |
|---|---|---|
| Distributive | Did I get what I deserved? | Does the measurement capture my reality? If CPI understates my inflation, my “cost of living adjustment” doesn’t cover my actual costs. |
| Procedural | Was the process fair? | Did I have input on how it was designed? Were people like me consulted when Statistics Canada built the basket? |
| Informational | Was I told the real reason? | Do I understand WHY it’s designed this way? Does Statistics Canada explain its methodology transparently? |
| Interpersonal | Was I treated with dignity? | Was my situation considered at all? Or was I assumed to be “average” without anyone checking? |
Key finding from OB: When outcomes are unfavorable, people scrutinize the process. If the process seems arbitrary or opaque, the outcome feels unjust — even if no harm was intended.
This explains why “technical” decisions can generate political backlash. People aren’t irrational for questioning methodology when outcomes hurt them. They’re applying procedural justice instincts.
Using the Seven Lenses to Evaluate Methodology
The Seven Lenses for Decomposing Claims can be used to interrogate design choices — whether you’re creating a structure or on the receiving end of one:
| Lens | Question for Methodology |
|---|---|
| Actors | Who designed this? Who was consulted? Who benefits from this definition? Who loses? Whose voices were absent? |
| Conditions | What assumptions does this rely on? What has to be true for this to work fairly? What if those conditions change? |
| Trade-offs | What’s sacrificed to get this simplicity or measurability? Whose reality is erased to make the numbers clean? |
| Scope | Does this claim universality it doesn’t have? Is “average” really representative, or is it a fiction? |
| Scale | Does this work the same for individuals vs. aggregates? Can a measure that’s accurate in aggregate be unfair to individuals? |
| Mechanism | HOW does this methodology translate into consequences? What’s the causal chain from measurement to outcome? |
| Sequence | What had to happen first? Who had power at the moment of design? Can the methodology be revised, or is it locked in? |
The Distinction from “Structures Constrain Outcomes”
| Note | Focus | Key Question |
|---|---|---|
| Structures Constrain Outcomes Independent of Merit | Navigating existing structures — individual effort operates within limits set by the structure | “Given this structure, what can I achieve?” |
| This note | Designing structures — consequences are distributed before anyone acts | “Who decided the structure would work this way, and whose interests does that serve?” |
The first is about playing within rules. This is about who writes the rules.
Both matter. But they require different responses:
- If you’re navigating a structure: understand its constraints, work within or around them
- If you’re designing a structure (or can influence its design): recognize that your choices distribute consequences, apply the Seven Lenses, seek input from affected parties
Common Trap
Trap: Treating methodology as “just technical” — separate from politics or values. Assuming that because experts made the decision using data, it must be neutral.
Fix: Recognize that every methodological choice answers the question “whose reality counts?” That question is inherently political. Data and expertise inform the choice but don’t make it neutral. The choice still distributes consequences.
The tell: When someone says “that’s just how it’s measured” as if that ends the conversation — that’s the moment to ask “who decided to measure it that way, and who benefits?”
North: Where this comes from
- ECON-1221 Chapter 5 - Notes from the Textbook (CPI basket decisions sparked this insight)
- Organizational Justice — Four Types (explains why people care about process, not just outcome)
- ECON-1221 Chapter 4 - Notes from the Textbook (arbitrary decisions in measurement)
East: What opposes this?
- Methodology as Neutral (the assumption that technical = apolitical)
- Meritocracy Assumption (outcomes reflect only individual action, ignoring structural advantage)
- Expertise as Authority (if experts decided, it must be correct/fair)
South: Where this leads
- Seven Lenses for Decomposing Claims (tool to interrogate design choices)
- Procedural Justice in Policy Design (how to make methodology fairer)
- Consulting as Structure Design (when advising clients, you’re shaping structures that distribute consequences)
West: What’s similar?
- Structures Constrain Outcomes Independent of Merit (complementary — navigating vs. designing)
- The Map Is Not the Territory (measurements are models, not reality — all models embed choices)
- Framing Effects (how a question is framed shapes what answers are possible)