I learned something in my macroeconomics class that I haven’t been able to stop thinking about.

The consumption function — the thing that tells you how much people spend — has this term in it called autonomous consumption. It’s all the stuff that drives spending independent of your current income. Expectations about the future, wealth effects, credit conditions, consumer confidence. It all gets packed into this one variable.

And the thing about autonomous factors is that they’re assumed. They’re not derived. They’re the part of the model you plug in before you run the math.

Which means by the time you see the output, the conclusion was already shaped by whatever someone decided to stuff into that box.
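In the standard Keynesian form this is C = a + c·Yd, where a is autonomous consumption and c is the marginal propensity to consume. A toy sketch with made-up numbers — the parameter values here are invented purely for illustration:

```python
# Keynesian consumption function: C = a + c * Yd
#   a  = autonomous consumption -- the assumed-not-derived input
#   c  = marginal propensity to consume (0 < c < 1)
#   Yd = disposable income

def consumption(a: float, c: float, yd: float) -> float:
    """Aggregate consumption given an assumed autonomous term."""
    return a + c * yd

# Same income, same propensity to consume -- the only difference is
# what someone decided to stuff into the autonomous box:
gloomy  = consumption(a=500.0,  c=0.75, yd=10_000.0)   # 8_000.0
buoyant = consumption(a=1500.0, c=0.75, yd=10_000.0)   # 9_000.0
```

The math downstream is identical in both runs; the conclusion moved because the assumption did.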

The Vibecession Isn’t New

When I started reading about the “vibecession” — this idea that consumer sentiment diverged from economic fundamentals in 2022 and 2023 — I was kind of underwhelmed. Because the macroeconomic models already account for this. The consumption function includes expectations. Keynes called them animal spirits. Friedman’s permanent income hypothesis says people consume based on what they expect their lifetime income to be. The idea that feelings affect spending decisions is literally chapter material.

So what did Kyla Scanlon actually discover?

I think the honest answer is: she named a specific instance of a pattern the models were already capable of producing. If you plug in the right inputs — divergent baselines between institutional reporting and lived experience, partisan filtering of economic data, the distinction between inflation rates and price levels — the models generate the vibecession as an output. The toolkit was there. The profession just wasn’t running that particular simulation.

And that’s a useful observation. But it’s applied work, not theoretical work. And I think the distinction matters.

The Real Question Is About the Inputs

My professor talks about this — how when you’re building a macroeconomic model, the assumptions are where the real argument lives. By the time you’re running the math, the conclusion is largely determined by what you assumed going in.

So the question isn’t really “why did vibes diverge from data?” The question is: were the inputs honest?

And then you have to ask — honest to whom?

This is where it gets uncomfortable. “Inflation is cooling” and “prices are 20% higher than three years ago” are both true statements. They’re answering different questions. The inflation rate tracks the change in the CPI — how much more expensive the same basket is this year versus last year. The CPI itself is the price level. But nobody lives in rate-of-change reality. People live in price-level reality. Eggs cost what they cost. And no amount of “inflation is coming down” makes the eggs cheaper.
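To make both statements concrete with hypothetical numbers — the rates below are invented, but the arithmetic is the general point: cooling inflation still compounds into a much higher price level.

```python
# Hypothetical annual inflation rates: cooling every year.
rates = [0.08, 0.06, 0.03]  # 8%, then 6%, then 3%

# Compound the rates into a cumulative price level (start = 1.0).
price_level = 1.0
for r in rates:
    price_level *= 1 + r

# The rate-of-change headline reads "inflation is down to 3%".
# The price-level reality: the same basket costs ~17.9% more
# than it did three years ago. Both are true.
cumulative = price_level - 1
```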

Someone chose which number to put in the headline. That’s not a neutral act. That’s a design choice — and design choices distribute consequences before anyone acts within the system.

Every Comparison Has a Hidden Baseline

I keep coming back to this principle I’ve been developing in my notes: every evaluation is a comparison, every comparison has a baseline, and the baseline is usually invisible.

“The economy is strong.” Compared to what? Compared to 2020? Sure. Compared to 2019 purchasing power? Maybe not. Compared to what people expected their lives would look like? That’s a different conversation entirely.

The vibecession isn’t people being irrational. It’s two baselines producing two different conclusions. The institutional baseline says year-over-year inflation rate. The public’s intuitive baseline says what my groceries used to cost. Neither is wrong. But whoever picked the institutional baseline as the one that gets reported — they controlled the narrative.
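The same point in numbers — one hypothetical price index, two baselines, two headlines. The index values here are invented for illustration, not real CPI data:

```python
# Invented CPI-style index values (2019 = 100).
index = {"2019": 100.0, "2022": 115.0, "2023": 119.6}

# Institutional baseline: year-over-year rate of change.
yoy = index["2023"] / index["2022"] - 1        # ~+4% -- "inflation is cooling"

# Intuitive baseline: what things cost before the run-up.
vs_2019 = index["2023"] / index["2019"] - 1    # ~+19.6% -- "everything is expensive"
```

Same data, same arithmetic. The only editorial decision is the denominator — and that decision is the narrative.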

And here’s the part that I think a lot of people miss. The choice of which metric to foreground isn’t just a technical decision. It’s a normative one. “Unemployment is 4%” — okay, but why unemployment and not underemployment? Why not the labor force participation rate? The selection of what to measure, what to highlight, what to compare against — all of these choices reflect judgments about what’s relevant, important, or meaningful.

The positive/normative distinction that we learn in first-year economics is pedagogically useful. But it’s not absolute. The moment you choose what to measure, you’ve made a value judgment about what matters.

The Cost Transfers to People Who Didn’t Choose It

When a modeler buries assumptions in autonomous factors, or a reporter leads with the convenient metric, or a politician frames the data to serve their narrative — the cost of that shortcut doesn’t vanish. It transfers.

It transfers to people making real financial decisions based on a framing they didn’t choose and probably don’t even recognize as a framing. It transfers to voters forming opinions about policy based on a baseline they can’t see. It transfers to everyone downstream of the narrative who doesn’t have the tools or the time to decompose what they’re being told.

The people bearing the cost didn’t choose the shortcut. And that’s the part that gets to me.

But Here’s Where I Get Stuck

I want to say “people should think more critically about this.” And I believe that. But then I have to be honest about something.

I’m a full-time student. This is literally what I’m studying. I have the luxury of sitting with these models long enough to develop frameworks for interrogating them. And it still takes me a long time.

I think about a single parent working two jobs. A tradesperson with a full schedule. Someone managing a household and aging parents and their own health and the daily machinery of just getting through the week. Are they supposed to run seven analytical lenses on every economic claim they encounter? Should they be asking “who chose this baseline?” every time they read a headline?

The structural reality is that most people don’t have the time, the access, or the training to do this kind of decomposition. And it’s not because they lack the intelligence. It’s because the structure of their lives doesn’t permit it. Telling them to “think critically” is an individual solution to a structural problem.

Which means narratives aren’t just laziness. They’re kind of a rational adaptation to a structural constraint. When you don’t have time to run the full analysis, you reach for a pre-packaged conclusion. The vibecession. The soft landing. Whatever the pundit said this morning. A label converts uncertainty into something that feels like understanding.

And that’s the illusion of competence operating at a societal scale. Recognition masquerading as recall. You read the label, it matches your feeling, you nod along, and now you feel like you understand what’s happening in the economy. But you couldn’t reconstruct the argument from scratch if someone asked you to.

The narrative didn’t reduce the actual uncertainty. The economy is still doing whatever it’s doing regardless of what you call it. The narrative just made you feel less uncertain. Which might actually be worse — because now you’ve stopped investigating.

The Contradiction I Can’t Resolve

So then — what? If people can’t do the work, and narratives are a structural inevitability, and the cost of simplified framing transfers to the people consuming it — what’s the point of recognizing any of this?

I went through a loop trying to answer that. It went something like: bias exists, so you should detect it. But detection takes time and tools most people don’t have. So you have to trust the people providing information. But trust requires assuming they don’t have perverse incentives. Which you can’t verify. So you accept bias. But if you accept bias, why bother detecting it? But you can’t not detect it because it affects decisions. But do decisions even matter if the information feeding them is biased anyway?

I don’t think that loop resolves cleanly. I think it’s a tension you manage, not a problem you solve.

And then I hit the final wall. Self-determination theory. People prefer to feel they have control over their actions. Turning something into an obligation reduces motivation. There’s plenty of research supporting this.

Everything I’ve described — the lenses, the baselines, the bias detection, the methodology critique — the moment you turn any of it into “you should do this,” you’ve killed the intrinsic motivation that makes it actually work. You’ve turned critical thinking into homework. And people resist homework.

Which means you can’t teach this. Not directly. Not as a framework you hand someone and say “here, use this.” The second it feels imposed, it gets rejected or complied with superficially. Either way, you lose.

Where That Leaves Me

I don’t have a clean answer.

I think the honest position is something like: you can’t make people think critically. You can’t even tell them to. You can only be visibly doing it yourself and let people who are ready for it find their way there on their own terms.

And for yourself — you don’t have to resolve every tension. Not every insight needs to become a system. Sometimes you just notice something real about how the world works, and that’s enough.

The uncertainty doesn’t go away. But I’m starting to think the uncertainty is the point. The people who feel certain about economic narratives — who consume a label and feel like they understand — those are the ones who should probably worry.

I’d rather sit with the discomfort of not knowing than the comfort of thinking I do.