Power of Prophecy

Source: Ep478, The Power of Prophecy, Firewalls Don’t Stop Dragons podcast

Here is a summary of Carey's interview with Carissa Véliz, associate professor at the Institute for Ethics in AI at the University of Oxford and author of Prophecy: Prediction, Power and the Fight for the Future—From Ancient Oracles to AI.

The Central Thesis: Prophecy as Power, Not Truth

Carissa's core argument, developed throughout the interview, is that prophecy has historically functioned not to predict the future but to determine it—a mechanism of power disguised as foreknowledge. From the Oracle of Delphi to modern AI executives, those who claim special insight into what is coming often shape what comes to pass. The Pythia's inconvenient prophecies could be altered with additional offerings; emperors like Augustus manufactured prophecies retrospectively to legitimise seized power. Carissa draws a direct line to today: tech executives like Sam Altman and Dario Amodei function as modern court astrologers, whispering visions of inevitable AI dominance into the ears of political leaders while serving financial interests.


AI as Prediction Machine—and Its Fundamental Limits

Large language models are, at base, sophisticated word prediction engines. Carissa emphasises that they do not track truth but generate statistically plausible responses. When asked which books she has written, ChatGPT might invent a plausible-sounding title like The Importance of Privacy rather than consulting a database. This probabilistic nature means systematic failure: asked to count the letter 'r's in "strawberry," it answers two rather than three because it operates on pattern association, not actual calculation.
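
The contrast between pattern association and actual computation is easy to make concrete. Here is a minimal Python sketch of the deterministic counting an LLM does not perform when answering from statistical association alone (the function name is mine, purely for illustration):

```python
# Deterministic string counting: exact computation, not pattern recall.
# A purely statistical text predictor has no equivalent of this step.
def count_letter(word: str, letter: str) -> int:
    """Return the exact number of occurrences of `letter` in `word`."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```

The point is not that the task is hard; it is that a system generating "statistically plausible" tokens never executes anything like this procedure.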

More troublingly, predictions about humans become self-fulfilling. If an algorithm deems someone unemployable, and most hiring systems draw from similar training data, that person may never secure work. The algorithm then claims accuracy—but this accuracy reflects engineered reality, not discovered truth. Carissa calls this "creating the reality they're purporting to predict." The same dynamic operates in criminal justice with bail, sentencing and parole decisions.
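
That feedback loop can be sketched in a few lines. The simulation below is hypothetical (the numbers and names are mine, not from the interview): candidates flagged by a shared model are never hired, so the model's measured "accuracy" on the flagged group is perfect by construction:

```python
import random

random.seed(0)

def simulate(candidates: int, flag_rate: float) -> float:
    """Toy model of a self-fulfilling hiring prediction.

    A shared algorithm flags a fraction of candidates as unemployable.
    Because every employer uses the same model, flagged candidates are
    never hired, and the flag is then "confirmed" by the outcome it caused.
    """
    flagged = {i for i in range(candidates) if random.random() < flag_rate}
    hired = {i for i in range(candidates) if i not in flagged}
    # Measured "accuracy": every flagged candidate did end up unemployed.
    confirmed = sum(1 for i in flagged if i not in hired)
    return confirmed / len(flagged) if flagged else 1.0

print(simulate(1000, 0.2))  # 1.0: the prediction manufactured its own accuracy
```

The 100% accuracy is engineered reality, exactly the dynamic Carissa describes: the metric cannot distinguish a correct forecast from a forecast that caused its own outcome.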

The Ideology Behind the Hype: Effective Altruism and Long-Termism

One of the most significant sections addresses the philosophical commitments of prominent AI figures. Carissa explains:

  • Effective altruism: Originating among philosophers at Oxford and San Francisco, it holds that altruism should be maximally efficient. While intuitively appealing, its utilitarian foundations lead to disturbing implications. The classic dilemma: a doctor might maximise utility by killing one homeless patient to harvest organs for five transplant recipients.
  • Long-termism: An extension prioritising hypothetical future populations over present ones. Effective altruists project trillions of future people, arguing their wellbeing outweighs billions alive today. This justifies spending enormous resources on speculative "existential risk" from AI rather than addressing current suffering.

Carissa notes this ideology serves tech executives perfectly: it positions them as indispensable protectors against dangers they themselves create. She identifies "a whiff of an abusive relationship"—the pattern of inflicting harm then offering salvation from it. She compares this to the Brahmin priests of ancient India, who monopolised access to divine knowledge, and to Professor Harold Hill in The Music Man, who manufactures crises to sell solutions.

Statistical Methods and the Erasure of Outliers

Carissa critiques the reliance on statistical models and normal distributions. "Some of the most consequential things in life are due to outliers"—the ant colony that survives because fringe members find paths around obstacles, the visionaries who transform society precisely because they do not fit patterns. Apple's "Here's to the Crazy Ones" campaign, narrated by Steve Jobs, celebrates these figures. Yet algorithmic systems systematically discriminate against outliers, selecting for average candidates when innovation demands exceptional ones. The more we automate selection, the more we "choose average people" and suppress the very diversity that drives progress.
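
A toy example makes the selection effect visible. The scores and the one-standard-deviation threshold below are invented for illustration; the point is only that any "stay close to the mean" rule discards both tails, including the exceptional one:

```python
import statistics

# Hypothetical screening rule: keep only candidates within one standard
# deviation of the mean score. Both outliers are filtered out, the
# exceptional candidate (95) along with the poor one (5), and only the
# average candidates survive.
scores = [50, 52, 48, 51, 49, 95, 5]
mean = statistics.mean(scores)        # 50.0
sd = statistics.pstdev(scores)        # about 24.1
kept = [s for s in scores if abs(s - mean) <= sd]
print(kept)  # [50, 52, 48, 51, 49]
```

A rule like this is symmetric by design: it cannot keep the visionary while dropping the failure, because both look identical as deviations from the norm.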

Data, Objectivity, and Bureaucratic Violence

Several striking quotations from Carissa's book punctuate the discussion:

  • "Data is silent. We make it speak, but there's always more than one way to interpret it."
  • "If you torture data long enough, it will confess to anything."

She traces our numerical trust to bureaucratic history: as societies scaled beyond face-to-face trust, measurement became necessary for taxation and trade. But this generated what Carissa calls "bureaucratic violence"—the forcing of complex human realities into predefined categories. The rural French worker who is simultaneously carpenter, ironworker and cook must become merely "carpenter" to fit the system. Those who cannot be categorised are excluded. We trust numbers because we do not trust people, forgetting that people create the numbers.

Prediction Markets: Gambling Masked as Investment

Carissa delivers a scathing critique of platforms like Kalshi. These markets allow betting on any future event—from geopolitical conflicts to celebrity pronouncements—while presenting themselves as financial instruments. She distinguishes them sharply from stock markets, where investment actually capitalises companies. Prediction markets contribute nothing; they are "essentially gambling" dressed in respectable language.

The ethical problems are already manifest:

  • Politicians betting on themselves to manipulate public perception
  • Content creators wagering on their own shows while controlling editorial decisions
  • Anonymous accounts created hours before the Iran strike, earning millions—suggesting insider trading
  • Prediction markets paying journalists for coverage
  • Groups threatening journalists to alter reports about Israeli strikes, attempting to win bets retrospectively

Hybrid Intelligence and the Path Forward

Despite her critiques, Carissa sees promise in hybrid systems combining neural networks with symbolic reasoning. When an LLM recognises an arithmetic query, forwarding it to an actual calculator produces reliable results. This mirrors how human cognition integrates empirical observation with causal understanding. Pure correlation-based systems fail because they cannot distinguish correlation from causation; pure symbolic systems struggle with complexity. The synergy offers a better model.
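
A minimal sketch of that routing idea (the regex gate and evaluator here are my own illustration, not anything from the interview or any production system):

```python
import ast
import operator
import re

# Toy hybrid router: arithmetic-looking queries go to a symbolic
# evaluator; everything else would be forwarded to the language model.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(node):
    """Recursively evaluate a parsed arithmetic expression tree."""
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
        return -evaluate(node.operand)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    raise ValueError("not pure arithmetic")

def answer(query: str) -> str:
    expr = query.strip().rstrip("?").strip()
    if re.fullmatch(r"[\d\s+\-*/().]+", expr):   # looks like arithmetic
        return str(evaluate(ast.parse(expr, mode="eval").body))
    return "<forward to language model>"          # natural-language query

print(answer("12 * (3 + 4)"))  # 84
```

The calculator branch is deterministic and exact; the language-model branch is probabilistic. The hybrid design sends each query to the component whose failure modes it avoids.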

Resistance and the Philosophical Stance

Carissa explicitly rejects the Cassandra role. She reports mixed success influencing policy: her previous book, Privacy is Power, placed banning the trade in personal data onto the public agenda, though no government has yet acted. Some companies have fundamentally changed data practices; others merely "check a box." She finds satisfaction in integrity regardless of outcome, criticising the utilitarian obsession with consequences we cannot predict.

Her closing advice centres on cultivating productive uncertainty:

  • Recognise uncertainty as "good news"—evidence you do not live in a police state
  • Hear predictions as predictions, not facts
  • Treat "this is the future" and "it's inevitable" as conversation-stoppers demanding challenge
  • Demand more from governments, companies and prophets
  • Build resilient institutions rather than chasing cheap, lenient solutions
  • Return to analog foundations: beauty, justice, truth, sincere human relationships

She invokes Epicurus and the Greek legacy: just as philosophy grew up alongside prophecy as the antidote to its poison, critical questioning remains our defence against deterministic visions. "The stuff of life is mostly analog"—a point Carey echoes in recommending her TED talk on the subject.

The interview concludes with Carissa's refusal to plan her next project, embodying her own philosophy: "I never make plans that far ahead." Energy spent worrying about the future, she suggests, is better invested in building and enjoying the present.