Welcome

Evidence under uncertainty

Unbiased Machine is a set of practical tools for clear thinking and honest disagreement under uncertainty—especially when self-serving instincts and social dynamics distort judgment.

For years I’ve been writing and teaching about what happens when people try to get closer to the truth: scientific method, technology, argumentation—and the layer that often breaks conversations: incentives and identity.

I kept seeing the same pattern around high-stakes topics—sentience, transhumanism, AI, existential risk, alignment. These discussions are rarely neutral curiosity. They often trigger predictable failure modes: motivated reasoning, identity protection, and status dynamics. Not always out of bad faith; often it’s just human cognition doing what it evolved to do.

My working hypothesis (fallible, revisable) is that some of the most consequential biases aren’t the usual list of fallacies. They’re deeper: existential, social, and axiological biases—pressures to protect belonging, narratives, status, or a sense of control, even when clarity is the cost.

Where AI becomes relevant is not only as a topic, but as a tool. Used carefully, modern language models can support better disagreement: making assumptions explicit, separating object-level claims from meta-level disputes, generating steelman + best-critique pairs, testing alternative framings, and surfacing ambiguity early. This is not a guarantee of truth—LLMs can also amplify rationalization—but it is an empirical opportunity: we can build workflows that make intellectual honesty easier and escalation less likely.

Many materials on this site are drafted and iterated with AI assistance, with a simple goal: keep the tools short, usable, and testable (checklists, prompts, protocols you can try in real conversations, then refine based on outcomes).

This won’t manufacture certainty. It can, however, improve the odds of genuine progress, and reduce the odds of failures built on persuasive narratives.

 

What you’ll find here

  • Case studies from public discourse (politics, science, tech, AI): claims, evidence, omissions, incentives.
  • Communication techniques that clarify or distort: framing, ambiguity, false dichotomies, and virality-optimized “truths.”
  • Classic cognitive biases and deeper background biases (identity, existential, axiological)—and how to notice when they’re steering the conversation.
  • A unifying thread: how to think under uncertainty without turning disagreement into tribal warfare.
  • A recurring obsession: axiology (what is valuable and why) and why it matters for AI alignment.

 

Two healthy warnings

  • This is not a certainty factory. It’s a workshop for better questions.
  • The biggest risk isn’t that “others are biased.” It’s believing you are not. Expect corrections, caveats, and updated views when the evidence or reasoning demands it.

 

Vision

If artificial intelligence helps us debate rigorously, see our own biases, and understand the world better, it can also address what currently blocks alignment: coordination among humans. The technical solution may be difficult and cooperation unlikely, but a more enlightened humanity, supported by cognitive tools (and perhaps, one day, by biotechnology), makes that “unlikely” more attainable.

 

Subscribe

If you want these case studies and—more importantly—the reusable tools behind them, subscribe on my Substack. The goal is that each post leaves you with something applicable beyond the article: a better question, a quick check, a sharper mental model.