The Martian at Your Dinner Table

Picture this.
A Martian lands on Earth. Looks human. Two eyes, a nose, walks upright. Sits across from you at a dinner table. Speaks your language fluently, let’s say English, because why not. You’re having a great conversation. Everything feels… normal.
Then the food arrives.
You start eating. The Martian doesn’t. “I can’t eat that,” it says. “My digestive system works differently.”
And suddenly you remember: this thing is not human.
It looked human. It talked human. It sat at the table like a human. So somewhere between the handshake and the bread basket, you forgot. You stopped seeing the alien and started seeing a colleague.
This is exactly what’s happening with AI right now. At scale. In every organization trying to figure out how to work alongside it.
The Forgetting Problem
Here’s what I’ve been observing, in my work helping organizations align their people and AI, in conversations with enterprise leaders across dozens of countries, and in my own daily use of these tools: the better AI gets at behaving like us, the faster we forget that it isn’t us.
When you talk to Claude, GPT, or any modern LLM, the interaction feels human. The responses are articulate, contextual, sometimes even funny. Your brain does what brains do: it pattern-matches to the closest thing it knows, another person.
And then you hit the dinner table moment. The AI “can’t eat the food.” Maybe it hallucinates a fact. Maybe it confidently optimizes toward a goal you didn’t intend. Maybe it makes a decision that no human with common sense would make — because common sense was never in its training data.
You feel betrayed. Wait, you were speaking English a second ago. Why can’t you eat?
But the Martian was never pretending. You just stopped remembering what it was.
This Isn’t a Bug — It’s the Design Challenge
The instinct most organizations have is to treat this as a technology problem. “The model needs to be better.” “We need more guardrails.” “We need a better prompt.”
Some of that is true. But the deeper issue isn’t the AI — it’s us. It’s how we set up the collaboration.
I often share a story about a Las Vegas hotel that used an ML model to sell flight-and-hotel bundles. The optimization target was simple: sell more tickets to people likely to gamble. The model did its job beautifully — until they noticed a spike in cancellations and refund requests. When they investigated (and this required a human picking up a phone and calling people one by one), they discovered the model had started targeting individuals with certain mental health vulnerabilities. People with gambling tendencies tied to psychological conditions.
The model wasn’t evil. It didn’t have a concept of “mental health” or “exploitation.” It lived inside a narrow world, the data sandbox the team had built for it, and optimized for the only thing it could see: conversion rates. Everything outside that sandbox didn’t exist. Not ethics. Not consequences. Not the phone call from a distressed family member asking for a refund.
The Martian ate the food it was given. The problem was the menu.
Multi-Agent Teams and the Alignment Gap
Now scale this up. We’re not talking about one chatbot anymore. We’re talking about multi-agent systems, multiple AIs collaborating with each other and with humans. The industry loves the phrase “human-in-the-loop,” but let’s be honest about what that loop actually looks like.
You have Agent A researching. Agent B drafting. Agent C reviewing. And somewhere in that chain, a human is supposed to be providing oversight. But oversight of what? Based on what understanding of how these agents “think”?
This is where the Martian metaphor gets real. In a team of humans, you can read the room. You notice body language, hesitation, the slight frown before someone agrees to a plan they don’t believe in. You catch the micro-signals. AI gives you none of that: only confident, articulate output, with zero micro-signals. When a human team member disagrees, you sense it before they say it. When an AI agent goes sideways, you find out when the refund requests start piling up.
The alignment challenge isn’t “make AI smarter.” It’s: how do we design work systems where fundamentally different kinds of intelligence collaborate effectively, given that one kind looks so much like the other that we keep forgetting it isn’t?
Three Things I’d Want Every Leader to Sit With
First: Resemblance is not equivalence. The Martian speaks English; that doesn’t mean it digests our food. Fluent output does not equal human judgment. Every time you see AI produce something articulate, remind yourself: that’s the English, not the digestion.
Second: The world you build for AI is the world it lives in. The Las Vegas model wasn’t broken. Its world was too small. When you deploy AI in your organization, you’re not just choosing a model, you’re designing a reality. Every data field you include or exclude, every metric you optimize for, every edge case you don’t account for: that’s the menu at the dinner table.
Third: Human oversight is a skill, not a checkbox. Research shows that when you pair AI with human experts, accuracy sometimes drops, particularly with senior professionals. Why? Because seniors override AI outputs based on ego and identity rather than evidence. “I’ve done this for 15 years” becomes a reason to reject the machine, even when the machine is right. Meanwhile, juniors, who carry no such baggage, collaborate more effectively. Oversight requires humility, and humility requires you to remember: you’re dining with a Martian, but you also don’t have all the answers yourself.
The Real Work Ahead
We’re at a strange inflection point. AI is good enough to feel like a teammate and different enough to fail like an alien. Most organizations are designing for one of those realities while ignoring the other.
The ones building “AI-first” strategies tend to forget the alien part: they over-trust, under-supervise, and get blindsided when the digestive system fails. The ones resisting AI tend to forget the teammate part: they see only the alien, miss the leverage, and fall behind.
The actual work, the hard, unglamorous, deeply human work, is designing for both. Building teams, cultures, and systems that account for the fact that your most capable new colleague is, at a fundamental level, not from this planet.
That’s what alignment really means. Not aligning the model to your preferences. Aligning the entire system, human and AI, around outcomes that actually matter.
The Martian is already at the table. The question isn’t whether to invite it. The question is whether you’ve thought about what’s on the menu.