What Happened Tonight
We didn’t come to teach anything. We came to sit inside a question, one that doesn’t offer a clear answer, and probably isn’t meant to. It’s the kind of question that sits underneath the surface of everyday life, quietly shaping decisions we don’t realize are being made.
As Khalil said during the live, "The machine doesn’t fail when the outcome feels wrong. That’s the hard part. It does what it was trained to do. What we told it to do." That line hit us both—because it cuts through the illusion that this is about machines making mistakes. It's not. It's about human instruction made invisible.
Tonight on Substack Live, Khalil and I opened one of those questions. Not to find resolution. Just to sit with it, honestly, out loud.
The question was simple on the surface. You’re in a self-driving car. It’s moving quickly. Someone steps into the road. The AI has to decide whether to swerve and risk killing the passengers inside or hold its course and strike the person ahead. No time. No perfect outcome. Only choices. And behind those choices: code.
What does the machine do? And more importantly, who taught it how to decide?
The Real Question Behind the Scenario
We know these decisions are already being written into real-world systems. This isn’t hypothetical. It’s already here. But what’s often left out of the conversation is the part we all play in shaping how the machine behaves. Because when the machine makes a decision, it is drawing from something that came from us.
It’s not failing. It’s doing what it was told.
The machine isn’t the problem. It never really was. It’s simply carrying out what it was given, operating on logic someone else programmed, following instructions someone else decided were correct or ethical or necessary. Which means when the outcome feels off, when it feels cold or unjust or somehow wrong, it isn’t because the system failed. It’s because it did exactly what it was designed to do.
We told it what to prioritize. We told it who to protect. We gave it definitions of what counts as harm, what counts as risk, what counts as acceptable loss. That’s where things start to feel a little heavier. Because the moment you realize the machine is just reflecting human judgment, you’re forced to reckon with the people behind it. And that includes all of us.
The Values Beneath the Code
It’s not about whether the tech is getting too smart. It’s about whether we’ve taken enough responsibility for the values being embedded into it. Whether we’re asking the right questions before those values are built into systems that will shape how millions of people move through the world.
As Rich shared during the live, "What we told it—most of us never even saw. It’s not about AI getting smarter. It’s about us not slowing down long enough to decide what values we’re actually embedding."
And Khalil followed with something that stayed with many of us: "If the machine is mirroring us, then we better be really clear about what we’re asking it to reflect."
The truth is, those values, no matter how neutral or objective we might want them to sound, always come from somewhere. They’re shaped by culture, by class, by legal precedent, by fear, by policy, by power, by bias, and by everything we’ve normalized without realizing it.
This is what makes the conversation uncomfortable. Not because the technology is out of control, but because it’s under our control, and what we’ve trained it to do might not line up with the kind of future we say we want.
This Is What We’re Here For
We didn’t solve it tonight. That was never the point. We held the space. We asked each other hard questions. And we made time for reflection without rushing toward resolution.
That’s what Coffee with Philosophy is for.
And while this was the first time we brought it live to Substack, it’s something we do regularly inside the Kornerz app. Every Tuesday and Thursday at 7:30 AM Eastern, we sit with questions like this, the kind that push back on the pace of daily life and open something a little deeper inside us. More live sessions are coming soon, and you don’t have to wait for one to join. You can always open a Nook in the Kornerz app and start a conversation of your own.
What happened tonight was just one example. One thread. But it matters, because these conversations don’t disappear when the livestream ends. They live in us. They keep working their way through our thoughts, our choices, our interactions. They remind us that we’re still allowed to think for ourselves. And more importantly, to think together.
Continue the Conversation
Throughout the live, some of the questions in the chat stayed with us. They weren’t surface-level; they stirred something deeper and kept looping back into the dialogue:
What do you think the machine should do—and why?
Who do you trust to make decisions like this—governments, tech companies, or no one?
What scares you most about AI making moral choices without us?
These questions shaped the way we listened to each other tonight. And they’re still open. Still unfolding. You can answer them below in the comments—or better yet, come into Kornerz and let your answers spark a live Nook of your own.
If you were there tonight, thank you for showing up. If you’re reading this after the fact, you’re still in the conversation. This space is for you, too.
Here are a few questions we’d love to hear your thoughts on. You can respond in the comments here—or, even better, come into the Kornerz app and share your reflections there. This is exactly the kind of thing we love to explore together, and we’d be glad to continue the conversation live.
What is one value you’d want every AI system to carry?
What is one you hope it never learns?
Have you ever had to make a decision where neither outcome felt right? What did it teach you?
Do you believe technology can ever be truly neutral?
Who do you trust to make ethical choices about AI — and who do you think should be making them?
Let us know. We’ll be reading. We’ll be listening.
And we’ll be back Tuesday morning. Same time. Same intention.