Empathy for the Machine: How LLMs May Make Us More Human

AI isn’t replacing human thinking. It’s exposing how often we weren’t thinking clearly to begin with.

Working with these systems has made one thing obvious to me: empathy is becoming the most valuable skill in the workforce. Not empathy as a soft skill or emotional nicety, but empathy as a developed and practiced capacity to understand something on its own terms rather than through your own assumptions. It produces better outcomes in human relationships and better outputs in AI collaboration for the same reason: clarity replaces projection.

The Divide We Brought With Us

Like many, I’ve tried to keep up with AI’s development, and it has been a headache. About a month ago, I experienced the full range of AI discourse in a single day. That morning, I read Matt Shumer’s viral essay “Something Big is Happening”, which amassed over 80 million views and argues that we are living through an inflection point comparable to the early days of Covid, one where most people don’t yet grasp how completely AI is about to reshape knowledge work. That same afternoon, I watched a ColdFusion analysis of the recent Remote Labor Index, which found that AI fails at 96% of real-world jobs sourced from Upwork. The disagreement over something so scientific in nature was jarring.

What Shumer represents is the experiential view, the sense that something has fundamentally shifted, felt before it can be measured. What ColdFusion represents is the empirical corrective, with the data as an anchor against hype. Both are legitimate, and both are incomplete. And nearly everything people bring to this conversation, the optimism, the skepticism, the FOMO, the dismissal, traces back to what they were trained on long before AI entered the picture.

That last sentence is not a joke. It’s the argument.

The King of the Median

To develop empathy for these systems, you have to understand what they actually are. LLMs are not tools in the traditional sense. They’re eager to please, fast to respond, and fundamentally limited by the boundaries of their own experience. They’re ghosts of data past. In my own work, the quality of output almost always reflects the clarity and creativity of the input. 

This is because AI models aren’t thinking so much as predicting, and they struggle in a specific, predictable way. An LLM doesn’t play chess; it predicts sentences about chess. That’s why, in the middle of a high-level match, it might suggest an illegal move with total confidence. It doesn’t understand the board; it understands patterns, because it is trained on data derived from our records of the world, not the world itself. It doesn’t have a world model; it has a textual model.
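
To make that gap concrete, here is a minimal sketch in Python. It assumes the python-chess library, and llm_suggest_move is a hypothetical stand-in for any LLM call; the hardcoded move is invented for illustration.

```python
import chess  # the actual world model: pieces, rules, legal state

def llm_suggest_move(position_fen: str) -> str:
    # Hypothetical stand-in for an LLM: it returns the most
    # plausible-sounding move for this position, as text.
    # Nothing in a textual model guarantees legality.
    return "Qh5"  # a confident, common-looking pattern in chess writing

board = chess.Board()  # starting position
suggestion = llm_suggest_move(board.fen())

try:
    board.push_san(suggestion)  # the rules engine checks legality
    print(f"Legal move: {suggestion}")
except ValueError:
    # The sentence was fluent; the board disagrees. The model predicted
    # a pattern about chess, not the state of this game.
    print(f"'{suggestion}' is illegal in this position")
```

The except branch fires here: from the starting position, the queen’s path to h5 is blocked, so a fluent-sounding move fails the moment it meets an actual rules engine.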

I like to think of this in terms of statistics, with AI as the King of the Median. The model is optimized to produce the most statistically likely response, even when that response is logically flawed. It will confidently give you the average, even when the average is wrong. This is where most expectations around AI begin to break down, not because the technology failed, but because the person using it failed to work empathetically.
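
A toy way to see this mechanically: a language model scores every candidate next token, and standard decoding concentrates probability on the most statistically typical choice. The sketch below is generic softmax sampling with temperature, not any particular model’s internals, and the tokens and scores are invented for illustration.

```python
import math
import random

# Invented next-token scores: higher means more statistically typical.
logits = {"conventional": 2.0, "safe": 1.5, "original": 0.3, "outlier": -0.5}

def sample(logits: dict, temperature: float = 1.0) -> str:
    # Softmax with temperature: low temperature piles probability onto
    # the most likely token (the median answer); higher temperature
    # lets outliers back into the running.
    weights = {t: math.exp(s / temperature) for t, s in logits.items()}
    total = sum(weights.values())
    r, running = random.random() * total, 0.0
    for token, w in weights.items():
        running += w
        if r <= running:
            return token
    return token  # guard against floating-point rounding at the edge

print(sample(logits, temperature=0.2))  # almost always "conventional"
print(sample(logits, temperature=2.0))  # outliers get a real chance
```

Whatever the sampling knobs, the center of mass stays where the training data put it: the median is the default, and the default wins unless something pushes against it.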

The Outlier Advantage

If the machine is the King of the Median, the human role belongs to the outlier. Mastering the outlier is a skill, practiced and developed, that lets you move and leverage the very median the machine is king of.

Empathy, in this context, is the developed capacity to understand where the model ends and where you begin. You build it by working with these systems intentionally: evaluate outputs against your own judgment, stay curious about why something landed flat, and notice when you’ve accepted the median without questioning it. It’s a workout, not a weighted blanket. It’s more cognitively demanding than accepting whatever the model returns, not less.

The most effective way I’ve found to use these systems is not to ask them to be creative, but to give them a creative starting point and then work the output upward from there. Taking time to be creative before prompting, and to be prudent while reviewing the outputs, is what separates the top performers from the mediocre.

Without that practice, you stay stuck in the median. The public already has a name for this: AI slop. And the reaction to it isn’t a rejection of AI but of the inauthentic, a rejection of content that feels empty, predictable, and mass-produced. Slop is not a failure of the model. It’s a failure of thinking. Empathy is what closes that gap, because it keeps you honest about what the tool can and cannot do.

The Sociological Bridge

Most people never develop a working understanding of these systems, and the reasons trace back further than AI itself. The last paradigm-shifting technology wasn’t AI; it was the internet. And the way each generation learned to relate to the internet didn’t just change what information they could access. It changed what they believed information fundamentally is. That belief, carried forward, is now shaping how entire teams and organizations relate to AI, in ways that are equally misguided and almost never examined.

The Pre-Internet generation grew up treating information as authoritative. Books, experts, broadcasts. Truth was something you found in a credible source. When they encounter an AI that speaks with total confidence, cites nothing, and never says it doesn't know, that register feels trustworthy. It matches the pattern of authority they were trained on for decades. The result is over-trust, not out of naivety, but out of a perfectly rational application of a framework that worked for most of their lives.

The Internet-Early generation treats information as a network. They are fluent in search, retrieval, and verification. They grew up watching confident claims get debunked by the afternoon. When they see the same sourceless, unverifiable AI output, a red flag goes up. Their framework: if it's generated and unverified, approach with skepticism. Also rational. Also well-earned. It produces the opposite response: reflexive dismissal.

The AI-Native generation interacts with information rather than retrieving it. They’re fluent with the tool, comfortable with dialogue-driven output, and often the most impatient with its failures. But fluency is not understanding. This generation grew up getting fast answers from systems they never had to think about: Google, autocomplete, recommendation algorithms. AI fits that same pattern. The result is a kind of transactional blindness, treating the model like a vending machine or a printer rather than engaging with what it actually is. They care about what they get out of it and don’t give it a second thought until it doesn’t work.

Whatever the group and whatever the framework, the mistake is the same, and the irony is worth considering: over-trust, reflexive dismissal, and transactional blindness are the same error in different clothing. All three groups are pattern-matching AI to something familiar rather than meeting it on its own terms. The tension between them isn’t technical. It’s philosophical. A disagreement about what information is supposed to be.

This is where empathy becomes a bridge between people, not just between a person and a system. Skepticism often comes from a genuine respect for truth. Over-reliance often comes from a model of interaction that isn't entirely unreasonable, given how these systems present themselves. Transactional fluency mistakes speed for understanding. Without recognizing what each of these positions actually represents, collaboration breaks down, between colleagues, between generations, between intelligences, artificial or otherwise.

The Return of Empathy

Large language models make visible what was easier to ignore before. They don’t interpret loosely or compensate for unclear thinking. They respond to what is given. And in doing so, they expose how often clarity is missing, not just in our prompts but in our communication, our direction, our ideas. AI didn’t introduce this problem; it just made it obvious.

In practice, teams that work well with these systems share a common quality: they give them what they actually need. Clear intent. Honest constraints. Specific context. It’s a shift in organizational behavior, and it requires the same skill that makes teams work well without AI in the room.
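
As a hedged illustration of the shape of that input, compare two prompts; both are invented, and the point is the structure, not the wording.

```python
# Both prompts are invented examples. The difference is what the
# model is given to work with, not how politely the request is phrased.
vague_prompt = "Write something about our product launch."

specific_prompt = """\
Write a 150-word launch announcement for our beta.
Intent: drive sign-ups from existing customers, not press coverage.
Constraints: plain language, no superlatives, one call to action.
Context: readers already know the product category; lead with what's new."""
```

The second prompt supplies the intent, constraints, and context described above; the first leaves all three to the median.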

Empathy is a companion to thinking, not a replacement for it. The more you practice it with the systems you use, with your colleagues, and with the work itself, the more useful both you and the tools become. It keeps you in the loop. It keeps you honest. It keeps you from outsourcing the part of the work that was yours to begin with.

If these systems are optimized for the average, then the human role moves toward everything that cannot be predicted: taste, judgment, intuition, and authenticity. The more capable AI becomes, the less valuable it is to think like it.

What becomes more valuable is the ability to understand, clearly, intentionally, and with practiced empathy, both the tools we use and the people around us.