Agentic AI versus CX

In many ways, automated “Agentic AI” is going to make your company worse at customer experience (CX) if you’re thinking of customer service simply as a technology layer.

Some of my favourite CX experiences end with the customer apologizing for being rude or intense: the customer starts with anger and accusations, the agent (a human, in this case) responds kindly and addresses the concern objectively, and the customer comes back with “thanks for holding me accountable,” genuinely surprised that they got help at all.

They do this in many cases because there’s an actual human responding. Customers are so used to awful CX experiences in telecom, healthcare, government services, etc., that they show up to every support request expecting a cold, robotic response. They start from a place of distrust that anyone’s going to care about their experience at all.

It goes both ways—I’ve been this angry customer so many times. 😅

In fact, some of the most incredible changes I’ve made in my life came in direct response to being a jackass to someone I didn’t know and having them directly, objectively say “Okay, guess you’re not the person I should be talking to, buzz off now,” forcing me to confront my intensity, as rude as it was.

Every now and then, we need to be told we’re wrong, and occasionally we need to be told to shut right up. The best customer service agents do this fluidly and with consummate kindness.

Important note here: kindness is not niceness.

In my mind, the key to excellent customer service is two-fold:

  • Objectivity: “Here’s what happened, here’s what we’re doing about it”. I try to avoid excuses & defensiveness. I definitely apologize for our mistakes, but not necessarily for how the customer feels.
  • Accountability: “Your feedback has been noted and a refund has been processed”. Take care of the concern as best you can.

Yes, AI can be helpful for workshopping CX responses. For example, asking for feedback on a first pass at a response that you wrote yourself:

“How might this tone make our customer feel?”
“How can I make this more objective?”

But having AI handle the interaction (as an agent, without a human in the middle) is a great way to avoid developing the interpersonal skills necessary for effective support. In other words, when you do get a customer in person or on a call, you’ll be less effective, because you’ve been letting an LLM deal with the emotion rather than feeling it yourself.

I’ll admit it: I get really triggered by customers sometimes. We’re all rude ghouls every now and then. If a customer doesn’t get immediate access to a product because of a bug or downtime, they might fire off an email accusing you of being a scammer.

One habit I’ve developed is snoozing these emails for a bit and feeling all the ways that I am a “scammer” in my bones—really stewing in it.

Where does the fear of being a scammer come from? It’s a fear that I’m doing something wrong—that I’m a bad person. So I sit with that and process all the ways I might be scamming people. I come to the conclusion that I am one of the most generous people I know and that I tend to obsess over doing things right and by the book. Is that how a scammer acts?

And then I might sit with the possibility that a customer might be feeling genuine fear on their end that they lost money or are having second thoughts on a purchase. I know that feeling; it’s not great.

With these “understandings” in mind, I can write a response that takes all perspectives into account, instead of just my initial reaction. Here’s where an LLM can actually help—workshopping the language, checking tone.

But notice what happened first: feeling the emotion. Processing. I came to my own conclusions about what’s true and what’s not.

I won’t get defensive and say “this is not a scam” or “I’m sorry that you’re feeling this way.” I’ll say “I’ve confirmed your purchase and fixed the access issue,” staying objective.

This typically leads a customer to thank you—sometimes even apologize for being intense. Having a real person respond, one who actually worked through the friction, is what makes that possible.

So use LLMs for what they’re good at: generating content in a provided context, refining drafts, checking your blind spots. But if you’re replacing your CX people with AI agents, your customer experience will suffer, because the work of service is in feeling the emotion, sitting with it, and working through it.

That should not be an agentic process.