
GPT-4o - How will it impact CX?


As an early adopter of OpenAI technologies, we eagerly awaited today’s announcements. Our team enthusiastically watched Mira Murati introduce GPT-4o, and by this afternoon, we had a trial running with our Crescendo CX Assistant. The speed improvement is immediately apparent and will prove very helpful, but let’s discuss some of the more disruptive themes:

Voice

This is the most obviously disruptive GPT-4o feature for CX. Despite years of growth in digital customer care, the majority of customer service still happens over the phone. The CX industry is bracing for the impact of the first generative AI voicebots, though outside of our internal tests, I've yet to encounter one in the wild. If you've tried the voice interface in the ChatGPT 4 client, you have some sense of how a CX voicebot would "feel," and it was reassuring to hear OpenAI acknowledge its initial clunkiness. At Crescendo, we've implemented workarounds for interruptibility and latency that make voice workable today, so it's great to see OpenAI focusing deeply on integrating voice more naturally. The announcement demos show clear improvements, but there is still work to be done before voice interactions reach the human-to-human level that will truly disrupt the CX industry. We look forward to continued innovation in this area, as we believe voice will ultimately dwarf the impact of digital/chat assistants.
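To make the interruptibility point concrete, here is a minimal sketch of one common "barge-in" approach: while the bot's synthesized speech is playing, keep sampling microphone energy, and cancel playback as soon as the caller starts talking. The class name, thresholds, and frame logic below are illustrative assumptions, not Crescendo's or OpenAI's actual implementation.

```python
# Hypothetical barge-in controller: interrupt TTS playback when the caller
# speaks. Threshold and frame counts are placeholder values for illustration.

class BargeInController:
    def __init__(self, energy_threshold=0.3, frames_to_trigger=3):
        self.energy_threshold = energy_threshold    # mic level that counts as speech
        self.frames_to_trigger = frames_to_trigger  # consecutive speech frames needed
        self._speech_frames = 0
        self.tts_playing = False
        self.interrupted = False

    def start_tts(self):
        """Call when the bot begins speaking."""
        self.tts_playing = True
        self.interrupted = False
        self._speech_frames = 0

    def on_mic_frame(self, energy):
        """Feed one frame of mic energy (0.0-1.0); returns True once TTS should stop."""
        if not self.tts_playing:
            return False
        if energy >= self.energy_threshold:
            self._speech_frames += 1
        else:
            self._speech_frames = 0  # brief noise resets the counter
        if self._speech_frames >= self.frames_to_trigger:
            self.tts_playing = False  # the caller barged in: cancel playback
            self.interrupted = True
        return self.interrupted
```

Requiring a few consecutive loud frames (rather than one) is what keeps coughs and line noise from cutting the bot off mid-sentence.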

Omni

Let's group the remaining capabilities under OpenAI's own term: "Omni." Omni includes the ability to natively view and understand video, whether from a phone camera or a computer desktop display. The demonstrations offer a peek at potential use cases: imagine troubleshooting a consumer product by pointing your phone camera at it and having an AI system identify its state and walk you through a fix. Although OpenAI's demonstrations focused on still images pulled from a video stream, the potential for new use cases is clear. Seeing the state and condition of an application on screen could replace the remote-desktop-style support sessions common in many operations, such as IT help desks. We're eager to experiment with what can be achieved in this space.
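As a concrete sketch of the camera-troubleshooting idea, the snippet below builds a request payload that pairs a captured camera frame with a question, following OpenAI's documented image-input message format for the Chat Completions API. The function name and example question are hypothetical, and actually sending the request (authentication, retries, streaming) is left out.

```python
import base64

def build_troubleshooting_request(jpeg_bytes, question):
    """Pair one camera frame with a question in OpenAI's image-input
    message format. Sending the payload to the API is omitted here."""
    # Images can be passed inline as a base64 data URL.
    data_url = "data:image/jpeg;base64," + base64.b64encode(jpeg_bytes).decode("ascii")
    return {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": data_url}},
                ],
            }
        ],
    }
```

A real troubleshooting loop would capture frames periodically and carry the conversation history forward, but the message shape stays the same.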

Sound and Emotion

There were examples of GPT-4o processing sounds (like breathing and applause) and emotions (by reading facial expressions). There was also a notable demonstration of ChatGPT reading a story with varying degrees of enthusiasm. This offers a glimpse into the next generation of voicebot experiences, where the interaction takes on a tone and style appropriate to the context. Whether it's a sympathetic response to a delayed flight or an enthusiastic conversation about booking an excursion, there's a very fine line to tread between empathy and cringe, and maybe OpenAI crossed it a bit. Nevertheless, the ability to infuse automated interactions with warmth and emotional depth adds another powerful layer for application developers.
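One simple way application developers might exploit this, assuming tone can be steered through prompting, is to map the conversational context to a tone directive in the system prompt. The context labels and wording below are purely illustrative, not part of any OpenAI API.

```python
# Hypothetical mapping from detected conversational context to a tone
# directive that gets folded into the assistant's system prompt.

TONE_DIRECTIVES = {
    "service_failure": "Respond with calm, sincere empathy; acknowledge the inconvenience first.",
    "upsell": "Respond with warm, genuine enthusiasm, without overdoing it.",
    "neutral": "Respond in a friendly, professional tone.",
}

def system_prompt_for(context):
    """Return a system prompt tuned to the context; unknown contexts fall back to neutral."""
    directive = TONE_DIRECTIVES.get(context, TONE_DIRECTIVES["neutral"])
    return f"You are a customer service assistant. {directive}"
```

Keeping tone in the system prompt, rather than hard-coding it per response, makes it easy to stay on the right side of the empathy/cringe line as you test.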

We will continue investigating these capabilities and work with our clients to test and implement GPT-4o as this technology becomes available. We look forward to the next wave of efficiency and effectiveness we can derive from these new capabilities.

Get in touch with us today.