Thursday, September 1, 2016

Avoid the Uncanny Valley When Automating Customer Experience Interactions

I wrote some months ago that marketers trying to improve their brands' customer experience can make the mistake of attempting to manufacture emotion rather than evoke it. Emotion is vitally important to building strong customer relationships, but the secret is to evoke positive emotions within your customers, not manufacture them at customers. Nowhere has this difference been more evident to me than in the way some brands stumble into the "uncanny valley" as they automate customer interactions.

If you are not familiar with the concept of the "uncanny valley," it describes the discomfort and even revulsion humans can experience toward robots that appear almost, but not exactly, like real human beings. While this term is typically applied to robots, you can see the same effect at work in the way brands automate their customer engagement. Research suggests that automation must balance humans' awareness that they are interacting with a machine against the functional and emotional cues provided by that machine. Attempts to make robots too human can create small incongruities that provoke outsized negative reactions.

As brands adopt more automation in their social media, bots, IVR systems, marketing programs, and customer care systems, they must be careful that the desire to seem more human doesn't inadvertently cause negative, brand-damaging experiences. Just as a single incorrect line of code can cause an entire application to break, the smallest of missteps into the uncanny valley can damage customer relationships.

The danger to brands of the uncanny valley came to mind recently as I interacted with two brands' automated systems. In each instance, the brand attempted to inject emotion into their automated interactions in a way that created a negative rather than a positive response.

A Virtual Trainer Tries to Bolster My Ego

An online training program "hosted" by a virtual trainer provided positive feedback on a quiz response, telling me, "I'm proud of you." My reaction was profoundly negative for a number of reasons, not the least of which is that this pre-programmed, artificial being has no ability to feel anything, much less pride. The program designer stumbled into the uncanny valley by ascribing human emotion to a computer program. I knew the system wasn't human; the instructional designer knew the system wasn't human; only the system seemed not to know this, and that felt creepy.

Another factor was that the level of praise was not appropriately matched to my action. The question I was asked was painfully obvious, and answering it correctly was no challenge. Such effusive praise for so simple a behavior felt condescending, as if someone had told me how proud they were that I could tie my own shoes.

For another example of the uncanny valley, an examination of how brands stumble with this in social media and three tips to avoid the issue, please continue reading the complete blog post on my Gartner blog.
