Chris Butler on How to Build AI Products That People Trust

Editor’s note: You’re about to read an extract from our new book “Product Mindset”. You can get the full book here.

The key to success in AI implementation is understanding why users trust and interact with AI. Like everything, it comes down to your human understanding of your customers: the state of mind they are in, the actual needs they have, and the mental models they hold, which may not correspond to the hard facts of reality.

Chris Butler, Director of AI at the Philosophie Group, has implemented multiple complex AI solutions for a broad range of companies. Let’s explore some of his advice for understanding and employing Artificial Intelligence with insight and purpose.

Start with What AI is Not

  • AI is not just chatbots
  • It’s not magic dust to improve any product
  • It’s unlikely to kill us all in the near term
  • We’re also unlikely to fall in love with it in the near term
  • AI is not automation

So, what is AI?

“Data science produces insights. Machine Learning produces predictions. Artificial Intelligence produces actions.”

David Robinson, Chief Data Scientist at DataCamp.

Human-Centered AI

Remember, everything boils down to the human user. When we think about AI from a human standpoint, we see:

  • Data science produces insights about people’s needs and desires
  • Machine learning produces predictions about what people will do
  • Artificial Intelligence produces actions that help people

In one line: AI is a machine algorithm that helps us act toward our purpose, and nothing else.

AI Exists to Maximize or Minimize Human Actions – With Limitations

The idea that AI exists to radically maximize use of a product or service can be taken too far. There are only so many hours in a day, and only so many times someone can reasonably purchase a flight or watch a video. Netflix has reportedly said its main competition is sleep! The limit here is obvious: people can stay up late, but eventually they'll need to close the laptop and get some shut-eye!

The idea is therefore to find a target: an optimal number that represents healthy usage. Humans are messier than big data would like us to be, so we need to find the sweet spot. The pharmaceutical industry has a name for this: the Therapeutic Index, the ratio between the dose of a medication that helps and the dose that harms.

Left to itself, AI or machine learning will optimize to an extreme degree, for example, trying to have humans watch Netflix videos 24 hours a day! Any human being, however, can understand that this would ultimately be harmful. Data doesn't always get this.
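
To make the distinction concrete, here is a minimal, hypothetical sketch in Python. It is not any company's real objective function, and the target value is an invented assumption; it simply contrasts an objective that rewards every extra hour watched with a "therapeutic index" style objective that rewards usage up to a healthy target and penalizes everything beyond it.

```python
# Hypothetical sketch: "maximize usage" versus "optimize toward a target".
# The 2-hour target is an illustrative assumption, not a real figure.

def naive_objective(hours_watched: float) -> float:
    """More is always better -- this is what pushes toward '24 hours a day'."""
    return hours_watched

def sweet_spot_objective(hours_watched: float, target: float = 2.0) -> float:
    """Reward grows toward the target, then drops sharply past it,
    so an optimizer has no incentive to push usage to extremes."""
    overshoot = max(0.0, hours_watched - target)
    return hours_watched - overshoot ** 2

if __name__ == "__main__":
    for hours in (1, 2, 4, 8, 24):
        print(f"{hours:>2}h  naive={naive_objective(hours):>5.1f}  "
              f"sweet spot={sweet_spot_objective(hours):>7.1f}")
```

Under the naive objective, 24 hours scores best; under the sweet-spot objective, it scores worst, which is the behavior a human would expect.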

Take Care in How We Build Customer Mental Models


In the 1980s, Don Norman ran an experiment on mental models, examining how people use calculators. Regardless of the type of calculator used, one behavior was consistent: people clear the calculator multiple times before starting. That is how humans have internalized calculator use, even when it is not the optimal approach. Designers therefore need to learn and understand the messy, sloppy, and incomplete models that our users actually have. Our perfect theoretical models are not our users' real, messier models.

So how do we get to good mental models? Tools like empathy mapping and personas help to bring out the assumptions that a user has about the way technology works, and how those assumptions actually play out in the real world.

For example:

  • When asking questions of our customers, we use simple, sensory language that does not require the user to have any understanding of AI.
  • We even go as far as to use roleplay prototyping, where a human behaves as if they were the product, for example, a chatbot, saying only the words available to the software. By doing this, we get a sense of how empathetic and effective the process is.

As you put the user into the process and imagine a human being doing what your AI is being programmed to do, you'll learn which processes are OK to automate away, and which are not.

How Does a Machine Build Trust with People?

There are four levels of trust in machine automation:

  • Use: People voluntarily use automation when they want
  • Misuse: People over-rely on automation
  • Disuse: People under-trust or underutilize automation, and may turn it off entirely
  • Abuse: Functions are automated without regard for the consequences on human performance

Everything boils down to actually understanding the real use case for the people using your products. For example, people recently had their Medicare/Medicaid benefits automatically adjusted by a machine algorithm. They had to go through the courts to get the decision explained or changed, a distressing experience that did not consider the human element.

Another example is a safety robot that was meant to lead people out of burning buildings. The trouble is, people would blindly follow the robot even if it malfunctioned and went around in circles. A robot was overkill for this kind of work, and people trusted it too much.
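
One common design response to misuse and disuse is to calibrate reliance explicitly: let the automation act only when it is confident, defer to a person otherwise, and keep the source of each decision visible. The sketch below is a hypothetical illustration of that pattern; the function names, the benefits example, and the 0.9 threshold are my assumptions, not anything from Butler's talk.

```python
# Hypothetical sketch of calibrated reliance: automate confident cases,
# defer uncertain ones to a human, and record who decided.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str
    source: str  # "automation" or "human" -- keeps accountability visible

def decide(prediction: str,
           confidence: float,
           ask_human: Callable[[str, float], str],
           threshold: float = 0.9) -> Decision:
    # Below the threshold the system defers instead of acting blindly,
    # guarding against misuse (over-reliance on a wrong answer).
    if confidence >= threshold:
        return Decision(action=prediction, source="automation")
    return Decision(action=ask_human(prediction, confidence), source="human")

if __name__ == "__main__":
    reviewer = lambda pred, conf: "escalate_to_case_worker"
    print(decide("adjust_benefits", 0.97, reviewer))  # confident -> automated
    print(decide("adjust_benefits", 0.55, reviewer))  # uncertain -> human decides
```

Had the benefits system above worked this way, uncertain adjustments would have reached a case worker before they reached a courtroom.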

How Product Managers and Their Products Gain and Lose Trust


The qualities of trustworthiness vary between individuals and products, but there are commonalities; a short sketch after these lists shows what two of the product qualities can look like in code.

Someone who gains trust:

  • Creates alignment
  • Makes hard decisions
  • Understands the audience
  • Expands understanding

Someone who loses trust:

  • Doesn’t know the audience
  • Isn’t logically consistent
  • Makes mistakes in spelling, grammar, and communication

A product that gains trust:

  • Is interpretable
  • Allows expertise to be built
  • Helps avoid human error
  • Makes good decisions

A product that loses trust:

  • Doesn’t allow intervention
  • Makes simple mistakes
  • Breaks the illusion of control
  • Doesn’t take the operator into account
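
As promised above, here is a toy sketch of what "is interpretable" and "allows intervention" can mean in practice: every prediction ships with the factors behind it, and overriding is a first-class operation. The weights, feature names, and labels are invented for illustration; this is not a real interpretability method.

```python
# Toy sketch: predictions carry human-readable reasons and can be
# overridden. All weights and field names are invented for illustration.

from typing import NamedTuple

class Explained(NamedTuple):
    prediction: str
    reasons: list[str]       # the factors that drove the prediction
    overridden: bool = False

WEIGHTS = {"income": 0.4, "debt": -0.6, "tenure_years": 0.2}

def predict_with_reasons(features: dict[str, float]) -> Explained:
    # A trivial linear score stands in for a real model; the contract
    # (prediction + reasons) is the point, not the math.
    score = sum(WEIGHTS.get(k, 0.0) * v for k, v in features.items())
    ranked = sorted(features,
                    key=lambda k: abs(WEIGHTS.get(k, 0.0) * features[k]),
                    reverse=True)
    label = "approve" if score > 0 else "review"
    return Explained(label, [f"{k}={features[k]}" for k in ranked[:2]])

def override(decision: Explained, new_label: str, why: str) -> Explained:
    # Intervention is a first-class operation, not a workaround.
    return Explained(new_label, decision.reasons + [f"operator: {why}"], True)

if __name__ == "__main__":
    d = predict_with_reasons({"income": 3.0, "debt": 4.0, "tenure_years": 1.0})
    print(d)                                          # review, reasons attached
    print(override(d, "approve", "verified income manually"))
```

A product built this way never breaks the illusion of control, because the control is real: the operator can see why a decision was made and change it.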

Who is Accountable?

“If a machine is expected to be infallible, it cannot also be intelligent.”

Alan Turing

Real intelligence requires imagination, risk-taking, and the willingness to make mistakes. If you are not wrong often enough, you are probably not trying hard enough. With this in mind, we need to think about who is accountable for the mistakes made by artificial intelligence: the software, the users, or the designers? Clear accountability is essential for building trust.

Summary

  1. Success isn’t just about minimizing or maximizing human behavior
  2. User mental models do not equal your mental models
  3. Make building the right amount of trust a priority

Meet Chris Butler


With over 19 years of experience in Product and Business Development, Chris Butler has played key roles at companies like Microsoft, KAYAK, and Waze. While studying Computer Systems Engineering at Boston University, he began mastering AI techniques like graph theory and genetic algorithms.

He has since gone on to create techniques like Empathy Mapping for the Machine as well as Confusion Mapping to create cross-team alignment while building AI products. He has also worked on AI-related projects for businesses like Complete Seating, Horizon Ventures, and Philosophie.
