Why human-centric design makes AI better

  • Post by Rachel
  • Aug 25, 2020

“I worry that enthusiasm for A.I. is preventing us from reckoning with its looming effects on society. Despite its name, there is nothing “artificial” about this technology - it is made by humans, intended to behave like humans, and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.” - Fei-Fei Li, How to Make AI That’s Good For People

I’ve recently come across the People + AI Guidebook, published by Google in 2019 to help UX professionals and product managers follow a human-centred approach to AI. As you walk through the Guidebook, you might wonder why human-centred design is so crucial for AI development. Experts argue that the only way forward is for AI-powered technologies to be grounded in human needs, working to extend and enhance our capabilities rather than replace them.

“Smart technologies are unlikely to engender smart outcomes unless they are designed to promote smart adoption on the part of human end users. Successful applications of AI hinge on more than big data and powerful algorithms. Human-centred design is also crucial. AI applications must reflect realistic conceptions of user needs and human psychology.”

- Deloitte: “Smarter together: Why artificial intelligence needs human-centred design”

Why AI sometimes flops

AI is undoubtedly in its golden age, hailed as the new electricity and set to be a massive game changer for the global economy with disruptive potential across every industry. AI’s pitfalls, however, have made some headlines but tend to get lost in the general AI hype. As Deloitte notes, AI algorithms embedded in ubiquitous digital technology can encode societal biases, spread conspiracies, disseminate fake news, amplify echo chambers of public opinion, hijack our attention, and even impair our mental well-being.

Many large-scale AI projects have gone pear-shaped: IBM Watson’s $62 million disease-diagnosis setback, Microsoft’s friendly Twitter chatbot that turned racist within 24 hours, and Amazon’s AI recruitment tool that favoured white males for engineering jobs, to name a few. These failures demonstrate AI’s inherent bias, fairness and transparency issues, which ultimately degrade the user experience and impede adoption.

Part of the problem stems from a common misunderstanding of what AI can and cannot do. AI still has many limitations. Yes, machine learning applied to big data enables powerful AI applications, ranging from self-driving cars and facial recognition to digital brand ambassadors, speech-enabled personal assistants, writing news articles, poetry and code, and setting people up on dates. In many cases, AI performs these tasks as well as or more accurately than humans. The uncanny quality of such applications might lead you to conclude that computers are implementing, or rapidly approaching, a type of human intelligence in the sense that they “understand” what they are doing. But that’s not the case yet. Today’s AI algorithms are smart in narrow senses - not in the more generic sense in which humans are smart.

“Algorithms “demonstrate human-like tacit knowledge” only in the weak sense that they are constructed or trained using data that encodes the tacit knowledge of a large number of humans working behind the scenes. The term “human-in-the-loop machine learning” is often used to connote this process. While big data and machine learning enable the creation of algorithms that can capture and transmit meaning, this is very different from understanding or originating meaning.”

- Deloitte: “Smarter together: Why artificial intelligence needs human-centred design”

Powerful algorithms can currently be trained by humans to churn through vast amounts of data and make intelligent, mostly accurate and fast predictions, but they can’t - yet - demonstrate general human intelligence, i.e. experience, perceive and analyse the world in the deeply contextual, complex and subtle ways that humans do. And this has a direct impact on the AI user experience.

What human-centred design means for AI

It’s clear that AI products can’t be built in a vacuum. As Fei-Fei Li, Chief Scientist for AI Research at Google Cloud, observes, AI’s current limitations and potential concerns mean that AI applications should:

  • reflect more of the depth that’s found in our own intelligence and perception of the world (through multi-domain collaboration between engineers, data scientists, designers, cognitive psychologists and other human-centric disciplines and experts),

  • focus on enhancing rather than replacing humans, and

  • ensure that the development of this technology is guided, at each step, by concern for its effect on humans.

If you aren’t familiar with human-centred design, it’s an approach to problem-solving, commonly used in design and management frameworks, that develops solutions by involving the human perspective in every step of the problem-solving process.

Human-centred design aims to make systems usable and useful by focusing on the users, their needs and requirements, and by applying human factors/ergonomics and usability knowledge and techniques. This approach is intended to enhance effectiveness and efficiency, improve human well-being, user satisfaction, accessibility and sustainability, and counteract possible adverse effects of use on human health, safety and performance.

The term ‘human-centred system’ was coined by Mike Cooley, an Irish-born engineer who studied the social effects of technology and argued that:

“Human centredness asserts firstly, that we must always put people before machines, however complex or elegant that machine might be, and, secondly, it marvels and delights at the ability and ingenuity of human beings. The Human Centred Systems movement looks sensitively at these forms of science and technology which meet our cultural, historical and societal requirements, and seeks to develop more appropriate forms of technology to meet our long-term aspirations. In the Human Centred System, there exists a symbiotic relation between the human and the machine in which the human being would handle the qualitative subjective judgements and the machine the quantitative elements. It involves a radical redesign of the interface technologies and at a philosophical level the objective is to provide tools which would support human skill and ingenuity rather than machines which would objectivise that knowledge”.

The global design firm IDEO advanced the practice of “Design Thinking” through seminal projects such as working with Steve Jobs on Apple’s first mouse in the 1980s and the first notebook-style computer for GRiD Systems, which redefined how users interacted with computers, followed by many more landmark projects in healthcare, education and beyond. Today, IDEO is at the forefront of “augmented intelligence”, merging machine learning with human-centred design: in response to growing client demand for AI-powered products, it integrates data scientists into its project teams to work side by side with design researchers, engineers and interaction designers. In one such successful collaboration, IDEO helped Motorola turn mountains of customer feedback into product improvements in real time.

“We have some great chairs and mobile phones in the world, but we have a lot of systems that are not so great. Systems aren’t designed once and then left to run. They act much more like biology than machines and we need to make these systems super smart to evolve and learn–but they need to learn in collaboration with human beings.”

- IDEO CEO Tim Brown

How to design AI in a human-centred way

“When building any product in a human-centred way, the most important decisions you’ll make are: Who are your users? What are their values? Which problem should you solve for them? How will you solve that problem? How will you know when the experience is “done”?”

- Google People + AI Guidebook

Google’s People + AI Guidebook uses an example app, RUN, which uses AI to help users get more variety in their runs - suggesting new running tracks, tracking their daily runs and connecting them with other runners at their skill level, so they stay motivated to keep running. The Guidebook’s recommendations, based on data and insights from over a hundred individuals across Google product teams, industry experts and academic research, are given in the context of the RUN app and highlight what you need to consider when designing AI using a human-centred approach. Here are my top 10 takeaways:

1/ Understand which user problems are good candidates for AI - Making sure you’re solving a real problem in a way in which AI adds unique value is absolutely key to your product’s success. Ask whether adding AI to your product would actually improve it, or whether it might degrade the experience.

AI is useful when you want to:

  • make different recommendations to different users (e.g. personalised job recommendations on LinkedIn),
  • predict future events (like flight prices in three months’ time),
  • personalise a product (such as a smart home thermostat),
  • understand natural language (as Alexa or other personal assistants do),
  • recognise and classify large amounts of data (such as image tagging on social media),
  • detect evolving patterns across big data (like fraudulent credit card transactions),
  • create an agent or bot experience for a particular domain or automatable process (such as hotel bookings or customer service enquiries), and
  • show dynamic content when that’s more efficient than a predictable interface (e.g. Netflix and Amazon’s movie suggestions).

Conversely, when the user experience is better if you maintain interface predictability, minimise errors, provide static or limited information or don’t automate a task, AI is probably not the ideal solution.

2/ Think about “automation” vs. “augmentation” - Automate tasks that are difficult, unpleasant or where there’s a need for scale; augment tasks that people enjoy doing, that carry social capital, or where people don’t agree on the “correct” way to do it.

3/ Design the “reward function” (the mathematical formula the AI model uses to determine “right” vs. “wrong” predictions, and a major driver of the final user experience) with a cross-functional team (including engineering, UX and product at the very least), optimising for long-term user benefits by imagining the downstream effects of your product. Weigh the ‘cost’ of different types of wrong predictions carefully: some errors are more critical than others, and how you balance them significantly shapes the user experience. Certain performance trade-offs are inevitable and depend on what your users expect from the product and what would give them a sense of task completeness. It’s also crucial to monitor the model’s performance and user feedback over time against clear success metrics, and to tweak the model if necessary, as you won’t be able to anticipate every pitfall upfront.
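To make the error-cost idea concrete, here’s a minimal Python sketch of how a team might encode an agreed cost for each error type when evaluating a model. The scenario, names and cost values are my own hypothetical illustration (imagine RUN deciding whether a route suits a runner), not anything prescribed by the Guidebook:

```python
# Hypothetical cost matrix agreed by the cross-functional team: a false
# positive (recommending a route that's too hard, risking injury or
# frustration) is judged worse than a false negative (withholding a
# suitable route). The 5:1 ratio is an illustrative assumption.
ERROR_COSTS = {
    "false_positive": 5.0,  # recommended an unsuitable route
    "false_negative": 1.0,  # failed to recommend a suitable route
}

def weighted_cost(predictions, labels):
    """Average cost of a batch of binary predictions under the agreed weights."""
    total = 0.0
    for pred, actual in zip(predictions, labels):
        if pred and not actual:
            total += ERROR_COSTS["false_positive"]
        elif actual and not pred:
            total += ERROR_COSTS["false_negative"]
    return total / len(predictions)

# Two models with identical accuracy can have very different weighted
# costs - and the weighted cost is closer to what users actually feel.
print(weighted_cost([1, 1, 0, 0], [1, 0, 0, 1]))  # 1.5
```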

4/ Translate user needs into data needs - Determine the type of data needed to train your model, considering predictive power, relevance, fairness, privacy and security. Your training dataset should have the features and breadth to ensure your AI meets your users’ needs; you may want to consider the benefits or disadvantages of using an existing dataset vs. creating your own bespoke dataset; and you’ll want to ensure that raters/labellers aren’t injecting error or bias into datasets when generating labels and that your dataset is broad and diverse enough for your purposes and reflects, as far as possible, data it will face in the real world.
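As a rough illustration of checking that a dataset is broad enough, here’s a small Python sketch that flags underrepresented values of a feature before training. The record format, field name and 5% threshold are made-up assumptions for a RUN-style dataset:

```python
from collections import Counter

def coverage_report(records, field, minimum_share=0.05):
    """Flag values of `field` that are underrepresented in the dataset."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {
        value: (n / total, "OK" if n / total >= minimum_share else "UNDERREPRESENTED")
        for value, n in counts.items()
    }

# Hypothetical sample: mostly road runs, very few track runs.
runs = [{"terrain": "road"}] * 90 + [{"terrain": "trail"}] * 8 + [{"terrain": "track"}] * 2
for terrain, (share, status) in coverage_report(runs, "terrain").items():
    print(f"{terrain}: {share:.0%} {status}")  # track: 2% UNDERREPRESENTED
```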

5/ Design carefully for raters and labelling - For supervised learning, having accurate data labels is crucial to getting useful output from your model. Thoughtful design of rater instructions and UI flows will help yield better quality labels and therefore better output. Once your model is running, interpret the ML output to ensure it’s aligned with product goals and user needs. If it’s not, then troubleshoot: explore potential issues with your data or labelling instructions.
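One common way to sanity-check label quality (my addition, not a Guidebook prescription) is to measure inter-rater agreement on items that two raters labelled independently, for example with Cohen’s kappa via scikit-learn:

```python
from sklearn.metrics import cohen_kappa_score

# Made-up labels for illustration; in practice you'd sample items that
# two raters labelled independently in your labelling pipeline.
rater_a = ["easy", "hard", "easy", "moderate", "hard", "easy"]
rater_b = ["easy", "hard", "moderate", "moderate", "easy", "easy"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")

# A low kappa (below ~0.6 is a common rule of thumb) suggests the rater
# instructions are ambiguous and should be revised before you train on
# the labels.
```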

6/ Source your data responsibly & tune your model - Whether using pre-labeled data or collecting your own, it’s critical to evaluate your data and collection method to ensure they’re appropriate for your project. Once the model is live, evaluate the output to assess whether it’s addressing your target user’s needs according to the success metrics you defined. If not, you may need to tune it by adjusting the model’s parameters or your reward function or troubleshooting your training data. Try to tie model changes to a clear metric of the subjective user experience like customer satisfaction, or how often users accept a model’s recommendations.
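Here’s a minimal sketch of tying a model change to such a metric: comparing the suggestion acceptance rate across two model versions from a hypothetical event log (all field names and numbers are illustrative):

```python
# Each event records which model version served a suggestion and whether
# the user accepted it - a simple proxy for subjective experience.
events = [
    {"model": "v1", "accepted": True},  {"model": "v1", "accepted": False},
    {"model": "v1", "accepted": False}, {"model": "v2", "accepted": True},
    {"model": "v2", "accepted": True},  {"model": "v2", "accepted": False},
]

def acceptance_rate(events, model):
    relevant = [e for e in events if e["model"] == model]
    return sum(e["accepted"] for e in relevant) / len(relevant)

for version in ("v1", "v2"):
    print(f"{version}: {acceptance_rate(events, version):.0%} of suggestions accepted")

# If v2's offline loss improved but its acceptance rate didn't, the
# tuning may be optimising the wrong thing for users.
```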

7/ Prepare users’ mental models and help them understand how to train the system - Set expectations for what a product can and can’t do, what kind of value people can expect from it, how it may change, and how they can improve it. Account for user expectations of human-like interaction and help users calibrate their trust: people are more likely to hold unachievable expectations of products they assume have human-like capabilities, so communicate the algorithmic nature and limits of these products to set realistic expectations and avoid unintended deception. Because AI products are based on statistics and probability, the user shouldn’t trust the system completely; incorporate system explanations so the user knows when to trust the system’s predictions and when to apply their own judgement.

EXAMPLE 1 - Explain the benefits, not the technology (source: Google People + AI Guidebook)

8/ Optimise for understanding & manage influence on user decisions - In some cases, there may be no explicit, comprehensive explanation for the output of a complex algorithm; even the developers of the AI may not know precisely how it works. In other cases, the reasoning behind a prediction may be knowable but difficult to explain to users in terms they will understand. Often the best approach is not to attempt to explain everything - just the aspects that impact user trust and decision-making. Techniques such as system or output explanations, data source transparency and model confidence displays can help build user understanding of, and trust in, the system. AI systems often generate output that the user needs to act on, so if, when, and how the system calculates and shows confidence levels can be critical in informing the user’s decision-making and calibrating their trust.
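As a rough sketch of a confidence display policy, the function below maps a prediction’s confidence to progressively more hedged UI messages. The thresholds and wording are assumptions a team would tune through user research, not values from the Guidebook:

```python
def present_prediction(label, confidence):
    """Map a prediction and its confidence score (0-1) to a UI message."""
    if confidence >= 0.90:
        # High confidence: assert the recommendation plainly.
        return f"Recommended for you: {label}"
    if confidence >= 0.60:
        # Medium confidence: hedged phrasing helps users calibrate trust.
        return f"You might like {label} (we're not sure - tell us if this fits)"
    # Low confidence: asking beats guessing - defer to the user's judgement.
    return "We don't have a confident suggestion yet. What kind of run do you want?"

print(present_prediction("Riverside Trail, 5 km", 0.95))
print(present_prediction("Hill Loop, 8 km", 0.72))
print(present_prediction("Track Intervals", 0.40))
```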

9/ Align feedback with model improvement - Leveraging feedback is a powerful and scalable way to improve your technology, provide personalized content, and enhance the user experience. For AI products, user feedback and control are critical to improving your underlying AI model’s output and user experience. When users have the opportunity to offer feedback, they can play a direct role in personalising their experiences and maximizing the benefit your product brings to them. When users have the right level of control over the system, they’re more likely to trust it.
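One simple pattern for aligning feedback with model improvement (a sketch of my own, assuming a hypothetical thumbs-up/down control in the UI) is to capture each piece of explicit feedback as a labelled example for the next retraining run, rather than discarding it:

```python
# In a real product this would be durable storage feeding a retraining
# pipeline; a list stands in for it here.
feedback_log = []

def record_feedback(user_id, item_id, features, liked):
    """Turn a thumbs-up/down into a future training example."""
    feedback_log.append({
        "user": user_id,
        "item": item_id,
        "features": features,
        "label": 1 if liked else 0,  # becomes supervision at the next retrain
    })

record_feedback("runner42", "route_17", {"distance_km": 5, "terrain": "trail"}, liked=True)
record_feedback("runner42", "route_03", {"distance_km": 12, "terrain": "road"}, liked=False)
print(f"{len(feedback_log)} feedback events queued for the next training run")
```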

10/ Think about potential errors and paths forward - With AI systems, errors can come from many places, be harder to identify, and appear to the user and to system creators in non-intuitive ways. As users interact with your product, they’ll test it in ways you can’t foresee during the development process. Misunderstandings, false starts and impropriety will happen, so designing for these cases is a core component of any user-centred product. Errors are also opportunities: they can support faster learning by experimentation, help establish correct mental models, and encourage users to provide feedback. Creating paths for users to take action in response to the errors they encounter encourages patience with the system, keeps the user-AI relationship going, and supports a better overall experience (see the sketch after the example below).

EXAMPLE 2 - Acknowledge errors & offer a forward path (source: Google People + AI Guidebook)
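Below is a minimal sketch of what such a forward path might look like in code. The recommender call and fallback copy are hypothetical stand-ins; the point is simply to acknowledge the miss and offer the user a next step rather than fail silently:

```python
def suggest_route(profile, recommender):
    """Return a suggestion, or an honest fallback with a path forward."""
    try:
        suggestion = recommender(profile)
        if suggestion is None:  # the model abstained or found nothing
            raise LookupError("no suitable route")
        return f"Try this: {suggestion}"
    except Exception:
        # Acknowledge the miss, keep the relationship going, invite feedback.
        return ("We couldn't find a good match this time. "
                "Browse popular routes near you, or tell us what went wrong "
                "so we can improve your suggestions.")

# A recommender that fails still yields a constructive message:
print(suggest_route({"skill": "beginner"}, lambda profile: None))
```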

Conclusion

Even the best AI will fail if it doesn’t provide unique value to users. Instead of asking “Can we use AI to do X?”, start exploring human-centred AI solutions by asking: “How might we solve problem X for our users? Can AI solve this problem in a unique way?”

To capture AI’s true value and ensure your product is not only successfully built but also successfully adopted and loved by your users, design AI products through a human-centric lens. Embedding human values, ethics, perceptions and needs into intelligent systems, and focusing on augmenting (rather than replacing) human capabilities and experiences, is a good place to start. As user-centred design pioneer Don Norman said, “AI needs to accept human behavior the way it is, not the way we would wish it to be.”

More Reading

IDEO’s Design Kit tools & methodologies

Google AI Principles

Google Responsible AI Practices