FEB 2021

How to shift your design process towards AI

Are you failing to solve the right problems for your customers?
This article helps you move away from the solution space and back to the problems your users really need solved. It is a brief guide on how to get designers and data scientists collaborating better and sooner.


In the article Moving towards Human-centered AI, I trace the evolution of the term Human-centered Design (HCD) towards Human-centered AI (HCAI).

In this article, I will show how the 'traditional' design process has shifted to take the AI development process into account, focusing on the First Stage: Empathize & Hypothesis.
So, how can you shift the design thinking process towards Human-centered AI (HCAI)?

⚠️ Spoiler alert! ⚠️ Designers will start to pair with data scientists to research the intersection of user needs with AI strengths and to solve problems where AI adds unique value. I will discuss a set of tools and methods, backed by academic research, that help designers and data scientists craft valuable, high-quality human-AI interactions and experiences.

Pair Designers with Data Scientists

Pair design
Pair design puts two designers to work together on the same problem, collaborating as thought partners to reach better design solutions. Two heads think better than one, right?

The initiative should come from data science or engineering, in recognition of the value of bringing designers into the earliest stage of the project. Pairing with designers in their decision-making process will steer technological development towards a genuinely human-centered philosophy and approach.

Prototyping with AI, therefore, requires intensive collaboration between designers, data and AI experts and strategists — not to mention users — and these interactions should start from the very beginning of a project [1].

Human-centered AI Design Process (building on Simon's 7 stages of the design process)
This process is a synergy of qualitative and quantitative methods that inform designers and data scientists throughout the design process. The result: solutions that are more engaging and tailored to users' preferences, goals, and behaviors.

First Stage — Empathize & Hypothesis

Human-centered AI Design Process, Stage 1: Empathize & Hypothesis (building on Simon's 7 stages of the design process)
Designers' Goals for Empathize & Hypothesis

1. User-centered problem solving
Explore how AI could help solve user needs in a unique way. Designers can approach these challenges with problem-framing tools such as the AI canvas.

These strategies will help you identify user needs and find AI opportunities. The goal is to find meaningful use cases where applying AI leverages user needs. To narrow down the focus and scope, support the process with the Jobs-to-be-Done framework and User Journey Mapping.
Keep in mind: when filling in the AI canvas, use a dot system or post-its to visually annotate the assumptions that carry risk or uncertainty:

  • Low Uncertainty = Low Risk
  • Medium Uncertainty = Some Risk
  • High Uncertainty = High Risk
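If your team keeps the canvas in a shared document rather than on a wall, the same dot-voting idea can be captured in a few lines of code. A minimal sketch; all assumption texts and names are illustrative, not part of any published canvas:

```python
# Tag each AI-canvas assumption with its uncertainty level, then sort so
# the riskiest assumptions get validated first. All values are illustrative.
RISK = {"low": 1, "medium": 2, "high": 3}

assumptions = [
    ("Users will accept AI-generated suggestions", "high"),
    ("We can collect enough labeled data", "medium"),
    ("The feature fits the existing user journey", "low"),
]

# Highest-uncertainty assumptions first: these carry the most risk.
by_risk = sorted(assumptions, key=lambda a: RISK[a[1]], reverse=True)
for text, level in by_risk:
    print(f"[{level.upper():6}] {text}")
```

The point is the ordering, not the tooling: high-uncertainty assumptions surface first so the team knows what to de-risk before building.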

After these interventions, designers, data scientists, and stakeholders will be able to find the intersection of user needs and AI strengths, and start to build the ground for solutions that solve a real problem in ways where AI adds unique value.

By the end of this phase...

Your AI solution(s) should fall into one or more of the categories below. Next, discovering what is desirable, viable, and/or technically feasible should be a collaborative process shared by UX, Product, Engineering, and SME perspectives at a minimum.
The Seven Patterns of AI (source: Cognilytica)
2. Assess Automation vs Augmentation
In this stage, you need to assess whether the best approach for the user is to automate or to augment the task. (If you want to understand this topic further, please follow my article below.)

  • When to automate? Tasks that are difficult or unpleasant, or where there is a need for scale.
  • When to augment? Tasks that people enjoy doing, that carry social capital, or where people don't agree on the "correct" way to do them.
The Google Triptech method is an early concept evaluation method that can be used to outline user requirements based on likes, dislikes, expectations, and concerns.
Google People + AI Guidebook protocol questions to assess augmentation vs automation
3. Assess precision vs recall — Design reward function

I will try to skip the statistical jargon and keep this explanation as simple as possible. Precision and recall are statistical terms that measure the relevancy of the results returned by an algorithm.

When designing for AI, the model will have to be tuned for precision or for recall, and this defines the model's accuracy. This process is called designing the reward function, and it should (again) be a collaborative process shared by UX, Product, and Engineering. The decisions made in this step are key to the AI deployment's success and will dramatically affect the final experience for your users.

Before diving into precision and recall, let me recap what Type I (false positive) and Type II (false negative) errors are in statistics.

Imagine you have an AI service that runs cancer diagnoses. The model predicts whether or not a person has cancer. Models like this are called "binary classifiers", and I will use one as a simple example of how algorithms can be right or wrong.

When a binary classifier makes a prediction (has or does not have cancer), there are only four possible outcomes:

  • True positives. The model correctly predicts a positive outcome: the person has cancer and the AI predicts cancer.
  • True negatives. The model correctly predicts a negative outcome: the person does not have cancer and the AI predicts the person is cancer-free.
  • False positives. The model incorrectly predicts a positive outcome: the person does not have cancer but the AI predicts cancer.
  • False negatives. The model incorrectly predicts a negative outcome: the person has cancer but the AI predicts the person is cancer-free.
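These four outcomes are exactly the cells of a confusion matrix. A minimal sketch of counting them for a binary cancer classifier, with made-up labels and predictions:

```python
# Count the four possible outcomes of a binary classifier.
# 1 = "has cancer", 0 = "cancer-free"; the data here is illustrative.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]

pairs = list(zip(actual, predicted))
tp = sum(1 for a, p in pairs if a == 1 and p == 1)  # true positives
tn = sum(1 for a, p in pairs if a == 0 and p == 0)  # true negatives
fp = sum(1 for a, p in pairs if a == 0 and p == 1)  # false positive (Type I)
fn = sum(1 for a, p in pairs if a == 1 and p == 0)  # false negative (Type II)

print(tp, tn, fp, fn)  # → 3 3 1 1
```

Every prediction falls into exactly one of the four cells, so the counts always sum to the number of cases.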

In this case, what is worse for the user? Being wrongly diagnosed with cancer while free of it, or having cancer while the system wrongly predicts the person is cancer-free? This duality is the difference between tuning the system for precision or for recall, so trade-offs must be weighed in this process.

Now that Type I and Type II errors are clear, let us dive into the concepts of precision and recall.

  • Optimizing for Precision means the model returns only the answers it is most confident about, but it will miss some questionable positive cases (people who have cancer but are classified as cancer-free). The higher the precision, the more confident you can be that any output of the model is correct. The trade-off is that you increase the number of false negatives by excluding possibly relevant results: the model flags only clear cancer cases and misses some diagnoses. It won't find all the correct answers, only the obvious ones.
  • Optimizing for Recall means the model returns all the right answers it can find, even if it also returns a few wrong ones (people who don't have cancer but are flagged as having it). The higher the recall, the more confident you can be that all the relevant results are included somewhere in the output. The trade-off is that you increase the number of false positives by including possibly irrelevant results: the model flags everyone with cancer, plus a few wrong diagnoses.
Google People + AI Guidebook diagram showing the trade-offs when optimizing for precision or recall.
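In terms of the four outcome counts, precision = TP / (TP + FP) and recall = TP / (TP + FN). A quick sketch with illustrative counts:

```python
# Precision: of all cases the model flagged positive, how many were right?
# Recall: of all actually positive cases, how many did the model find?
tp, fp, fn = 8, 4, 2  # illustrative outcome counts

precision = tp / (tp + fp)  # 8 / 12 ≈ 0.67: a third of the flags were wrong
recall = tp / (tp + fn)     # 8 / 10 = 0.80: we found 80% of real positives
print(f"precision={precision:.2f} recall={recall:.2f}")
```

Notice that false positives only hurt precision and false negatives only hurt recall, which is exactly why tuning the model pushes one up and the other down.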
When designers pair with data scientists, their job is to help decide what to optimize for: which experience is less harmful to the end-user. Providing meaningful insight into human reactions and human priorities can prove to be a designer's most important contribution to an AI project. In this case, being diagnosed with cancer while not having it seems less harmful than having it and missing the chance to treat it. So the designer, alongside Product and Engineering, should conclude that the model is best tuned for recall rather than precision.

And how can you do that? You may use the Google People + AI Guidebook template for the reward function.
Based on user attitudes towards automation and augmentation (previous step), the designer and the rest of the team should map all the scenarios, especially those that can produce a false positive or a false negative, and list out instances of each reward-function dimension. Then look at the false positives and false negatives identified: if the service offers the most useful benefit with fewer false positives, consider optimizing for precision; if it offers the most useful benefit with fewer false negatives, consider optimizing for recall.
Google People + AI Guidebook template for designing the reward function
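One concrete way the team can act on this decision: most classifiers output a probability, and moving the decision threshold trades precision for recall. A sketch of sweeping thresholds to see that trade-off, with made-up scores and labels (this is my illustration, not part of the Guidebook template):

```python
# Sweep decision thresholds: lower thresholds catch more positives
# (higher recall) at the cost of more false positives (lower precision).
scores = [0.95, 0.90, 0.80, 0.65, 0.55, 0.40, 0.30, 0.10]  # model confidence
labels = [1, 1, 1, 0, 1, 0, 0, 0]                          # 1 = has cancer

def precision_recall(threshold):
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for p, a in zip(preds, labels) if p == 1 and a == 1)
    fp = sum(1 for p, a in zip(preds, labels) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(preds, labels) if p == 0 and a == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# For a cancer screen the team favors recall: prefer a lower threshold
# whose precision is still acceptable.
for t in [0.9, 0.7, 0.5, 0.2]:
    p, r = precision_recall(t)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f}")
```

With this toy data, dropping the threshold from 0.9 to 0.5 lifts recall from 0.50 to 1.00 while precision falls from 1.00 to 0.80: the same trade-off the reward-function discussion is about, made tangible.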
Data Scientists' Goals for Empathize & Hypothesis

If you, as a data scientist, start adopting the design thinking model, you can learn from primary and secondary research as designers do, and develop an understanding of the problem space. These techniques will help you familiarize yourself with the market or industry, get to know the stakeholders, and, paired with designers, start to uncover insights that will frame your data research process.
1. Tech-driven opportunity spotting
Algorithms are a new medium for design. Through algorithms, design has driven the augmentation of human experience and the creation of better emotional awareness.

In this stage, you will explore opportunities for AI capabilities to create value. Here, you can use a Data Design Sprint to decide what data is required to meet user needs; it helps your team align on the most important outcomes for building a machine learning model. The goal is to gather information on:

  • What intersection of user needs with AI strengths is the model trying to solve?
  • Is it a classification or regression problem?
Classification vs regression (source: A Design-first Approach to AI, 1000 Days Out)
  • Are the end-users technical or non-technical people? What value do they expect to derive from the model?
  • What are the larger implications of the model?
  • How big is the market? Who are the other players in this market? Are they complementing our product or are they direct competitors?
  • What are the key regulatory, cultural, socioeconomic, and technological trends to take into account?
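The classification-vs-regression question above comes down to the shape of the model's output. A toy sketch (the feature, rule, and coefficients are invented for illustration):

```python
# Same underlying data, framed two ways:
# - classification predicts a discrete label ("churn" / "stay")
# - regression predicts a continuous number (months retained)
sessions_per_week = [1, 2, 6, 8]

# Classification framing: a simple decision rule over the feature.
labels = ["churn" if s < 4 else "stay" for s in sessions_per_week]

# Regression framing: a made-up linear estimate of retention months.
months_retained = [2.5 * s + 1 for s in sessions_per_week]

print(labels)           # discrete categories
print(months_retained)  # continuous values
```

Answering "label or number?" early matters because it dictates the training data you need: labeled categories for classification, measured quantities for regression.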

Tune the algorithm for the user's psychographic profile provided by the design team.

2. Data-driven opportunity spotting

Where does the data come from?
First things first: the data source must be defined responsibly; otherwise, you may introduce a lot of bias into your models and, worse, jeopardize the end-user experience.
You will need to assess whether the model will use your own data and collection methods or a third-party (pre-existing) data source. To decide between them, first measure the trade-off of possible bias in those resources and how it can hurt the user and, ultimately, business revenue. This discipline leads to good data integrity.

Understanding Data Assets
According to Norbert Wirth and Martin Szugat, as soon as it is defined which information from the data is to be used, the question arises where this data can be obtained to ensure the feasibility of the planned product or solution. Many projects have gone terribly wrong because the available data never contained the required information in sufficient quality, or did not contain it at all. To overcome this problem, we can use a Data Landscape Canvas, which helps the team blend the business, user, and data perspectives.
Data Landscape Canvas created by Norbert Wirth and Martin Szugat, retrieved from Research World.
Pairing designers and data scientists here is crucial to reduce the potential of AI harming business and society by encouraging mistrust and producing distorted results. Designers and data scientists need to ensure that the AI systems they implement are developed to improve human decision-making. We all have a responsibility to encourage progress on research and standards that reduce bias in AI.

So, you need to evaluate:

  • Will the data be private or public (source data)?
  • What data is required to meet our user needs?
  • How will the model be tuned (precision or recall)?
  • Will the data be static or dynamic?
  • Is the model state-based or stateless?
  • What is the granularity and quality of the data being collected?
  • How do you best represent that information?
  • Are there regulatory restrictions on data usage, like PCI, HIPAA, or GDPR?
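Teams sometimes capture the answers to these questions as a lightweight, reviewable artifact kept next to the model code. A sketch of that idea; every field name and value here is hypothetical:

```python
# A lightweight "data requirements" record the team can review and
# version-control alongside the model. All values are illustrative.
data_spec = {
    "source": "first-party",           # vs. third-party / pre-existing
    "visibility": "private",           # vs. public
    "freshness": "dynamic",            # vs. static snapshot
    "model_state": "stateless",        # vs. state-based
    "granularity": "per-session events",
    "tuned_for": "recall",             # vs. precision
    "regulatory": ["GDPR"],            # e.g. PCI, HIPAA, GDPR
}

# Fail fast if a regulated field was left unanswered.
assert data_spec["regulatory"], "document regulatory constraints"
print(sorted(data_spec))
```

Writing the answers down forces the cross-functional conversation the checklist above is asking for, and makes later audits of data decisions much easier.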

Translating user needs into data needs is a sensitive process. Determining the type of data needed to train your model, considering predictive power, relevance, fairness, privacy, and security, will ultimately shape the end-user experience.


Throughout this process, designers help data science teams save money and time by elucidating the problem space around user needs. Designers can restrain data scientists who set their mindset on whatever data is available and want to jump straight into uncovering insights, guiding them first to identify what would be most strategically valuable to research before they get hands-on with their models.
1. V
2. https://medium.com/dain-studios/prototyping-with-data-and-ai-in-service-design-841e1d4820b2
3. Cerejo, J. and Carvalhais, M. (2020). The Lens of Anticipatory Design under AI-driven Services. DIGICOM International Conference on Digital Design & Communication.
4. Manyika, J., Silberg, J., and Presten, B. (2019). What Do We Do About the Bias in AI? Harvard Business Review.