In the coming years, a huge effort will go into scaling AI's potential to augment human capabilities and into building successful partnerships between humans and AI. This synergy will only be possible if designers and engineers build trustworthy solutions that operate transparently in people's best interests.
Some businesses fail to accomplish this because, although they have the necessary technological resources, they lack a holistic understanding of the end-to-end user experience. A system may have the best algorithmic performance and user interface yet still fail to meet its user and business goals because there is no trusting relationship between the user and the tool.
With AI services, usable experiences do not necessarily result in transparent and trustworthy ones. Managing expectations is difficult in any service, and particularly so with autonomous intelligent systems, where the user may not understand what the system can and cannot do.
Therefore, designers and engineers need to emphasize inclusivity and ethics to mitigate the consequences of bias and to recalibrate trust. For instance, following the guidance of Google PAIR, this can be done by explaining predictions, recommendations, and other AI output to users. Designers and engineers must enable users to understand when to trust a prediction and when to apply their own judgment. If they fail to strike this balance, intelligent systems will simply add another layer of frustration to the user experience. Designers need to ensure users can make sense of the potentially unexpected output of the AI.
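One way to help users decide when to trust a prediction is to pair every AI output with an honest statement of the system's confidence and an explicit prompt to apply human judgment when confidence is low. The sketch below illustrates this idea; the function name, the 0.8 threshold, and the message wording are all illustrative assumptions, not part of any published guideline.

```python
# Minimal sketch: surface model confidence so users know when to apply
# their own judgment. Threshold and wording are illustrative assumptions.

def present_prediction(label: str, confidence: float, threshold: float = 0.8) -> str:
    """Return a user-facing message that pairs a prediction with an
    honest statement of how much to trust it."""
    if confidence >= threshold:
        return f"Predicted: {label} (confidence {confidence:.0%})."
    # Below the threshold, ask the user to verify rather than overstate certainty.
    return (f"Possible match: {label} (confidence {confidence:.0%}). "
            "Confidence is low; please verify before acting on this.")

print(present_prediction("spam", 0.93))
print(present_prediction("spam", 0.55))
```

The design choice here is deliberate: rather than hiding uncertainty, the interface changes its language ("Predicted" versus "Possible match") so the appropriate level of trust is communicated in plain words, not just a number.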
Establishing the right level of trust is an ongoing process. AI can change and adapt over time, and so will the user's relationship with the product and their trust in it.