You’ve heard the maxim, “Trust, but verify.” That’s a contradiction—if you need to verify something, you don’t truly trust it. And if you can verify it, you probably don’t need trust at all!
While consulting for a national DIY automotive store chain, we discovered a common pattern. Auto enthusiasts (gearheads) who could evaluate spare part technologies and verify quality on their own did not care which store they patronized, as long as the products they needed were always available. On the other hand, relative amateurs and novices who lacked sufficient technical knowledge developed loyalty to retail stores where they felt they received trustworthy guidance to help select the right products for their needs.
In discussions of AI in healthcare, one of the most frequently repeated claims (by now approaching an aphorism) is that "explainability" is a foundation of trust. I want to explain why this claim may mischaracterize how users actually behave and lead AI developers in the wrong direction.
Explainability Has Never Been a Key Driver of Trust
If explainability were truly a driver of trust, millions of everyday transactions would never occur as commonly and smoothly as they do. There are countless processes that companies cannot, or do not, make transparent to consumers (or even to their technical users). Consider how little young parents know about the production processes behind the baby food they rely on, how little travelers understand about the engineering safeguards that keep airplanes safe, how opaque the chemical composition (and mechanism of action) of antidepressant medications is to patients, how little a driver knows about the complex electronics under the hood of a new hybrid car, or how mysterious the workings of implantable cardioverter-defibrillators (ICDs) are to patients and their families.
These products and services are fraught with uncertainty, ambiguity, and complexity; that is, they create extreme information asymmetry between companies and their customers. Fully understanding how each one works would take specialized knowledge, substantial effort, and considerable time. Typical consumers who tried would have no time left to live their lives! All of it would be consumed verifying and understanding the new products and services facing them.
This is where trust comes into the picture. Patients wouldn't need trust if everything were clear, transparent, understood, and explainable. Trust in healthcare is most essential, and the trustworthiness of a brand or provider most critical, precisely when patients face information asymmetry and feel vulnerable. Trust is the substitute that individuals rely on when explainability is unviable, whether because the patient lacks the motivation to wade through extensive detail or the ability to understand the underlying processes.
Trust Provides Patients a Path Through Persistent Information Asymmetry
Treating artificial intelligence (AI) explainability as a prerequisite for trust in AI products and services is counterproductive. Instead, AI firms should focus on strengthening trust in their brands and organizations and on sustaining trust in the clinicians who serve as the critical point of contact for patients. As our own and others' research has shown, firms can (and do) fortify trust through demonstrated competence and benevolence (pro-customer intent) across all domains of interaction with patients and other stakeholders.
Moreover, it is perhaps “reputation”—a close cousin of trust—that is a major driver of patients’ initial impressions and first experiences with AI-driven healthcare. Reputations for trustworthiness are generally built without first-hand experience, via mass media, word-of-mouth stories, and social media. Companies should monitor and manage their reputations with care and discipline.
My key recommendation is that rather than treating "explainability" as a prerequisite for trust, AI providers are better off thinking of trustworthiness and trusted reputations as ways to surmount persistent "inexplicability." Providers with established trustworthiness in technology, such as Google or Microsoft, or in healthcare, such as the Cleveland Clinic or Duke Health, already have the risk-dampening identities that will serve them well. At the point of care, patients' trust in their physicians and other clinicians will help them overcome the anxiety or sense of vulnerability they might feel as AI becomes an increasingly routine part of their care.
Clinicians and healthcare systems that exercise caution in adopting AI in patient-facing processes should be commended, not criticized as risk-averse. They are protecting their trustworthiness, a critical and hard-earned resource that will serve them and their patients well as AI becomes more pervasive in healthcare and information asymmetries between patients and providers persist.
