DOI: 10.1145/3531146.3533182

Designing for Responsible Trust in AI Systems: A Communication Perspective

Published: 20 June 2022

    Abstract

    Current literature and public discourse on “trust in AI” are often focused on the principles underlying trustworthy AI, with insufficient attention paid to how people develop trust. Given that AI systems differ in their level of trustworthiness, two open questions come to the fore: how should AI trustworthiness be responsibly communicated to ensure appropriate and equitable trust judgments by different users, and how can we protect users from deceptive attempts to earn their trust? We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH, which describes how trustworthiness is communicated in AI systems through trustworthiness cues and how those cues are processed by people to make trust judgments. Besides AI-generated content, we highlight transparency and interaction as AI systems’ affordances that present a wide range of trustworthiness cues to users. By bringing to light the variety of users’ cognitive processes to make trust judgments and their potential limitations, we urge technology creators to make conscious decisions in choosing reliable trustworthiness cues for target users and, as an industry, to regulate this space and prevent malicious use. Towards these goals, we define the concepts of warranted trustworthiness cues and expensive trustworthiness cues, and propose a checklist of requirements to help technology creators identify appropriate cues to use. We present a hypothetical use case to illustrate how practitioners can use MATCH to design AI systems responsibly, and discuss future directions for research and industry efforts aimed at promoting responsible trust in AI.




            Published In

            FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
            June 2022
            2351 pages
            ISBN: 9781450393522
            DOI: 10.1145/3531146
            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

            Publisher

            Association for Computing Machinery

            New York, NY, United States

            Publication History

            Published: 20 June 2022


            Author Tags

            1. AI design
            2. Trust in AI
            3. human-AI interaction
            4. human-centered AI

            Qualifiers

            • Research-article
            • Research
            • Refereed limited

            Conference

            FAccT '22


            Article Metrics

            • Downloads (Last 12 months): 800
            • Downloads (Last 6 weeks): 61


            Cited By

            • (2024) “It would work for me too”: How Online Communities Shape Software Developers’ Trust in AI-Powered Code Generation Tools. ACM Transactions on Interactive Intelligent Systems 14, 2, 1-39. https://doi.org/10.1145/3651990
            • (2024) Generative AI in User Experience Design and Research: How Do UX Practitioners, Teams, and Companies Use GenAI in Industry? Proceedings of the 2024 ACM Designing Interactive Systems Conference, 1579-1593. https://doi.org/10.1145/3643834.3660720
            • (2024) Homogenization Effects of Large Language Models on Human Creative Ideation. Proceedings of the 16th Conference on Creativity & Cognition, 413-425. https://doi.org/10.1145/3635636.3656204
            • (2024) Investigating and Designing for Trust in AI-powered Code Generation Tools. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1475-1493. https://doi.org/10.1145/3630106.3658984
            • (2024) Should Users Trust Advanced AI Assistants? Justified Trust As a Function of Competence and Alignment. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1174-1186. https://doi.org/10.1145/3630106.3658964
            • (2024) Practising Appropriate Trust in Human-Centred AI Design. Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, 1-8. https://doi.org/10.1145/3613905.3650825
            • (2024) Transparent AI Disclosure Obligations: Who, What, When, Where, Why, How. Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, 1-11. https://doi.org/10.1145/3613905.3650750
            • (2024) Human-Centered Evaluation and Auditing of Language Models. Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, 1-6. https://doi.org/10.1145/3613905.3636302
            • (2024) Guidelines for Integrating Value Sensitive Design in Responsible AI Toolkits. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-20. https://doi.org/10.1145/3613904.3642810
            • (2024) Trust in AI-assisted Decision Making: Perspectives from Those Behind the System and Those for Whom the Decision is Made. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-14. https://doi.org/10.1145/3613904.3642018
