# Trust in AI: progress, challenges, and future directions
> [!info]+ <center>Metadata</center>
>
> |<div style="width: 5em">Key</div>|Value|
> |--:|:--|
> |Item Type|journalArticle|
> |Title|Trust in AI: progress, challenges, and future directions|
> |Short Title|对人工智能的信任:进步、挑战和未来方向|
> |Authors|[[Saleh Afroogh]], [[Ali Akbari]], [[Emmie Malone]], [[Mohammadali Kargar]], [[Hananeh Alambeigi]]|
> |Journal|[[Humanities and Social Sciences Communications]]|
> |DOI|[10.1057/s41599-024-04044-8](https://doi.org/10.1057/s41599-024-04044-8)|
> |Archive Location||
> |Library Catalog|DOI.org|
> |Call Number||
> |Rights||
> |Collection|[[Nature系列]]|
> |Item Link|[My Library](zotero://select/library/items/UL6VUERT)|
> |PDF Attachment|[Afroogh 等 - 2024 - Trust in AI progress, challenges, and future directions.pdf](zotero://open-pdf/library/items/WC9FQHNB)|
> |Related||
> ^Metadata
> [!example]- <center>Tags</center>
>
> `$=dv.current().file.tags`
> [!quote]- <center>Abstract</center>
>
> We conducted an inclusive and systematic review of academic papers, reports, case studies, and trust frameworks in AI, written in English. Given that there is not a specific database on trust in AI in particular, we used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to develop a protocol in this review (Fig. 1). In order to conduct a comprehensive review of the relevant studies, we followed two approaches. First, we manually searched for the most related papers on trust in AI: 19 papers were identified through the online search after the removal of duplicate files. Secondly, we fulfilled a keyword-based search (using the http://scholar.google.com search engine) to collect all relevant papers on the topic. This search was accomplished using the following keyword phrases: (1) “trust + AI” which provided 19 relevant result pages of Google Scholar, (2) “trust + Artificial + Intelligence” for which the first five result pages were reviewed, (3) “trustworthy + AI,” for which the first 15 result pages were reviewed; and (4) “trustworthy + Artificial + Intelligence,” for which the first 13 result pages of Google Scholar were reviewed.
> [!tldr]- <center>Hidden Info</center>
>
> itemType:: journalArticle
> title:: Trust in AI: progress, challenges, and future directions
> shortTitle:: 对人工智能的信任:进步、挑战和未来方向
> creators:: [[Saleh Afroogh]]、 [[Ali Akbari]]、 [[Emmie Malone]]、 [[Mohammadali Kargar]]、 [[Hananeh Alambeigi]]
> publicationTitle:: [[Humanities and Social Sciences Communications]]
> journalAbbreviation:: Humanit Soc Sci Commun
> volume:: 11
> issue:: 1
> pages:: 1568
> series::
> language:: en
> DOI:: [10.1057/s41599-024-04044-8](https://doi.org/10.1057/s41599-024-04044-8)
> ISSN:: 2662-9992
> url:: [https://www.nature.com/articles/s41599-024-04044-8](https://www.nature.com/articles/s41599-024-04044-8)
> archive::
> archiveLocation::
> libraryCatalog:: DOI.org
> callNumber::
> rights::
> extra:: 🏷️ /unread、📒
> collection:: [[Nature系列]]
> tags:: #unread
> related::
> itemLink:: [My Library](zotero://select/library/items/UL6VUERT)
> pdfLink:: [Afroogh 等 - 2024 - Trust in AI progress, challenges, and future directions.pdf](zotero://open-pdf/library/items/WC9FQHNB)
> qnkey:: Afroogh 等 - 2024 - Trust in AI:progress, challenges, and future directions
> date:: 2024-11-18
> dateY:: 2024
> dateAdded:: 2025-04-13
> datetimeAdded:: 2025-04-13 14:27:26
> dateModified:: 2025-04-13
> datetimeModified:: 2025-04-13 15:00:39
>
> abstract:: We conducted an inclusive and systematic review of academic papers, reports, case studies, and trust frameworks in AI, written in English. Given that there is not a specific database on trust in AI in particular, we used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to develop a protocol in this review (Fig. 1). In order to conduct a comprehensive review of the relevant studies, we followed two approaches. First, we manually searched for the most related papers on trust in AI:19 papers were identified through the online search after the removal of duplicate files. Secondly, we fulfilled a keyword-based search (using the http://scholar.google.com search engine) to collect all relevant papers on the topic. This search was accomplished using the following keyword phrases:(1) “trust + AI” which provided 19 relevant result pages of Google Scholar, (2) “trust + Artificial + Intelligence” for which the first five result pages were reviewed, (3) “trustworthy + AI,” for which the first 15 result pages were reviewed; and (4) “trustworthy + Artificial + Intelligence,” for which the first 13 result pages of Google Scholar were reviewed.
%--------------ω--------------%
Below is a detailed summary of the review "Trust in AI: progress, challenges, and future directions" published in *Humanities and Social Sciences Communications*:
---
### **Background and Significance**
- AI has permeated core domains such as healthcare, transportation, finance, and the military, becoming part of society's infrastructure. Yet users' trust in AI directly affects adoption, and a lack of trust can impede its widespread deployment.
- AI systems learn autonomously, behave unpredictably, and operate as "black boxes," so their decisions lack transparency and explainability, which deepens the trust crisis.
- A systematic account of the definition, scope, and determinants of trust in AI is therefore needed, along with technical and non-technical paths to trustworthy AI, to advance responsible AI development.
---
### **Methodology**
- A systematic literature review following the PRISMA framework; keyword searches (e.g., "Trust + AI", "Trustworthy AI") on Google Scholar and related platforms identified 329 relevant papers.
- Inclusion criteria: academic articles focused on trust in AI across technical, ethical, and legal dimensions; duplicates and off-topic papers were excluded.
---
### **Core Findings**
#### **1. Models and Types of Trust in AI**
- **What makes trust in AI distinctive**: Unlike interpersonal trust (grounded in benevolence and honesty), trust in AI centers on technical competence (accuracy, reliability), transparency (explainability), and ethical compliance; users accept vulnerability and base their trust on how closely the system matches expected behavior.
- **Types of trust**:
  - **Human–machine interaction**: e.g., physicians trusting medical AI diagnoses, or users relying on autonomous-driving decisions.
  - **Machine–machine interaction**: trust-based cooperation among IoT devices, which must withstand adversarial attacks (e.g., blockchain for smart-contract verification).
  - **AI–object interaction**: e.g., an autonomous-driving system judging whether traffic signs or social-media data are trustworthy.
#### **2. Evaluation Metrics for Trustworthy AI**
- **Technical metrics**:
  - **Security**: the system's ability to withstand attacks and errors (e.g., protecting medical data privacy).
  - **Accuracy**: the correctness of predictions or decisions (e.g., financial risk-assessment models).
  - **Robustness**: stability under noise or data bias (e.g., autonomous driving in extreme weather).
- **Value metrics**:
  - **Ethics**: avoiding bias (e.g., fairness in hiring AI) and safeguarding privacy (e.g., anonymizing user data).
  - **Legal compliance**: meeting regulatory requirements such as the GDPR (e.g., algorithmic auditability).
  - **Social benefit**: advancing sustainable development (e.g., AI for green energy management).
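The technical metrics above, accuracy and robustness under noisy inputs, can be sketched as simple measurements. A minimal sketch; the threshold classifier, the toy data, and the uniform noise bound are illustrative assumptions, not from the review:

```python
import random

def accuracy(model, samples):
    """Fraction of (features, label) pairs the model predicts correctly."""
    return sum(model(x) == y for x, y in samples) / len(samples)

def robustness(model, samples, noise=0.5, seed=0):
    """Accuracy after bounded uniform noise perturbs every feature --
    a crude stand-in for 'stability under noisy inputs'."""
    rng = random.Random(seed)
    noisy = [([v + rng.uniform(-noise, noise) for v in x], y)
             for x, y in samples]
    return accuracy(model, noisy)

# Hypothetical toy classifier: label is True when the feature sum is positive.
model = lambda x: sum(x) > 0
data = [([2.0, 1.0], True), ([-3.0, -1.0], False), ([0.2, 0.1], True)]
clean = accuracy(model, data)        # perfect on clean data
perturbed = robustness(model, data)  # may drop for near-boundary samples
```

Comparing the two scores on held-out data is one way to report the gap between headline accuracy and behavior under realistic noise.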
#### **3. Sources of Distrust in AI**
- **Technical flaws**: model errors (e.g., bad advice from financial AI) and data bias (e.g., racial bias in medical diagnosis).
- **Ethical and legal threats**: e.g., facial recognition infringing on privacy, autonomous weapons endangering human safety.
- **Threats to autonomy**: loss-of-control risks when AI supplants human decision-making (e.g., the ethical dilemma of emergency evasive maneuvers in autonomous driving).
- **Threats to dignity**: emotionally interactive AI (e.g., care robots) eroding the authenticity of human relationships.
#### **4. Trust-Building Strategies**
- **Technical enhancement**: explainable AI (XAI), more robust models (e.g., via adversarial training), and real-time uncertainty quantification.
- **Transparency measures**: disclosing algorithmic logic (e.g., open-sourcing code) and making decisions traceable (e.g., visualizing the basis of a medical diagnosis).
- **Ethics and governance**: industry standards (e.g., the IEEE ethics guidelines) and interdisciplinary regulatory frameworks (collaboration between technical and legal experts).
- **User education**: cultivating a realistic public understanding of AI capabilities (avoiding both over-reliance and outright rejection).
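One lightweight form of the uncertainty quantification mentioned above is ensemble disagreement: when independently trained models disagree on an input, the reported confidence should drop. A minimal sketch; the three threshold classifiers stand in for a real ensemble and are purely hypothetical:

```python
def ensemble_predict(models, x):
    """Majority vote over an ensemble, with a confidence score equal to
    the fraction of members that agree with the winning label."""
    votes = [m(x) for m in models]
    winner = max(set(votes), key=votes.count)
    confidence = votes.count(winner) / len(votes)
    return winner, confidence

# Hypothetical ensemble: three threshold classifiers with different cutoffs.
models = [lambda x, t=t: x > t for t in (0.2, 0.5, 0.8)]

label, conf = ensemble_predict(models, 0.9)    # unanimous vote
label2, conf2 = ensemble_predict(models, 0.6)  # split vote, lower confidence
```

Surfacing the confidence score alongside the prediction lets a user decide when to defer to the system and when to fall back on human judgment, which is exactly the calibration problem the review raises.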
---
### **Discussion and Challenges**
- **Value conflicts**: e.g., greater transparency can reduce model performance (encryption adds computational cost), so trade-offs must be weighed against practical needs.
- **Dynamic trust calibration**: user trust changes over time (e.g., high initial trust in autonomous driving collapsing after an accident), which calls for adaptive adjustment mechanisms.
- **Cultural differences**: regions define privacy and fairness differently (e.g., Asian collectivism vs. Western individualism), requiring localized strategies.
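The dynamic-calibration point, that trust builds slowly but collapses after an incident, can be modeled with an asymmetric update rule. A sketch under stated assumptions: the gain and penalty rates are illustrative choices, not values from the review:

```python
def update_trust(trust, success, gain=0.1, penalty=0.4):
    """Move trust toward 1 on success and toward 0 on failure.
    penalty > gain encodes that trust is lost faster than it is earned."""
    rate = gain if success else penalty
    target = 1.0 if success else 0.0
    return trust + rate * (target - trust)

# Three good trips raise trust a little; one accident erases the gains.
t = 0.5
history = []
for outcome in (True, True, True, False):
    t = update_trust(t, outcome)
    history.append(t)
```

An adaptive system could feed such a score back into its interface, for example by showing more of its reasoning, or handing control back to the user, whenever estimated trust falls below a threshold.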
---
### **Conclusions and Future Directions**
- **Research priorities**: integrate technical, ethical, and legal perspectives to develop dynamic trust models and a unified evaluation framework.
- **Technical innovation**: build real-time trust-monitoring tools and personalized trust-calibration algorithms (e.g., tailored to user personality traits).
- **Policy**: governments and industry should jointly establish certification mechanisms (e.g., trustworthy-AI labels) and strengthen mutual recognition of international standards.
- **Public engagement**: use public consultation and participatory design to raise the social acceptance of AI systems.
---
### **Representative Cases**
- **Healthcare**: pathology AI should complement physicians' expertise; explanation features (e.g., heatmap highlighting) increase trust, but over-reliance on AI can lead to misdiagnosis.
- **Finance**: chatbots earn trust through perceived neutrality, yet complex products still call for human advisors (the "algorithm aversion" phenomenon).
- **Autonomous driving**: user skepticism about technical feasibility (e.g., the reliability of emergency braking) can be eased through simulated testing and greater transparency.
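The "heatmap highlighting" used by pathology AI is typically a saliency map. One model-agnostic way to produce it is occlusion: blank out one feature (or image region) at a time and record how much the model's score drops. A minimal sketch; the linear scorer is a hypothetical stand-in for a real diagnostic model:

```python
def occlusion_saliency(score_fn, features, baseline=0.0):
    """Importance of each feature = drop in the model's score when that
    feature is replaced by a neutral baseline value."""
    base = score_fn(features)
    importance = []
    for i in range(len(features)):
        occluded = list(features)
        occluded[i] = baseline
        importance.append(base - score_fn(occluded))
    return importance

# Hypothetical linear scorer in which the second feature dominates.
score = lambda x: 0.1 * x[0] + 0.9 * x[1]
saliency = occlusion_saliency(score, [1.0, 1.0])  # second feature dominates
```

Rendering these importance values as a color overlay on the input is what yields the heatmap a physician can sanity-check against their own reading.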
---
This review maps the full landscape of trust in AI, offers practical guidance for developers, policymakers, and users, and identifies gaps in the existing literature (e.g., cross-cultural differences in trust) that chart directions for future research.
## ✏️ Notes
> [!WARNING]+ <center>🐣 Summary</center>
>
>🎯 One-sentence summary::
> [!inbox]- <center>📫 Import Time</center>
>
>⏰ importDate:: 2025-04-13
>⏰ importDateTime:: 2025-04-13 15:00:09
https://www.jianguoyun.com/p/DXLzUfoQk6_XChi68_QFIAA
%--------------ω--------------%