1. Wang C, Wang F, Li S, et al. Patient triage and guidance in emergency departments using large language models: multimetric study. J Med Internet Res, 2025, 27: e71613.
2. Yuan XL, Liu W, Lin YX, et al. Effect of an artificial intelligence-assisted system on endoscopic diagnosis of superficial oesophageal squamous cell carcinoma and precancerous lesions: a multicentre, tandem, double-blind, randomised controlled trial. Lancet Gastroenterol Hepatol, 2024, 9(1): 34-44.
3. McDuff D, Schaekermann M, Tu T, et al. Towards accurate differential diagnosis with large language models. Nature, 2025, 642(8067): 451-457.
4. Ye J, Huang C, Chen Z, et al. A multi-dimensional constraint framework for evaluating and improving instruction following in large language models[EB/OL]. (2025-05-12) [2025-07-18].
5. Tang L, Sun Z, Idnay B, et al. Evaluating large language models on medical evidence summarization. NPJ Digit Med, 2023, 6(1): 158.
6. Zhang Y, Li Y, Cui L, et al. Siren's song in the AI ocean: a survey on hallucination in large language models[EB/OL]. (2023-09-24) [2025-07-18].
7. 刘军, 赵文哲. 人工智能技术在临床医学领域的应用与实践. 中华医学信息导报, 2025, 40(9): 14. Liu J, Zhao WZ. Application and practice of artificial intelligence technology in clinical medicine. China Med News, 2025, 40(9): 14.
8. Zhang K, Yang X, Wang Y, et al. Artificial intelligence in drug development. Nat Med, 2025, 31(1): 45-59.
9. Wang YJ, Yang K, Wen Y, et al. Screening and diagnosis of cardiovascular disease using artificial intelligence-enabled cardiac magnetic resonance imaging. Nat Med, 2024, 30(5): 1471-1480.
10. 韩序, 刘亮, 楼文晖. 生成式人工智能大型语言模型在消化道癌症领域辅助科研创作的现状分析: 基于2024年美国临床肿瘤学会中国学者数据. 中国实用外科杂志, 2024, 44(8): 894-899. Han X, Liu L, Lou WH. A comprehensive analysis of large language models in generative artificial intelligence-assisted research writing: insights from 2024 ASCO gastrointestinal oncology data by Chinese scholars. Chin J Pract Surg, 2024, 44(8): 894-899.
11. Bao T, Zhang H, Zhang C. Enhancing abstractive summarization of scientific papers using structure information. Expert Syst Appl, 2025, 261: 125529.
12. Gu Y, Tinn R, Cheng H, et al. Domain-specific language model pretraining for biomedical natural language processing. ACM Trans Comput Healthc, 2021, 3(1): 2.
13. Moradi M, Samwald M. Improving the robustness and accuracy of biomedical language models through adversarial training. J Biomed Inform, 2022, 132: 104114.
14. Liang X, Song S, Zheng Z, et al. Internal consistency and self-feedback in large language models: a survey[EB/OL]. (2024-07-19) [2025-07-18].
15. 刘泽垣, 王鹏江, 宋晓斌, 等. 大语言模型的幻觉问题研究综述. 软件学报, 2025, 36(3): 1152-1185. Liu ZY, Wang PJ, Song XB, et al. Survey on hallucinations in large language models. J Softw, 2025, 36(3): 1152-1185.
16. Hu M, He B, Wang Y, et al. Mitigating large language model hallucination with faithful finetuning[EB/OL]. (2024-06-17) [2025-07-18].
17. Xu N, Ma X. DecoPrompt: decoding prompts reduces hallucinations when large language models meet false premises[EB/OL]. (2024-01-21) [2025-07-18].
18. Fadeeva E, Rubashevskii A, Shelmanov A, et al. Fact-checking the output of large language models via token-level uncertainty quantification[EB/OL]. (2024-06-06) [2025-07-18].