Impact Factor: 2.5
Journal: Applied Sciences
Keywords: large language models; hallucination reduction; external knowledge retrieval; adversarial debate; voting mechanism; multi-agents
Abstract: The emergence of large language models (LLMs), such as GPT and Claude, has revolutionized AI by enabling general and domain-specific natural language tasks. However, hallucinations, characterized by false or inaccurate responses, remain a serious limitation, particularly in critical fields such as medicine and law, where any compromise in reliability can have severe consequences. This paper addresses the hallucination issue by proposing a multi-agent LLM framework that incorporates adversarial and voting mechanisms. Specifically, the framework employs repetitive inquiries and error logs to mitigate hallucinations within single LLMs, while adversarial debates and voting mechanisms enable cross-verification among multiple agents, thereby determining when external knowledge retrieval is necessary. Additionally, an entropy compression technique is introduced to enhance communication efficiency by reducing token usage and task completion time. Experimental results demonstrate that the framework significantly improves accuracy, showing a steady increase in composite accuracy across 20 evaluation batches while reducing hallucinations and shortening task completion time. Notably, the dynamic weighting mechanism effectively prioritizes high-performing models, reducing error rates and improving the consistency of final responses.
Article Number: 3676
Volume: 15
Issue: 7
Pages: 1-21
Translation: No
Publication Date: 2025-03-27
Indexed In: SCI
DOI: https://doi.org/10.3390/app15073676
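The paper's exact algorithm is not reproduced in this record, but the abstract's weighted-voting and dynamic-reweighting idea can be illustrated with a minimal sketch. All names and values here (weighted_vote, update_weights, the consensus threshold, and the learning rate) are hypothetical assumptions, not the authors' implementation; the threshold stands in for the abstract's criterion for deciding when external knowledge retrieval is necessary.

```python
from collections import defaultdict

def weighted_vote(answers, weights, consensus_threshold=0.6):
    """Aggregate candidate answers from multiple agents by weighted vote.

    answers: dict mapping agent id -> its proposed answer
    weights: dict mapping agent id -> current reliability weight
    Returns (winning_answer, needs_retrieval): if the top answer's
    weight share falls below the threshold, flag that external
    knowledge retrieval should be triggered.
    """
    totals = defaultdict(float)
    for agent, answer in answers.items():
        totals[answer] += weights.get(agent, 1.0)
    total_weight = sum(totals.values())
    best_answer, best_score = max(totals.items(), key=lambda kv: kv[1])
    needs_retrieval = (best_score / total_weight) < consensus_threshold
    return best_answer, needs_retrieval

def update_weights(weights, answers, verified_answer, lr=0.1, floor=0.05):
    """Dynamically reweight agents: boost those that matched the
    verified answer, decay those that did not (a simple multiplicative
    update; the paper's exact rule is not specified in the abstract)."""
    for agent, answer in answers.items():
        if answer == verified_answer:
            weights[agent] = weights.get(agent, 1.0) * (1 + lr)
        else:
            weights[agent] = max(floor, weights.get(agent, 1.0) * (1 - lr))
    return weights

# Example: three agents answer the same query after a debate round.
if __name__ == "__main__":
    weights = {"agent_a": 1.0, "agent_b": 1.0, "agent_c": 1.0}
    answers = {"agent_a": "Paris", "agent_b": "Paris", "agent_c": "Lyon"}
    answer, needs_retrieval = weighted_vote(answers, weights)
    print(answer, needs_retrieval)  # -> Paris False (2/3 share >= 0.6)
    weights = update_weights(weights, answers, answer)
    print(weights)  # agent_a/agent_b boosted, agent_c decayed
```

Under this sketch, repeated rounds concentrate weight on historically reliable agents (the abstract's "dynamic weighting mechanism"), while a failed consensus is the signal to fall back on external knowledge retrieval.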