Yang Yi
- Associate Professor
- Name (Simplified Chinese): Yang Yi
- Name (English): Yang Yi
- Date of Birth: 1978-07-04
- Education Level: Postgraduate (Postdoctoral)
- Degree: Doctoral degree
- Professional Title: Associate Professor
- Status: Employed
- Teacher College: DZGCX
- Discipline: Signal and Information Processing

- Research Focus
Large Language Model Multi-Agent Systems
The emergence of large language models (LLMs) such as GPT and Claude has revolutionized AI by enabling both general and domain-specific natural language tasks. However, hallucinations, i.e., false or inaccurate responses, pose serious limitations, particularly in critical fields such as medicine and law, where any compromise in reliability can lead to severe consequences. We address the hallucination issue by proposing a multi-agent LLM framework that incorporates adversarial and voting mechanisms. Specifically, the framework employs repeated inquiries and error logs to mitigate hallucinations within a single LLM, while adversarial debates and voting enable cross-verification among multiple agents and determine when external knowledge retrieval is necessary. Additionally, an entropy compression technique improves communication efficiency by reducing token usage and task completion time.

Experimental results demonstrate that the framework significantly improves accuracy, with composite accuracy increasing steadily across 20 evaluation batches while hallucinations decrease and task completion time improves. Notably, the dynamic weighting mechanism prioritizes high-performing models, reducing error rates and improving the consistency of final responses.
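To make the voting and dynamic weighting ideas concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the `VotingEnsemble` class, the agent names, and the learning rate `lr` are illustration choices, not the framework's actual implementation, which also involves adversarial debate, error logs, and retrieval triggers not shown here.

```python
# Minimal sketch: weighted majority voting across LLM agents, with
# weights updated dynamically toward agents that match the consensus.
# All names (VotingEnsemble, agent labels, lr) are hypothetical.
from collections import defaultdict


class VotingEnsemble:
    def __init__(self, agents, lr=0.1):
        # Each agent starts with equal weight.
        self.weights = {a: 1.0 for a in agents}
        self.lr = lr

    def vote(self, answers):
        """answers: dict mapping agent name -> that agent's answer string."""
        # Tally answers, weighted by each agent's current influence.
        tally = defaultdict(float)
        for agent, ans in answers.items():
            tally[ans] += self.weights[agent]
        consensus = max(tally, key=tally.get)

        # Dynamic weighting: reward agents that agreed with the consensus,
        # penalize the rest, then renormalize so the weights sum to 1.
        for agent, ans in answers.items():
            factor = 1 + self.lr if ans == consensus else 1 - self.lr
            self.weights[agent] *= factor
        total = sum(self.weights.values())
        for agent in self.weights:
            self.weights[agent] /= total
        return consensus


ensemble = VotingEnsemble(["gpt", "claude", "llama"])
print(ensemble.vote({"gpt": "42", "claude": "42", "llama": "41"}))  # -> "42"
print(ensemble.weights)  # shifted toward the agents that voted "42"
```

In this sketch the consensus answer itself acts as the reward signal, so agents that repeatedly disagree with the majority lose influence over later votes; this is one simple way the dynamic weighting described above could lower error rates and stabilize final responses.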