Bloomberg AI Researchers Mitigate Risks of "Unsafe" RAG LLMs and GenAI in Finance
4 Articles
Retrieval-Augmented Generation (RAG) has become a common technique for improving the accuracy of large language models (LLMs) in business environments. The idea is simple and powerful: supplement the model's responses with up-to-date, verified information, minimizing the errors known as "hallucinations." However, recent research by Bloomberg has revealed a dark side of this technique that is generating urgent debate. Bloomberg…
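The RAG pattern described above can be sketched in a few lines: retrieve the document most relevant to a query, then prepend it to the prompt so the model answers from that context rather than from memory alone. This is a minimal illustrative sketch, not the Bloomberg researchers' setup; the document store, the word-overlap retriever, and the helper names (`retrieve`, `build_prompt`) are all assumptions for demonstration.

```python
# Minimal RAG sketch (illustrative only): keyword-overlap retrieval
# followed by prompt construction. A production system would use an
# embedding index and pass the prompt to an LLM.

DOCS = [
    "Bloomberg reported Q1 revenue growth driven by terminal sales.",
    "RAG systems augment LLM prompts with retrieved documents.",
    "Central banks adjusted interest rates in April.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str, context: str) -> str:
    """Combine the retrieved context and the user question into one prompt."""
    return f"Context: {context}\nQuestion: {query}\nAnswer using only the context."

query = "How do RAG systems work with LLMs?"
context = retrieve(query, DOCS)
prompt = build_prompt(query, context)
```

The Bloomberg finding is precisely about this last step: once retrieved text is injected into the prompt, it can override or dilute the safety behavior the base model was trained with.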
Bloomberg AI Researchers Mitigate Risks of "Unsafe" RAG LLMs and GenAI in Finance - PressReach
Two new academic papers reflect Bloomberg’s commitment to transparent, trustworthy, and responsible AI NEW YORK, April 28, 2025 /PRNewswire/ — From discovering that retrieval augmented generation (RAG)-based large language models (LLMs) are less “safe” to introducing an AI content risk taxonomy meeting the unique needs of GenAI systems in financial services, researchers across Bloomberg’s AI Engineering group, Data AI group, and CTO Office aim t…
Bloomberg study finds RAG systems increase unsafe responses in Llama-3-8B from 0.3% to 9.2%, raising safety concerns for LLMs; suggests using domain-specific AI safety taxonomies in financial services to address risks
RAG systems risk undermining LLM safety; Bloomberg study reports a jump in unsafe responses, from 0.3% to 9.2% with Llama-3-8B, when using RAG. Bloomberg suggests domain-specific AI safety taxonomies for financial services to mitigate risks. Source: venturebeat.com The post Bloomberg study finds RAG systems increase unsafe responses in Llama-3-8B from 0.3% to 9.2%, raising safety concerns for LLMs; suggests using domain-specific AI safety taxono…
Coverage Details
Bias Distribution
- 100% of the sources are Center


