Like humans, ChatGPT favors examples and 'memories,' not rules, to generate language
8 Articles
Prominent chatbots routinely exaggerate science findings, study shows
When summarizing scientific studies, large language models (LLMs) like ChatGPT and DeepSeek produce inaccurate conclusions in up to 73% of cases, according to a study by Uwe Peters (Utrecht University) and Benjamin Chin-Yee (Western University, Canada/University of Cambridge, UK). The researchers tested the most prominent LLMs and analyzed thousands of chatbot-generated science summaries, revealing that most models consistently produced broader …
Like humans, ChatGPT favors examples and 'memories,' not rules, to generate language
A new study led by researchers at the University of Oxford and the Allen Institute for AI (Ai2) has found that large language models (LLMs)—the AI systems behind chatbots like ChatGPT—generalize language patterns in a ...
AI LLMs Learn Like Us, But Without Abstract Thought
Summary: A new study finds that large language models (LLMs), like GPT-J, generate words not by applying fixed grammatical rules, but by drawing analogies, mirroring how humans process unfamiliar language. When faced with made-up adjectives, the LLM chose noun forms based on similarity to words it had seen in training, just as humans do. However, unlike humans, LLMs don’t build mental dictionaries; they treat each instance of a word as unique, r…
Research Reveals How AI Chooses Words by Memory, Not Rules
Large language models, often praised for mimicking human speech, appear to rely more on memory-based comparisons than grammatical logic, according to recent research from Oxford University and the Allen Institute for AI. Rather than extracting symbolic rules, these AI systems seem to reach language decisions through analogy, matching new inputs to known word patterns embedded in training data. The peer-reviewed findings, published in the Proceedi…
Like Humans, ChatGPT Relies On Memory And Examples For Language Generation
A study led by the University of Oxford and the Allen Institute for AI, published in PNAS, reveals that ChatGPT generates language through analogy, relying on stored examples and memory much as humans do. The finding challenges the notion that large language models primarily apply grammatical rules.
People Trust Legal Advice Generated By ChatGPT More Than A Lawyer – New Study - Stuff South Africa
People who aren’t legal experts are more willing to rely on legal advice provided by ChatGPT than by real lawyers – at least, when they don’t know which of the two provided the advice. That’s the key finding of our new research, which highlights some important concerns about the way the public increasingly relies on AI-generated content. We also found the public has at least some ability to identify whether the advice came from ChatGPT or a huma…
Coverage Details
Bias Distribution
- 100% of the sources are Center