Human-Like Object Concept Representations Emerge Naturally in Multimodal Large Language Models

  • On June 9, 2025, Chinese researchers published a paper in Nature Machine Intelligence demonstrating that multimodal large language models can spontaneously form object concept systems closely resembling those in humans.
  • The study combined behavioral experiments with neuroimaging to explore whether AI models can form conceptual systems similar to human cognition.
  • Researchers found that the multimodal LLMs' object representations can be summarized by 66 concept dimensions that correlate strongly with human neural activity, and that these models outperform unimodal models in matching human behavior (an illustrative analysis sketch follows this list).
  • He Huiguang highlighted that human intelligence fundamentally involves forming complex concepts of natural objects that encompass not only their physical traits but also their functions, emotional associations, and cultural significance; current models, by contrast, capture mainly semantic information and have limited sensory understanding.
  • The results suggest integrating multimodal sensory input could improve AI’s conceptual understanding, advancing human-like cognition in artificial systems.
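The kind of comparison described above is often carried out as a representational similarity analysis: build an object-by-object similarity structure from the model's concept embedding and another from human behavioral or neural data, then correlate the two. The sketch below illustrates that idea with placeholder data; the array shapes, variable names, and random values are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal representational similarity sketch with placeholder data
# (illustrative only; not the study's actual pipeline).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_objects = 200  # hypothetical number of object concepts
n_dims = 66      # low-dimensional concept embedding, as in the summary

# Stand-ins for (a) a model-derived concept embedding and
# (b) a human-derived embedding (behavioral or neural).
model_embedding = rng.normal(size=(n_objects, n_dims))
human_embedding = rng.normal(size=(n_objects, n_dims))

# Representational dissimilarity: pairwise distances between all object pairs.
model_rdm = pdist(model_embedding, metric="correlation")
human_rdm = pdist(human_embedding, metric="correlation")

# Spearman correlation between the two dissimilarity structures measures how
# closely the model's concept space mirrors the human one.
rho, p_value = spearmanr(model_rdm, human_rdm)
print(f"model-human representational similarity: rho={rho:.3f} (p={p_value:.3g})")
```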

11 Articles (2 Left, 1 Center, 1 Right)
The Manila Times (Lean Right), reposted by 4 other sources

PolyU-led research reveals that sensory and motor inputs help large language models represent complex concepts

Manila, Philippines

Bias Distribution

  • 50% of the sources lean Left

Macau Business broke the news on Monday, June 9, 2025.