
How We're Using AI to Help You See the Full Story

Ground News · Sep 26, 2023

Since its inception, Ground News has always been a tech-first news company. While irresponsible uses of powerful tech have hurt journalism and weakened trust in the news, responsible uses can restore that trust and help repair our shared sense of reality. This belief is why we built Ground News - a news comparison platform that makes it easy to see a story from multiple perspectives, read between the lines of media bias and break free from algorithms.

We're excited to introduce Frames: an AI-powered coverage analysis feature that lifts the veil on media bias and helps you quickly understand a story from multiple perspectives.

How we're using artificial intelligence - responsibly

For every story, Frames ingests articles from two sources on each of the left, center, and right, and creates a bullet-point summary from each perspective.

While it depends on the breadth of coverage, we typically pull two influential, well-read sources from each side, ingest their articles, and create a summary that captures the language and key points each side emphasizes.

We intentionally do not ask Frames to summarize the two articles from each side in a neutral manner. We do this to show how differences in framing and facts emphasized can change the understanding of a story. This means the summaries we present may contain biased statements and the inclusion/omission of facts that may be indicative of a political narrative. An example of concise and compelling summaries from the Left, Center and Right perspectives can be found here, in a story about a policy proposal from US Presidential candidate Vivek Ramaswamy.
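
To make the ingestion step concrete, here is a minimal sketch of how articles might be grouped by bias rating and turned into per-perspective summary prompts. All names, data, and prompt wording here are our own illustration, not Ground News's actual pipeline or prompts; the point is that the prompt deliberately asks the model to keep each side's framing rather than neutralize it.

```python
from collections import defaultdict

# Hypothetical article records; in a real pipeline the bias labels would
# come from source bias ratings, not be hard-coded.
articles = [
    {"source": "Outlet A", "bias": "left",   "text": "Article text ..."},
    {"source": "Outlet B", "bias": "left",   "text": "Article text ..."},
    {"source": "Outlet C", "bias": "center", "text": "Article text ..."},
    {"source": "Outlet D", "bias": "center", "text": "Article text ..."},
    {"source": "Outlet E", "bias": "right",  "text": "Article text ..."},
    {"source": "Outlet F", "bias": "right",  "text": "Article text ..."},
]

def build_summary_prompt(side, texts):
    """Build a prompt that preserves the side's own framing (illustrative wording)."""
    joined = "\n\n".join(texts)
    return (
        f"Summarize the key points of these {side}-leaning articles as "
        f"bullet points. Preserve the language and emphasis the articles "
        f"themselves use; do not neutralize their framing.\n\n{joined}"
    )

# Group article texts by bias rating.
by_side = defaultdict(list)
for article in articles:
    by_side[article["bias"]].append(article["text"])

# Two sources per side, one summarization prompt per perspective.
prompts = {side: build_summary_prompt(side, texts[:2])
           for side, texts in by_side.items()}
```

Each prompt would then be sent to the language model to produce one summary per perspective.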

Frames goes a level deeper with its comparison capabilities by looking at the more subtle differences in how the story is covered. How are different sides framing the story? What facts are they emphasizing? What facts did they leave out? How does the language used differ? See the comparison feature here, in a story about US household incomes falling for the third straight year.

But isn't AI biased?

We've seen the tweets too. Since the launch of ChatGPT and other LLMs, people have shared screenshots of answers that many would consider politically biased. We share OpenAI's view that these concerns are valid and expose AI limitations that must be addressed. However, our use of the technology minimizes the potential for bias to pollute the output. We do not ask the tool to make any value judgments; instead, we ask it to summarize content from articles that we ingest from the left, center, and right perspectives.

For our comparison section, we prompt the tool to look at differences in framing, emphasis, and context across the differing articles. This structure confines ChatGPT's role to something closer to a language tool than an arbiter of truth. We've tested, iterated on, and improved this feature for months with thousands of beta testers, and we're confident it is worth your time and trust.
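
A comparison prompt of this kind can be sketched roughly as follows. Again, the function name and prompt wording are our own illustration under stated assumptions, not the actual prompt Frames uses; the structure simply mirrors the questions above (framing, emphasis, omission, language).

```python
def build_comparison_prompt(summaries):
    """summaries: dict mapping 'left'/'center'/'right' to bullet-point summaries.

    Illustrative only: asks for analysis of differences, not a verdict.
    """
    sections = "\n\n".join(
        f"[{side.upper()}]\n{text}" for side, text in summaries.items()
    )
    return (
        "Compare how these perspectives cover the same story. Identify "
        "differences in framing, which facts each side emphasizes or omits, "
        "and how the language used differs. Do not judge which side is "
        "correct.\n\n" + sections
    )

# Example usage with placeholder summaries.
prompt = build_comparison_prompt({
    "left": "- point one\n- point two",
    "center": "- point one\n- point two",
    "right": "- point one\n- point two",
})
```

Keeping the model's task analytical, asking what differs rather than which account is true, is what limits its role to description rather than judgment.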

Why we made Frames 

The omission of specific facts and the different framing of stories by sources with different biases contribute to the erosion of our shared sense of reality. Partisan media and the algorithms that create echo chambers are why conversations about issues often feel like an argument between fans of rival sports teams. The facts of the matter are irrelevant, and only one truth is accepted: that the other side is wrong.

We built Ground News to challenge the worldview of our readers, not complement it. We believe that integrating AI into our platform will allow open-minded news readers to easily see every side of the story and build a deeper understanding of the causes they care about, the world, and themselves. To learn more about Ground News, our vision of a better future for news, and the team driving that vision, check out https://ground.news/about.

Limitations

While we tested the feature thoroughly and optimized the prompt to reduce errors, summaries may occasionally contain incorrect information or grammatical mistakes. For every story, we recommend reading multiple articles from different perspectives to get the complete picture and a deeper understanding of the story.