Leah Zitter

The Story Behind the All-New Google News

Radar Zero | January 10th, 2017

What do you think of your new Google News site? Do you like it?

The story behind the change is that Google resolved to crack down on fake news. So, earlier this month, Google rolled out an upgraded version of the service that uses artificial intelligence to find the best and truest content from journalists around the world.

Your New Google News: Here's How it Works

The "Headlines" section shows you world news from trusted resources. In your personalized "For You" section, Google also gives you five stories that combines global with local news and is based on topics you've searched in the past.

Google insists that the more you use the app, the better it gets. It offers controls that let you choose more or less of a certain topic, as well as images and videos from YouTube to improve your user experience. Right now, the researchers behind the news aggregator are experimenting with a unique visual format called newscasts, which makes it easy for you to dive into different perspectives and learn more about a story.

As spokesperson Maggie Shiels told me, to filter out bias, Google News will no longer use human editors, nor will it partner with particular news organizations. Rather, Google's AI will separate content into news, opinion and analysis. This, Shiels says, will also prevent the problem Google had with YouTube, where automated recommendations tended to push people toward more extreme content.

Those changes come at a time when Apple has a news subscription service waiting in the wings. The changes also arrive amid serious concerns among publishers about Facebook’s role in the media business - not only because of fake news, but also because of its dubious methods of ranking content, among other issues.

The Google News Initiative (GNI), launched earlier this month, vowed to strengthen quality journalism and to empower news organizations through technological innovation.

Google News was created 15 years ago simply to organize news articles so users could see a wide variety of sources on a topic. Now, Trystan Upstill, head of News Product and Engineering at Google, said the time had come to “find the best of human intelligence - the great reporting done by journalists around the globe. We know getting accurate and timely information into people’s hands and supporting high quality journalism is more important than it has ever been right now.”

How does AI choose news?

It's a process.

Google’s artificial intelligence crawls the web for trending stories. Once it picks a topic, it checks more than a thousand sources - across the political spectrum - for details. The AI then scripts its own “impartial” version of the story - sometimes in as little as 60 seconds. Google's version of the news contains only the most basic facts.

For some of the more political stories, Google's AI produces two additional versions, labeled “Left” and “Right,” that skew accordingly.

Example

  • Impartial: “US to add citizenship question to 2020 census.”

  • Left: “California sues Trump administration over census citizenship question.”

  • Right: “Liberals object to inclusion of citizenship question on 2020 census.”

Some controversial stories receive “Positive” and “Negative” spins:

Example

  • Impartial: “Facebook scans things you send on Messenger, Mark Zuckerberg admits.”

  • Positive: “Facebook reveals that it scans Messenger for inappropriate content.”

  • Negative: “Facebook admits to spying on Messenger, ‘scanning’ private images and links.”

Even the images used with the stories occasionally reflect the content’s bias, so Google's AI analyzes these images, too.
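
For the curious, here is a minimal sketch of what the text side of such a pipeline could look like, using the example headlines above. Google has not published its method, so the word-overlap clustering, the similarity threshold, and the function names below are my own illustrative assumptions, not the actual system.

```python
# Toy news-aggregation sketch: group headlines from different sources by
# word overlap, then keep only the facts every source agrees on.
# The clustering method and threshold are illustrative assumptions.

def tokens(headline):
    """Lowercased word set, with basic punctuation stripped."""
    return {w.strip('.,!?"\'').lower() for w in headline.split()}

def similarity(a, b):
    """Jaccard similarity between two headlines' word sets."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def cluster(headlines, threshold=0.25):
    """Greedily group headlines similar enough to count as one story."""
    clusters = []
    for h in headlines:
        for c in clusters:
            if similarity(h, c[0]) >= threshold:
                c.append(h)
                break
        else:
            clusters.append([h])
    return clusters

def shared_facts(story):
    """Words common to every version -- a crude 'impartial' core."""
    common = tokens(story[0])
    for h in story[1:]:
        common &= tokens(h)
    return common

headlines = [
    "US to add citizenship question to 2020 census",
    "California sues Trump administration over census citizenship question",
    "Liberals object to inclusion of citizenship question on 2020 census",
    "Facebook scans things you send on Messenger, Mark Zuckerberg admits",
]

for story in cluster(headlines):
    print(len(story), "source(s) agree on:", sorted(shared_facts(story)))
```

Even this toy version shows why the "impartial" output reads as bare facts: intersecting the sources strips away everything they don't all share.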

Can Google's AI make its news aggregator objective?

Google contends that its AI delivers objective results all across its platform, from ads to images to videos to search results.

Its search engine has had its sad moments.

In 2013, Harvard professor Latanya Sweeney investigated the Google ads that appeared when names typical of white babies were typed in, versus the ads that appeared when names typical of black babies were used. She found that ads containing the word “arrest” came up with about 80 percent of the “black” names but appeared less than 30 percent of the time with “white” names.

Two years later, two men used Google's photo software and found themselves labelled “gorillas”. That's not because the AI was racist, but rather because the engine was starved for data samples of people of color.

In short, it's not that the AI is racist or obtuse. Rather, machines are programmed by humans and implicitly fed their programmers' bias.

To illustrate:

Let’s say programmers build a computer model to identify terrorists. First, they train the algorithms with photos that are tagged with certain names and descriptors that the programmers think typify terrorists. Then, they put the program through its paces with untagged photos of people and let the algorithms detect the “terrorist,” based on what the machine learned from the training data. The programmers see what worked and what didn't and fine-tune accordingly.
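
Sketched in code, that cycle looks something like the toy below. The photos are reduced to descriptor lists, and every label and descriptor is an invented placeholder; it mirrors the train-test-refine workflow, not any real system.

```python
# Toy version of the train / test / fine-tune cycle described above.
# Photos are reduced to descriptor lists; every label and descriptor
# is an invented placeholder, not data from any real system.
from collections import Counter

def train(tagged_photos):
    """Count how often each descriptor appears under each label."""
    model = {}
    for descriptors, label in tagged_photos:
        model.setdefault(label, Counter()).update(descriptors)
    return model

def classify(model, descriptors):
    """Pick the label whose training descriptors best match the photo."""
    scores = {label: sum(counts[d] for d in descriptors)
              for label, counts in model.items()}
    return max(scores, key=scores.get)

# 1. Train on photos tagged with descriptors the programmers chose.
training_data = [
    (["beard", "turban"], "terrorist"),
    (["beard", "robe"], "terrorist"),
    (["suit", "briefcase"], "civilian"),
    (["dress", "handbag"], "civilian"),
]
model = train(training_data)

# 2. Put the program through its paces on untagged photos.
print(classify(model, ["beard", "turban"]))   # -> terrorist
print(classify(model, ["suit", "handbag"]))   # -> civilian

# 3. Inspect what worked and what didn't, adjust the data, retrain.
```

Nothing in the loop itself is biased; whatever pattern the training data happens to contain is the only pattern the model can learn.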

The program is supposed to work, but bias intrudes when the training data lacks scope and diversity, causing the software to give answers based only on what it “knows.” That's what happened when the two Black men were labelled “gorillas”: it was the only data the AI had.

Mistakes also occur when too few photos of outlier situations are fed into the system. Say, for example, a Google user types in "terrorist". About 85% of the photographs the engine spits back show swarthy bearded males wearing turbans. The actual terrorist may be a White girl, but the engine has no such picture in its "brain".
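
The same skew is easy to demonstrate with an invented index; the contents and percentages below are placeholders chosen to match the example, not real data.

```python
# Toy demonstration of a skewed image index: if ~85% of the photos
# tagged "terrorist" show one appearance, queries can only echo that
# skew. The index contents and percentages are invented placeholders.
import random

random.seed(0)

# Hypothetical index built from under-diverse training data.
index = ["bearded man in turban"] * 85 + ["anyone else"] * 15

page = random.sample(index, 20)  # one page of results for "terrorist"
hits = sum(r == "bearded man in turban" for r in page)
print(f"{hits} of 20 results show the over-represented appearance")

# The outlier case simply isn't in the engine's "brain":
print("White girl" in index)  # -> False
```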

The point is that how objective a news aggregator can be depends on the humans feeding the system, rather than on the system itself.

It's not the AI. It's the human!

Look at our new Google News site.

To convince us of its objectivity, Google promised: “The selection and placement of stories on this page were determined automatically by a computer program.”

To which one of my followers tweeted: “A COMPUTER PROGRAM DEVELOPED BY HUMANS WITH BIASES.”