AI is finding a place in many industries from manufacturing to security. One of the slightly surprising ways AI is now used: to write news stories.
But in a time when “fake news” has become a political battle cry, can we trust AI to write the truth? There is much discussion about whether AI will spread fake news or help fight it.
AI in the News Industry
AI has begun popping up in some larger newsrooms around the country with success. The Washington Post (now owned by Amazon founder Jeff Bezos), for example, uses an AI system called Heliograf to write more straightforward stories that reach a smaller audience. Heliograf has now written more than 850 stories. The Post says this frees up human reporters to dig deeper into complex pieces. Further, the AI system can alert reporters if it observes discrepancies in the data. AI is also used to gauge the credibility of tweets at Reuters and to string together short videos at USA Today.
Earlier in 2017, the Press Association in Britain received an $805,000 grant from Google to create AI software that searches databases for newsworthy information and then writes stories. Humans still oversee the program, editing the stories and choosing which databases to mine. But AI makes no math mistakes as a human might, making it a valuable tool for reporters analyzing numbers. Executives at agencies using AI are still trying to determine the economic benefits, but thus far, it has proven useful at large organizations requiring a high volume of stories on a variety of topics.
However, AI can’t attend a local Planning Commission meeting, meaning it has its limits for smaller and local news coverage. AI also has its moments of failure, which often make for fun headlines.
The Role of AI in Fake News
Fake news became a loaded political term in the past year, and AI seems to have a role there, too — on both sides of the issue. Websites that publish hoax stories for advertising revenue found great success during the 2016 presidential campaign and may even have affected the election’s outcome.
According to Statista, 42 percent of traffic to fake news stories is generated by social media. One story about a woman’s reaction to her lottery win received 1.77 million engagements on Facebook.
Right now, fake news stories are written by humans because they need to feel authentic to go viral. But researchers at Nvidia have developed a way for AI to generate realistic-looking photos. With that and Heliograf-style writing technology making progress, is it only a matter of time before a robot can crank out thousands of fake news stories per day?
Humans have frequently proven they have trouble determining real versus fake news. A satirical news site called The Onion has been around for decades and yet sometimes still manages to fool people and news organizations both abroad and in the U.S.
AI is already hard at work generating thousands of fake product reviews. Using deep learning, researchers trained a system on a massive data set of online reviews to “learn” how they are written, and then generated thousands of reviews on Yelp. Human test subjects were unable to distinguish between the real and generated versions. “When asked to rate whether a particular review was ‘useful,’ the human respondents replied in the affirmative to AI-generated versions nearly as often as real ones,” the researchers found.
Some have said AI can save us from ourselves, helping prevent the spread of fake news rather than generating it. But as those researchers observed, “when the researchers tested their AI-generated reviews, they found that Yelp’s filtering software—which also relies on machine-learning algorithms—had difficulty spotting many of the fakes.”
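To make the cat-and-mouse game concrete, here is a deliberately simplified sketch of how a machine-learning filter can score text as likely real or fake by comparing word frequencies against labeled examples. This is an illustration only — the tiny training set and the word-counting approach are invented for this example and are far cruder than what Yelp, AdVerif.ai, or any production system actually uses.

```python
from collections import Counter

def train(examples):
    """Count word frequencies per label ('real' or 'fake').
    `examples` is a list of (text, label) pairs -- a toy stand-in
    for a real labeled corpus."""
    counts = {"real": Counter(), "fake": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text):
    """Label the text by whether its words are, on balance, more
    common in the fake examples than in the real ones (with add-one
    smoothing so unseen words don't zero out either side)."""
    real_total = sum(counts["real"].values()) + 1
    fake_total = sum(counts["fake"].values()) + 1
    real = fake = 0.0
    for word in text.lower().split():
        real += (counts["real"][word] + 1) / real_total
        fake += (counts["fake"][word] + 1) / fake_total
    return "fake" if fake > real else "real"

# Toy labeled corpus, invented purely for demonstration.
examples = [
    ("lottery winner shares heartwarming story", "real"),
    ("city council approves new budget", "real"),
    ("shocking secret they do not want you to know", "fake"),
    ("miracle cure doctors hate this one trick", "fake"),
]
model = train(examples)
print(score(model, "one shocking trick doctors hate"))  # prints "fake"
```

The weakness the researchers exposed follows directly from this design: a generator trained on the same kind of data produces text with the same word statistics, so a frequency-based filter has little signal left to detect it.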
However, there may be some hope on the horizon. A new program called AdVerif.ai launched a beta test in 2017 to check for fake news. The program is built for advertisers who don’t want to be associated with false or offensive stories. The program has had success so far, even spotting The Onion as satire. Facebook and Google are both working to detect fake news and build algorithms to suppress it. A group of journalists also organized the Fake News Challenge, inviting people around the globe to find AI-based solutions for rooting out fake news.
But in this race (driven by the almighty dollar), can AI beat itself? Gartner predicts by 2022, the majority of individuals in mature economies will consume more false information than true information.
As many argue, AI is just another tool. Put in the hands of humans, it can work for good or for evil. As Jay Rosen, a professor of journalism at New York University, said, “I think there’s a chance to algorithmically identify things that are more likely than not to be ‘fake news,’ but they will always work best in combination with a person with a sharp eye.”
In other words, we can turn to AI for help, but it may ultimately be up to humans to choose where they get their news and to determine whether it’s real or fake.
Talk to us about AI solutions on the side of good for your business.