YouTube, the video service of Alphabet Inc's Google, said on Tuesday it would start showing text and links from third-party fact-checkers to U.S. viewers, part of efforts to curb misinformation on the site during the COVID-19 pandemic.
The information panels, launched in Brazil and India last year, will highlight fact-checked articles from third parties above search results for topics such as "covid and ibuprofen," for false claims like "COVID-19 is a bio-weapon," and for specific searches such as "did a tornado hit Los Angeles."
Social media sites including Facebook Inc and Twitter Inc are under pressure to combat misinformation relating to the pandemic caused by the new coronavirus, from false cures to conspiracy theories.
YouTube said in a blog post that more than a dozen U.S. publishers are participating in its fact-checking network, including FactCheck.org, PolitiFact and The Washington Post Fact Checker. The company said it could not share a full list of fact-checking partners.
In 2018, YouTube started using information panels that surfaced links to sources such as Encyclopedia Britannica and Wikipedia for topics considered prone to misinformation, such as "flat earth" theories. But it said in Tuesday's blog post that the panels would now help address misinformation in a fast-moving news cycle.
The site has also recently started linking to the World Health Organization, the Centers for Disease Control and Prevention, or local health authorities for videos and searches related to COVID-19.
YouTube did not specify in the blog post how many search terms would prompt the fact-check boxes. It said it would "take some time for our systems to fully ramp up" as it rolls out the fact-checking feature.
The feature will appear only in search results, though the company has previously said that its recommendation feature, which encourages people to watch videos similar to those they have spent significant time viewing, drives the majority of overall "watch time."
In January, YouTube said that it had started reducing recommendations of borderline content or videos that could misinform users in harmful ways, such as "videos promoting a phony miracle cure for a serious illness."
Major social media companies, which have emptied their offices during the pandemic, have warned that content moderation could suffer as they rely more heavily on automated software. In March, Google said this could cause a jump in videos being erroneously removed for policy violations.