For YouTube, 2017 was a year to forget. The internet’s leading video platform found itself in hot water due to several high-profile cases of questionable content compromising brand safety standards.
YouTube has, however, displayed a commendable level of humility amid the controversy. It has publicly recognized its shortcomings and worked to address the fears brands may have about advertising on the platform. With 2018 just getting started, YouTube is stepping up efforts to protect the integrity of its platform with updated guidelines and monetization rules.
YouTube raises the bar for those eligible to join its partner program
Within the last 10 months, YouTube has changed the eligibility requirements for its Partner Program twice. In April 2017, it required creators seeking to monetize their content to have a minimum of 10,000 lifetime views.
YouTube, however, quickly realized this benchmark wasn't strict enough and changed the requirements again. Aspiring partners now need at least 4,000 hours of watch time within the past 12 months and a minimum of 1,000 subscribers.
“These higher standards will also help us prevent potentially inappropriate videos from monetizing which can hurt revenue for everyone,” said YouTube Chief Product Officer Neal Mohan and Chief Business Officer Robert Kyncl in a co-authored blog post.
In mid-February 2018, this standard will also be applied to evaluate the quality of existing partners. YouTube estimates that 99 percent of creators affected by this change made less than $100 in the last year.
A greater investment in human review of video content
YouTube has also doubled down on its commitment to evaluating content with human judgment. While technology does much of the heavy lifting, YouTube acknowledged that it needs people to teach its machine-learning algorithms the difference between good and bad content.
According to YouTube, nearly 2 million videos have been manually reviewed for violent extremist content since June of last year. In 2018, its goal is to have at least 10,000 people working to review content that might violate its policies.
A manual review of YouTube’s Google Preferred partners
Google Preferred is an advertising product designed for brands that want to partner with YouTube’s elite creators. It aggregates the platform’s best video content into easy-to-purchase packages.
Google Preferred includes access to the most popular YouTube channels among 18- to 34-year-old Americans and allows brands to choose from 12 packages across a variety of verticals. Categories include beauty and fashion, entertainment and pop culture, and food and recipes.
As part of the sweeping changes to improve brand safety, YouTube is modifying the procedures used to vet Google Preferred channels. It is now manually reviewing every creator in the network and will only run ads on videos that have been verified. The hope is that this process will prevent incidents like the Logan Paul suicide forest video from happening again.
Using technology to tackle questionable content at scale
When appropriately programmed, machine-learning technology has been a major help in YouTube’s quality assurance process. Combined with strong human judgment, artificial intelligence has helped YouTube make the following brand safety improvements:
1. Removing more than 150,000 videos for violent extremism since June of last year.
2. Enhancing human reviewers’ ability to remove negative videos. With the aid of artificial intelligence, reviewers have been able to take down five times as many negative videos.
3. Flagging 98 percent of the videos removed for violent extremism with machine-learning algorithms.
4. Removing 70 percent of malicious videos within eight hours of upload.
Improving the transparency of data
YouTube is also working to give brands better access to its analytics and data. One step is a report that gives both advertisers and creators more information about the flags it receives and the actions it takes to remove content that violates YouTube policies.
It’s also introducing a three-tier suitability system that will give brands a greater say in where their ads are placed. The system will also outline the potential limitations on reach that may result.
YouTube is also exploring partnerships with third-party vendors to provide enhanced brand safety reporting. It’s currently running a beta test with Integral Ad Science and will soon launch a similar experiment with DoubleVerify.
The final word on YouTube’s brand safety issues
These are just a few of the high-level ways in which YouTube is addressing rising concerns over brand safety. YouTube’s commitment to improving standards in 2018 will surely be tested again as it was in 2017, but the company seems ready for the challenge.
“We are taking these actions because it’s the right thing to do,” wrote YouTube CEO Susan Wojcicki in a blog post about her company’s efforts against platform abuse. “Creators make incredible content that builds global fan bases. Fans come to YouTube to watch, share, and engage with this content. Advertisers, who want to reach those people, fund this creator economy. Each of these groups is essential to YouTube’s creative ecosystem—none can thrive on YouTube without the other—and all three deserve our best efforts.”