
YouTube to reduce conspiracy theory recommendations in the UK

YouTube is expanding an experimental tweak to its recommendation engine that’s intended to reduce the amplification of conspiracy theories to the UK market.

In January, the video-sharing platform said it was making changes in the US to limit the spread of conspiracy theory content, such as junk science and bogus claims about historical events — following sustained criticism of how its platform accelerates damaging clickbait.

A YouTube spokeswoman confirmed to TechCrunch that it is now in the process of rolling out the same update, which suppresses conspiracy recommendations, in the UK. She said it will take some time to reach full effect, without providing detail on exactly when the changes will be fully applied.

The spokeswoman said YouTube acknowledges that it needs to do more to reform a recommendation system that has been shown, time and again, to lift harmful clickbait and misinformation into mainstream view. YouTube maintains that this negative spiral occurs only sometimes, and says that on average its system points users to mainstream videos.

The company calls the type of junk content it has been experimenting with recommending less often "borderline", meaning material that toes the line of its acceptable content policies. In practice that means videos making nonsense claims that the earth is flat, peddling blatant lies about historical events such as the 9/11 terror attacks, or promoting harmful junk about bogus miracle cures for serious illnesses.

All of which can be filed under misinformation 'snake oil'. But for YouTube this sort of junk has been very lucrative snake oil, a consequence of Google's commercial imperative to keep eyeballs engaged in order to serve more ads.

More recently, though, YouTube has taken a reputational hit, as its platform has been blamed for having an extremist and radicalizing impact on young and impressionable minds by encouraging users to swallow junk science and worse.

A former Google engineer, Guillaume Chaslot, who worked on the YouTube recommendation algorithms, went public last year to condemn what he described as the engine's "toxic" impact, which he said "perverts civic discussion" by incentivizing the creation of highly engaging borderline content.

Multiple investigations by journalists have also delved into instances where YouTube has been blamed for pushing people, including the young and impressionable, towards far-right points of view via its algorithm's radicalizing rabbit hole, which exposes users to increasingly extreme viewpoints without providing any context about what it is encouraging them to view.

Of course it doesn't have to be this way. Imagine if a YouTube viewer who sought out a video produced by a partisan shock jock were suggested a less extreme, or even an entirely alternative, political point of view. Or only saw calming yoga and mindfulness videos in their 'up next' feed.

YouTube has eschewed a more balanced approach to the content its algorithms select and recommend for commercial reasons. But it may also have been keen to avoid drawing overt attention to the fact that its algorithms are acting as de facto editors.

And editorial decisions are what media companies make. So it then follows that tech platforms which perform algorithmic content sorting and suggestion should be regulated like media businesses are. (And all tech giants in the user generated content space have been doing their level best to evade that sort of rule of law for years.)

That Google has the power to edit out junk is clear.

A spokeswoman for YouTube told us the US test of a reduction in conspiracy junk recommendations has led to a drop in the number of views from recommendations of more than 50%.

Though she also said the test is still ramping up, suggesting the impact on the viewing and amplification of conspiracy nonsense could be even greater if YouTube were to demote this type of BS more aggressively.

What’s very clear is the company has the power to flick algorithmic levers that determine what billions of people see — even if you don’t believe that might also influence how they feel and what they believe. Which is a concentration of power that should concern people on all sides of the political spectrum.

While YouTube could further limit algorithmically amplified toxicity, the problem is that its business continues to monetize engagement, and clickbait's fantastical nonsense is, by nature, highly engaging. So, for purely commercial reasons, it has a counter-incentive not to clear out all of YouTube's crap.

How long the company can keep up this balancing act remains to be seen, though. In recent years some major YouTube advertisers have intervened to make it clear they do not relish their brands being associated with abusive and extremist content. That does represent a commercial risk to YouTube, if pressure from and on advertisers steps up.

Like all powerful tech platforms, its business is also facing rising scrutiny from politicians and policymakers. And questions about how to ensure such content platforms do not have a deleterious effect on people and societies are now front of mind for governments in some markets around the world.

That political pressure — which is a response to public pressure, after a number of scandals — is unlikely to go away.

So YouTube's still-glacial response to addressing how its population-spanning algorithms negatively select for content that is socially divisive and individually toxic may yet come back to bite it, in the form of laws that put firm limits on its power to push people's buttons.



from TechCrunch https://ift.tt/2U97T87
