This week, YouTube introduced a new feature called First Aid Information Shelves: collections of brief, step-by-step guides – usually a minute or two in length – designed to help bystanders assist someone experiencing a medical emergency. To start, YouTube will include instructional clips for 12 specific topics, including CPR, seizures, choking, bleeding, and psychosis.
Google didn’t produce the collection in-house but farmed it out to various healthcare organizations and medical experts. So far, contributors include the non-profit Mass General Brigham (the largest health care system in Massachusetts), the Mexican Red Cross, and the American Heart Association. Given the time-sensitive nature of medical emergencies, these videos won’t carry any advertising or bring in any immediate revenue for Google.
By curating its own bespoke collection of approved videos, YouTube is making its most aggressive attempt to date to provide vetted, accurate medical information to users – and to clamp down on the viral spread of misinformation. But this isn’t the platform’s first try.
Amid concerns about the spread of anti-vaccine messaging during the early days of the COVID pandemic, YouTube started juicing its algorithm to favor medical content from “authoritative sources” and added information panels to selected videos noting that the National Academy of Medicine considers them “credible sources.”
In August of 2021, YouTube once more clarified and streamlined its rules around medical misinformation, and began more aggressively pulling content containing inaccuracies about “high-risk” topics and conditions, such as COVID-19, reproductive health, cancer, and drug abuse. The following month, the platform announced that it had banned the accounts of several prominent anti-vaccine activists, including alternative medicine proponent Joseph Mercola and lawyer/activist Robert F. Kennedy Jr.
At that point, YouTube said it would start outright removing videos containing certain kinds of misleading information about vaccines, such as the idea that vaccines don’t reduce rates of disease transmission, or inaccurate claims about the molecular makeup of the vaccines themselves.
Whether these efforts have actually limited the spread of misinformation on YouTube remains an open question. In February, a study from Boston’s Brigham and Women’s Hospital found plenty of misinformation circulating on YouTube about healthy sleep and insomnia, and concluded that amateur-made videos containing misleading information on the subject were, on average, more popular on the platform than expert-led content.
Last year, Robert F. Kennedy Jr. lost his legal attempt to force Google and YouTube to restore his videos questioning the safety of the COVID-19 vaccine, after claiming that the platform had violated his First Amendment rights. He remains active on YouTube, though with the 2024 presidential race quickly approaching, vaccines are no longer his primary focus. Meanwhile, those “information panels” highlighting credible sources remain opt-in and do little to dissuade sufficiently curious or motivated viewers from seeking out less credible information elsewhere. It’s broadly the same scheme YouTube used to deal with political misinformation around the 2020 election, with questionable results.
So far, all of YouTube’s attempts to address the problem have focused on either tweaking the algorithm to present better content to users or helping those users make smarter, more informed decisions for themselves. With limited apparent success on these fronts, Google is now taking the next step: using human experts to curate a collection of its best content, which will surface at the top of search listings.
But here’s the thing: isn’t that what Google and YouTube were designed to do from the very start? The whole selling point of a search engine is that it seeks out the five or ten best, most accurate links relevant to your search and shows them to you, without leading you down rabbit holes or dead ends that don’t actually satisfy your query.
Of course, it wouldn’t be feasible for YouTube to curate a collection of expert-made videos on every topic under the sun, or even to pay human staffers to comb through its entire archive looking for the best videos. On some level, these systems have to be automated on any platform that grows as large as YouTube, which by most estimates hosts over a billion clips across tens of millions of active channels.
But if it can be done for medical emergencies, then it can be done, and the same lessons could be applied to other areas where accurate information is perhaps not life-and-death, but still crucial.
From a YouTube creator’s perspective, the First Aid Information Shelf concept is similarly provocative. Nearly all of the big creator headaches – from the ungainly Content ID system that manages copyright to the obscure rules determining whose content gets featured and whose gets ignored – stem, in one way or another, from the platform’s inability to individually, carefully, and comprehensively organize, moderate, and manage its own content library.
According to YouTube’s own copyright transparency report, published in 2023, Content ID claims reached a new high, with the tool used to flag more than 826 million claims in the second half of 2022 alone. (That’s roughly 4.5 million claims per day.) The system generates an estimated $1.5 billion in additional payouts to actual copyright holders after their work is reposted to YouTube by others. But it’s far (far!) from perfect, or even functional in its current incarnation, and there are constant stories about the system being misapplied, misused, or abused.
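For what it’s worth, that per-day figure is just back-of-the-envelope division; only the 826 million total comes from YouTube’s report. A minimal sketch of the arithmetic:

```python
# Back-of-envelope estimate: only the 826M total is from YouTube's
# report; the daily rate is simple division over July-December 2022.
CLAIMS_H2_2022 = 826_000_000
DAYS_H2_2022 = 31 + 31 + 30 + 31 + 30 + 31  # July through December = 184 days

claims_per_day = CLAIMS_H2_2022 / DAYS_H2_2022
print(f"{claims_per_day:,.0f} claims per day")  # ~4,489,130 -> roughly 4.5 million
```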
In August of last year, the power metal band DragonForce had one of their own music videos pulled from YouTube after a bogus copyright claim. Just this past week, voice actor and impressionist Brock Baker had a claim filed against his parody of Disney’s classic “Steamboat Willie” cartoon, even though the film notably entered the public domain on January 1st. Last year, scammer Jose Teran was sentenced to five years in prison after collecting $23 million in YouTube royalties through fraudulent copyright claims. The list goes on and on; this could be its own column.
These are issues that could certainly be addressed with more careful human curation. Perhaps not entirely – 4.5 million claims per day is a tall order – but supplementally, perhaps in certain particularly murky or divisive categories of content.
Obviously, YouTube making good-faith efforts to share helpful videos when someone nearby is experiencing cardiac arrest is a positive; it’s hard to get too upset about that sort of campaign. But it does demonstrate that, when Google executives decide quality and accuracy are a top priority, YouTube’s content moderation can be far more effective and impactful than it has been.