“We’re working hard to come up with an approach that is very creator-first,” said Eric Shadowens, Patreon’s Europe policy lead, during the platform’s quarterly town hall on March 28.
If 2022 was the year of crypto, one thing is clear: 2023 is the year of AI. Thanks to tools like OpenAI’s ChatGPT, Midjourney, and Stable Diffusion, professionals and laymen alike can make pretty much anything with the help of artificial intelligence, from literature and artwork to deepfakes and even graphic novels.
But in this new era of AI-aided creation, where does that leave creators? Platforms like Patreon, which have typically been used to fund bespoke, exclusive content for a paying fanbase of subscribers, have been sent into a tailspin since AI came into the picture, as the line between what does and doesn’t count as “original” content becomes ever more blurred.
As a result, AI was a hot topic at Patreon’s latest town hall, with creators getting some much-needed clarification on how to navigate the use of AI moving forward. Here’s what we found out.
A ‘creator-first’ approach
“Our team has been exploring how [AI] could impact creators on our platform, and we recognize both the challenges and the potential that AI poses for us,” Shadowens explained during the live-streamed town hall. “It’s a matter of figuring out: what does that look like from a policy perspective?”
He admitted that while Patreon’s policy team doesn’t “have all the answers right now,” it is “working hard to come up with an approach that is very creator-first.” The key to making this happen, Shadowens continued, is for the policy team to engage with users as much as possible.
“We want to make sure that any guideline is as creator-first as possible, and so any guideline related to artificial intelligence is no exception,” he explained. “The more we know about how you feel about AI, the more we can do to craft policy that addresses your apprehensions while also preserving creative freedom, which is of course really important.”
In the past few months, the online art community has been divided over the incorporation of AI into online projects, with Passionfruit previously reporting on the many artists speaking out about AI art, and a Digital Millennium Copyright Act (DMCA) case even being filed by a coalition of artists against the makers of the leading AI tools Midjourney and Stable Diffusion.
‘Transparency’ in Patron-funded AI content
With platforms like Patreon having little to no precedent for dealing with AI-assisted creations, the past few months have been something of a learning curve. But one thing Shadowens is adamant about moving forward is that artists on Patreon be transparent about their use of AI.
“Transparency about those creations is a really important area of conversation,” he said. “We want to make sure that Patrons can make informed decisions about the creators they support and can trust the content that they’re seeing.”
In other words, while it seems likely that AI-made content won’t be banned on Patreon, users will be expected to be honest about it, so that patrons aren’t funding a project under false pretenses and are well aware of the role AI plays in bringing it to life. So, we can expect a policy on AI transparency from Patreon sometime in the future.
AI users will be ‘accountable’ for data privacy
To generate AI art, leading AI tools are believed to be trained on existing artwork by other artists, whether supplied directly or scraped from the internet. Some AI artwork may therefore be built on other artists’ original content that they may or may not have given permission to be used and reproduced in this way. For instance, remnants of other artists’ signatures were purportedly identified in images produced by Lensa AI.
This is why, according to Shadowens, Patreon members using AI “must be conscious of the legal considerations there, particularly as they may be taking in user data.” Consequently, if an AI project ends up misusing a third party’s art, the responsibility, and the consequences, will fall on the person who used the AI software.
“It’s important for creators to remember that they are accountable for the content that their AI tools produce, including anything that potentially violates our community guidelines as well as other considerations such as copyright,” Shadowens added.
Again, while Patreon has not yet written a specific policy to reflect this, one is in the works: Shadowens promised that the future policy would have “clear language” to help steer Patreon users away from legal issues.
He also emphasized how Patreon was working towards “a standard for fair and unbiased systems.” The first step in achieving that, he added, was for Patreon policymakers to consider “the purpose and impact of the tools that creators are building as they continue to evolve.”
As part of this, Shadowens revealed that Patreon’s in-house legal team is launching an initiative to fight content piracy, which he described as “a great example of [Patreon’s] commitment to defending creator IP on the AI-specific front.”
He explained, “In some of these [AI] tools, … you may take a user input that in fact actually counts as user data, and so how that data is handled ends up mattering.”
What about deepfakes?
Another worrying implication of AI content is deepfakes, as highlighted by streamers like QTCinderella, who discovered deepfake porn made with her likeness and those of other female streamers, created and distributed without their consent.
So, during the Patreon town hall, one viewer asked whether ‘privacy,’ which Shadowens previously identified as a core principle driving Patreon’s policymaking, would be expanded to prohibit the creation and funding of deepfakes on the platform.
On this concern, Shadowens said that Patreon’s policy team was “of course assessing” deepfakes as part of its future AI policies, with particular attention to whether the users who make these deepfakes have the usage rights to the images involved.
Purpose, impact, and the ‘harm element’
Fundamentally, when it comes to forming AI policies on Patreon, Shadowens said the policy team was considering two things: purpose and impact.
“What we need to think about is: what is the purpose of these tools?” he asked. “Is it something that makes sense on the platform? Imagine a tool that is designed only to help somebody create malware. Now, I’m not saying that’s a likely outcome or that it’s going to happen, but all of those things are kind of potential outcomes in this space.”
So, if an AI-generated project has a largely negative impact, as with this malware example, it’s likely that there will be policies put in place to prohibit it from being supported by Patreon.
“We have to consider what is the purpose or impact of these tools that are being built, and make sure that users understand that they’re accountable for what those AI generators and AI tools actually produce,” Shadowens insisted.
But what about AI-produced content that technically goes against Patreon’s policies, but isn’t necessarily harmful? This, Shadowens explained, is when the “harm element” will be considered by Patreon policymakers.
“There’s some other things that might violate our policies, but there’s not a real-world harm element to people, but it’s something that still is a violation of our policies,” he explained.
“But what makes Patreon pretty special is that we’ll work with the creator to try to bring the account back within guidelines. Our Trust and Safety Team works every single day … to help clarify policies and make sure that accounts can be brought back within compliance with the policy so that we don’t have to remove them.”
The next steps
During the town hall, it was emphasized several times that Patreon’s policies concerning AI are very much a work in progress, and that the platform hopes to work closely with its users to shape and codify these policies in a way the artist community approves of, a collaborative approach Shadowens stressed he intends to continue moving forward.
“Policy is kind of like a language,” he said. “It’s constantly evolving, and so [our policies] are evolving as our world evolves with things like AI. So, when our trust and safety team identifies cases that might be in gray areas, or where we don’t really have a policy that clarifies how or what we should enforce, then we work super closely, as we mentioned before, with our policy team to ensure that we can codify that.”