From Accessibility Efforts to Ethical Concerns, Here Are Our AI Takeaways From SXSW Content Creators Should Consider

Photo credit: Dohma48/Shutterstock; Patthana Nirangkul/Shutterstock; OpenAI/Wikimedia Commons; Google/Wikimedia Commons; SXSW/Wikimedia Commons (Licensed) by Caterina Cox

During day one of the arts and technology conference South by Southwest (SXSW), a line of attendees unable to get a seat in OpenAI president and co-founder Greg Brockman’s featured session wrapped nearly halfway around the fourth floor of the Austin Convention Center.

Having recently broken through into mainstream discussion, artificial intelligence (AI) was among the hottest topics at this year’s SXSW. Like Brockman’s featured session, panels such as “AI & Cultural Trends: How Data Forecasts Futures” filled to capacity before starting, leaving curious attendees waiting outside in hopes that someone would leave midway through.

Beyond the panels, AI was well-integrated throughout the rest of SXSW. One art exhibit, “Ancestral Archives” by artist Josie Williams in association with EY metaverse lab, featured an AI system that drew on the texts of Black leaders, including Audre Lorde, James Baldwin, Zora Neale Hurston, and Octavia Butler, to create a unique connection with users in chatbot form.

On the television side of the festival, the premiere of the Peacock series Mrs. Davis tackled the idea of a malevolent AI, with the show’s creators being featured on the panel “Mrs. Davis: The Future of Tech & Entertainment.”

With the role of AI seemingly increasing in creative industries, it’s likely that these tools will become even more integrated into the content creation career path. With that in mind, here are some observations from the conference that could be useful for content creators.

The excitement around the topic began with Brockman’s keynote, where he tackled a myriad of topics, including the accessibility of AI. “We made it accessible,” said Brockman, referring to ChatGPT becoming the fastest-growing application in history. “We built an interface that was super simple. We made it available for free to anyone.”

Beyond Brockman’s session, the topic of accessibility—and explainability—was central in Google researcher Patrick Gage Kelley’s panel “Helping People Navigate an AI World.” There, Kelley spoke about Google’s explainability efforts, a key element of the company’s AI principles.

Kelley said one of the motivators behind explainability work was looming governmental regulation. Notably, the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights on Oct. 4, 2022, with “Notice and Explanation” being one of the five principles the office deems should guide the design, use, and deployment of AI.

Kelley said explainability efforts should go far beyond textual in-the-moment explanations and instead encompass larger, systemic investigations into how these products work.

“And so we’re proposing a broader view of explainability,” Kelley said. “Don’t just say, ‘Oh we didn’t get results,’ or, ‘Oh this thing is broken.’ … Explain why. Start to teach people how these products work.”

Among Google’s initiatives to address explainability is an educational program titled Discover AI in Daily Life, an online lesson with 13 videos aimed at familiarizing middle school students with AI systems.

However, there was also an air of caution regarding AI throughout the conference. In the panel “Can There Be AI Art Without Artists?” AI artist and researcher Eryk Salvaggio and Northeastern University researcher Avijit Ghosh unpacked the ethical implications of popular AI art generators such as Dall-e.

On TikTok, AI art generators have been quite the trend. However, despite their virality, many TikTokers have been skeptical.

Much of that skepticism stems from ethical concerns, including worries that the data sets we choose to feed AI may lead it to reflect human biases, including racial and gender-based bias.

Ghosh pushed back on the idea, held in some circles, that AI is unbiased, arguing that every decision in building an AI system reflects the biases of those who create it.

“AI generators are all about bias,” Ghosh said. “Whatever is in the data set is going to move the images it produces in those directions. Technical and social bias. And the question of bias is one of the many questions concerning the ethical uses of these and other AI systems. These questions concern human agency, bias, consent, inclusion, and exclusion.”

Ghosh further explained the controversy underlying these data sets. “Not only is it dangerous for power and privacy purposes, but also this represents a shift in economic power. We are not just saying that it is taking away people’s intellectual property for creativity reasons, it’s also for-profit reasons. All of these massive generative AI models to come out cost a few to access.”

As Ghosh mentioned, many of these popular generators are not free to use: Dall-e charges for additional credits beyond those included in the free tier, and Midjourney costs $10 per month for a basic subscription.

While companies are charging for AI tools, there have been controversies regarding how these tools, namely Stable Diffusion, are scraping copyrighted art. Passionfruit reached out to OpenAI, Google, Dall-e, Midjourney, and Stable Diffusion for comment via email and did not hear back in time for the publication of this article.

“What is happening here is that companies are taking people’s intellectual art, training these models, and then selling that as a service,” Ghosh said. “It’s not only taking away people’s incentive to do more creative things, but it’s also taking away people’s incentive to have this as their bread and butter now that companies are getting all the profit.”

Ghosh described this cycle as “vicious,” with companies profiting off of “unsuspecting artists who never even agreed to be integrated into the data sets in the first place.”

In a statement given to Passionfruit, Salvaggio said these ethical concerns cause hesitation for creatives to use these tools professionally. 

“Creatives don’t want to use tools that might arbitrarily use someone else’s work,” Salvaggio said. “So really, the market for these tools for professionals in animation or gaming is limited until this issue of imitation is addressed. Nobody wants to work on a piece of art or a trailer just to find out the style belongs to somebody else.”

Salvaggio argued AI-driven tools need a better system for artist attribution and the ability for artists to opt out of being included in a data set.

“One artist in a massive dataset may not make much of a difference, but the reality is that the entire dataset relies on every artist in order to function. If you combine that with the power to use an artist’s name to make knock-offs of their work, that’s a challenging position for illustrators and photographers,” Salvaggio said. 

In a statement given to Passionfruit, Ghosh advocated for the regulation of these models. 

“I wholeheartedly support efforts to regulate generative models and respect copyrights and consent from creators before their products are indiscriminately consumed to create these cookie cutter generative models,” Ghosh said. “A combination of both centralized regulation and decentralized defense mechanisms should go hand in hand as dual governance mechanisms to protect artists’ rights, intellectual property, livelihood and dignity.”

Through all the controversy and excitement, it was clear at this year’s SXSW that AI is well on the way to being an increasingly integral part of all of our lives.