As part of the Meta Connect 2024 conference, the tech company has announced a number of new AI updates — but some are raising questions.
The most notable update is to Meta’s AI assistant. Similar to Apple’s Siri, Meta AI will now be able to reply to questions out loud across several of the company’s platforms, including Facebook, Instagram, WhatsApp, and Messenger.
But the most interesting part of this feature is that users have several voices to choose from, and some of them might sound a little familiar. The voices are modeled on a number of celebrities, including Dame Judi Dench, Awkwafina, Keegan-Michael Key, John Cena, and Kristen Bell.
According to the Wall Street Journal, Meta paid “millions of dollars” to celebrities as part of these official partnerships.
However, the appearance of Kristen Bell’s voice is particularly peculiar. Back in June, Bell reposted a viral Instagram post, which said that she refuses to consent to Meta using her voice to train AI models. So, her stance has either radically changed, or Meta has some serious explaining to do.
What else did Meta announce?
Amongst other things, Meta announced four additional AI updates during the event.
Firstly, for US creators, Meta AI will be able to reply to photos that creators share in chat. The AI bot will not only provide information about the object photographed, but will also offer editing capabilities, such as adding or removing an object and altering the picture’s background.
Creators will also be able to broaden their content’s appeal to international audiences through automatic lip-syncing and video dubbing on Reels, a feature that is currently being tested with creators in the US and Latin America.
Finally, there’s Meta’s AI “imagine” feature, which allows Facebook and Instagram users to use artificial intelligence to “imagine” images of themselves, along with AI-generated captions that have been “imagined” for them.
Along with this, Meta also announced that it’s starting to test other AI-driven “imagined” content across Facebook and Instagram feeds.