- Social media users have been having fun with an app called Lensa and its latest “magic avatar” feature.
- The app allows people to upload photos of themselves, which are then transformed into magical, whimsical AI images.
- Some social media users have warned against apps like it, saying they steal from artists and pose a security threat to those who upload their photos.
You might look like a warrior, preparing for battle. Or maybe you would like to be surrounded by flowers, wearing a wedding dress. Better yet, how about a goddess?
These representations, called magic avatars, are images created by an app called Lensa. Owned by Prisma Labs, Lensa lets users upload photos of themselves, pay a fee, and within minutes receive images of themselves in all sorts of fancy settings.
The avatars quickly gained popularity among social media users and celebrities, including Chance the Rapper, Michaela Jaé Rodriguez, and Taraji P. Henson.
But while AI-generated photos have been a hit among some, there have been allegations that apps like this are stealing from real artists. Others say they produce racist depictions of the users who upload their photos.
Here’s what to know about these magical avatars and how the company responded to warnings against them.
What are the “magic avatars” in the Lensa AI app? How are they created?
This latest fad started with Stability AI, the company that created a neural network model called Stable Diffusion. The model uses internet data to generate images from text.
Lensa uses a copy of the Stable Diffusion model, allowing users to upload their own photos and wait for the app to create custom images, the company said on its website. The model was trained on LAION-5B, a large dataset of image-text pairs.
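Drastically simplified, a diffusion model like Stable Diffusion works by starting from random noise and repeatedly removing a little of that noise at each step until an image emerges. The toy sketch below illustrates only that iterative-denoising idea; it is a hypothetical teaching example, not the real Stable Diffusion architecture. In the real model, a trained neural network conditioned on the text prompt predicts the noise to remove; here, a fixed target image stands in for that prediction.

```python
import numpy as np

# Toy illustration of the diffusion idea: begin with pure noise and
# repeatedly nudge the "image" toward a target. In real Stable Diffusion,
# the nudge comes from a trained neural network conditioned on the text
# prompt; here a simple fixed target stands in for that prediction.

rng = np.random.default_rng(0)
target = np.full((8, 8), 0.5)       # stand-in for "the image the prompt describes"
image = rng.normal(size=(8, 8))     # start from pure random noise
initial_gap = np.abs(image - target).max()

for step in range(50):              # iterative denoising loop
    noise_estimate = image - target  # a real model *predicts* this noise
    image = image - 0.1 * noise_estimate  # remove a fraction of it each step

final_gap = np.abs(image - target).max()
```

After the loop, the noisy start has been pulled almost all the way to the target; the real model runs an analogous loop, which is why generating an avatar takes minutes rather than being instantaneous.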
The process takes an average of 10 minutes and involves 120 million billion mathematical operations to analyze the photos, the website says.
Once the avatars are created, users’ photos are permanently deleted from its servers, Lensa said.
“Each time a user purchases a new avatar pack, the process repeats itself from scratch,” Lensa said on its website. “That’s why we ask you to upload photos every time you request a new pack.”
Images like those generated by Lensa start with generative AI models, said Yaron Inger, co-founder and chief technology officer at Lightricks, another company that produces similar AI-generated images and the maker of Facetune, a hit among celebrities and socialites. The models let users write text prompts and keywords to create “high fidelity” images, or images that closely resemble the originals.
The models are relatively new, but good models have emerged over the past few months, Inger said.
How was Stable Diffusion created?
Stable Diffusion, the open-source model used by Lensa, launched in August.
“The guys who trained (the model) basically took hundreds of millions of photos from the web,” Inger said. “(They) scoured the web, got lots of pictures, and showed the model those pictures with text. The model is like a big brain that can hallucinate new pictures based on that ton of new data.”
Why do artists warn against using AI-generated images?
There have been some serious discussions surrounding these AI images, mostly arguments that Lensa has stolen from real artists trying to make a living.
As a reminder, the app charges users to upload their images and create magical avatars.
A Facebook warning message from December 4, shared more than 50,000 times as of Thursday, claims that such apps use copyrighted works of art from creators around the world.
On its FAQ page, Lensa said the network used by the app is able to recognize patterns and connections between images and text descriptions, not individual works of art. The AI can then apply those techniques to generate new content.
“Once the training is complete, the AI does not refer back to the original dataset,” Lensa said. “Instead, it applies the learned principles it has developed to the subsequent creative process. Therefore, the results are not replicas of artwork by any particular artist.”
Inger, of Lightricks, said his company has taken steps to address user safety concerns. Stable Diffusion originally came with a “pretty basic NSFW filter,” so Lightricks added its own filters to protect its users.
“We basically monitor everything that’s going on in our systems and make sure people are only creating appropriate content,” he said.
He also doesn’t see the apps as a threat to artists and creators. In fact, he thinks the opposite.
“I think these models give creators a lot of power,” he said. “They allow people to celebrate what other artists have created, not copy them. If you think about artists and their creations, a lot of their creations are inspired by other artists, right?”
Are these “magic avatars” racist and misogynistic?
Another reason social media users are speaking out against Lensa and AI apps like it is that some of the images they generate are overly sexualized.
Activist Brandee Barker, for example, uploaded photos of her face and received at least four magic avatars depicting her partially nude or with exposed cleavage.
In the same thread, a social media user accused the app of perpetuating “racism and sexism”.
“I ended up looking like a white woman in most photos,” Twitter user @feministnoire posted.
Lensa said the app and its model use unfiltered internet content to create the magic avatars, so the AI images will naturally contain the biases that humans include in their own art.
“The creators recognize the possibility of societal biases,” Lensa said. “So do we.”
The company also pointed out that Stability AI, which created the Stable Diffusion model, adjusted the model to make it more difficult to create NSFW images.
Lensa also said it has taken steps to reduce bias in the avatars. That work takes time, so in the meantime the company stressed that minors should avoid the app.
Inger said companies like Lensa and Lightricks use an open-source model made up of tons of photos.
“We didn’t train the model,” he said. “It’s still early days… there’s still a lot of progression in terms of the type of data we feed into these models.”
The goal, he said, is not to include gender or racial bias. Lightricks works internally to ensure that the AI images created in its app are not offensive or biased.
And yes, he is delighted with the way things are going.
“The options are endless,” he said. “We are currently only scratching the surface of what is possible with these models…I think it will only get better with time.”
Saleen Martin is a reporter on USA TODAY’s NOW team. She’s from Norfolk, Virginia – the 757 – and loves all things horror, witches, Christmas and food. Follow her on Twitter at @Saleen_Martin or email her at firstname.lastname@example.org.