Lensa AI: What is it and Why Are Experts Concerned? 

Published on
13/12/2022 03:51 PM

Lensa AI has become a viral sensation for its ability to generate vibrant artwork from selfies, but experts warn its AI-powered technology raises serious ethical, security and privacy concerns.

The AI photo generator app, released by US app developer PrismaLabs, creates digital avatars of people's faces by taking data from their photos and combining it with a massive dataset of digital art scraped from the internet.

This impressive generative technology, powered by the open-source Stable Diffusion model, brought over 5.8 million users to the app worldwide in the first week of its release, with PrismaLabs making over $8 million in revenue from subscription fees and in-app purchases.
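Lensa's exact pipeline is proprietary, but the open-source model underneath it can be run directly. The snippet below is a minimal sketch of generating a stylised, avatar-like portrait with Stable Diffusion via the Hugging Face diffusers library; the model ID and prompt are illustrative assumptions and do not reflect Lensa's actual configuration.

```python
# Minimal sketch: generating a stylised portrait with open-source Stable Diffusion.
# Assumes the Hugging Face diffusers library and a CUDA-capable GPU; the model ID
# and prompt are illustrative only and are not Lensa's actual pipeline.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate an avatar-style image from a text prompt
prompt = "digital painting of a person, vibrant fantasy portrait, highly detailed"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("avatar.png")
```

In Lensa's case, the generation is additionally conditioned on the selfies a user uploads, which is where the collection of face data discussed below enters the process.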

The app's success arrives at the end of a year defined by huge advancements in generative AI technology that has reimagined the way text and images are created.

Just a few months ago, OpenAI’s DALL-E 2 gained international acclaim for its impressive image-generation technology, whilst the company’s chatbot ChatGPT, built on GPT-3.5, sparked the interest of millions around the world when it was released last week.

But as Lensa AI tops the Apple and Google app charts, experts worry many users may be blissfully unaware of the ethical implications of using readily available generative AI tools.

Collection and storage of face data

Lensa AI’s fascinating image generation is not the only thing getting people talking. Several experts have expressed their concern about the app’s process of collecting and storing users’ data to generate its popularised “Magic Avatars”. 

"At the moment, it’s just about faces and selling ads and so on, but it’s going to be much crazier than that,” Juergen Schmidhuber, an internationally recognized computer scientist and leader in the AI field, told Today.com

Companies like Lensa “try to entice you to give your data away and you get something in return, which are pleasurable experiences,” he explained. 

According to Lensa’s privacy policy, the company “collect[s] and store[s] your Face Data for online processing function”, and this data is then “automatically deleted within 24 hours after being processed by Lensa”.

Yet for Mari Galloway, an AI and cybersecurity specialist, this does little to settle her concerns about the app’s collection and storage of face data.

“They don’t keep the photos and videos for longer than 24 hours. But do we really know what they’re doing with that? How are they deleting it? How is the data encrypted?” she explained to Today.com. 

In addition to biometric data, the application also collects further information from the user’s smartphone, including third-party analytics, log file information, device identifiers, and the personal information used to create an account.

The collection of this information, paired with the use of face data, has the potential to lead to serious security risks in the near future, with users’ entire identities potentially being stolen and used for all kinds of fraudulent purposes.


AI creating sexist and racist images

As with many recent generative AI technologies, Lensa AI’s image generation suffers from several limitations that raise ethical concerns about its algorithm.

One major concern relates to the system’s dependence on its training datasets, which leads the AI generator to create images that reproduce the same biases as the artwork they are derived from.

This has led to a variety of troubling outcomes, from women being over-sexualised to Black and Asian users being subjected to offensive, racist stereotyping.

“The internet is filled with a lot of images that will push AI image generators toward topics that might not be the most comfortable, whether it’s sexually explicit images or images that might shift people’s AI portraits toward racial caricatures,” said Grant Fergusson, an Equal Justice Works fellow at EPIC.

Artists' work stolen by AI

Another issue relates to the tool allegedly stealing art from real artists. Lensa’s AI was trained on LAION-5B, an online database of images, which has led to some artists’ work being used without their permission.

“Some of the work is distinctly recognisable as other artists’ work,” Kim Leutwyler, a Sydney-based artist, told Guardian Australia.

“They are calling it a new original work but some artists are having their exact style replicated exactly in brush strokes, colour, composition – techniques that take years and years to refine,” she added.

Leutwyler found much of her own artwork in the LAION-5B database used to train Lensa AI’s model, and called for improved copyright regulations to stop art from being taken and replicated by AI systems like Lensa’s.
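LAION-5B is publicly indexed, so it is possible to check whether particular images appear in it. The sketch below uses the open-source clip-retrieval client against the public LAION-5B search backend; the endpoint URL, index name and query text are assumptions based on the project's documentation and may have changed since publication.

```python
# Rough sketch: searching the public LAION-5B index for images matching a query.
# Uses the open-source clip-retrieval client; the endpoint URL, index name and
# query text are assumptions, and the hosted service may have moved or changed.
from clip_retrieval.clip_client import ClipClient

client = ClipClient(
    url="https://knn.laion.ai/knn-service",  # public demo endpoint (assumed)
    indice_name="laion5B-L-14",              # LAION-5B CLIP index (assumed)
    num_images=10,
)

# Illustrative text query; an artist might instead search their own name,
# style, or an image of their own work to see whether it appears in the dataset.
results = client.query(text="vibrant portrait painting")
for result in results:
    print(result.get("url"), "-", result.get("caption"))
```

Web front-ends built on the same index serve a similar purpose, letting artists check whether their work appears in the training data.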
