Self-flattery is pushing AI into the mainstream. But should you join the thousands of people who are experimenting with AI portraiture? This new trend, enabled by an app called Lensa AI, raises some very difficult questions about craft, consent, and bias.

How Does Lensa AI Work?

Developed by Prisma Labs, the Lensa AI app (iOS/Android) launched in 2018 and offers several photo retouching features. Essentially, it's a photo beautification tool---it makes your selfies prettier by applying filters and removing "imperfections."

But the latest version of Lensa AI includes something called "Magic Avatars." This feature allows you to commission up to 200 AI-generated portraits in a variety of styles. All you need to do is share 10 to 20 images of your face and fork over a few dollars. The results are often pretty impressive.

Now, these "Magic Avatars" aren't generated by a proprietary AI. You're actually paying Prisma Labs to generate portraits using Stable Diffusion, an open-source machine learning model. The Lensa AI app is a middleman and a curator, but it's easier than dealing with Stable Diffusion on your own.
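
Curious what "dealing with Stable Diffusion on your own" actually involves? Here's a minimal sketch using Hugging Face's open-source diffusers library and a publicly hosted Stable Diffusion checkpoint. The model ID and prompt are just examples, and this only covers plain text-to-image generation; Lensa's actual pipeline, including however it adapts the model to your selfies, isn't public.

```python
# A rough sketch of running Stable Diffusion yourself with the Hugging Face
# "diffusers" library. The checkpoint and prompt below are illustrative;
# Lensa's real pipeline (and its selfie-adaptation step) is not public.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a public Stable Diffusion checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU makes generation take seconds instead of minutes

# Generate one image from a text prompt and save it to disk.
image = pipe("a sci-fi portrait of a person, digital art").images[0]
image.save("portrait.png")
```

Even this bare-bones version assumes you have a capable GPU and are comfortable with Python, which is exactly the friction Lensa AI charges a few dollars to remove.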

Stable Diffusion is trained on billions of publicly available images scraped from the web. That's why it can mimic dozens of artistic styles, including manga, sci-fi, pop art, and traditional portraiture. (In extremely simple terms, Lensa AI combines your selfies with existing art. The reality is a bit more complicated, as hundreds of training images may contribute to a single resulting portrait.)

If you want to try Lensa AI's "Magic Avatars" tool, you can install the app on iOS and Android and spend a few dollars on the feature. At minimum, you can order 50 portraits for $2. But most people are paying $4 for the maximum 200 portraits, as only a handful of the images produced by this AI are actually useful or appealing.

The "Magic Avatars" tool generates images through Stable Diffusion, a machine learning model trained on publicly-available images. These images are sourced without consent, and there's no way for a person, artist, or company to opt out of the dataset.

Obviously, this raises concerns for both personal privacy and copyright. An individual may not want their selfies included in the AI's dataset---it's creepy! And as Prisma Labs rakes in cash, many artists are concerned that their labor is padding someone else's bank account. It doesn't help that AI image generators, which are quick and cheap, threaten the job security of professional artists.

There's also the concern that someone may dump your selfies into Lensa AI, producing artwork of your face without your permission. This is important for a reason that we'll highlight later.

Here's the problem: today's laws and regulations don't define how machine learning datasets should operate. We don't know if this stuff violates privacy or copyright rules. And for that reason, you'd have a pretty hard time arguing copyright infringement in court. The images produced by Stable Diffusion contain traces of artists' styles, original content, and signatures (or watermarks), but they don't look identical to any existing images.

On the bright side, well-known corporations are treading lightly in this area. They are openly concerned about how AI may lead to copyright infringement. For example, Getty Images refuses to touch AI until the rules are better defined, and Shutterstock is taking a unique approach to ensure that real-world artists get paid.

Dozens of companies could have invented "Magic Avatars," but few are willing to take the risk. Even if you don't see Lensa AI as a concern for privacy or copyright, it's clear that this topic will eventually land on lawmakers' desks.

Where Does Your Data Go?

We often share selfies on Instagram or Facebook without batting an eye. But when using a strange app like Lensa AI, some people tense up. How will this company use your photos, and will it lead to a violation of your privacy?

Well, according to the terms of service, photos uploaded to Lensa AI are converted into data to train a machine learning model. The actual images are discarded, while information like facial feature position and orientation is retained. Additionally, images shot with an iPhone selfie camera (which uses TrueDepth technology to map your face) may include data like face topology.

Note that Lensa AI is also a photo beautification app. And when reading the terms of service, it's often unclear whether Prisma Labs is referring to its beautification or "Magic Avatar" features. So, unfortunately, I'm not quite sure about the specifics behind this information. (That said, the TOS explicitly states that "Magic Avatar" selfies contribute to Stable Diffusion's training, and that these selfies are deleted after the AI generates its images.)

Because this AI is trained on publicly-available photos, user privacy may not be a huge concern to some people. After all, if you upload a ton of selfies to Facebook or Instagram, your face may already be included in the dataset. (Plus, if you're a fan of AI, you may be happy to contribute your face data.)

But some people have a limited presence online. Privacy is priceless, and if you've done a good job keeping your face off the internet, I suggest that you avoid Lensa AI. After all, we don't really know where this data will end up.

If you've tested Lensa AI and want to remove your information from its dataset, contact privacy@lensa-ai.com. Note that, according to the TOS, Lensa AI isn't obligated to delete data upon your request.

Lensa AI May Skew Towards Bias

As with all technology, Stable Diffusion and Lensa AI are vulnerable to bias. Some people take this to mean "the AI is a bigot," which is funny (but technically wrong). Artificial intelligence is just an algorithm crafted by humans and trained on a mess of data, so it reproduces whatever biases that data contains.

This becomes apparent when you scroll through strangers' Lensa AI portraits. The AI has an awkward habit of sexualizing women, likely due to the images included in its training data (I'm guessing that softcore fanart makes up a decent chunk of it). To be clear, I'm not trying to sound like a prude---this AI really has a thing for big breasts, and as TechCrunch reports, it occasionally spits out porn.

Related: "Should You Buy an iPhone 14?" As Written By an AI

The AI also struggles with race. There are several reports from Asian women who found that the AI diminished or changed their facial features---as described by one user, the output is "skewed to be more East Asian." Again, this is probably due to the dataset, which may contain a disproportionate amount of fanart (which typically centers around Japanese styles, ideals, and trends).

Now, this isn't just some offensive inconvenience. This is a problem that could easily lead to abuse. What's stopping someone from taking your photos, dumping them into Lensa AI, and producing porn or racist imagery? Is this something that we need to ask about every AI image generator?

Should You Use Lensa AI?

As with all emerging technology, AI image generation is a mixed bag. Tools like Lensa AI can pump out some amazing portraits for a very low price. They're quicker, more convenient, and more accessible than any real-world artist, but this convenience may come at a cost.

Unfortunately, we can't see into the future to gauge the impact of this technology. We don't know how it will affect artists, individuals, or businesses. And from a privacy standpoint, can you really predict every way that someone might use your face data? This lack of knowledge is concerning.

Using Lensa AI is a personal choice, of course. And I don't blame anyone for testing this technology. It's interesting, exciting, and often very flattering. But the potential downsides of this trend shouldn't be ignored.