DALL-E 2 – AI-Generated Images – Why Experts Worry

“Hong Kong has a million bears.” “Spaghetti-and-meatball cat.” These are just a couple of the text prompts fed in recent weeks to cutting-edge AI systems that can produce remarkably detailed, realistic-looking images. The results can be comical, bizarre, or reminiscent of great art, and influential tech figures have been sharing them widely (and sometimes breathlessly) on social media. DALL-E 2, a more capable successor to the AI system OpenAI released last year, can also edit existing images by adding or removing elements.


On-demand image generation could be used for art or advertising; DALL-E 2 and a similar system, Midjourney, have already been used to create magazine covers. OpenAI and Google have suggested the technology could be used to edit or create stock images.

But DALL-E 2 and Google’s Imagen aren’t available to the public yet, in part because they can produce disturbing results that reflect the gender and cultural biases of their training data, which includes millions of images pulled from the internet.

Experts say bias in AI is a major concern because the technology can perpetuate harmful biases and stereotypes. They worry that the open-ended nature of these systems, which can generate all kinds of images from words, combined with their ability to automate image-making, could automate bias on a massive scale. The systems could also be used to spread disinformation.

“Until these damages can be eliminated, we’re not truly talking about systems that can be employed in the real world,” said Arthur Holland Michel, a senior fellow at the Carnegie Council for Ethics in International Affairs who studies AI and surveillance technologies.


Documenting bias

AI has become ubiquitous in recent years, but only recently has the public paid attention to how bias can creep into these systems. Facial-recognition systems, for example, have been criticized over accuracy and racial bias.

OpenAI and Google Research have acknowledged many of these issues and risks in documentation and research accompanying their AI systems, noting that the systems are prone to gender and racial bias and to reproducing Western cultural tropes and gender stereotypes.

OpenAI, whose stated mission is to build artificial general intelligence that benefits all of humanity, included sample images in an online document titled “Risks and Limitations”: a prompt for “nurse” showed women wearing stethoscopes, while one for “CEO” showed mostly white men.

Lama Ahmad, policy research program manager at OpenAI, said researchers are still learning how to measure and evaluate bias in AI, and that OpenAI can adjust its systems over time. Ahmad led OpenAI’s effort earlier this year to work with outside experts in developing DALL-E 2.

CNN Business asked Google for an interview. In a research paper introducing Imagen, the Google Brain team wrote that the model encodes “several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones.”

Julie Carpenter, a research scientist at California Polytechnic State University, San Luis Obispo, sees a stark contrast between the eye-catching visuals these systems create and the ethical problems they raise.

“AI is pretty cool and can do some things very well,” Carpenter said. “But as a partner, it’s incomplete. It’s limited. Our expectations have to change. It’s not like the movies.”

Holland Michel is also concerned that no amount of safeguards can prevent such systems from being used maliciously, noting that deepfakes, an earlier cutting-edge application of AI, were initially used to create fake pornography.

“It follows that a powerful system might be dangerous,” he remarked.

Biased data

Because Imagen and DALL-E 2 take in words and spit out images, they had to be trained on pairs of images and matching text captions. Google Research and OpenAI filtered pornographic images out of their datasets before training their models, but given the enormous size of those datasets, such efforts are unlikely to catch all such content or to stop the systems from producing harmful results. In their Imagen paper, Google researchers acknowledged using a large dataset known to include pornography, racist slurs, and “harmful social stereotypes,” despite filtering out some of the data.
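
To make concrete why this kind of cleanup is hard to do exhaustively, here is a rough sketch of a keyword-based caption filter over image-caption pairs; the blocklist, the dataset format, and the example entries are invented for illustration and are not how OpenAI or Google actually filtered their data.

```python
# Toy filter over (image_url, caption) pairs. A fixed blocklist like this misses
# misspellings, slang, other languages, and images whose captions look innocuous,
# which is one reason filtering at web scale is never complete.
BLOCKLIST = {"explicit", "nsfw"}  # hypothetical terms, not a real policy list

def keep(pair):
    _, caption = pair
    return set(caption.lower().split()).isdisjoint(BLOCKLIST)

dataset = [
    ("http://example.com/1.jpg", "a raccoon wearing a crown"),  # kept
    ("http://example.com/2.jpg", "nsfw content"),               # dropped
]
filtered = [pair for pair in dataset if keep(pair)]
```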

Filtering can create its own issues, too: women are disproportionately represented in sexual content, so removing it reduces the number of women in the dataset, Ahmad said.

Carpenter said it’s impossible to fully scrub these datasets of objectionable content, because humans are the ones classifying and removing it, and different people hold different cultural beliefs.

“AI doesn’t understand”

Some researchers are thinking about how to reduce bias in these image-generating AI systems. One option is to train them on less data.

Alex Dimakis, a professor at the University of Texas at Austin, said one approach involves starting with a smaller dataset and augmenting it, using operations like cropping, rotating, and mirroring to turn one image into many. (One of Dimakis’s graduate students contributed to the Imagen research, though Dimakis himself wasn’t involved in building the system.)

“This solves some problems, but not others,” Dimakis said. The approach won’t by itself make a dataset more diverse, but the smaller scale could let the people assembling it be more deliberate about which images they include.
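
For readers curious what that augmentation step looks like in practice, here is a minimal sketch using the Pillow imaging library; the file names and the particular set of transformations are illustrative assumptions, not details taken from Dimakis’s work or from either company’s training pipeline.

```python
# A sketch of turning one image into many via cropping, rotating, and mirroring.
from PIL import Image

def augment(path):
    """Yield several variants of a single source image."""
    img = Image.open(path)
    w, h = img.size

    yield img                                   # the original
    yield img.transpose(Image.FLIP_LEFT_RIGHT)  # mirrored copy
    for angle in (90, 180, 270):
        yield img.rotate(angle, expand=True)    # rotated copies
    # a centered crop covering roughly 80% of the frame, resized back up
    mw, mh = int(w * 0.1), int(h * 0.1)
    yield img.crop((mw, mh, w - mw, h - mh)).resize((w, h))

# Hypothetical usage:
# for i, variant in enumerate(augment("castle_raccoon.jpg")):
#     variant.save(f"castle_raccoon_aug_{i}.png")
```

Each source image becomes six training examples here, which is the sense in which a deliberately curated, smaller dataset can still be stretched further.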

Rogues

For now, OpenAI and Google Research are keeping the public focus on adorable images and away from disturbing imagery or pictures of humans.

There are no realistic-looking images of people on Imagen’s or DALL-E 2’s online project pages, and OpenAI says it used “sophisticated procedures to prevent photorealistic generations of real individuals’ faces, including those of public figures.” This safeguard could, for example, stop users from getting image results for a prompt depicting a politician doing something illegal.

Since April, OpenAI has given access to DALL-E 2 to thousands of people who signed up for a waitlist. Participants must agree to a content policy that prohibits creating, uploading, or sharing “non-G-rated or harmful” images, and DALL-E 2 includes filters that stop it from generating an image if a prompt or an uploaded image violates OpenAI’s policies. Users can also flag problematic results. In late June, OpenAI began allowing users to share realistic human faces generated with DALL-E 2 on social media, but only after adding safety measures such as preventing users from generating images of public figures.

“Researchers need access,” Ahmad added, because OpenAI is seeking their help in studying disinformation and bias.

Google Research isn’t currently letting outside researchers access Imagen. The team has taken requests on social media for prompts to run through Imagen, but as co-author Mohammad Norouzi tweeted in May, it won’t show images that include “humans, graphic content, and sensitive material.”

Still, Imagen encodes social and cultural biases even when generating images of actions, events, and objects, Google Research found in its Imagen paper.

One of the images Google created with Imagen came from a prompt describing a wall in a royal castle with two paintings on it: one of the royal raccoon king on the left and one of the royal raccoon queen on the right.

The resulting image shows two crowned raccoons in ornate gold frames, one wearing a yellow dress and the other a blue-and-gold jacket. Holland Michel pointed out that the raccoons are wearing Western-style royal clothing even though the prompt said nothing about how they should be dressed.

Holland Michel said this kind of “subtle” bias is dangerous. “They’re hard to catch when not flagrant,” he said.

