AI Artist Imagines What Lies Outside the Frame of Famous Paintings, Including Girl with a Pearl Earring


A new AI tool can now imagine what the surroundings of famous paintings and photographs might have looked like.

OpenAI, a San Francisco-based company, has created a new tool called “Outpainting” for its text-to-image AI system, DALL-E.

Outpainting allows the system to imagine what lies outside the frame of famous paintings such as Girl with a Pearl Earring, the Mona Lisa and Dogs Playing Poker.

As users have shown, it can do this with any type of image, such as the man on the Quaker Oats logo and the Beatles' album cover Abbey Road.

DALL-E is based on artificial neural networks (ANNs), which simulate the way the brain learns in order to create an image from a text prompt.


ORIGINAL: Girl with a Pearl Earring is an oil on canvas painting (circa 1665) by Dutch artist Johannes Vermeer. Pictured is the original painting, without any AI manipulation

HOW DOES IT WORK?

OpenAI has created a new tool called “Outpainting” for its text-to-image AI system, called DALL-E 2.

DALL-E 2 and its predecessor DALL-E are based on artificial neural networks (ANN), which simulate the functioning of the brain to learn.

ANNs can be trained to recognize information patterns, such as speech, textual data, or visual images.

OpenAI developers have gathered data from millions of photos to allow the DALL-E algorithm to “learn” what different objects are supposed to look like and eventually put them together.

When a user enters a text prompt for DALL-E to generate an image, the system notes a series of key features that could be present.

A second neural network, called the diffusion model, then creates the image and generates the pixels needed to view and reproduce it.
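The refinement loop at the heart of a diffusion model can be sketched in a few lines of Python. This is a toy illustration only: the denoising step below is hand-coded rather than learned from data, and none of the names come from OpenAI's actual code.

```python
import numpy as np

# Toy illustration of the diffusion idea: start from pure noise and
# repeatedly nudge the pixels toward a target pattern. A real diffusion
# model learns its denoising step from millions of images; here the
# "model" simply knows the answer, to keep the sketch self-contained.
rng = np.random.default_rng(0)

target = np.zeros((8, 8))
target[2:6, 2:6] = 1.0             # a simple square stands in for an "image"

img = rng.normal(size=(8, 8))      # begin with pure random noise
for step in range(50):
    img = img + 0.2 * (target - img)   # one small denoising step

error = np.abs(img - target).mean()    # the noise is almost entirely gone
```

Each pass removes only a fraction of the remaining noise, which is why diffusion models generate images over many small steps rather than in one shot.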

With Outpainting, users must describe new extended visuals as text to DALL-E before it can “paint” them.

Outpainting, which is aimed primarily at professionals who work with images, will allow users to “expand their creativity” and “tell a bigger story”, according to OpenAI.

The company said in a blog post: “Today we’re introducing Outpainting, a new feature which helps users extend their creativity by continuing an image beyond its original borders – adding visual elements in the same style, or taking a story in new directions – simply by using a natural language description.

“With Outpainting, users can extend the original image, creating large-scale images in any aspect ratio.

“Outpainting takes into account the existing visual elements of the image – including shadows, reflections and textures – to maintain the context of the original image.”

American artist August Kamp used Outpainting to reimagine the famous 1665 painting “Girl with a Pearl Earring” by Johannes Vermeer.

Amazingly, the tool managed to create a background that mimicked the painting style of the original.

The results show the famous girl in a domestic setting, surrounded by crockery, houseplants, fruit, boxes and more.

This contrasts with the simplicity of Vermeer’s classic, which depicts the young girl against a dark, blank background.

Other attempts are a little sillier – one shows the subject of the Mona Lisa making the devil's horns gesture with her hand, with a UFO and a killer robot in the background, while an extended version of A Friend in Need shows another table of card-playing dogs next to the original group.

Another shows the man from the Quaker Oats logo with a large bust, wearing a robe and surrounded by bottles of drinks.

An extended version of A Friend in Need shows another table of card-playing dogs next to the original group (pictured: the original)

Other attempts are a little more silly - one shows the Mona Lisa subject making the devil's horn gesture with her hand (the original is pictured)


Yet another shows a few people crossing the famous zebra crossing outside Abbey Road Studios, made famous by the Beatles, amid a scattering of autumn leaves – although the original photo was taken at the height of summer.

According to The Verge, DALL-E is available to over one million people through a beta program, which offers users a number of free image generations.

People can join a waiting list to “create with DALL-E” on OpenAI’s website, though the company said it “sends out invitations incrementally over time.”

DALL-E already allows modifications in a generated or uploaded image – a capability known as Inpainting.

It can automatically fill in details, such as shadows, when an object is added, or even adjust the background to match when an object is moved or deleted.

DALL-E can also produce an entirely new image from a textual description, such as “a chair in the shape of an avocado” or “a cutaway view of a walnut.”

Another classic example of DALL-E’s work is “teddy bears working on new AI research underwater with 1990s technology.”


DALL-E's image for the prompt “teddy bears working on new AI research underwater with 1990s technology”

OpenAI is also known for AI-generated audio.

In 2020 the company revealed Jukebox, a neural network that generates weird approximations of pop songs in the style of several artists, including Elvis Presley, Frank Sinatra and David Bowie.

The neural network generates music, including rudimentary vocals complete with English lyrics and a variety of instruments like guitar and piano.

OpenAI scraped 1.2 million songs – 600,000 of them sung in English – from the internet and matched them with lyrics and metadata, which were fed into the AI to generate approximations of the various artists.

HOW ARTIFICIAL INTELLIGENCE LEARNS USING NEURAL NETWORKS

AI systems rely on artificial neural networks (ANN), which attempt to simulate how the brain works to learn.

ANNs can be trained to recognize patterns of information – including speech, textual data or visual images – and are the basis of many of the developments in AI in recent years.

Conventional AI uses inputs to “teach” an algorithm about a particular topic by feeding it massive amounts of information.
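That teaching-by-example loop can be made concrete with the smallest possible neural unit, a single artificial neuron. This is a hedged sketch, not OpenAI's code: the learning rate, the number of passes and the logical-OR pattern are all illustrative choices.

```python
import numpy as np

# A single artificial neuron trained by repeated exposure to examples.
# The "massive amounts of information" here are just four examples of
# the logical-OR pattern, enough to show the idea.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)   # the pattern to learn

w = np.zeros(2)   # connection weights, adjusted during training
b = 0.0           # bias term

for _ in range(25):                        # repeated passes over the data
    for xi, yi in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        w += 0.1 * (yi - pred) * xi        # nudge weights toward the right answer
        b += 0.1 * (yi - pred)

preds = [1.0 if xi @ w + b > 0 else 0.0 for xi in X]
# after training, preds matches the OR pattern [0, 1, 1, 1]
```

Real systems stack millions of such units into deep layers, but the principle is the same: each wrong answer nudges the connection weights a little closer to producing the right one.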

AI systems rely on artificial neural networks (ANNs), which attempt to simulate how the brain works in order to learn. ANNs can be trained to recognize patterns of information – including speech, textual data or visual images

Practical applications include Google’s language translation services, Facebook’s facial recognition software, and Snapchat’s image-editing live filters.

The process of capturing this data can be extremely time-consuming and limited to one type of knowledge.

A newer breed of ANN, known as adversarial neural networks, pits two AI systems against each other, allowing them to learn from one another.

This approach is designed to speed up the learning process, as well as to refine the output created by AI systems.
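The adversarial back-and-forth can be caricatured with two single-number “networks”: a generator chasing a discriminator's decision boundary, and a discriminator redrawing that boundary between real and generated values. This is purely a conceptual sketch with invented numbers, not any production adversarial-network code.

```python
# Caricature of adversarial training with scalar "networks".
# Real data sits at 5.0; the generator starts far away at 0.0.
real_mean = 5.0
g = 0.0   # generator's current output value
d = 0.0   # discriminator's boundary between "real" and "fake"

for _ in range(500):
    fake = g
    # discriminator: redraw the boundary halfway between real and fake
    d += 0.1 * ((real_mean + fake) / 2 - d)
    # generator: move its output toward the discriminator's boundary
    g += 0.1 * (d - g)

# the two pull each other along until the generator's output is
# indistinguishable from the real data (g converges to 5.0)
```

Neither side ever sees a rulebook; each improves only by reacting to the other, which is what speeds up learning compared with feeding one network a fixed dataset.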
