Watching Refik Anadol’s work — what he calls “machine hallucinations” — can make you feel a little like your mind is melting. Anadol creates stories using generative adversarial networks fed curated data sets, a process he believes is inventing a new type of AI-driven cinema that displays a “community collective consciousness.”
[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":2511706,"post_type":"story","post_chan":"none","tags":"category-people-society-social-sciences","ai":true,"category":"none","all_categories":"ai,business,","session":"B"}']“How it works is [that] we can roughly see the commonality of consciousness, or commonality of the memory inside the latent space, and I personally fly as … a director or director of photography … and define points of interest that are narrative and allow me to make much more purposeful decisions and use AI to tell a story,” he said.
Before launching into work combining historic and modern images, Anadol honed his machine learning chops while serving as an artist in residence at Google.
Currently, Anadol and a team of 12 based in Los Angeles collect hundreds of thousands of historic images and modern-day photos from publicly available sources like social media and archives to create their works. Audio recordings from local streets bring sound alongside sight in what Anadol refers to as latent cinema, in which buildings recreate themselves.
The team’s most recent machine hallucination project — Latent History — opens Saturday. This piece generates imagery from a data set of 300,000 photos, including 150-year-old Stockholm city archives and colorful images taken from the same location within the past 15 years.
Another exhibit that uses similar techniques with more than 100 million images opens in New York City in September. This work will use 18 projectors and images from sources like the New York Public Library and the Library of Congress.
To create its models, Anadol’s studio receives support from Nvidia, training on the company’s GPUs and applying its StyleGAN and progressive growing GAN (PGGAN) architectures.
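The studio’s exact tooling isn’t detailed beyond those names. For a sense of what sampling from such a model looks like, Nvidia’s later stylegan2-ada-pytorch release exposes pretrained generators roughly as follows — a sketch, where the network pickle path is a placeholder and unpickling assumes the repo’s dnnlib code is importable:

```python
import pickle
import torch

# Load a pretrained generator; 'network.pkl' stands in for whatever
# checkpoint the studio trained on its archival photo data set.
with open('network.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()  # exponential-moving-average generator

z = torch.randn([1, G.z_dim]).cuda()  # one random latent code
c = None                              # class labels (unused for unconditional models)
img = G(z, c)                         # NCHW float32 image in [-1, +1]
```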
The team uses classifiers to remove all elements of people from images in order to better see the environment and “reconstruct common memories for humanity.”
“We intentionally detach ego, so there’s no human in the photos. There are no logos; there’s pure nature, urban space, buildings, architectures, streets, the space that exists without any human interaction,” he said.
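The article doesn’t say which classifiers the team uses. One plausible stand-in — a sketch, not the studio’s actual pipeline — is a pretrained object detector used as a filter, such as torchvision’s COCO-trained Faster R-CNN, where label 1 is “person”:

```python
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# COCO-pretrained detector; category 1 in COCO is "person".
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def contains_person(path, threshold=0.7):
    """Return True if the detector finds a person with score >= threshold."""
    img = to_tensor(Image.open(path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]
    return any(label.item() == 1 and score.item() >= threshold
               for label, score in zip(out["labels"], out["scores"]))

photo_paths = ["archive/img_0001.jpg"]  # placeholder paths to collected photos

# Keep only the "ego-free" images for the training data set.
dataset = [p for p in photo_paths if not contains_person(p)]
```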
[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":2511706,"post_type":"story","post_chan":"none","tags":"category-people-society-social-sciences","ai":true,"category":"none","all_categories":"ai,business,","session":"B"}']
Latent History is not Anadol’s first time creating historic art with generative adversarial networks (GANs). For the Los Angeles Philharmonic, the studio collected photos going back 100 years to depict hallucinations on the walls of the Walt Disney Concert Hall.
“In there, we let the building hallucinate its own future. We let Frank Gehry’s Disney hall look at its own memories, and we let the building dream,” he said.
Anadol also created “Archive Dreaming,” a project that depicts 1.7 million documents from a public cultural archive in an immersive environment.
[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":2511706,"post_type":"story","post_chan":"none","tags":"category-people-society-social-sciences","ai":true,"category":"none","all_categories":"ai,business,","session":"B"}']
For another hallucinogenic art project, “Melting Memories,” Anadol and his team worked with big data sets.
[aditude-amp id="medium3" targeting='{"env":"staging","page_type":"article","post_id":2511706,"post_type":"story","post_chan":"none","tags":"category-people-society-social-sciences","ai":true,"category":"none","all_categories":"ai,business,","session":"B"}']
Other artists currently using machine learning as a medium include musician Hannah Davis, GAN imagery creator Memo Akten, and neurographer Mario Klingemann.
In recent AI and art news, Google’s Magenta project produced ML-JAM, a model that challenges musicians to improvise and find new creative sounds.
Earlier this week, Google Lens began identifying the works of local artists. Google Assistant’s computer vision can already identify some popular landmarks, but giving people the ability to learn about a local statue or mural could help them feel more connected to their community.
[aditude-amp id="medium4" targeting='{"env":"staging","page_type":"article","post_id":2511706,"post_type":"story","post_chan":"none","tags":"category-people-society-social-sciences","ai":true,"category":"none","all_categories":"ai,business,","session":"B"}']