Learning to see [WIP] (2017)

Work in progress R&D and studies:

Hello, World!

A deep neural network opening its eyes for the first time, and trying to understand what it sees. Training on live camera input. (Layer activations as it trains on Flickr.)

Trying to make sense of the world

A pre-trained deep neural network making predictions on live webcam input, trying to make sense of what it sees, in the context of what it’s seen before.

It can see only what it already knows.

(This is not ‘style transfer’. In style transfer, the model contains information about a single image. This model contains knowledge of the entire dataset: hundreds of thousands of images.)

… pre-trained on images from the Hubble telescope:

… pre-trained on the Google Art dataset:

… also pre-trained on the Google Art dataset, but with a different method of reconstruction (source code and models for these are on GitHub):

The Google Art dataset

Scraped from the Google Art Project. A brief, incomplete survey of human (mostly Western) Art. As collected by Google, Keeper of our collective consciousness. It sees everything we see, knows everything we know, feels everything we feel. Living up in The Cloud, of all places, it watches over us, listening to our thoughts and dreams in ones and zeros. And now, the new purveyor of Art & Culture.

Super high resolution hallucinations

See super high resolution – 16384x16384px (256 megapixel) – hallucinations from neural networks trained on the above dataset here (more on this below).

Learning, visualised

A deep neural network ‘Learning To See’. Each frame is the result of the network ‘learning’ for a single iteration, and then re-evaluating, re-imagining and reconstructing what it knows.

… training on images from NASA’s Astronomy Pic Of The Day:

… training on the Google Art dataset:

… training on images scraped from the web of Donald Trump, Theresa May, Nigel Farage, Marine Le Pen, Recep Tayyip Erdogan:

(P.S. this is what happens when you have a dirty dataset).

‘Hallucinations’ from the above trained networks

Hubble / NASA’s Astronomy Pic Of The Day:

Google Art dataset:

‘Dirty’ (politics) dataset:

The Art-o-Matic 4000 Turbo XL.

A study of human creativity & Art through the eyes of a deep neural network.
A deep neural network training on the Google Art dataset, and learning to dream at the same time (one PC training, this PC dreaming).

Background

Originally inspired by the neural networks of our own brain, Deep Learning Artificial Intelligence algorithms have been around for decades, but they have recently seen a huge rise in popularity. This is often attributed to recent increases in computing power and the availability of extensive training data. However, progress is undeniably fuelled by the multi-billion dollar investments from the purveyors of mass surveillance: internet companies whose business models rely on targeted, psychographic advertising; and government organisations and their War on Terror. Their aim is the automation of *Understanding* Big Data, i.e. understanding text, images and sounds. But what does it mean to ‘understand’? What does it mean to ‘learn’ or to ‘see’?

“Learning To See” is an ongoing series of works that use state-of-the-art Machine Learning algorithms as a means of reflecting on ourselves and how we make sense of the world. The picture we see in our conscious minds is not a direct representation of the outside world, or of what our senses deliver, but a simulated world, reconstructed based on our expectations and prior beliefs.

The work is part of a broader line of inquiry about self-affirming cognitive biases, our inability to see the world from others’ points of view, and the resulting social polarisation.


The series consists of a number of studies, each motivated by related but different ideas:

Hello, World!

This work examines the process of learning and understanding. This artificial deep neural network starts off not having been trained on anything; it starts off completely blank*. It is literally ‘opening its eyes’ for the first time and trying to ‘understand’ what it sees. In this case ‘understanding’ means trying to find patterns, trying to find regularities in what it’s seeing, with respect to everything that it has seen so far, so that it can efficiently compress and organise incoming information in the context of its past experience. It’s trying to deconstruct the incoming signal, and reconstruct it using features that it has learnt based on what it has already seen – which, at the beginning, is nothing. When the network receives new information that is unfamiliar, or perhaps just seen from a new angle that it has not yet encountered, it’s unable to make sense of that new information. It’s unable to find an internal representation relating it to past experience; its compressor fails to successfully deconstruct and reconstruct.

But the network is training in realtime: it’s constantly learning, updating its ‘filters’ and ‘weights’ to try and improve its compressor, to find more optimal and compact internal representations, to build a more ‘universal world-view’ upon which it can hope to reconstruct future experiences. Unfortunately though, the network also ‘forgets’. When too much new information comes in, and it doesn’t re-encounter past experiences, it slowly loses the filters and representations required to reconstruct those past experiences.
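
For the technically curious, the core loop described above can be sketched as an online autoencoder: for every incoming camera frame, the network tries to reconstruct the frame from its learnt features, and then takes one single gradient step. This is a minimal illustrative sketch assuming PyTorch and OpenCV, not the project’s actual implementation; the architecture and hyperparameters are placeholders.

```python
# A minimal illustrative sketch, NOT the project's actual code: an online
# convolutional autoencoder that, for every live camera frame, (1) tries to
# reconstruct the frame from its learnt features, and (2) takes one single
# gradient step. Architecture and hyperparameters are placeholders.
import cv2
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the frame into a small feature map ('deconstruct')
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: rebuild the frame from those features ('reconstruct')
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    frame = cv2.resize(frame, (256, 256))
    x = torch.from_numpy(frame).float().permute(2, 0, 1).unsqueeze(0) / 255.0

    reconstruction = model(x)                 # what the network 'sees'
    loss = nn.functional.mse_loss(reconstruction, x)

    optimiser.zero_grad()
    loss.backward()                           # one learning iteration per frame
    optimiser.step()

    # Unfamiliar input reconstructs poorly at first; and with no replay of
    # past frames, representations of old experiences slowly fade away.
    output = reconstruction.detach().squeeze(0).permute(1, 2, 0).numpy()
    cv2.imshow('Learning to See (sketch)', output)
    if cv2.waitKey(1) == 27:                  # press Esc to quit
        break

camera.release()
cv2.destroyAllWindows()
```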

These ideas are not behaviours which I have explicitly programmed into the system. They are characteristic properties of deep neural networks which I’m exploiting and exploring.

* One might liken this to a newborn baby’s brain. This comparison may work metaphorically at a higher level; however, it’s not entirely accurate. A newborn baby’s brain has had hundreds of millions of years of evolution shaping its neural wiring, and the baby is arguably born with many synaptic connections already in place. In this project however, the artificial neural network ‘starts life’ with its full architecture intact, but all connections initialised randomly. So at a lower level the details are quite different.
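
In code terms, ‘starting life blank’ simply means the layer structure (the architecture) is fixed up front while every connection weight begins as random noise. Continuing the illustrative PyTorch sketch above (again, an assumption for illustration, not the project’s actual code):

```python
import torch.nn as nn

def reset_to_blank(model: nn.Module) -> None:
    """Keep the architecture intact, but randomise all learnt parameters."""
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.ConvTranspose2d)):
            nn.init.normal_(module.weight, mean=0.0, std=0.02)  # random wiring
            nn.init.zeros_(module.bias)                         # no learnt biases
```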

Learning to dream – The Google Art dataset

Trained on images scraped from the Google Art Project, containing tens of thousands of scans from art collections and museums all over the world. These include paintings, illustrations, sketches and photographs covering landscapes, portraits, religious imagery, pastoral scenes, maritime scenes, scientific illustrations, prehistoric cave paintings, realist paintings, abstract and cubist works, etc. – an incomplete, yet extensive catalogue of human imagination, feelings, desires and dreams.

We have a very intimate connection with the cloud. We confide in it. We confess to it. We appeal to it. We share secrets with it, secrets that we wouldn’t share with our family or closest friends. And Google is the Keeper of our collective consciousness. It sees everything we see, knows everything we know, feels everything we feel. Living up in The Cloud, of all places, it watches over us, listening to our thoughts and dreams in ones and zeros. A digital god for a digital culture.

And now, just as the Church – the previous bastion of our Spiritual Overseer – was once the purveyor of Art & Culture, Google – bastion of our new Digital Overseer – is moving into that role too.

Related texts

All watched over by machines of loving grace

A digital god for a digital culture. Resonate 2016

Deepdream is blowing my mind

Related work

FIGHT VRBR

All watched over by machines of loving grace. view from the heavens. uk edition

All watched over by machines of loving grace. Deepdream edition

Keeper of our collective consciousness

Related Dates

2017 Sep 7-11,
Exhibition, Learning to see [WIP],
‘The Other I’, Ars Electronica, Linz, Austria