Deep Meditations

A meditation on life, nature, the universe and our subjective experience of it. A deep dive into – and controlled exploration of – the inner world of an artificial neural network trained on everything; the world, the universe; on art, life, love, faith, ritual, god.

‘Deep Meditations’ is a series of works in different formats – including a 1-hour film presented as an immersive, meditative, multi-channel video and sound installation – wavering on the borders of abstract and representational, photo-real and painterly. It is a continuation and merger of both the ‘Learning to See’ and ‘Learning to Listen’ series of works, using state-of-the-art machine learning algorithms as a means of reflecting on ourselves and how we make meaning. It is intended both as a piece for introspection and self-reflection, a mirror to ourselves, our own minds and how we make sense of the world; and as a window into the mind of the machine as it tries to make sense of its observations in its own computational way. But there is no boundary between the mirror and the window; it’s impossible to separate the two, for the very act of looking through this window is projecting ourselves through it.

The piece is a slow journey through the imagination of a machine which has been trained on everything – literally everything. Images tagged ‘everything’ were scraped from the popular photo-sharing website Flickr, along with images tagged ‘world’, ‘universe’, ‘space’, ‘mountains’, ‘oceans’, ‘flowers’ etc., as well as more abstract, subjective concepts like art, life, love, faith, ritual, god. What do these labels mean? What do they look like? Do they have a universal, objective aesthetic? Most likely not, but the network is learning what the Internet – at least a small corner of the Internet – thinks they represent.

Using custom techniques and tools, very precise journeys are meticulously crafted through the high-dimensional learned internal space of the neural networks, to construct these very particular sequences of images and sounds – a controlled exploration of the inner world of the network. Despite such a diverse dataset, the neural networks receive no labels for anything that they are exposed to. They are not provided the semantic information to be able to distinguish between different categories of images or sounds; between small or large; microscopic or galactic; organic or human-made. Without any of this semantic context, the network analyses and learns purely on aesthetics. Swarms of bacteria merge with clouds of nebulae; oceanic waves become mountains; flowers become sunsets; blood cells become technical illustrations.
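The artist's custom tools are not public, but the underlying idea of a "journey" through a network's learned latent space can be sketched in a few lines. The sketch below is a minimal, hypothetical illustration: it builds a smooth path between latent "keyframe" vectors using spherical interpolation (slerp), a common choice for Gaussian latent spaces; each point on the path would then be fed to a generator network (here only suggested in a comment) to render one frame.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors.
    Preferred over straight lines for Gaussian latents, since it
    keeps intermediate points at a plausible distance from the origin."""
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Vectors are (nearly) parallel: fall back to linear blend.
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def latent_journey(keyframes, steps_per_segment):
    """Interpolate through a list of latent keyframe vectors,
    producing one smooth path through the latent space."""
    path = []
    for z0, z1 in zip(keyframes[:-1], keyframes[1:]):
        for t in np.linspace(0.0, 1.0, steps_per_segment, endpoint=False):
            path.append(slerp(z0, z1, t))
    path.append(keyframes[-1])
    return np.stack(path)

# Hypothetical example: 4 keyframes in a 512-dimensional latent space.
rng = np.random.default_rng(0)
keyframes = [rng.standard_normal(512) for _ in range(4)]
path = latent_journey(keyframes, steps_per_segment=30)
# path has shape (3 * 30 + 1, 512); each row would be decoded by a
# generator, e.g. frame = G(path[i]), to render one video frame.
```

This is only the simplest possible version of such a journey; the work described here uses far more elaborate steering (and, per the acknowledgements, Riemannian geometry) to control where the path goes and how it branches.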

And then as we look back upon the carefully constructed images bordering between abstract and representational, we project ourselves back onto them, we invent stories, we see things not as they are, but as we are.

The multiple channels of video represent related but slightly varied journeys, happening simultaneously. These journeys affect each other, but they have different characteristics. Some are slower and steadier, focusing on longer-term goals, while others are quicker, more exploratory, curious, seeking shorter-term rewards. Other channels represent the same journeys, but at different points in the network’s training history, shedding light on how slightly more exposure to certain types of stimulus can cause the network to reinterpret the same inputs with results that are visually and compositionally related, but semantically vastly different. All of the journeys are still somehow in sync, branching and spiralling around each other.

The audio network is trained on hours of religious and spiritual chants, prayers and rituals scraped from YouTube. Again without any labels, the recordings are poured into the neural network, all of the sounds fusing into one.


Acknowledgements

Created during my PhD at Goldsmiths, funded by the EPSRC UK.

The visual network is a progressively growing GAN.
The audio network is bespoke: Grannma MagNet.

Amongst many people, I’d like to especially thank Nina Miolane and Sylvain Calinon for their contributions and help with Riemannian geometry – and particularly Miolane et al. for geomstats.