Deep Meditations (2018)

A meditation on life, nature, the universe and our subjective experience of it. ‘Deep Meditations’ is a deep dive into – and controlled exploration of – the inner world of a neural network trained on everything.

The piece is a slow, meditative, meticulously crafted journey – of slowly evolving images and sounds – through the imagination of an artificial neural network trained on everything (literally, images labelled ‘everything’ from the photo-sharing website Flickr), as well as other abstract, subjective concepts such as life, love, art, faith, ritual, worship, god, nature, universe, cosmos and many more*. As these concepts have no objective, clearly defined visual representations, the neural network instead learns what our subjective, collective consciousness has decided they look like, as archived by the cloud. Sound is generated by a neural network trained on hours of religious and spiritual chants, prayers and rituals from around the world, scraped from YouTube.

The work is also an exploration of the use of deep generative neural networks as a medium for creative expression and storytelling with meaningful human control. Using bespoke tools, precise journeys are constructed in the high-dimensional latent space of the neural network, following specific narrative paths to create a one-hour film oscillating on the borders between abstract and representational, hyper-real and painterly (a paper on this method and the custom tools was presented at the Neural Information Processing Systems (NeurIPS) 2018 conference).
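By way of illustration only (the bespoke tools themselves are described in the paper linked below), a journey of this kind can be sketched as spherical interpolation between hand-picked latent ‘keyframes’. Everything in this sketch is an assumption for the example – the 512-dimensional latents, the keyframe count, and the generator call mentioned in the final comment:

```python
import numpy as np

def slerp(a, b, t):
    """Spherical linear interpolation between two latent vectors.

    High-dimensional Gaussian latents concentrate near a hypersphere,
    so interpolating along the sphere (rather than a straight line)
    keeps intermediate points in the well-trained region of latent space.
    """
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < 1e-8:  # vectors (almost) parallel: fall back to lerp
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def journey(keyframes, frames_per_segment):
    """Expand a list of latent keyframes into a smooth per-frame trajectory."""
    out = []
    for a, b in zip(keyframes[:-1], keyframes[1:]):
        for i in range(frames_per_segment):
            out.append(slerp(a, b, i / frames_per_segment))
    out.append(keyframes[-1])
    return np.stack(out)

# Hypothetical usage: ten 512-D keyframes, 100 frames between each pair.
rng = np.random.default_rng(0)
keys = list(rng.standard_normal((10, 512)))
latents = journey(keys, frames_per_segment=100)
# Each row of `latents` would then be fed to a generator network,
# e.g. frame = generator(latents[i]), to render one video frame.
```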

The piece is intended for both introspection and self-reflection, as a mirror to ourselves, our own mind and how we make sense of what we see; and also as a window into the mind of the machine, as it tries to make sense of its observations and memories. But there is no clear distinction between the two, for the very act of looking through this window projects ourselves through it. As we look back upon the meditative, slowly evolving images conjured up by the neural network, we complete this loop of meaning-making: we project ourselves back onto them, we invent stories based on what we know and believe, because we see things not as they are, but as we are. And we complete the loop.


*Given such a diverse dataset, the neural networks are not given any labels for the data they are exposed to. They are not provided the semantic information needed to distinguish between different categories of images or sounds; between small and large, microscopic and galactic, organic and human-made. Without any of this semantic context, the network analyses and learns purely from aesthetics. Swarms of bacteria blend with clouds of nebulae; oceanic waves become mountains; flowers become sunsets; blood cells become technical illustrations.

The multiple channels of video represent related but slightly varied journeys happening simultaneously. These journeys affect each other, but they have different characteristics: some are slower and steadier, focusing on longer-term goals, while others are quicker, more exploratory and curious, seeking shorter-term rewards. Other channels represent the same journeys at different points in the network's training history, shedding light on how slightly more exposure to certain types of stimulus can cause the network to reinterpret the same inputs with visually and compositionally related, but semantically vastly different, results. All of the journeys remain somehow in sync, branching and spiralling around each other.

‘Deep Meditations’ is a series of works in different formats, including a one-hour film presented as an immersive, meditative, multi-channel video and sound installation. It is a continuation and merger of both the ‘Learning to See’ and ‘Learning to Listen’ series of works, using state-of-the-art machine learning algorithms as a means of reflecting on ourselves and how we make meaning.

Technical Motivations & Storytelling

Controlled navigation of latent space

I presented some of the techniques used for the creation of this work at the 2nd Workshop on Machine Learning for Creativity and Design at the Neural Information Processing Systems (NeurIPS) 2018 conference.

“We introduce a method which allows users to creatively explore and navigate the vast latent spaces of deep generative models such as Generative Adversarial Networks. Specifically, we focus on enabling users to discover and design interesting trajectories in these high dimensional spaces, to construct stories, and produce time-based media such as videos, where the most crucial aspect is providing meaningful control over narrative. Our goal is to encourage and aid the use of deep generative models as a medium for creative expression and story telling with meaningful human control. Our method is analogous to traditional video production pipelines in that we use a conventional non-linear video editor with proxy clips, and conform with arrays of latent space vectors.”
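The released tool (py-msa-kdenlive, linked below) implements this conform step against Kdenlive project files; its actual API is not reproduced here. The following is a minimal sketch of the idea only, with a hypothetical edit decision list and invented clip names: cuts made on cheap proxy videos in the editor are re-applied, frame for frame, to the arrays of latent vectors the proxies were rendered from.

```python
import numpy as np

# Stand-ins for the latent arrays the proxy clips were rendered from,
# one 512-D latent vector per video frame (sizes are assumptions).
rng = np.random.default_rng(0)
clips = {
    "journey_a": rng.standard_normal((600, 512)),
    "journey_b": rng.standard_normal((400, 512)),
}

# A hypothetical edit decision list recovered from the video editor:
# (clip_name, in_frame, out_frame) for each cut on the timeline, in order.
edl = [
    ("journey_a", 0, 250),
    ("journey_b", 100, 400),
    ("journey_a", 250, 600),
]

# The 'conform' step: apply the same cuts to the latent arrays that the
# editor applied to the proxy videos, yielding one latent per final frame.
conformed = np.concatenate(
    [clips[name][start:end] for name, start, end in edl], axis=0
)

# Each row of `conformed` would then be rendered at full quality by the
# generator network to produce the final film frames.
```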

More information can be found in the paper and code below.

paper: https://nips2018creativity.github.io/doc/Deep_Meditations.pdf

code: https://github.com/memo/py-msa-kdenlive

Acknowledgements

Created during my PhD at Goldsmiths, funded by the EPSRC UK.

The visual network is a progressively growing GAN.
The audio network is bespoke: Grannma MagNet.

Amongst many people, I’d like to especially thank Nina Miolane and Sylvain Calinon for their contributions and help with Riemannian geometry – and particularly Miolane et al. for geomstats.
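For context on why Riemannian geometry is relevant here: high-dimensional Gaussian latent vectors concentrate near a hypersphere, so smooth journeys are naturally treated as paths on that curved manifold rather than straight lines through the ambient space – the hand-rolled slerp sketched earlier is exactly the geodesic on the unit hypersphere. A minimal illustration with geomstats (the 512-D latent size and the unit-norm projection are assumptions of the sketch):

```python
import numpy as np
from geomstats.geometry.hypersphere import Hypersphere

# The unit hypersphere embedded in 512 dimensions.
sphere = Hypersphere(dim=511)

# Project two random Gaussian latents onto the unit sphere so they are
# valid points on the manifold.
rng = np.random.default_rng(0)
z0 = rng.standard_normal(512)
z1 = rng.standard_normal(512)
p0 = z0 / np.linalg.norm(z0)
p1 = z1 / np.linalg.norm(z1)

# A geodesic (shortest path on the manifold) between the two latents,
# sampled at 100 points along its length.
geodesic = sphere.metric.geodesic(initial_point=p0, end_point=p1)
path = geodesic(np.linspace(0.0, 1.0, 100))
```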