Technical Art Direction & Collaborations

This is a small selection of works I have collected on this page due to their technical innovations. In all of these projects I acted as Technical Director (usually in addition to being Creative and Artistic Director): I designed the overall system architecture and workflow, put together the right team to pull it off, devised various technical solutions, and generally solved problems as they arose. At the time, many of these projects were the first of their kind in the world, and thus there were no established ways of doing them.

The earlier works (pre-2015) often involve interactivity, realtime interactive projection mapping, XR (now also called “Virtual Production”), realtime control of lights and lasers, etc. The later works (post-2015) are more involved with Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL).

Some of these projects were executed under the name Marshmallow Laser Feast – an arts x technology studio I co-founded in 2011, where I acted as artistic, creative and technical director until I left in 2014 to pursue a PhD in AI.

2008 – Body Paint

Large-scale interactive projection wall turning audience movements into virtual paint splashes to create playful experiences. (Created before devices like the MS Kinect, or the various toolkits that now make such installations straightforward.) Everything ran on custom software (C++) with a custom setup using infrared emitters and cameras. The open-source fluid simulation library I developed for this (MSA Fluid – C++, Java, JavaScript) went on to become incredibly popular and has been used in countless installations and performances. The installation is still touring around the world and is permanently installed in a number of locations, including some children’s hospitals in the UK, where it is used for therapeutic purposes.
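
The original system was custom C++ built around MSA Fluid with infrared emitters and cameras; the sketch below is not that code. It is a rough, minimal Python/OpenCV illustration of the general idea – estimate audience motion from a camera feed and splat “paint” where movement is strong – with the device index, thresholds and colors as purely illustrative assumptions.

```python
# Illustrative sketch only -- the real installation used custom C++ and MSA Fluid.
# Assumes an IR (or regular) camera available to OpenCV as device 0.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

canvas = np.zeros_like(prev, dtype=np.float32)  # accumulated "paint"

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Dense optical flow approximates how the audience is moving in front of the wall.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    # Splat colour where motion is strong; in the installation this force field
    # would instead be injected into a fluid simulation (MSA Fluid), which is
    # what gives the paint its characteristic swirling behavior.
    mask = (mag > 2.0).astype(np.float32)
    colour = np.random.rand(3) * 255.0
    canvas = canvas * 0.97 + mask[..., None] * colour * 0.1

    cv2.imshow("paint", np.clip(canvas, 0, 255).astype(np.uint8))
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```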

Project page and full credits

2009 – Depeche Mode “Fragile Tension” music video

Music video for the band Depeche Mode, created using custom generative software (C++) integrated into a larger traditional production and VFX post-production workflow (filming, editing, compositing etc). Much of the video’s imagery was created with my custom software, which generates realtime interactive visuals by analyzing footage of the band, as well as bespoke footage of dancers shot specifically to be processed this way. (I was involved in quite a few music videos around that time, but I’m including just one for the sake of brevity).

Project page and full credits

2009/2010 – Blaze The Streetdance Show

A touring live streetdance performance with realtime projection mapped visuals onto a complex, intricate set. I (and my studio at the time, MSA Visuals) was hired both to create the visuals and to deliver the technical solution for this touring show.

Some of the constraints we were faced with:

  • The production wanted to be able to tour to as many venues as possible, where the projectors’ positions could not be guaranteed to always be identical.
  • Entire setup and calibration of the show had to be doable in one day (load-in in the morning; setup, calibrate and rehearse during the day; performance in the evening; load-out and hit the road late evening to arrive at the next venue the next morning!).
  • To keep touring costs low, a specialist (i.e. myself or someone from my team) could not travel with the show. So the on-site calibration had to be quick and easy enough for one of the regular crew members.
  • The order of the songs, and the durations of the performances of each song, could vary depending on how the performers and audience felt each night. So the system had to be flexible enough to accommodate these last-minute changes.

When I proposed this idea in 2009, I was told by an “expert” that “the best experts in the world, who have worked with the biggest artists and budgets” had tried and failed to do a live touring show with projection mapping on only two boxes, and that I was delusional and there was no way this could work. (They later apologized and congratulated us when the show went live at the start of 2010 :). We developed all of the content, and entirely custom software (C++) for media management and playback with easily customizable projection mapping onto a 3D set, with both DMX and QLab support to allow simple last-minute programming by the crew on the road.
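
As a rough illustration (not the original C++ show-control software) of what QLab-friendly cue handling can look like, here is a minimal Python sketch that listens for OSC cues and maps them to pre-mapped media clips; the port, address pattern and cue names are assumptions.

```python
# Illustrative sketch, not the original C++ show-control software.
# Assumes QLab (or any OSC source) sends cues like: /blaze/cue "song3_intro"
# Requires: pip install python-osc
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Hypothetical mapping of cue names to pre-rendered, pre-mapped media files.
CUES = {
    "song1_intro": "media/song1_intro.mov",
    "song3_intro": "media/song3_intro.mov",
    "blackout":    None,
}

def on_cue(address, *args):
    cue_name = args[0] if args else None
    clip = CUES.get(cue_name)
    if clip is None:
        print(f"[{address}] blackout / unknown cue: {cue_name}")
    else:
        print(f"[{address}] starting clip: {clip}")
        # here the real system would start playback, mapped onto the 3D set

dispatcher = Dispatcher()
dispatcher.map("/blaze/cue", on_cue)

server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)  # port is an assumption
print("Waiting for cues...")
server.serve_forever()
```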

This was most probably the first ever touring live show using projection mapping on a complex set. (Note that this was two years before Amon Tobin’s ISAM tour 🙂)

Project page and full credits

2011 – Sony PlayStation3 Video Store Mapping Videos

Three online promos featuring projection mapping onto a living room (complete with sofa, table, an actor, and ‘puppeteers’ in white gimp suits), with realtime 3D content adapting to the live-action camera’s position, and all effects captured in-camera. Essentially what is today called Virtual Production.

Our client being Sony PlayStation, we used multiple Sony PlayStation Move controllers to track the live-action (Steadicam) camera’s position and orientation. We developed all of the 2D/3D content, and used an early version of the Unity game engine for realtime 3D rendering, with a custom plugin to receive camera pose information from the multiple Sony PlayStation Move controllers tracking the Steadicam. An additional MS Kinect tracked the actor, so we could light him in realtime as he moved around the room (with a white mask in the projection). Unity did not support multiple video outputs, so to feed the 6 projectors spanning the living room, we used another custom plugin to send all projector feeds via Syphon to another custom application (Objective-C++) for final output. We also had a number of ‘puppeteers’ in white gimp suits, puppeteering various live special effects in the scene (manipulating props, floating objects on fishing wire, pyrotechnics etc).
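
As a purely illustrative sketch of one piece of such a pipeline – combining several trackers’ estimates of the camera pose into a single pose for the virtual camera – here is a minimal numpy example. The fusion method (simple averaging with quaternion sign alignment) is an assumption for illustration, not necessarily what our plugin did.

```python
# Illustrative sketch of fusing several trackers' pose estimates into one camera pose.
# Not the original Unity plugin; purely an assumption of how such fusion could look.
import numpy as np

def fuse_poses(positions, quaternions):
    """positions: (N, 3) array, quaternions: (N, 4) array of (w, x, y, z)."""
    positions = np.asarray(positions, dtype=float)
    quats = np.asarray(quaternions, dtype=float)

    # Simple positional fusion: the mean of all trackers' estimates.
    fused_pos = positions.mean(axis=0)

    # Align quaternion signs to the first estimate (q and -q are the same rotation),
    # then average and renormalize. Fine when estimates are close to each other.
    ref = quats[0]
    signs = np.sign(quats @ ref)
    signs[signs == 0] = 1.0
    fused_quat = (quats * signs[:, None]).mean(axis=0)
    fused_quat /= np.linalg.norm(fused_quat)

    return fused_pos, fused_quat

# Example: three trackers reporting slightly different estimates of the same pose.
pos, quat = fuse_poses(
    positions=[[1.00, 1.52, 3.01], [1.02, 1.50, 2.99], [0.99, 1.51, 3.00]],
    quaternions=[[0.71, 0.0, 0.70, 0.0], [0.70, 0.0, 0.71, 0.0], [-0.71, 0.0, -0.70, 0.0]],
)
print(pos, quat)
```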

Project page and full credits

2011/2012 – Meet your creator

A live theatrical performance / kinetic light sculpture with quadrotor drones, LEDs, motorized mirrors and moving head robotic spotlights dancing in a joyous robo-ballet celebration of techno-spirituality.

During 2010-2011, very impressive quadrotor (now commonly called “drone”) demonstration videos started popping up on YouTube. We proposed a live performance with a swarm of drones – something that had never been done before. Furthermore, we fitted them with LEDs – also something that had not yet been done at the time (but is quite common today) – and we even had motorized mirrors mounted on the drones to reflect laser-like light beams, creating floating and morphing sculptures of light – something that has still not been replicated.

For the quadrotor development, we contacted and worked with two PhD students, Alex Kushleyev and Daniel Mellinger (who later incorporated as KMel Robotics, subsequently acquired by Qualcomm).

We developed our own custom software inside Cinema4D, which allowed a 3D animator to easily design and animate the drones’ paths using simple tools like C4D’s MoGraph. Within our C4D environment we could visualize the entire show – including LEDs, light beams, ‘dirty wind’ and safe fly zones – with warnings about unrealistic velocities, accelerations and rotations. From C4D we could export in a format that allowed KMel to further verify the trajectories and load them onto the quadrotors. We also developed standalone custom software (C++) to run the show live, keep the quadrotors synchronized to the overall timeline and music, track the position of the quadrotors (using a VICON mocap system) and control the lights to follow the quadrotors in realtime.
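
As a minimal sketch of the kind of feasibility check such a pipeline performs on exported trajectories – finite-difference velocity and acceleration compared against limits – here is a short Python example; the limits and sample data are made up, and this is not the original tooling.

```python
# Illustrative sketch of a trajectory feasibility check.
# Not the original code; the limits below are made-up placeholders.
import numpy as np

def check_trajectory(points, dt, max_speed=4.0, max_accel=8.0):
    """points: (N, 3) waypoints in meters sampled every dt seconds."""
    points = np.asarray(points, dtype=float)
    vel = np.diff(points, axis=0) / dt            # (N-1, 3) finite-difference velocity
    acc = np.diff(vel, axis=0) / dt               # (N-2, 3) finite-difference acceleration

    speed = np.linalg.norm(vel, axis=1)
    accel = np.linalg.norm(acc, axis=1)

    warnings = []
    for i in np.where(speed > max_speed)[0]:
        warnings.append(f"speed {speed[i]:.2f} m/s exceeds limit at sample {i}")
    for i in np.where(accel > max_accel)[0]:
        warnings.append(f"accel {accel[i]:.2f} m/s^2 exceeds limit at sample {i}")
    return warnings

# Example: a drone asked to jump 3 m in a single 0.1 s step would be flagged.
waypoints = [[0, 0, 1], [0.2, 0, 1], [3.2, 0, 1], [3.4, 0, 1]]
for w in check_trajectory(waypoints, dt=0.1):
    print("WARNING:", w)
```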

This was the very first live quadrotor show in front of a live audience – and it was a tightly choreographed swarm of drones with LEDs and laser-like lightbeams :). We also apparently broke the world record for “highest number of drones flying simultaneously indoors”.

Project page and full credits

2010/2011 – Forms

An abstract generative animation made through analysis (Boujou, 3DSMax, Houdini) of athletes in sports footage. While this aesthetic and approach might appear common today, it was in fact very novel at the time, and effectively helped launch this genre of animation. (Here is a Twitter thread of inspirations).

The work won the Golden Nica at the Prix Ars Electronica in 2013, one of the most prestigious awards for new media art.

Project page and full credits

2012/2013 – Laser Forest

A large interactive musical forest of lasers: over 150 rubber-coated metal rods covering 450 square meters. The audience can freely explore the space, physically tapping, shaking, plucking, and vibrating the “trees” to trigger sounds and lasers. Due to the natural springiness of the material, interacting with the trees causes them to swing and oscillate, creating vibrating patterns of light and sound. Each tree is tuned to a specific tone, creating harmonious sounds spatialized and played through a powerful surround sound setup. Each rod houses custom electronics which sense the rod’s movements, control the attached laser diode, and communicate the information to a server running custom software (SuperCollider) which produces 16-channel spatialized sound. A major constraint for this installation was that it had to run for two weeks, be taken down very quickly and easily every night to allow other events during the evening, and be very quickly and easily reinstalled in the morning!
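
As a rough sketch (not the actual system) of how a rod’s vibration events could be mapped to its tuned note and sent to a SuperCollider server over OSC: the synth name, scale and rod-to-note mapping below are assumptions for illustration only.

```python
# Illustrative sketch only: mapping a rod's vibration reading to its tuned note
# and asking a SuperCollider server (scsynth) to play it via OSC (/s_new).
# The synth name "laser_tree" and the scale are assumptions, not the real setup.
from pythonosc.udp_client import SimpleUDPClient

SC = SimpleUDPClient("127.0.0.1", 57110)   # default scsynth port
PENTATONIC = [0, 2, 4, 7, 9]               # semitone offsets of a major pentatonic scale

def rod_to_freq(rod_id, base_midi=48):
    """Each rod is tuned to a fixed note; here derived from its id."""
    midi = base_midi + PENTATONIC[rod_id % 5] + 12 * (rod_id // 5 % 3)
    return 440.0 * 2 ** ((midi - 69) / 12.0)

def on_rod_vibration(rod_id, amplitude):
    """Called whenever a rod's electronics report movement above a threshold."""
    freq = rod_to_freq(rod_id)
    # /s_new: create a new synth node; args: name, node id (-1 = auto), add action, target
    SC.send_message("/s_new", ["laser_tree", -1, 0, 0, "freq", freq, "amp", float(amplitude)])

# Example: rod 42 was plucked fairly hard.
on_rod_vibration(42, 0.8)
```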

Project page and full credits

2013/2014 – U2 “Invisible”

TODO

2014 – Simple Harmonic Motion Light Installation at Blenheim Palace

A generative kinetic light sculpture with moving head robotic spotlights and a 32-channel sound installation. It runs entirely on custom generative software (C++) controlling the sound and lights, in an infinite, never-repeating loop.
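
A never-repeating pattern like this is typically achieved with phasing: many oscillators cycling at slightly different rates, drifting in and out of sync. Here is a minimal, purely illustrative Python sketch of that principle; the parameters are invented and this is not the installation’s code.

```python
# Illustrative sketch of phasing: N nodes, each cycling at a slightly different
# frequency, drift in and out of sync and never exactly repeat.
# Parameters are invented, not those of the actual installation.
NUM_NODES = 16
BASE_FREQ = 0.5          # cycles per second for the slowest node
FREQ_STEP = 0.01         # each node is slightly faster than the previous one

def node_phase(node, t):
    """Phase (0..1) of a node at time t seconds."""
    freq = BASE_FREQ + node * FREQ_STEP
    return (t * freq) % 1.0

def snapshot(t):
    """Beam intensity per node at time t: a short pulse at the start of each cycle."""
    return [1.0 if node_phase(n, t) < 0.05 else 0.0 for n in range(NUM_NODES)]

# Print a simple text visualization of the pattern evolving over time.
for step in range(40):
    t = step * 0.25
    row = "".join("#" if v > 0 else "." for v in snapshot(t))
    print(f"{t:6.2f}s  {row}")
```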

Project page and full credits

2015 – Simple Harmonic Motion Live Performance at RNCM

A live performance with 16 drummers, torches, LEDs and spotlights. A central computer running custom software (Max MSP) ‘controls’ each performer (wearing in-ear monitors) with unique click-tracks and commands, while the drums, rigged with piezo sensors, control the lights.

Project page and full credits

2015 – Journey through the layers of the mind

The very first video ever to have been created using Deepdream (one of the first “Deep Learning” Artificial Intelligence algorithms for synthesizing images).
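
For readers unfamiliar with Deepdream, its core idea is gradient ascent on the input image itself, amplifying whatever features a chosen layer of a pretrained network already “sees”. Below is a minimal PyTorch sketch of that idea; the network, layer choice and step size are arbitrary, and this is not the pipeline used for the film.

```python
# Minimal sketch of the Deepdream idea: gradient-ascend an image so that it
# increasingly excites a chosen layer of a pretrained network.
# Not the original pipeline used for the video; layer and step size are arbitrary.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.to(device).eval()
layer = 20  # an arbitrary mid-level conv layer

img = Image.open("input.jpg").convert("RGB")   # any starting frame (hypothetical file)
x = T.Compose([T.Resize(512), T.ToTensor()])(img).unsqueeze(0).to(device)
x.requires_grad_(True)

for step in range(30):
    activ = x
    for i, module in enumerate(vgg):
        activ = module(activ)
        if i == layer:
            break
    loss = activ.norm()          # "how excited is this layer by the current image?"
    loss.backward()
    with torch.no_grad():
        x += 0.01 * x.grad / (x.grad.abs().mean() + 1e-8)   # normalized gradient ascent
        x.clamp_(0, 1)
        x.grad.zero_()

T.ToPILImage()(x.detach().squeeze(0).cpu()).save("dreamed.jpg")
```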

Project page and full credits

2015/2016 – Pattern Recognition

A live performance with 2 dancers and 8 robotic spotlights responding to the dancers using Machine Learning (ML) / Artificial Intelligence (AI).

A number of depth sensors (Kinect2) scattered around the stage track the dancers. A suite of custom software (C++, Max MSP) observes the dancers, uses custom AI/ML models to determine how the lights should react, and controls the lights accordingly, all in realtime.
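
As a heavily simplified sketch of that realtime loop – tracked dancer features in, light parameters out – here is a minimal PyTorch example in which a tiny untrained network stands in for the project’s custom AI/ML models; all sizes and names are invented.

```python
# Illustrative sketch of the realtime loop: tracked dancer features in,
# light parameters out. The tiny network below is an untrained placeholder
# standing in for the project's custom AI/ML models.
import torch
import torch.nn as nn

class LightController(nn.Module):
    def __init__(self, n_features=18, n_lights=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_lights * 3),   # pan, tilt, intensity per light
        )
        self.n_lights = n_lights

    def forward(self, features):
        out = torch.sigmoid(self.net(features))
        return out.view(-1, self.n_lights, 3)

model = LightController().eval()

def on_new_tracking_frame(joint_positions):
    """joint_positions: flat list of tracked joint coordinates from the depth sensors."""
    with torch.no_grad():
        features = torch.tensor(joint_positions, dtype=torch.float32).unsqueeze(0)
        lights = model(features)[0]        # (n_lights, 3) values in 0..1
    # In a real system these values would be scaled and sent to the fixtures
    # (e.g. as DMX channels) every frame.
    return lights.tolist()

# Example with dummy tracking data (6 joints x 3 coordinates = 18 features).
print(on_new_tracking_frame([0.1] * 18))
```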

Project page and full credits

2016/2017 – FIGHT! VR

An experimental Virtual Reality artwork and installation exploiting the phenomenon known as “Binocular Rivalry” (presenting radically different images to each eye). The phenomenon has been known to scientists for many centuries, and has been used to study attention (visual attention in particular) and consciousness (e.g. neural correlates). The work explores drifting in and out of rivalry; rivalry in structure (contour), color, and brightness; the effects of interactivity vs non-interactivity on rivalry; and rivalry in foveal vs non-foveal vision. Overall, it is a meditative, spiritual journey of self-discovery and self-reflection with conceptual underpinnings (as opposed to being a science experiment :).

To my knowledge this is the first example of using Binocular Rivalry in VR, and in fact it attracted the attention (pardon the pun 🙂) of cognitive psychologist Stefan Van der Stigchel, director of the Attention Lab at Utrecht University. They used the artwork (and later versions of the software, which I modified for them) in their research on visual attention.

Project page and full credits

2017 – Learning to see: Hello, World!

Custom software (Python) that allows a deep neural network (AI) to train in realtime and interactively on live camera input, while a user manipulates its hyperparameters and observes the results, all in realtime. Effectively, the system becomes a realtime visual instrument, played by manipulating what and how the AI “learns”.

It’s difficult to describe the novelties in this system to those who aren’t familiar with deep learning. Suffice it to say that this is a very custom and unusual setup, as Deep Learning (i.e. current AI) models don’t typically train in realtime, let alone interactively. In that sense this is probably the first (and only) system of its kind.
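
To make the idea concrete, here is a rough, heavily simplified sketch of realtime interactive training on a camera feed: one gradient step per frame, with a hyperparameter (the learning rate) adjustable from the keyboard while training runs. The tiny autoencoder and the keyboard controls are placeholders, not the architecture or interface of the actual work.

```python
# Rough sketch of realtime, interactive training on a camera feed.
# Not the original software; the tiny autoencoder and controls are placeholders.
import cv2
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(                      # a deliberately tiny autoencoder
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
).to(device)

lr = 1e-3                                   # a "live" hyperparameter
opt = torch.optim.Adam(model.parameters(), lr=lr)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (256, 256))
    x = torch.from_numpy(frame[:, :, ::-1].copy()).float().permute(2, 0, 1) / 255.0
    x = x.unsqueeze(0).to(device)

    # One training step per camera frame: the network keeps learning while we watch.
    y = model(x)
    loss = ((y - x) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

    out = (y[0].detach().permute(1, 2, 0).cpu().numpy()[:, :, ::-1] * 255).astype("uint8")
    cv2.imshow("what the network sees", out)

    key = cv2.waitKey(1) & 0xFF
    if key in (ord("+"), ord("-")):         # tweak a hyperparameter mid-training
        lr = lr * 2 if key == ord("+") else lr / 2
        for g in opt.param_groups:
            g["lr"] = lr
    elif key == 27:                         # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```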

I go in depth on this project in Chapter 4 of my PhD thesis.

Project page and full credits

2017 – Learning to see

A series of video artworks and an interactive installation that allow a person to interact with a deep neural network in realtime – both via physical props and via a user interface with various controls – enabling meaningful human control and live puppetry.

I go in depth on this project in Chapter 5 of my PhD thesis, and I presented a paper at SIGGRAPH 2019.

Project page and full credits

2017/2018 – Deep Meditations

A large scale immersive video and sound installation made using a number of custom tools (Python) that allow full control over the ‘trajectories’ (i.e. narratives) in the latent space of a generative AI model.

This may be the first film that uses deep generative neural networks as a medium for creative expression and storytelling with meaningful human control over narrative. (While explorations of the so-called ‘latent spaces’ of generative neural networks are typically random, for this project I developed tools to construct precise journeys through these high-dimensional spaces to create a specific desired narrative.) The audio was also generated by a custom AI-powered audio engine called Grannma.
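
The core of such trajectory tools can be illustrated with a short sketch: choose keyframe latents by hand, then interpolate smoothly between them to produce one latent per video frame. This is a minimal illustration with a placeholder generator, not the project’s actual tooling.

```python
# Sketch of constructing a controlled journey through a generative model's latent space:
# choose keyframe latents, then interpolate smoothly between them frame by frame.
# `generate(z)` is a stand-in for a real generative model; not the project's actual code.
import numpy as np

def slerp(a, b, t):
    """Spherical interpolation between two latent vectors (common for Gaussian latents)."""
    a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < 1e-6:
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def latent_journey(keyframes, frames_per_segment=60):
    """Yield one latent per video frame, passing exactly through each keyframe."""
    for a, b in zip(keyframes[:-1], keyframes[1:]):
        for i in range(frames_per_segment):
            yield slerp(a, b, i / frames_per_segment)
    yield keyframes[-1]

def generate(z):
    return None  # placeholder for a real generator, e.g. a GAN's G(z) -> image

rng = np.random.default_rng(0)
keyframes = [rng.standard_normal(512) for _ in range(4)]   # hand-picked in the real work
for z in latent_journey(keyframes, frames_per_segment=30):
    frame = generate(z)   # render / save each frame here
```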

I go in depth on this project in Chapter 6 of my PhD thesis, and I presented a paper at the ML for Creativity and Design workshop at Neural Information Processing Systems (NeurIPS) 2018.

Project page and full credits

2017/2018 – Grannma: Granular Neural Music & Audio

A custom audio engine (Python) that synthesizes 16-bit 44kHz audio in realtime using deep neural networks.

To my knowledge, this is the first engine to synthesize audio (of any quality, let alone 16-bit 44kHz) in realtime using deep neural networks, and even today it is probably one of the very few systems capable of such a task. This is the underlying audio engine for Deep Meditations, Ultrachunk and Nimiia Cétiï.
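
For readers unfamiliar with granular techniques, here is a minimal sketch of granular synthesis in general (the approach the engine’s name points to): short windowed grains are generated one after another and overlap-added into a continuous stream. The grain “generator” below is a plain oscillator standing in for a neural network; this is not Grannma’s actual architecture.

```python
# Sketch of granular overlap-add synthesis in general, not Grannma's actual design.
# The grain generator below is a plain oscillator standing in for a neural network.
import numpy as np

SR = 44100
GRAIN = 2048          # samples per grain
HOP = GRAIN // 2      # 50% overlap
window = np.hanning(GRAIN)

def generate_grain(i):
    """Placeholder for a neural generator: here just a sine whose pitch drifts per grain."""
    t = np.arange(GRAIN) / SR
    freq = 220.0 * 2 ** ((i % 12) / 12.0)
    return np.sin(2 * np.pi * freq * t)

n_grains = 200
out = np.zeros(HOP * n_grains + GRAIN)
for i in range(n_grains):
    start = i * HOP
    out[start:start + GRAIN] += generate_grain(i) * window   # overlap-add

out = (out / np.max(np.abs(out)) * 32767).astype(np.int16)    # 16-bit PCM
# `out` can now be written to a WAV file or streamed to an audio device.
```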

Project page and full credits

2018 – Ultrachunk

A live audio-visual performance where both video and audio are synthesized in realtime by deep neural networks, interactively, in response to both a live vocal performer (Jennifer Walshe) and myself (using MIDI controllers).

To my knowledge this is the first ever live performance using deep learning for either realtime video synthesis or realtime audio synthesis, let alone both simultaneously. The project involved a great deal of custom software, models and architectures to synthesize and connect incoming and outgoing audio, synthesized facial features, and synthesized images.

Project page and full credits

2018 – Nimiia Cétiï

A video and sound installation driven by an (actual) bowl of bacteria. Footage of the bacteria is analyzed to drive a virtual pen that produces asemic writing (i.e. writing that is non-sensical but appears genuine), and also to synthesize glossolalia-like speech (i.e. non-sensical speech, often produced in a state of trance). Lots of custom software (Python, JavaScript) analyzes the bacteria footage, generates the handwriting, and synthesizes the audio (using Grannma, mentioned above).
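
As a rough sketch of the footage-analysis step (not the actual pipeline): track a moving blob in each frame with OpenCV and record its smoothed centroid trajectory, which could then drive a virtual pen. The file name and thresholding choices are assumptions.

```python
# Rough sketch of turning organism footage into pen strokes: track a blob
# per frame and record its smoothed trajectory. Not the project's actual pipeline.
import cv2
import numpy as np

cap = cv2.VideoCapture("bacteria.mp4")     # hypothetical input footage
path = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        continue
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] > 0:
        path.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))   # blob centroid

cap.release()

# Light smoothing turns the jittery track into something more pen-like.
path = np.array(path)
if len(path) > 5:
    kernel = np.ones(5) / 5.0
    path[:, 0] = np.convolve(path[:, 0], kernel, mode="same")
    path[:, 1] = np.convolve(path[:, 1], kernel, mode="same")

# `path` can now drive a virtual pen (e.g. rendered as a polyline or exported as SVG).
print(f"{len(path)} pen points")
```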

Project page and full credits

2021 – All watched over by machines of loving grace / Deeper meditations

A film made using text-2-image technologies (CLIP+VQGAN) and custom software (Python), before there was Midjourney, before Stable Diffusion, before Disco Diffusion, and before there were any Google Colab notebooks to create such a film. In that sense this is probably one of the first films to be made using text-2-image technologies (July 2021).
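
For context, early CLIP+VQGAN work was built around a CLIP-guided optimization loop. The heavily simplified sketch below shows that loop with a stand-in for the VQGAN (direct pixel optimization); it is not the software used for this film, and the prompt is invented.

```python
# Heavily simplified sketch of a CLIP-guided image optimization loop.
# The "decoder" here is a stand-in (direct pixel optimization) rather than a real VQGAN,
# and this is NOT the software used for the film.
# Requires: pip install torch git+https://github.com/openai/CLIP.git
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device, jit=False)
model = model.float().eval()

prompt = "a cathedral of light, watched over by machines of loving grace"  # invented example
with torch.no_grad():
    text_feat = F.normalize(model.encode_text(clip.tokenize([prompt]).to(device)), dim=-1)

# Stand-in for a VQGAN latent + decoder: optimize the pixels of a small image directly.
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([image], lr=0.05)

mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

for step in range(200):
    x = (image.clamp(0, 1) - mean) / std          # CLIP's expected normalization
    img_feat = F.normalize(model.encode_image(x), dim=-1)
    loss = -(img_feat * text_feat).sum()          # maximize image/text similarity
    opt.zero_grad()
    loss.backward()
    opt.step()

# `image` drifts toward whatever CLIP associates with the prompt; swapping a real
# VQGAN decoder into this loop is what gave these early films their characteristic look.
```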

Project page and full credits