NOTE: This site is now defunct and only online as an archive. For further updates please visit, or

Memo Mehmet Akten

Welcome. My name is Mehmet S. Akten (aka Memo) and I'm an artist, musician, engineer, toy maker and mad scientist based in London, UK.

I like to touch people, in their most private places, and make them giggle or cry.

I run The Mega Super Awesome Visuals Company and my work focuses on designing, developing and hijacking technology to create emotional and memorable experiences. I'm driven by the urge to make the seemingly impossible, possible; and awaken our childlike instincts to explore and discover new forms of interaction and expression.

Find out more about me here and my schedule for installations, exhibitions, lectures, performances here.

This is my blog containing experiments, examples, tutorials, snippets and thoughts on art, technology, new media, experience design, interaction design, computational design, human-computer interaction, computer vision, openframeworks, processing, quartz composer etc.

Dec 02 15:46

Dynamic projection mapping with camera / head tracking - Sony PlayStation Video Store

Three videos I co-directed with fellow Marshmallow Laser Feast posse Barney Steel and Robin McNicholas have just been released.

For the launch of the Sony PlayStation Video Store, our job was to bring a living room alive with hints at various Hollywood blockbuster franchises - so we decided to push projection mapping to a new level. We projection mapped a living room space with camera (or head) tracking and dynamic perspective: all content is realtime 3D, and the camera (or head) is tracked so the 3D perspective updates in realtime to match the viewer's point of view. Add to this real props, live puppetry, interaction between the virtual and physical worlds, a mixture of hi-tech and lo-tech live special effects, a little bit of pyrotechnics and a lot of late nights.

The project is driven by custom software based on the Unity engine, in which all the realtime 3D content was animated, triggered and mapped onto the geometry. Camera (and head) tracking was done on the PlayStation 3 using PlayStation Move controllers and PlayStation Eye cameras, and communicated to our Unity application via ethernet. Six projectors (Optoma EW610ST) were used to cover the area, and the open-source Syphon framework was used to communicate between our Unity-based 3D and mapping software and our custom native Cocoa application that outputs to all projectors.
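The "dynamic perspective" part generalizes to any head-tracked projection: each frame, given the physical corners of the projection surface and the tracked eye position, you build an asymmetric (off-axis) viewing frustum. A minimal sketch in Python - the production system used Unity and PlayStation hardware, and all numbers below are illustrative:

```python
# Sketch of head-tracked "dynamic perspective": compute asymmetric
# (off-axis) frustum extents at the near plane from the physical screen
# corners and the tracked eye position. Illustrative only.

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def norm(a):
    l = dot(a, a) ** 0.5
    return (a[0]/l, a[1]/l, a[2]/l)

def off_axis_frustum(pa, pb, pc, pe, near):
    """pa, pb, pc: screen lower-left, lower-right, upper-left corners in
    world space; pe: tracked eye position. Returns (l, r, b, t) frustum
    extents at the near plane."""
    vr = norm(sub(pb, pa))            # screen right axis
    vu = norm(sub(pc, pa))            # screen up axis
    vn = norm(cross(vr, vu))          # screen normal, towards the eye
    va, vb, vc = sub(pa, pe), sub(pb, pe), sub(pc, pe)
    d = -dot(va, vn)                  # eye-to-screen-plane distance
    return (dot(vr, va) * near / d,   # left
            dot(vr, vb) * near / d,   # right
            dot(vu, va) * near / d,   # bottom
            dot(vu, vc) * near / d)   # top

# An eye centred 1m in front of a 2x2m screen gives a symmetric frustum:
l, r, b, t = off_axis_frustum((-1,-1,0), (1,-1,0), (-1,1,0), (0,0,1), 0.1)
```

As the viewer's head moves off-centre the frustum skews accordingly, which is what keeps the projected 3D scene locked to the room from the viewer's point of view.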

Full credits of the amazingly talented team, more information and full write-up coming soon, in the meantime I hope you enjoy the videos!


Agency: Studio Output
Ian Hambleton - Producer

Client: Sony PlayStation

Produced and Directed by MarshmallowLaserFeast

Mehmet Akten (MSA Visuals) - Director
Robin McNicholas (Flat-E) - Director
Barney Steel (The Found Collective) - Director / Producer

Ian Walker (The Found Collective) - Post Production Producer
Nadine Allen - Production Assistant
Kavya Ramalu - Production Secretary
Thomas English - Camera Man / AR
Richard Bradbury - Focus Puller
Celia Clare-Moodie - Camera Assistant
Jonathan Stow - Assistant Director
Philip Davies - Digital Imaging Technician
Jools Peacock - 3D Artist
Dirk Van Dijk - 3D Artist
Tobias Barendt - Programmer
Raffael Ziegler - 3D Artist
Alex Trattler - 3D Artist
Neil Lambeth - Art Department Director
Elise Colledge - Art Department Assistant
Oli van der Vijver - Set Construction
Robert Pybus - Assistant Director / Puppeteer
Gareth Cooper - Actor
Kimberly Morrison - Puppeteer
Rhimes Lecointe - Puppeteer
Jen Bailey-Rae - Puppeteer
Tashan - Puppeteer
Ralph Fuller - Puppeteer
Frederick Fuller - Puppeteer
Jane Laurie - Puppeteer
Sandra Ciampone - Photography / Puppeteer
Daniel McNicholas White - Runner
Maddie McNicholas - Runner
Nick White - Runner
Rosa Rooney - Runner

Aug 25 00:05

Simple Harmonic Motion

Simple Harmonic Motion is an ongoing research and series of projects exploring the nature of complex patterns created from the interaction of multilayered rhythms.


(watch fullscreen)

This version was designed for and shown at Ron Arad's Curtain Call at the Roundhouse.
This ultra-wide video is mapped around the 18m-wide, 8m-tall cylindrical display made from 5,600 silicon rods, allowing the audience to view it from inside and outside.

Video of the event coming soon, photos at flickr.

Visuals made with openFrameworks, which also sends MIDI to Ableton Live to create the sounds.


(sounds much better with headphones, seriously)

Here 180 balls attached to (invisible) springs are bouncing, each at a steady speed but slightly different from its neighbour's. A sound is triggered when a ball hits the floor, with the pitch of the sound proportional to the frequency of the oscillation. The total loop cycle of the system is exactly 6 minutes.
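For the whole system to loop exactly every 6 minutes, each ball must complete a whole number of oscillations per 360 seconds, so all phases realign at the loop point. A small sketch of that constraint - the base cycle count and the pitch scaling below are my assumptions, not values from the piece:

```python
# Loop closure: ball i completes (base_cycles + i) whole oscillations
# per 360 s loop, so every ball is back in phase at t = 360 s.
# base_cycles = 60 and the pitch constant are illustrative assumptions.
import math

LOOP = 360.0  # total cycle, seconds

def ball_frequency(i, base_cycles=60):
    # oscillations per second for ball i
    return (base_cycles + i) / LOOP

def ball_pitch_hz(i, hz_per_oscillation=880.0):
    # pitch proportional to oscillation frequency, as in the post
    return ball_frequency(i) * hz_per_oscillation

# normalized phase of every ball at the loop point: all back to +1
phases_at_loop = [math.cos(2 * math.pi * ball_frequency(i) * LOOP)
                  for i in range(180)]
```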

Visuals made with Cinema4D + COFFEE (a C-like scripting language for C4D), audio with SuperCollider.

I prefer the look, sound and behaviour of the previous test (see below), though this one has interesting patterns too.


Here 30 nodes are oscillating with fixed periods in simple harmonic motion, with each node having a slightly different frequency. The total loop cycle duration is exactly 120 seconds (60s for the audio).

Specific information about this particular simulation and audio at

Visuals made with openFrameworks, audio with SuperCollider

See also John Whitney's Chromatic

I recently came across this beautiful video. 

Fifteen pendulums, all with precisely calculated string lengths, start at the same time, slowly go out of sync, create beautiful complex rhythmic patterns, and exactly 60 seconds later come back into sync again to repeat the cycle. These techniques of creating complex patterns from the interaction of multilayered rhythms have been explored by composers such as Steve Reich, György Ligeti, Brian Eno and many others; but the patterns in this particular video seemed so simple yet complex that I wondered what it would sound like. The exact periods of the pendulums are described in detail on the project page, so I was able to recreate the experiment quite precisely in Processing - see the video below.
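The maths behind the effect is compact: pendulum i completes a whole number of swings per 60-second cycle, with each successive pendulum completing exactly one more. A minimal Python recreation - the base count of 51 swings per minute is the figure used by the well-known Harvard demo and is an assumption here; the project page has the exact periods:

```python
# Pendulum-wave maths: pendulum i swings (51 + i) times per 60 s, so all
# pendulums are in phase at t = 0 and t = 60. The base count 51 is an
# assumed value (as in the Harvard demo), not taken from the video.
import math

N, CYCLE = 15, 60.0

def freq(i):                        # oscillations per second, pendulum i
    return (51 + i) / CYCLE

def position(i, t):                 # normalized displacement
    return math.cos(2 * math.pi * freq(i) * t)

def string_length(i, g=9.81):       # from T = 2*pi*sqrt(L/g)
    T = 1.0 / freq(i)
    return g * (T / (2 * math.pi)) ** 2

aligned = [position(i, CYCLE) for i in range(N)]  # all back to +1 at t=60
```

Halfway through the cycle (t = 30 s) alternate pendulums are at opposite extremes, which is the striking "two interleaved combs" moment in the video.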

The processing source code for the demo can be found at

I've also started playing with SuperCollider, an environment and programming language for realtime audio synthesis and algorithmic composition. As an exercise I wondered if I could recreate the demo in SuperCollider. The source code below seems to do the job pretty nicely. I was trying to fit the code under 140 characters so I could tweet it, so I took a few shortcuts.

{o=0;{|i|*(1.0595**(3*i)), 0,, 0, 0.05, 0.1))}; o;}.play;

The sounds in the video above are NOT from SuperCollider; they are triggered from the Processing sketch as MIDI notes sent to Ableton Live. The notes in the Processing sketch are selected from a pentatonic scale. I wanted the SuperCollider code to fit in a single tweet (less than 140 chars), so I omitted the scale and instead picked notes spaced at minor 3rd intervals, creating a diminished 7th arpeggio. The base note is C#. Toccata and Fugue in D minor, anyone?
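The shortcut works because 1.0595 is (approximately) the equal-tempered semitone ratio 2^(1/12), so stepping by 1.0595^3 moves in minor 3rds, and four minor 3rds stack back up to an octave - hence the diminished 7th arpeggio. A quick check in Python; the C#3 base frequency is my assumption for illustration:

```python
# 1.0595 ~ 2**(1/12): twelve of them make an octave, three make a minor
# 3rd. Stepping the base note by 1.0595**(3*i) yields a diminished 7th
# arpeggio. The C#3 base frequency (138.59 Hz) is an assumed value.
SEMITONE = 1.0595
CSHARP3 = 138.59  # Hz

def note_hz(i):
    # i-th note, a minor 3rd (3 semitones) above the previous one
    return CSHARP3 * SEMITONE ** (3 * i)

octave_ratio = note_hz(4) / note_hz(0)  # four minor 3rds = one octave
```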

Aug 23 19:17

MarshmallowLaserFeast at Ron Arad's Curtain Call at the Roundhouse, 26th Aug 2011

As part of MarshmallowLaserFeast, we are proud to announce that we are hosting an evening at the Roundhouse as part of Ron Arad's Curtain Call on 26th August 2011. We are showing a selection of old and new work, specifically designed and adapted for the giant 360-degree projection screen installation. The event is free and we would love it if you could pop by for the show. It is also my birthday (on 28th August), so this will be doubling as a birthday celebration!

The program is as follows:

19:00 Ultre Performance
19:20 Ionisation+Pangs
19:30 Honoi Panoramics
19:50 Simple Harmonic Motion
20:00 Reincarnation
20:05 Really really nice
20:10 My Secret Heart

Aug 10 16:32

Presentation at Music Video Production Forum, 11/08/2011

I will be giving a short presentation at the Music Video Production Forum in Shoreditch Church tomorrow.

Contrary to what it says on the flyer, I will not be talking about Cher Lloyd; I've never worked with her, and I don't even know who she is. I will be talking about the Depeche Mode video, the Wombats video, and other projects and experiments (Body Paint, My Secret Heart, Reincarnation, Webcam Piano, Simple Harmonic Motion etc.). These are not directly related to the music video industry, but they incorporate techniques and methodologies I've been adapting (mainly keyframe-free, realtime performance-based content generation and animation) which can be applied to almost any form of visual art, including music videos.

Aug 05 15:22

Simple Harmonic Motion study #2a

Another study in simple harmonic motion and the nature of complex patterns created from the interaction of multilayered rhythms.

Here 30 nodes are oscillating with fixed periods in simple harmonic motion, with each node having a slightly different frequency. The total loop cycle duration is exactly 120 seconds (60s for the audio).

General information at

Specific information about this particular simulation and audio at

Visuals made with openFrameworks, audio with SuperCollider

See also John Whitney's Chromatic

Jul 20 11:02

The Wombats "Techno Fan"

I worked with the Found Collective on this Wombats music video. I designed and developed software (using C++ / openFrameworks) to process live footage of the band. All images seen below were generated by this software.


In 2010 the label had originally commissioned someone else for the video (I'm not sure who); they filmed and edited a live performance of the band. The label (or band or commissioner) then got in touch with Barney Steel from the Found Collective to "spice up the footage", having seen the Depeche Mode "Fragile Tension" video which we worked on together. Barney in turn got in touch with me to create an app / system / workflow which could "spice up the footage". In short, we received a locked-down edit of band footage, which we were tasked with "applying a process and making it pretty".



We received a locked edit of the band performing the song live. This was then broken down shot by shot, and various layers were rotoscoped, separated (e.g. foreground, background, singer, drummer etc.) and rendered out as QuickTime files. (This was all done in the traditional way with After Effects; no custom software yet.) Each of these shots and layers was then individually fed into my custom software, which analyzes the video file and, based on dozens of parameters, outputs a new sequence (as a sequence of PNGs). The analysis runs in near-realtime (depending on input video size), and the user can play with the dozens of parameters in realtime, while the app is running and even while it is rendering the processed images to disk. So all the animations you see in the video were 'performed' in realtime; no keyframes were used. Lots of different 'looks' (presets) were created and applied to the different shots and layers. Each processed sequence was rendered to disk, then re-composited and edited back together in Final Cut and After Effects to produce the final video.



This isn't meant as a tutorial, but a quick, high-level overview of the techniques used in processing the footage. There are a few main phases:

  1. analyze the footage and find some interesting points 
  2. create triangles from those interesting points 
  3. display those triangles
  4. save image sequence to disk
  5. profit

Phase #1 is where all the computer vision (OpenCV) stuff happens. I used a variety of techniques. As you can see from the GUI screenshots, the first step is a bit of pre-processing: blur (cvSmooth), bottom threshold (clamp anything under a certain brightness to black - cvThreshold), top threshold (clamp anything above a certain brightness to white - cvThreshold), adaptive threshold (apply a localized binary threshold, clamping to white or black depending on neighbours only - cvAdaptiveThreshold), erode (shrink or 'thin' bright pixels - cvErode), dilate (expand or 'thicken' bright pixels - cvDilate). Not all of these are always used; different shots and looks require different pre-processing.

Next, the first method of finding interesting points was 'finding contours' (cvFindContours) - or 'finding blobs' as it is also sometimes known. This procedure basically allows you to find the 'edges' in the image and return them as a sequence of points - as opposed to applying, say, just a Canny or Laplacian edge detector, which will also find the edges but will return a B&W image with a black background and white edges. The latter (Canny, Laplacian etc.) finds the edges *visually*, while cvFindContours goes one step further and returns the edge *data* in a computer-readable way, i.e. an array of points, so you can parse through this array in your code and see where the edges are. (cvFindContours also returns other information regarding the 'blobs', like area, centroid etc., but that is irrelevant for this application.)

Now that we have the edge data, can we triangulate it? No, because it's way too dense - a coordinate for every pixel. So some simplification is in order. Again I used a number of techniques for this. A very crude method is to just omit every nth point. Another method is to omit a point if the dot product of the (normalized) vector leading up to that point from the previous point, and the vector leading away from that point to the next point, is greater than a certain threshold (that threshold being the cosine of the minimum angle you desire). In English: omit a point if it lies on a relatively straight line. Or: given points A, B and C, omit point B if (B-A) . (C-B) > cos(angle threshold). Another method is to resample along the edges at fixed distance intervals; for this I use my own MSA::Interpolator class. (I think there may have been a few more techniques, but I cannot remember, as it's been a while since I wrote this app!)
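The collinearity rule above can be sketched in a few lines. This is a minimal Python version (the production code was C++/openFrameworks), keeping a point only if the contour bends by at least some minimum angle at it; the 10-degree threshold is arbitrary:

```python
# Drop contour points that lie on a (nearly) straight line: omit B if
# the normalized dot product of (B - A) and (C - B) exceeds the cosine
# of the minimum bend angle. A is the last *kept* point.
import math

def simplify(points, min_angle_deg=10.0):
    if len(points) < 3:
        return points[:]
    cos_thresh = math.cos(math.radians(min_angle_deg))
    out = [points[0]]
    for i in range(1, len(points) - 1):
        ax, ay = out[-1]
        bx, by = points[i]
        cx, cy = points[i + 1]
        ux, uy = bx - ax, by - ay          # incoming direction
        vx, vy = cx - bx, cy - by          # outgoing direction
        nu, nv = math.hypot(ux, uy), math.hypot(vx, vy)
        if nu == 0 or nv == 0:
            continue                        # duplicate point, drop it
        cosang = (ux * vx + uy * vy) / (nu * nv)
        if cosang > cos_thresh:             # nearly straight: drop B
            continue
        out.append(points[i])
    out.append(points[-1])
    return out
```

A straight run of points collapses to its endpoints, while corners survive.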

Independent of the cvFindContours point-finding method, I also looked at using 'corner detection' (feature detection / feature extraction). For this I looked into three algorithms: Shi-Tomasi and Harris (both of which are implemented in OpenCV's cvGoodFeaturesToTrack function) and SURF (using the OpenSURF library). Of these three, Shi-Tomasi gave the best visual results. I wanted a relatively large set of points that would not flicker too much (given a relatively low 'tracking quality'). Harris was painfully slow, whereas SURF would just return too few features; adjusting the parameters to return more features made the feature tracking too unstable. Once I had a set of points returned by Shi-Tomasi (cvGoodFeaturesToTrack), I tracked them with sparse Lucas-Kanade optical flow (cvCalcOpticalFlowPyrLK) and omitted any stray points. Again, a few parameters to simplify, set thresholds etc.

Phase #2 is quite straightforward. I used 'Delaunay triangulation' (as many people have pointed out on twitter, flickr and vimeo). This is a process for creating triangles from a set of arbitrary points on a plane. For this I used the 'Triangle' library by Jonathan Shewchuk: I just feed it the set of points obtained in Phase #1, and it outputs a set of triangle data.
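The defining property of a Delaunay triangulation is that no input point lies inside the circumcircle of any triangle. The core of that test is a sign-of-determinant predicate, sketched here in Python (the actual work in the project was done by Shewchuk's Triangle library, not this code):

```python
# The "in circumcircle" predicate at the heart of Delaunay triangulation:
# the sign of a 3x3 determinant built from the points. A Delaunay
# triangulation keeps this non-positive for every point outside each
# triangle. Illustrative only; the project used the Triangle library.

def in_circumcircle(a, b, c, p):
    """Positive if p is strictly inside the circumcircle of triangle abc
    (with a, b, c in counter-clockwise order), negative if outside."""
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    return ((ax * ax + ay * ay) * (bx * cy - cx * by)
          - (bx * bx + by * by) * (ax * cy - cx * ay)
          + (cx * cx + cy * cy) * (ax * by - bx * ay))
```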

Phase #3 is also quite straightforward. As you can see from the GUI shots below, a bunch of options for triangle outline (wireframe) thickness and transparency, triangle fill transparency, original footage transparency etc. allowed customization of the final look. (Colors for the triangles were picked as the average color of the original footage underneath each triangle.) There are also a few more display options for how to join the triangulation together, pin it to the corners etc.
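Picking each triangle's fill as the average color of the footage underneath it can be done by rasterizing the triangle with edge functions and accumulating the covered pixels. A small self-contained Python sketch (the real version would run on GPU-sized frames, not a toy image):

```python
# Average the colors of all pixels whose centres fall inside a triangle,
# using edge-function (half-plane) tests over the bounding box.
# img[y][x] is an (r, g, b) tuple. Illustrative sketch only.

def avg_color_in_triangle(img, a, b, c):
    def edge(p, q, r):
        # signed area test: which side of edge p->q the point r is on
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    xs, ys = [a[0], b[0], c[0]], [a[1], b[1], c[1]]
    total, n = [0, 0, 0], 0
    for y in range(max(0, min(ys)), min(len(img), max(ys) + 1)):
        for x in range(max(0, min(xs)), min(len(img[0]), max(xs) + 1)):
            p = (x + 0.5, y + 0.5)  # pixel centre
            d0, d1, d2 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
            # inside if all edge tests agree in sign (either winding)
            if (d0 >= 0 and d1 >= 0 and d2 >= 0) or \
               (d0 <= 0 and d1 <= 0 and d2 <= 0):
                r, g, bl = img[y][x]
                total[0] += r; total[1] += g; total[2] += bl
                n += 1
    return tuple(t / n for t in total) if n else None

img = [[(100, 0, 0)] * 4 for _ in range(4)]  # tiny all-red frame
avg = avg_color_in_triangle(img, (0, 0), (3, 0), (0, 3))
```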

Phase #4: the app allowed scrubbing, pausing and playback of the video while processing in (almost) realtime (it could have been realtime if optimizations were pushed, but it didn't need to be, so I didn't bother). The processed images were always output to the screen (so you can see what you're doing), but also optionally written to disk as the video played and new frames were processed. This allowed us to play with and adjust the parameters while the video was playing and being saved to disk - i.e. animate the parameters in realtime and play it like a visual instrument.

The software was written in C++ with openFrameworks.

libraries used:



Jul 15 19:11


In case you've been living under a rock for the past week, this happened recently:

(Cease & Desist letters may have affected the content on these sites since posting).

Inspired by the events and the FAT Lab censor, I knocked up this project. It slaps a Steve Jobs mask on any face it finds in a live webcam feed.

Feel free to install it on Apple Stores around the world. It should be legal (though don't quote me on that).

Download the source and mac binary at


May 21 15:18

30 chromatic metronomes at 0-29 BPM


30 metronomes, set from 0 BPM to 29 BPM, each triggering a note from the chromatic scale, starting at C# and increasing sequentially. Branching off a project I've been developing for a few years with Mira Calix, inspired by Ligeti's Poème symphonique for 100 metronomes.

This particular incarnation is also inspired by

The period duration of the pattern is exactly 1 minute.
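The one-minute period follows directly from the setup: metronome i ticks at i BPM, i.e. i evenly spaced ticks per minute, so every metronome completes a whole number of ticks in 60 seconds and the whole pattern repeats exactly once a minute. A quick sketch in Python; the C# base frequency is an assumption for illustration:

```python
# Metronome i ticks i times per 60 s cycle; metronome 0 never ticks.
# Every metronome's next tick after the cycle lands exactly at t = 60 s,
# so the pattern period is one minute. Base note frequency is assumed.
SEMITONE = 1.0595
CSHARP = 138.59  # Hz, assumed C# base

def tick_times(i):
    """Tick times (seconds) of metronome i within one 60 s cycle."""
    return [k * 60.0 / i for k in range(i)] if i else []

def note_hz(i):
    # chromatic scale: metronome i plays i semitones above the base
    return CSHARP * SEMITONE ** i

cycle = [tick_times(i) for i in range(30)]
```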

Made with SuperCollider.

I was aiming to keep the full code less than 126 characters (so I could tweet it with the #supercollider hashtag):

play{o=Mix.fill(30,{arg i;*(1.0595**i)*,0,0.02,1),0,,0,0.05,0.1))});[o,o]}

Technically this isn't phasing as made famous by Steve Reich. His phasing works were built on the concept of playing the same pattern simultaneously while altering the tempo of each instance. "30 Chromatic Metronomes" could be forced under the 'phasing' umbrella if it is considered as phasing an extremely simple pattern (i.e. a single 'hit') across 30 instances, with a pitch shift added. It can also be thought of as a 30-part polyrhythm.

Apr 30 18:07


Apr 27 18:27

Interview on sharing, by Kyle McDonald

First in a series: Kyle McDonald interviewed me on the topic of "why do you share". The interview was conducted on PiratePad, where you can watch it develop over time, and is backed up on GitHub. All content is licensed under a Creative Commons Attribution 3.0 Unported License.

Mar 18 13:08

Inspired by Ai Weiwei's Sunflower Seeds

Inspired by Ai Weiwei's Sunflower Seeds at the Tate Modern.

Featuring (a very happy) Pearl & Bruce

Mar 17 01:14

Tweak, tweak, tweak. 41 pages of GUI, or "How I learned to stop worrying and love the control freak within"

I often tell people that I spend 10% of my time designing and coding, and the rest of my time tweaking numbers. The actual ratio may not be totally accurate, but I do spend an awful lot of time playing with sliders. Usually getting the exact behaviour that I want is simply a balancing act between lots (and lots (and lots (and lots))) of parameters. Getting that detail right is absolutely crucial to me; the smallest change in a few numbers can really make or break the look, feel and experience. If you don't believe me, try 'Just Six Numbers' by Sir Martin Rees, Astronomer Royal.

So as an example I thought I'd post the GUI shots for one of my recent projects - interactive building projections for Google Chrome, a collaboration between my company (MSA Visuals), Flourish, Seeper and Bluman Associates. MSA Visuals provided the interactive content, software and hardware.

In this particular case, the projections were run by a dual-head Mac Pro (with a second for backup). One DVI output went to the video processors/projectors; the other went to a monitor where I could preview the final output content, view input camera feeds, see individual content layers and tweak a few thousand parameters - through 41 pages of GUI! To quickly summarize some of the duties carried out by the modules seen in the GUI:

  • configure layout for mapping onto building architecture and background anim parameters
  • setup lighting animation parameters
  • BW camera input options, warping, tracking, optical flow, contours etc.
  • color camera input options
  • contour processing, tip finding, tip tracking etc.
  • screen saver / timeout options
  • fluid sim settings
  • physics and collision settings
  • post processing effects settings (per layer)
  • tons of other display, animation and behaviour settings

(This installation uses a BW IR camera and Color Camera. When taking these screenshots the color camera wasn't connected, hence a lot of black screens on some pages.)

Check out the GUI screen grabs below, or click here to see them fullscreen (where you can read all the text)

Feb 19 00:16

Speed Project: RESDELET 2011

Back in the late 80s/early 90s I was very much into computer viruses - the harmless, fun kind. To a young boy, no doubt the concept of an invisible, mischievous, self-replicating little program was very inviting - and a great technical + creative challenge.

The very first virus I wrote was for an 8088, and it was called RESDELET.EXE. This was back in the age of DOS, before Windows. In those days, to 'multitask' - i.e. keep your own program running in the background while the user interacted with another application in the foreground - was a dodgy task. It involved hooking into interrupt vectors and keeping your program in memory using the good old TSR (Terminate and Stay Resident) interrupt call 27h.

So RESDELET.EXE would hang about harmlessly in memory while you worked on other things - e.g. typing up a spreadsheet in Lotus 123 - then when you pressed the DELETE key on the keyboard, the characters on the screen would start falling down - there and then inside Lotus 123 or whatever application you were running.

RESDELET 2011 is an adaptation of the original. It hangs about in the background, and when you press the DELETE or BACKSPACE key, whatever letters you have on your screen start pouring down - with a bit of added mouse interactivity. This version does *not* self-replicate - it is *not* a virus, just a bit of harmless fun.

Source code coming real soon (as soon as I figure out how to add a git repo inside another repo)

This is a speed project developed in just over half a day, so use at your own risk!

Sorry for the flickering; there was a conflict with the screen recording application I couldn't resolve. Normally there is no flicker - it's as smooth as silk.

Nov 15 23:50

Kinect - why it matters

There's been a lot of buzz on the internet lately - at least in the circles I frequent - about the recently released Microsoft Kinect for Xbox. For those who know nothing about it: it's a peripheral for Microsoft's Xbox game console that allows you to play games without a game controller; instead you just move your arms, body and legs, and it tracks and interprets your movements and gestures. The impact this will have on gaming is debatable. The impact it will have on my life, and on many others involved with new media art and experimental visual and sound performance, is a bit more significant. More on that below.

The tracking is made possible by some very clever hardware. It has a normal color camera, similar to a webcam; an array of microphones; an accelerometer; a motor etc. But most interestingly - at least for me - it has an IR laser projector and an IR camera, which it uses to calculate a depth map: for roughly every pixel in the color image, you can retrieve its distance to the camera. Why does that matter? More on that below.
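That per-pixel depth is what turns a flat image into 3D data: with a pinhole camera model, each depth pixel unprojects to a point in camera space. A minimal sketch - the intrinsics (focal length, principal point) below are illustrative values, not an actual Kinect calibration:

```python
# Unproject a depth-map pixel to a 3D point in camera space using a
# pinhole model. fx, fy, cx, cy are assumed, illustrative intrinsics -
# not a real Kinect calibration.

def depth_to_point(u, v, z_m, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
    """Pixel (u, v) with depth z_m metres -> (x, y, z) in camera space."""
    x = (u - cx) * z_m / fx
    y = (v - cy) * z_m / fy
    return (x, y, z_m)

# a pixel at the image centre lies straight ahead of the camera:
p = depth_to_point(320, 240, 2.0)
```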

While the Kinect was designed to be used only with the Xbox, within a few hours of its release its signal was decoded simultaneously by unrelated people around the world, and open-source Linux drivers were released to the public. Others then ported the Linux drivers to Mac and Windows, so everyone could start playing with the hardware on their PCs. A nice brief summary of this period and those involved can be found online. To keep it brief I won't go into details; I'd like to focus on why this matters.

What the Kinect does is nothing new. There have been depth-sensing cameras on the market for quite a while, some probably with better quality and build. What sets the Kinect apart? Its price. At £130 it isn't something everyone can go out and buy a handful of, but it is a consumer device: most people who want one can either buy it or will know someone who can get hold of one to borrow. It is a potential common household item. Anything else on the market that comes close to its capabilities costs significantly more (starting at £2000 and jumping up to £4000-£5000+), and - being aimed at industrial, robotics and military applications - is considerably more complicated to acquire and use.

But why does this matter?

For me it's very simple. I like to make things that know what you are doing, or understand what you are wanting to do, and act accordingly. There are many different ways of creating these things. You could strap accelerometers to your arms and wave them around, and have the accelerometer values drive sound or visuals. You could place various sensors in the environment, range finders, motion sensors, microphones, piezos, cameras etc. Ultimately you use whatever tools and technology you have / create / hijack, to create an environment that 'knows' what is happening inside it, and responds the way you designed and developed it to.

What interests and excites me is not the technology, but how you interpret that environment data and make decisions as a result of your analysis. How intuitive is the interface? Does it behave as you'd expect? You could randomly wire environmental parameters (e.g. orientation of an arm) to random output parameters (e.g. audio frequency or speed of video), and it will be fun for a while, but it won't have longevity if you can't ultimately learn to play it and naturally express yourself with it. It won't be an *instrument*. In order to create an instrument, you need to design a language of interaction - which is the fun side of interaction design. That is a huge topic in itself which I won't go into now. The next step is the technical challenge of making sure you can create a system which can understand your newly designed interaction language. It's all too common to design an interaction but not have the technical capabilities to implement it - in which case you end up with a system which reports incorrectly and makes inaccurate assumptions, resulting in confusing, non-intuitive interaction and behaviour. The solution? Smarter analysis, of course. See if there are better ways of analyzing your data to give you the results you need. A complementary solution is to ask for more data. The more data you have about the environment, the better you can understand it, and the smarter, more informed decisions you can make. You don't *need* to use all the data all the time, but it helps if it's there when you need it.

Kinect, being a depth-sensing camera, gives us a ton of extra data over any consumer device in its price range. With that extra data, we are a lot more knowledgeable about what is happening in our environment and can understand it more accurately, and thus we can create smarter systems that respond more intuitively.

A lot of people are asking "what can you do with Kinect that you couldn't do before?". Asking that question is missing the point. It depends what exactly "you" means. Is the question "what can I, Memo, do with Kinect that I couldn't do before?" Or is it "what could Myron Krueger do with Kinect that he couldn't before?" (the answer is probably not much), or is it referring to a more generic "you"?

Kinect makes nothing possible that wasn't already technically possible. It just makes it accessible - not just in terms of price, but also in terms of simplicity and ease. The question should not be "what can you do with Kinect that you couldn't do before", but rather "how much simpler is it (technically) to do something with Kinect that was a lot harder with consumer devices before Kinect?". To demonstrate what I mean, here is a rough prototype I posted yesterday, within a few hours of getting my hands on a Kinect.

Kinect is hooked up to my MacBook Pro; I'm using the open-source drivers mentioned above to read the color image and depth map, and wrote the demo prototype you see above. One hand draws in 3D, two hands rotate the view.

Without Kinect this is completely possible. You could use high-end expensive equipment, but you don't even need to. You could use two cheap webcams, make sure you have good control of your lighting, maybe set up a few IR emitters, and ideally get a clean, unchanging background (not essential, but it helps a lot). And then you will need a *lot* of hairy maths, algorithms and code. I'm sure lots of people out there are thinking "hey, what's the big deal, I don't find those algorithms hairy at all, I could do that without a Kinect, and I already have". Well, smartass, this isn't about you.

With the Kinect, you pretty much just plug it in, make sure there isn't any bright sunlight around, and with a few lines of code you have the information you need. You have that extra data that you can now use to do whatever you want. Now that interaction is available to artists and developers of *all* levels, not just the smelly geeks - and that is very important. Once we have everybody designing, creating and playing with these kinds of interactions - people who pre-Kinect would not have been able to - we will be smothered in amazing, innovative, fresh ideas and applications. Sure, we'll get thousands of pinch-to-zoom-and-rotate-the-photo demos, which will get sickening pretty quickly, but amongst all that will be ideas that you or I would never have thought of in a million years, but will instantly fall in love with; and they will spark new ideas in us, sending us off in a frenzy of creative development, which in turn feeds others, and the cycle continues.

And that's why it matters. 

Of course there are tons of amazing computer vision based projects that were created before Kinect, some created even before computers as we know them existed. It still blows my mind how they were developed. But this isn't about those super smart people, who had access to super expensive equipment and the super skills and resources to pull off those super projects. This is about giving the tools to everyone, leveling the playing field, and allowing everyone to create and inspire one another.

It's still very early days. So far it's mainly been a case of getting the data off the Kinect and into the computer, seeing what that data actually is, how reliable it is, how it performs and what we can do with it. Once this gets out to the masses, that's when the joy will start pouring in :)

Thank you Microsoft for making this, and all the hackers out there who got it working with our PCs within a few hours.

Nov 14 19:46

First tests with Kinect - gestural drawing in 3D

Yes I'm playing with hacking Kinect :)

The XBox Kinect is connected to my Macbook Pro, and I wrote a little demo to analyse the depth map for gestural 3D interaction. One hand to draw in 3D, two hands to rotate the view. Very rough, early prototype.

You can download the source for the above demo (GPL v2) at

Within a few hours of receiving his Kinect, Hector Martin released source code to read an RGB image and depth map from the device on Linux.

Within a few hours of that, Theo Watson ported it to Mac OS X and released his source, which - with the help of others - became an openFrameworks addon pretty quickly.

Now demos are popping up all over the world as people are trying to understand the capabilities of this device and how it will change Human Computer Interaction on a consumer / mass level.

Nov 05 15:38

OpenCL Particles at OKGo's Design Miami 2009 gig

For last year's Design Miami (2009) I created realtime visuals for an OKGo performance where they were using guitars modded by Moritz Waldemeyer, shooting lasers out of the headstocks. I created software to track the laser beams and project visuals onto the wall where they hit.

This video is an open-source demo - written with openFrameworks - of one of the visualizations from that show, using an OpenCL particle system and the MacBook multitouch pad to simulate the laser hit points. The demo is audio reactive and is controlled by my fingers (more than one) on the MacBook multitouch pad (each 'attractor' is a finger on the pad). It runs at a solid 60fps on a MacBook Pro, but unfortunately the screen capture killed the fps - and of course half the particles aren't even visible because of the video compression.

The app is written to use the MacBook Pro multitouch pad, so it will not compile for platforms other than OS X; but by simply removing the multitouch pad sections (and hooking something else in), the rest should compile and run fine (assuming you have an OpenCL-compatible card and implementation on your system).
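The attraction step - each finger acting as a node that pulls particles toward it - is the heart of the kernel below. A simplified CPU re-working of that one force line (hypothetical names, not the shipped code):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Simplified CPU version of the kernel's attraction step:
//   vel -= vecFromNode * attractForce / (dist + 1) * mass
// The +1 in the denominator keeps the force finite at the node itself.
void attract(Vec2& vel, const Vec2& pos, const Vec2& node,
             float attractForce, float mass) {
    Vec2 vecFromNode = { pos.x - node.x, pos.y - node.y };
    float dist = std::sqrt(vecFromNode.x * vecFromNode.x +
                           vecFromNode.y * vecFromNode.y);
    float f = attractForce / (dist + 1.0f) * mass;
    vel.x -= vecFromNode.x * f;   // pull velocity back toward the node
    vel.y -= vecFromNode.y * f;
}
```

On the GPU the same expression runs once per particle per frame, which is what makes a 512x512 particle field feasible at 60fps.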

Uses ofxMultiTouchPad by Jens Alexander Ewald with code from Hans-Christoph Steiner and Steike.
ofxMSAFFT uses FFT code from Dominic Mazzoni and Don Cross.

Source code (for OF 0062) is included and includes all necessary non-OFcore addons (MSACore, MSAOpenCL, MSAPingPong, ofxMSAFFT, ofxMSAInteractiveObject, ofxSimpleGuiToo, ofxFBOTexture, ofxMultiTouchPad, ofxShader) - but bear in mind some of these addons may not be latest version (ofxFBOTexture, ofxMultiTouchPad, ofxShader), and are included for compatibility with this demo which was written last year.

More information on the project at

Most of the magic is happening in the opencl kernel, so here it is (or download the full zip with xcode project at the bottom of this page)

typedef struct {
    float2 vel;
    float mass;
    float life;
} Particle;
typedef struct {
    float2 pos;
    float spread;
    float attractForce;
    float waveAmp;
    float waveFreq;
} Node;
#define kMaxParticles       512*512
#define kArg_particles          0
#define kArg_posBuffer          1
#define kArg_colBuffer          2
#define kArg_nodes              3
#define kArg_numNodes           4
#define kArg_color              5
#define kArg_colorTaper         6
#define kArg_momentum           7
#define kArg_dieSpeed           8
#define kArg_time               9
#define kArg_wavePosMult        10
#define kArg_waveVelMult        11
#define kArg_massMin            12
float rand(float2 co) {
    float i;
    return fabs(fract(sin(dot(co.xy, make_float2(12.9898f, 78.233f))) * 43758.5453f, &i));
}

__kernel void update(__global Particle* particles,      //0
                     __global float2* posBuffer,        //1
                     __global float4 *colBuffer,        //2
                     __global Node *nodes,              //3
                     const int numNodes,                //4
                     const float4 color,                //5
                     const float colorTaper,            //6
                     const float momentum,              //7
                     const float dieSpeed,              //8
                     const float time,                  //9
                     const float wavePosMult,           //10
                     const float waveVelMult,           //11
                     const float massMin                //12
                     ) {                
    int     id                  = get_global_id(0);
    __global Particle   *p      = &particles[id];
    float2  pos                 = posBuffer[id];
    int     birthNodeId         = id % numNodes;
    float2  vecFromBirthNode    = pos - nodes[birthNodeId].pos;                         // vector from birth node to particle
    float   distToBirthNode     = fast_length(vecFromBirthNode);                            // distance from bith node to particle
    int     targetNodeId        = (id % 2 == 0) ? (id+1) % numNodes : (id + numNodes-1) % numNodes;
    float2  vecFromTargetNode   = pos - nodes[targetNodeId].pos;                        // vector from target node to particle
    float   distToTargetNode    = fast_length(vecFromTargetNode);                       // distance from target node to particle
    float2  diffBetweenNodes    = nodes[targetNodeId].pos - nodes[birthNodeId].pos;     // vector between nodes (from birth to target)
    float2  normBetweenNodes    = fast_normalize(diffBetweenNodes);                     // normalized vector between nodes (from birth to target)
    float   distBetweenNodes    = fast_length(diffBetweenNodes);                        // distance betweem nodes (from birth to target)
    float   dotTargetNode       = fmax(0.0f, dot(vecFromTargetNode, -normBetweenNodes));
    float   dotBirthNode        = fmax(0.0f, dot(vecFromBirthNode, normBetweenNodes));
    float   distRatio           = fmin(1.0f, fmin(dotTargetNode, dotBirthNode) / (distBetweenNodes * 0.5f));
    // add attraction to other nodes
    p->vel                      -= vecFromTargetNode * nodes[targetNodeId].attractForce / (distToTargetNode + 1.0f) * p->mass;
    // add wave
    float2 waveVel              = make_float2(-normBetweenNodes.y, normBetweenNodes.x) * sin(time + 10.0f * 3.1415926f * distRatio * nodes[birthNodeId].waveFreq);
    float2 sideways             = nodes[birthNodeId].waveAmp * waveVel * distRatio * p->mass;
    posBuffer[id]               += sideways * wavePosMult;
    p->vel                      += sideways * waveVelMult * dotTargetNode / (distBetweenNodes + 1);
    // set color
    float invLife = 1.0f - p->life;
    colBuffer[id] = color * (1.0f - invLife * invLife * invLife);// * sqrt(p->life);    // fade with life
    // age particle
    p->life -= dieSpeed;
    if(p->life < 0.0f || distToTargetNode < 1.0f) {
        posBuffer[id] = posBuffer[id + kMaxParticles] = nodes[birthNodeId].pos;
        float a = rand(p->vel) * 3.1415926f * 30.0f;
        float r = rand(pos);
        p->vel = make_float2(cos(a), sin(a)) * (nodes[birthNodeId].spread * r * r * r);
        p->life = 1.0f;
//      p->mass = mix(massMin, 1.0f, r);
    } else {
        posBuffer[id+kMaxParticles] = pos;
        colBuffer[id+kMaxParticles] = colBuffer[id] * (1.0f - colorTaper);  
        posBuffer[id] += p->vel;
        p->vel *= momentum;
    }
}
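The rand() function at the top of the kernel is the classic GPU one-liner hash (those 12.9898 / 78.233 / 43758.5453 constants): deterministic pseudo-random numbers from a 2D seed, with no state to carry between particles. The same idea on the CPU, for reference:

```cpp
#include <cmath>

// CPU version of the kernel's one-liner hash: a deterministic
// pseudo-random float in [0, 1) derived from a 2D seed.
float hash2(float x, float y) {
    float s = std::sin(x * 12.9898f + y * 78.233f) * 43758.5453f;
    return s - std::floor(s);   // fract(s), always in [0, 1)
}
```

In the kernel it's seeded from each particle's velocity or position, so every respawning particle gets its own angle and spread without any random-number state on the GPU.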

Oct 30 19:57

ofxQuartzComposition and ofxCocoa for openFrameworks

Two new addons for openFrameworks. Actually one is an update, and a major refactor - so much so that I've changed its name: ofxCocoa (formerly ofxMacOSX) is a GLUT-replacement addon for openFrameworks allowing native integration with OpenGL and the Cocoa windowing system, removing the dependency on GLUT. It has a bunch of features to control window and OpenGL view creation, either programmatically or via Interface Builder.

ofxQuartzComposition is an addon for openFrameworks to manage Quartz Compositions (.qtz files).

Currently there is support for:

  • loading multiple QTZ files inside an openFrameworks application.
  • rendering to screen (use an FBO to render offscreen).
  • passing input parameters (float, int, string, bool etc.) to the QTZ input ports.
  • reading ports (input and output) from the QTZ (float, int, string, bool etc.).

  • passing images as ofTextures to and from the composition (you can currently pass images as QC Images, but you would have to manually convert those to ofTexture to interface with openFrameworks).


How is this different to Vade's ofxQCPlugin?
ofxQuartzComposition is the opposite of ofxQCPlugin: ofxQCPlugin allows you to build your openFrameworks application as a QCPlugin to run inside QC, while ofxQuartzComposition allows you to run and control a Quartz Composition (.qtz) inside an openFrameworks application.

Here, two Quartz Compositions are loaded and mixed with openFrameworks graphics in an openFrameworks app. The slider at the bottom adjusts the width of the rectangle drawn by openFrameworks (ofRect); the 6 sliders on the floating panel send their values directly to the composition while it's running inside openFrameworks.

Sep 22 18:13

Impromptu, improvised performance with Body Paint at le Cube, Paris.

My Body Paint installation is currently being exhibited at le Cube festival in Paris. At the opening night two complete strangers, members of the public, broke into an impromptu, improvised performance with the installation. Mind blowing and truly humbling. Thank you. My work here is done.

Sep 15 18:07

"Who am I?" @ Science Museum

MSA Visuals' Memo Akten was commissioned by All of Us to provide consultancy on the design and development of new exhibitions for the Science Museum's relaunch of their "Who Am I?" gallery, as well as taking on the role of designing and developing one of the installations. MSAV developed the 'Threshold' installation, situated at the entrance of the gallery, creating a playful, interactive environment inviting visitors to engage with the installation whilst learning about the gallery and the key messages.

Sep 08 16:06

"Waves" UK School Games 2010 opening ceremony

MSA Visuals' Memo Akten was commissioned by Modular to create interactive visuals for the UK School Games 2010 opening ceremony at Gateshead stadium in Newcastle. The project used an array of cameras to convert the entire runway into an interactive space for the opening parade of 1600+ participants walking down the track, as well as a visual performance to accompany a breakdance show by the Bad Taste Cru. All of the motion tracking and visuals were created using custom software written in C++ with openFrameworks, combining visual elements created in Quartz Composer. Again using custom mapping software, the visuals were mapped and displayed on a 30m LED wall alongside the track. The event was curated and produced by Modular Projects, for commissioners Newcastle Gateshead Initiative.

Aug 06 18:14

Announcing Webcam Piano 2.0

Jul 13 12:14

MSALibs for openFrameworks and Cinder

I am retiring my Google Code repo for openFrameworks addons in favour of GitHub. You can now find my addons at . Actually I've taken a leaf out of Karsten Schmidt's book and registered too. For now it just forwards to the GitHub repo, but maybe soon it will be its own site. (Note: you can download the entire thing as a single zip if you don't want to get your hands dirty with git - thank you GitHub!)

There are some pretty big changes in all of these versions. Some of you might have seen that the Cinder guys ported MSAFluid to Cinder and got a 100% speed boost! It's true: they made some hardcore mods to the FluidSolver allowing it to run exactly 2x faster. Now I've ported those changes back to OF, so we have the 100% speed boost in OF too. In fact, carrying on their optimization concepts, I managed to squeeze out another 20%, so now it's 120% faster! (These mods also lend themselves to further SSE or GPU optimizations.)

To prevent this porting back and forth between Cinder and OF I created a system introducing an MSACore addon which simply maps some basic types and functions and forms a tiny bridge (with no or negligible overheads) between my addons and OF or Cinder (or potentially other C/C++ frameworks or hosts). MSACore is really tiny and not intended to allow full OF code to run in Cinder or vice versa, but just the bare essentials to get my classes which mainly do data processing (such as Physics, Fluids, Spline, Shape3D etc. - hopefully OpenCL soon) to run on both without modifying anything.
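The bridge idea can be sketched as a small header that maps a common type onto whichever host framework is present at compile time. This is an illustrative sketch only - the names and macros here are assumptions, not the actual MSACore source:

```cpp
// Illustrative sketch of a framework-bridge header (NOT the real MSACore).
// The host is chosen at compile time; addon code only ever sees the
// MSA:: types, so the same data-processing code compiles under either host.
#if defined(MSA_HOST_OPENFRAMEWORKS)
    #include "ofMain.h"
    namespace MSA { typedef ofVec2f Vec2f; }
#elif defined(MSA_HOST_CINDER)
    #include "cinder/Vector.h"
    namespace MSA { typedef ci::Vec2f Vec2f; }
#else
    // standalone fallback so the addon code still compiles anywhere
    namespace MSA {
        struct Vec2f {
            float x, y;
            Vec2f(float x_ = 0, float y_ = 0) : x(x_), y(y_) {}
        };
    }
#endif
```

Because the bridge only maps basic types and functions, the overhead is zero (it all resolves at compile time), which is what lets the same FluidSolver source serve both communities.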

So now any improvement made to the addon by one community will benefit the other. Feeling the love :) ?

Some boring tech notes: everything is now inside the MSA:: namespace instead of having an MSA prefix, i.e. MSA::FluidSolver instead of MSAFluidSolver. So just by adding "using namespace MSA;" at the top of your source file you can use FluidSolver, Physics, Shape3D, Spline etc. without the MSA prefix (or carry on using MSA:: if you prefer). I think it aids readability a lot while still preventing name clashes.

There are more changes in each addon so check the changelog in each for more info. e.g. MSA::Physics now has a MSA::Physics::World which is where you add your particles and springs (instead of directly to physics), and the MSA::Fluid has an improved API which is more consistent with itself. So backwards compatibility will be broken a bit, but a very quick search and replace should be able to fix it. Look at the examples.

P.S. this is the first version of this MSACore system (more like 0.001) so it may change or there may be issues. If you are nearing a deadline and using one of these addons, I'd suggest you make a backup of all of your source (including your copy of MSAxxxx addon) before updating!

Any suggestions, feedback, tips, forks welcome.

Jul 09 00:29

ofxWebSimpleGuiToo for openFrameworks (call for JQuery gurus!)

ofxWebSimpleGuiToo is an amazingly useful new addon for openFrameworks from Marek Bereza. With one line of code it allows you to create a webserver from within your OF app and send your ofxSimpleGuiToo gui as an html/javascript page, allowing remote clients to control your OF app from a regular web browser. These can be another PC or Mac, or android device, iPod Touch, iPhone, iPad etc. you name it. No specific app is needed on the client, just a simple web browser. In the photo below you can see the OF app running on the laptop sending the gui structure to an iPad and an iPhone - both running safari, which in turn can control the OF app.

There is still more work to be done; any JavaScript / jQuery gurus out there willing to improve the client end are encouraged to come on board and finish it off!

If you're interested please get in touch

More information on ofxWebSimpleGuiToo and download can be found on Marek's google code
(you will also need his ofxWebServer).

and you will need the latest ofxSimpleGuiToo from my github
(from here you will also need ofxMSAInteractiveObject)

Jun 23 11:16

BLAZE visuals

Earlier this year MSA Visuals directed and produced the visuals for the West End dance show BLAZE. Below is a short snippet of some of the visuals. You can see more information at

Jun 03 16:42

Metamapping 2010

I was recently at a workshop / residency at the Mapping Festival 2010 in Geneva, Switzerland. Many collaborators - artists, musicians, performers, visualists - took over various spaces at La Parfumerie to create audio-visual performances and installations. Constructing a large scaffold structure in the main hall, armed with DMX-controlled lights, microphones, cameras, sensors and projectors, we converted the space into a giant audio-visual-light instrument for the audience to explore, play with, be part of, and experience as a non-linear narrative performance. The project involved live projection mapping, motion tracking, audio-reactive visuals, piezo-reactive audio and visuals, DMX-controlled lights, rope gymnasts, acrobats and much more!

Video coverage of the event coming soon.

More information can be found at Mapping Festival and 1024 Architecture.