disguise insights - Projection Mapping - How it’s done

Welcome to disguise Insights, a series of tutorials and knowledge articles focusing on different elements of the disguise production toolkit. Last week we looked at video mapping: what it is and why we do it. This week we will explore projection mapping and how it is done.

Written by Peter Kirkup, Technical Solutions Manager EMEA  

All projection mapping starts with a concept. It’s important when considering the concept to think about who the audience is, where they will be located, and most importantly, what object you want to projection map. Most often, this is a building, but it could also be a sculpture or a temporary structure. From there, 3D models of the projection mapped objects are built. These 3D models will form the basis of the mapping, so it’s important that they are accurate; for complex geometry, a laser scan can be performed, which can give millimetre precision to the mapping.

Once a 3D model of the projection mapped object is complete, a process called UV mapping converts the object, in software terms, into a surface capable of receiving video. The UV map defines the mapping between the 2D video file and the 3D geometry of the object; it determines which pixels will fall where on the object and is critical to modern projection mapping workflows.
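
To make the idea concrete, here is a minimal Python sketch of what a UV map encodes: each 3D vertex carries a (u, v) coordinate into the 2D content frame, so the renderer knows which pixel of the video lands on which part of the object. The geometry, UV layout and frame below are purely illustrative and not taken from any disguise project file.

```python
import numpy as np

# Hypothetical corner vertices of one face of the mapped object (metres)
vertices = np.array([
    [0.0, 0.0, 0.0],
    [2.0, 0.0, 0.0],
    [2.0, 3.0, 0.0],
    [0.0, 3.0, 0.0],
])

# Matching UV coordinates in [0, 1]: this face occupies the left half of the content frame
uvs = np.array([
    [0.0, 0.0],
    [0.5, 0.0],
    [0.5, 1.0],
    [0.0, 1.0],
])

def sample_frame(frame: np.ndarray, uv: np.ndarray) -> np.ndarray:
    """Nearest-neighbour lookup of a video frame (H x W x 3) at UV coordinates."""
    h, w = frame.shape[:2]
    x = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    # Flip v so that v = 0 is the bottom of the frame, a common UV convention
    y = np.clip(((1.0 - uv[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)
    return frame[y, x]

# A stand-in 1080p frame: channel 0 (treated as red) ramps from left to right
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
frame[..., 0] = np.linspace(0, 255, 1920, dtype=np.uint8)

print(sample_frame(frame, uvs))  # the colour each vertex would receive
```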

Video mapping at Circle of Light Moscow International Festival

When the UV mapping is complete, the 3D object file can be loaded into a projection simulation tool, such as the disguise production toolkit. This tool enables the production team to simulate the position of the projectors and the projection mapped object and see how the projection pixels behave in that setup. Advanced tools also simulate light levels (luminosity), enabling photometric calculations that show just how much light will be visible on the object during the projection show. At this point, projection designers can design the optimum projector setup, choosing projector lenses and stacking configurations to reach the desired outcome.
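
As a rough illustration of the kind of photometric reasoning involved, the sketch below estimates the average illuminance a projector stack delivers onto a surface and compares it with ambient light. The lumen figures, blend-loss factor and ambient level are assumed numbers for the example, not output from the disguise simulator.

```python
# Back-of-envelope photometrics: average illuminance on the surface is roughly
# total projector lumens divided by the illuminated area, which designers
# compare against ambient light to judge contrast.

def surface_illuminance_lux(projector_lumens: float, n_projectors: int,
                            coverage_m2: float, blend_loss: float = 0.85) -> float:
    """Approximate average illuminance (lux) on the mapped surface.

    blend_loss is a hypothetical combined factor for blend overlap and lens losses.
    """
    return projector_lumens * n_projectors * blend_loss / coverage_m2

# Example: two stacked 30,000-lumen projectors covering a 20 m x 12 m facade
facade_area = 20 * 12  # m^2
lux = surface_illuminance_lux(30_000, 2, facade_area)
ambient_lux = 10  # hypothetical night-time ambient level on the facade

print(f"Projected: {lux:.0f} lux, contrast vs ambient: {lux / ambient_lux:.1f}:1")
```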

Now it’s time for content production. The projection design will influence the overall resolution of content required to cover the object and make it look good. Once this is known, content can be produced using the same UV mapped 3D file. There is a plethora of content production methods out there, from analogue stop-motion graphics all the way to real-time rendered effects using 3D content tools such as Notch. A good content house will know which tool to apply depending on the desired storyboard.
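
As an example of how the projection design drives content resolution, this small sketch works out the pixel density a projector delivers on the surface and, from that, a sensible canvas size for the content team to render at. The projector raster, coverage width and facade dimensions are hypothetical.

```python
# The projection design dictates how many pixels land on the object, which in
# turn sets the resolution the content should be rendered at so it is not
# scaled up on playback.

def pixels_per_metre(native_width_px: int, image_width_m: float) -> float:
    """Horizontal pixel density a single projector delivers on the surface."""
    return native_width_px / image_width_m

# Example: a 4K projector (3840 px wide) filling a 16 m wide section of facade
density = pixels_per_metre(3840, 16.0)        # 240 px/m
facade_width_m, facade_height_m = 48.0, 20.0  # hypothetical overall canvas

content_w = round(facade_width_m * density)   # 11520 px
content_h = round(facade_height_m * density)  # 4800 px
print(f"Render content at roughly {content_w} x {content_h} px")
```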

Architectural Mapping of Aachen Cathedral

Whatever the content source is, you’ll also need a media server to play back the created content and manage on-site alignment and blending. disguise has been built to support multiple editors working simultaneously on the same file, so whilst the content team works on updates in the 3D simulation engine, the technical delivery team on site can be carrying out a line-up. The line-up is the process of matching the virtual 3D model with the real-world object, including moving the virtual projectors into place exactly as they have been rigged on site. This process is helped by disguise calibration tools such as QuickCal and OmniCal, the latter using cameras to capture the projector setup in just a few clicks.
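
To give a flavour of the idea behind this kind of calibration, the sketch below uses generic OpenCV pose estimation to recover a projector position from known 3D points on the object and their observed positions in the projector raster. It is a simplified stand-in, not the QuickCal or OmniCal algorithm, and the intrinsics, pose and reference points are made up for the example.

```python
import numpy as np
import cv2

# Hypothetical intrinsics for a 1920 x 1200 projector treated as an inverse camera
K = np.array([[2200.0, 0.0, 960.0],
              [0.0, 2200.0, 600.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume no lens distortion for the sketch

# Eight reference points on the building, in metres (taken from the 3D model)
object_points = np.array([
    [0.0, 0.0, 0.0], [8.0, 0.0, 0.0], [8.0, 6.0, 0.0], [0.0, 6.0, 0.0],
    [2.0, 1.5, 1.2], [6.0, 1.5, 1.2], [6.0, 4.5, 0.8], [2.0, 4.5, 0.8],
], dtype=np.float64)

# Ground-truth pose used only to synthesise the "observed" raster positions
rvec_true = np.array([0.05, -0.1, 0.02])
tvec_true = np.array([-4.0, -3.0, 25.0])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

# Recover the projector pose from the 2D-3D correspondences
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
print("solved rotation (Rodrigues):", rvec.ravel().round(3))
print("solved translation (m):", tvec.ravel().round(3))
```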

When you’re all set on site, it’s time to run the show - and of course reliability is key. High-profile shows often demand redundancy and a network of multiple servers, all working together to deliver the show. After all, the world is watching.