Blog | Sunday 01/01/2023

Inside Moment Factory: How the innovative studio is experimenting with AI motion capture

AI-powered real-time motion capture

Multimedia studio Moment Factory is pushing the limits of immersive experiences. Read on to discover how the team uses Disguise and Move.ai’s AI-powered real-time motion capture solution, Invisible, to redefine immersive experiences.

From large-scale concert visuals for Billie Eilish to an interactive passenger experience at Hong Kong airport, Moment Factory has done it all. The worldwide team of top-tier creatives are experts in delivering mesmerising location-based experiences that merge real and virtual worlds through an awe-inspiring blend of colours, lights and sounds. 

So what could such brilliant minds be planning for the future? According to the studio’s Director of Innovation Céline Mornet, the answer is simple: live hybrid performances that audiences don’t just watch—but actively engage in, too.


Capturing the moment

To achieve this vision, the studio is experimenting with Invisible, Disguise and Move.ai’s solution for real-time markerless motion capture.

“Our ultimate goal is to unite and connect all genres of audiences with a live performance by taking them beyond just being passive spectators. We want people to influence our experiences, regardless of their platform or location,” says Mornet. “Doing this, however, requires a tracking solution with motion capture. With a markerless motion capture solution, performers can move more freely while we record datasets of their movements. We can then use this data in real time to augment the performances in two main ways: by giving the artists the necessary tools to express themselves in the context of a hybrid performance, and by offering audiences windows of interaction. So real-world audiences could affect what is happening in Moment Factory’s digital experiences, and vice versa.”

“We tried plenty of motion capture solutions,” Céline Mornet continues, “and none of them could cope with all the demands of real-world locations, with latency, calibration and low-light conditions being common issues.”

When Disguise and Move.ai announced Invisible, it felt like the news Moment Factory had been waiting for. The software extracts natural human motion from video, using advanced AI to automatically retarget the data to a character rig in Unreal Engine. For Moment Factory, that meant human motion could be accurately tracked in real time, without costly suits, markers or long set-ups.
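The data flow described above, from per-frame joint estimates out of video to a named character rig, can be illustrated with a deliberately simplified sketch. All joint and bone names here are hypothetical; the real Invisible pipeline (Move.ai, Disguise and Unreal Engine) is far more sophisticated, and this only shows the shape of a retargeting step.

```python
# Hypothetical mapping from estimated joint names (as an AI pose
# estimator might emit them) to a character rig's bone names.
JOINT_TO_BONE = {"hip": "pelvis", "l_knee": "calf_l", "r_knee": "calf_r"}

def retarget_frame(joints: dict) -> dict:
    """Map one frame of estimated joint transforms onto rig bones,
    dropping any joints the target rig does not use."""
    return {JOINT_TO_BONE[j]: xform
            for j, xform in joints.items() if j in JOINT_TO_BONE}
```

In a real-time system this kind of mapping runs once per captured frame, so the character in the engine moves in step with the performer on camera.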


The future in motion

Today, the studio has successfully integrated Invisible into several proofs of concept, enabling the team to augment various aspects of a captured performance with shadows, reflections, dynamic occlusion, particle effects and more.

Invisible is also the key to unlocking even more advanced interactive experiences through scene controls and environment changes. “We aren’t just using Invisible to explore motion-triggered events, we are seeking other reactive elements that give the users an opportunity to interact with their surroundings,” says Mornet. 

“Now, we can also use it to explore augmented performances, which allow artists to perform simultaneously in different locations. Whatever it is, Invisible's flexibility really amplifies our ability to create exciting and engaging experiences for both physical and online audiences, immersing them completely in the environment.”

The studio recently showcased this with XR Karaoke, a hybrid experience that explored the potential of Disguise’s extended reality workflow. In the span of seven weeks, Moment Factory’s Innovation team deconstructed the structural elements and UX of a classic karaoke setup and rebuilt a virtual karaoke system from scratch directly in Unreal Engine. On-site guests could select and queue their song of choice from a UI and were then called to perform on the xR stage. The stage was programmed to react to the performer’s presence and automatically launch the next song in the queue when someone with a tracked mic was on stage.
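The queue-and-trigger logic described above can be sketched in a few lines. This is a minimal, hypothetical reconstruction of the behaviour, not Moment Factory’s actual Unreal Engine implementation; all names are illustrative.

```python
from collections import deque

class KaraokeQueue:
    """Sketch of the XR Karaoke flow: guests queue songs from a UI,
    and the stage starts the next song when a tracked mic arrives."""

    def __init__(self):
        self.queue = deque()
        self.now_playing = None

    def request_song(self, guest, song):
        # Guests pick and queue a song of their choice from the UI.
        self.queue.append((guest, song))

    def on_stage_update(self, tracked_mic_on_stage: bool):
        # The stage reacts to the performer's presence: a tracked mic
        # on stage launches the next queued song; an empty stage resets.
        if tracked_mic_on_stage and self.now_playing is None and self.queue:
            self.now_playing = self.queue.popleft()
        elif not tracked_mic_on_stage:
            self.now_playing = None
        return self.now_playing
```

In practice the `tracked_mic_on_stage` signal would come from the tracking system, polled or event-driven once per frame.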

As the team wanted anyone to be able to participate in this hybrid karaoke experience, they programmed various exposed parameters and triggers for remote audiences to use, either online via Twitch or from a secondary detached zone, letting them join in by influencing the karaoke’s visual rendition in real time.
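One way to picture those exposed parameters and triggers is as a mapping from remote-audience commands (for example, Twitch chat messages) onto show parameters. The parameter and command names below are invented for illustration; the real show logic lives in Unreal Engine.

```python
# Hypothetical exposed visual parameters a remote audience can influence.
EXPOSED_PARAMS = {"confetti": False, "hue_shift": 0.0, "strobe": False}

# Hypothetical chat triggers mapped onto parameter changes.
COMMANDS = {
    "!confetti": lambda p: p.update(confetti=True),
    "!hue":      lambda p: p.update(hue_shift=(p["hue_shift"] + 30.0) % 360.0),
    "!strobe":   lambda p: p.update(strobe=not p["strobe"]),
}

def handle_chat(message: str, params: dict) -> dict:
    # Apply a recognised trigger; ignore everything else.
    action = COMMANDS.get(message.strip().lower())
    if action:
        action(params)
    return params
```

Exposing only a fixed set of named parameters keeps the audience’s influence bounded while still making the visuals genuinely reactive.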


The next stage

For now, Moment Factory’s sights are set on integrating Invisible into its offering.

The Innovation team is looking to combine Invisible with existing tools and sensors, such as OptiTrack for precise, low-latency tracking of moving objects, and nCam, as well as to mix different sets of data to augment performances even further and scale them to spaces of any size and shape.

In terms of market opportunities, the potential is vast, Mornet believes. “From brand activations and hybrid product launches to immersive digital art, digital fashion, esports ceremonies, and experimental and innovative human performance shows, the fields of application are vast and varied.” She views large-scale programmable spaces, like the AT&T Discovery District or the Times Square Hybrid Theater, as ideal settings for Invisible: their multi-functional rooms support hybrid performances and activations, enabling shows to happen simultaneously in the real and virtual worlds.

“Whether at a concert, a flagship store or across an urban square, we aim to inspire a sense of collective wonder and connection. We feel the Invisible technology totally aligns with this vision.”
Céline Mornet

Director of Innovation at Moment Factory