The 2022 Honorees are listed below, alphabetically:

AMD – EPYC
EPYC is AMD’s multicore data center CPU. Its innovation is packing more cores and threads into a single processor than any other CPU used in VFX and Media & Entertainment to date. With a lower power footprint and unmatched performance, the CPU gives VFX and animation studios genuinely more rendering and creation power than they have ever had. AMD’s EPYC has changed the economics of rendering and content creation, giving artists more time with their art by letting them iterate their shots many more times than they could with legacy processors.

BROMPTON TECHNOLOGY – Tessera SX40 LED Video Processor
The Tessera SX40 LED video processor is playing a pivotal role in the in-camera visual effects revolution currently sweeping the industry. Chosen for pioneering projects such as The Mandalorian, it delivers exceptional image quality for both the eye and the camera and has become the gold standard for LED processing. The task of LED processing is to receive a video input and ensure it is accurately displayed on an LED wall made up of many individual LED panels and millions of individual LEDs. Precise genlock between the screen and camera is essential to avoid visual artifacts, and the SX40 is the only processor on the market that achieves this reliably in a wide range of situations.
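As a minimal illustration of the mapping step described above (a toy sketch only, not Brompton's actual pipeline, and with hypothetical panel dimensions), LED processing can be pictured as slicing each incoming video frame into the per-panel regions that are sent to the individual LED panels making up the wall:

```python
# Illustrative sketch: splitting one video frame across a grid of LED panels.
# Panel and wall sizes here are made up for the example.

def slice_frame(frame, panel_w, panel_h):
    """Split a frame (2D list of pixel values) into per-panel tiles,
    keyed by the (row, col) position of each panel in the wall."""
    rows, cols = len(frame), len(frame[0])
    tiles = {}
    for pr in range(rows // panel_h):
        for pc in range(cols // panel_w):
            # Extract the rectangle of pixels belonging to this panel.
            tile = [row[pc * panel_w:(pc + 1) * panel_w]
                    for row in frame[pr * panel_h:(pr + 1) * panel_h]]
            tiles[(pr, pc)] = tile
    return tiles

# A 4x4 "frame" of pixel indices, split across a 2x2 wall of 2x2-pixel panels.
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
tiles = slice_frame(frame, panel_w=2, panel_h=2)
```

Real processors additionally handle per-LED calibration, color management and the genlock timing the entry describes; this sketch covers only the geometric routing of pixels to panels.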

CINIONIC – Barco Series 4 SP4K-55 Cinema Laser Projector
A next-generation laser projection family for all cinema screens with a 52,000-lumen model to power larger cinema screens. The Barco Series 4 SP4K-55 makes delivering best-in-class, laser-powered cinematic experiences possible for some of the biggest cinema screens. Cinemas can now offer crisp, sharp and vivid laser projection on every screen.
HP – HP Reverb G2 Omnicept Edition
HP Reverb G2 Omnicept Edition is the world’s most intelligent VR headset, equipping developers with the ability to create adaptive, user-centric experiences with a state-of-the-art sensor system that measures muscle movement, gaze, pupil size and pulse. Content creators can now design applications that adapt to each user and take VR experiences to the next level.

Advanced In-Camera VFX Volumes incorporate innovations such as flexible, fast LED tile removal from anywhere in the LED wall; a modular LED ceiling that can be made completely seamless; and a ground support structure with built-in safety and maintenance positions that allows the LED wall to sit flush with the floor.

QUALCOMM – Snapdragon Spaces™ XR Developer Platform
The Snapdragon Spaces™ XR Developer Platform paves the way to a new frontier of spatial computing, empowering developers to create immersive experiences for AR glasses that adapt to the spaces around us. Snapdragon Spaces equips developers to seamlessly blur the line between our physical and digital realities, transforming the world around us in ways limited only by our imaginations.

TENCENT MEDIA LABS – Holographic Live Streaming
This end-to-end holographic livestreaming and VOD system comprises real-time capture and virtual production; machine-learning-based compression and live transmission; and multi-viewpoint light field rendering with eye tracking, in which contextual content and interactive viewer feedback affect both the live stream and the local content rendering system. The scale of the platform presents the opportunity for the most impactful implementation of advanced immersive technology achieved thus far, with unprecedented social impact in aiding underserved communities.

V-NOVA LIMITED – V-Nova Point Cloud Compression
V-Nova Point Cloud Compression powered the release on the Steam Store of the world’s first photorealistic 6DoF VR movie, bringing whole new levels of quality and immersion to anyone with a VR gaming setup. The technology achieves unprecedented quality, file sizes many times smaller than any existing alternative, and ultra-fast processing. Effective volumetric data compression unleashes the commercial potential of 6DoF VR movies and advertising for the masses.
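For intuition about why volumetric data compresses at all, here is a deliberately simple sketch (not V-Nova's codec, whose methods are proprietary and far more sophisticated): snapping points to a voxel grid merges near-duplicate samples, trading a controlled amount of spatial precision for a smaller point set.

```python
# Hedged toy example: lossy point-cloud reduction via voxel-grid quantization.
# Production codecs add entropy coding, attributes, and temporal prediction.

def quantize_points(points, voxel_size):
    """Snap 3D points to a voxel grid and drop duplicates that
    land in the same voxel, shrinking the cloud."""
    seen = set()
    out = []
    for x, y, z in points:
        key = (round(x / voxel_size),
               round(y / voxel_size),
               round(z / voxel_size))
        if key not in seen:
            seen.add(key)
            out.append(tuple(v * voxel_size for v in key))
    return out

# Two nearly identical points collapse into one voxel; the distant one survives.
cloud = [(0.01, 0.02, 0.0), (0.02, 0.01, 0.0), (1.0, 1.0, 1.0)]
compact = quantize_points(cloud, voxel_size=0.1)
```

The voxel size sets the quality/size trade-off: smaller voxels preserve more detail but merge fewer points.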

WĒTĀ FX – Wētā FX Face Fabrication System
Wētā FX’s Face Fabrication System (FFS) provides a novel approach to utilizing neural networks for final facial likeness rendering that meets the quality demands and production rigours of visual effects for feature films. The system executes face replacements: from imagery of a stunt performer, it uses neural rendering to generate a corresponding image in the principal actor’s likeness, matching the perspective, lighting and facial expressions of the specific shot context.