
The acclaimed director of photography discusses collaborating with the visual effects artists at Industrial Light & Magic.

By Mark Newbold

Actors Christopher Lloyd (Doc Brown) and Michael J. Fox (Marty McFly) film blue screen elements at ILM’s Kerner facility (Credit: ILM & Universal).

“When I was a kid, I was interested in magic,” remembers Dean Cundey, the Academy Award-nominated cinematographer behind Romancing the Stone (1984) and Who Framed Roger Rabbit (1988), who sat down with us recently to reminisce about another Robert Zemeckis classic, Back to the Future (1985). “When I got my first little magic set for Christmas, I enjoyed the idea of fooling people, using magic to create an illusion of the impossible or the unexpected. I think that mindset coincided perfectly with what I did in film. It was all about making the audience believe something impossible, like traveling in time.”

That mindset married perfectly with the goals of Industrial Light & Magic, which by 1985 had won six Academy Awards for its groundbreaking visual effects and, with Star Wars concluded (for the time being), was forging ahead into a new realm where the potential of digital visual effects grew more promising by the day. Exciting times, but then as now, technical prowess needed to be paired with a strong vision and a story worth telling. ILM was brought in to provide a total of 27 visual effects shots for a new Steven Spielberg-produced action comedy called Back to the Future, and that innovation and thirst for an engaging audience experience needed to be front and center.

Enter Ken Ralston, one of just five visual effects supervisors at ILM at the time, tasked with helping bring to the screen the unique requirements of a time-travel story that took us from 1985 to 1955 and back again.

“I took the project initially because there were a lot of pyrotechnics in it,” Ralston told American Cinematographer back in 1985. “In the first draft of the script, to get back to the future Marty had to go through one of the first A-bomb testing sites and drive into an A-bomb that’s dropped on him. That really intrigued me because we were going to have to duplicate all the old footage you’ve seen where the blast rips those buildings and houses out of the ground. I thought, great! And, as usual, as soon as I was on the show it changed dramatically.”

Budget and technology led to a number of alterations, but good old-fashioned spitballing produced one of the film’s most memorable concepts: the DeLorean ripping through time at 88 miles per hour as it leapt forward or backward through the years. ILM animation supervisor Wes Takahashi and his team were handed the task of bringing the idea to life on celluloid. With Zemeckis, the ILM crew explored multiple iterations of the so-called “time slice” created by the vehicle.

“No one can describe what they want in a shot like Bob Zemeckis can,” Ralston told Cinefex. “He wanted something really powerful – everything in the show had to be very fast and very violent. The way he put it was: ‘Time travel is not pretty.’”

Speaking recently with ILM.com, Dean Cundey agrees with what Ralston said 40 years ago. “Bob Zemeckis was very much a visual storyteller, who was open to suggestions and embellishments on how to implement the vision he had for a scene.”

Cundey explains the cinematographer’s place in the creation of a scene. “I’ve always looked at the cinematographer as the bridge between the technical stuff and telling the story. Typically, the director is at the forefront of storytelling and the emotion that the audience sees and experiences, and the cinematographer is the one who implements that vision using cameras, lights, and so forth. Those two sides have to blend in the middle for the audience to experience the movie. Sometimes the director will rely on the cinematographer for inspiration; how wide is the shot, should the camera move, that kind of stuff.”

That ease of collaboration between director and cinematographer, a partnership that would continue throughout the Back to the Future trilogy, was a creative gold mine that freed up both Zemeckis and Cundey to bring their very best work to the show. “I think Bob realized that I was very much interested in the storytelling as opposed to ‘What’s a cool way to light this shot?’ I’ve always been interested in immersing the audience in the scene as opposed to just creating a striking image.

“One of my techniques is I’d watch a rehearsal,” Cundey continues, “speak with the actors, and have a preliminary conversation with the director about the scene and the images that would tell the story of the scene, as well as evoking the proper emotional response from the audience. Bob would describe a scene to the actors and then work on the staging and how the characters react, and every now and then, he would turn to me and then say, ‘Okay, what do I want to do here?’ Which was a very, very nice way of saying, ‘What suggestions do you have?’”

Cundey continues, “Rather than standing next to the director, I would go and stand off to one side, or in a different spot to see if there was anything from that position that told the story as far as character relationships and movement. I very much appreciated the fact that I could watch and interpret on my own, and then present some of these thoughts and ideas to Bob, so I could understand what he wanted from the scene, the characters, and the storytelling.”

With fewer than 30 visual effects shots judiciously placed throughout the movie, the focus in Back to the Future falls on a trio of key sequences: the parking lot of Twin Pines Mall in 1985 and the initial jump forward, when Doc Brown’s dog, Einstein, is successfully sent a minute into the future without creating a paradox; the iconic clock tower sequence in 1955, as Doc Brown has to complete an elaborately rigged series of events, timed to perfection, to send young Marty McFly back to his present in 1985; and the final scene, as Doc returns from the future of 2015 to collect Marty and his girlfriend Jennifer, and we see a future version of the DeLorean fly down the road and turn, roaring towards the camera as the credits roll.

Let’s start in Hill Valley and the parking lot of Twin Pines Mall at 1:15 a.m. on October 26, 1985. Speaking with American Cinematographer back in 1985, Ken Ralston described the process of capturing the moment when Doc Brown and the DeLorean return from a minute in the future.

Twin Pines Mall

(Credit: ILM & Universal).

“The professor is talking and suddenly he grabs Marty and starts to spin him out of the way of the camera,” Ralston explained to American Cinematographer. “That’s all blue screen. The actors were blue screened separately from the background. We pan off as the car is forming and skids into the shot and spins around, then it’s frozen as it comes back into the present. That was certainly one of the better blue screen things.”

Ralston continued with Cinefex. “There were lots of little tiny details that no one ever sees. The time slice shots are quick, but that makes them work nicely, too. They don’t feel like effects shots, just kind of like ‘wham’ and suddenly the thing’s going. Bob never wanted the audience to get ahead of what his gags were. He never wanted you to be able to think about what was happening, he wanted the car gone by the time you had figured it all out.”

Forty years later, Dean Cundey thinks back to ILM’s approach, a mindset that has remained consistent for over half a century.

“ILM were so great at finding new ways and even old ways that had been pioneered back in the earliest days of filmmaking, but embellishing and improving it so that it became even more believable. It fooled you completely.”

The Clock Tower

(Credit: ILM & Universal).

Towards the conclusion of the film, as Doc struggles to ensure Marty returns to 1985, we find him hanging off a cable attached to the Hill Valley Clock Tower, waiting for the inevitable bolt of lightning that would strike at 10:04 p.m. on November 12, 1955. The sequence was filmed on the Universal backlot on a set designed by production designer Lawrence G. Paull and his crew, with Wes Takahashi and ILM’s animation department creating the bolts of lightning, an effect that carried huge expectations.

“The script had called for ‘the largest bolt of lightning in cinematic history,’” explained Takahashi to Cinefex. “It was initially intended that the bolt should last for over a hundred frames, but to have the thing flash over any length of time makes it clear that the shot is obviously animated.”

It was Zemeckis who found the solution, identifying a specific frame from the lightning footage and describing to Takahashi how he wanted the strike to look, “to travel in this S-shape and hit.” And hit it did, landing cinema’s most famous lightning strike.

The Final Scene

(Credit: ILM & Universal).

Combining live-action filmed in Arleta, California, miniature work by veteran modelmaker Steve Gawley (with Ira Keeler, Mike Fullmer, and anyone else available to pitch in), ILM’s visual effects, and the cinematography of Dean Cundey, the final shot of the DeLorean’s flight into 2015 left audiences hungry for more. Steve Gawley explained to Cinefex the process for bringing that model to life and making it fly.

“Because we didn’t have any existing model kits to use, the entire car was built from scratch. We had looked into an eighth-scale Corvette model and perhaps using its tires, but we decided that wasn’t going to work either. So we ultimately built our own model, which turned out to be one-fifth scale of a real DeLorean – about 32 inches – and used the photographs we got from the existing stunt car for reference.”

The marriage between the background plates, shot on ILM’s VistaCruiser motion-control camera system in a windy Arleta, and the ILM model was a hefty undertaking, especially given the less-than-direct route the crane was required to take.

Matching the crane shot was straightforward enough, but working out where the car would be on any given frame meant plotting the move with graph paper on a Moviola and lining up all the points, a process that stretched the rig to its capacity. With the car turning completely around and accelerating back towards the camera, the model had to be mounted so audiences never lost sight of it, right up to the moment it leapt through time towards 2015.

Animator Ellen Lichtwardt explained to Cinefex the intense work that went into that final shot of the DeLorean passing under the trees of Lyon Drive. “About ten people from rotoscope and animation worked day and night on it for about a week, and that was aside from the motion control work.”

With all of these elements brought together for a shot no one would ever forget, there was still time for an 11th-hour idea from Zemeckis to be added in, as Ralston explained to American Cinematographer.

“At the last moment, Bob Zemeckis thought it would be funny to put a little turn signal on the car when it goes back and does the turnaround. I don’t know if anyone sees it, but it’s there – a little tiny thing blinking on and off. It was done in animation because we didn’t have that when the scene was shot.”

The final shot was taken from the lab and printed without the shadows of the trees being added to the car as it flew beneath them, despite Ralston and his team having the elements to complete the work. As is often the case, time was the enemy.

Looking back, Dean Cundey remembers the end sequence well. “Ken Ralston came to us one day and said, ‘We think we can make this DeLorean fly, here’s what we’re going to do. Put it on cables, and then we’re going to remove the cables.’ He talked a little bit about it, shooting the shot twice and all of that, but the first film was about the storytelling, as opposed to the effects.”

In terms of ILM matching its miniature photography to the on-set look Cundey had established, there was constant collaboration. “I was always interested and paid a lot of attention to the techniques and the technology that ILM was creating, and did my best to understand or keep abreast of the creations that were happening, so it was very much a two-way street of me understanding and then talking to the ILM people that were involved to understand the technology and their techniques. For example, I didn’t have to know how to run an optical printer, but I would understand its purpose.”

The Post-Production Rush

Modelmaker Ira Keeler at work on a wooden sculpture of the DeLorean that would be used as a mold for the final miniature (Credit: ILM & Universal).

With the film in the can and visual effects work well underway, the comparatively brief but ultimately very efficient post-production period began, with all corners of the production racing to assemble this time-shifting jigsaw puzzle ahead of the film’s release on July 3, 1985. That meant all hands to the pump, even hands belonging to former key ILM team members, like post-production supervisor Art Repola, who’d started at ILM as a production coordinator. With a tight turnaround, the team had roughly eight weeks to complete the film after principal photography wrapped.

Speaking with American Cinematographer, Ken Ralston remembered just how brutal that schedule became. “They cut about two weeks out of our schedule because they moved up the release date. They handed out thousands of those buttons, ‘On July 19 we’re going BACK TO THE FUTURE,’ but it was really July 3! So we killed ourselves for a few days getting the thing shot.”

Cundey fondly remembers his association with ILM, which also included his part in a monumental leap forward in visual effects on Steven Spielberg’s Jurassic Park (1993). “I really enjoyed working with ILM so often because they’re the people who are most interested in furthering the illusion that cinema could do. If you look at my filmography with ILM, so much of it is all about finding new ways to create the illusion.”

Cundey gives insight into how he explains the collaborative process to film students eager to inhale any knowledge they can from the industry veteran.

“I often say to film students, you can have a canvas and oils, go in your garage and create by yourself, or sit in front of your typewriter and write a novel by yourself, but when you get into feature films, it’s a huge collaboration. It’s one of the art forms that requires a great number of people and visions. You hope that the substance of it – the story, the plot, and the characters – is good, and that this ensemble of skilled and talented and hardworking people can contribute and bring together these separate visions under the guidance of the director.”

Modelmaker Steve Gawley at work on the DeLorean with its futuristic upgrades (Credit: ILM & Universal).

Back to the Future’s sense of legacy and of making a film of substance was something Cundey was aware of during its production. He also treated it as a learning experience in a career that began over 50 years ago.

“You hope that what you’re working on at the moment is going to be a great success, and you always approach it that way,” he remembers. “Back to the Future was such a unique story, and with Michael J. Fox and Christopher Lloyd, you had such a great set of characters. I thought, ‘We’re really on to something interesting and unique here,’ and here we are 40 years later with so many people still engaged.

“I don’t know how many times I go somewhere and someone says, ‘Oh my gosh, Back to the Future is one of my favorite movies,’” Cundey says. “I just showed it to my kids. They’re passing on this tradition, this adventure, and even though it’s set in a particular time and place where 1955 was the past, well, 1985, that’s the past too, even though it was supposed to be the present when it first came out. They’re willing to overlook the fact that they’re looking at 1985 because there’s something about the characters, the situations, the places that still apply. People can empathize with it.”

Ultimately, as with all great films, be they financial and cultural smashes or overlooked gems, cinema is the canvas that creatives like Cundey, Zemeckis, and the artists at ILM paint on, along with numerous other cast and crew. It’s a magic trick, an illusion not unlike ones found inside that “little magic set” Cundey was gifted for Christmas all those years ago.

“When you think about it, you go into a big room and in front of you is a flat white wall. Then they project moving images on it that so many people have been part of creating, and hopefully the audience get so immersed that they believe what they’re watching. Then, when the film is over, the white wall comes back, and they leave having experienced this magical, intangible thing. Hundreds of people are involved, from the early writing and pre-production through production, post-production, all bringing their best to make that intangible thing. That’s the thing about Back to the Future. The believability of the illusion is there, so people continue to follow it.”

The DeLorean miniature is photographed by ILM’s VistaCruiser motion-control camera system (Credit: ILM & Universal).

Mark Newbold contributed to Star Wars Insider magazine for twenty years, is a 4-time Star Wars Celebration stage host, avid podcaster, and the Editor-in-Chief of FanthaTracks.com. Online since 1996. You can find this Hoopy frood online @Prefect_Timing.

The ILM visual effects supervisor discusses helping craft the world…and the Creature…of Frankenstein for the Netflix production.

By Mark Newbold and Jay Stobie

(Credit: ILM & Netflix).

Written and directed by Guillermo del Toro, Netflix’s Frankenstein (2025) delivers a vibrant take on Mary Shelley’s classic story, with Oscar Isaac and Jacob Elordi supplying masterful performances in their respective roles as Dr. Victor Frankenstein and The Creature. Industrial Light & Magic’s visual effects contributions to the film were overseen by ILM visual effects supervisor Ivan Busquets (Bumblebee [2018]; The Irishman [2019]; Aquaman and the Lost Kingdom [2023]), who sat down with ILM.com to reflect on his work, from the real-world references used to create Frankenstein’s environments and animals, to utilizing visual effects as a way to supplement the film’s practical elements, and much more.

Frankenstein’s ‘Filmscape’

“ILM was one of the two main visual effects vendors on the show, alongside MR. X,” Busquets tells ILM.com. “ILM was assigned around 200 shots, primarily dealing with some of the icescape sequences at the beginning and end of the movie, as well as a lot of the creature work, both for The Creature itself and also the deer, wolves, and mice.” While a portion of the mouse shots were handled by ILM’s Mumbai studio, the vast majority of the work was split between the Vancouver and San Francisco studios. “Logistically, being in the same time zone made sharing work seamless. As long as we were in the same time zone, it didn’t matter whether we were in one location or the other.”

Busquets began his tenure on Frankenstein in August 2023 in preliminary meetings with Guillermo del Toro and production visual effects supervisor Dennis Berardi. “We were lucky to have an asset early on – production provided ILM with what they called the ‘filmscape,’ a document compiled by their production design department,” Busquets recalls. “Created by production designer Tamara Deverell and her team, the filmscape featured everything from location references to artwork, sketches, and even storyboards. From the early stages, we had access to that visual reference, whether it was for sequences that we were going to be working on or others. It supplied an exceptional overview of the movie’s style.”

The filmscape provided ILM with an idea of the final image the filmmakers sought, leading Busquets to praise the director and his team, emphasizing, “In terms of tasking and determining what would be required of ILM, the filmscape was super valuable. Oftentimes, when we’re prepping, in the absence of a document like that, what we’d normally do is go out and find real-world references for the sequences we set out to do.” Although ILM did gather such references for this project, the filmscape expedited the process. “For example, we had the Carpathian Alps environment around the millhouse where The Creature stays. We could’ve scouted a location or photographed mountains, but Guillermo gave us references of landscape painters like Albert Bierstadt. While we based our matte paintings on real photography, we started by studying what Guillermo liked about the filmscape’s visual references.”

(Credit: ILM & Netflix).

Creature Features

ILM’s depiction of the dinosaurs in Jurassic Park (1993) was a seminal moment in visual effects history, demonstrating the capacity to create authentic animals that would delight audiences. The challenge of organic elements like fur and feathers, however, was still beyond early 1990s technology. Jumanji (1995) saw the first photo-realistic computer graphics hair and fur, bringing lions and monkeys to life as they stampeded through Joe Johnston’s classic adventure film. As the technology and ILM’s mastery of these elements developed, the studio was able to bring creatures from this world and beyond to the screen, crafting a menagerie of wildlife that would dazzle even the great John Hammond. For Frankenstein, ILM was tasked with bringing a trio of very real animals to life, and as Busquets explains, that took some work.

“There’s a wolf trainer in Alberta called Andrew Simpson, who’s worked extensively in the industry. The production cast seven wolves to be the pack in the movie, and myself and ILM animation supervisor Adrian Millington went to his ranch during preproduction to gather photographic and video reference of the wolves.

“They trained the wolves for months to do specific actions,” Busquets continues, “so the whole scene had been prepped and blocked for the choreography of the fight. We had the advantage of not only taking high-resolution close-ups of fur, teeth, and claws, but also to film them performing those actions while they were training.” 

In parallel with this intense training period for the wolves, the ILM team “always knew there was a percentage of those wolves that would become CG,” as Busquets explains. “But the real wolves were able to actually perform those actions, so we had the perfect reference from the real world.”

(Credit: ILM & Netflix).

The menagerie extends beyond the wolves’ feral nature. Other animals encounter The Creature in the snow-covered woods, and just as the lupine ferocity of the wolves had been studied, so was the gentle nature of the inquisitive mice and deer.

“The approach to both was a little bit different,” explains Busquets. “We always knew there was going to be a deer in there, but in pure Guillermo fashion, it couldn’t be just any deer. He brought in a deer from a taxidermy artist that he loves and trusts.” Truly, the film was stuffed with granular detail. “It had to be that species of deer. It essentially became the perfect reference that we could have on set for lighting, likeness, and everything.” 

They had their del Toro-approved cervine muse. Now the task turned to bringing it back to life. “Our task was to create a digital version of it and animate it. Guillermo was so detailed in describing the deer he wanted that, even in early conversations, he had pushed us to consider the idea of using some part of the puppet in the movie.” An interesting request, but one that ultimately didn’t happen. “In the end, it was more practical for us to match it closely and replace the whole thing. That’s how specific he was about it.”

In Frankenstein, the mice are a charming curiosity, but their creation was very much after-the-fact, as Busquets explains. “The mice weren’t an afterthought, but they were a later addition. They didn’t exist in the script, but as the movie evolved, Guillermo listened to what the movie was asking for, and he felt he needed to add something to the arc of The Creature to show his connection to nature. Nature isn’t afraid of him, even though humans are.”

Added in postproduction, the mice still needed a motivation, as Busquets explains. “There’s a group of mice living in the millhouse that aren’t necessarily scared but are apprehensive of him. As the days go by, they get more comfortable, and they end up crawling all over him and sleeping with him.” 

With a story in place, the attention turned to the animation. “Because the shots weren’t created with that intent, it was a very interesting exercise in choosing plates and thinking, ‘Okay, how could mice be interacting with The Creature here?’ Guillermo gave us an initial brief, but beyond that, he gave the artists at ILM a lot of room to come up with ideas and interpretations and pitch ideas to him, so visual effects had a lot of creative input. There’s one shot where The Creature’s hand is cupped, and the mouse is hanging out. It looks like it was shot for that specific purpose.”

(Credit: ILM & Netflix).

A Shipshape Standard

Beyond ILM’s animal contributions, the company also brought its expertise to the events on Captain Anderson’s (Lars Mikkelsen) sailing ship. These sequences demonstrate how visual effects function in tandem with their practical counterparts. “It all begins with the integration of assets that exist and are tactile, which production did very successfully in this film. The physical sets were grandiose with a high degree of detail, and an amazing amount of work went into them,” Busquets divulges. “A life-size, full-scale version of the ship was built on a gimbal, which I believe rotated nine degrees from side to side. In the shots where you see The Creature pushing the ship, there’s a weight and tactile nature to it that would have been tricky for the actor to mime.

“The counter side is that the set was installed in a parking lot in Toronto, so there were plenty of visual effects to be incorporated around it,” Busquets continues. “ILM handled some extensions for the ship, such as the sails and the masts. For the ground or the ice sheet surrounding the ship, set decoration did an excellent job of putting out snow blankets and dressing it up with tents and props. The immediate area next to the ship was covered with set decoration, but we extended the environment from there. Our goal was to blend into what they had placed, and production did a splendid job of scanning and giving us photo references of everything on set – the barrels, tents, shovels, and the tools that they used to try to free themselves from the ice.”

Busquets indicates assets that ILM focused on, explaining, “There’s a ladder on the side of the ship that, towards the end, when the ship gets pushed, is knocked over. That ladder was CG. There were practical objects that existed, but we ended up replicating and extending a few of them for various reasons. For instance, we replicated tents because we had to extend that environment a little further. The ship was built practically up to a certain height with a bit of rigging, but we extended the masts and added sails and extra rigging.”

(Credit: ILM & Netflix).

Frozen at Sea

Pivoting toward the ice that freezes to the ship’s hull, Busquets notes, “We created the ice that encased the ship, which is an important story point because the expedition was stuck and couldn’t leave were it not for the kindness of The Creature at the end. The heavy lifting is done with the practical set, but breaking ice is not something that you can easily do practically. It would be incredibly difficult to reset after each take, so using visual effects in that case allows them to do what you can’t accomplish otherwise. The way ILM approached the ice fracturing – both the fissures that open beneath The Creature and the ice cracking as he pushes the ship – was to ground things in reality. We studied footage of ice breakers, as well as the dynamics of large formations of ice breaking.”

As critical as factoring in simulation tools for real-world physics was to ILM’s process, they were not the only elements to be considered. “Our ability to art direct the timing and shape of the fractured pieces to match the pre-broken shards, which production designer Tamara Deverell had, was crucial for Guillermo,” Busquets shares. “We needed to tie into that visual language, so it wasn’t so much about finding reference for breaking ice, but making ice break so it ended up looking like what we had on set. It’s a different challenge, in that it was more keyframe animation rather than simulation-driven.”

Busquets reflects on the complexity, noting that “Once we had the building blocks and the timing and shapes that Guillermo preferred, then every single piece of ice had to break into smaller pieces. There was snow on top of the ice and water beneath. I remember someone on my team described it as a game of ‘rock, paper, scissors,’ but with states of matter – we had ice pushing snow, snow melting into water, and then water displacing the ice again. Our work involved studying how each element interacted and pushed everything else, and it all stemmed from art-directed animation.”

(Credit: ILM & Netflix).

Choosing the Right Tool

The marriage between on-set practical special effects and postproduction visual effects is at the heart of many major productions, forming the connection between set, actor, and the final shots visible on-screen. Understanding how important that marriage between departments is, Busquets explains how the two work in concert to bring sequences to life.

“Visual effects is a tool that adds to every other craft, it’s not a replacement. Even if it’s like, ‘Oh, why did we do all of that? We ended up replacing it anyway?’ I think it makes a difference.” On set, the presence of physical props, rain, fire, wind, and other tangible assets all help to place the actors in the scene, and Busquets appreciates the value of that. “When a film crew sees a blue screen, and nobody knows what it’s going to be – they’ll figure it out later in post – there’s a sense of, ‘What we’re doing doesn’t matter.’ By contrast, when special effects and makeup show up on the day, you’d be surprised with the things that, with some ingenuity, end up working.” As students of real life, visual effects artists constantly learn from what they see on set. “Things that might not work on the set teach you lessons about how to fix it in post.”

A primary location that frames the film, the ice-trapped ship required significant departmental overlap.

“A good example is the gimbal. There’s a point in the story where the crew intentionally fires at the ice to break it. It creates a fissure, and The Creature falls into the water.” A scene that sounds simple, but as with most cinematic moments, there was more to it than meets the eye. “The fall into the ice was a platform that was rigged on set above a pool of water. The shot where he drops in the ice and slides in was shot practically on that platform. We then replace the surface to make it look more ice-like.” 

Despite all the planning, safety checks, and preparation, things can still go wrong, as Busquets reveals. “That stunt ended up breaking his jaw. The shot where that happened is in the movie and has a great physicality to it, and I’m not advocating that people put themselves in that situation, but the alternative to that is we do it in CG, and the impact wouldn’t feel the same.”

Another sequence that required a practical element is when The Creature hurls a crew member onto a burning log fire. “That was a practical, pull-rig stunt. We added augmentation of the embers and made it a bigger fire, along with extending the environment and adding atmospherics like snow and flurries, but by and large, the stunt was all practical.”

(Credit: ILM & Netflix).

Fight or Flight

Once The Creature has fallen through the ice near the ship, his body descends into the depths amidst air bubbles and his own blood. “That was shot dry-for-wet, following a technique that both Dennis and Guillermo had used on The Shape of Water [2017] to great effect. When the shot came up, we wondered if we should shoot it dry-for-wet or inside of a tank, but production said they’d done the former successfully in the past,” Busquets explains. “The actor was on a suspension rig and dropped away from the camera along with a light projection – the idea being to keep the element of The Creature and have ILM replace what’s around him to make it appear as if he’s in deep water. We added the bubbles and – because he’s just come through the ice-covered surface – we placed lots of floating ice chunks. At that point, he’s heavily wounded, so we added blood trickling and mixing with the water, too.”

We eventually see a flashback to The Creature’s pursuit of Victor Frankenstein, learning that a portion of his wounds originated from allowing a stick of dynamite to go off against his body. This offered ILM another opportunity to pair its visual effects prowess with the expertise of another department. “The majority of the looks of The Creature’s wounds were established by the makeup department. They did incredible work,” Busquets declares. “ILM only stepped in a bit with things like negative space, for example. You can’t do negative space makeup, gouge somebody’s eye out, or hollow the inside of their ribcage [laughs].

“After the dynamite goes off, the camera pushes in, and there’s makeup work already done for the wounds,” Busquets continues. “We just replaced the rib cage so that you could see it hollow. The Creature’s hand gets blown off, so half of his hand was a CG replacement as well, so that we could show bones. Basically, ILM’s contributions were additive on top of the makeup department’s work. The same goes for The Creature’s injured eye. They made a tremendous prosthetic eye, but they obviously couldn’t scrape it into the actor’s head, so it bulged out. However, Guillermo really loved the design of that eye, so our job was to replace it while also making it look exactly like the prosthetic.”

(Credit: ILM & Netflix).

Monster Moments

It’s not the fairest of questions, given the hours, teamwork, and craft required to layer in what constitutes a great shot, but Busquets points to a couple of moments that stood out for him. “After the ship gets pushed to freedom by The Creature, there’s a wide shot of the ship sailing out to sea. I like that shot for a number of reasons. One, all of the environment is CG, but we still used the part of the ship that was practical.” Incredibly, amongst that most Arctic of vistas, part of what we see was literally shot in a car park. “It’s the same ship that was in a parking lot on a gimbal, but the gimbal only allowed it to rotate side to side, meaning you couldn’t physically move it front and back.” To enable the use of the landlocked ship, older methods were employed. “We used a very old technique. We can’t move the ship, but we can move the camera. So we were able to keep the ship from the plate and pretend that it was moving,” explains Busquets. “You’re making the camera travel twice the distance. In post, we halve the distance and put the rest on the ship. That technique worked incredibly well because when we first looked at that plate, we thought we’d have to replace the whole ship.”
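
The trick Busquets describes works because, once the surroundings are replaced with a CG environment, only the relative motion between camera and ship registers on screen. A minimal sketch of that idea, using made-up numbers rather than ILM’s actual move data, might look like this:

```python
# Illustrative sketch of the "move the camera, not the ship" idea.
# With the environment fully CG, only the camera-to-ship offset matters,
# so a camera travelling 2d past a static ship reads the same as a
# camera move of d with the remaining d attributed to the ship.
# All numbers are invented for illustration.
d = 10.0  # desired apparent ship travel, in metres

# As shot on the parking-lot gimbal: ship fixed, camera travels 2d.
plate_camera, plate_ship = 2 * d, 0.0

# As presented in the finished shot: halve the camera move in post
# and put the rest on the ship.
final_camera, final_ship = d, -d

# The relative offset, which is what the audience actually perceives, is identical.
assert plate_camera - plate_ship == final_camera - final_ship
print("relative camera-to-ship travel:", plate_camera - plate_ship, "m")  # 20.0 m
```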

It was an elegant solution to the problem, but one that still needed the thumbs-up from the film’s director. “This was a shot that needed to be auditioned, so to speak. Dennis shot this plate and told us, ‘Guillermo is not keen on having drone-like shots because it’s not in the language of the movie, but this environment looks so beautiful that I think we can earn it.’ We did the shot not knowing if it was going to go in the movie or not, but from one of the first iterations, it had a lot of potential. It showed a beautiful environment, and it matched the emotion of the sequence that Guillermo was going for, so he cut it in.” Busquets smiles at the memory. “I’m proud of it for that reason. It’s a shot that originated in visual effects and earned its way into the movie.”

Busquets has another memorable moment from the show, the “spark of life” shot when The Creature first comes to life. “It’s a journey inside of The Creature’s body. The shot starts practical on the set with The Creature, and then we go through the battery, into the heart, and into a blood vessel. We end up in a cavern that has a lymph node that starts to glow.” It’s a beautifully crafted scene, inspired by Mary Shelley, a writer born in 1797 who changed the landscape of modern fiction when, at age 20, she published Frankenstein; or, The Modern Prometheus in 1818. “The lymphatic system is where the movie says that life starts. Essentially, it’s visualizing that spark of life in the lymph node.”

Beyond ILM’s impressive work, Busquets saw a deeper connection to the sequence. “What I like about it are the parallels between the creative process for the shot and the creative process for Victor creating The Creature. Victor had human pieces that he stitched together, and I can imagine it was a process of trial and error, and it was similar for us. We were prepared to do a full CG journey, and we had laparoscopic footage that we used as reference, but what’s more visceral is to use the footage itself and project it onto our inner organ assets. It took a lot of exploration to decide which parts needed to be CG and which needed to use real footage, and there were plenty of ways to assemble it. It took a while to get there.” 

Like the mad doctor, Busquets felt a similar sense of triumph when the scene came together. “I always equated it to how Victor must have felt when creating The Creature, so it was worth it.”

(Credit: ILM & Netflix).

A Visionary Director

In the annals of cinematic history, few directors have a style as instantly recognizable as Guillermo del Toro’s, a flair and vision he used to full effect in Frankenstein.

“For me personally, it was not a hindrance,” explains Busquets. “It was extremely helpful to have someone with such a strong vision. Even if you didn’t know anything about his previous body of work, one thing that stood out to me was that every time we showed him something, he reacted quickly and decisively.” That decisiveness was a boon to the production. “That was a clear indication to me that he already had a clear vision of what he was going for. It was exploratory in terms of how we’re going to get there, but the vision was there from the beginning.”

In ILM’s quest to deliver realistic and engaging visual effects, was that clear vision a help or a hindrance? “It only makes it more difficult if you try to fight it!” says Busquets, laughing as he continues. “We often try to ground things in reality, and that’s very important, but in this film, it was even more important to ground things in the visual language, themes, and the design that were set forth by Guillermo and production design. Staying true to those themes trumped trying to do something physically accurate and photographic. As long as you’re willing to accept that with Guillermo, then his direction is helpful for visual effects artists, and it’s also very satisfying to work for a director like that.” 

Busquets gives an example. “I like that he never told us ‘Go 20% brighter or 50% faster.’ He never gave us notes like that. Instead, he would paint a picture or direct you to another example or bring up obscure references. He has such a database in his head of visual reference; he’d direct you to things that aren’t exactly the thing he wants but are inspirations for it. For artists, that’s gold. It puts you in his head a little bit. Then it’s like ‘Now you give me your interpretation,’ which I thought was brilliant.” 

That would lead to some satisfying interactions with the Academy Award-winning director. “Sometimes he also gets surprised – ‘Oh, I like this idea. I hadn’t thought of that, but I love it, and it’s going in the movie!’ I really appreciate that style of directing.”

(Credit: ILM & Netflix).

Serving the Story

Busquets reinforces the notion that ILM’s primary goal is to craft visual effects that support the story filmmakers wish to tell, referring back to his earlier point that the ice encasing the sailing ship became key to the vessel’s inability to flee from The Creature. “That scene is a perfect example of visual effects in service of the story, which is the best use of visual effects, in my opinion. On a number of occasions, Guillermo insisted to us that he didn’t want to make eye candy – he wanted to make eye protein, as he called it,” Busquets beams. “Yes, we need the visuals, but those visuals need to give people something more meaningful behind them.”

Citing the film’s stylized nature as an indication of the metaphors and connections that del Toro baked into it, Busquets comments, “There are countless visual metaphors in Guillermo’s movies. There are strong visual connections with color, such as the color red being tied to Victor Frankenstein’s mother. If visual effects can tack onto that and further the story, that’s terrific. On the other hand, if you’re just trying to give him something flashy for the sake of being flashy – and that might serve the purpose for other types of stories – it’s not what Guillermo is looking for. Visual effects are important to Frankenstein’s story because Guillermo wouldn’t have had it any other way.”

Busquets believes the visual effects community as a whole must strive to convey a more accurate picture of their on-screen contributions to the public, asserting, “We have a ways to go when it comes to informing audiences and showing what our work actually is. In the past few years, there’s been a trend to consider that anything done with CG is somehow less authentic. There are valid points in that, but I think that criticism often comes from only seeing the CG that’s visible – and not the parts that are invisible. The way I see it, CG and visual effects should be treated as another tool in the toolbox.

“If you try to use the wrong tool for the job, it’s not going to come together. I always equate visual effects to other departments in a movie. A film is not ‘more than’ or ‘less than’ because it uses makeup or stunts – and that goes for every department,” Busquets concludes. “I would love for visual effects to be understood as just another player at the table. Sometimes, visual effects work out in certain situations more so than others, and that’s totally fair. But the theory that ‘CG equals bad’ – we have to aim to correct that.”

(Credit: ILM & Netflix).

Watch Frankenstein on Netflix.

Mark Newbold contributed to Star Wars Insider magazine for twenty years, is a 4-time Star Wars Celebration stage host, avid podcaster, and the Editor-in-Chief of FanthaTracks.com. Online since 1996. You can find this Hoopy frood online @Prefect_Timing.

Jay Stobie (he/him) is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.

Visual effects supervisors Pablo Helman and Anthony Smith dive deep into one of ILM’s most challenging shots for the magical sequel.

By Brandon Wainerdi

Even audience members intimately familiar with Wicked’s source material were in for a new experience when they finally sat down to watch Wicked: For Good (2025). “The Girl in the Bubble” was a brand new song written by the original composer, Tony-winner Stephen Schwartz, expressly for the sequel movie – it had never appeared before on stage. And so, with no previous Broadway blocking to use as a jumping off point, how do you then show the inner turmoil of Glinda (played effortlessly once again by Ariana Grande)?

Initially, director Jon M. Chu envisioned the sequence simply: Glinda would deliver the new number while singing to herself in the mirror in her private Ozian abode. One mirror. But it quickly evolved into a much more complex idea, engineered and solidified by director of photography (and longtime Chu collaborator) Alice Brooks. As production visual effects supervisor Pablo Helman recounts to ILM.com, “She [Brooks] used a lot of props like bananas, toys, iPhones, and all sorts of things to understand what we then needed for pre-vis.” Helman’s fellow visual effects supervisor at ILM, Anthony Smith, added, “Alice worked out the concept of the sequence on her breakfast table with everyday objects, which I thought was such a great way to block out the initial idea.”

When it was all finally laid out, the mid-movie musical sequence became a “oner,” effortlessly following Glinda across her two-story apartment, through different rooms, and through different mirrors. With multiple mirrors situated around her sprawling home, the camera would need to travel into a mirror’s reflective surface and come out the other side – multiple times.

The seamless, singular appearance of this shot belied the effort behind it – the sequence required over three years of work from multiple teams and, for lack of a better word, ended up feeling magical. “It was a real collaboration,” says Helman. “We started that first week by shooting ‘The Girl in the Bubble’ because we knew it was going to take three years to finish.” And, in fact, it did. As Smith notes, “I believe it was one of the longest shots, not just in terms of frame range, but also in terms of production time. It was one of the first things shot at the beginning of the combined principal photography, and it was one of the last things finished for the second movie. It was a very, very long period for a very, very long shot, coming in at 4,767 frames long – 3 minutes and 18.5 seconds.” (For reference, the average visual effects shot is in the 3-4 second range.)
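
For context on those numbers, frame count and running time line up under standard theatrical playback. A quick check, assuming a 24 fps frame rate (the article itself doesn’t state one):

```python
# Back-of-the-envelope check of the shot length quoted above.
# 24 fps is an assumed standard theatrical frame rate, not stated in the article.
FRAMES = 4767
FPS = 24

seconds = FRAMES / FPS                      # ~198.6 seconds of screen time
minutes, remainder = divmod(seconds, 60)
print(f"{FRAMES} frames at {FPS} fps is about {int(minutes)} min {remainder:.1f} s")
# -> roughly 3 min 18.6 s, matching the quoted ~3:18.5

# For comparison, a typical 3-4 second effects shot:
print(f"typical VFX shot: {3 * FPS} to {4 * FPS} frames")  # 72 to 96 frames
```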

A Brief History

When the shots finally arrived at Industrial Light & Magic, Smith worked closely with Chu to make sure they followed the director’s vision as closely as possible. The brief from the filmmakers to ILM was straightforward: “We were provided with the selected takes and a rough assembly from editorial, which was a great initial brief in itself. Watching the assembly for the first time, I knew it was going to be a special shot. Then, watching it for the second time, my brain started trying to process all the different technical challenges that a oner like this creates, one of them being that, because the shot is a song, the timing is locked. We couldn’t slip the timing of any of the elements to make those transitions easier,” recounts Smith.

The actual nuances about how it should feel were learned along the way through discussions with Chu. “I knew we could solve the technical challenges with the shot, but there were a few key creative choices that I wanted to explore with Jon,” remembers Smith. “I love working with him because he is able to explain his ideas in such vivid detail that it’s easy to visualize what he’s looking for, and trusts that we will execute; but the important thing for me with ‘Girl in the Bubble’ was getting a really good understanding of how Jon wanted the mirror transitions to feel. It was important to him that there was a smooth flow to the shot, that the transitions were subtle, and that the viewer could become so captivated by Ariana’s performance without distractions that those transitions were magically invisible. I absolutely loved this aesthetic, and part of the joy of working on a shot like this was having the opportunity to translate the feeling that Jon described into an actionable to-do list for our artists.” 

Of course, tricky “mirror shots” have been attempted throughout much of cinematic history, including one of the most famous and beloved: the early mirror moment from Contact (1997), an ILM project. “I have always loved the subtlety of that shot,” says Smith. “You really don’t realize that you have gone through the mirror until the cabinet is opened. But by the time you start to even analyze it in your brain, the shot is gone, and you have to watch it again. That feeling was something that I was very attracted to when it came to this shot and is one of the reasons why I work in visual effects today!”

(Credit: ILM & Universal).

Dirty Work

One of the first challenges that needed to be worked out was something rather simple: Just how dirty are mirrors in Oz? “Yes, the camera could go through a mirror, but to achieve the right level of subtlety, we needed to understand what the surface of the glass needed to look like to best work for the shot,” said Smith. “If it was a very dirty surface with dust and smudges all over it, the surface would be too obvious as the camera approached and passed through it, breaking the illusion, so finding the right level of dirt on the mirrors was key. We ended up with some light smudges and little bits of dust to catch the reflected light. That surface texture actually included real fingerprints from our Production team, so their fingerprints are literally all over the shot. We spent a lot of time in the composite making sure we accurately matched the camera’s depth of field. This meant that the detail on the glass surface beautifully dropped out of focus as the camera got close, which added to the elegance of the transitions. For the closet mirror, we gave Jon a bunch of options for the vertical divisions between the three sections of mirror – different placements and sizes – which again, was all about finding the right level of subtlety for Jon.”

However, mirrors inherently create a bit of filmmaking chaos. Because of a person’s innate familiarity with reflections, if something doesn’t feel right (there it is again – feel), an audience is rather immediately able to tell if a reflection isn’t correct. “None of the plates were shot with motion control, which was an intentional choice that contributed to the success of the shot – the handheld and crane movements really helped to ground it and make it feel like we were there, moving around the space with Glinda – but they also created challenges with the transitions. Each take had to be camera tracked and manipulated to align correctly with the one it was transitioning to or from. The Layout team did some great work to create a solid technical base so we could then make creative decisions. For example, for the first transition, we used a CG Glinda on the other side of the mirror as a reference to make sure we knew exactly where the reflection should be. Then we moved the reflection plate to match that position,” says Smith. “Another challenge with the first transition was that Glinda’s eyeline on the other plate wasn’t correct, so she wasn’t looking at her own reflection. To fix this we actually warped her eye direction in the reflection to make it feel like she was gazing at herself, rather than towards the camera. It only needed a very subtle change to achieve this – if we moved her pupil even a single pixel, it was too much.”

(Credit: ILM & Universal).

Making Room

While there was talk about creating and utilizing a digital double for Glinda, it was decided that the ILM team could complete the shot with only the real performance and some “really clever 2D work.” It was all Ariana Grande. But her room, built and shot practically, had its own physical limitations. “One of the most challenging parts of the whole shot was the transition that takes place when the camera reverses through Glinda’s closet,” Smith explains. “There was a physical closet on set, but only half of it had been built, and at the end of it, there was a blue screen filling the back wall, which is where the floor-to-ceiling mirror would go so that, combined with its reflection, the closet would seem to be double its length.”

The team recreated parts of the apartment digitally, and then used pieces of it to flesh out the previously filmed set. The movie had been shot anamorphic, with a very specific “squeeze” on some lenses Alice Brooks used, which meant the team didn’t have much extra frame to work from. “We decided to build a CG version of some parts of the set because once we started our Layout process, we were able to assess how much we would need to manipulate what was shot to achieve each transition, and it became clear where things then needed to be fixed,” said Smith. “The foundation of the build was the LiDAR scan of the space. For some transitions, we were able to project the shot footage onto the geometry and render it from the manipulated camera positions. For the closet transition, this was our main methodology. Our Environments team did an incredible job of producing a huge number of camera projections to recreate all of the layers of dresses, shoes and boxes in the closet.”
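
Camera projection of this kind is a common environment-building approach: the filmed plate is treated as a texture cast from the original camera onto the scanned geometry, which can then be re-rendered from a slightly offset camera. A toy, single-point sketch of the general idea follows; the simple pinhole cameras and coordinates are illustrative assumptions, not a description of ILM’s actual pipeline:

```python
import numpy as np

# Toy illustration of camera projection: find where a point on the scanned
# geometry lands in the original plate (to sample its color), then find
# where the same point lands for a manipulated camera. Both cameras are
# simple translated pinhole models with made-up parameters.

def project(point_world, cam_pos, focal=1.0):
    """Project a world-space point through a pinhole camera looking down -Z."""
    p = point_world - cam_pos          # world space -> camera space (translation only)
    return focal * p[:2] / -p[2]       # perspective divide onto the image plane

point    = np.array([0.5, 1.2, -6.0])  # a point on the closet's scanned geometry
shot_cam = np.array([0.0, 1.0,  0.0])  # where the plate was actually filmed from
new_cam  = np.array([0.3, 1.0,  0.5])  # the manipulated camera for the transition

uv_plate = project(point, shot_cam)    # where to sample the filmed plate
uv_new   = project(point, new_cam)     # where that sample is drawn in the new render

print("sample plate at", uv_plate, "-> draw at", uv_new)
```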

(Credit: ILM & Universal).

And although the ILM team was able to use the room’s geometry and high-dynamic-range photography taken on set, Smith mentions, “Texture projections would only get us so far with some of the transitions. There were parts of the room that simply had to be CG extensions. A completely CG balustrade was needed because the practical one had to be removed on set to allow the camera crane to move correctly through the space. The ceiling and its mirror were completely digital too, again because they didn’t exist on set.”

But, even though there was a real floor on set, it also sometimes needed to be replaced, as Smith explains. “When deviating from the original camera position to achieve a successful transition, one of the first things that breaks with the texture projection approach is reflective surfaces – where reflected light no longer sits in the correct screen space – and all of Glinda’s apartment floors were quite reflective. So we recreated the floors in the main apartment space and the closet to be able to move the camera away from where it was on set and still get the correct reflections. This was key for the ceiling mirror transition, where the ceiling height of the set prevented the camera from shooting Ariana from the correct height for her reflection element, so we moved her element further from camera and digitally extended the room all around her.”

There was one additional tell for eagle-eyed viewers: “Obviously, the room was never actually physically inverted. There was only ever one room. So whenever we went from the real world into the mirror world, the entire image flopped. The idea was that there was no clue there and it just felt completely seamless all the way through.”

Not all the work is “showy,” but it all has a special place in the film and in Smith’s heart: “There’s a moment after Glinda reaches the top of the stairs where there’s a really successful transition that no one is really aware of – as she walks past the camera, we did a morph blend and lots of paint work. Her walking gait – the timing of her steps – was different between the two takes we joined, so her actual step timing had to be manipulated to get them in sync. We completely extracted her from both backgrounds to do this and achieved it with some really top-notch comp and paint work.  It’s one of the transitions I’m the most proud of because no one ever spots it.”

“Isn’t it high time for her bubble to pop?”

Even near the end of the process, ILM had to be flexible and communicative to deliver the final shots. As Smith remembered, “When we were a couple of months from finishing, Jon and his editor Myron Kerstein asked us if we could swap out two of the takes for alternate Ariana performances. This was not a small ask at that point because the waterfall effect of dependencies of each transition were significant, but as it was so important for the shot to get this right, we immediately started reworking them. The good thing was that we knew the feeling we had to hit with each of them, so it was only a matter of reworking the technical side of things to hit the same creative notes. The team did such an awesome job with that, and getting Jon’s approval on that camera rework for such a long shot was a really important milestone for us!”

After all of that work, “The Girl in the Bubble” cemented itself as one of the movie’s most memorable moments. “It’s kind of the reason why I love visual effects – it’s something that makes you think,” explains Helman. “Sometimes you’re working with others, and you’re in the meeting trying to solve a difficult science problem, and sometimes you can’t solve it! ILM is a perfect place for it because we are allowed to sit at a table and say, ‘Forget about what we did before, forget about everything. What if we did this thing? Would it work?’ And then I realize how lucky I am to have a job that is so creative.”

Read more about both Wicked films here on ILM.com:

“Preparation is the key to success”: Pablo Helman on the complexity of making ‘Wicked: For Good’

Defying Expectations: How ILM’s Collaborative DNA Helped Bring the World of ‘Wicked’ to Life

Brandon Wainerdi is a writer and interviewer, whose work can be found in Star Wars Insider, on StarWars.com, and inside the iconic horror magazine, FANGORIA. He is the author of two Star Wars books, including the recent Star Wars Encyclopedia of Starfighters and Other Vehicles. He is also the host of Talking Bay 94, a long-running behind-the-scenes podcast that interviews the cast and crew of the Star Wars saga. You can find him on Twitter and Instagram.

The production visual effects supervisor discusses the Emerald City, Elphaba in flight, and collaborating with Jon M. Chu.

By Mark Newbold

Based on the novel Wicked: The Life and Times of the Wicked Witch of the West by American author Gregory Maguire, the Wicked musical has enchanted audiences worldwide, both on the stage and on the big screen. Maguire’s 1995 novel was not only an adult-oriented version of L. Frank Baum’s classic children’s story, The Wonderful Wizard of Oz, but it was also the first entry in The Wicked Years book series.

With music and lyrics by Stephen Schwartz and a book by Winnie Holzman, the stage version of Wicked (or to give it its full title, Wicked: The Untold Story of the Witches of Oz) premiered on May 28, 2003 at the Curran Theatre in San Francisco and has broken records wherever it has been performed, including over 7,000 performances in London’s West End.

A true phenomenon ripe for further adaptations, director Jon M. Chu’s Wicked landed in cinemas in November 2024. Capturing the imagination of the cinema audience, the final moments of the film promised a sequel, and that promise was kept when Wicked: For Good debuted a year later in November of 2025, continuing the story of Elphaba Thropp (Cynthia Erivo), Glinda Upland (Ariana Grande), Fiyero Tigelaar (Jonathan Bailey), the Wizard (Jeff Goldblum), and the people of Oz.

With the film loaded with sequences of dizzying visual complexity, Industrial Light & Magic was tasked with bringing Chu’s vision to vivid, yellow-bricked life. ILM.com had the opportunity to sit down with the production’s four-time Oscar-nominated visual effects supervisor Pablo Helman to discuss Wicked: For Good and the task of unveiling even more of Munchkinland, Shiz University, and the Emerald City.

“It was a 155-day shoot for a two-part story,” says Helman. “We thought of Wicked and Wicked: For Good as one movie, and we shot it that way.” That meant intense preparation and planning, given the logistical and technical nature of certain sequences in the films.

“Visual effects can often be challenging because you’re asking the director, the editor, and everybody else to think about things that they normally don’t want to think about,” explains Helman. “A director is thinking of the whole story, but we’re asking them to look at specific sequences because we need to turn over a certain number of shots. They don’t like to be presented with choices because they think they’ve already made their choice, so why present another one? That means they have to rethink, and that takes time.” 

Jon Chu was open to the challenge. “Jon takes an organic approach to filmmaking; he loves having choices and different possibilities,” Helman says. “There might be a script and a plan in place, but the process of making a major motion picture still has plenty of fluidity. Lots of things change throughout the process of filming, and there are lots of choices to make.”

(Credit: ILM & Universal).

“It’s a Transactional Thing”

Creative choices are one thing, but a production like Wicked: For Good requires a large amount of preparation and resources, and a sizable portion of those resources are given over to visual effects. As production visual effects supervisor, Helman was deeply involved with practical as well as creative duties.

“Part of my role is managing the project in partnership with my effects producer,” Helman explains. “That includes talent, financials, resources, all kinds of things. So if you’re not thinking ahead, you’re not doing your job. Nobody likes to be in dailies and not be able to say, ‘Yes, I can do this,’ and the only reason you say ‘I can do this’ is because you have a plan A, a plan B, and a plan C.” 

Those plans require intense work from the team, gathering as much data as they can. “We have LiDAR [a laser system that scans objects and environments to recreate physical objects and sets as digital models], we have high resolution textures, HDRIs for lighting, all the science behind it, so that when we’re in dailies, I can say ‘Yes, we can do this.’” That preparation is integral to the production. “Nobody likes to be surprised or ambushed. You don’t want to get into those situations, and we never did with Jon. He’s a great communicator and a terrific person. There was never a moment when he was annoyed about anything. For four years on a project, that is an accomplishment.”

Art always comes with the cost of tools, be it the canvas, brushes, and oils a painter uses, or the clay and tools of a sculptor. It’s the same for visual effects artists, but as Helman explains, creativity always leads the discussion.

“The creative stuff that we put together as a team comes first, and after that, you have to be conscious of the resources,” Helman says. “I could go to a producer and say, ‘Look, I know that we’re spending a lot more resources in this section, but I promise that when we get here, I’m going to find a way to get back the resources that we’re putting into this.’ It’s a transactional thing, but it’s all about the storytelling. 

“There’s always a way of doing what is needed for the movie,” Helman continues. “You look at the storytelling and the amount of resources and ask, ‘Is it worth it? Is there a payoff?’” Time and money are challenges for every production, no matter the scale, but Helman believes one is more of a problem than the other. “At some point, you run out of resources, but the resource isn’t money; it’s time, which is finite. You can throw all kinds of money at the problem, but it won’t get done, because it needs more time in the oven. Then it’s not my choice, it’s a choice that we make together.”

(Credit: ILM & Universal).

 “Every Department Brings Something to the Storytelling”

The work of the visual effects team overlaps with many departments, few more than the special effects team, whose focus is on-set effects like steam, smoke, weather elements, and anything the performers physically touch. Helman gives an example of where the seam between the two lies. “Production design can’t build a 79-story building. They can build 55 feet of it, and then visual effects takes that and develops it, all based on what was done on set.” ILM and their fellow effects houses can expand the world of Wicked, but it only works if the departments are on the same page.

“There’s plenty of discussion about special effects, what can and cannot be done in camera, what’s safe and what’s not,” says Helman. “Filmmaking is one of the most collaborative art disciplines because if you don’t collaborate, you end up with something that is flat. Every department brings something to the storytelling and adds nuance in a way that is individual, surprising, interesting, and curious. So it’s a combination of all those things. There were about 1,000 artists and production crew on these films, and I really would like to thank them because if it wasn’t for them, we wouldn’t be doing this.”

To the Emerald City

The Emerald City is as much a character in both Wicked films as Elphaba, Glinda, Fiyero, Boq (Ethan Slater, aka the Tin Man), Madame Morrible (Michelle Yeoh), and The Cowardly Lion (voiced by Colman Domingo). It is the home of the wonderful wizard and the high society of Oz, as well as the underclasses who have to fight for every scrap. Wicked showed us a vast swathe of the city, and Wicked: For Good not only takes us back there, but it takes us into places we’ve not seen before, a task that fell to Helman and his team.

“There were certain parts of the Emerald City that we saw in the first part of the story, and certain parts that we see in the second, so we built different assets for different parts of the story,” explains Helman. “The first movie was a lot more presentational. Things were being set up, and tonally, the movie was lighter, whereas the second one is a lot darker.” That visual change also mirrors the journey of the characters.

“Every character has a specific arc, so in Wicked: For Good, they deal with the consequences of the choices they made in the first film. Part of production design and visual effects is to accompany the performances with the environment,” Helman continues. “The atmospherics are always thick, and the light direction is purposely very dramatic at times. The sun is low, so there’s less light.” Time is also a factor in the progression of the environment. “It’s a combination of things. The clothes and the creatures get used, and the buildings get worn because they went through a specific experience.” He laughs, as he adds with a wink, “When I started the movie, my hair was brown, and now it’s white.”

(Credit: ILM & Universal).

“We’re Off to See the Wizard”

At the heart of the Wicked films are Elphaba and Glinda, and the second film gave the production the opportunity to delve deeper into the classic characters of L. Frank Baum’s original novel and add a 2025 spin to them.

“It was really exciting,” says Helman. “I remember the first test that we did with Dorothy, Toto, the Tin Man, and the Scarecrow. It’s funny because some of the critics were saying that the chronology of the story didn’t do this or that. Well, you know what? It’s a complicated story, and everybody remembers it differently, so I think this is another point of view of that story.”

Working from Gregory Maguire’s novel, which itself presented the story and characters of Baum’s world in a more mature, complicated light, the filmmakers had the chance to overlap parallel stories in ways that didn’t step on each other. “It was fun to think about. One thing is right there in the background, but if I come around and go in front, the story is different. It’s an opportunity.”

“Mitigate the Forces of Gravity”

With plenty of experience in making us believe a human, a superpowered dog, or a battered old Corellian freighter can fly, ILM was charged with taking Elphaba into the skies of Oz, a task that required visual effects know-how and a game performer in Cynthia Erivo.

“The approach was always going to be Cynthia doing the flying,” says Helman. “That meant that she needed to mitigate the forces of gravity, no pun intended. She’s singing while trying to get her body to do very specific actions. Cynthia is very strong, but it takes some effort.” 

A willing actor and all the right equipment don’t necessarily mean the results are what is required by the director or the visual effects team. “Sometimes it doesn’t work. Sometimes, because of safety and other things, the actors are not fully exerting themselves, but Cynthia did.” That combination meant that when all was said and done, Elphaba looked even more imposing in the air in the sequel than she did in the first film. “In the arc of the story, she’s proficient. She’s gliding and then stopping. She knows what she’s doing and she’s done it before. She takes time to look at the world under her, and that requires some skill.”

Along with flying, there’s another aspect to the illusion: Elphaba’s cape, one of the most striking elements of her appearance as she heads down a dark path. “Elphaba’s cape isn’t in the cast, but it is a character in the movie, and it does a lot, not only when flying but also landing and taking off,” notes Helman. Its creation required the skills of two visual effects houses. “ILM and Framestore created it because we couldn’t do a 30-foot-long cape physically.” With practical and visual effects work woven together, parts of the performer were also replaced to create the complete shot, but as Helman explains, “we always used Cynthia’s face and performance.”

Creating the physical cape meant crafting clothing that gives the right look and says something about the character, a challenge the designers went to great lengths to achieve. “The cape has different layers of materials and transparency, but we took some liberties with it,” explains Helman. “We know how difficult it is from doing different capes, from the Vader cape to Superman. The cape says who he or she is.” 

As is often the case, trial and error was the path to finding the right blend. “There was a lot of testing, we did simulations and resins, but at the end of the day, we said let’s forget about the science of it because it’s about the content. It might be scientifically correct, but it doesn’t work if it’s not doing what we need it to do and it’s not correct for the story we’re trying to tell.”

(Credit: ILM & Universal).

Unlimited Together

Just like its smash-hit predecessor, Wicked: For Good brings together an impressive array of on-screen and off-screen talent, all laser-focused on giving their utmost in service of making the best film they possibly can. On visual epics like Wicked, visual effects, storytelling, and direction need to be in lockstep.

“There’s a four-minute sequence with this beautiful song that Glinda sings at the beginning of the movie called ‘I Couldn’t Be Happier,’” Helman says. “We redressed Munchkintown, we replaced the sky, the tulips, all kinds of things, but when we looked at it, Jon said ‘We’re missing something because this is a very subtle song.’ Jon said, ‘What if, when she starts singing, the confetti stops?’” 

It’s a striking visual as the confetti hangs motionless in the air, but one that entailed more work than one might imagine. “I asked how long are we doing this for, and Jon said the whole scene. That’s four minutes of really resource-intensive particle work that we didn’t know would work or not, but we had to complete it because we needed to know.” That would require Helman’s most valuable resource: time. “Jon understood that if he wanted to see this, it was going to take weeks to get it. It’s important to have that communication with the director, and to have somebody who understands what we’re doing.”

Magic, Glory, and Love

The ultimate combination of visual effects, performance, direction, production design, and numerous other departments is “The Girl In The Bubble,” written by Stephen Lawrence Schwartz and performed by Ariana Grande. Here we find Glinda in her home, inspired to finally take action following the cyclone that killed Elphaba’s favored sister, Nessarose. It’s a sequence brimming with emotion and meaning, and one that took almost the entire production schedule (that’s both films) to complete.

“We started ‘The Girl in the Bubble’ during the first week of filming on the first movie because it was a very complicated sequence, and we knew that it was going to take us two years. It was a four-minute, continuous shot.” A weighty task, and one that needed to pull in all the eyes it could to make it work. “We did a lot of work with [cinematographer] Alice Brooks and used lots of props before we understood what we needed for the previs.”

It’s one thing to dream up a film sequence, but it’s another to make that dream a reality. So with previsualization underway, Helman and his team also needed to work out the real-world technical aspects of the scene.

“Once we had the previs, then we did a techvis, which meant taking a look at the previs and taking a step back,” Helman continues. “For example, let’s say we previsualize where the camera is moving and, BOOM, there’s a wall in the way, but it’s not a wild wall [meaning it can be easily moved and then put back]. On set, the director might say, ‘Well, move the wall,’ but then you’re wasting two hours of time and resources. Techvis will look at the distance between two points, how fast things will move, and where the blocking is. If you don’t prepare before the shoot, we might put the lens on and find we can’t focus because it’s too close or too far, or the camera doesn’t fit in the space and the director has to change the shot.” 
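In practice, much of what techvis catches comes down to simple feasibility checks made before anyone is standing on set: is the subject closer than the lens can focus, and does the camera package physically fit in the available space? The following sketch only illustrates that idea; the lens and rig figures are hypothetical, not numbers from the Wicked productions.

```python
from dataclasses import dataclass

@dataclass
class LensSpec:
    focal_length_mm: float
    min_focus_distance_m: float   # closest distance the lens can hold focus

@dataclass
class CameraRig:
    body_depth_m: float           # how much physical space the camera package needs

def techvis_warnings(subject_distance_m: float,
                     wall_clearance_m: float,
                     lens: LensSpec,
                     rig: CameraRig) -> list[str]:
    """Return a list of warnings for a planned camera position."""
    warnings = []
    if subject_distance_m < lens.min_focus_distance_m:
        warnings.append(
            f"Subject at {subject_distance_m} m is inside the lens's "
            f"{lens.min_focus_distance_m} m minimum focus distance."
        )
    if wall_clearance_m < rig.body_depth_m:
        warnings.append(
            f"Only {wall_clearance_m} m behind camera; rig needs {rig.body_depth_m} m."
        )
    return warnings

# Hypothetical numbers for illustration only.
print(techvis_warnings(0.4, 0.5, LensSpec(32, 0.45), CameraRig(0.6)))
```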

With the techvis in progress, the team moved to the actual set itself. “We went on set to look at what the camera was doing,” says Helman, “and we realized that when she goes up the stairs, the set would have to be stripped because the camera couldn’t get there. We’d need a 50-foot crane. So we’d have to take the wall out and build a CG set as the camera comes around.” As it was for the entire production, planning in advance was key. “You have to figure these things out beforehand. It’s not necessarily something that a director would look at, but the other departments need that techvis information as well.”

While there was a physical set with props, Helman’s visual effects team added a surprising amount of detail afterward. “The railing is created in computer graphics, everything behind Glinda is created in computer graphics, and once she gets to the closet, only half of the closet was built, so we had to build the reverse of that.” There are even different takes of Grande’s performance brought together for the completed sequence, and that meant more delicate work for the team. “We had performances that were morphed, so the reflections had to match those performances. There were morphs in the middle of it that were very, very difficult, so there was the nuance of doing that.”

Even after completing this technical maze of work and collaboration, changes were still required. “Once we were done, Jon and Myron [Kerstein, editor of Wicked: For Good] changed two performances. We had about seven different plates that needed to be stitched into one, but they changed two of them because they thought the performances were better, so we had to redo the layout. We had all the assets, but then you have to resync everything so that it works.”

That meant the team on the ground needed to be extra vigilant and imaginative to keep the pieces where they needed to be. “The on-set video assist was so important because they needed to play it back and flop it [reverse the image],” Helman explains. “The floppiness of it was mind-boggling, and you have to make sure that you have plenty of imagination because there’s a lot of compositing that goes into the sequence that can’t be done on set in real time. You have to do all the thinking before.” 

Nevertheless, the reward is in the work itself. “It took two years to do, but it was really satisfying,” Helman concludes. “It’s one of the reasons why I love visual effects. It’s that satisfaction, and I realize how lucky I am to have a job that is so creative, because a lot of people don’t.”

(Credit: ILM & Universal).

Read more about ILM’s work on Wicked here on ILM.com.

Mark Newbold has written for Star Wars Insider magazine since 2006, ILM.com, Skysound.com, and news site FanthaTracks.com, having previously contributed to StarWars.com and StarTrek.com. He is a 4-time Star Wars Celebration Podcast Stage host, podcasting for over 20 years, and has been involved in websites since 1996. You can find this Hoopy frood @Prefect_Timing.

One of ILM’s first visual effects supervisors looks back at the film’s mix of practical methods and revolutionary digital effects 40 years later.

By Amy Richau

(Credit: ILM & Paramount).

“The game is afoot!”

In 1985, director Barry Levinson and writer Chris Columbus brought a new tale centered around a teenage Sherlock Holmes to audiences with Young Sherlock Holmes. The film’s effects team – led by visual effects supervisor Dennis Muren, ASC (Star Wars: A New Hope, 1977), and including Kit West (Raiders of the Lost Ark, 1981), John Ellis (The Goonies, 1985), and David Allen (Willow, 1988) – was nominated for an Academy Award for Best Visual Effects the following year. The film holds a unique place in Industrial Light & Magic’s history. It includes an abundance of practical visual effects methods the company had developed and perfected over its first ten years, as well as the first fully digital character ever depicted in a feature film, a stained glass knight.

Young Sherlock Holmes arrived in theaters the same year as The Goonies, Cocoon, Explorers, and Back to the Future, when ILM was increasingly working on more projects outside of Lucasfilm. Muren recently spoke to ILM.com about the making of Holmes and its unique mix of old-school and groundbreaking visual effects.

Many of ILM’s biggest breakthroughs occurred during the making of epic blockbusters like Star Wars, The Abyss (1989), Terminator 2: Judgment Day (1991), or Jurassic Park (1993). However, because Young Sherlock Holmes was a smaller film, it became the perfect vehicle for testing onscreen photoreal computer graphics (CG) effects. Also key was ILM’s proximity at the time to a smaller group that ILM founder George Lucas was running a few hundred feet from their offices – the Lucasfilm Computer Division, a portion of which would later become Pixar Animation Studios.

Matte artist Chris Evans (left) and visual effects art director David Carson in the ILM Matte Department (Credit: ILM & Paramount).

The Height of Practical Effects

The story of Young Sherlock Holmes follows its teenage namesake (Nicholas Rowe) and newly arrived John Watson (Alan Cox) during a year at a London boarding school. The duo discovers a series of mysterious murders that lead them to a secret cult in Victorian London.

The snow that appears in many sequences of the film, which today might be created with CG effects, was accomplished with practical, old-school methods. Kit West, who was in charge of many of the film’s physical effects, needed the snow to both look real and leave no trace after shooting wrapped. West, who died in 2016, told Cinefex that despite filming on location in the United Kingdom at Eton College, Belvoir Castle, and Oxford University during the winter, all of the snow seen in the film was made by the production.

For snow on the ground, West’s team used 150 tons of dendritic salt. Snow on the buildings was made from over 100 tons of magnesium sulfate that had “a glint to it just as real snow,” said West. High-expansion foam that evaporated after about three hours was used in larger areas to mimic snow, while falling snow was made from a biodegradable insulation material that consisted of finely chopped paper, deployed by agricultural grain blowers.

One of the quirkier characters in the film, retired professor Rupert Waxflatter (Nigel Stock), spends much of his time on-screen trying to perfect his flying machine design. Surprisingly, none of the shots of the flying machine in Young Sherlock Holmes include miniatures. West recounted to Cinefex that an aviation company that worked on the film Those Magnificent Men and Their Flying Machines (1965) built a full-scale flying machine with a 25-foot wingspan, which production then tinkered with to make it functional.

Getting the flying machine into the air required two 120-foot cranes. “They were tower cranes,” West told Cinefex, “like those used for building skyscrapers, one on either end of the flight path. We had a stretch cable between them, and the machine was on runners. We attached all our own runners and rails, as well as the raising and lowering mechanisms.”

Concept art of an anthropomorphized pastry that attacks young Watson during a hallucination (Credit: ILM & Paramount).

A Whole New World

One thing was clear from the beginning with Young Sherlock Holmes: Muren and the creative team behind the film wanted the effects to look as photoreal as possible. A challenge that, in the mid-1980s, even Sherlock Holmes could appreciate.

While many of the Computer Division’s projects at the time were focused on animation, Muren wanted to see if their technology could make the jump to photoreal effects. As Muren tells ILM.com, “I just needed to see if this technology had the controls necessary to make something look 100% real or not.” The sequences in Holmes that needed heavy visual effects were mostly split into discrete sections where characters experienced hallucinations, giving Muren the opportunity to use different methods throughout the film.

CG effects had been used in films by ILM before, most notably the Genesis sequence in Star Trek II: The Wrath of Khan (1982), another collaboration with the Computer Division. But that sequence was intentionally not photoreal, and Muren knew from seeing other tests that in many cases reflections were too high, edges were too sharp, or the shots were missing essential shading and shadows to achieve a more realistic feel.

A CG test done by Triple-I ahead of Star Wars: The Empire Strikes Back (1980) involving five X-wings in flight increased Muren’s desire to play around with this emerging tech. “Triple-I’s test didn’t look photoreal, but they did a camera maneuver with the ship that there’s no way we could have done, and it looked pretty neat,” says Muren. “So it’s another temptation. This thing was out there, and I wanted to get it on a show and figure out how to do it.”

Muren decided to tackle the effects-heavy sequences as a bake-off, doing each one in a different way and seeing if a clear winner emerged. “With the stained glass man, that looks small enough, so let’s try CG, right?” Muren recalls, “If we find out in two months it’s not working, we can back off and do it another way.” For other hallucination sequences, Muren planned to use rod puppets in front of a blue screen and utilize Go-Motion with motion blurs.

Modelmaker Charlie Bailey creates an armature for one of the harpy puppets (Credit: ILM & Paramount).

Bringing Hallucinations to Life

The hallucinations in the film result from poisonous darts the cult’s leader, Professor Rathe (Anthony Higgins), uses as he seeks revenge against enemies from his past.

In one hallucination sequence that opens the film, an accountant, Bentley Bobster (Patrick Newell), sees his pheasant dinner attack him in a restaurant. After retreating to his home, Bobster sees the serpent handles on his coatrack turn into actual snakes that wrap around him and bite at his face. After the lamps in the room appear to start spitting out fire, Bobster leaps from his window to escape the flames.

In other hallucination sequences later in the film, Professor Waxflatter is attacked by harpy statues in an antique store. His niece, Elizabeth Hardy (Sophie Ward), finds herself fighting off skeletons at the bottom of a grave. Cameraman Michael Owens handled the motion-control programming and lighting for the harpy sequence, with the Go-Motion animation performed by Tom St. Amand.

A harpy puppet is photographed by a motion-control camera (Credit: ILM & Paramount).

David Allen supervised the startling hallucination that young Watson experiences in a cemetery, which manages to be equally hilarious and disturbing. After being shot with a toxic dart, the ever-peckish Watson sees a wall of pastries. After he grabs one to eat, it comes to life in his hands and wraps a vine around him, knocking him to the ground. The other pastries soon leap off the shelves and start shoving whipped cream into his mouth.

Muren directed Allen and his crew to use rod puppets to bring these pesky, chaotic, and downright naughty desserts to life. The individual puppets were made of rubber and were approximately eight inches high. Each puppet had rods coming out of its elbows, torso, head, and legs, with three or four puppeteers moving them in unison. Notes Muren, “Each element was shot in front of a blue screen, so when we combined them, twelve pastries would be in the same shot.” Since each puppet was shot separately, it took two to three days to shoot the eight to twelve pastries that would appear in each shot with Watson.

According to Muren, the pastry sequence in Holmes is a throwback of sorts to the mouse puppet Topo Gigio, who was manipulated by black rods in front of a black background in the early days of television. “It’s all how you angle it,” says Muren, “how you frame the shot. If you shoot the wrong way, you can have a rod go in front of the carrier’s face. So all the performances have to be manipulated to make sure the rods don’t go in front of the figures, or else you’ll see this black thing that will tip off audiences. It’s not using Go-Motion. It was all done by hand and mostly at real speed. I think at times we slowed it down to make it look a little more staccato from what the puppeteers could do. Adding a little more whimsy to it.”

Before sending the shots to the rotoscope department to remove the rods and the puppeteers, they stacked black-and-white footage of the multiple puppets and viewed the scene on a Moviola to make sure the performance had worked out as expected. Shots then went to optical for matting work and printing. “It was complicated. It’s not against black like the spaceships in Star Wars, so it was pretty difficult stuff,” adds Muren.

The ILM team puppeteers one of the anthropomorphized pastries (Credit: ILM & Paramount).

Six Months for Seven Shots

The Lucasfilm Computer Division, via its graphics group, had previously created a terraforming planet simulation, better known as the “Genesis demo sequence,” for The Wrath of Khan and a CG spinning hologram of the Death Star in Star Wars: Return of the Jedi (1983).

Muren went into the stained glass knight sequence – where a knight jumps out of a stained glass window in a church and walks towards a priest experiencing a hallucination – knowing that creating it digitally might not work. They had to have a backup. And Muren had to sort out how a walking CG character might look. “Should it look like the knight is a walking, full-size, flat glass figure, simply cut out from the window? That didn’t seem very threatening, and it felt too literal for a nightmarish hallucination. What if it wasn’t flat but a man-sized three-dimensional glass figure of the knight? Maybe. We also tried some other ideas but nothing really popped.”

The one design that did pop came from Muren’s wife, Zara, who suggested that the knight could jump out of the window in its many individual glass pieces that magically reassemble without touching each other when they land, making something like a hanging mobile but without the strings. Each of the pieces could twist and turn to make up the knight’s figure, which could then be moved and animated as one menacing whole.

Eben Ostby (left) and John Lasseter of the Lucasfilm Computer Division ready a lighting test of practical stained glass samples, which were used as reference for the CG knight (Credit: ILM & Paramount).

To make the knight even more menacing, Muren asked the visual effects artists to make each piece of glass an inch thick, with sharp, jagged edges. Some of the pieces were bowed in the middle, convex from behind, so they domed out towards the priest, making the figure appear more aggressive. “Everything in movies is feelings,” notes Muren. “And if I didn’t feel it, and the audience didn’t feel it, then you’re just telling a story, and you might as well be doing it by telephone.”

This was all done before shading and motion blur in CG shots were the norm, and Muren leaned on the fact that the knight was a hallucination, so it didn’t have to be as real-looking as ships flying through space. The seven shots of the knight took about six months to complete and included some of the first digital composites.

“George’s graphics group had been making an input-output scanner as a prototype,” said Muren, “and that was so troublesome because it was so cutting-edge that it would often break down unexpectedly. I think out of every input scan, it was 10 or more times before it would make it through as few as 120 frames.”

The breakthrough laser film scanner was pioneered by David DiFrancesco and the Lucasfilm Computer Division and was later used by ILM on its earliest CG productions (Credit: ILM & Lucasfilm).

The entire knight sequence lasts less than a minute in the finished film. It starts with a wide shot showing the church’s stained glass window bowing a bit before the knight breaks out and lands on the ground. While many traditional matte paintings made with oil paints were used in Young Sherlock Holmes to recreate exteriors of Victorian London and a pyramid temple, for the opening shot of the knight sequence, matte artist Chris Evans created the first CG image used as a film background. “I remember,” says Muren, “it took him a really long time to do it because the tools were so hard to use. The paint program was in existence, but it was very slow to use, to be able to paint and get the brush strokes right.”

After breaking out of the window, the knight’s 100-plus pieces reassemble as he lands on his feet, holding up a large sword. The next few shots depict the knight walking menacingly towards the priest. As the knight walks past the camera, audiences can see through the backside of the knight’s glass.

“It was all shot very traditionally,” explains Muren. “I shot a lot of plate backgrounds of the church.” In addition to footage of the priest, the location also had several candles and mist. When Muren returned from shooting, he still wasn’t sure exactly how they were going to pull this off or if it was going to work at all. “It could all hit a limit where the blacks or the whites never match. There were all sorts of things that could go wrong. I didn’t know what was going to work and what wasn’t. So I shot for any technique we were going to use.”

The next step in the process was getting the digital technicians to constrain their tools to what the eye sees on film. “A lot of what’s made for software manipulation, whether it’s brightness, camera movements, or distance, goes to infinity,” said Muren. “So part of the process is constraining it down to what film records. We don’t want to go above or below what film records as black and white, even though the software could go beyond that. When it is constrained to the world of photography, then I can start to understand it again.”

Muren and his team also “cheated” what audiences saw through the glass of the knight at times. “What you see through the glass, let’s say a yellow piece of the knight, is brightening up the color that’s on the glass, not what you would really see if you held up all those color pieces where parts of the background of a yellow piece were blue and yellow. That would appear grey, which would take you out of the drama of the scene. So the whole transmission through the glass, what you see on the other side, is black and white. You can’t tell because it’s got this yellow, but it’s a cheat, just black and white to light it up. We did a lot of that later in The Abyss with the water snake, all the refractions in the rooms, we cheated all the way through.”
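In compositing terms, the cheat Muren describes amounts to tinting the brightness of the background by the color of the glass rather than physically filtering one color through another, which, as he notes, can turn a yellow-over-blue combination grey. A rough sketch of the difference, using invented values rather than anything from the actual shots, might look like this:

```python
def luminance(rgb):
    """Perceptual brightness of a background sample (Rec. 709 weights)."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def physically_filtered(glass_rgb, background_rgb):
    """What true filtering would give: per-channel multiplication."""
    return tuple(g * b for g, b in zip(glass_rgb, background_rgb))

def cheated_transmission(glass_rgb, background_rgb):
    """The 'black and white' cheat: tint the background's brightness by the glass color."""
    y = luminance(background_rgb)
    return tuple(g * y for g in glass_rgb)

yellow_glass = (1.0, 0.9, 0.1)
blue_background = (0.1, 0.2, 0.9)

print(physically_filtered(yellow_glass, blue_background))   # dull, greyish result
print(cheated_transmission(yellow_glass, blue_background))  # stays readably yellow
```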

An animation pass of the stained glass knight seen as a wireframe (Credit: ILM & Paramount).
The final composite (Credit: ILM & Paramount).

All About the Blur

A key element to achieving realism in the stained glass knight sequence was understanding the importance of motion blur, where objects on-screen shot at 24 frames per second appear blurry as they are in motion. The problem was that at the time, ILM had yet to develop the ability to digitally render blurs. “We’re used to what those blurs look like,” says Muren. “They make things look fluid. That’s very important for an effect to look real because the rest of the movie has got that in it. I didn’t want the stained glass knight to look like it came from ILM, that it was stuck onto the background.”

To help achieve the blur effect, every frame in the knight sequence was rendered nine times in slightly different positions. As the render time in 1985 was so long, one primary frame would be rendered at a higher resolution than the rest to save time. The result was a blur made up of a number of static pictures. “So you put them all together, and you’re doing this 24 times a second, and each of these blurs has eight pictures in it that are kind of similar, but some are weaker on the outside, and some are strong in the middle – then it all looks like a normal blur.”
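As a back-of-the-envelope illustration of that idea, the blur amounts to rendering the moving object at several positions inside each 1/24-second exposure and summing those renders with more weight in the middle than at the edges. The sketch below is a toy example with invented weights, not a reconstruction of ILM's 1985 renderer.

```python
import numpy as np

def blurred_frame(render_at, t_frame, num_samples=9, fps=24):
    """Approximate motion blur by averaging renders taken at sub-frame times.

    render_at(t) returns an image (H x W array) of the moving object at time t.
    Samples near the middle of the exposure get more weight than the edges,
    echoing Muren's description of the composite blur.
    """
    exposure = 1.0 / fps
    offsets = np.linspace(-0.5, 0.5, num_samples) * exposure
    # Triangular weights: strongest in the centre, weakest at the outside.
    weights = 1.0 - np.abs(np.linspace(-1.0, 1.0, num_samples)) * 0.8
    weights /= weights.sum()
    samples = [w * render_at(t_frame + dt) for w, dt in zip(weights, offsets)]
    return np.sum(samples, axis=0)

# Toy "render": a bright square sliding horizontally across a 64x64 frame.
def render_at(t):
    img = np.zeros((64, 64))
    x = int(10 + 200 * t) % 54
    img[28:36, x:x + 8] = 1.0
    return img

frame = blurred_frame(render_at, t_frame=0.1)
print(frame.shape, frame.max())
```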

This experience pushed Muren and the Lucasfilm Computer Division to learn more and create the tools to execute their vision for the finished shots. “It was an introduction to them and to me,” explains Muren, “about what you could do. Motion blur, overexposure, underexposure, tracking or hanging the camera around. I hadn’t really thought about how you have to track the camera with the background. For a camera guy like me, who understands filmmaking technically, I could go in there and say, ‘Can we get this tool?’ ‘We need that one too.’ And they would 99% or 100% of the time come up with it in either hours, or they already had it, and they just adjusted something, or they could write something for it, within a few days.”

(Credit: ILM & Paramount).

A Wealth of New Tricks and Tools

It’s almost impossible to list all of the innovations and challenges the Young Sherlock Holmes effects crew faced during postproduction. The film not only includes the first CG character but also broke ground in developing digital matte paintings and digital compositing. In order for ILM artists to match camera movements from the live-action set into a computer’s 3D space, they projected footage shot on location in England with gridlines over it onto a computer screen. A new preview system gave creators the chance to work with a simple black-and-white wireframe of an image, so they didn’t have to wait for an image to be completely rendered to continue working on the shot.

To record the computer animation back to film, a laser scanner was used that could only print approximately one frame a minute, so each second of footage would take 24 minutes to complete. “I don’t think I rendered anything at 4K or even scanned it out because it was just taking too long,” recalls Muren. “We just did everything at either 1 or 2K. At least the tool was able to change and wasn’t locked into 4K, or we’d still be working on the film today.”

Among the most challenging shots to finish in the film was a panning shot of the knight coming toward the audience. “I think John [Lasseter] came up with the idea of panning the camera,” says Muren. “I didn’t even know if we could match the camera’s pan in the computer. When we shot it, I had somebody walk by and the operator followed as a reference. Then they shot the actual plate without the person in there. It took a while to get that, but it wasn’t hard once we figured out we could do it. It was somebody trying to track it manually every frame in 3D space because we didn’t want the stained glass knight to be locked into a candle that’s seven feet farther back. It had to be locked into them, closer to the camera.”

The final shot was a side view of the priest and the knight raising his sword above his head in a threatening way. Muren asked engineer Bill Reeves if they could add a glint of light to the sword blade for a dramatic end to the shot. “They didn’t know how to do that, how to put a light to reflect a certain thing, but they had all of the spatial information.” Muren suggested they track the shot backward, look at where the camera was, the angle of the sword, and then put a digital light there. “No one’s going to know that you cheated that light and it didn’t take a lot of time,” explains Muren. “That’s what we always do in moviemaking. What you care about is what the camera sees.”

Shortly after Holmes hit theaters, the Lucasfilm Computer Division was spun off into two pieces – one half funded by Apple co-founder Steve Jobs as Pixar, the other half as the digital editing company known as DroidWorks. In Young Sherlock Holmes’s 1 hour and 49 minute running time, ILM artists used just about every tool they had access to at the time, including a few newly invented ones. The seeds planted during their effects work would pay off in the ensuing years with a computer-generated water creature in The Abyss, the T-1000 in Terminator 2: Judgment Day, and the dinosaurs in Jurassic Park, the latter directed by one of Holmes’s executive producers, Steven Spielberg.

Amy Richau is a freelance writer and editor with a background in film preservation. She’s the author of several pop culture reference books including Star Wars Timelines, LEGO Marvel Visual Dictionary, and Star Wars: The Phantom Menace: A Visual Archive. She is also the founder of the 365 Star Wars Women Project, which includes over 90 interviews with women who have worked on Star Wars productions. Find her on Bluesky or Instagram.

The layered shading system has been the standard at ILM for many years as it continues to impact the wider visual effects and animation industries.

By Lucas O. Seastrom

(Credit: Academy of Motion Picture Arts and Sciences).

2026 marks Industrial Light & Magic’s 39th Scientific and Technical Award from the Academy of Motion Picture Arts and Sciences. The recipient innovation “Lama” – its name derived from the first two letters of each word in the term layered materials – is the first modular, production-ready, commercially available layered shading system of its kind in the visual effects and animation industries. Recognized on the award are Lama’s lead originators, including former ILM lookdev supervisor Jonathan Moulin, and former ILM rendering engineers Vincent Dedun and Emmanuel Turquin.

The concept for Lama first emerged ten years ago as a means to solve what had become a common problem with shading and rendering computer graphics imagery. A typical layered material network helps to define how light interacts with a digital surface like metal, wood, or skin. Light can reflect off a surface, but it can also refract between multiple, differing layers. Until 2016, material systems were commonly made specifically for the types of imagery in a given production. They were rigidly designed and often difficult to share between different productions. This inflexibility made it challenging for artists to adjust their work quickly while still maintaining the realistic dimensions of their images.

“In the early days of rendering – i.e. writing shaders to make objects look like real objects that are in fact CG objects – we had purpose-built shaders,” explains principal R&D engineer André Mazzone, who has been involved with Lama since its inception and currently manages the product. “There were shaders for glass, skin, metal and everything else. It was insular and isolated. Then there was a period when we developed general purpose shaders that would combine multiple properties. In certain cases, some parts of an asset might be clear but others might be opaque. For example with an eyeball, there’s a white, cloudy area but then there’s a transition into a transparent region where the lens is focusing light onto the retina. This blending needs to be smooth, so we require an expressive shader that comprises all of these behaviors. General purpose shaders were fixed in their designs as templates. If we wanted additional behavior, we had to jump in and code it. On Rango, they needed more dirt controls, so we had to splice in new pieces of code to make upgrades. That’s how it used to work.”

To eliminate this often cumbersome process, Lama was envisioned as a modular system where materials are layered and combined without the need for customized code. It’s a simple, lean, and artist-friendly method that ensures both physical accuracy and creative flexibility.

“The way Lama decomposes material responses is akin to the historical bespoke shader solutions for different materials, but the glue is now something that an artist can apply instead of an engineer,” explains Mazzone. “The engineering job is to provide all of the building blocks that might be needed, and the artists can make new additions themselves. This is Lama’s true strength. It employs an infrastructure that conserves energy across material layers. We had experimented with this in the past, but not in a way that allowed general arbitrary layering. This commitment to automatic physically-inspired energy conservation while rearranging components is what has made this tool so flexible and useful.”
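The energy-conservation idea Mazzone describes can be caricatured in a few lines: each layer reports how much incoming light it returns, and whatever it does not consume is handed down to the layer beneath, so any arbitrary stack stays physically plausible without bespoke code. The sketch below is a deliberately simplified illustration in Python, not Lama's actual interface or math.

```python
from dataclasses import dataclass

@dataclass
class Lobe:
    """A single material response: the fraction of incoming light it returns."""
    name: str
    albedo: float   # 0..1, fraction of energy this lobe reflects or scatters

def layer(top: Lobe, base: Lobe) -> Lobe:
    """Stack `top` over `base`, conserving energy.

    The base layer only ever sees the energy the top layer did not consume,
    so an arbitrary stack can never return more light than it received.
    """
    combined = top.albedo + (1.0 - top.albedo) * base.albedo
    return Lobe(f"{top.name} over {base.name}", combined)

# Illustrative stack: a thin clear coat over paint over a metallic substrate.
coat = Lobe("clearcoat", 0.08)
paint = Lobe("paint", 0.45)
metal = Lobe("metal", 0.90)

stacked = layer(coat, layer(paint, metal))
print(stacked.name, round(stacked.albedo, 3))  # always <= 1.0
```

Because the layering rule is generic, an artist can rearrange or add layers freely without an engineer writing a new purpose-built shader, which is the flexibility the quote describes.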

Lama started as an incubator project at ILM’s London studio in 2016, and by mid-2017 it was already being used in productions. Disney’s Aladdin (2019) was the first production to receive a full Lama deployment, to great success, and later, Terminator: Dark Fate (2019) resulted in the tool’s deployment throughout the wider network of ILM’s studios. “Any film that includes CG elements from our main-line pipeline – hero creatures, crowds and environments – has been 100% powered by Lama since 2019,” Mazzone notes. “That includes episodic series like The Mandalorian, Skeleton Crew, Andor, and many of the Marvel shows. All main-line assets at ILM now go through Lama.”

2019’s Aladdin was ILM’s first production to fully integrate Lama (Credit: Disney).

However, this was only the beginning of Lama’s impact. At the same time that ILM fully integrated the system, it began sharing Lama’s possibilities with sister companies, Pixar and Walt Disney Animation Studios. Pixar was so taken with it that they chose to adapt the tool into their iconic RenderMan product. Lama first premiered with RenderMan 24 in 2021, and since then, studios across the industry have benefited from this ILM-grown innovation, including Laika, DNEG, and MPC, among others. Pixar’s newest feature Hoppers is just one example, wherein Lama’s workflow for hair, fur, and feathers was utilized to great success.

“Most importantly, Lama shifts the artist’s mindset,” says Mazzone. “Materials are now no longer abstract parameter blends, but substrates and layers, much closer to their real-world counterparts. They can be developed independently and combined later, improving efficiency and giving artists and engineers a clear, shared language. This balance, simplicity at the surface, and complexity through composition, makes Lama both approachable for artists and robust in production, enabling faster iteration and higher quality outcomes.”

Congratulations to Jonathan Moulin, Vincent Dedun, and Emmanuel Turquin on their Scientific and Technical Award, and to everyone at ILM who has supported Lama’s continued development, including engineering lead and current product owner André Mazzone, former rendering engineer Henrik Dahlberg, rendering engineers Sam Cordingley, Alain Hostettler, Chong Deng and Khang Ngo, and lookdev supervisors Hugo Debat-Burkarth and Joseph Szokoli.

See the full list of Scientific and Technical Award Winners for 2026.

To learn more about Lama, visit RenderMan’s website.

Lucas O. Seastrom is the editor of ILM.com and Skysound.com, as well as a contributing writer and historian for Lucasfilm.

Behind every complex shot is a network of people supporting, teaching, coordinating, and looking out for one another. Drawing on perspectives from animation, production, training, and talent management, this article looks beyond the work on-screen to explore how everyday behavior, collaboration, and care shape life inside ILM’s Vancouver studio.

By Jamie Benning

(Credit: David Dovell & ILM).

When you arrive at Industrial Light & Magic’s Vancouver office, located in a unique skyscraper known as “The Stack,” the lobby displays creatures, props, and costumes tied to the company’s history, while the view beyond the windows reveals one of the most distinctive environments in the ILM network. Glass, steel, ocean light, and mountain silhouettes frame a workspace where some of the most technically complex and creative imagery in modern filmmaking is created. The Vancouver studio is shaped by its artists, influenced by its location, and sustained by a culture built on collaboration and shared purpose.

This portrait of ILM’s Vancouver studio emerges from conversations with people working across very different roles inside the studio: senior visual effects trainer Matt Leonard; lead animator Wesley Chandler; senior talent management coordinator Riya Ramani; and visual effects production coordinator William Wu. Their perspectives are reinforced by insights from Toban Taplin, executive in charge at the Vancouver studio, whose role bridges creative leadership, operations, and long-term studio strategy. Across these conversations, a consistent theme emerges. The Vancouver studio is a place defined by people who support each other, a city that inspires them, and a culture that reflects the best of ILM’s past and present.

Leonard’s role as senior visual effects trainer places him at the center of artist support, sharing knowledge across the studio as tools, workflows, and expectations continue to evolve. As a lead animator, Chandler works directly on performance and motion, guiding teams through some of the most creatively demanding sequences on ILM’s projects. Ramani, as senior talent management coordinator, sits at the intersection of people, logistics, and wellbeing, helping ensure that crews are supported not just creatively but sustainably. From the production side, Wu’s role as visual effects production coordinator focuses on communication and continuity, tracking work as it moves between departments and making sure artists have what they need to do their jobs effectively.

People


The Vancouver team consistently describes an environment shaped by openness, humility, and care. Matt Leonard, who works across ILM’s global studios, sees this as one of the company’s defining characteristics.

“That was one of the things that really drew me to ILM. From the outset, it felt like a very humble group of people. Having been here nine years, it still feels like there are no egos at all, which is staggering when you think about the calibre of people who work here.”

That absence of ego shows up, not as a slogan, but in everyday interactions. Production staff move between desks, checking in on shot progress. Artists gather for dailies, where work is reviewed openly, with feedback offered constructively from all present. Trainers circulate through departments answering highly specific technical questions. Talent managers quietly track crew wellbeing alongside schedules and contracts. The studio functions as an interlocking system, where each role supports the other.

That sense of care is reflected not only in how people are supported during difficult moments but also in how their time and energy are respected between projects. Wesley Chandler recalls how that approach stood out to him early on.

“I really loved how artist-focused ILM tries to be. That stood out to me quite a bit. I was finishing a project, and my talent manager at the time asked me, ‘Do you want to take some time off after this?’ Then I asked, ‘What do you mean?’ Usually, in visual effects, you go from one very busy project straight to the next. The idea that people could take time off if they wanted to really stood out to me. It felt like they genuinely wanted to make sure artists were well taken care of.”

For Toban Taplin, that environment is not accidental. His own background as an effects artist continues to shape how he thinks about leadership and studio culture.

“When I look back at my time as an artist, the places where I did my best work were the ones where the environment was good, and the people around you were all pulling in the same direction. The challenges on a show don’t feel quite so daunting when you’re sitting next to people you get on with, and feel supported by. A big part of my job is helping to create that environment so people can do their best work.”

For many, that sense of support extends far beyond project deadlines and delivery schedules. Chandler joined the Vancouver team when the industry itself was undergoing significant change, and he experienced that culture at a deeply personal level. “I’m incredibly grateful for how ILM supported my family and me, including giving us time to process a loss in the family. It really felt like they cared about my well-being as a person, not just what I could produce at work.”

That feeling of being valued as a person, not just as a contributor to a shot or a sequence, echoes across departments. Riya Ramani experienced that sense of belonging so strongly that she returned to ILM after a period working abroad. “My journey through different studios eventually led me back to ILM in Vancouver, which I now consider my ohana. What brought me back wasn’t just the work, but the people and the genuine sense of community that makes this place so valuable.”

Even those at earlier stages in their ILM careers feel actively encouraged to participate, learn, and grow. Staff describe an environment where questions are welcomed and curiosity is rewarded, creating a studio culture that supports learning alongside delivery.

Across every role, from production through artists, training, and talent management, the language is consistent. People feel supported, listened to, and encouraged to ask questions. It is a culture built as much on kindness as it is on craft, where emotional intelligence is valued alongside technical mastery.

While the work on-screen often draws the public spotlight, the Vancouver studio is sustained by a much wider network of expertise. Production, talent management, training, facilities, IT, and operations all work in parallel with the artists. Schedules are shaped, careers are guided, systems are maintained, and problems are solved quietly in the background.

Taplin recalls a message forwarded to him by a manager, written by an artist after an ordinary day at work. “They talked about coming into the studio, having breakfast that morning, then later picking up their production gift, and finding hot chocolate and donuts waiting upstairs. They were working on a Star Wars project, surrounded by memorabilia, and they said it felt like they were living their best life that day. Being able to share that feedback with the teams who created that experience is really important. It helps people see that what they’re doing matters.”


Place

Vancouver’s geography is central to the experience of working here. The proximity of mountains, forest trails, and the Pacific Ocean offers people across the studio a balance that many describe as both grounding and energizing. It is a city where an intense day at the workstation can be followed by a swim, a hike, or an evening on the beach. The natural world sits unusually close to the digital one.

Matt Leonard explains the appeal of the surrounding environment. “Within 10 or 20 minutes, you can cross a bridge into the North Shore and suddenly be in the mountains, or head the other way and be on the beach.” For Chandler and his family, that access to the outdoors is part of daily life. “My wife, daughter, and I love the outdoors! There are so many trails around here. We love to do a lot of hiking and camping!”

For William Wu, the character of the city runs deeper than its landscape. Vancouver’s multicultural identity shaped his upbringing and continues to shape his experience at ILM. “For me, Vancouver is home. Growing up in an Asian household, I was never tied to just one culture or one community. I was always surrounded by different cultures, and that became normal. People here are curious about what you appreciate in your culture, what you do for holidays, what your day-to-day life looks like. There’s a real willingness to learn and be open, and people are very kind and respectful. Vancouver is incredibly rich and diverse, and it doesn’t feel like anywhere else in the world.”

Taplin’s own relationship with the city began as a short-term experiment that became something more permanent. “We moved here on a whim, thinking we’d try it for a year. What made us stay was how accessible everything is. I live on the North Shore now, and within 15 or 20 minutes, you can be on a mountain trail, skiing in the evening, or hiking above the clouds. Even on the many grey, rainy days Vancouver has, you can drive up into the mountains, and suddenly you’re above it all, in the sunshine, with snow all around you. That ability to escape so quickly is pretty amazing. You’re immersed in nature all the time, and that’s incredibly inspiring.”

Vancouver has fully embraced its identity as a production city, with everything from major studio features to independent films and television series shooting across the region. Ramani notices that industry presence almost daily. “Working full-time at the office has its perks – our window overlooks Melville Street, where my colleagues and I have had a blast watching camera crews filming outside The Stack.”

That proximity to live production and nature feeds directly into the studio’s creative energy. Forests become reference, shifting Pacific light influences how people observe color and atmosphere, and rain, mist, rock, and water subtly inform the textures seen on-screen. Vancouver is not just a place where ILM happens; it actively shapes how people here see and imagine.

Author Jamie Benning (left) chats with Matt Leonard (Credit: David Dovell & ILM).

Culture

ILM’s global culture is rooted in a long tradition of collaboration, problem-solving, and shared creative ownership. The Vancouver office reflects that tradition, while adding its own local energy and character.

Training plays a central role in how ILM maintains that culture. Matt Leonard introduces new artists not only to the studio’s tools and workflows, but also to its history. “We run sessions on the history of ILM where we show images from the early days and talk about the people who built the studio. It helps new artists feel part of a much bigger story.”

Access to senior artists and long-time ILM innovators is another constant. Knowledge is not hoarded. It circulates. “You can talk to almost anyone in the company and say you’re struggling or ask how something works,” Leonard says. “People genuinely want to help.”

That openness is visible every day in Vancouver. Wu recalls moments when simple questions lead to unexpected insight, even on landmark films. “I remember someone sending out a question about Jurassic Park, and people who actually worked on the film replied with real details about how those shots were done. It really shows how open the culture is.”

The studio’s hybrid work pattern provides flexibility, but in-person collaboration remains important for many. The ability to sit alongside someone, sketch an idea, or solve a problem together still carries enormous creative value.

“Working from home has brought flexibility that people really value,” Wu explains. “But what being in the studio brings to the collective is different. When senior artists sit next to someone who hasn’t been in the industry for 20 years, that exchange is invaluable. On challenging projects, there’s a real sense of camaraderie that comes from being together.”

Chandler echoes that sentiment from a personal perspective. “For my mental health, I really value being around people. Working fully remote would be difficult for me.”

Ramani sees the impact in small, everyday moments. “I love the spontaneous hallway encounters; sometimes just bumping into a colleague leads to a quick conversation that resolves a challenge on the spot.”

The social culture reinforces those connections. Staff join art clubs, volleyball groups, foodie communities, Inktober challenges, and a wide range of employee resource groups. As Ramani puts it, “The clubs at ILM are definitely a highlight for me. We have a book club, a Pride ERG, a fashion club – there’s something for everyone, and it’s a joy to watch that community expand. It’s wonderful to see our diverse interests celebrated and getting to know my teammates through the things we love outside of our day jobs.”

For Wu, those communities also create everyday moments of creative exchange. “It’s really fun seeing colleagues share their drawings every day during projects like Inktober.”

Culture at ILM Vancouver does not live in policy documents. It lives in behaviour.

Benning chats with Riya Ramani (Credit: David Dovell & ILM).

Work and Innovation

Vancouver contributes to some of ILM’s most complex and ambitious projects. Artists describe an environment where technical advancement grows directly out of collaboration between departments and disciplines.

The Vancouver facility is one of five ILM studios worldwide, with work frequently moving between sites as projects evolve. That kind of collaboration demands clarity, trust, and a shared technical language. Vancouver’s location on the Pacific coast keeps it closely aligned with West Coast production while remaining deeply connected to each of the other ILM studios.

Matt Leonard offers a concise summary of the studio’s approach to problem-solving. “When a client has an impossible problem to solve, they often come to us. And I’ve never heard anyone here say, ‘We can’t do that.’”

Taplin points to a recent example where that mindset became tangible. “On Percy Jackson and the Olympians, we were being asked to move fast,” he says. “That meant building things locally, including building an ILM StageCraft LED volume and virtual production team, so the creative work could keep evolving. We were able to tap into the expertise from across ILM as a whole and create something new for our team here.”

He sees that approach as both an ILM hallmark and something the Vancouver studio has fully embraced: drawing on the wider global company while remaining agile enough to respond quickly as new challenges emerge. That mindset plays out through repeated cycles of iteration. Shots evolve through multiple versions. Tools are reshaped and rewritten in response to real production demands. Chandler recently saw how that same approach shaped the work on Avatar: Fire and Ash (2025). “We developed several new tools that allowed us to work much faster and saved animators from having to do things manually.”

From the production side, Wu sees innovation supported by communication and trust. “My job is to make sure people feel supported and that when work moves between departments, communication is clear.”

Innovation at ILM is rarely about sudden breakthroughs. It is about a steady accumulation. Small improvements layered over time. Systems shaped by people solving real, creative problems at scale.

Benning and William Wu (Credit: David Dovell & ILM).

Belonging to a Larger Story

Artists and production staff in Vancouver describe a strong sense of belonging to something bigger than any single show. They recognize both their individual contributions and their place within ILM’s wider history.

Ramani appreciates that the studio formally recognizes the work of every department. “It’s so rewarding to see ILM include the studio support teams in the credits. It reinforces the idea that no project is the result of just one department; it takes an entire community to reach the finish line.”

Leonard notes how quickly new employees begin to feel connected to that legacy. “Very quickly you start to feel like you’re part of something bigger, something that has a real legacy behind it.”

For Taplin, that sense of continuity is essential. “When you look at all the industry pioneers that are at ILM, all of these people that everyone looks up to started as juniors. They were given opportunities, allowed to try things, allowed to fail, and to build over time. It’s important that people here know they can follow that same trajectory. That this can be a place where you build a career, not just move from project to project.”

Wu became aware of the ILM way almost immediately. “Everyone I spoke to before joining said ILM was the best place to be. And once you’re here, you really understand why.”

Careers at ILM often unfold over many years, sometimes with people leaving and returning, carrying new skills back into the studio. That flow of experience continually refreshes the culture while preserving its core identity.

Wesley Chandler gestures to a familiar Star Wars character as Benning listens (Credit: David Dovell & ILM).

Looking to the Future

The Vancouver studio is shaped by its people, influenced by its environment, and grounded in a culture of shared learning and collaboration. Artists and staff describe a studio where support is real, questions are encouraged, failure is part of reaching success, innovation grows from teamwork, and ILM’s long history remains a living part of everyday work.

Taplin sees Vancouver playing an increasingly important role in the studio’s future. “There’s so much change happening in the industry. We need to be at the front of that. The question for us is always what Vancouver can bring to the table that serves the wider studio, while also pushing something new forward.”

He is also clear about the importance of acknowledging every department. “I want to recognize all of the teams that contribute to what we do in Vancouver. People come in every day trying to make things a little bit better, to try something new, and to put ideas forward with the wider team in mind. It’s a huge lift that everyone does, and it’s what makes this a special place to be.”

The values that shaped ILM in its earliest years are clearly still present here. Today, those values are expressed through hybrid workflows, global collaboration, and evolving technology. Looking forward, they will be carried by the next generation of artists, coordinators, trainers, and managers who will shape whatever ILM becomes next.

In a city known for its natural beauty, diverse communities, and deep connection to filmmaking, ILM’s Vancouver studio continues to expand the company’s legacy across film, television, and emerging formats. It remains a place where people can build careers, push technology forward, and contribute to stories told around the world.

ILM’s Vancouver studio is located on the traditional, ancestral, and unceded territories of the Coast Salish Peoples, including the xʷməθkwəy̓əm (Musqueam), Skwxwú7mesh (Squamish), and Səl̓ílwətaʔ/Selilwitulh (Tsleil-Waututh) Nations. We thank all First Nations who have lived and worked on these territories from time immemorial.


Jamie Benning is a filmmaker, author, and podcaster with a lifelong passion for sci-fi and fantasy cinema. He hosts The Filmumentaries Podcast, featuring twice-monthly interviews with behind-the-scenes artists. Visit Filmumentaries.com or find him on X (@jamieswb) and @filmumentaries on Threads, Instagram, Facebook, and YouTube.

ILM visual effects supervisor Vincent Papaix and Nerfstudio creator Matt Tancik discuss their innovative approach to visual effects shot design.

By Lucas O. Seastrom

At this year’s HPA (Hollywood Professional Association) Awards for Technology & Innovation, Industrial Light & Magic and collaborator Nerfstudio took home a win in the Innovation in VFX, Virtual Production & Animation category. Embracing a new kind of open source toolset allowed ILM to recreate visual effects shots for Marvel’s 2025 series Ironheart at a degree of efficiency that greatly outpaced established techniques. The key was “NeRF,” or neural radiance fields, a method that allows 3D photorealistic environments to be created from a sampling of real-world 2D photography.

(Credit: ILM & Marvel).

A Catalyst from Marvel’s Ironheart

ILM visual effects supervisor Vincent Papaix faced an interesting challenge with a handful of drone-based shots from Ironheart, wherein the series’ namesake flies over Chicago’s lakeside waterfront and river district. The fast-flying CG character had to be integrated with the live action plates shot on location. “They decided to film with the drones in a very slow way, thinking we could retime the footage,” Papaix explains to ILM.com. “Typically, you might retime at 200% or 300%, but in this case it was over 1,000%. The character’s movement needed to be very, very fast. Traffic would have to be replaced. When you’re filming at normal speed with the drone, you don’t get the sense of the micro-movement, but at high speed, you could see the high-frequency movements of the camera.”

The visual effects team needed to recreate the desired camera moves while maintaining the appropriate view of the live action background plate. Normally, they might attempt a 2D stabilization of the image, but in a case like this the sense of depth, or parallax, made it difficult to stabilize the foreground and the background at the same time. They considered recreating the entire world in CG, traditionally modeling, texturing, and shading every detail of the Chicago setting, but on an episodic production schedule the time and resources required would have been prohibitive.
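
To get a feel for why a single 2D transform falls short, here is a toy example in Python – not ILM’s code, and with made-up numbers – showing that under even a small sideways camera move, a foreground point and a background point shift by very different amounts in the image, so no one offset can steady both.

# Illustrative only: why one 2D offset can't stabilize a shot with parallax.
# A pinhole camera jitters sideways; points at different depths shift by
# different amounts in the image, so no single 2D transform cancels both.
import numpy as np

focal = 1000.0                           # focal length in pixels (hypothetical)
points = np.array([[2.0, 0.0, 20.0],     # foreground point, 20 units from camera
                   [2.0, 0.0, 200.0]])   # background point, 200 units from camera
shift = np.array([0.5, 0.0, 0.0])        # unwanted sideways camera jitter

def project(p):
    """Project a camera-space point to image coordinates (pinhole model)."""
    return focal * p[:2] / p[2]

for p in points:
    before = project(p)
    after = project(p - shift)           # camera moves +x, so points move -x in camera space
    print(f"depth {p[2]:6.1f}: image shift = {after - before}")
# The foreground point moves roughly ten times farther than the background point,
# which is exactly the depth-dependent shift that defeats a single 2D stabilization.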

Papaix decided to begin what he describes as a “pet project,” researching how NeRF models could be applied to visual effects work. At first there was no guarantee that his inquiries would yield results, but then he discovered Nerfstudio, an open source program that provided an end-to-end workflow for developing 3D environments from 2D photography. 
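
For the curious, the shape of that end-to-end workflow can be sketched with Nerfstudio’s own command-line tools. The paths below are hypothetical and exact flags vary between releases, but the three stages – estimating camera poses from the photography, training a model, and rendering along a new camera path – are the ones the project exposes.

# A minimal sketch (not ILM's pipeline) of driving Nerfstudio's workflow from Python.
# The ns-* entry points ship with Nerfstudio; paths are made up for illustration.
import subprocess

plates_dir = "plates/chicago_drone"        # hypothetical folder of 2D frames
processed_dir = "processed/chicago_drone"  # hypothetical output location

# 1. Estimate camera poses for the input photography (runs structure-from-motion under the hood).
subprocess.run(["ns-process-data", "images",
                "--data", plates_dir, "--output-dir", processed_dir], check=True)

# 2. Train a NeRF model ("nerfacto" is Nerfstudio's default method) on those frames.
subprocess.run(["ns-train", "nerfacto", "--data", processed_dir], check=True)

# 3. Render the trained scene along a new, art-directed camera path.
subprocess.run(["ns-render", "camera-path",
                "--load-config", "outputs/chicago_drone/nerfacto/config.yml",
                "--camera-path-filename", "camera_path.json",
                "--output-path", "renders/new_move.mp4"], check=True)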

Nerfstudio creator Matt Tancik began his research in developing neural radiance fields as a PhD student at the University of California, Berkeley. “People wanted to experiment and see how much they could push this technology,” Tancik says. “It became obvious that there was a desire for this research to make it into the industry field. But there wasn’t an easy way to do it because it was kind of obtuse research code at the time. The Nerfstudio project was about trying to see how we could wrap it up into something that looked more like a product, and fully open source, so that other people could start playing with it. 

“And most notably,” Tancik adds, “people could help build upon it. A lot of the research projects that we saw coming out of NeRF acted like modules attached to NeRF to make it better along one axis or another. It made sense to try to collaborate as much as possible. The Nerfstudio project was a step towards doing that, and that’s when Vincent and ILM started playing around with it.”

(Credit: ILM & Marvel).

The Function of “NeRFs”

But how exactly do neural radiance fields help empower artists like Papaix and his colleagues to work more efficiently? As Tancik explains, it’s a process that seeks to forgo the traditional CG methods that involve the complex, often laborious craft of representing photorealistic imagery as meshes and triangles with applied textures and lighting. “All of that takes a significant amount of effort to make it photoreal, and in some cases, it’s almost impossible,” says Tancik. “That’s not for the lack of people trying to make these methods easier and easier. The goal of NeRF was to essentially see if we could use machine learning to accomplish the same thing. Instead of manually placing these triangles, can we have an algorithm construct these things from photos? So then the work becomes capturing many photos of a scene and converting them into a 3D representation.”

The result is a new approach to storing scene data. Instead of triangles mapped within a CG model, NeRF represents the scene as points in space, each with a color that can shift depending on the direction it is viewed from. “When you look out into space, you’re shooting out into the scene and seeing what points you hit, and you’re noting which direction you’re hitting that point of space,” Tancik notes. “A single point in space, whether I’m looking at it one way or another, might look a little different. By describing the scene like this, it fits really nicely into optimization techniques that we can use to fit that to an image.”
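
As a rough illustration of that description – emphatically a toy, not ILM’s or Nerfstudio’s implementation – the field can be thought of as a function from 3D position and view direction to color and density, with an image formed by compositing samples along each camera ray:

# A toy sketch of the idea behind a neural radiance field.
# A field maps (3D position, view direction) -> (color, density); an image is
# formed by sampling the field along each camera ray and compositing the samples.
import numpy as np

def radiance_field(position, view_dir):
    """Stand-in for the learned network: returns (rgb, density) for one sample."""
    density = np.exp(-np.linalg.norm(position - np.array([0.0, 0.0, 4.0])))  # a soft blob near z = 4
    rgb = np.clip(0.5 + 0.5 * view_dir, 0.0, 1.0)    # view-dependent color, as in the quote above
    return rgb, density

def render_ray(origin, direction, near=1.0, far=8.0, n_samples=64):
    """Numerically composite samples along one ray (the volume-rendering step)."""
    ts = np.linspace(near, far, n_samples)
    delta = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0
    for t in ts:
        rgb, sigma = radiance_field(origin + t * direction, direction)
        alpha = 1.0 - np.exp(-sigma * delta)          # opacity contributed by this sample
        color += transmittance * alpha * rgb
        transmittance *= (1.0 - alpha)                # light that survives to the next sample
    return color

print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0])))

In a real NeRF, the stand-in radiance_field function above is a neural network whose weights are optimized so that rays rendered this way reproduce the input photographs.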

ILM’s Practical Application

Working with former ILM research engineer Sirak Ghebremusse and former ILM pipeline technical director Kevin Rakes, Papaix oversaw the effort to adapt Nerfstudio’s functionality for visual effects. A new encoder and decoder were required to translate information between Nerfstudio and ILM’s other tools, ensuring the team could maintain precision in color and image range.
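
The article doesn’t spell out what ILM’s encoder and decoder do internally, but the problem they address is a familiar one: visual effects plates live in linear, high-dynamic-range color, while the images a NeRF is typically trained on are display-encoded. As a stand-in illustration only, the standard sRGB transfer pair shows how a careless round trip clips highlight range:

# Stand-in illustration, not ILM's conversion: the standard sRGB transfer pair,
# showing why a careful encode/decode matters when moving plates in and out of
# a NeRF toolchain. Values above 1.0 do not survive a naive display-referred trip.
import numpy as np

def linear_to_srgb(x):
    """Encode linear light to sRGB display values (per channel)."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x <= 0.0031308, 12.92 * x,
                    1.055 * np.power(np.clip(x, 0.0, None), 1 / 2.4) - 0.055)

def srgb_to_linear(y):
    """Decode sRGB display values back to linear light."""
    y = np.asarray(y, dtype=np.float64)
    return np.where(y <= 0.04045, y / 12.92, np.power((y + 0.055) / 1.055, 2.4))

plate = np.array([0.18, 1.0, 4.0])             # mid grey, diffuse white, a bright highlight
round_trip = srgb_to_linear(np.clip(linear_to_srgb(plate), 0.0, 1.0))
print(plate, "->", round_trip)                 # the 4.0 highlight comes back as 1.0: range is lost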

Similarly, the team needed the environments to line up with real-world scale, measured in feet, so Tancik himself created a new file format to aid the transition. That also required the development of new “gizmos” – reusable groups of nodes – within the compositing software Nuke, which allowed the artists to move seamlessly back and forth between the Nerfstudio render and the final effects work.

“We can work with standard layout and animation in feet, then go into NeRF, import any camera we want, render it through Nerfstudio, and bring that camera move back with us into the Maya or Zeno file,” Papaix notes. “It was key to have that ability.”
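
A minimal sketch of that round trip – with hypothetical names and a simple uniform scale standing in for whatever conversion ILM actually used – might look like this: a camera matrix authored in a feet-based layout scene is mapped into the NeRF’s normalized space, and a camera exported from Nerfstudio is mapped back.

# Hypothetical sketch of mapping cameras between a feet-based scene and a
# normalized NeRF space; the scale factor and function names are made up.
import numpy as np

FEET_PER_NERF_UNIT = 50.0   # hypothetical: one normalized NeRF unit spans 50 ft of the location

def feet_to_nerf(camera_to_world_ft):
    """Rescale a 4x4 camera-to-world matrix from feet into normalized NeRF units."""
    m = camera_to_world_ft.copy()
    m[:3, 3] /= FEET_PER_NERF_UNIT           # only the translation carries units
    return m

def nerf_to_feet(camera_to_world_nerf):
    """Inverse mapping, for bringing a Nerfstudio camera back into the layout scene."""
    m = camera_to_world_nerf.copy()
    m[:3, 3] *= FEET_PER_NERF_UNIT
    return m

cam_ft = np.eye(4)
cam_ft[:3, 3] = [120.0, 35.0, -400.0]        # an example camera position, in feet
cam_nerf = feet_to_nerf(cam_ft)
assert np.allclose(nerf_to_feet(cam_nerf), cam_ft)   # the round trip is lossless
print(cam_nerf[:3, 3])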

(Credit: ILM & Marvel).

As the process evolved, ILM was able to apply these new capabilities in multiple ways. They could stitch a seamless transition between two separate camera views over the water, one captured by a drone and another from a boat, all without the need to create a new CG environment. Entire objects, such as street traffic on a bridge, could be removed. And because they were able to maintain parity between their visual effects environment and the Nerfstudio-rendered space, they could develop entirely new camera paths at the request of the filmmakers.

“We could create a new smooth camera move, basically art direct the exact move that we wanted, and then show that to the director,” Papaix says. “They were very happy. They didn’t think it was possible to change a camera move using an original plate, but we did. They said it was like magic to them. People were curious. Did we project the plate onto geometry? Did we model the whole city? No, there’s no modeling or anything.”

Now with Greater Accessibility

Papaix is keen to note that at the time ILM collaborated with Nerfstudio – in 2022 and ‘23 – these methods were still considered experimental. “Very few people were putting this kind of stuff into production. There was a lot of research taking place, but Matt showed how this could be useful, and ILM took it and made it production-ready.”

Tancik adds, “I’ve always been interested in the visual side of things, and hoped to get to that point, but didn’t know if the concept would ever actually make it there. It was not an easy thing to run. You needed a lot of computing power and GPUs. It didn’t feel like it was there yet to be useful in industry or productions. So watching Vincent and ILM put it into practice was really fun.”

Today, the use of neural radiance fields, along with a related outgrowth known as Gaussian splatting, continues to rise as computing power grows more efficient. “This was a science paper a few years ago, and now it’s making its way into all of the software that we use,” Papaix says.

“With the move to Gaussian splat, if I had to do those shots today, I could probably do it from start to finish in only a few days, compared to the months that it took before,” Papaix concludes. “At the time, it took about six months because it was more of a research project off and on, a side project. Now that we understand the tech, we can optimize things, and we can do things much faster. The tech improves so fast. We’re still in the early days of learning how these techniques will be applied.”

Watch the full demonstration reel:

Click here to read more about Nerfstudio.

See the full list of winners from the HPA Awards for Technology & Innovation.

Lucas O. Seastrom is the editor of ILM.com and Skysound.com, as well as a contributing writer and historian for Lucasfilm.