
The production visual effects supervisor discusses the Emerald City, Elphaba in flight, and collaborating with Jon M. Chu.

By Mark Newbold

Based on the novel Wicked: The Life and Times of the Wicked Witch of the West by American author Gregory Maguire, the Wicked musical has enchanted audiences worldwide, both on the stage and on the big screen. Maguire’s 1995 novel was not only an adult-oriented version of L. Frank Baum’s classic children’s story, The Wonderful Wizard of Oz, but it was also the first entry in The Wicked Years book series.

With music and lyrics by Stephen Schwartz and a book by Winnie Holzman, the stage version of Wicked (or to give it its full title, Wicked: The Untold Story of the Witches of Oz) premiered on May 28, 2003 at the Curran Theatre in San Francisco and has broken records wherever it has been performed, including over 7,000 performances in London’s West End.

A true phenomenon ripe for further adaptations, director Jon M. Chu’s Wicked landed in cinemas in November 2024. Capturing the imagination of the cinema audience, the final moments of the film promised a sequel, and that promise was kept when Wicked: For Good debuted a year later in November of 2025, continuing the story of Elphaba Thropp (Cynthia Erivo), Glinda Upland (Ariana Grande), Fiyero Tigelaar (Jonathan Bailey), the Wizard (Jeff Goldblum), and the people of Oz.

Loaded with sequences of dizzying visual complexity, the film tasked Industrial Light & Magic with bringing Chu’s vision to vivid, yellow-bricked life. ILM.com had the opportunity to sit down with the production’s four-time Oscar-nominated visual effects supervisor Pablo Helman to discuss Wicked: For Good and the task of unveiling even more of Munchkinland, Shiz University, and the Emerald City.

“It was a 155-day shoot for a two-part story,” says Helman. “We thought of Wicked and Wicked: For Good as one movie, and we shot it that way.” That meant intense preparation and planning, given the logistical and technical nature of certain sequences in the films.

“Visual effects can often be challenging because you’re asking the director, the editor, and everybody else to think about things that they normally don’t want to think about,” explains Helman. “A director is thinking of the whole story, but we’re asking them to look at specific sequences because we need to turn over a certain number of shots. They don’t like to be presented with choices because they think they’ve already made their choice, so why present another one? That means they have to rethink, and that takes time.” 

Jon Chu was open to the challenge. “Jon takes an organic approach to filmmaking; he loves having choices and different possibilities,” Helman says. “There might be a script and a plan in place, but the process of making a major motion picture still has plenty of fluidity. Lots of things change throughout the process of filming, and there are lots of choices to make.”

(Credit: ILM & Universal).

“It’s a Transactional Thing”

Creative choices are one thing, but a production like Wicked: For Good requires a large amount of preparation and resources, and a sizable portion of those resources are given over to visual effects. As production visual effects supervisor, Helman was deeply involved with practical as well as creative duties.

“Part of my role is managing the project in partnership with my effects producer,” Helman explains. “That includes talent, financials, resources, all kinds of things. So if you’re not thinking ahead, you’re not doing your job. Nobody likes to be in dailies and not be able to say, ‘Yes, I can do this,’ and the only reason you say ‘I can do this’ is because you have a plan A, a plan B, and a plan C.” 

Those plans require intense work from the team, gathering as much data as they can. “We have LiDAR [a laser system that scans objects and environments to recreate physical objects and sets as digital models], we have high resolution textures, HDRIs for lighting, all the science behind it, so that when we’re in dailies, I can say ‘Yes, we can do this.’” That preparation is integral to the production. “Nobody likes to be surprised or ambushed. You don’t want to get into those situations, and we never did with Jon. He’s a great communicator and a terrific person. There was never a moment when he was annoyed about anything. For four years on a project, that is an accomplishment.”

Art always comes with the cost of tools, be it the canvas, brushes, and oils a painter uses, or the clay and tools of a sculptor. It’s the same for visual effects artists, but as Helman explains, creativity always leads the discussion.

“The creative stuff that we put together as a team comes first, and after that, you have to be conscious of the resources,” Helman says. “I could go to a producer and say, ‘Look, I know that we’re spending a lot more resources in this section, but I promise that when we get here, I’m going to find a way to get back the resources that we’re putting into this.’ It’s a transactional thing, but it’s all about the storytelling. 

“There’s always a way of doing what is needed for the movie,” Helman continues. “You look at the storytelling and the amount of resources and ask, ‘Is it worth it? Is there a payoff?’” Time and money are challenges for every production, no matter the scale, but Helman believes one is more of a problem than the other. “At some point, you run out of resources, but the resource isn’t money; it’s time, which is finite. You can throw all kinds of money at the problem, but it won’t get done, because it needs more time in the oven. Then it’s not my choice, it’s a choice that we make together.”

(Credit: ILM & Universal).

“Every Department Brings Something to the Storytelling”

The work of the visual effects team overlaps with many departments, none more than the special effects team, whose focus is on-set effects like steam, smoke, weather elements, and anything the performers physically touch. Helman gives an example of where the seam between the two lies. “Production design can’t build a 79-story building. They can build 55 feet of it, and then visual effects takes that and develops it, all based on what was done on set.” ILM and its fellow effects houses can expand the world of Wicked, but it only works if the departments are on the same page.

“There’s plenty of discussion about special effects, what can and cannot be done in camera, what’s safe and what’s not,” says Helman. “Filmmaking is one of the most collaborative art disciplines because if you don’t collaborate, you end up with something that is flat. Every department brings something to the storytelling and adds nuance in a way that is individual, surprising, interesting, and curious. So it’s a combination of all those things. There were about 1,000 artists and production crew on these films, and I really would like to thank them because if it wasn’t for them, we wouldn’t be doing this.”

To the Emerald City

The Emerald City is as much a character in both Wicked films as Elphaba, Glinda, Fiyero, Boq (Ethan Slater, aka the Tin Man), Madame Morrible (Michelle Yeoh), and The Cowardly Lion (voiced by Colman Domingo). It is the home of the wonderful wizard and the high society of Oz, as well as the underclasses who have to fight for every scrap. Wicked showed us a vast swathe of the city, and Wicked: For Good not only takes us back there, but it takes us into places we’ve not seen before, a task that fell to Helman and his team.

“There were certain parts of Emerald City that we saw in the first part of the story, and certain parts that we see in the second, so we built different assets for different parts of the story,” explains Helman. “The first movie was a lot more presentational. Things were being set up, and tonally, the movie was lighter, whereas the second one is a lot darker.” That visual change also mirrors the journey of the characters.

“Every character has a specific arc, so in Wicked: For Good, they deal with the consequences of the choices they made in the first film. Part of production design and visual effects is to accompany the performances with the environment,” Helman continues. “The atmospherics are always thick, and the light direction is purposely very dramatic at times. The sun is low, so there’s less light.” Time is also a factor in the progression of the environment. “It’s a combination of things. The clothes and the creatures get used, and the buildings get worn because they went through a specific experience.” He laughs as he adds with a wink, “When I started the movie, my hair was brown, and now it’s white.”

(Credit: ILM & Universal).

“We’re Off to See the Wizard”

At the heart of the Wicked films are Elphaba and Glinda, and the second film gave the production the opportunity to delve deeper into the classic characters of L. Frank Baum’s original novel and add a 2025 spin to them.

“It was really exciting,” says Helman. “I remember the first test that we did with Dorothy, Toto, the Tin Man, and the Scarecrow. It’s funny because some of the critics were saying that the chronology of the story didn’t do this or that. Well, you know what? It’s a complicated story, and everybody remembers it differently, so I think this is another point of view of that story.”

Gregory Maguire’s novel presented the story and characters of Baum’s world in a more mature, complicated light, and that gave the filmmakers the chance to overlap parallel stories in ways that didn’t step on each other. “It was fun to think about. One thing is right there in the background, but if I come around and go in front, the story is different. It’s an opportunity.”

“Mitigate the Forces of Gravity”

With plenty of experience in making us believe a human, a superpowered dog, or a battered old Corellian freighter can fly, ILM was charged with taking Elphaba into the skies of Oz, a task that required visual effects know-how and a game performer in Cynthia Erivo.

“The approach was always going to be Cynthia doing the flying,” says Helman. “That meant that she needed to mitigate the forces of gravity, no pun intended. She’s singing while trying to get her body to do very specific actions. Cynthia is very strong, but it takes some effort.” 

A willing actor and all the right equipment don’t necessarily mean the results are what is required by the director or the visual effects team. “Sometimes it doesn’t work. Sometimes, because of safety and other things, the actors are not fully exerting themselves, but Cynthia did.” That combination meant that when all was said and done, Elphaba looked even more imposing in the air in the sequel than she did in the first film. “In the arc of the story, she’s proficient. She’s gliding and then stopping. She knows what she’s doing and she’s done it before. She takes time to look at the world under her, and that requires some skill.”

Along with flying, there’s another aspect to the illusion: Elphaba’s cape, one of the most striking elements of her appearance as she heads down a dark path. “Elphaba’s cape isn’t in the cast, but it is a character in the movie, and it does a lot, not only when flying but also landing and taking off,” notes Helman. Its creation required the skills of two visual effects houses. “ILM and Framestore created it because we couldn’t do a 30-foot-long cape physically.” With practical and visual effects work woven together, parts of the performer were also replaced to create the complete shot, but as Helman explains, “we always used Cynthia’s face and performance.”

Creating the physical cape meant crafting clothing that gives the right look and says something about the character, a challenge the designers went to great lengths to achieve. “The cape has different layers of materials and transparency, but we took some liberties with it,” explains Helman. “We know how difficult it is from doing different capes, from the Vader cape to Superman. The cape says who he or she is.” 

As is often the case, trial and error was the path to finding the right blend. “There was a lot of testing, we did simulations and resins, but at the end of the day, we said let’s forget about the science of it because it’s about the content. It might be scientifically correct, but it doesn’t work if it’s not doing what we need it to do and it’s not correct for the story we’re trying to tell.”

(Credit: ILM & Universal).

Unlimited Together

Just like its smash-hit predecessor, Wicked: For Good brings together an impressive array of on-screen and off-screen talent, all laser-focused on bringing their utmost in service of making the best film they possibly can. On visual epics like Wicked, visual effects, storytelling, and direction need to be in lockstep.

“There’s a four-minute sequence with this beautiful song that Glinda sings at the beginning of the movie called ‘I Couldn’t Be Happier,’” Helman says. “We redressed Munchkintown, we replaced the sky, the tulips, all kinds of things, but when we looked at it, Jon said ‘We’re missing something because this is a very subtle song.’ Jon said, ‘What if, when she starts singing, the confetti stops?’” 

It’s a striking visual as the confetti hangs motionless in the air, but one that entailed more work than one might imagine. “I asked how long are we doing this for, and Jon said the whole scene. That’s four minutes of really resource-intensive particle work that we didn’t know would work or not, but we had to complete it because we needed to know.” That would require Helman’s most valuable resource: time. “Jon understood that if he wanted to see this, it was going to take weeks to get it. It’s important to have that communication with the director, and to have somebody who understands what we’re doing.”
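The effect Helman describes, letting a simulation run normally and then holding every particle in place while the camera keeps moving, can be pictured as a toy integrator. This is a minimal illustrative sketch with invented names and numbers, not ILM's simulation code:

```python
# Toy sketch of the frozen-confetti beat, assuming a simple 1D particle
# state of (height, velocity); none of this is ILM's actual pipeline.

def step_particles(particles, frame, freeze_frame, dt=1.0 / 24, gravity=-9.8):
    """Advance each particle one frame; hold everything once frozen."""
    if frame >= freeze_frame:
        return particles  # freeze: confetti hangs motionless mid-air
    return [(pos + vel * dt, vel + gravity * dt) for pos, vel in particles]

# Two seconds at 24 fps, freezing one second in:
particles = [(10.0, 0.0), (12.0, -1.0)]
for frame in range(48):
    particles = step_particles(particles, frame, freeze_frame=24)
```

The sketch only captures the idea; as Helman notes, the real cost was doing this at scale, across four minutes of resource-intensive particle work.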

Magic, Glory, and Love

The ultimate combination of visual effects, performance, direction, production design, and numerous other departments is “The Girl In The Bubble,” written by Stephen Schwartz and performed by Ariana Grande. Here we find Glinda in her home, inspired to finally take action following the cyclone that killed Elphaba’s favored sister, Nessarose. It’s a sequence brimming with emotion and meaning, and one that took almost the entire production schedule (that’s both films) to complete.

“We started ‘The Girl in the Bubble’ during the first week of filming on the first movie because it was a very complicated sequence, and we knew that it was going to take us two years. It was a four-minute, continuous shot.” A weighty task, and one that needed to pull in all the eyes it could to make it work. “We did a lot of work with [cinematographer] Alice Brooks and used lots of props before we understood what we needed for the previs.”

It’s one thing to dream up a film sequence, but it’s another to make that dream reality. So with previsualization underway, Helman and his team also needed to work out the real-world technical aspects of the scene.

“Once we had the previs, then we did a techvis, which meant taking a look at the previs and taking a step back,” Helman continues. “For example, let’s say we previsualize where the camera is moving and, BOOM, there’s a wall in the way, but it’s not a wild wall [meaning it can be easily moved and then put back]. On set, the director might say, ‘Well, move the wall,’ but then you’re wasting two hours of time and resources. Techvis will look at the distance between two points, how fast things will move, and where the blocking is. If you don’t prepare before the shoot, we might put the lens on and find we can’t focus because it’s too close or too far, or the camera doesn’t fit in the space and the director has to change the shot.” 
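A techvis pass like the one Helman describes boils down to checking planned camera setups against hard physical limits before anyone is on set. Here is a simplified sketch of that kind of check, with made-up lens and set numbers rather than anything from the production:

```python
import math

# Illustrative techvis-style report: flag shots where the subject sits
# inside the lens's minimum focus distance, or where the camera body
# won't physically fit in the set space. All values are assumptions.

def lens_report(focal_mm, sensor_width_mm, subject_dist_m, min_focus_m,
                space_depth_m, camera_depth_m=0.35):
    """Return the horizontal field of view and a list of physical problems."""
    fov_deg = math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))
    problems = []
    if subject_dist_m < min_focus_m:
        problems.append("subject inside minimum focus distance")
    if camera_depth_m > space_depth_m:
        problems.append("camera body does not fit in the space")
    return fov_deg, problems

# A cramped setup: subject too close to focus, no room for the camera.
fov, issues = lens_report(focal_mm=35, sensor_width_mm=24.9,
                          subject_dist_m=0.2, min_focus_m=0.3,
                          space_depth_m=0.25)
```

A report like this surfaces in prep, rather than on the shoot day, that the lens can't focus or the camera can't fit, which is exactly the kind of wasted-hours problem Helman describes.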

With the techvis in progress, the team moved to the actual set itself. “We went on set to look at what the camera was doing,” says Helman, “and we realized that when she goes up the stairs, the set would have to be stripped because the camera couldn’t get there. We’d need a 50-foot crane. So we’d have to take the wall out and build a CG set as the camera comes around.” As it was for the entire production, planning in advance was key. “You have to figure these things out beforehand. It’s not necessarily something that a director would look at, but the other departments need that techvis information as well.”

While there was a physical set with props, Helman’s visual effects team added a surprising amount of detail afterward. “The railing is created in computer graphics, everything behind Glinda is created in computer graphics, and once she gets to the closet, only half of the closet was built, so we had to build the reverse of that.” There are even different takes of Grande’s performance brought together for the completed sequence, and that meant more delicate work for the team. “We had performances that were morphed, so the reflections had to match those performances. There were morphs in the middle of it that were very, very difficult, so there was the nuance of doing that.”

Even after completing this technical maze of work and collaboration, changes were still required. “Once we were done, Jon and Myron [Kerstein, editor of Wicked: For Good] changed two performances. We had about seven different plates that needed to be stitched into one, but they changed two of them because they thought the performances were better, so we had to redo the layout. We had all the assets, but then you have to resync everything so that it works.”

That meant the team on the ground needed to be extra vigilant and imaginative to keep the pieces where they needed to be. “The on-set video assist was so important because they needed to play it back and flop it [reverse the image],” Helman explains. “The floppiness of it was mind-boggling, and you have to make sure that you have plenty of imagination because there’s a lot of compositing that goes into the sequence that can’t be done on set in real time. You have to do all the thinking before.” 

Nevertheless, the reward is in the work itself. “It took two years to do, but it was really satisfying,” Helman concludes. “It’s one of the reasons why I love visual effects. It’s that satisfaction, and I realize how lucky I am to have a job that is so creative, because a lot of people don’t.”

(Credit: ILM & Universal).

Read more about ILM’s work on Wicked here on ILM.com.

Mark Newbold has written for Star Wars Insider magazine since 2006, ILM.com, Skysound.com, and news site FanthaTracks.com, having previously contributed to StarWars.com and StarTrek.com. He is a 4-time Star Wars Celebration Podcast Stage host, podcasting for over 20 years, and has been involved in websites since 1996. You can find this Hoopy frood @Prefect_Timing.

One of ILM’s first visual effects supervisors looks back at the film’s mix of practical methods and revolutionary digital effects 40 years later.

By Amy Richau

(Credit: ILM & Paramount).

“The game is afoot!”

In 1985, director Barry Levinson and writer Chris Columbus brought a new tale centered on a teenage Sherlock Holmes to audiences with Young Sherlock Holmes. The film’s effects team, led by visual effects supervisor Dennis Muren, ASC (Star Wars: A New Hope, 1977), and including Kit West (Raiders of the Lost Ark, 1981), John Ellis (The Goonies, 1985), and David Allen (Willow, 1988), was nominated for an Academy Award for Best Visual Effects the following year. The film holds a unique place in Industrial Light & Magic’s history. It includes an abundance of practical visual effects methods the company had developed and perfected over its first ten years, as well as the first fully digital character ever depicted in a feature film, a stained glass knight.

Young Sherlock Holmes arrived in theaters the same year as The Goonies, Cocoon, Explorers, and Back to the Future, when ILM was increasingly working on more projects outside of Lucasfilm. Muren recently spoke to ILM.com about the making of Holmes and its unique mix of old-school and groundbreaking visual effects.

Many of ILM’s biggest breakthroughs occurred during the making of epic blockbusters like Star Wars, The Abyss (1989), Terminator 2: Judgment Day (1991), and Jurassic Park (1993). However, because Young Sherlock Holmes was a smaller film, it became the perfect vehicle for testing onscreen photoreal computer graphics (CG) effects. Also key was ILM’s proximity at the time to a smaller group that ILM founder George Lucas was running a few hundred feet from its offices – the Lucasfilm Computer Division, a portion of which would later become Pixar Animation Studios.

Matte artist Chris Evans (left) and visual effects art director David Carson in the ILM Matte Department (Credit: ILM & Paramount).

The Height of Practical Effects

The story of Young Sherlock Holmes follows its teenage namesake (Nicholas Rowe) and newly arrived John Watson (Alan Cox) during a year at a London boarding school. The duo discovers a series of mysterious murders that lead them to a secret cult in Victorian London.

The snow that appears in many sequences of the film, which today might be created with CG effects, was accomplished with practical, old-school methods. Kit West, who was in charge of many of the film’s physical effects, needed the snow to both look real and leave no trace after shooting wrapped. West, who died in 2016, told Cinefex that despite filming on location in the United Kingdom at Eton College, Belvoir Castle, and Oxford University during the winter, all of the snow seen in the film was made by the production.

For snow on the ground, West’s team used 150 tons of dendritic salt. Snow on the buildings was made from over 100 tons of magnesium sulfate that had “a glint to it just as real snow,” said West. High-expansion foam that evaporated after about three hours was used in larger areas to mimic snow, while falling snow was made from a biodegradable insulation material that consisted of finely chopped paper, deployed by agricultural grain blowers.

One of the quirkier characters in the film, retired professor Rupert Waxflatter (Nigel Stock), spends much of his time on-screen trying to perfect his flying machine design. Surprisingly, none of the shots of the flying machine in Young Sherlock Holmes include miniatures. West recounted to Cinefex that an aviation company that worked on the film Those Magnificent Men in Their Flying Machines (1965) built a full-scale flying machine with a 25-foot wingspan, which production then tinkered with to make it functional.

Getting the flying machine in the air included two 120-foot cranes. “They were tower cranes,” West told Cinefex, “like those used for building skyscrapers, one on either end of the flight path. We had a stretch cable between them, and the machine was on runners. We attached all our own runners and rails, as well as the raising and lowering mechanisms.”

Concept art of an anthropomorphized pastry that attacks young Watson during a hallucination (Credit: ILM & Paramount).

A Whole New World

One thing was clear from the beginning with Young Sherlock Holmes: Muren and the creative team behind the film wanted the effects to look as photoreal as possible. A challenge that, in the mid-1980s, even Sherlock Holmes could appreciate.

While many of the Computer Division’s projects at the time were focused on animation, Muren wanted to see if their technology could make the jump to photoreal effects. As Muren tells ILM.com, “I just needed to see if this technology had the controls necessary to make something look 100% real or not.” The sequences in Holmes that needed heavy visual effects were mostly split into discrete sections where characters experienced hallucinations, giving Muren the opportunity to use different methods throughout the film.

CG effects had been used in films by ILM before, most notably the Genesis sequence in Star Trek II: The Wrath of Khan (1982), another collaboration with the Computer Division. But that sequence was intentionally not photoreal, and Muren knew from seeing other tests that in many cases reflections were too high, edges were too sharp, or the shots were missing essential shading and shadows to achieve a more realistic feel.

A CG test done by Triple-I ahead of Star Wars: The Empire Strikes Back (1980) involving five X-wings in flight increased Muren’s desire to play around with this emerging tech. “Triple-I’s test didn’t look photoreal, but they did a camera maneuver with the ship that there’s no way we could have done, and it looked pretty neat,” says Muren. “So it’s another temptation. This thing was out there, and I wanted to get it on a show and figure out how to do it.”

Muren decided to tackle the effects-heavy sequences as a bake-off, doing each one in a different way and seeing if a clear winner emerged. “With the stained glass man, that looks small enough, so let’s try CG, right?” Muren recalls. “If we find out in two months it’s not working, we can back off and do it another way.” For other hallucination sequences, Muren planned to use rod puppets in front of a blue screen and utilize Go-Motion with motion blurs.

Modelmaker Charlie Bailey creates an armature for one of the harpy puppets (Credit: ILM & Paramount).

Bringing Hallucinations to Life

The hallucinations in the film result from poisonous darts the cult’s leader, Professor Rathe (Anthony Higgins), uses as he seeks revenge against enemies from his past.

In one hallucination sequence that opens the film, an accountant, Bentley Bobster (Patrick Newell), sees his pheasant dinner attack him in a restaurant. After retreating to his home, Bobster sees the serpent handles on his coatrack turn into actual snakes that wrap around him and bite at his face. After the lamps in the room appear to start spitting out fire, Bobster leaps from his window to escape the flames.

In other hallucination sequences later in the film, Professor Waxflatter is attacked by harpy statues in an antique store. His niece, Elizabeth Hardy (Sophie Ward), finds herself fighting off skeletons at the bottom of a grave. Cameraman Michael Owens handled the motion-control programming and lighting for the harpy sequence. The animation was created by Harry Walton with the puppets primarily made by Tom St. Amand.

A harpy puppet is photographed by a motion-control camera (Credit: ILM & Paramount).

David Allen supervised the startling hallucination that young Watson experiences in a cemetery, which manages to be equally hilarious and disturbing. After being shot with a toxic dart, the ever-peckish Watson sees a wall of pastries. After he grabs one to eat, it comes to life in his hands and wraps a vine around him, knocking him to the ground. The other pastries soon leap off the shelves and start shoving whipped cream into his mouth.

Muren directed Allen and his crew to use rod puppets to bring these pesky, chaotic, and downright naughty desserts to life. The individual puppets were made of rubber and were approximately eight inches high. Each puppet had rods coming out of its elbows, torso, head, and legs, with three or four puppeteers moving them in unison. Notes Muren, “Each element was shot in front of a blue screen, so when we combined them, twelve pastries would be in the same shot.” Since each puppet was shot separately, it took two to three days to shoot the eight to twelve pastries that would appear in each shot with Watson.

According to Muren, the pastry sequence in Holmes is a throwback of sorts to the mouse puppet Topo Gigio, who was manipulated by black rods in front of a black background in the early days of television. “It’s all how you angle it,” says Muren, “how you frame the shot. If you shoot the wrong way, you can have a rod go in front of the carrier’s face. So all the performances have to be manipulated to make sure the rods don’t go in front of the figures, or else you’ll see this black thing that will tip off audiences. It’s not using Go-Motion. It was all done by hand and mostly at real speed. I think at times we slowed it down to make it look a little more staccato from what the puppeteers could do. Adding a little more whimsy to it.”

Before sending the shots to the rotoscope department to remove the rods and the puppeteers, they stacked black-and-white footage of the multiple puppets and viewed the scene on a Moviola to make sure the performance had worked out as expected. Shots then went to optical for matting work and printing. “It was complicated. It’s not against black like the spaceships in Star Wars, so it was pretty difficult stuff,” adds Muren.

The ILM team puppeteers one of the anthropomorphized pastries (Credit: ILM & Paramount).

Six Months for Seven Shots

The Lucasfilm Computer Division, via its graphics group, had previously created a terraforming planet simulation, better known as the “Genesis demo sequence,” for The Wrath of Khan and a CG spinning hologram of the Death Star in Star Wars: Return of the Jedi (1983).

Muren went into the stained glass knight sequence – where a knight jumps out of a stained glass window in a church and walks towards a priest experiencing a hallucination – knowing that creating it digitally might not work. They had to have a backup. And Muren had to sort out how a walking CG character might look. “Should it look like the knight is a walking, full-size, flat glass figure, simply cut out from the window? That didn’t seem very threatening and too literal for a nightmarish hallucination. What if it wasn’t flat but a man-sized three-dimensional glass figure of the knight? Maybe. We also tried some other ideas but nothing really popped.”

The one design that did pop came from Muren’s wife, Zara, who suggested that the knight could jump out of the window in its many individual glass pieces that magically reassemble without touching each other when they land, making something like a hanging mobile but without the strings. Each of the pieces could twist and turn to make up the knight’s figure, which could be moved and animated as one menacing figure.

Eben Ostby (left) and John Lasseter of the Lucasfilm Computer Division ready a lighting test of practical stained glass samples, which were used as reference for the CG knight (Credit: ILM & Paramount).

To make the knight even more menacing, Muren asked the visual effects artists to make each piece of glass an inch thick with sharp, jagged edges. Some of the pieces were bowed in the middle, their convex sides pushing out from behind, so they domed toward the priest, making the figure appear more aggressive. “Everything in movies is feelings,” notes Muren. “And if I didn’t feel it, and the audience didn’t feel it, then you’re just telling a story, and you might as well be doing it by telephone.”

This was all done before shading and motion blur in CG shots were the norm, and Muren leaned on the fact that the knight was a hallucination, so it didn’t have to be as real-looking as ships flying through space. The seven shots of the knight took about six months to complete and included some of the first digital composites.

“George’s graphics group had been making an input-output scanner as a prototype,” says Muren, “and that was so troublesome because it was so cutting-edge that it would often break down unexpectedly. I think out of every input scan, it was 10 or more times before it would make it through as few as 120 frames.”

The breakthrough laser film scanner was pioneered by David DiFrancesco and the Lucasfilm Computer Division and was later used by ILM on its earliest CG productions (Credit: ILM & Lucasfilm).

The entire knight sequence lasts less than a minute in the finished film. It starts with a wide shot showing the church’s stained glass window bowing a bit before the knight breaks out and lands on the ground. While many traditional matte paintings made with oil paints were used in Young Sherlock Holmes to recreate exteriors of Victorian London and a pyramid temple, for the opening shot of the knight sequence, matte artist Chris Evans created the first CG image used as a film background. “I remember,” says Muren, “it took him a really long time to do it because the tools were so hard to use. The paint program was in existence, but it was very slow to use, to be able to paint and get the brush strokes right.”

After breaking out of the window, the knight’s 100-plus pieces reassemble as he lands on his feet, holding up a large sword. The next few shots depict the knight walking menacingly towards the priest. As the knight walks past the camera, audiences can see through the backside of the knight’s glass.

“It was all shot very traditionally,” explains Muren. “I shot a lot of plate backgrounds of the church.” In addition to footage of the priest, the location also had several candles and mist. When Muren returned from shooting, he still wasn’t sure exactly how they were going to pull this off or if it was going to work at all. “It could all hit a limit where the blacks or the whites never match. There were all sorts of things that could go wrong. I didn’t know what was going to work and what wasn’t. So I shot for any technique we were going to use.”

The next step in the process was getting the digital technicians to constrain their tools to what the eye sees on film. “A lot of what’s made for software manipulation, whether it’s brightness, camera movements, or distance, goes to infinity,” says Muren. “So part of the process is constraining it down to what film records. We don’t want to go above or below what film records as black and white, even though the software could go beyond that. When it is constrained to the world of photography, then I can start to understand it again.”
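The constraint Muren describes amounts to clamping digital values into the range film can actually record. A minimal sketch, assuming a normalized brightness scale where the names `constrain_to_film`, `black_point`, and `white_point` are illustrative rather than anything from ILM's actual tools:

```python
def constrain_to_film(value, black_point=0.0, white_point=1.0):
    """Clamp a digital brightness value into the range film records.

    Software values can run to infinity in either direction;
    pinning them to photographic limits keeps digital elements
    consistent with the live-action plate they composite into.
    """
    return max(black_point, min(white_point, value))

# Values beyond the film range are pinned to its limits.
print(constrain_to_film(1.7))   # brighter than film's white -> clamped to 1.0
print(constrain_to_film(-0.3))  # darker than film's black -> clamped to 0.0
print(constrain_to_film(0.5))   # within range -> unchanged
```

The same idea applies to any parameter the software leaves unbounded: constrain it to what the camera could have photographed, and the result starts to behave like the rest of the footage.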

Muren and his team also “cheated” what audiences saw through the glass of the knight at times. “What you see through the glass, let’s say a yellow piece of the knight, is brightening up the color that’s on the glass, not what you would really see if you held up all those color pieces where parts of the background of a yellow piece were blue and yellow. That would appear grey, which would take you out of the drama of the scene. So the whole transmission through the glass, what you see on the other side, is black and white. You can’t tell because it’s got this yellow, but it’s a cheat, just black and white to light it up. We did a lot of that later in The Abyss with the water snake, all the refractions in the rooms, we cheated all the way through.”

An animation pass of the stained glass knight seen as a wireframe (Credit: ILM & Paramount).
The final composite (Credit: ILM & Paramount).

All About the Blur

A key element to achieving realism in the stained glass knight sequence was understanding the importance of motion blur, where moving objects captured at 24 frames per second appear blurred on-screen. The problem was that at the time, ILM had yet to develop the ability to digitally render blurs. “We’re used to what those blurs look like,” says Muren. “They make things look fluid. That’s very important for an effect to look real because the rest of the movie has got that in it. I didn’t want the stained glass knight to look like it came from ILM, that it was stuck onto the background.”

To help achieve the blur effect, every frame in the knight sequence was rendered nine times in slightly different positions. As the render time in 1985 was so long, one primary frame would be rendered at a higher resolution than the rest to save time. The result was a blur made up of a number of static pictures. “So you put them all together, and you’re doing this 24 times a second, and each of these blurs has eight pictures in it that are kind of similar, but some are weaker on the outside, and some are strong in the middle – then it all looks like a normal blur.”
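The technique Muren describes is what would now be called accumulation motion blur: several renders taken at slightly different moments within one frame's exposure, combined with weights that favor the central sample. A toy sketch of that idea, where the linear `position` function and the function names are hypothetical stand-ins for a real renderer:

```python
def accumulate_blur(render, times, weights=None):
    """Approximate motion blur by taking a weighted average of
    renders sampled at slightly different sub-frame moments.

    With uniform weights this is a plain average; weighting the
    middle samples more heavily sharpens the blur's core, as the
    artists describe.
    """
    if weights is None:
        weights = [1.0] * len(times)
    total = sum(weights)
    return sum(w * render(t) for w, t in zip(weights, times)) / total


def position(t):
    # Hypothetical motion: an object sliding linearly across
    # the exposure interval [0, 1].
    return 10.0 * t


samples = [i / 8 for i in range(9)]  # nine sub-frame sample times
blurred = accumulate_blur(position, samples)
print(blurred)  # the nine positions average out to the midpoint, 5.0
```

In 1985 each of those nine renders was expensive, hence the trick of rendering one primary frame at higher resolution than the rest; in a modern renderer the same smearing is done by distributing rays in time rather than by re-rendering whole frames.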

This experience pushed Muren and the Lucasfilm Computer Division to learn more and create the tools to execute their vision for the finished shots. “It was an introduction to them and to me,” explains Muren, “about what you could do. Motion blur, overexposure, underexposure, tracking or hanging the camera around. I hadn’t really thought about how you have to track the camera with the background. For a camera guy like me, who understands filmmaking technically, I could go in there and say, ‘Can we get this tool?’ ‘We need that one too.’ And they would 99% or 100% of the time come up with it in either hours, or they already had it, and they just adjusted something, or they could write something for it, within a few days.”

(Credit: ILM & Paramount).

A Wealth of New Tricks and Tools

It’s almost impossible to list all of the innovations and challenges the Young Sherlock Holmes effects crew faced during postproduction. The film not only includes the first CG character but also broke ground in developing digital matte paintings and digital compositing. In order for ILM artists to match camera movements from the live-action set into a computer’s 3D space, they projected footage shot on location in England with gridlines over it onto a computer screen. A new preview system gave creators the chance to work with a simple black-and-white wireframe of an image, so they didn’t have to wait for an image to be completely rendered to continue working on the shot.

To record the computer animation back to film, a laser scanner was used that could only print approximately one frame a minute, so each second of footage would take 24 minutes to complete. “I don’t think I rendered anything at 4K or even scanned it out because it was just taking too long,” recalls Muren. “We just did everything at either 1 or 2K. At least the tool was able to change and wasn’t locked into 4K, or we’d still be working on the film today.”

Among the most challenging shots to finish in the film was a panning shot of the knight coming toward the audience. “I think John [Lasseter] came up with the idea of panning the camera,” says Muren. “I didn’t even know if we could match the camera’s pan in the computer. When we shot it, I had somebody walk by and the operator followed as a reference. Then they shot the actual plate without the person in there. It took a while to get that, but it wasn’t hard once we figured out we could do it. It was somebody trying to track it manually every frame in 3D space because we didn’t want the stained glass knight to be locked into a candle that’s seven feet farther back. It had to be locked into them, closer to the camera.”

The final shot was a side view of the priest and the knight raising his sword above his head in a threatening way. Muren asked engineer Bill Reeves if they could add a glint of light to the sword blade for a dramatic end to the shot. “They didn’t know how to do that, how to put a light to reflect a certain thing, but they had all of the spatial information.” Muren suggested they track the shot backward, look at where the camera was, the angle of the sword, and then put a digital light there. “No one’s going to know that you cheated that light and it didn’t take a lot of time,” explains Muren. “That’s what we always do in moviemaking. What you care about is what the camera sees.”

Shortly after Holmes hit theaters, the Lucasfilm Computer Division was spun off into two pieces – one half funded by Apple co-founder Steve Jobs as Pixar, the other half as the digital editing company known as DroidWorks. In Young Sherlock Holmes’s 1 hour and 49 minute running time, ILM artists used just about every tool they had access to at the time, including a few newly invented ones. The seeds planted during their effects work would pay off in the ensuing years with a computer-generated water creature in The Abyss, the T-1000 in Terminator 2: Judgment Day, and the dinosaurs in Jurassic Park, the latter directed by one of Holmes’s executive producers, Steven Spielberg.

Amy Richau is a freelance writer and editor with a background in film preservation. She’s the author of several pop culture reference books including Star Wars Timelines, LEGO Marvel Visual Dictionary, and Star Wars: The Phantom Menace: A Visual Archive. She is also the founder of the 365 Star Wars Women Project, which includes over 90 interviews with women who have worked on Star Wars productions. Find her on Bluesky or Instagram.

The layered shading system has been the standard at ILM for many years as it continues to impact the wider visual effects and animation industries.

By Lucas O. Seastrom

(Credit: Academy of Motion Picture Arts and Sciences).

2026 marks Industrial Light & Magic’s 39th Scientific and Technical Award from the Academy of Motion Picture Arts and Sciences. The recipient innovation “Lama” – its name derived from the first two letters of each word in the term layered materials – is the first modular, production-ready, commercially available layered shading system of its kind in the visual effects and animation industries. Recognized on the award are Lama’s lead originators, including former ILM lookdev supervisor Jonathan Moulin, and former ILM rendering engineers Vincent Dedun and Emmanuel Turquin.

The concept for Lama first emerged ten years ago as a means to solve what had become a common problem with shading and rendering computer graphics imagery. A typical layered material network helps to define how light interacts with a digital surface like metal, wood, or skin. Light can reflect off a surface, but it can also refract between multiple, differing layers. Until 2016, material systems were commonly made specifically for the types of imagery in a given production. They were rigidly designed and often difficult to share between different productions. This inflexibility made it challenging for artists to adjust their work quickly while still maintaining the realistic dimensions of their images.

“In the early days of rendering – i.e. writing shaders to make objects look like real objects that are in fact CG objects – we had purpose-built shaders,” explains principal R&D engineer André Mazzone, who has been involved with Lama since its inception and currently manages the product. “There were shaders for glass, skin, metal and everything else. It was insular and isolated. Then there was a period when we developed general purpose shaders that would combine multiple properties. In certain cases, some parts of an asset might be clear but others might be opaque. For example with an eyeball, there’s a white, cloudy area but then there’s a transition into a transparent region where the lens is focusing light onto the retina. This blending needs to be smooth, so we require an expressive shader that comprises all of these behaviors. General purpose shaders were fixed in their designs as templates. If we wanted additional behavior, we had to jump in and code it. On Rango, they needed more dirt controls, so we had to splice in new pieces of code to make upgrades. That’s how it used to work.”

To eliminate this often cumbersome process, Lama was envisioned as a modular system where materials are layered and combined without the need for customized code. It’s a simple, lean, and artist-friendly method that ensures both physical accuracy and creative flexibility.

“The way Lama decomposes material responses is akin to the historical bespoke shader solutions for different materials, but the glue is now something that an artist can apply instead of an engineer,” explains Mazzone. “The engineering job is to provide all of the building blocks that might be needed, and the artists can make new additions themselves. This is Lama’s true strength. It employs an infrastructure that conserves energy across material layers. We had experimented with this in the past, but not in a way that allowed general arbitrary layering. This commitment to automatic physically-inspired energy conservation while rearranging components is what has made this tool so flexible and useful.”

Starting as an incubator project at ILM’s London studio in 2016, by mid-2017 Lama was already being used in productions. Disney’s Aladdin (2019) was the first to receive full Lama deployment to great success, and later, Terminator: Dark Fate (2019) resulted in the tool’s deployment throughout the wider network of ILM’s studios. “Any film that includes CG elements from our main-line pipeline – hero creatures, crowds and environments – has been 100% powered by Lama since 2019,” Mazzone notes. “That includes episodic series like The Mandalorian, Skeleton Crew, Andor, and many of the Marvel shows. All main-line assets at ILM now go through Lama.”

2019’s Aladdin was ILM’s first production to fully integrate Lama (Credit: Disney).

However, this was only the beginning of Lama’s impact. At the same time that ILM fully integrated the system, it began sharing Lama’s possibilities with sister companies Pixar and Walt Disney Animation Studios. Pixar was so taken with it that they chose to adapt the tool into their iconic RenderMan product. Lama premiered with RenderMan 24 in 2021, and since then, studios across the industry have benefited from this ILM-grown innovation, including Laika, DNEG, and MPC, among others. Pixar’s newest feature Hoppers is just one example, wherein Lama’s workflow for hair, fur, and feathers was utilized to great success.

“Most importantly, Lama shifts the artist’s mindset,” says Mazzone. “Materials are now no longer abstract parameter blends, but substrates and layers, much closer to their real-world counterparts. They can be developed independently and combined later, improving efficiency and giving artists and engineers a clear, shared language. This balance of simplicity at the surface and complexity through composition makes Lama both approachable for artists and robust in production, enabling faster iteration and higher quality outcomes.”

Congratulations to Jonathan Moulin, Vincent Dedun, and Emmanuel Turquin on their Scientific and Technical Award, and to everyone at ILM who has supported Lama’s continued development, including engineering lead and current product owner André Mazzone, former rendering engineer Henrik Dahlberg, rendering engineers Sam Cordingley, Alain Hostettler, Chong Deng and Khang Ngo, and lookdev supervisors Hugo Debat-Burkarth and Joseph Szokoli.

See the full list of Scientific and Technical Award Winners for 2026.

To learn more about Lama, visit RenderMan’s website.

Lucas O. Seastrom is the editor of ILM.com and Skysound.com, as well as a contributing writer and historian for Lucasfilm.

Behind every complex shot is a network of people supporting, teaching, coordinating, and looking out for one another. Drawing on perspectives from animation, production, training, and talent management, this article looks beyond the work on-screen to explore how everyday behavior, collaboration, and care shape life inside ILM’s Vancouver studio.

By Jamie Benning

(Credit: David Dovell & ILM).

When you arrive at Industrial Light & Magic’s Vancouver office, located in a unique skyscraper known as “The Stack,” the lobby displays creatures, props, and costumes tied to the company’s history, while the view beyond the windows reveals one of the most distinctive environments in the ILM network. Glass, steel, ocean light, and mountain silhouettes frame a workspace where some of the most technically complex and creative imagery in modern filmmaking is created. The Vancouver studio is shaped by its artists, influenced by its location, and sustained by a culture built on collaboration and shared purpose.

This portrait of ILM’s Vancouver studio emerges from conversations with people working across very different roles inside the studio: senior visual effects trainer Matt Leonard; lead animator Wesley Chandler; senior talent management coordinator Riya Ramani; and visual effects production coordinator William Wu. Their perspectives are reinforced by insights from Toban Taplin, executive in charge at the Vancouver studio, whose role bridges creative leadership, operations, and long-term studio strategy. Across these conversations, a consistent theme emerges. The Vancouver studio is a place defined by people who support each other, a city that inspires them, and a culture that reflects the best of ILM’s past and present.

Leonard’s role as senior visual effects trainer places him at the center of artist support, sharing knowledge across the studio as tools, workflows, and expectations continue to evolve. As a lead animator, Chandler works directly on performance and motion, guiding teams through some of the most creatively demanding sequences on ILM’s projects. Ramani, as senior talent management coordinator, sits at the intersection of people, logistics, and wellbeing, helping ensure that crews are supported not just creatively but sustainably. From the production side, Wu’s role as visual effects production coordinator focuses on communication and continuity, tracking work as it moves between departments and making sure artists have what they need to do their jobs effectively.

People


The Vancouver team consistently describes an environment shaped by openness, humility, and care. Matt Leonard, who works across ILM’s global studios, sees this as one of the company’s defining characteristics.

“That was one of the things that really drew me to ILM. From the outset, it felt like a very humble group of people. Having been here nine years, it still feels like there are no egos at all, which is staggering when you think about the calibre of people who work here.”

That absence of ego shows up not as a slogan but in everyday interactions. Production staff move between desks, checking in on shot progress. Artists gather for dailies, where work is reviewed openly, with feedback offered constructively by all present. Trainers circulate through departments answering highly specific technical questions. Talent managers quietly track crew wellbeing alongside schedules and contracts. The studio functions as an interlocking system, where each role supports the others.

That sense of care is reflected not only in how people are supported during difficult moments but also in how their time and energy are respected between projects. Wesley Chandler recalls how that approach stood out to him early on.

“I really loved how artist-focused ILM tries to be. That stood out to me quite a bit. I was finishing a project, and my talent manager at the time asked me, ‘Do you want to take some time off after this?’ Then I asked, ‘What do you mean?’ Usually, in visual effects, you go from one very busy project straight to the next. The idea that people could take time off if they wanted to really stood out to me. It felt like they genuinely wanted to make sure artists were well taken care of.”

For Toban Taplin, that environment is not accidental. His own background as an effects artist continues to shape how he thinks about leadership and studio culture.

“When I look back at my time as an artist, the places where I did my best work were the ones where the environment was good, and the people around you were all pulling in the same direction. The challenges on a show don’t feel quite so daunting when you’re sitting next to people you get on with, and feel supported by. A big part of my job is helping to create that environment so people can do their best work.”

For many, that sense of support extends far beyond project deadlines and delivery schedules. Chandler joined the Vancouver team when the industry itself was undergoing significant change, and he experienced that culture at a deeply personal level. “I’m incredibly grateful for how ILM supported my family and me, including giving us time to process a loss in the family. It really felt like they cared about my well-being as a person, not just what I could produce at work.”

That feeling of being valued as a person, not just as a contributor to a shot or a sequence, echoes across departments. Riya Ramani experienced that sense of belonging so strongly that she returned to ILM after a period working abroad. “My journey through different studios eventually led me back to ILM in Vancouver, which I now consider my ohana. What brought me back wasn’t just the work, but the people and the genuine sense of community that makes this place so valuable.”

Even those at earlier stages in their ILM careers feel actively encouraged to participate, learn, and grow. Staff describe an environment where questions are welcomed and curiosity is rewarded, creating a studio culture that supports learning alongside delivery.

Across every role, from production through artists, training, and talent management, the language is consistent. People feel supported, listened to, and encouraged to ask questions. It is a culture built as much on kindness as it is on craft, where emotional intelligence is valued alongside technical mastery.

While the work on-screen often draws the public spotlight, the Vancouver studio is sustained by a much wider network of expertise. Production, talent management, training, facilities, IT, and operations all work in parallel with the artists. Schedules are shaped, careers are guided, systems are maintained, and problems are solved quietly in the background.

Taplin recalls a message forwarded to him by a manager, written by an artist after an ordinary day at work. “They talked about coming into the studio, having breakfast that morning, then later picking up their production gift, and finding hot chocolate and donuts waiting upstairs. They were working on a Star Wars project, surrounded by memorabilia, and they said it felt like they were living their best life that day. Being able to share that feedback with the teams who created that experience is really important. It helps people see that what they’re doing matters.”


Place

Vancouver’s geography is central to the experience of working here. The proximity of mountains, forest trails, and the Pacific Ocean offers people across the studio a balance that many describe as both grounding and energizing. It is a city where an intense day at the workstation can be followed by a swim, a hike, or an evening on the beach. The natural world sits unusually close to the digital one.

Matt Leonard explains the appeal of the surrounding environment. “Within 10 or 20 minutes, you can cross a bridge into the North Shore and suddenly be in the mountains, or head the other way and be on the beach.” For Chandler and his family, that access to the outdoors is part of daily life. “My wife, daughter, and I love the outdoors! There are so many trails around here. We love to do a lot of hiking and camping!”

For William Wu, the character of the city runs deeper than its landscape. Vancouver’s multicultural identity shaped his upbringing and continues to shape his experience at ILM. “For me, Vancouver is home. Growing up in an Asian household, I was never tied to just one culture or one community. I was always surrounded by different cultures, and that became normal. People here are curious about what you appreciate in your culture, what you do for holidays, what your day-to-day life looks like. There’s a real willingness to learn and be open, and people are very kind and respectful. Vancouver is incredibly rich and diverse, and it doesn’t feel like anywhere else in the world.”

Taplin’s own relationship with the city began as a short-term experiment that became something more permanent. “We moved here on a whim, thinking we’d try it for a year. What made us stay was how accessible everything is. I live on the North Shore now, and within 15 or 20 minutes, you can be on a mountain trail, skiing in the evening, or hiking above the clouds. Even on the many grey, rainy days Vancouver has, you can drive up into the mountains, and suddenly you’re above it all, in the sunshine, with snow all around you. That ability to escape so quickly is pretty amazing. You’re immersed in nature all the time, and that’s incredibly inspiring.”

Vancouver has fully embraced its identity as a production city, with everything from major studio features to independent films and television series shooting across the region. Ramani notices that industry presence almost daily. “Working full-time at the office has its perks – our window overlooks Melville Street, where my colleagues and I have had a blast watching camera crews filming outside The Stack.”

That proximity to live production and nature feeds directly into the studio’s creative energy. Forests become reference, shifting Pacific light influences how people observe color and atmosphere, and rain, mist, rock, and water subtly inform the textures seen on-screen. Vancouver is not just a place where ILM happens; it actively shapes how people here see and imagine.

Author Jamie Benning (left) chats with Matt Leonard (Credit: David Dovell & ILM).

Culture

ILM’s global culture is rooted in a long tradition of collaboration, problem-solving, and shared creative ownership. The Vancouver office reflects that tradition, while adding its own local energy and character.

Training plays a central role in how ILM maintains that culture. Matt Leonard introduces new artists not only to the studio’s tools and workflows, but also to its history. “We run sessions on the history of ILM where we show images from the early days and talk about the people who built the studio. It helps new artists feel part of a much bigger story.”

Access to senior artists and long-time ILM innovators is another constant. Knowledge is not hoarded. It circulates. “You can talk to almost anyone in the company and say you’re struggling or ask how something works,” Leonard says. “People genuinely want to help.”

That openness is visible every day in Vancouver. Wu recalls moments when simple questions lead to unexpected insight, even on landmark films. “I remember someone sending out a question about Jurassic Park, and people who actually worked on the film replied with real details about how those shots were done. It really shows how open the culture is.”

The studio’s hybrid work pattern provides flexibility, but in-person collaboration remains important for many. The ability to sit alongside someone, sketch an idea, or solve a problem together still carries enormous creative value.

“Working from home has brought flexibility that people really value,” Wu explains. “But what being in the studio brings to the collective is different. When senior artists sit next to someone who hasn’t been in the industry for 20 years, that exchange is invaluable. On challenging projects, there’s a real sense of camaraderie that comes from being together.”

Chandler echoes that sentiment from a personal perspective. “For my mental health, I really value being around people. Working fully remote would be difficult for me.”

Ramani sees the impact in small, everyday moments. “I love the spontaneous hallway encounters; sometimes just bumping into a colleague leads to a quick conversation that resolves a challenge on the spot.”

The social culture reinforces those connections. Staff join art clubs, volleyball groups, foodie communities, Inktober challenges, and a wide range of employee resource groups. As Ramani puts it, “The clubs at ILM are definitely a highlight for me. We have a book club, a Pride ERG, a fashion club – there’s something for everyone, and it’s a joy to watch that community expand. It’s wonderful to see our diverse interests celebrated and getting to know my teammates through the things we love outside of our day jobs.”

For Wu, those communities also create everyday moments of creative exchange. “It’s really fun seeing colleagues share their drawings every day during projects like Inktober.”

Culture at ILM Vancouver does not live in policy documents. It lives in behaviour.

Benning chats with Riya Ramani (Credit: David Dovell & ILM).

Work and Innovation

Vancouver contributes to some of ILM’s most complex and ambitious projects. Artists describe an environment where technical advancement grows directly out of collaboration between departments and disciplines.

The studio is one of ILM’s five global studios, with work frequently moving between sites as projects evolve. That kind of collaboration demands clarity, trust, and a shared technical language. Vancouver’s location on the Pacific coast places it in close alignment with West Coast production while remaining deeply connected to each of the other ILM studios.

Matt Leonard offers a concise summary of the studio’s approach to problem-solving. “When a client has an impossible problem to solve, they often come to us. And I’ve never heard anyone here say, ‘We can’t do that.’”

Taplin points to a recent example where that mindset became tangible. “On Percy Jackson and the Olympians, we were being asked to move fast,” he says. “That meant building things locally, including building an ILM StageCraft LED volume and virtual production team, so the creative work could keep evolving. We were able to tap into the expertise from across ILM as a whole and create something new for our team here.”

He sees that approach as both an ILM hallmark and something the Vancouver studio has fully embraced: drawing on the wider global company while remaining agile enough to respond quickly as new challenges emerge. That mindset plays out through repeated cycles of iteration. Shots evolve through multiple versions. Tools are reshaped and rewritten in response to real production demands. Chandler recently saw how that same approach shaped the work on Avatar: Fire and Ash (2025). “We developed several new tools that allowed us to work much faster and saved animators from having to do things manually.”

From the production side, Wu sees innovation supported by communication and trust. “My job is to make sure people feel supported and that when work moves between departments, communication is clear.”

Innovation at ILM is rarely about sudden breakthroughs. It is about a steady accumulation. Small improvements layered over time. Systems shaped by people solving real, creative problems at scale.

Benning and William Wu (Credit: David Dovell & ILM).

Belonging to a Larger Story

Artists and production staff in Vancouver describe a strong sense of belonging to something bigger than any single show. They recognize both their individual contributions and their place within ILM’s wider history.

Ramani appreciates that the studio formally recognizes the work of every department. “It’s so rewarding to see ILM include the studio support teams in the credits. It reinforces the idea that no project is the result of just one department; it takes an entire community to reach the finish line.”

Leonard notes how quickly new employees begin to feel connected to that legacy. “Very quickly you start to feel like you’re part of something bigger, something that has a real legacy behind it.”

For Taplin, that sense of continuity is essential. “When you look at all the industry pioneers that are at ILM, all of these people that everyone looks up to started as juniors. They were given opportunities, allowed to try things, allowed to fail, and to build over time. It’s important that people here know they can follow that same trajectory. That this can be a place where you build a career, not just move from project to project.”

Wu became aware of the ILM way almost immediately. “Everyone I spoke to before joining said ILM was the best place to be. And once you’re here, you really understand why.”

Careers at ILM often unfold over many years, sometimes with people leaving and returning, carrying new skills back into the studio. That flow of experience continually refreshes the culture while preserving its core identity.

Wesley Chandler gestures to a familiar Star Wars character as Benning listens (Credit: David Dovell & ILM).

Looking to the Future

The Vancouver studio is shaped by its people, influenced by its environment, and grounded in a culture of shared learning and collaboration. Artists and staff describe a studio where support is real, questions are encouraged, failure is a part of reaching success, innovation grows from teamwork, and ILM’s long history remains a living part of everyday work.

Taplin sees Vancouver playing an increasingly important role in the studio’s future. “There’s so much change happening in the industry. We need to be at the front of that. The question for us is always what Vancouver can bring to the table that serves the wider studio, while also pushing something new forward.”

He is also clear about the importance of acknowledging every department. “I want to recognize all of the teams that contribute to what we do in Vancouver. People come in every day trying to make things a little bit better, to try something new, and to put ideas forward with the wider team in mind. It’s a huge lift that everyone does, and it’s what makes this a special place to be.”

The values that shaped ILM in its earliest years are clearly still present here. Today, those values are expressed through hybrid workflows, global collaboration, and evolving technology. Looking forward, they will be carried by the next generation of artists, coordinators, trainers, and managers who will shape whatever ILM becomes next.

In a city known for its natural beauty, diverse communities, and deep connection to filmmaking, ILM’s Vancouver studio continues to expand the company’s legacy across film, television, and emerging formats. It remains a place where people can build careers, push technology forward, and contribute to stories told around the world.

ILM’s Vancouver studio is located on the traditional, ancestral, and unceded territories of the Coast Salish Peoples, including the xʷməθkwəy̓əm (Musqueam), Skwxwú7mesh (Squamish), and Səl̓ílwətaʔ/Selilwitulh (Tsleil-Waututh) Nations. We thank all First Nations who have lived and worked on these territories from time immemorial.


Jamie Benning is a filmmaker, author, and podcaster with a lifelong passion for sci-fi and fantasy cinema. He hosts The Filmumentaries Podcast, featuring twice-monthly interviews with behind-the-scenes artists. Visit Filmumentaries.com or find him on X (@jamieswb) and @filmumentaries on Threads, Instagram, Facebook, and YouTube.

ILM visual effects supervisor Vincent Papaix and Nerfstudio creator Matt Tancik discuss their innovative approach to visual effects shot design.

By Lucas O. Seastrom

At this year’s HPA (Hollywood Professional Association) Awards for Technology & Innovation, Industrial Light & Magic and collaborator Nerfstudio took home a win in the Innovation in VFX, Virtual Production & Animation category. Embracing a new kind of open source toolset allowed ILM to recreate visual effects shots for Marvel’s 2025 series Ironheart with an efficiency that greatly outpaced established techniques. The key was “NeRF,” or neural radiance fields, a method that allows photorealistic 3D environments to be created from a sampling of real-world 2D photography.

(Credit: ILM & Marvel).

A Catalyst from Marvel’s Ironheart

ILM visual effects supervisor Vincent Papaix faced an interesting challenge with a handful of drone-based shots from Ironheart, wherein the series’ namesake flies over Chicago’s lakeside waterfront and river district. The fast-flying CG character had to be integrated with the live action plates shot on location. “They decided to film with the drones in a very slow way, thinking we could retime the footage,” Papaix explains to ILM.com. “Typically, you might retime at 200% or 300%, but in this case it was over 1,000%. The character’s movement needed to be very, very fast. Traffic would have to be replaced. When you’re filming at normal speed with the drone, you don’t get the sense of the micro-movement, but at high speed, you could see the high-frequency movements of the camera.”
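To make those retime percentages concrete, here is a hypothetical sketch of how a nearest-frame retime maps output frames back to source frames. This is purely illustrative (the function name and numbers are invented, not ILM’s pipeline), but it shows why a 1,000% retime is so punishing: only every tenth source frame survives, so camera jitter that is invisible at normal playback becomes glaring.

```python
# Hypothetical illustration of frame sampling under an extreme retime.
# A retime of 1,000% plays footage ten times faster, so output frame n
# maps back to source frame n * 10 (nearest-frame retime, no blending).

def retimed_source_frames(num_output_frames: int, speed_percent: float) -> list[int]:
    """Return the source-frame index each output frame samples from."""
    step = speed_percent / 100.0          # 1000% -> step of 10 source frames
    return [round(i * step) for i in range(num_output_frames)]

# A 300% retime keeps every third frame; a 1,000% retime keeps every tenth.
print(retimed_source_frames(5, 300))     # [0, 3, 6, 9, 12]
print(retimed_source_frames(5, 1000))    # [0, 10, 20, 30, 40]
```

With samples spaced that widely, any small camera movement between surviving frames reads as sudden, high-frequency motion in the retimed footage.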

The visual effects team needed to recreate the desired camera moves while maintaining the appropriate view of the live action background plate. Normally, they might attempt a 2D stabilization of the image, but in a case like this, the sense of depth, or parallax, made for difficulties in trying to stabilize both the foreground and the background at the same time. They considered recreating the entire world in CG, traditionally modeling, texturing, and shading every detailed aspect of the Chicago setting. But with an episodic production schedule, the necessary resources and time required would be prohibitive. 

Papaix decided to begin what he describes as a “pet project,” researching how NeRF models could be applied to visual effects work. At first there was no guarantee that his inquiries would yield results, but then he discovered Nerfstudio, an open source program that provided an end-to-end workflow for developing 3D environments from 2D photography. 

Nerfstudio creator Matt Tancik began his research in developing neural radiance fields as a PhD student at the University of California, Berkeley. “People wanted to experiment and see how much they could push this technology,” Tancik says. “It became obvious that there was a desire for this research to make it into the industry field. But there wasn’t an easy way to do it because it was kind of obtuse research code at the time. The Nerfstudio project was about trying to see how we could wrap it up into something that looked more like a product, and fully open source, so that other people could start playing with it. 

“And most notably,” Tancik adds, “people could help build upon it. A lot of the research projects that we saw coming out of NeRF acted like modules attached to NeRF to make it better along one axis or another. It made sense to try to collaborate as much as possible. The Nerfstudio project was a step towards doing that, and that’s when Vincent and ILM started playing around with it.”

(Credit: ILM & Marvel).

The Function of “NeRFs”

But how exactly do neural radiance fields help empower artists like Papaix and his colleagues to work more efficiently? As Tancik explains, it’s a process that seeks to forego the traditional CG methods that involve the complex, often laborious craft of representing photorealistic imagery as meshes and triangles with applied textures and lighting. “All of that takes a significant amount of effort to make it photoreal, and in some cases, it’s almost impossible,” says Tancik. “That’s not for the lack of people trying to make these methods easier and easier. The goal of NeRF was to essentially see if we could use machine learning to accomplish the same thing. Instead of manually placing these triangles, can we have an algorithm construct these things from photos? So then the work becomes capturing many photos of a scene and converting them into a 3D representation.”

The result is a new approach to storing the corresponding data. Instead of triangles mapped within the CG model, NeRF uses individual points in space, each assigned a specific color. “When you look out into space, you’re shooting out into the scene and seeing what points you hit, and you’re noting which direction you’re hitting that point of space,” Tancik notes. “A single point in space, whether I’m looking at it one way or another, might look a little different. By describing the scene like this, it fits really nicely into optimization techniques that we can use to fit that to an image.”
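The point-plus-direction representation Tancik describes can be sketched as a toy function. This is purely illustrative — hard-coded rather than a trained neural network, and not Nerfstudio code — but it shows the interface a NeRF learns: a 3D position and a viewing direction go in, and a color and a density come out, with the color allowed to change depending on the direction the point is seen from.

```python
# Conceptual toy of what a neural radiance field stores (illustrative only,
# not Nerfstudio's implementation). In a real NeRF this function is a
# neural network optimized to fit photographs; here it is hard-coded to
# show the interface and the view dependence described above.
import math

def radiance_field(point, view_dir):
    """Return (rgb, density) for one point in space seen from one direction."""
    norm = math.sqrt(sum(c * c for c in view_dir))
    view_dir = [c / norm for c in view_dir]
    # Density: the point is "solid" inside a unit sphere at the origin.
    density = 1.0 if math.sqrt(sum(c * c for c in point)) < 1.0 else 0.0
    # Color: a simple view-dependent tint -- the same point can look
    # different depending on the direction it is seen from (a highlight).
    base = (0.8, 0.2, 0.2)                 # reddish surface color
    highlight = max(view_dir[2], 0.0)      # brighter when seen from +z
    rgb = tuple(min(c + 0.2 * highlight, 1.0) for c in base)
    return rgb, density

point = (0.2, 0.1, 0.3)
rgb_front, _ = radiance_field(point, (0.0, 0.0, 1.0))
rgb_side, _ = radiance_field(point, (1.0, 0.0, 0.0))
# Same point, different view directions -> different colors.
```

An actual NeRF then renders an image by marching rays through this field, accumulating color weighted by density, and adjusting the network’s weights until rendered views match the captured photographs.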

ILM’s Practical Application

Working with former ILM research engineer Sirak Ghebremusse and former ILM pipeline technical director Kevin Rakes, Papaix oversaw the effort to adapt Nerfstudio’s functionality for visual effects. Both a new encoder and decoder were required to help translate information between Nerfstudio and ILM’s other tools, which ensured the team’s ability to maintain a certain amount of precision with color and image range. 

Similarly, the team needed to process the environments into real-world imagery that could be measured in feet, so Tancik himself created a new file format to aid the transition. That also required the development of new “gizmos” – reusable groups of nodes – within the compositing software Nuke, which allowed the artists to move seamlessly back and forth between the Nerfstudio render and the final effects work.

“We can work with standard layout and animation in feet, then go into NeRF, import any camera we want, render it through Nerfstudio, and bring that camera move back with us into the Maya or Zeno file,” Papaix notes. “It was key to have that ability.”
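As a hypothetical illustration of the real-world-units problem (the helper names here are invented, and this is not ILM’s or Nerfstudio’s actual tooling): a NeRF reconstruction comes out at an arbitrary scale, so one common way to pin it to feet is to use a known real-world distance, such as the distance between two surveyed camera positions.

```python
# Hypothetical sketch of scale calibration for a NeRF reconstruction
# (illustrative only): a known real-world distance between two camera
# positions yields the factor converting scene units into feet.
import math

def scene_to_feet_scale(cam_a, cam_b, known_distance_feet):
    """Scale factor mapping arbitrary NeRF scene units to feet."""
    scene_dist = math.dist(cam_a, cam_b)
    return known_distance_feet / scene_dist

def to_feet(point, scale):
    """Convert a point from scene units to feet."""
    return tuple(c * scale for c in point)

# Two camera positions 0.5 scene units apart, surveyed 25 feet apart:
scale = scene_to_feet_scale((0.0, 0.0, 0.0), (0.5, 0.0, 0.0), 25.0)
print(scale)                             # 50.0 -> one scene unit is 50 feet
print(to_feet((0.1, 0.2, 0.0), scale))   # roughly (5.0, 10.0, 0.0) feet
```

Once every position can be expressed in feet, cameras animated in Maya or Zeno and cameras rendered through Nerfstudio can describe the same physical space, which is the round trip Papaix describes above.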

(Credit: ILM & Marvel).

As the process evolved, ILM was able to apply these new capabilities in multiple ways. They could stitch a seamless transition between two separate camera views over the water, one captured by a drone and another from a boat, all without the need to create a new CG environment. Entire objects, such as street traffic on a bridge, could be removed. And because they were able to maintain parity between their visual effects environment and the Nerfstudio-rendered space, they could develop entirely new camera paths at the request of the filmmakers.

“We could create a new smooth camera move, basically art direct the exact move that we wanted, and then show that to the director,” Papaix says. “They were very happy. They didn’t think it was possible to change a camera move using an original plate, but we did. They said it was like magic to them. People were curious. Did we project the plate onto geometry? Did we model the whole city? No, there’s no modeling or anything.”

Now with Greater Accessibility

Papaix is keen to note that at the time ILM collaborated with Nerfstudio – in 2022 and ’23 – these methods were still considered experimental. “Very few people were putting this kind of stuff into production. There was a lot of research taking place, but Matt showed how this could be useful, and ILM took it and made it production-ready.”

Tancik adds, “I’ve always been interested in the visual side of things, and hoped to get to that point, but didn’t know if the concept would ever actually make it there. It was not an easy thing to run. You needed a lot of computing power and GPUs. It didn’t feel like it was there yet to be useful in industry or productions. So watching Vincent and ILM put it into practice was really fun.”

Today, the use of neural radiance fields, along with a similar outgrowth method known as Gaussian splatting, continues to rise as computing power grows more efficient. “This was a science paper a few years ago, and now it’s making its way into all of the software that we use,” Papaix says.

“With the move to Gaussian splat, if I had to do those shots today, I could probably do it from start to finish in only a few days, compared to the months that it took before,” Papaix concludes. “At the time, it took about six months because it was more of a research project off and on, a side project. Now that we understand the tech, we can optimize things, and we can do things much faster. The tech improves so fast. We’re still in the early days of learning how these techniques will be applied.”

Watch the full demonstration reel:

Click here to read more about Nerfstudio.

See the full list of winners from the HPA Awards for Technology & Innovation.

Lucas O. Seastrom is the editor of ILM.com and Skysound.com, as well as a contributing writer and historian for Lucasfilm.

Visual effects artists from across Industrial Light & Magic are recognized for their work.

Members of the visual effects team after receiving their Emmy Award.

At the 4th Annual Children’s & Family Emmy Awards on March 2, 2026, Industrial Light & Magic’s team from Star Wars: Skeleton Crew won Outstanding Visual Effects for a Live Action Program. Additionally, ILM senior animator James Saunders was recognized for Outstanding Individual Achievement in Animation for Ultraman: Rising.

The recipients for Skeleton Crew included production visual effects supervisor John Knoll, production visual effects producers Abbigail Keller and Pablo Molles, animation supervisor Shawn Kelly, visual effects producer Nicole Matteson, virtual production supervisor Christopher Balog, and visual effects supervisors Jeff Capogreco, Bobo Skipper, Andy Walker, Joseph Kasparian, and Eddie Pasquarello.

ILM’s team joined fellow Skeleton Crew recipients from across Lucasfilm, including wins for Outstanding Young Teen Series, Outstanding Editing for a Young Teen Live Action Program, and Outstanding Sound Mixing and Sound Editing for a Live Action Program.

Congratulations to all of our Emmy winners! Read the full list here.

Senior animator James Saunders wins for Ultraman: Rising.

Read more about Star Wars: Skeleton Crew and Ultraman: Rising here on ILM.com:

‘Star Wars: Skeleton Crew’: ILM’s Visual Effects Treasure Chest, From At Attin to Starport Borgo

‘Star Wars: Skeleton Crew’: ILM’s Visual Effects Adventure from the Observatory Moon to Lanupa and back to At Attin

Real-Time Visual Effects: Behind-the-Scenes of ILM’s Cutting-Edge Contributions to ‘Star Wars: Skeleton Crew’

Netflix’s ‘Ultraman: Rising’: Building new worlds for the ultimate icon

ILM Evolutions: Animation, ‘Ultraman: Rising’ and ‘Transformers One’

Visual effects supervisors Charmaine Chan, Andrew Roberts, and Simone Coco share their experiences working together on the Oscar-nominated Jurassic World Rebirth.

By Amy Richau

Bringing dinosaurs to the screen for Jurassic World Rebirth (2025) required a true team effort with multiple ILM visual effects supervisors collaborating in teams around the world. While some films only require one visual effects supervisor to see the production through from start to finish, other films are just – bigger. Backing up production visual effects supervisor David Vickery, who recently talked about his work on Rebirth with ILM, were multiple visual effects supervisors from ILM, as well as others from partner studios like Midas VFX and ILP.

Charmaine Chan, Andrew Roberts, and Simone Coco talked with ILM.com about wrangling a herd of dinosaurs (both familiar and new to audiences), coordinating their individual teams’ work into one cohesive film, and the pressure of working on such a legendary franchise.

(Credit: ILM & Universal).

Getting the call

Working on an installment in the Jurassic film series was a full-circle moment for Chan, Roberts, and Coco, who all pointed to the original Jurassic Park (1993) as a moment that kick-started their careers in visual effects.

In the 1990s, while still working in the games industry, Roberts attended a talk from Jurassic Park’s CG supervisor Stefen Fangmeier in London. Hearing Fangmeier break down the work on Jurassic helped Roberts make the connection from his current work to a potential future in visual effects. “Seeing the same techniques of modeling, animation, and compositing that we were using in the games industry was the initial spark,” Roberts tells ILM.com. “That was an inflection point for me, where I started to pursue working in TV and film. The movie, as well as understanding the work that went into it, completely changed my life and my career and was the reason that I started to pursue computer graphics.”

The scene that stood out to Coco the most from the original film involved the iconic Tyrannosaurus rex. “It was so real and scary,” he notes. “I remember the T. rex screaming in the rain and shaking the glass and everything in the car.” The realism of that scene inspired Coco to better understand how the visual effects were created in other scenes in the film, eventually leading to his work at ILM, starting with projects like Napoleon (2023) and Mission: Impossible – Dead Reckoning Part One (2023).

Chan was growing up in Hawaii, close to where Jurassic Park was filmed, when it was released. “I remember thinking, ‘Oh my god, dinosaurs could be there.’ I was a kid, and it just felt so real to me.” After joining ILM, Chan worked on Star Wars: The Last Jedi (2017), The Mandalorian (2019-23), and The Creator (2023) before joining the Rebirth crew, where she attended a special ILM screening of Jurassic Park. “It still stands up,” notes Chan, “that sense of awe and amazement and seeing the dinosaurs for the first time. And for me, it’s about wanting to recreate that feeling.”

While Rebirth was Roberts’s first Jurassic project, he had recently worked closely with director Gareth Edwards on The Creator. But even with that experience, Rebirth provided a “pinch-me” moment for him. “It was a little daunting, just seeing the quality of work and the deep history that ILM has with this franchise,” remembers Roberts. “So it was daunting, but very exciting. And I was definitely up for the challenge.”

The original Jurassic Park from 1993 (Credit: ILM & Universal).

Supervisor 101

The role of a visual effects supervisor can vary from film to film. Chan describes the role as that of both a mediator and translator, as well as the person to whom crew members come with questions. “You see the big picture of everything and have such a huge overview of what’s going on that you can basically connect the dots that are needed for each department and each person within your team,” says Chan.

Coco points to being on set as an important part of the journey to reaching this role. “You start to see how the set works and how things develop from script to bidding to how we’re going to shoot this once getting on set.”

“In some ways, we’re here to facilitate the visual direction,” notes Chan. “Whether that be from the director or from our production visual effects supervisor, we make sure everyone is on the same page of what that visual need is. A lot of it is just working with people on a daily basis, reviewing their work and seeing that everyone’s moving in the same direction.”

The large number of visual effects shots in Rebirth (over 1,200) required splitting up the work throughout production and postproduction. Pulling off that many shots required constant communication between multiple departments and the visual effects supervisors, the latter of whom kept their focus on being creative problem solvers.

(Credit: ILM & Universal).

Designing the Dinosaurs

Chan was the first of the supervisors to join Rebirth in April of 2024, after dinosaur development at ILM had already begun. Figuring out how the dinosaurs would look and move on screen was a challenge they embraced through to the very last shot of the film. “We were constantly trying to make them the scariest, coolest, most fun dinosaurs we could,” says Chan. “We wanted something different from the previous worlds that we’d seen, something that honored some of the original Jurassic Park dinosaurs. But also, Gareth gave his own twist and turn to the design of them.”

Roberts, who joined Rebirth’s team last September, notes the jump between seeing skeletons of a dinosaur in a museum to thinking about how the creature’s joints would move in different environments. Before joining the film, he rewatched previous Jurassic films to get “familiar with the quality of work in all of them, how some of the creatures moved, and conveying the sense of weight for some of the bigger creatures.”

Gareth Edwards was heavily involved throughout the process of deciding how the dinosaurs would look in the film. “I think at one point we had a two-hour live session with Gareth trying to figure out what the Mutadon was going to look like,” remembers Chan. During that session, one of the team’s modelers tried putting different pieces of real dinosaurs onto a Mutadon sculpture to piece it together. “I think that was vital to the process of making sure that our dinosaurs, from their basic stance, without even being in a shot, could stand by themselves and look cool. Once they were at the state that both David and Gareth were happy with, we would place them into a shot.”

Finding real-world animal references for each dinosaur was a key part of making the movements of dinosaurs in Rebirth appear believable and anchored in reality. To create Dolores, the small Aquilops dinosaur that Isabella Delgado (Audrina Miranda) adopts as her pet, an ILM team, led by animation supervisor Delio Tramontozzi, used videos of themselves interacting with their own pet dogs and cats. “They would have multiple takes of the way their pets were responding to a laser light or picking them up in a way that allowed them to snuggle into the crook of an arm or drape over a shoulder,” says Roberts. The reference videos were submitted with animation of Dolores or other dinosaurs so Roberts and other team members could see how those real-life moments translated to animated shots in Rebirth.

As Vickery was usually the only effects supervisor on set, he made sure to communicate what he and Edwards were looking for as far as dinosaur movements and behavior in different scenes. For the scenes in the tunnels when the Mutadon dinosaur pursued several characters from the film, Vickery took on the role of a dinosaur squeezing into the tunnel and picking itself up after landing on the floor. “There’s a moment where it plants its hands on the floor, leans forward with real weight, and roars before charging,” remembers Roberts. “And for a lot of that, David or [animation supervisor] Steve [Aplin] would act out to really convey the emotion they wanted. I think we really benefited from that. We’re all very comfortable with each other and locked in and just really enthusiastic about getting that character into the creatures.”

For another scene near the beginning of the film where a hybrid dinosaur almost caresses a lab worker with its claw before killing him, an animator was filmed holding a water bottle, looking at it, sniffing it, giving it a quick touch, and then snatching it. “That was a wonderful, fun performance from our animators,” notes Roberts, “where they were able to get a bit more emotion into the scene from their own performance, which then was applied to some of the hybrid creatures.”

(Credit: ILM & Universal).

Dividing and Conquering

Different ILM supervisors took lead roles for each major sequence in the film. Chan’s team took on many of the water-heavy sequences featuring the Mosasaurus and the Spinosaurus, and also developed the Distortus Rex. Coco worked on the Mutadon sequences in the market and the tunnels as well as the T. rex chase sequence on the river, while Roberts tackled the beginning and ending of the film, as well as the cliff sequence featuring the Quetzalcoatlus.

Coco noted that splitting up the work into sections was helpful to their teams, so animators or compositors could go to one supervisor to ask a question, instead of having to approach multiple people to get the information they needed. Daily communication between supervisors and their teams of artists was also key throughout the production, as the team involved hundreds of people working in London, San Francisco, Vancouver, and Mumbai.

“It was very important for us all to hear what Gareth’s feedback was,” says Chan. “Because some feedback given on one dinosaur would also apply to another dinosaur in another sequence. And even though we were different teams, it was vital for us to still be sharing information about how we approach winged creatures or creatures in water — there were a lot of tips and tricks that we shared with one another.”

A library of shared assets documenting the workflow, along with an internal website, allowed everyone to understand what visual effects setups were established and ready to use and what they would need to create from scratch. This was especially helpful to Roberts and Coco, who joined the production after Chan. “A big part is sharing the tools up front to be on the same page about how we’re going to tackle things,” notes Roberts. “And then we have a number of chat groups for supervisors, as well as weekly meetings for each sequence and discipline.” Coco adds, “It was good to see what Gareth was looking for in a shot, or what was important for him in a particular environment, so I could follow that line.”

In one case, Roberts referenced the texture and amount of light in the sky in a night sequence at a gas station that the ILM team in London had worked on. That helped him prepare a night scene his team had coming up, with London’s established look serving as a mood board his team could match from the start. “When our team came on, we could say ‘we’re matching that.’ This is something that Gareth has already established. He likes this language for night, so we didn’t have to rediscover or explore that too much,” notes Roberts. “So, without ego, just sharing and following, taking London’s lead where they were ahead, and then we also presented some of our work when we were ahead, or when it was on us to sort of establish a look. Very open communication made it a success and made it feel like it’s one team doing all the work together.”

Chat groups would also give supervisors an easy way to ask each other questions about how they might solve similar problems, especially in sequences where there was a bit of overlap between supervisors. To help with the time difference between London and San Francisco, Roberts and his team started their day early to increase the time the two teams were actively working.

Another vital piece of the ILM crew on Rebirth was the production team – visual effects producers and production managers – who made sure supervisor teams were properly staffed, flagged important deadlines, and blocked off time for teams who needed to develop a new technique or tool.

(Credit: ILM & Universal).

Putting it All Together

The challenges Jurassic World Rebirth presented for its visual effects supervisors were varied, ranging from dinosaurs interacting with simulated water to designing environments from multiple elements to satisfying a director well-versed in visual effects.

Coco’s team tackled the effects-heavy, intense action sequence where the Delgado family is chased by a just-awakened T. rex. While the river in the film is on a tropical island near the equator, these scenes were filmed at a British Olympic river course. “The T. rex interacting with the water, the digitally simulated water, and the family. It was a big, big moment,” notes Coco. “I don’t think a couple of years ago we would be able to do it because of the turnaround time needed. We had amazing effects artists who turned around the simulated water effects in record time.”

The Quetzalcoatlus sequence, when Zora Bennett (Scarlett Johansson) and other members of her team climb down a cliff to retrieve a sample from an egg, had its own unique challenges – and not all dinosaur-related. The cliff and cave environment was put together from a mix of elements, including footage shot at the cave set at Shepperton Studios in England, footage shot at Jog Falls in India, and millions of gallons of digitally simulated water. Mixing footage shot on location, wider shots that were fully CG, and digital extensions on top of drone work became a bit of a puzzle for Roberts’s team to make into one coherent environment. Another important part of this process was getting the right balance, wherein the background isn’t pulling too much focus from the actors. “Even though it’s multiple elements and different sections,” notes Roberts, “you want to create a continuous environment where the audience truly believes the actors are immersed in that backdrop.”

Other shots not involving dinosaurs also occasionally proved tricky to get Edwards’s sign-off on, in part because of his knowledge and appreciation of visual effects.

“Gareth has such a particular eye for blue screens that he can tell when a shot is a blue screen shot,” says Chan, “and for him, it’s successful when he can’t tell it’s a blue screen shot. So we are constantly trying to blend in, change lighting, include more atmospheric lens details, just so many little details that most people, when you think of just green screen or blue screen shots, wouldn’t even consider. Because Gareth wanted to make sure it never felt like a blue screen shot.”

Landing on the right scale for the dinosaurs was also an ongoing process for the visual effects supervisors and Edwards. “We’ve created these dinosaurs at a certain height and size,” notes Chan. “We put them in the shots the way they should naturally be at that size and height. And Gareth would look at some shots and say, ‘No, it doesn’t feel big enough.’ So we played this constant game of make it bigger, make it bigger, okay, that’s too big.

“One thing that Gareth just absolutely excelled at is scale and suspense,” Chan continues. “He knows how to compose every shot and frame to give you that sense. So to him, it’s less about the continuity and making sure things physically and scientifically look correct. It’s more about what makes the audience sit and look at something and feel that suspense. And so we worked with our animation team through many, many iterations of trying to figure out compositionally, what is the scale that works best for these shots?”

After months of hard work from teams across the world, the final product came together for the film’s release in July of 2025, giving both audiences and Rebirth’s crew an adventure to remember. “I think, every person who worked on the movie, and everyone that I talked to, they always said it’s been a dream to work on it, because it is such an iconic movie,” says Coco. “And in many cases, they started in visual effects because of Jurassic, so they don’t do it just because of the work, but because they love it. And working on such a big and iconic movie, they put their heart into it.”

Director Gareth Edwards on location (Credit: Universal).


Amy Richau is a freelance writer and editor with a background in film preservation. She’s the author of several pop culture reference books including Star Wars Timelines, LEGO Marvel Visual Dictionary, and Star Wars: The Phantom Menace: A Visual Archive. She is also the founder of the 365 Star Wars Women Project, which includes over 90 interviews with women who have worked on Star Wars productions. Find her on Bluesky or Instagram.

Teams from Andor, Sinners, and Avatar: Fire and Ash were recognized at the ceremony in Los Angeles.

The 24th annual awards ceremony for the Visual Effects Society was held on February 25 at the Beverly Hilton in Southern California, where teams from Industrial Light & Magic earned three wins.

Avatar: Fire and Ash won Outstanding Environment in a Photoreal Feature for the Bridgehead Industrial City. ILM’s winners included Gianluca Pizzaia, Steve Bevins, Dziga Kaiser, and Zsolt Máté.

(Credit: ILM & 20th Century Studios).

ILM’s John O’Connell, Falk Boje, Hasan Ilhan, and Kevin George won for Outstanding Environment in an Episodic, Commercial, Game Cinematic, or Real-time Project for the Senate District in the episode “Welcome to the Rebellion” from Andor Season 2.

(Credit: ILM & Lucasfilm).

The feature film Sinners was recognized for Outstanding Supporting Visual Effects in a Photoreal Feature, and ILM’s Nick Marshall joined fellow winners Michael Ralla, James Alexander, Espen Nordahl, and Donnie Dean.

(Credit: ILM & Warner Bros.)

Congratulations to all of our ILM VES Awards winners, as well as to our Lucasfilm colleagues, who also took home the win for Outstanding Special (Practical) Effects in a Photoreal Project for their work on Andor.

Read the full list of winners.

Read more about Andor and Sinners here on ILM.com:

Snakes, Trains, and Automobiles: ILM’s Nick Marshall Sheds Light on the Visual Effects of ‘Sinners’

“Like Eating an Elephant One Bite at a Time”: TJ Falls and Mohen Leo on the Visual Effects of ‘Andor’ Season 2

“Let the Experts Be the Experts”: TJ Falls and Mohen Leo on the Visual Effects of ‘Andor’ Season 2

Assembling a Starfighter: Exploring ILM’s Role in Creating the TIE Avenger from ‘Andor’