
As artificial intelligence reshapes creative industries, Hollywood artists are finding new ways to merge technology and imagination. Yu Han “Hank” Chang, an award-nominated Senior Compositor specializing in Nuke, has been at the forefront of this evolution—bridging traditional visual effects workflows with emerging AI tools.
In a recent research project titled Next Stop Paris, Chang and his team explored how AI tools like Runway, SAM, and Beeble could integrate directly into a professional Nuke pipeline. The result wasn’t about replacing artists with automation, but about redefining what’s possible when machine learning assists rather than dictates the creative process. In this conversation with AI Journal, Chang discusses how AI is changing collaboration across departments, accelerating iteration, and opening new creative pathways for visual storytelling.
You were involved in producing a short film that combined Nuke with Runway and other AI tools. Can you walk us through the workflow of that project and how AI technologies fit into the pipeline?
In Next Stop Paris, a research short I worked on at TCL, I was responsible for a train sequence that explored how AI tools could fit into a traditional Nuke pipeline. The actors were filmed against a green screen, and instead of relying on standard plates, we used Runway’s video-to-video system to generate the background environment. To support integration, we used SAM primarily to create the core mattes and Beeble to generate utility passes such as depth and normals, which allowed us to relight the actors so they felt naturally embedded in the scene. Our team also developed a process we called “Rediffusion”: we would start with a rough slapcomp in Nuke, feed it into Runway to add interactive lighting or textures such as grime on the windows, and then selectively reapply those enhancements back onto the plates. This approach let AI serve as a helper rather than a replacement: it sped up matte work, provided passes we couldn’t otherwise capture, and gave us creative layers to integrate, while the final compositing decisions and polish were always guided by the artist’s eye.
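To make the matte step concrete, here is a minimal sketch of what a single-frame matte pull with Meta’s Segment Anything Model (SAM) might look like. The checkpoint name matches Meta’s published vit_h weights, but the frame path and the point prompt are illustrative assumptions, not values from the production Chang describes.

```python
# A minimal sketch of pulling a core matte with SAM for one frame.
# The frame path and click coordinates are illustrative assumptions.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM checkpoint (vit_h is the largest published model).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# SAM expects an HxWx3 uint8 RGB image.
frame = cv2.cvtColor(cv2.imread("plate.0101.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(frame)

# A single positive click on the actor serves as the prompt.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[960, 540]]),  # (x, y) in pixels
    point_labels=np.array([1]),           # 1 = foreground point
    multimask_output=True,
)

# Keep the highest-scoring mask and write an 8-bit matte that a
# Read node in Nuke can pick up for refinement and compositing.
matte = masks[np.argmax(scores)].astype(np.uint8) * 255
cv2.imwrite("matte.0101.png", matte)
```

A pull like this yields only the “core” matte Chang mentions; fine edge detail such as hair and motion blur still needs the artist’s refinement in Nuke.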
What unique advantages did AI tools bring to the creative process that traditional methods could not?
One of the unique advantages AI has brought to the creative process is its ability to generate unexpected results that expand our creative options beyond what traditional methods could achieve. For example, when we ran a slapcomp through Runway, it didn’t just give us a preview of the final look—it also introduced surprising effects we hadn’t initially imagined. In one case, the story required the actor to grow extra fingers, and the video-to-video output offered a range of variations that became creative choices rather than limitations. In that sense, AI acted almost like a creative collaborator, broadening the horizon of possibilities and accelerating the brainstorming stage by allowing us to quickly visualize ideas that might not have emerged through manual methods alone.
Did incorporating AI into the production change how different departments collaborated on the project?
In terms of collaboration, incorporating AI didn’t radically change how departments worked together, but it did streamline certain parts of the pipeline. As compositors, we could generate utility passes, as mentioned earlier, directly through AI tools, which reduced the need to wait for the 3D department to build them from scratch. Although these passes weren’t always perfectly accurate, they were often sufficient to push the shot forward and accelerate iteration, making the overall workflow quicker while maintaining the same structure of interdepartmental collaboration.
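As an illustration of what such a pass enables, here is a toy sketch of a Lambertian relight driven by an AI-generated normals pass, in the spirit of the Beeble passes mentioned above. The file names, the 0–255 normal encoding, and the light direction are assumptions for illustration; in production this logic would live inside Nuke’s relighting tools rather than a standalone script.

```python
# A toy sketch: use an AI-generated normals pass to add a new key light
# via simple Lambertian (N . L) shading. File names, the 0..255 -> -1..1
# normal encoding, and the light direction are illustrative assumptions.
import numpy as np
import cv2

plate = cv2.imread("plate.0101.png").astype(np.float32) / 255.0
normals_rgb = cv2.cvtColor(cv2.imread("normals.0101.png"), cv2.COLOR_BGR2RGB)
normals = normals_rgb.astype(np.float32) / 255.0 * 2.0 - 1.0  # decode to XYZ

# A key light from screen left, slightly above and in front of the subject.
light_dir = np.array([-0.5, 0.3, 0.8], dtype=np.float32)
light_dir /= np.linalg.norm(light_dir)

# Per-pixel N . L, clamped to [0, 1].
lambert = np.clip(normals @ light_dir, 0.0, 1.0)[..., np.newaxis]

# Mix a base ambient level with the new light and write the result.
relit = np.clip(plate * (0.4 + 0.6 * lambert), 0.0, 1.0)
cv2.imwrite("relit.0101.png", (relit * 255).astype(np.uint8))
```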
As a compositor, precision and seamless integration are crucial. How do you balance the technical demands of AI-driven tools with maintaining artistic control over the final frame?
As a compositor, I believe the key to balancing AI-driven tools with artistic control is grounding everything in the fundamentals of photography and image structure. AI can generate useful results, but it doesn’t always understand consistency or intent. By relying on my knowledge of light, perspective, and composition, I can evaluate what the AI produces, identify usable elements, and refine or replace those that aren’t working. In practice, that means treating AI as a supportive tool to speed up or inspire ideas, while ensuring the final frame always meets the technical precision and visual integrity that audiences expect.
How do you think studios and artists should prepare for a future where AI becomes even more embedded in production workflows?
I think studios and artists should approach AI with the right mindset: understanding when it can be a valuable tool and when not to rely on it. That means developing the discipline to use AI thoughtfully, letting it speed up repetitive or exploratory tasks while still grounding the creative process in human judgment and artistic principles. Preparing for the future is less about replacing traditional skills and more about integrating AI in a way that enhances efficiency and creativity without losing control of the final vision.
What emerging AI technologies or techniques are you most excited about for the future of visual effects?
With technology evolving, I’m most excited about seeing AI tools become more accurate in generating roto and mattes. A tool that can quickly separate elements while still preserving fine edge details—like hair, motion blur, or transparency—would be a huge step forward. It would not only save a tremendous amount of time on one of the most labor-intensive tasks in compositing but also free artists to focus more on the creative and artistic aspects of the work.
If you could design the perfect AI tool for compositing work, what would it do?
That’s a tricky question. Of course, the dream would be an AI tool that could finish a comp with one click. But if that really happened, I’d probably be out of a job! More realistically, the perfect tool would automate repetitive tasks while giving artists smarter controls: an AI that could cleanly fix edges, intelligently match color and luminance when combining different plates, or handle roto and cleanup with minimal adjustments. That way, it speeds up the technical side but still leaves the creative and artistic decisions in the compositor’s hands.
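The plate-matching behavior Chang wishes for can be loosely approximated with per-channel statistics. Here is a toy sketch, assuming both plates are already loaded as float RGB arrays; it only hints at what a genuinely intelligent tool would do.

```python
# A toy sketch of statistical color/luminance matching between plates:
# shift the foreground's per-channel mean and spread toward the
# background's. A real tool would be far smarter; this is only the idea.
import numpy as np

def match_plate(fg: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """Match fg (HxWx3 float RGB) to bg's per-channel statistics."""
    out = np.empty_like(fg)
    for c in range(3):
        f_mean, f_std = fg[..., c].mean(), fg[..., c].std() + 1e-6
        b_mean, b_std = bg[..., c].mean(), bg[..., c].std()
        out[..., c] = (fg[..., c] - f_mean) / f_std * b_std + b_mean
    return np.clip(out, 0.0, 1.0)
```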
What aspects of compositing continue to challenge or excite you after years of working on complex productions?
What still excites me after years in compositing is the collaborative problem-solving, the moment when you sit down with colleagues or supervisors and discuss creative approaches to a difficult shot. That exchange of ideas and human connection feels especially valuable now that much of our work is done remotely in front of a computer. And of course, there’s a real sense of satisfaction when you finally conquer a challenging shot and see it come together seamlessly on screen.
Looking ahead, how do you see AI shaping your own artistic journey, and in what ways do you hope to use these tools to push the boundaries of visual storytelling?
For me, AI is a powerful tool for visualizing the concepts in my head. With my background in artistry, I see it as a way to take an idea and make it clearer and more precise, almost like creating a visual sketchbook that others can immediately understand. The outcome is like clay that can be further shaped and refined. Looking ahead, I hope to use these tools not just to speed up workflows but to eventually tell my own stories, using AI as a reference and a springboard to push the boundaries of how ideas are communicated and how visual storytelling can evolve.