A step-by-step guide to an easy and controllable AI effect

Introduction – Finding a Path in the Wild West
I promised you last week that I’d go into a bit more depth on a way of controlling AI for use in VFX. I’m going to talk you through the process of creating the swirling apparition in the shot above. When I initially looked at this shot, I was sure that it would have to be simulated smoke, because that was the only way I could nail the shape, direction, and timing.
That would’ve been half a day or a day’s work, so I decided to try and find an AI-based solution.
The use of AI in filmmaking and VFX is a Wild West at the moment. There are no set protocols – everyone’s just figuring it out. Last week, I said that from my perspective, I was less interested in using it to replace filming and VFX than in maximizing its potential as a tool.
If it’s going to be a useful tool, then the goal is controllability, which is AI’s main weakness.
As I mentioned last week, my first attempt to wrest control from it was to ask AI to create its effects on a flat green or black background. I did this using Luma’s Dream Machine.
Creating “Directability”
That works pretty well, but remember – if you can film it yourself, it’s going to look better! In this example I’m creating a smoky apparition, so AI’s uncanny weirdness could work in my favour.
A better way to control AI than straight prompts is, of course, image-to-video. I basically create a base image, plug it into AI and try to direct it from there. It has its uses, but it’s still very limited for film – it’ll create its own camera moves, and will completely destroy an actor’s performance.
Video-to-video is the next step, since then you can at least control the main beats of movement. This allows for much more directability. Of course, you still have the problem of AI completely replacing everything you’ve done rather than selectively changing parts!

Step by Step Video-to-Video Element Creation
To get around this, I aim to give it the smallest amount of information I need to direct it to do what I want. Here I used Runway ML.
1. I created a brush stroke in Fusion that followed the path I wanted the shadowy smoke to move along.

2. Simply by changing an attribute of the brush stroke, you can animate it along its path. I animated it to match the speed I wanted the smoke to move in shot.

3. I then exported just that brush stroke as a video and put it into Runway’s video-to-video, asking it to create smoke and supplying an image of swirling ink as a reference.

4. The resulting element doesn’t really feel like smoke, but I thought it would be great for a sinister, smoke-like apparition. If it needed to feel different, we’d unfortunately be back in the land of “re-prompt and pray”. Once downloaded, I treated it just like any filmed element and integrated it using traditional VFX techniques: I multiplied it over the shot, used a blurred, flipped version to create a soft shadow on the desk, and roto’ed out the cup so the smoke could pass behind it.
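Fusion’s paint tool gives you that write-on behaviour from step 2 for free, but the same idea is easy to script if you ever need a guide element like this outside Fusion. Here’s a minimal sketch in Python – the resolution, path points, and stroke width are all invented placeholders, and it assumes Pillow for the drawing:

```python
# Sketch: render a "write-on" stroke growing along a path, one frame at a
# time. All values here are placeholders -- in practice you'd trace the
# motion you want over your own plate.
import math
from PIL import Image, ImageDraw

WIDTH, HEIGHT, FRAMES = 640, 360, 24

# Hand-picked points for the stroke to follow (hypothetical values).
path = [(50, 300), (150, 220), (260, 260), (380, 160), (520, 120)]

def partial_path(path, t):
    """Return the points of the path up to fraction t (0..1) of its length."""
    segs = list(zip(path, path[1:]))
    lens = [math.dist(a, b) for a, b in segs]
    target = t * sum(lens)
    pts, run = [path[0]], 0.0
    for (a, b), length in zip(segs, lens):
        if run + length <= target:
            pts.append(b)
            run += length
        else:
            # Cut part-way through this segment.
            frac = (target - run) / length
            pts.append((a[0] + frac * (b[0] - a[0]),
                        a[1] + frac * (b[1] - a[1])))
            break
    return pts

frames = []
for f in range(FRAMES):
    t = (f + 1) / FRAMES
    img = Image.new("RGB", (WIDTH, HEIGHT), "black")  # plain background
    draw = ImageDraw.Draw(img)
    pts = partial_path(path, t)
    if len(pts) > 1:
        draw.line(pts, fill="white", width=18, joint="curve")
    frames.append(img)

# Save as an image sequence to assemble into the video-to-video input.
for i, img in enumerate(frames):
    img.save(f"stroke_{i:03d}.png")
```

The timing control lives entirely in how `t` maps to frames – ease it in or out and the “smoke” will follow.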
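For anyone more comfortable reading code than node graphs, the comp in step 4 boils down to a couple of multiplies. Here’s a rough numpy sketch using stand-in arrays instead of real footage – the sizes, the dark region, and the shadow strength are all placeholder values:

```python
# Sketch of the comp described above, in numpy terms rather than a node
# graph. "plate" and "element" are stand-in arrays; in practice they'd be
# your filmed shot and the downloaded AI element, as float RGB in 0..1.
import numpy as np
from scipy.ndimage import gaussian_filter

H, W = 270, 480
rng = np.random.default_rng(0)
plate = rng.uniform(0.4, 0.9, (H, W, 3))   # stand-in for the filmed shot
element = np.ones((H, W, 3))               # white = no darkening
element[100:170, 200:320] = 0.2            # dark smoke region (placeholder)

# 1. Multiply the smoke element over the plate: white leaves the plate
#    untouched, dark areas darken it.
comp = plate * element

# 2. Soft shadow: flip the element vertically, blur it, and multiply it
#    in at reduced strength.
shadow = gaussian_filter(np.flipud(element), sigma=(8, 8, 0))
shadow_strength = 0.5
comp = comp * (1.0 - shadow_strength * (1.0 - shadow))

# (The roto for the cup would be a matte that holds the element out of
# that region -- i.e. set the element back to 1.0 inside the matte
# before step 1.)
```

It’s the same multiply-over-the-shot logic as the Fusion comp, just spelled out arithmetically – handy for sanity-checking why a multiplied element can only ever darken the plate.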
The total turnaround time for the whole shot was roughly an hour, and I think I can halve that. I’m going to try this process out on a few more shots, and then I’ll create a video on how to do it, if you’re interested in using similar effects at your end.
As always, I hope you have a great weekend!