AI Video Workflow


With the launch of a myriad of new AI video generation tools and upgrades to existing ones, I wanted to fully explore taking a set of Midjourney (MJ) still renders all the way through an AI production pipeline to a finished project.
This project actually started as part of my day job as a visual designer: our head of BD asked if we could create some visuals for Estfor Kingdom, a close partner project that was pushing a big update.
Never one to pass up an opportunity to learn, I decided to try to do this using AI tools. With the original artwork as a starting point, I began exploring AI production pipelines that would take me to a finished, polished piece. Hopefully.
1. The initial imagery was extended from a 1:1 ratio to 16:9 using Generative Fill in Photoshop. Great tool with great consistency in the results; it's a must-have in any designer's toolbox for photo bashing.
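
A rough sketch of the canvas maths behind that step, for anyone wanting to batch the pre-extension outside Photoshop. This is a hypothetical Pillow script, not what Generative Fill does internally; filenames and sizes are placeholders:

```python
# Hypothetical sketch: centre a 1:1 render on a 16:9 canvas, leaving blank
# side panels for a generative fill pass to paint in. Filenames are placeholders.
from PIL import Image

def pad_to_16_9(path_in: str, path_out: str) -> None:
    src = Image.open(path_in)               # e.g. a 1024x1024 MJ render
    w, h = src.size
    new_w = round(h * 16 / 9)               # width needed for 16:9 at the same height
    canvas = Image.new("RGB", (new_w, h))   # black side panels, to be filled later
    canvas.paste(src, ((new_w - w) // 2, 0))
    canvas.save(path_out)

pad_to_16_9("mj_render_01.png", "mj_render_01_wide.png")
```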

2. I fired up a subscription to by far the most impressive tool I have used in a long time: @Magnific_AI. I upscaled the Photoshop images with it, letting it add in subtle details, and was not disappointed with the results. Well worth the money.

3. I tried a few different programs to animate the upscaled Magnific images. First up was Final Frame AI, a great tool, but it did not quite cut it on wide scenery stills. RunwayML did, however, with a mixture of camera motion and Motion Brush.
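
For comparison, if you only need simple camera motion on a still and no AI tool is to hand, ffmpeg's zoompan filter can fake a slow push-in. A minimal sketch, assuming ffmpeg is installed and on the PATH; filenames and parameters are placeholders, and this is a stand-in for Runway's far more capable motion tools:

```python
# Non-AI fallback: a slow push-in on a still image using ffmpeg's zoompan filter.
# Assumes ffmpeg is on the PATH; filenames and durations are placeholders.
import subprocess

def slow_push_in(still: str, out: str, frames: int = 150, fps: int = 25) -> None:
    zoompan = (
        "zoompan="
        "z='min(zoom+0.0015,1.3)':"    # zoom in a touch each frame, capped at 1.3x
        f"d={frames}:"                 # number of output frames from the single still
        "x='iw/2-(iw/zoom/2)':"        # keep the zoom centred
        "y='ih/2-(ih/zoom/2)':"
        f"s=1920x1080:fps={fps}"
    )
    subprocess.run(
        ["ffmpeg", "-y", "-i", still, "-vf", zoompan,
         "-c:v", "libx264", "-pix_fmt", "yuv420p", out],
        check=True,
    )

slow_push_in("render_03_upscaled.png", "render_03_pushin.mp4")
```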

4. With a full set of around 10 renders, I jumped into After Effects and began threading them together, adding in particles, some text, stock elements and a few other bits and bobs to try to sell the illusion of depth.

5. I Dynamic Linked the AE project into Premiere to add some audio and sound effects, and used a number of tools, including speed ramping, to really tie the AE render into the audio track.
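
Premiere's speed ramping is keyframed on the timeline, but for a quick constant speed change outside Premiere, ffmpeg's setpts and atempo filters will do. A minimal sketch of a uniform 2x speed-up (not a true ramp), assuming ffmpeg is installed; filenames are placeholders:

```python
# Rough sketch: a uniform 2x speed-up with ffmpeg (not a keyframed speed ramp).
# setpts halves the video timestamps; atempo doubles the audio tempo to match.
import subprocess

def speed_up_2x(src: str, out: str) -> None:
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-filter_complex", "[0:v]setpts=0.5*PTS[v];[0:a]atempo=2.0[a]",
         "-map", "[v]", "-map", "[a]",
         out],
        check=True,
    )

speed_up_2x("ae_render.mp4", "ae_render_2x.mp4")
```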

Overall, I'm super happy with the outcome, and it was a tonne of fun to explore these tools. I'm thinking of running the final 720p render through something like the @topazlabs AI video upscaler to see the results. I want to test pika_labs and a few other tools next.
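
If I do test an AI video upscaler, a plain Lanczos upscale makes a handy non-AI baseline to compare it against. A quick sketch, assuming ffmpeg is installed; the filenames and 1440p target are placeholders:

```python
# Baseline for comparison: a non-AI Lanczos upscale of the 720p render with ffmpeg.
import subprocess

def lanczos_upscale(src: str, out: str, width: int = 2560, height: int = 1440) -> None:
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-vf", f"scale={width}:{height}:flags=lanczos",
         "-c:a", "copy", out],
        check=True,
    )

lanczos_upscale("final_720p.mp4", "final_1440p_lanczos.mp4")
```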