Recently I’ve been very deep into the potential of AI for storytelling: everything from Midjourney to Runway to Stable Diffusion. I’m going to attempt to lay out what I’ve been working on with this NeRF-to-Stable-Diffusion commercial process.

So firstly I thought about what I could do with just a single object. Normally I wouldn’t do this for a commercial purpose, but it was sort of a challenge: what can I create using AI and just one single NeRF capture? I know how to capture a pretty good NeRF, and I also understand how to train a Stable Diffusion model, so finally I wanted to fuse those together to create a journey through time and space. I came up with a storyline, then slowly peeled it back and distilled it down to a basic visual depiction of it… there was ZERO budget after all.

  1. Capture a flawless NeRF
  2. Train a unique diffusion model
  3. Craft a bunch of camera moves with the NeRF in order to fake the impression of movement. In this case, a car driving from point A to point B.
  4. Craft an edit with these video files of my NeRF looking like it’s driving somehow lol.
  5. Style transfer my SD art model onto the shots I created.
  6. Drop it back into the edit.
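Steps 3–6 boil down to a round trip: split the NeRF renders into frames, stylize each frame, and stitch them back together. Here’s a minimal sketch of that round trip using ffmpeg command builders — the file names and fps are just placeholders, not the actual project settings.

```python
import shlex

def extract_frames_cmd(video_path: str, out_dir: str, fps: int = 24) -> str:
    """Build an ffmpeg command that splits a NeRF render into numbered PNG frames."""
    return (f"ffmpeg -i {shlex.quote(video_path)} -vf fps={fps} "
            f"{shlex.quote(out_dir)}/frame_%05d.png")

def reassemble_cmd(frames_dir: str, out_path: str, fps: int = 24) -> str:
    """Build the inverse command: stitch the stylized frames back into a clip."""
    return (f"ffmpeg -framerate {fps} -i {shlex.quote(frames_dir)}/frame_%05d.png "
            f"-c:v libx264 -pix_fmt yuv420p {shlex.quote(out_path)}")
```

The stylize step in the middle (running each PNG through the trained SD model) happens between these two commands; everything else is just bookkeeping.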

Again, this is all from just one single NeRF capture. I did learn some things along the way that would potentially inform anything along these lines in the future.

Lessons Learned:

Wait, watch the video first!

OK, Lessons Learned:

  1. Probably not a good idea to make a TV commercial about objects that need to be in motion with a motionless 3D object lol.
  2. With more time and money to capture an actual moving car, this style transfer could be waaay better, with very little rapid flicker. However, in this case there’s a large caveat: in order for the car to appear in motion, it needed that GEN AI flicker to give a sense of movement, so the flicker actually helps here. But with a moving human or an actually driving car, less flicker would be much better.
  3. Using AI for aesthetic purposes, integrating or peppering it into the final product of the video or advert, is the way to go. This isn’t something I actually learned on this project; I knew it after the first big job I did for a pharma company. But my personal style and desire is to use AI to enhance the creative, and it shouldn’t always be the only fashion by which we experience the creative output.
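On the flicker point in lesson 2: one common way to tame frame-to-frame flicker in per-frame style transfer is to reuse the same seed (and keep the denoising strength low) for every frame. This is a toy sketch of that idea — the `stylize_frame` function is a hypothetical stand-in for a real SD img2img call, not the actual pipeline I used.

```python
import random

def stylize_frame(frame_pixels, seed, strength):
    # Hypothetical stand-in for an SD img2img call. The key idea:
    # a fixed seed plus low strength keeps successive frames close
    # together, which reduces flicker.
    rng = random.Random(seed)
    noise = rng.random() * strength
    return [p + noise for p in frame_pixels]

def stylize_sequence(frames, seed=42, strength=0.35):
    # Re-seed identically for every frame so the stylization is
    # deterministic across the whole sequence.
    return [stylize_frame(f, seed, strength) for f in frames]
```

With a varying seed per frame you get exactly the rapid flicker described above — which, in this one case, happened to sell the motion.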

I have another project I’m working on now that really starts to integrate GEN AI with live action in a way that feels better. It makes use of GEN AI, NeRF and some other elements. I’ll post it here when it’s done. For now, I’ve posted an original NeRF shot here to give you a sense of what it was before the style transfer, along with a smaller 7-sec cutdown of sorts.


Thanks for reading.