AI Video Just Got a VFX Department
This new AI filmmaking platform gives you control over motion, camera movement, and VFX
Last week, I had the incredible opportunity to guest host a video tutorial on Curious Refuge, where I walked viewers through one of the most exciting new AI tools in visual storytelling: Higgsfield AI.
As someone deeply passionate about AI filmmaking and the tools shaping this new frontier, this was a chance to showcase what’s possible right now with just a few prompts, some creativity, and a clear vision.
What is Higgsfield AI?
At a glance, Higgsfield is an image-to-video and text-to-video generator—but what sets it apart is its fine control over camera movements and visual effects.
Where most generators give you stylized animation or abstract visual noise, Higgsfield offers a toolkit that feels like it was built for directors.
You can:
Guide Camera Direction (zoom, whip pan, dolly, etc.)
Control Action Beats (explosions, levitation, disintegration)
Direct Specific Choreography (moonwalks, boxing punches, skate tricks)
Create an Ad from just one product image
In short, Higgsfield gives filmmakers and creators the power to orchestrate scenes that feel alive, cinematic, and stylistically intentional.
How It Works
There are two creation flows inside Higgsfield:
1. Text to Image
Start with a basic prompt. You can optionally apply stylizations (like Muppets, animated sitcoms, or even film noir).
Example: “A man with a beard gives a lecture in Muppet style.”
You select aspect ratio, quality, and whether to use their standard model or GPT-4. Easy generation and surprisingly good results.
2. Image to Video
This is where the magic happens.
You upload or generate an image, then apply camera effects, action triggers, or visual effects.
For instance:
Zoom into the subject’s eye for a transition
Trigger a building explosion + dolly in to simulate chaos
Have a clown shoot the lens and crack it (yes, that actually worked)
Most effects also support start and end frames, which is killer for transitions between scenes or to specify movement in a shot.
Effects That Actually Work
Here are some I used (and recommend you try):
Disintegration + Explosion: Thanos-style vanishing mid-scene
Arc Turn + Metal Morph: Great for character transformations
Levitation + Invisibility: Slick for supernatural or sci-fi setups
Face Punch, Lens Crack, Liquid Morphs: Action sequences and creative reveals
You can even do car chases and explosions with surprisingly accurate framing. Yes, AI can simulate a chase scene now.
The control is granular: you can assign weights to each effect (e.g., 70% explosion, 30% zoom) and combine multiple effects or camera moves in a single shot.
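One way to think about that weighting is as a normalized blend across the effects in a shot. Higgsfield handles this through its UI sliders, not through code, so the snippet below is purely illustrative; the function name and data shape are my own invention, not part of any Higgsfield tooling:

```python
def blend_effects(effects):
    """Normalize raw effect weights so they sum to 1.0.

    `effects` maps effect names to raw weights, e.g. the
    70% explosion / 30% zoom combo described above.
    Hypothetical helper for illustration only.
    """
    total = sum(effects.values())
    if total <= 0:
        raise ValueError("At least one effect needs a positive weight")
    return {name: weight / total for name, weight in effects.items()}

# The 70/30 combo from the article:
shot = blend_effects({"explosion": 70, "zoom": 30})
print(shot)  # {'explosion': 0.7, 'zoom': 0.3}
```

The practical takeaway is the same whether you slide or script: the weights are relative, so cranking one effect up effectively dials the others down.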
Tips for Better Output
Keep prompts clear, but don’t over-specify: Higgsfield’s enhancer can rewrite messy ones
Use images that reflect the FX: e.g., if your character needs to fly, pick an image where their arms are already out
Reroll often: Like any model, it may take a few tries to land the best output
Adjust effect sliders: Don’t just settle for default values—fine-tune movement weight and duration
End frames help with storytelling: Use them for whip pans, zooms, or location transitions
Pricing Breakdown
Higgsfield offers 25 free credits to start, which is barely enough for light testing. Real use requires a paid plan.
Here's a quick summary:
Basic Plan – $9/month: 300 images or 30 video generations
Pro Plan – $20/month: 600 credits, higher speeds, no watermark
Unlimited Plan – $55/month: 1,500 credits, 4 concurrent jobs, early feature access
All prices above are for annual plans. You can also top up with one-time credit packs if needed.
Final Thoughts
Guest hosting Curious Refuge was an honor. But more than that, it reminded me how quickly this AI filmmaking space is evolving. Tools like Higgsfield make it not only possible but practical to design cinematic sequences without an expensive production.
And sure, it’s not perfect. But it’s creative, fast, and inspiring. And that’s a damn good starting point.
If you’re experimenting with Higgsfield too, I’d love to hear what effects you’ve pulled off. Or drop me your prompts and let’s swap tips.
Be Well, Do Good and Make Awesome Things,
About the Author
Gabe Michael is an award-winning AI filmmaker and creative technologist shaping the future of production with AI. He currently serves as VP and Executive Producer of AI at Edelman, where he advises internal and external teams, enhances production workflows, and explores new creative possibilities with AI.
As an early adopter of AI technology in film, video and creative production, Gabe’s work has earned accolades for ‘Best Odyssey’ at Project Odyssey, ‘Best Character’ and ‘Best Art Direction’ at the Runway Gen:48 AI Film Competitions, leading to his entry into many creative partner programs with top AI video tools.
With extensive experience as a director and producer in the creator economy, Gabe collaborates with top film studios, brands, and digital platforms, and shares his expertise on LinkedIn, YouTube, and in classrooms at UCLA.
📍 Website: gabemichael.ai
📺 YouTube: Gabe Michael’s Channel
📷 Instagram: @gabemichael_ai
📝 Substack: AI and the Creative Possible
💼 LinkedIn: Gabe Michael