The Future Will Be Generated
Navigating the Pixel Economy & Where AI Content Creation Is Headed
Cristóbal Valenzuela, CEO of Runway, recently published a bold essay titled The Pixel Economy. In it, he argues that we’re moving from a world where pixels—images, video, virtual environments—were expensive and scarce, to one where they are cheap, instant, and infinite.
“Pixels were expensive; soon they'll be free. We forget how quickly the impossible becomes inevitable.”
It’s a compelling idea. But is it true?
And more importantly—what does it mean for those of us actually making content?
A Vision of Real-Time Content on Demand
Valenzuela’s core thesis rests on three ideas:
The value of content is shifting from execution to vision
Creation is becoming as cheap and fast as distribution
We’ll soon see 1:1 creation and consumption—content generated uniquely and instantly for each viewer
It’s this final point that feels the most radical.
A world where stories aren’t produced, edited, and distributed—but generated dynamically in real-time, tuned to each user.
I love this thought, but how close are we to actually making this a reality?
Here are three real-world demos that suggest where the industry might be going—but also show what’s still missing.
Odyssey: Real-Time AI Video Worlds
Odyssey generates interactive video at roughly 25 frames per second (a new frame every 40 milliseconds). You can "walk" through AI-generated dreamscapes in real time.
Promise: Video you don’t just watch—you explore
Reality: It’s unstable, GPU-intensive, and glitchy. Narrative cohesion is nonexistent, and sessions remain short.
GameNGen: Neural DOOM
Google’s research project simulates DOOM entirely via neural prediction—no traditional game engine required.
Promise: Fully AI-driven gameplay, rendered on the fly.
Reality: 20 FPS, low fidelity, and limited interactivity. A novel experiment, not a playable experience.
Oasis: Minecraft Clone via Neural Rendering
Oasis builds blocky worlds frame-by-frame, mimicking Minecraft-style interactivity with no engine.
Promise: Procedural play, generated in real-time.
Reality: Logic fails quickly. Physics are inconsistent. Worlds hallucinate unpredictably.
Each of these three examples is technically groundbreaking, but they are demos, not yet tools for production.
None of them currently support durable characters, emotional arcs, or story beats that evolve over time.
More importantly, none offer a user experience that competes with even mid-tier games or films.
What the Pixel Economy Ignores
Let’s be clear: real-time pixel generation is no longer a fantasy. But Valenzuela’s vision leaps over some hard questions:
Do audiences want personalized narrative flows? Or do they still crave shared, authored stories that provoke collective meaning?
Is infinite generation actually valuable? Or are we headed for content fatigue—an endless scroll of things that don’t quite matter?
How do we measure quality in a world of disposable pixels? Is emotional resonance still the metric, or does novelty win?
What about cultural context? If no two people experience the same story, what happens to shared canon?
These aren’t just abstract questions—they’re the terrain we now have to navigate.
The Pixel Economy opens exciting new frontiers, but also invites deeper reflection.
As the tools evolve, so must our understanding of what content means, how it connects us, and what we truly value.
And that’s exactly…
Why We Should Still Pay Attention
Despite the limitations, I believe these experiments matter. Not because they’re ready—but because they mark a rapid shift in how we think about content.
Just a year ago, none of these systems existed publicly. Now you can explore AI dream worlds in your browser.
Technological progress is not linear—it compounds.
We underestimate the jump from “glitchy prototype” to “useful tool” at our own risk.
But the most dangerous assumption is this: that this future is inevitable. It’s not.
Technology doesn't replace human creativity—it magnifies it, distorts it, sometimes breaks it.
It demands discipline to know when to lean in and when to hold back.
Blue-Line vs. Green-Line Thinking
Valenzuela frames this moment in terms of two mindsets:
Blue-liners: Wait for AI tools to mature and integrate into legacy workflows.
Green-liners: Engage early, experiment, build fluency—even if the tools are flawed.
I get the appeal of waiting. These tools aren’t easy. They crash. They hallucinate. There are ethical questions that need answering.
But here’s the truth: if you’re not playing with them now, you won’t be ready when they’re suddenly viable.
The real value isn’t in using these tools today to create finished, production-quality work.
It’s in building the muscle memory of real-time iteration.
It's in understanding how prompting, logic, latency, and feedback loops change the shape of storytelling itself.
So What Should Creators Do?
Here’s what I’d recommend for any creative team looking ahead:
Prototype small
Start with internal tests. Try a micro-story or a 60-second interactive demo. Not for public release—just to build intuition.
Document milestones
Create benchmarks: frame rate, stability, coherence, character control. Measure monthly. Track progress.
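The monthly benchmarking step above can be sketched as a simple log. Everything here is a hypothetical illustration—the metric names come from the list above, but the 1-to-5 scoring scales and structure are my own assumptions, not any tool's output:

```python
# Minimal sketch of a monthly benchmark log for real-time generation demos.
# The 1-to-5 scales and field names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Benchmark:
    month: str              # e.g. "2025-06"
    fps: float              # observed frame rate
    stability: int          # subjective 1-5: how rarely sessions crash
    coherence: int          # subjective 1-5: narrative/world consistency
    character_control: int  # subjective 1-5: durable, steerable characters


def progress(history: list[Benchmark]) -> dict[str, float]:
    """Change in each metric between the first and latest entry."""
    first, latest = history[0], history[-1]
    return {
        "fps": latest.fps - first.fps,
        "stability": latest.stability - first.stability,
        "coherence": latest.coherence - first.coherence,
        "character_control": latest.character_control - first.character_control,
    }


# Two months of (made-up) measurements for one tool under evaluation.
log = [
    Benchmark("2025-06", fps=20.0, stability=2, coherence=1, character_control=1),
    Benchmark("2025-07", fps=25.0, stability=3, coherence=2, character_control=1),
]
print(progress(log))
```

Even a lightweight log like this makes the compounding progress visible month over month, which is the whole point of measuring.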
Invest in hybrid talent
Build cross-functional teams. You’ll need storytellers who understand logic, engineers who understand pacing, and strategists who can connect it all to culture.
Focus on narrative use cases
Not every format benefits from real-time generation. Choose projects where responsiveness enhances the experience—like branded interactions, adaptive learning, or immersive fiction.
Runway’s recent launch of Game Worlds is a great example of this shift in action.
It lets anyone build interactive, text-based games with AI-generated images and dialogue—no code required.
The games are simple, built on text and image prompts, but the long-term vision is clear: Runway aims to integrate 3D generation and physics engines, blurring the lines between filmmaking, gaming, and generative world-building.
Tools like this aren’t cinematic yet—but they’re excellent playgrounds for developing narrative logic, branching dialogue, and visual storytelling in real-time systems.
What We Risk by Waiting
I’m not suggesting we abandon traditional tools. I’m still making linear films.
But I’ve also seen how fast the “weird AI prototype” becomes the default tool.
Dismissing this shift because it’s not yet perfect is like dismissing early YouTube because the resolution was terrible. Or rejecting digital photography because it lacked depth of field.
The creative world is fracturing—some of it heading toward cheap, infinite, context-aware content.
That doesn’t mean all content goes that way. But it does mean you need to decide where you’ll stand when the wave breaks.
Not everyone needs to surf it. But you should at least know how deep the water is.
Be Well, Do Good and Make Awesome Things,
About the Author
Gabe Michael is an award-winning AI filmmaker and creative technologist shaping the future of production with AI. He currently serves as VP and Executive Producer of AI at Edelman, where he advises internal and external teams, enhances production workflows, and explores new creative possibilities with AI.
As an early adopter of AI technology in film, video, and creative production, Gabe’s work has earned accolades including ‘Best Odyssey’ at Project Odyssey and ‘Best Character’ and ‘Best Art Direction’ at the Runway Gen:48 AI Film Competitions, earning him entry into creative partner programs with top AI video tools.
With extensive experience as a director and producer in the creator economy, Gabe collaborates with top film studios, brands, and digital platforms, and shares his expertise on LinkedIn, YouTube, and in classrooms at UCLA.
📍 Website: gabemichael.ai
📺 YouTube: Gabe Michael’s Channel
📷 Instagram: @gabemichael_ai
📝 Substack: The Creative Possible
💼 LinkedIn: Gabe Michael