Adobe's Premiere Pro has been among the most popular video editing programs in the world since its initial release as simply "Premiere" for Mac in late 1991, and today it is used by major Hollywood movie editors and indie filmmakers alike. But it's about to undergo a shake-up unlike any in its 33-year history.
Today, Adobe announced that it aims to update Premiere Pro with plug-ins for emerging third-party AI video generation models, including OpenAI's Sora and rivals Runway ML's Gen-2 and Pika 1.0.
Not only would this bring these AI tools to even more potential users, but if the integrations are adopted by even a fraction of Adobe's 33 million paying Creative Cloud subscribers, they could usher in the most sweeping and revolutionary changes yet to the computerized video production process.
With this addition, Premiere Pro users would be able to edit and work with live-action video captured on traditional cameras alongside and intermixed with AI footage. Imagine filming an actor performing a scene in which they flee from a monster, then generating the monster itself with AI: no props or costuming necessary, with both clips accessible and combined in the same video file in the same editor. The same is true for animation created using more established processes, from computer graphics to hand-drawn frames, which could be intermixed with matching AI footage in the same file in Premiere Pro.
As Adobe wrote in a news release published on its website: “Early explorations show how professional video editors could, in the future, leverage video generation models from OpenAI and Runway, integrated with Premiere Pro, to generate B-roll to edit into their project. It also shows how Pika Labs could be used with the Generative Extend tool to add a few seconds to the end of a shot.”
There’s an interesting philosophy behind the move, which the company spells out: “Adobe sees a future in which thousands of specialized models emerge, each strong in their own niche. Adobe’s decades of experience with AI shows that AI-generated content is most useful when it’s a natural part of what you do every day. For most Adobe customers, generative AI is just a starting point and source of inspiration to explore creative directions.”
Adobe also published a preview video showing what the future workflow with third-party AI video generators would look like.
No timetable or concrete release details for now…
No timetable has been set for when Adobe would integrate these third-party AI video generators into Premiere Pro, and the details don’t seem fully ironed out quite yet — especially since many of these third-party tools require paid subscriptions after a few initial preview video generations, or, in the case of Sora, are not even publicly available.
In addition, Adobe has made it a point with its own in-house generative AI products — Firefly and Generative Fill, among them — to note that its models are trained on data it owns or has licensed/has rights to use, such as Adobe Stock image contributor content (to the chagrin of some Adobe Stock photographers and artists).
This, combined with Adobe’s AI indemnification coverage, is designed to position the company as a source of trusted AI tools and products for enterprises in particular, who may be wary of the ongoing legal challenges, issues, and general critiques of some artists and creators against AI.
Presumably, the use of third-party tools within Adobe Premiere Pro would not be covered by the same indemnification shield, but Adobe does state in today’s news release that: “Adobe pledges to attach Content Credentials – free, open-source technology that serves as a nutrition label for online content – to assets produced within its applications so users can see how content was made and what AI models were used to generate the content created on Adobe platforms.”
Also, just last week, Bloomberg reported that Adobe had trained Firefly in part on images generated by the rival AI art generator Midjourney, which itself is built on the open-source Stable Diffusion model, trained on scraped web data that includes copyrighted material (among a multitude of sources).
For now, Firefly for video
In the meantime, Adobe today also announced that a version of its Firefly generative AI model will be coming to Premiere Pro "later this year," enabling a whole new set of "generative AI workflows" and features within the software.
Among them is “Generative Extend,” which lets video editors and filmmakers “seamlessly add frames to make clips longer” without shooting any new footage, a potentially hugely helpful and money-saving feature. Adobe also notes how it can aid in making smoother transitions, so a clip that ends too abruptly can instead be extended to linger longer on a moment or motion.
Firefly for Video will also let Premiere Pro users perform intelligent "Object Detection & Removal": highlighting objects (props, characters, costumes, scenery, and so on) within their videos and letting the AI model track them across frames. In doing so, users will be able to edit those objects with generative AI into new ones, quickly changing a character's costume or prop, or remove them entirely, even across multiple clips and camera angles.
Finally, Firefly for Video will also ship with a text-to-video generator, putting it in a class right up against Sora, Runway, Pika, and Stable Video Diffusion. However, without hands-on testing, it's hard to say how the model's quality and fidelity to a user's text prompt will compare to those of more established AI video generators.
Positive early reception among filmmakers and creatives
Though it's still just a preview for now, Adobe's new generative AI integrations and features for Premiere Pro have already won applause from filmmakers and creatives on social media, especially those already experimenting with AI video production.
“If this actually works, this is how AI will make everyone more efficient,” wrote Jason Zada, the filmmaker and founder of the creative studio Secret Level, in a comment on LinkedIn.
“This is going to be incredibly helpful with my live-action work,” wrote director Kevin K. Shah in the same thread.
Bilawal Sidhu, an AI influencer and former member of Google Maps’ AR/VR team, posted on X stating that the proposal to add third-party AI video models to Premiere Pro “is amazing for creatives because to do anything compelling with AI video generation models you need to bring them into a video editing tool.”