Last week, Deepa Subramaniam, Adobe’s Vice President of Product Marketing for Creative Cloud, was excited to reveal Adobe’s Firefly generative AI model at Adobe MAX London. Michael Burns questions her about the hopes and fears surrounding its impact on post production.
“Firefly debuted last March. It’s barely over a year old, yet it’s been incredible to see the reception to the foundational models – we have an image model, we have a vector model, we have a design model, and we’re working on a video model, which we’ll bring to market this year,” Subramaniam told IBC365 on the eve of the MAX event in Battersea. Subramaniam heads up a team driving Adobe’s digital imaging, photography, video, and design strategies forward: “The excitement and enthusiasm and adoption from the community [towards Firefly] have been awesome, and all of that development has happened through open public betas.”
The Firefly model integrates AI image generation into Adobe’s creative applications, shown at MAX running in Adobe Photoshop, Lightroom, Illustrator, InDesign, and Adobe Express. But what will be of most interest to IBC365 readers is a sneak preview of a new video model that promises AI-powered post production capabilities in Premiere Pro...