Last Updated on May 2, 2025 by factkeeps
Artificial Intelligence (AI) is no longer a peripheral tool in media production—it has become a driving force in how stories are created, told, and delivered in real time. Speaking at the inaugural World Audio Visual and Entertainment Summit (WAVES) 2025 in Mumbai, Richard G. Kerris, Vice President at Nvidia, emphasized how AI is redefining the entire media and entertainment value chain. With a rapidly growing ecosystem of developers and next-generation tools, Nvidia is pushing the boundaries of what’s possible in creative industries.
Kerris’s core message to the industry was clear: AI is not here to replace artists—it’s here to enhance their creativity and speed up the storytelling process. “Whether it’s a movie, a TV show, or a game—it’s really about the story. These tools help you get to that point faster,” he said. At the center of this transformation are Nvidia’s developer-first initiatives, including RTX Kit, HoloScan for Media, and NIM (Nvidia Inference Microservices).
Developers: The Real Drivers of AI Innovation
According to Kerris, the AI revolution is powered by developers. “Developers are key to what’s taking place with AI, because they understand how an application works, and they can harness the power of AI and bring it to fruition with the work that’s being done together.” Nvidia’s strategy is built around empowering these developers with the tools and platforms they need to build the next generation of content creation solutions.
One of the standout innovations highlighted at the summit was HoloScan for Media, a software-defined platform that allows real-time AI to be integrated into live broadcasts. This marks a fundamental shift from traditional post-production workflows to interactive, real-time content delivery. With HoloScan, broadcasters can now make dynamic, AI-driven adjustments, such as real-time language switching or personalized content feeds, without the need for expensive, dedicated hardware.
Expanding AI’s Reach Across the Value Chain
The pace of AI adoption in media is accelerating. Nvidia’s ecosystem has grown from around 22,000 generative AI companies to nearly 30,000. These partners range from tech giants like Adobe to emerging innovators like Runway and Arctris. Nvidia’s role, as Kerris puts it, is to “provide the platform and support the developers who are shaping the next generation of storytelling.”
At the heart of Nvidia’s push is NIM, a suite of microservices that handle specific tasks such as translation, visual effects, and interactive overlays. These can be assembled into modular “blueprints” for different production requirements—whether for film studios, news broadcasters, or digital content creators. “It’s growing fast, and these tools are designed to scale from consumer-grade setups to data-center workflows,” Kerris said.
This scalability has significant implications. For instance, AI can now automatically reformat content—like converting widescreen 16:9 cinematic scenes into vertical video formats for mobile—without compromising artistic integrity or character continuity. This level of personalization and audience engagement was previously unthinkable on such a scale.
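To make the reformatting idea concrete, the core geometry of converting a 16:9 frame to a vertical crop can be sketched in a few lines. This is only an illustrative sketch, not Nvidia’s method: production systems pair learned subject tracking with generative fill, whereas the function below (a hypothetical `crop_to_vertical` helper, assuming the subject’s horizontal position is already known) just computes a clamped crop window.

```python
def crop_to_vertical(src_w, src_h, subject_x, target_aspect=9 / 16):
    """Compute a crop window that reframes a widescreen frame
    around a subject for a vertical (e.g. 9:16) output.

    subject_x: horizontal center of the tracked subject, in pixels.
    Returns (left, top, width, height) of the crop in source pixels.
    """
    # Keep the full frame height; derive the width for the target aspect.
    crop_h = src_h
    crop_w = round(crop_h * target_aspect)
    # Center the crop on the subject, clamped so it stays inside the frame.
    left = min(max(subject_x - crop_w // 2, 0), src_w - crop_w)
    return left, 0, crop_w, crop_h

# A 1920x1080 (16:9) frame with a subject tracked at x=1400:
print(crop_to_vertical(1920, 1080, 1400))
```

Preserving “character continuity,” as Kerris describes it, is the hard part: the crop window must follow the subject smoothly across frames rather than being recomputed independently per frame as above.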
The Human Touch Remains Central
Despite concerns about job displacement, Kerris was quick to reassure the audience. “It doesn’t replace the artist out there. What it does is it accelerates the capability for an artist to tell their story,” he stressed. Rather than diminishing human creativity, AI is expanding the possibilities available to artists, writers, directors, and designers.
As real-time AI continues to evolve, the focus remains on enhancing storytelling, not automating it away. The blend of human vision and AI efficiency is crafting a future where stories are more immersive, personalized, and powerful than ever before.
With AI inputs.