aimed at a diverse demographic is certainly going to add speed and scale to the business model.
Jordan Thomas, marketing manager,
QuickLink: For broadcasters and produc-
tion organizations, we have seen an inno-
vative approach in which manufacturers
and solution providers are utilizing these
AI advancements and applying them to
both existing and new solutions. These
advancements are not only streamlining
workflows but also allowing us to elevate
video and audio quality. The ability to re-
move video artifacts, correct eye con-
tact, and automatically frame the shot of
remote guests is revolutionary when it
comes to creating high-quality content
that engages audiences.
Costa Nikols, strategy advisor, media
and entertainment, Telos Alliance: In au-
dio, AI is unlocking new creative options
and helping make the unmanageable more
manageable — from improving sound clari-
ty in challenging environments to enhanc-
ing dialogue normalization at scale for
global audiences. These advancements
can reduce the manual workload for pro-
duction teams, enabling them to focus on
storytelling and creative processes rather
than the mundane. Automating those mundane tasks is where AI thrives, and where it can deliver the most impact today.
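As a rough illustration of loudness normalization at scale, the sketch below batch-normalizes dialogue stems to a common LUFS target using the open-source pyloudnorm and soundfile packages; the file paths and target level are assumptions, and this is not Telos Alliance's implementation.

```python
# Minimal sketch: batch loudness normalization of dialogue stems to a
# broadcast-style target. Assumes the open-source pyloudnorm and soundfile
# packages; file paths and the target level are illustrative.
import glob

import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -24.0  # common broadcast dialogue loudness target (assumed)

def normalize_file(path: str, out_path: str) -> None:
    data, rate = sf.read(path)                   # load audio as float samples
    meter = pyln.Meter(rate)                     # ITU-R BS.1770 loudness meter
    loudness = meter.integrated_loudness(data)   # measured integrated loudness
    normalized = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
    sf.write(out_path, normalized, rate)

# Apply the same target across a whole batch of program files.
for path in glob.glob("dialogue_stems/*.wav"):
    normalize_file(path, path.replace(".wav", "_norm.wav"))
```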
Sam Bogoch, CEO, Axle AI: AI has ma-
tured into a critical tool for broadcasters,
enabling real-time applications such as
scene understanding with semantic search,
automated tagging, speech-to-text tran-
scription, and metadata generation. These
advancements simplify media asset man-
agement, streamline workflows, and en-
hance production speed, allowing teams to
deliver high-quality content faster.
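A minimal sketch of the semantic search idea, assuming the sentence-transformers package and invented clip descriptions: it ranks clips by embedding similarity to a free-text query and is not a representation of Axle AI's product.

```python
# Minimal sketch of semantic search over clip descriptions or transcript
# snippets, assuming the sentence-transformers package. Illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical metadata pulled from an asset manager.
clips = [
    "Reporter stand-up outside city hall, evening, light rain",
    "Aerial drone shot of the stadium before kickoff",
    "Studio two-shot, anchors discussing election results",
]
clip_embeddings = model.encode(clips, convert_to_tensor=True)

query = "drone footage of a sports venue"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank clips by cosine similarity to the query.
scores = util.cos_sim(query_embedding, clip_embeddings)[0]
for score, clip in sorted(zip(scores.tolist(), clips), reverse=True):
    print(f"{score:.2f}  {clip}")
```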
Noa Magrisso, AI developer, TAG Video
Systems: For broadcasters, this means ac-
cess to tools that automate captioning, en-
hance audience analytics, and streamline
video editing. AI agents are revolutioniz-
ing workflows by autonomously managing
tasks like scheduling, content tagging, and
even real-time audience interactions. The
rise of multimodal AI is also a game-chang-
er, enabling seamless integration of text,
images, and audio within a single model.
Simon Parkinson, managing director,
Dot Group: Within broadcasting, there are
many competitive advantages that AI can
help businesses to realize, be it through
video editing, content generation, or auto-
mating industry-agnostic challenges that
free up employees to work on being cre-
ative. The possibilities are endless.
How is AI actively being used in broadcast
production workflows? In real applications,
not just as a proof of concept?
Peyton Thomas: AI is being used for auto-tracking and auto-framing with robotic cameras. During the election broadcast we saw AI being used to trigger graphics via voice prompts, and AI is triggering back-end automation to encode and tag data during a production and after it is complete.
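A hypothetical sketch of the voice-prompt idea: phrases detected in a live speech-to-text feed are mapped to graphics triggers. The phrase table and the trigger_graphic() function are invented for illustration; a real system would call the graphics engine's own control API.

```python
# Minimal sketch of mapping voice prompts in a live transcript to graphics
# triggers. The phrase table and trigger_graphic() are hypothetical.
PHRASE_TO_GRAPHIC = {
    "bring up the map": "ELECTION_MAP",
    "show the results": "RESULTS_BOARD",
    "lower third": "LOWER_THIRD_GUEST",
}

def trigger_graphic(template_id: str) -> None:
    # Placeholder: in production this would send a command to the
    # graphics playout system over its control protocol.
    print(f"TRIGGER {template_id}")

def handle_transcript_line(text: str) -> None:
    lowered = text.lower()
    for phrase, template_id in PHRASE_TO_GRAPHIC.items():
        if phrase in lowered:
            trigger_graphic(template_id)

# Example: a line arriving from a live speech-to-text engine.
handle_transcript_line("Okay, bring up the map for the eastern district.")
```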
Yang Cai, CEO and president, Visua-
lOn: AI is actively used in broadcast pro-
duction workflows to enhance efficiency
and quality. It automates repetitive tasks
like transcription, metadata tagging, and
content indexing, significantly speed-
ing up production timelines. Addition-
ally, AI-driven tools optimize live video
streams by increasing compression ra-
tio through technologies such as con-
tent-adaptive encoding, enable real-time
language translation, and improve visual
quality through upscaling, color correc-
tion, and noise reduction.
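As a small example of automated transcription and tagging, the sketch below uses the open-source openai-whisper package to transcribe a finished package and derive crude keyword tags; the file name and the tagging heuristic are assumptions.

```python
# Minimal sketch: automatic transcription of a finished package with the
# open-source openai-whisper package, producing text and simple keyword
# metadata. The file name and tagging heuristic are illustrative only.
from collections import Counter

import whisper

model = whisper.load_model("base")
result = model.transcribe("evening_package.mp4")   # hypothetical media file

transcript = result["text"]

# Crude keyword "tags": the most frequent longer words in the transcript.
words = [w.strip(".,!?").lower() for w in transcript.split()]
tags = [w for w, _ in Counter(w for w in words if len(w) > 6).most_common(10)]

print(transcript[:200])
print("tags:", tags)
```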
Bob Caniglia: AI is being actively utilized
to enhance efficiency and simplify complex
tasks. For example, by using smart reframe
for social media, broadcasters can easily
create square or vertical versions of their
footage for Instagram and other apps, with
AI technology automatically identifying ac-
tion and repositioning the image inside a
new frame so the team doesn’t have to do
it manually. Additionally, there are real-world applications of AI-powered facial recognition that streamline footage organization by sorting clips based on the people in the shot.
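A minimal sketch of the smart-reframe idea, assuming OpenCV's bundled face detector: find the main subject in a frame and crop a 9:16 window around it, falling back to a center crop when no face is found. This is illustrative, not the vendor's implementation.

```python
# Minimal sketch of "smart reframe": detect a face in a frame and crop a
# 9:16 vertical window centered on it. Uses OpenCV's bundled Haar cascade.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def reframe_vertical(frame):
    h, w = frame.shape[:2]
    crop_w = int(h * 9 / 16)                       # width of a 9:16 crop at full height
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) > 0:
        x, y, fw, fh = max(faces, key=lambda f: f[2] * f[3])  # largest face
        center_x = x + fw // 2
    else:
        center_x = w // 2                          # fall back to a center crop
    left = min(max(center_x - crop_w // 2, 0), w - crop_w)
    return frame[:, left:left + crop_w]

cap = cv2.VideoCapture("interview.mp4")            # hypothetical landscape source clip
ok, frame = cap.read()
if ok:
    vertical = reframe_vertical(frame)
    cv2.imwrite("interview_vertical.jpg", vertical)
cap.release()
```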
Steve Taylor, chief product and technology officer, Vizrt: From a Vizrt perspective, we have been using AI and ML for a long time as a key advantage for our sports and graphics solutions. This includes supporting color keying on any background, without the need for a green screen. AI and ML have also been used at Vizrt to make augmented reality and virtual reality more realistic, as well as to quickly process live sports content to identify players.
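A rough sketch of keying without a green screen, assuming MediaPipe's selfie segmentation model: the model produces a per-pixel foreground matte that is composited over a virtual set plate. The input images are invented, and this is not Vizrt's method.

```python
# Minimal sketch of AI-based keying without a green screen: a person
# segmentation model produces a matte that replaces the background.
import cv2
import numpy as np
import mediapipe as mp

segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)

frame = cv2.imread("presenter.jpg")                # hypothetical studio frame
background = cv2.imread("virtual_set.jpg")         # hypothetical virtual set plate
background = cv2.resize(background, (frame.shape[1], frame.shape[0]))

rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
mask = segmenter.process(rgb).segmentation_mask    # per-pixel foreground confidence

# Composite: keep the presenter where confidence is high, virtual set elsewhere.
alpha = np.expand_dims(mask, axis=-1)
composite = (alpha * frame + (1 - alpha) * background).astype(np.uint8)
cv2.imwrite("keyed_output.jpg", composite)
```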
Sam Bogoch: Our company has seen mul-
tiple real-world uses of our Axle AI Tags plat-
form, ranging from large national broadcast-
ers using AI (including RTM, in Malaysia) to
make their news content searchable, to Hol-
lywood promo houses (including MOCEAN,
in Hollywood) using AI to sift through the
massive amount of dailies footage they re-
ceive. In both these cases, AI makes it prac-
tical to search the large amount of relevant
footage for the first time.
Beyond real-world implementation, what are the likely next uses of AI or ML?
Stefan Lederer, CEO and co-founder,
Bitmovin: Something we’re exploring and
developing is an AI-powered solution that
translates American Sign Language (ASL)
text into client-side sign-language signing
avatars. Currently, this is strictly an innova-
tion piece that we’re collaborating with the
deaf community on to understand how and
if the technology could help make video
streaming more inclusive. Beyond that, I ex-
pect companies to explore different ways to
make content more accessible for all view-
ers. For example, AI could be used to ana-
lyze video content and narrate key visual el-
ements, such as facial expressions, settings,
and actions, in real-time, which will help to
automate the creation of audio descriptions
for visually impaired viewers.
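As a hedged sketch of the audio-description idea, the code below captions frames sampled every few seconds with an off-the-shelf image captioning model via the Hugging Face transformers pipeline; the file name and sampling interval are assumptions, and a production system would still need timing against dialogue gaps, voicing, and editorial review.

```python
# Minimal sketch: draft audio-description text by captioning sampled frames
# with an off-the-shelf image captioning model. Illustrative only.
import cv2
from PIL import Image
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

cap = cv2.VideoCapture("drama_episode.mp4")        # hypothetical program file
fps = cap.get(cv2.CAP_PROP_FPS) or 25
frame_idx = 0
descriptions = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % int(fps * 10) == 0:             # sample one frame every ~10 seconds
        image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        caption = captioner(image)[0]["generated_text"]
        descriptions.append((frame_idx / fps, caption))
    frame_idx += 1
cap.release()

for t, text in descriptions:
    print(f"{t:7.1f}s  {text}")
```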
Steve Taylor: The use of AI to auto-generate subtitles and captions, as well as to translate languages, is definitely a growing area. This is also true for AI's use in identifying workflow optimizations through studio automation. In a production environment, it can optimize workflows by automating repetitive tasks, enabling the team to confidently focus on other areas of the production.
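A minimal sketch of caption generation plus translation, assuming timed transcript segments from any speech-to-text engine: it writes an SRT file and machine-translates each line with a Hugging Face translation pipeline. The model name and the example segments are illustrative.

```python
# Minimal sketch: turn timed transcript segments (from any speech-to-text
# engine) into SRT captions and machine-translate each line.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

segments = [  # (start_seconds, end_seconds, text) from a transcription step
    (0.0, 3.2, "Good evening, and welcome to the program."),
    (3.2, 7.8, "Tonight we look at how newsrooms are adopting AI."),
]

def to_timestamp(seconds: float) -> str:
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

with open("captions_es.srt", "w", encoding="utf-8") as srt:
    for i, (start, end, text) in enumerate(segments, start=1):
        translated = translator(text)[0]["translation_text"]
        srt.write(f"{i}\n{to_timestamp(start)} --> {to_timestamp(end)}\n{translated}\n\n")
```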
Noa Magrisso: The next phase of AI
and ML involves advancing collaboration,
personalizing content, and seamlessly le-
veraging multimodal AI to integrate text,
images, and audio. Emerging applications
include adaptive learning tools, healthcare
diagnostics, and immersive media experi-
ences.
How can emerging technologies improve
efficiency in news gathering and reporting?
Siddarth Gupta: Emerging technologies
let reporters quickly filter vast data sets
to help them pinpoint the most relevant
information. Automated tools help reduce