AI in Media & Broadcast – Professional Essentials Guide


NEWSCASTSTUDIO.COM


aimed at a diverse demographic is going to certainly add speed and scale to the business model.

Jordan Thomas, marketing manager, QuickLink: For broadcasters and production organizations, we have seen an innovative approach in which manufacturers and solution providers are utilizing these AI advancements and applying them to both existing and new solutions. These advancements are not only streamlining workflows but also allowing us to elevate video and audio quality. The ability to remove video artifacts, correct eye contact, and automatically frame the shot of remote guests is revolutionary when it comes to creating high-quality content that engages audiences.

Costa Nikols, strategy advisor, media and entertainment, Telos Alliance: In audio, AI is unlocking new creative options and helping make the unmanageable more manageable, from improving sound clarity in challenging environments to enhancing dialogue normalization at scale for global audiences. These advancements can reduce the manual workload for production teams, enabling them to focus on storytelling and creative processes rather than the mundane. Automating the mundanity is where AI thrives, and where it can deliver the most impact today.

Sam Bogoch, CEO, Axle AI: AI has matured into a critical tool for broadcasters, enabling real-time applications such as scene understanding with semantic search, automated tagging, speech-to-text transcription, and metadata generation. These advancements simplify media asset management, streamline workflows, and enhance production speed, allowing teams to deliver high-quality content faster.
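The semantic search Bogoch describes can be illustrated with a toy sketch: each clip carries auto-generated metadata (tags plus transcript snippets), and a query is ranked against it by vector similarity. Everything here is hypothetical; the clip IDs and metadata strings are invented, and the bag-of-words "embedding" is a stand-in for the learned neural embeddings production systems actually use.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy 'embedding': a sparse bag-of-words term-frequency vector.
    Real systems substitute a learned neural embedding here."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, catalog: dict[str, str]) -> list[str]:
    """Rank clip IDs by similarity between the query and each
    clip's auto-generated metadata."""
    q = embed(query)
    return sorted(catalog,
                  key=lambda cid: cosine(q, embed(catalog[cid])),
                  reverse=True)

catalog = {
    "clip_001": "goal celebration football crowd cheering stadium",
    "clip_002": "studio interview anchor desk election coverage",
    "clip_003": "football match penalty kick goalkeeper save",
}
print(search("football goal", catalog))
# the two football clips rank above the studio interview
```

The point of the sketch is only that once metadata exists, search reduces to ranking by similarity; the quality of a real system lives almost entirely in the embedding model.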

Noa Magrisso, AI developer, TAG Video Systems: For broadcasters, this means access to tools that automate captioning, enhance audience analytics, and streamline video editing. AI agents are revolutionizing workflows by autonomously managing tasks like scheduling, content tagging, and even real-time audience interactions. The rise of multimodal AI is also a game-changer, enabling seamless integration of text, images, and audio within a single model.

Simon Parkinson, managing director, Dot Group: Within broadcasting, there are many competitive advantages that AI can help businesses to realize, be it through video editing, content generation, or automating industry-agnostic challenges that free up employees to work on being creative. The possibilities are endless.

How is AI actively being used in broadcast production workflows? In real applications, not just as a proof of concept?

Peyton Thomas: AI is being used in the auto-tracking and auto-framing of robotic cameras. During the election broadcast, we saw AI being used to trigger graphics via voice prompts. AI is triggering back-end automation to encode and tag data during and after a production is complete.

Yang Cai, CEO and president, VisualOn: AI is actively used in broadcast production workflows to enhance efficiency and quality. It automates repetitive tasks like transcription, metadata tagging, and content indexing, significantly speeding up production timelines. Additionally, AI-driven tools optimize live video streams by increasing compression ratios through technologies such as content-adaptive encoding, enable real-time language translation, and improve visual quality through upscaling, color correction, and noise reduction.
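As a deliberately naive illustration of the transcription-to-metadata step Cai mentions, the sketch below pulls candidate tags out of a speech-to-text transcript by term frequency. The stopword list and sample transcript are invented for the example; real tagging pipelines use trained language models rather than word counts.

```python
import re
from collections import Counter

# Minimal stopword list, invented for this example only.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "on",
             "for", "with", "are", "from"}

def auto_tags(transcript: str, k: int = 3) -> list[str]:
    """Suggest the k most frequent non-stopword terms in a
    speech-to-text transcript as candidate metadata tags."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]

transcript = ("The election results are in and the election night "
              "coverage continues with results from key districts")
print(auto_tags(transcript))
```

Even this crude pass shows why automated tagging speeds up indexing: the dominant topical terms surface without anyone watching the clip.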

Bob Caniglia: AI is being actively utilized to enhance efficiency and simplify complex tasks. For example, by using smart reframe for social media, broadcasters can easily create square or vertical versions of their footage for Instagram and other apps, with AI technology automatically identifying action and repositioning the image inside a new frame so the team doesn't have to do it manually. Additionally, there are real-world applications of AI-powered facial recognition that streamline footage organization by sorting clips based on people in the shot.
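The geometry behind that kind of automatic reframing can be sketched in a few lines: given a subject position from an upstream detector (assumed here, not implemented), compute a crop window of the target aspect ratio centered on the subject and clamp it to the frame. This is a generic illustration, not how any particular product implements the feature.

```python
def reframe_crop(frame_w: int, frame_h: int,
                 subject_cx: int, target_aspect: float) -> tuple[int, int]:
    """Return (x, crop_w) for a full-height crop of the given
    aspect ratio, centered on the detected subject but clamped
    so the crop stays inside the frame."""
    crop_w = round(frame_h * target_aspect)
    x = subject_cx - crop_w // 2          # center the crop on the subject
    x = max(0, min(x, frame_w - crop_w))  # clamp to the frame edges
    return x, crop_w

# 9:16 vertical crop from a 1920x1080 frame, subject near the left edge:
print(reframe_crop(1920, 1080, 200, 9 / 16))
# the crop clamps to x=0 rather than running off the frame
```

A real tool repeats this per frame and smooths the motion of `x` over time so the virtual camera doesn't jitter with the detector.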

Steve Taylor, chief product and technology officer, Vizrt: From a Vizrt perspective, we have been using AI and ML for a long time as a key advantage for our sports and graphics solutions. This includes supporting color keying on any background, without the need for a green screen. AI and ML have also been used at Vizrt to make augmented reality and virtual reality more realistic, as well as to quickly process live sports content to identify players.

Sam Bogoch: Our company has seen multiple real-world uses of our Axle AI Tags platform, ranging from large national broadcasters using AI (including RTM, in Malaysia) to make their news content searchable, to Hollywood promo houses (including MOCEAN, in Hollywood) using AI to sift through the massive amount of dailies footage they receive. In both these cases, AI makes it practical to search the large amount of relevant footage for the first time.

Beyond real-world implementation, where is AI or ML likely to be used next?

Stefan Lederer, CEO and co-founder, Bitmovin: Something we're exploring and developing is an AI-powered solution that translates American Sign Language (ASL) text into client-side sign-language signing avatars. Currently, this is strictly an innovation piece that we're collaborating with the deaf community on to understand how and if the technology could help make video streaming more inclusive. Beyond that, I expect companies to explore different ways to make content more accessible for all viewers. For example, AI could be used to analyze video content and narrate key visual elements, such as facial expressions, settings, and actions, in real time, which will help to automate the creation of audio descriptions for visually impaired viewers.

Steve Taylor: The use of AI to auto-generate subtitles and captions, as well as to translate languages, is definitely a growing area. This is also true for AI's use in identifying workflow optimizations through studio automation. In a production environment, it can optimize workflows by automating repetitive tasks, enabling the team to confidently focus on other areas of the production.

Noa Magrisso: The next phase of AI and ML involves advancing collaboration, personalizing content, and seamlessly leveraging multimodal AI to integrate text, images, and audio. Emerging applications include adaptive learning tools, healthcare diagnostics, and immersive media experiences.

How can emerging technologies improve efficiency in news gathering and reporting?

Siddarth Gupta: Emerging technologies let reporters quickly filter vast data sets to help them pinpoint the most relevant information. Automated tools help reduce tedious tasks such as transcription, translation, and summarization. This not only speeds up production but allows news teams to focus more deeply on research and improve accuracy and turnaround time.

Bob Caniglia: Innovative AI-driven technologies are driving greater efficiency in news gathering and reporting by automating repetitive tasks and optimizing workflows. AI tools, like automatic transcription and smart sorting, enable journalists to manage content faster and improve accuracy under tight deadlines. This allows news teams to dedicate more time to in-depth reporting and delivering compelling stories to their audiences.

What are the potential challenges of integrating AI in newsroom workflows?

Peyton Thomas: While many may argue that integrating AI in the newsroom eliminates jobs, I believe it can be used to automate repetitive tasks while creating an opportunity for end-users to be more creative and try things they haven't been able to do before.

Yang Cai: From a newsroom perspective, integrating AI can be challenging due to concerns about maintaining journalistic integrity and ensuring the accuracy of AI-generated content. Compatibility with existing newsroom systems and workflows may require significant technical adjustments. There's also apprehension among journalists about balancing automation with editorial oversight and preserving the human element in reporting.

Jordan Thomas: Adopting AI-driven technology within newsroom workflows requires overcoming resistance to change and ensuring seamless integration with existing systems. Another fundamental challenge is addressing the ethical use of AI and ensuring that it is not misleading viewers. This is particularly the case when it comes to video and audio content that may be altered by AI tools.

Steve Taylor: There are certainly two big challenges that we hear about a lot. One is the trust factor for the content workflow: is information coming from a legitimate source, or was it generated by AI? The second is whether the output of AI breaks any copyright or licensing contracts, such that it is not legally seen as new content owned by the person who requested AI to generate it. This will keep lawyers busy for a long time!

Sam Bogoch: Challenges include adapting legacy systems to integrate with AI tools, although increasingly the AI tools can catalog existing media and storage repositories (both on-premise and cloud). Training staff to take full advantage of rapidly-evolving AI capabilities is also critical; even the best technical solutions have limited value if there isn't buy-in and adoption from the wider team.

What are the biggest barriers to adopting AI in broadcast production?

Siddarth Gupta: Adopting AI in broadcast production often requires extensive infrastructure and specialized talent, both of which drive up implementation costs. Models trained on limited or non-representative data can often struggle with real-time scenarios, leading to out-of-distribution (OOD) errors. These compounding technical and financial hurdles have forced broadcasters to rigorously scrutinize and justify their potential ROI before committing to AI implementation.

Yang Cai: The biggest barriers to adopting AI in broadcast production include high implementation costs, the complexity of integrating AI with existing workflows, and a lack of technical expertise among staff. Additionally, concerns about data privacy, reliability, and resistance to change within organizations can hinder adoption. Overcoming these challenges requires investment in training, infrastructure, and building trust in AI solutions.

Kathy Klinger: Ensuring quality and authenticity remains a challenge, as AI lacks the nuanced understanding and emotional depth of human creators. Ethical and legal concerns, including intellectual property, data privacy, and bias, further complicate its adoption, particularly in news and fact-based content. To navigate these issues, the industry must balance AI's efficiency with human creativity, establish responsible frameworks, and uphold transparency to maintain trust and content efficacy.

Jordan Thomas: Often, a lack of technical expertise and concerns about job displacement may hinder full-scale adoption; however, this can be overcome by preparing and providing insightful training to workforces. One misconception is often the barrier of cost and the complexity of integrating AI-driven tools. However, this isn't always the case. Solutions like QuickLink StudioEdge utilize AI technology powered by Nvidia to enhance the video and audio quality of remote guest contributions, offered at no additional cost, and can be seamlessly integrated into workflows.

Ken Kobayashi, business manager, Sony Electronics: One of the biggest barriers in camera operation is the “skills transfer.” Customers already have their own established or inherited skills, and sometimes they don’t want to use automated features such as auto-focusing. If AI cameras have room to learn or implement customers’ skills, such as PTZ speed and framing, through deep-learning algorithms in the future, they would be more widely used in broadcast production.

What role does AI play in improving live event production and broadcasting?

Yang Cai: AI enhances live event production and broadcasting by enabling real-time analytics, automated camera control, and intelligent content curation. It improves viewer experience with features like real-time language translation, personalized recommendations, and adaptive bitrate streaming. AI also assists in detecting and correcting errors during live broadcasts, ensuring seamless delivery and high-quality output.
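Adaptive bitrate streaming, which Cai lists among the viewer-facing features, rests on a simple decision rule that can be sketched directly. The bitrate ladder below is hypothetical (real ladders are tuned per title, often with content-adaptive encoding), and the rule shown is the basic throughput heuristic, not any vendor's actual algorithm.

```python
# Hypothetical bitrate ladder: (bitrate in kbps, rendition height).
LADDER = [(800, 360), (1800, 540), (3500, 720), (6000, 1080)]

def pick_rendition(bandwidth_kbps: float, safety: float = 0.8) -> tuple[int, int]:
    """Pick the highest rendition whose bitrate fits within a
    safety fraction of the measured throughput, a simple
    throughput-based ABR heuristic."""
    budget = bandwidth_kbps * safety
    best = LADDER[0]  # always fall back to the lowest rung
    for bitrate, height in LADDER:
        if bitrate <= budget:
            best = (bitrate, height)
    return best

print(pick_rendition(5000))  # budget 4000 kbps -> 3500 kbps / 720p
```

The safety margin is the interesting design choice: it trades a step down in quality for headroom against throughput dips, which is why live players rarely run flush against the measured bandwidth.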

Kathy Klinger: AI enhances live event production and broadcasting by optimizing workflows and enabling real-time adjustments to improve both quality and efficiency. It can automate tasks such as camera switching, highlight detection, and audience analytics, allowing production teams to focus on creativity and storytelling. This combination of automation and insight elevates the viewing experience and ensures events reach audiences with greater impact.

Zeenal Thakare: Broadcasters and live event productions are going to focus on creating more refined and engaging content. What that means is faster reaction times during live events, as well as immersive and interactive experiences. AI is helping push the boundaries in the art



