

Our Conversation With Bob Orban, Part 4

April 15, 2026, at 6:10 p.m.
Bob Orban in an advertisement that challenged users to compare competing processors with V3.0 of the 8200 software.

Orban’s first audio processor for FM broadcasting was launched 50 years ago. To mark the anniversary, the company is sponsoring a series of interviews with Bob Orban, in conversation with Radio World Editor in Chief Paul McLane. (You can read the series from the beginning here.)

Last time Bob discussed the genesis of the Optimod-FM 8100A. Here is Part 4, which includes the acquisition of the company by AKG Acoustics and the introduction of the first DSP-based Orban FM audio processor, the Optimod-FM 8200.

Paul McLane: We ended on a cliffhanger last time, with the company about to be sold. What brought that about?

Bob Orban: My partner John Delantoni was ill and couldn’t put his full energy into managing it anymore. We had grown the company very nicely, so we looked around and AKG was the willing buyer. We did two months of grueling negotiations but the deal got done in 1989.

It completely changed the managerial structure and opened up opportunities for new growth.

McLane: AKG was based in Europe. At least watching from the outside, the Orban acquisition felt like a big shift for them. How did you fit into their thinking?

Orban: They were heavily into pro audio, and they felt that Orban would be a good fit for expanding into the broadcast market, which they saw as a first cousin to what they were already doing.

McLane: Were you entirely out of ownership at that point?

Orban: I was. I’d continue to be the chief engineer. My employment contract was one of the criteria for completing the deal.

McLane: So you kept your hand in, yet you were answering to a boss after having run the place on your own initiative. And there’s the whole creative side of your work. Was it a shock to the system?

Orban: As much as you might expect. We’d had product managers before the acquisition and it got a bit more serious with AKG, but I was still driving the technical innovation.

This was about the time we started developing the 8200, and we’d hired our first DSP engineer, Paul Neyrinck.

Motorola had come out with its 56000 series DSP chips, which were 24-bit, fixed-point with double precision arithmetic, and they were good enough for high-quality audio. The stars aligned in terms of making the first DSP-based Optimod practical.

McLane: Why did digital signal processing mark such an important shift?

Orban: It’s a completely different way of processing audio. At the simplest level, you could model analog audio processors. But you could also do stuff you could never do in analog.

One of the big challenges of building complicated analog processors like the 9000 or the 9100 was that they had hundreds of parts with varying tolerances.

We used sensitivity analysis and other formalisms to choose appropriate tolerances for the components; but the boxes were time-consuming to test. You really had to go into the details, measuring many things, to detect anomalies due to parts that were out of tolerance.

DSP is software, and once you write the software, every box sounds identical. You don’t have to worry about component tolerances anymore.

It also gives you the opportunity to make processing that’s more complicated than anything you could do in analog without having to worry about manufacturability.

McLane: What were the tradeoffs, if any?

Orban: We faced the aliasing problem, like anybody doing nonlinear processing in digital.

When you do clipping, for example, you produce harmonics, and if their frequencies are more than half the sample rate, they fold around and eventually end up back in the audio band, where they can cause a new unpleasant distortion that wasn’t present in the analog domain.

In the 8200 we approached this by oversampling four times, so that our clippers were internally sampled at 128 kilohertz. We set up a listening test comparing the base sample rate — which was 32 kHz in the 8200 — with 64 kHz and with 128 kHz. There was a big audible difference going from 32 to 64, and a much smaller audible difference going from 64 to 128.

We figured that 128 was a good choice, so the 8200 was based on that.
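The oversampling rationale described above can be sketched numerically. The toy model below is not the 8200’s actual clipper — the 5 kHz tone frequency and the clip level are arbitrary illustrative choices — but it shows the effect: clipping a tone directly at a 32 kHz rate lets the 7th harmonic (35 kHz) fold back to 3 kHz, while clipping at 4x oversampling and decimating filters those harmonics out before they can alias.

```python
import numpy as np
from scipy.signal import resample_poly

fs = 32_000                            # 8200 base sample rate, Hz
t = np.arange(fs) / fs                 # one second of samples
x = np.sin(2 * np.pi * 5_000 * t)      # 5 kHz test tone

def clip(s):
    """Hard clipper: the nonlinearity that generates harmonics."""
    return np.clip(s, -0.5, 0.5)

# Clip directly at 32 kHz: odd harmonics at 15, 25, 35 ... kHz alias back.
y_direct = clip(x)

# Clip at 4x (128 kHz), then decimate; resample_poly's built-in low-pass
# removes the out-of-band harmonics before they can fold back.
y_over = resample_poly(clip(resample_poly(x, 4, 1)), 1, 4)

def level_at(y, freq):
    """Spectral magnitude at a given frequency."""
    spec = np.abs(np.fft.rfft(y)) / len(y)
    bins = np.fft.rfftfreq(len(y), 1 / fs)
    return spec[np.argmin(np.abs(bins - freq))]

# The 7th harmonic (35 kHz) folds to |35k - 32k| = 3 kHz at the base rate.
print(f"3 kHz alias, direct clip:      {level_at(y_direct, 3_000):.2e}")
print(f"3 kHz alias, oversampled clip: {level_at(y_over, 3_000):.2e}")
```

The direct-clipped signal shows clear energy at 3 kHz — a component that was never in the source — while the oversampled path suppresses it by the stopband depth of the decimation filter.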

Later, when we got competition from Omnia and Frank Foti started beating us up over aliasing, I revisited that and worked out some mathematical formalisms for anti-aliasing, which I patented and which would be introduced in the 8400.

The other difference was that digital filters naturally don’t have the same frequency response as their closest analog counterparts.

There are standard transforms you can use if you, say, have a low-pass filter or a filter doing FM preemphasis. For the low-pass filters, you often get a very reasonable result — it’s not identical to the analog, it might roll off more steeply, but it still gets the job done.

The problem was FM preemphasis. None of the common closed-form transforms got us close enough to make it satisfactory.
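The preemphasis mismatch is easy to reproduce. The sketch below applies the standard bilinear (Tustin) transform — one of the common closed-form mappings Orban refers to — to a 75 µs preemphasis curve at the 8200’s 32 kHz rate. The 1 µs pole that makes the analog prototype realizable is an assumption for illustration, not Orban’s design.

```python
import numpy as np
from scipy.signal import bilinear, freqs, freqz

fs = 32_000                      # 8200 base sample rate, Hz
tau = 75e-6                      # 75 µs FM preemphasis time constant
tau_p = 1e-6                     # hypothetical pole to keep the prototype proper

# Analog prototype: H(s) = (1 + s*tau) / (1 + s*tau_p)
b_s, a_s = [tau, 1.0], [tau_p, 1.0]

# Standard closed-form mapping: the bilinear (Tustin) transform.
b_z, a_z = bilinear(b_s, a_s, fs=fs)

f = np.linspace(100, 15_000, 500)                # audio band up to 15 kHz
_, h_analog = freqs(b_s, a_s, worN=2 * np.pi * f)
_, h_digital = freqz(b_z, a_z, worN=f, fs=fs)

# Deviation of the digital response from the analog target, in dB.
err_db = 20 * np.log10(np.abs(h_digital)) - 20 * np.log10(np.abs(h_analog))
print(f"max |error| over 100 Hz - 15 kHz: {np.max(np.abs(err_db)):.1f} dB")
```

The transform is essentially exact at low frequencies but, because of frequency warping, overshoots the preemphasis boost by many dB as 15 kHz approaches the 16 kHz Nyquist limit — close enough for a garden-variety low-pass, nowhere near close enough for FM preemphasis.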

About this time I decided that I was going to teach myself DSP. I bought some textbooks and studied them. The advantage of having solid university engineering training from Princeton and Stanford is that you get a very good mathematical background, making it possible to understand the textbooks.

In school they said, “We’re not going to teach you much specific technology because it will be obsolete in 10 years. But if you know the mathematics and the physics, it will last you throughout your career.” And indeed that proved to be the case.

McLane: So you taught yourself.

Orban: I set an informal thesis for myself. I decided to write a program. Its original intent was to solve the preemphasis problem, but we’ve used it literally hundreds of times on other things.

It does what’s known as a minimax error approximation to analog filter frequency responses. It minimizes the maximum error between the digital frequency response and the frequency response of the original analog filter, and exploits several mathematical principles originally created by two Russian mathematicians, Professors Chebyshev and (later) Remez, as well as more recent developments by the late Prof. Jiri Vlach at the University of Waterloo.
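The same Remez exchange idea survives in scipy as the Parks-McClellan FIR design routine, which likewise minimizes the maximum deviation from a target response. Orban’s own program (which matched arbitrary analog responses) is not public, so the sketch below is only an illustration of the principle: an equiripple 15 kHz low-pass of the sort an FM processor needs, with assumed band edges and tap count.

```python
import numpy as np
from scipy.signal import remez, freqz

fs = 32_000       # 8200 base sample rate, Hz
numtaps = 101     # illustrative filter length

# Equiripple (minimax) FIR approximation of an ideal 15 kHz brickwall
# low-pass: passband up to 14.5 kHz, stopband from 15.5 kHz to Nyquist.
taps = remez(numtaps, [0, 14_500, 15_500, fs / 2], [1, 0], fs=fs)

# Evaluate the achieved response on a dense grid.
w, h = freqz(taps, worN=8192, fs=fs)
passband = np.abs(h[w <= 14_500])
stopband = np.abs(h[w >= 15_500])
print(f"passband ripple:    {np.max(np.abs(passband - 1)):.4f}")
print(f"stopband peak gain: {20 * np.log10(np.max(stopband)):.1f} dB")
```

The hallmark of a minimax design is visible in the output: the error ripples with equal height across each band, because pushing it down anywhere would push it up somewhere else.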

McLane: Was a lot of the work at the time about emulating analog equivalents?

Orban: The 8200 was modeling a sort of amalgam of the 8100, the XT2 and the Gregg Labs processor. We’d hired Greg Ogonowski as a consultant shortly after the AKG acquisition, and he was heavily involved in the development of the 8200. Greg has been one of my best friends for 50 years now.

He’s one reason the 8200 has a five-band and not a six-band compressor like the XT2 — the five bands were based on the Gregg Labs processor.

McLane: So you’ve got the 8200 on the bench. How are you field-testing it? Were you asking customers to run beta versions?

Orban: At this point we felt we knew what we were doing in FM processing. We knew what was successful with the XT2 and the 8100, so most of our testing was internal.

We did not want to announce the product prematurely because we didn’t want to cannibalize our analog sales. John Delantoni picked up that technique from IBM. When IBM announced a new mainframe, they were ready to deliver it.

McLane: As you brought the 8200 to market in 1991, what was the differentiator? I don’t imagine people were going to buy it just because it had DSP in it.

Orban: First of all, it offered presets for the first time. It had a number of factory presets, and you could modify them and save your own. It relieved a lot of the need for stations to have a processing expert in house and automated a lot of that functionality. It also offered the ability to daypart the processing.

Another improvement was the big Less/More knob, which was intended to simplify operation. Instead of having to know exactly what you were doing to make the processor louder or quieter, you’re given a one-knob adjustment for basic control.

In short, DSP was friendlier for users and needed fewer technical resources. Also, being digitally based, it opened up the opportunity to do PC remote control.

McLane: Now we’re getting into interesting new options from the world of PCs and stuff.

Orban: It took us a while to complete the 8200’s PC remote software. We initially hired a consultant who didn’t give us a very good product so we eventually had to do this in-house.

Even before the 8200, we’d done a digitally controlled analog parametric equalizer and the digitally controlled analog mic processor. For control we were using a small microprocessor, the Zilog Z80, which was programmed in assembly language. The 8200 was programmed 100% in assembly language, both the control and the DSP.

McLane: Looking back, it must now seem pretty rudimentary, how you were putting everything together.

Orban: You had to work with the computing power that you had, and make sure the user experience was smooth.

The advantage of assembler is it’s as fast as it gets, at least at that time, so it could make the most of those old, slow processors. These days, with advanced optimizing compilers, you can write in high-level languages and go faster than you ever would if you had written in assembler. Starting with the 8400, our control software was written in a high-level language.

McLane: Was this the first DSP-based audio processor for FM in the market?

Orban: No. There were the Texar Digital Prism and the Audio Animation Paragon. But neither of those got significant market traction.

Bob Orban, Greg Ogonowski and Phil Moore in the Orban booth at the 1993 NAB Show.

McLane: Who were the big players at this point?

Orban: There was Texar and Glen Clark. He had created the Audio Prisms that in many cases were used as pre-processors for the 8100. As I recall he eventually made a complete standalone box.

Then Frank Foti came in, originally as Cutting Edge and later rechristened Omnia. If I recall correctly, he started out with an analog processor based on his work consulting with New York City radio stations. Then they did the original Omnia, which was their DSP processor. Its big claim to fame was anti-aliasing, and Omnia was very good at marketing.

It was fortunate that we had AKG behind us, and later Harman International after it acquired AKG in 1993, so we had the advantage of a large corporate marketing structure and went to ad agencies for the first time.

You know, back in the pre-AKG days I wrote most of the ad copy and came up with a lot of the concepts. In a different life I probably could have been an ad man.

McLane: You and Don Draper! … The 1990s was a very productive stretch for you.

Orban: It was indeed. We called the 8200 the first commercially successful FM processor with DSP, and it had a 10-year run. I think we sold around 5,000 units.

Then we built on it with the Optimod-FM 2200, a lower-priced product with two bands, and the original 9200, using substantially modified 8200 code, tweaked for the needs of the AM market. And the Optimod-TV 8282 was the first all-digital audio processor for television.

Also, in 1994 the DSE 7000 digital audio workstation, made by Barry Blesser’s group in Cambridge, Mass., was rebranded as an Orban product. We did some of the DSP for that.

McLane: Remembering the workstation raises a question for me of what the company name Orban meant to you as a businessperson. AKG and later Harman were taking Orban beyond audio processing.

Orban: It wasn’t up to me, but as long as the brand also stayed on audio processors, I was happy. If AKG and Harman wanted to leverage the brand to go into other broadcast-related fields — and as long as the quality was there, which was always the case — that was fine with me.

McLane: Did the change to Harman International four years after the AKG deal affect you in significant ways?

Orban: Harman was a much bigger corporation than AKG and had a big consumer business too. Eventually they would decide that for a company with sales of hundreds of millions of dollars, Orban was really too small to spend a lot of their corporate energy on. It would lead to the eventual spinoff to CRL in 2000.

McLane: In the 1990s you not only received the NAB Radio Engineering Achievement Award but you also shared the Scientific & Engineering Award from the Academy of Motion Picture Arts and Sciences, receiving it with Claus Wiedemann and Dolby Labs.

Orban: There’s a funny story. I’d known the folks at Dolby for a long time, I knew Ray and Ioan Allen and a few others. They were up for an Academy Award for a product called the Container, a multi-band processor used to protect the light valve in the optical sound recorders used for Dolby Stereo when the soundtrack was put on film. I think it was Ioan who called me and said, “Well, we were looking through the patents, and lo and behold, it looks like we’re infringing one of yours. So instead of licensing, how would you like to share the Academy Award with us?”

Licensing wouldn’t have made us very much money because there aren’t that many optical recorders out there, so the Dolby unit was a very specialized, low-volume product. So I said, “Sure … sounds like fun!”

Accordingly, I got to attend the award ceremony, and I got my award from Sharon Stone, the closest I’ve ever been to a big Hollywood celebrity.

McLane: That’s pretty close.

Orban: A good time was had by all.

McLane: I don’t want to hear about any party nastiness at the after-party! Seriously you must have felt a bit like you were atop the engineering world in broadcasting. And then comes the Telecommunications Act, a watershed in our business. Did consolidation bring important changes in what you were doing?

Orban: Not to a great extent, though eventually the loudness wars were deemphasized.

I remember visiting a friend in New York in the 1980s. I punched up the New York FMs on the dial, and I couldn’t find one that I could stand to listen to. They were all so distorted. I really did not understand what they were thinking.

One of the good things about consolidation is that processing could be turned down and do what it was intended to do, which is to provide consistent loudness and spectral balance. You could use the entire potential reach of the transmitter without getting so obnoxious in terms of processing artifacts that you would drive away more people than you would gain by taking advantage of the coverage.

McLane: And in the 1990s we’re starting to think about Digital Audio Broadcasting. How did that start to play out at Orban?

Orban: Greg Ogonowski kept his association with Orban and eventually joined us as VP for new product development. Our first DAB processor was the Optimod-DAB 6200 for DAB, DTV and netcasting. It was based partly on the 8200’s code, but we had to go to a 48 kHz sample rate because it was a 20 kilohertz audio bandwidth system.

A lot had to be rewritten. We had to develop new peak limiters, because FM peak limiting is inappropriate for DAB.

At Greg’s urging, a few years later we would also develop the Optimod PC 1100, a DAB processor on a PCI sound card.

Next time: CRL Systems, HD Radio and the 8400. 

The post Our Conversation With Bob Orban, Part 4 appeared first on Radio World.


Video Radio Gets More Sophisticated

April 13, 2026, at 4:41 p.m.
Studio at Houston Public Media’s KUHF.

A Sunday morning session in the NAB Show’s Broadcast Engineering & IT Conference will discuss “Successfully Launching Compelling Visual Radio Automation.” It will be presented by Fritz Golman, director of video systems and automation for RadioDNA.

Radio World: It feels like video has been part of radio operations for a while now. What will you talk about?

Fritz Golman: I’ll be presenting how we’ve successfully implemented a number of visual radio automation platforms. These case studies will highlight two projects of note: Houston Public Media’s KUHF, with two of their live flagship shows, “Hello Houston” and “Houston Matters,” as well as Good Karma Brands’ WMVP/ESPN Radio Chicago and their 12-hour live broadcast day programming.

Fritz Golman

In this day of declining traditional listenership, not only alternative channels (e.g., streaming) but also enhanced presentation methods are needed to maintain and grow audience numbers — and, of even more critical importance, especially for public radio, to create new opportunities for generating revenue.

RW: What would a typical station’s visual radio system consist of in 2026? 

Golman: Although there is a temptation to utilize the least expensive components, like low-cost webcams, the limitations they impose will be realized as soon as the operators of such systems want to “take it up a notch” with more sophisticated presentations.

Thus we only specify IP video systems using NDI and audio over IP, such as WheatNet, LiveWire or Dante. As these run on conventional network wiring, we eliminate the additional complexity of coax-based SDI cameras and dedicated digital video connections like HDMI.

The small additional upfront costs of these platforms yield long-term benefits of flexibility and interoperability. Then we integrate with a typical playout system like WideOrbit, RCS NexGen or RCS Zetta.

RW: Can you give examples of best practices?

Golman: Don’t scrimp on network wiring or backbone. The tiny additional cost of Cat-6 versus Cat-5 wiring can make a huge difference in the near future. 

Pull another wire or two more than is needed at the time; you’ll find that extra capacity will come into play sooner or later.

Don’t be tempted to use hardware video switchers. They are limited in capability and lock you into that configuration permanently.

Be willing to learn about the visual medium and collaborate with others, potentially from disciplines outside of one’s facility, to get the “look” that will attract and keep viewers.

The video network configuration at KUHF. RadioDNA was involved in all aspects of the build-out, not just the video side. The audio/Wheatstone environment is not detailed here.

RW: What else should we know?

Golman: We like to configure our solutions around proven, widely used products. Although there are a number of software packages that can do at least some of what we’re fielding, the operator should consider how many other users there are of that piece of kit. 

I like to say that if you can only find one or two video clips showing the system in use or demonstrating features, it is probably not a package with a lot of depth of support.

On the other hand, stay clear of open-source offerings. Even with the very tempting price (free), you will probably get what you paid for.

[For more coverage of the convention see our NAB Show page.]



Exhibitor Viewpoint: OBSBOT at NAB Show 2026

April 9, 2026, at 4:53 p.m.
Liu Bo


With the 2026 NAB Show approaching, we’re providing a series of previews with exhibitors about their plans and expectations.

Liu Bo is CEO of AI camera company OBSBOT, featured here because of the growing role of video in the radio media ecosystem.

Radio World: What is OBSBOT and what types of products does it offer? 

Liu: OBSBOT is focused on AI-powered imaging solutions for live production, video creation and increasingly integrated AV workflows. Our mission is to make professional-grade video production more efficient, accessible and creatively powerful.

We offer a broad portfolio that includes flagship PTZR live production cameras such as Tail 2, compact live streaming cameras like Tail Air, AI-powered webcams across the Tiny and Meet series, as well as control hubs such as Talent.

We also provide a growing ecosystem of accessories and software, including microphones, remotes, filters, tripods and mounting solutions, to support a more complete production workflow.

The reason we exist is simple. Broadcasters, podcasters, streamers, educators, worship teams, enterprises and independent creators are all being asked to produce more high-quality video with smaller teams and tighter deadlines. Traditional setups are often too complicated and labor-intensive.

By combining smart AI automation with true production-grade imaging tools, we help teams dramatically reduce repetitive work, simplify operations and gain the freedom to create more dynamic and engaging content across many different formats and industries.

RW: What will you highlight for NAB Show attendees?

Liu: We are centering our presence on the OBSBOT Tail 2 and the core theme of AI-powered live production that is both professionally robust and highly accessible.

Tail 2 is the product that best represents where OBSBOT is headed today: bringing together production-ready image quality, intelligent automation and workflow flexibility in a single camera platform. It supports up to 4K@60fps, features AI Tracking 2.0 and native vertical 4K rotation and offers broad protocols and connectivity options, including NDI, FreeD, SDI, HDMI and Ethernet.

This makes it a camera that works equally well in traditional broadcast environments and in fast-moving creator workflows.

We are also excited to give attendees the first public look at our upcoming launch: the OBSBOT Talent 2, our next-generation all-in-one portable live production system, built around the philosophy “Aggregate. Automate. Amplify.”

Talent 2 integrates video switching, 4K encoding, recording, monitoring and multicamera control into a single compact device. It significantly streamlines professional multi-camera workflows by removing the need for laptops and complex setups, while serving as a powerful control hub to all OBSBOT cameras and further strengthening our end-to-end AI-powered ecosystem.

The core message is that advanced live production no longer has to be complicated or resource-heavy.

That’s why we’ve built a complete, hands-on experience at our booth. We’ve created a fully functional Podcast Studio where visitors can see the Tail 2 working together with the Tiny 3, Talent, and our full ecosystem in real-world situations. We’ll run daily themed sessions showing how to easily build multi-cam NDI setups, run smooth one-person productions, create visually compelling podcasts and put together perfect end-to-end AV solutions.

We’ve also set up a dedicated green-screen zone to demonstrate the Tiny 3’s virtual avatar capabilities with virtual voice, and to showcase Tail 2’s compatibility with FreeD technology for real-time virtual production.

Next door, in collaboration with 4DV.ai, we’re showing 60 Tail 2 cameras integrated with cutting-edge 4D Gaussian Splatting technology. Attendees will even be able to experience the immersive, volumetric video through VR headsets, which should be quite impressive.

RW: What is the most notable technology trend or recent change in streaming video?

Liu: The most significant trend is AI evolving from simple tracking and framing tools into a true, context-aware copilot for live production and streaming. This new generation of AI actively reduces crew requirements, enhances reliability and enables far more dynamic content with much lower operational overhead.

At the same time, we’re witnessing explosive growth in video-first podcasting and vodcasting. Industry reports project that global podcast and vodcast advertising revenue will approach $5 billion in 2026, with nearly 20% growth. On the viewer side, YouTube data shows that time spent watching video podcasts on connected TVs has nearly doubled year-over-year.

These shifts make one thing clear: Audiences now expect high-quality, polished video to accompany excellent audio. At OBSBOT, we design our products to meet this exact need, delivering professional AV experiences while dramatically simplifying the production process.

RW: What other business or technology trends will you be watching for?

Liu: First, the continued convergence of creator workflows and traditional broadcast environments. More and more, small teams are achieving the same multi-camera, low-latency quality that used to require big production crews.

We’re also paying close attention to advances in hybrid and remote IP production, especially tighter integration between AI and cloud technologies that bring greater scalability and efficiency.

Another exciting area is the rise of volumetric and 4D capture technologies, which are opening up truly immersive and interactive content experiences. We’ll also be looking at how spatial and immersive audio can be perfectly synchronized with AI-driven video tracking and virtual production.

Finally, there’s a strong and growing emphasis on sustainability and operational simplicity, with solutions that help reduce crew size, physical infrastructure and overall complexity while still delivering reliable, broadcast-grade performance for live events, sports, worship and podcasting.

RW: What else should we know?

Liu: What we would most like people to know is that OBSBOT is not just showing individual products at NAB Show 2026. Instead, we are showing how an AI-powered production ecosystem can work in real-world workflows. … At the end of the day, we don’t just build cameras. We create intelligent, creative partners that help you work smarter, faster and with much more creative freedom.

OBSBOT will be in booth C5144 and also in a joint booth with 4DV.ai in C5249.

[For more coverage of the convention see our NAB Show page.]


Connectivity in the SBE Ennes Spotlight

April 9, 2026, at 2:48 p.m.
Dan Merwin

Telecom circuits and links are critical for today’s broadcast and media facilities. They’ll be the focus of a talk during the “Emerging Technology” track of the SBE Ennes Workshop at the NAB Show.

Dan Merwin is founder of Broadcast Telecom and a longtime telecom veteran; he also works part-time as a contract broadcast field engineer. 

Radio World: What is the most important trend in telecom links that we should know about?

Dan Merwin: Starlink will continue to evolve in all aspects, including probably eliminating the need for Carrier-Grade NAT (CGNAT). Space-based 5G and Amazon’s Project Kuiper low-earth-orbit constellation will also change the game in terms of ubiquity and performance. 

[Related: “Inside the SBE Ennes at NAB Show Emerging Technology Track”]

Also, with the use of SD-WAN technology, which by now is quite mature, the need for expensive private links such as MPLS, Metro Ethernet, satellite delivery and even 950 MHz STLs has been greatly reduced for broadcast and media facilities. And just the fact of usually no longer needing to have multi-year contractual obligations to ISPs with huge termination fees is key.

RW: How have platforms like Starlink, 5G and 5.8 GHz links changed the game?

Merwin: Starlink and 5G are at the forefront of the evolution of internet access in general, in that internet access is vastly more useful to enterprises than in the past for a variety of reasons. It is more ubiquitous, higher-performing, more diverse and far less expensive than 10 or 20 years ago.

There are many factors to take into account, though, when making network changes. With Starlink, there is apparently an outage of 3:19 every night when the satellites and/or terminals are reset. 

Regarding 5.8 GHz PtPs, yes, there are more options for wirelessly connecting sites and for extending the last mile, but the 5.8 GHz space has become crowded so people are often looking at instead utilizing licensed spectrum such as 6 GHz and 11 GHz. 

RW: How do SD-WAN technologies play into this discussion?

Merwin: It was inevitable that we’d see an explosion in cutting-edge technology that takes advantage of the changes in internet access. SD-WAN also addresses the fact that WANs have evolved from a datacenter-based topology to one that is based on the realities and necessities of distributed security, as well as applications that reside in the cloud and/or at any remote location.  

Quality of Service has been supplanted by Quality of Experience, which is AI-driven. For broadcasters, SD-WAN overlayed on top of two or three connections of various types provides a more cost-effective, reliable and manageable way to handle all of their applications, including audio, video, metadata, telemetry, etc. 

Given the evolving options, it’s a challenge to make decisions about which SD-WAN platform to purchase, not to mention which MSP/ISP to engage.

RW: Can you offer a few best practice tips?

Merwin: Engage a trusted advisor. With an SD-WAN deployment, what generally takes up the most time of the customer and the vendor(s) are the design, planning and configuration stages. That, and the fact that it often involves moving from a WAN made up of MPLS and/or Ethernet Private Lines, mean that salespeople and overlays (e.g. sales engineers) have to invest a significant amount of time, usually far more than in the past, in the pre-sales process, and are compensated relatively little. 

Not only that, but in the case where they are paid based on a customer’s spend, they are cannibalizing their revenue and thus their commissions. All this to say that they are mostly not motivated to try to move customers in that direction, and thus it is usually advisable to work with a consultant/agent who probably has more to gain and less to lose in order to help find the best fit. 

Just a few examples of factors to consider:

  • Some of the platforms redirect traffic on a per-packet basis, which facilitates seamless failover, whereas some just do it on a per-session basis, which will cause interruptions in case the primary link drops or bounces.
  • Fortinet, for example, can go deeper into the LAN because they make switches and access points, but there are capabilities that some of the vendors have that they don’t.
  • Is there a justifiable need to use a vendor that provides a “Middle Mile,” which enhances performance and security, such as Cato Networks?
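The per-packet versus per-session distinction above can be illustrated with a toy simulation. This is not any vendor’s actual algorithm, and the packet counts and the three-packet detection delay are arbitrary assumptions; it only shows why session-pinned steering drops traffic during a link failure while per-packet steering does not.

```python
# Toy model: 20 packets of one session over links A and B.
# Link A fails at packet 10. Per-packet steering re-routes immediately;
# per-session steering keeps the session pinned to the dead link until
# the failure is detected (assumed: 3 packets later).

LINKS_UP = {"A": True, "B": True}
FAIL_AT, DETECT_DELAY, N = 10, 3, 20

def run(policy):
    LINKS_UP.update(A=True, B=True)      # reset link state per run
    session_link, lost = "A", 0
    for i in range(N):
        if i == FAIL_AT:
            LINKS_UP["A"] = False        # primary link drops mid-stream
        if policy == "per_packet":
            link = "A" if LINKS_UP["A"] else "B"   # steer every packet
        else:                            # per_session
            if not LINKS_UP[session_link] and i >= FAIL_AT + DETECT_DELAY:
                session_link = "B"       # re-pin only after detection
            link = session_link
        lost += not LINKS_UP[link]       # packet lost if its link is down
    return lost

print("lost, per-packet steering: ", run("per_packet"))   # 0
print("lost, per-session steering:", run("per_session"))  # 3
```

Every packet lost in the per-session run corresponds to the interruption Merwin describes: the session stays on the failed link until the platform notices and re-establishes it.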

Finally, when looking at moving to SD-WAN, it makes sense to do at least a cursory comparison of the benefits and potential drawbacks of self-deployed/managed, co-managed and fully managed SD-WAN, based on the IT resources available to the customer as well as the projected total cost of ownership, among other factors. 

Some providers will open tickets for you on any internet circuits (with a letter of authorization in place) as one example of the benefits of a managed service.

RW: What kinds of questions do you get from radio engineers?

Merwin: Naturally, they often want to know how well Starlink will work for their air chains and how reliable it is, although more and more engineers are becoming aware of its use in the industry. 

Cost is of course a common concern, so it’s nice to be able to talk to people about a technology change that will save them money in most cases. 

How troubleshooting is done with SD-WAN is another common concern. But most SD-WAN platforms make that a snap, with visibility and analytics even up to the application level.

RW: Are there misconceptions you would like to dispel? And what else should we know?

Merwin: It’s a little difficult for some people who have been involved with WANs for a long time to wrap their head around the idea that they might not need to have an expensive legacy private WAN anymore, and that low-cost, best-effort services such as cable internet can do the job perhaps even better when part of a redundant setup.

Meanwhile it’s estimated that there are still 7 to 8 million business POTS lines in the U.S. We have seen POTS prices climb as high as $1,500 per line! In addition, the copper infrastructure is no longer being maintained as it was in the past.

POTS replacement has also become a mature and diverse offering, and it’s a managed service, so the days of a telco tech going out to install lines that are not installed where or how they need to be are pretty much gone.

The presentation “Telecom Circuits and Links for Broadcast and Media Facilities” is scheduled for 1:15 p.m. on Tuesday April 21 during the SBE Ennes Workshop.

[For more coverage of the convention see our NAB Show page.]

The post Connectivity in the SBE Ennes Spotlight appeared first on Radio World.

Exhibitor Viewpoint: Shure at the NAB Show

April 8, 2026 at 18:11
Sean Bowman of Shure
Sean Bowman

One in a series of previews asking exhibitors about their NAB Show plans and expectations. 

Sean Bowman is associate VP, sales North America at Shure Inc.

Radio World: The growth in the “creator economy” is an important theme throughout this year’s NAB Show. How are Shure’s products used in that part of the media ecosystem?

Sean Bowman: When we look at the creator economy, we see it as a broad, diverse spectrum rather than a single category. It includes everyone from individual podcasters and YouTubers to professional broadcasters, sports producers and live event teams.

Our focus is on supporting creators regardless of scale across that entire journey, starting with tools that deliver great results easily and growing with creators as their workflows become more advanced and specialized.

At the entry point, many creators are working alone or in small teams and need professional audio without a lot of technical setup. Our USB microphones and compact digital interfaces are designed to remove barriers by handling gain, processing and reliability behind the scenes, automatically. That allows creators to focus on storytelling and content while still producing audio that meets professional expectations.

As creators scale up, their needs change. They might move into live production, mobile broadcast and more complex environments where reliability, flexibility and speed matter. 

That is where our digital wireless systems, software tools and other innovative technologies come into play, helping creators manage more demanding workflows without starting over. The goal is continuity. We want creators to be able to start with Shure, stay with Shure and rely on familiar tools as their ambitions and audiences grow.

RW: Have Shure products found notable application in the exploding field of sports media? 

Bowman: Sports media has been a major area of focus for Shure and it is one where our products have been used for decades across sideline reporting, field of play capture and broadcast production. 

What has changed recently is the pace and complexity of those workflows. Sports broadcasts are faster, more immersive and more demanding than ever, which has pushed us to rethink how audio is captured, managed and delivered in those environments.

A good example of that evolution is the DCA901 Digital Array Microphone. Traditional approaches to sports audio have often required extensive manual setup and constant adjustment, especially when trying to follow unpredictable action on the field. With DCA901, engineers can digitally steer and calibrate the array in real time, saving significant setup time while gaining much more flexibility. 

It allows producers and engineers to capture exactly the sounds they want, follow the action as it moves and respond instantly as the production changes, which is critical in live sports environments.
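Digitally steerable arrays of this kind generally build on delay-and-sum beamforming: each microphone's signal is time-shifted so that sound arriving from the chosen direction adds coherently while off-axis sound partially cancels. A minimal NumPy sketch of that textbook principle (not Shure's actual implementation; the array geometry and test signal are invented for illustration):

```python
import numpy as np

def delay_and_sum(signals, mic_x, angle_deg, fs, c=343.0):
    """Steer a linear array toward angle_deg (0 = broadside).

    signals: (n_mics, n_samples) array; mic_x: mic positions in meters.
    Delays are applied as integer sample shifts for simplicity.
    """
    delays = mic_x * np.sin(np.radians(angle_deg)) / c      # seconds per mic
    shifts = np.round(delays * fs).astype(int)
    out = np.zeros(signals.shape[1])
    for sig, s in zip(signals, shifts):
        out += np.roll(sig, -s)                             # align wavefronts
    return out / len(signals)

# A 1 kHz plane wave from 30 degrees hits 8 mics spaced 5 cm apart.
fs, n = 48000, 2048
mic_x = np.arange(8) * 0.05
t = np.arange(n) / fs
true_delays = mic_x * np.sin(np.radians(30)) / 343.0
sigs = np.array([np.sin(2 * np.pi * 1000 * (t - d)) for d in true_delays])

on_axis = delay_and_sum(sigs, mic_x, 30, fs)    # steered at the source
off_axis = delay_and_sum(sigs, mic_x, -60, fs)  # steered away from it
print(np.std(on_axis) > np.std(off_axis))       # on-axis output is much stronger
```

Steering "in real time" then amounts to recomputing the per-mic delays as the target angle changes, rather than physically moving microphones.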

We are also seeing that once engineers adopt these workflows in sports, they begin to recognize their value beyond a single use case. The ability to react quickly, manage uncertainty and maintain creative control has applications across other forms of broadcast and live production. 

Sports media has become a proving ground for these technologies and it continues to influence how we think about audio capture across the broader media ecosystem.

RW: What new products will you highlight?

Bowman: At NAB, we will be highlighting a mix of new hardware and software that reflects how broadcast workflows are evolving. One major focus will be SLXD+ digital wireless, which is designed for broadcast professionals who need reliable performance in crowded RF environments, whether they are working in studios, mobile broadcast setups or out in the field. We will also be showcasing our broader Axient Digital portfolio, including our latest updates and expanded features, to demonstrate how engineers can manage increasingly complex productions with greater confidence and efficiency.

Another key area of focus will be the DCA901 Digital Capture Array, which we will show in a simulated sports environment for the first time. DCA901 represents a shift in how audio can be captured in fast-paced productions by replacing traditional manual setup with digitally steerable, software-controlled workflows. 

Alongside that, we will be highlighting Action Isolator, a free software tool that helps broadcasters focus on the sounds they want to capture while reducing unwanted audio, reinforcing our broader strategy around software-driven audio solutions.

We will also highlight products that reflect the growing overlap between broadcast and creator workflows. That includes portable and compact solutions like MV88 USBC and MVX2U Gen 2, as well as collaboration-focused technologies such as IntelliMix Bar Pro, built on Microsoft’s Device Ecosystem Platform (MDEP) for modern AI-powered workplaces.

Taken together, what we are showing at NAB is a portfolio designed to give customers flexibility, speed and consistency, regardless of where or how they are creating content.

RW: Has AI technology changed Shure’s products, behind the scenes or in how the products are deployed?

Bowman: Yes, AI has absolutely influenced how we think about our products, both behind the scenes in how we develop them and in how customers use and deploy them. 

In many cases, that shows up as intelligent processing that helps improve audio quality automatically, especially in less-than-ideal environments. Whether it is noise reduction, de-reverberation or sound isolation, the goal is to make it easier for users to get clean, usable audio without having to manually correct issues after the fact.

We are also seeing AI play a growing role in customer workflows that sit on top of the audio we capture. As more tools rely on speech recognition, transcription and content analysis, the quality of the audio input becomes critical. If those systems cannot clearly distinguish voices or understand what is being said, the productivity gains fall apart. That reinforces our focus on delivering consistent, high-quality audio capture, so customers can take full advantage of AI-driven tools with confidence and spend less time fixing mistakes and more time creating.

RW: What other business trends will you be watching for?

Bowman: One major trend we are watching closely is how venues and stadiums are evolving. There is a growing focus on how the in-person experience that fans hear and feel in the venue compares with what an audience experiences on the broadcast. 

That is driving new approaches to audio capture and distribution on both sides, especially in large, complex spaces where traditional methods no longer deliver the same impact.

We are also seeing increased interest in how technology can help create more inclusive and flexible experiences within those environments. That includes ideas like delivering consistent audio to premium seating areas, suites and hospitality spaces, as well as emerging applications such as translation and enhanced accessibility. 

As these experiences become more immersive and personalized, audio plays a critical role and we are excited to work with partners who are pushing the boundaries of what is possible in modern venues.

We’ll be discussing these topics in more detail at NAB’s Sports Summit series throughout the show.

NAB Show Booth: C4916

[For more coverage of the convention see our NAB Show page.]

The post Exhibitor Viewpoint: Shure at the NAB Show appeared first on Radio World.

Exhibitor Viewpoint: Fraunhofer IIS at the 2026 NAB Show

April 7, 2026 at 18:02
Marc Gayer of Fraunhofer IIS
Marc Gayer

With the 2026 NAB Show approaching, we’re providing a series of previews asking exhibitors about their plans and expectations.

Marc Gayer is head of the Audio and Media Technologies Business Department at Fraunhofer IIS.

Radio World: Most attendees will have heard of Fraunhofer but may not realize its scale. Briefly, what is it and what is its core business?

Marc Gayer: Fraunhofer is Europe’s largest applied research organization, with 32,000 employees across 76 institutes, covering everything from communication systems and AI to health, mobility and media technologies. 

Fraunhofer IIS is one of the biggest institutes and home to the Audio and Media Technologies division — the people behind mp3, AAC, xHE‑AAC and today’s MPEG‑H and JPEG XS standards.

Our core mission is to develop technologies that turn scientific excellence into real‑world solutions such as efficient audio and video codecs and personalized, immersive sound. Our technologies are used for broadcast and streaming infrastructure as well as in advanced tools for content production and distribution. Continuous collaboration with global broadcasters, device manufacturers and standardization bodies ensures that our innovations reach audiences worldwide.

RW: What products or themes will you highlight at the NAB Show?

Gayer: Our focus will be on next-generation audio and professional media workflows, with major highlights from our audio and video technologies.

  • MPEG‑H Audio for broadcast and streaming — We’ll showcase new integrations of the MPEG‑H Renderer into Avid Pro Tools and Marquise Technologies’ MIST, enabling more creators to produce, QC and master immersive and personalized audio within existing workflows. We also present cloud‑based MPEG‑H Audio production and transmission workflows developed together with AWS and technology partners.
  • JPEG XS for ultra‑low‑latency, visually lossless video transport — Fraunhofer IIS will present the Emmy Award‑winning JPEG XS codec and its SDK, supporting ST 2110‑22, RTP, MXF and integration into CPU/GPU/FPGA/ASIC workflows — essential for IP‑based studio, cloud and live production environments.

RW: Fraunhofer codecs have played important roles in audio and radio broadcasting. What recent developments should we know about?

Gayer: Fraunhofer continues to advance the codecs that power today’s broadcast and streaming ecosystems. 

Recent developments include broader integration of MPEG-H Audio into cloud workflows as well as into even more professional production tools and consumer devices. 

In streaming, xHE-AAC remains a key technology for consistent, high-quality audio under variable network conditions, and adoption continues to grow across devices and platforms.

Beyond broadcast and streaming, our codec portfolio also advances next‑generation communication and immersive media, with IVAS and MPEG-I as emerging codecs enabling spatial experiences for phone calls and VR/XR applications. 

These efforts are complemented by innovations such as the integration of xHE‑AAC into modern messaging through its adoption in RCS ecosystems and LC3plus for low‑latency audio, which also has a lossless operation mode.

A particularly dynamic development is happening in Brazil, where the new DTV+ (TV 3.0) system uses MPEG‑H Audio as the mandatory audio codec. 

Consumer devices are now arriving on the market, including TVs with full MPEG‑H feature support, enabling Brazilian viewers to enjoy immersive and personalized audio at home. While Brazil leads the way, we also see growing interest from other markets in Latin America that watch this global media powerhouse closely. We expect many broadcasters to monitor the rollout of DTV+ and the viewer response during major events such as the upcoming World Cup — and some may explore similar audio innovations as Brazil’s ecosystem evolves.

RW: What have been the most important recent developments in AI for these areas?

Gayer: Within audio, the biggest shift has been the rapid move from classical signal processing to AI‑enhanced, context‑aware processing. 

For Fraunhofer IIS, this includes advances in AI‑based noise reduction, echo control, beamforming and dialogue enhancement — technologies that now adapt automatically to complex acoustic scenes in real time. 

At the same time, AI‑assisted tools are enabling more efficient broadcast and production workflows, from cloud‑based rendering to automated quality control, helping broadcasters scale modern, flexible production environments with less manual effort.

RW: What other trends will you be watching for at the convention?

Gayer: We expect strong momentum in cloud‑native and hybrid production workflows, particularly as more broadcasters adopt IP‑based infrastructures and seek interoperable, scalable audio tools. 

Personalized and accessible audio continues to gain importance, with viewers expecting adjustable dialogue levels, multiple commentary options, and mixes tailored to their listening environments. 

Additionally, the growing use of ultra‑low‑latency IP transport — supported by technologies like JPEG XS — is reshaping live production and enabling more distributed, collaborative workflows across studios and cloud platforms at the scale seen across NAB this year.

RW: What else should we know?

Gayer: Fraunhofer IIS develops technologies across the entire media chain and works with broadcasters, manufacturers and standards bodies to ensure real‑world deployment. Our goal is to make next-generation audio practical, accessible and ready for today’s workflows, a commitment reflected in the MPEG-H Audio and JPEG XS solutions we are showcasing at NAB, whether in immersive broadcasting, efficient streaming or the emerging world of personalized and interactive media experiences.

NAB Show Booth: W2343

[Read more interviews in this series.]

The post Exhibitor Viewpoint: Fraunhofer IIS at the 2026 NAB Show appeared first on Radio World.

Suess on the Myriad Uses of AI in Media

April 6, 2026 at 19:02
Kyle Suess stands outside in front of shrubbery, wearing a sports jacket and open-collared shirt
Kyle Suess

Kyle Suess is co-founder of Amira Labs. During the NAB Show he will give a talk as part of the SBE Ennes Workshop on April 21 about “Myriad Uses of AI in Media,” including for radio.

Radio World: What does your company do?

Kyle Suess: Amira Labs builds AI software for broadcast and media teams to detect, diagnose and help resolve content issues in real time before viewers notice. We automate audio/video QC, compliance and language/caption checks across live and VOD workflows. Our solutions are deployed on-prem or in the cloud, including fully air-gapped installations where models run locally with no third-party APIs required.

RW: What’s your background?

Suess: It is in building software products. I became drawn to a blend of tech and media starting in college in 2013 while working at a startup that was commercializing natural language processing research for multi-language translation and metadata tagging of videos from YouTube, news publishers and other online platforms.

That was the spark that led me to working at another startup, Grafiti, where my Amira Labs co-founder Stefan and I leveraged machine learning to catalog thousands of graphics and charts to make it easy for journalists and news media to weave them into stories.

These experiences brought out a motivation to get more involved in SMPTE, to learn from those who know more than me, and ramp up building useful tools for broadcasters. Our first Amira Labs product, designed for scalable, low-latency captioning, translation and language identification, won NAB’s PILOT Innovation Challenge award.

RW: Broadly speaking, what are examples of how AI is being used in media now?

Suess: Captioning is the big one that many people have seen by now. There are a lot of captioning choices in the market, though be mindful of aspects like language support, latency and usage costs for captioning for long periods of time across many feeds.

Clipping highlights, content tagging and dubbing/AI voiceovers are other top examples. These applications of AI help with quickly generating highlights to post across social media, analyzing saved files to generate metadata for easier searching in media asset management (MAM) systems, and generating synthetic voices to narrate a script or speak in another language.

From a pragmatic standpoint, AI is being widely used in media as a service delivered through one of the “Big 3” providers: Google (Gemini), OpenAI (ChatGPT) and Anthropic (Claude), for typical everyday tasks like debugging networking issues, generating show rundowns, analyzing advertising data, etc.

This works at an individual level, but can be very expensive and limiting at scale, especially when actually involving content — audio streams, media streams, codecs, containers. For a lot of media companies, the last few years have involved “R&D science projects” relating to incorporating AI.

An infographic headlined "Building Connected Agent Ecosystems"
AI protocols that Suess will discuss for media use cases.

What I will highlight is bringing an engineer’s mindset to strategically approaching AI and navigating how to build with it, beyond R&D. It’s important to be cognizant of the bigger picture and be calculated with assessing options when making AI decisions. There’s so much innovation happening nearly every week in the open-source world. I’ll highlight some of the most impactful and useful projects for media organizations.

RW: Specific to Radio World readers, what instances can you describe?

Suess: Translation of radio programs from English to other languages, done locally by uploading a script. The motivation for this use case is from working with a radio station in Kansas that wanted to reach more Spanish speakers and automatically translate their English programs, while still making it sound natural and not as robotic. This can go beyond Spanish to other languages catering to the community demographics of different radio markets, like Chinese in the Bay Area, Vietnamese in Orange County, Arabic in Detroit, etc.

Another use case is real-time content classification and segmentation of radio broadcasts.

Consider that a major U.S. radio broadcaster has multiple programs running simultaneously and they want to listen and classify different segments of the programs, or conversations if it’s like a podcast. This is where AI can be useful to easily save snippets of content that could be repurposed for a multitude of uses, without having to put in hours and hours of manual effort.
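Under the hood, segmentation of this kind amounts to classifying short windows of audio and merging runs of identical labels into timestamped segments that can later be clipped out. A minimal Python sketch of the merging step, with the per-window labels standing in for the output of a hypothetical speech/music/ad classifier:

```python
def merge_segments(window_labels, window_sec=5.0):
    """Collapse per-window classifier labels into (start, end, label) segments.

    window_labels: list of labels, one per fixed-length analysis window.
    Returns contiguous segments with start/end times in seconds.
    """
    segments = []
    for i, label in enumerate(window_labels):
        start = i * window_sec
        if segments and segments[-1][2] == label:
            # Same label as the previous window: extend the open segment.
            segments[-1] = (segments[-1][0], start + window_sec, label)
        else:
            segments.append((start, start + window_sec, label))
    return segments

# Hypothetical classifier output for a 40-second stretch of a broadcast.
labels = ["talk", "talk", "music", "music", "music", "ad", "ad", "talk"]
print(merge_segments(labels))
# [(0.0, 10.0, 'talk'), (10.0, 25.0, 'music'), (25.0, 35.0, 'ad'), (35.0, 40.0, 'talk')]
```

Each resulting segment gives the cue points needed to save a snippet for repurposing, without anyone listening through the full program.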

RW: Much of the attention around uses of AI focuses on negative impacts on human-based workflows. What’s your view?

Suess: First, I think it’s a valid concern and I wouldn’t dismiss it. We hear about the hype around the gold rush and efficiency multiplier aspects of AI in the news. Sometimes, it sounds like CEOs reimagining the 1960s “Twilight Zone” episode, where an enterprise can turn itself into a workforce of machines overnight.

I think AI is a great enabler and augmenter, saving time on the work we struggle with and don’t look forward to doing. However, I’m not sold that positioning AI as a full replacement for humans is the next best move.

Screenshot of a multiviewer mosaic with an AI agent interface to obtain real-time understanding for user-requested live video feeds.

There’s a divergence between business aspirations and reality, and the reality of the situation is there’s still so much nuance, tribal knowledge and (let’s face it) chaos involved in making media happen that it seems short-sighted to sacrifice those built up in-house advantages.

I think the biggest gains will come from equipping employees with AI tools that bring people and technology together to achieve the more productive outcomes they desire.

In thinking about the impact of AI on jobs, there’s a great video from 1979 on YouTube of an interview about the impact of computers. If you replace “computer” with “AI,” the thoughts we are grappling with today seem not so different from those of 47 years ago.

Perhaps looking back at the past will help offer informative moments of clarity for what we should really be doing with AI when going forward.

Amira Labs will be in booth W2217, sharing space with Open Broadcast Systems in W2219.

The post Suess on the Myriad Uses of AI in Media appeared first on Radio World.

Catch Up With Me at NAB

April 6, 2026 at 14:22
The Nautel Radio Technology Forum explores trends and best practices as well as new products.
Credit: Photo by Jim Peck

In the latest issue of Radio World and on our website, you’ll read about quite a few interesting sessions and presentations in and around this year’s NAB Show.

We’ve provided samplings from the Broadcast Engineering & IT Conference, the Society of Broadcast Engineers Ennes Workshop and the Public Radio Engineering Conference. In our stories we’ve tried not just to preview the talks but asked our sources to share some of their insights with us. I hope you’ll find these articles useful.

I’d also like to invite you to two events at which I’ll be speaking.

First, Nautel will reprise its popular Radio Technology Forum — still often referred to as “the NUG” but by no means limited to Nautel users — on Sunday morning April 19. 

Conveniently, the forum this year will be held in the main ballroom at the Westgate, which is right next door to the Las Vegas Convention Center.

Nautel does a super job with the event — and I say this not only because they have the good taste to invite me to speak! The forum consistently pulls in 300 or more engineers and other broadcasters who gather to learn about new technologies from around the industry, from Nautel as well as other technology sources. 

The doors open at 8 a.m. Here’s an insider tip: The first half-hour is a great time to grab some of Nautel’s hot coffee and mingle with an engineering “Who’s Who” of radio until presentations start at 8:30, so arrive early.

I then help kick things off with a short discussion about “What I’m Watching for at NAB.” Other speakers this year include Joe D’Angelo of Xperi; Steve Newberry of Quu; Deborah Parenti, president of Radio Ink; Keith Barton, VP/GM of Max Media; Dr. Andy Gladding of Hofstra University; Geary Morrill of Connoisseur Media, who chairs the SBE Education Committee; and Kory Hartman, COO of Civic Media. And Jeff Welton with his famous “tips and tricks.”

A complimentary hot lunch is available for attendees. In fact the whole thing is free, which is a nice kind of price. Your attendance even qualifies for a half-credit towards SBE recertification in Category H. But advance signup is required.

And later on Sunday, please join me on the stage of the TV and Radio HQ Theater on the Central Hall exhibit floor.

I’m going to salute Andy Gladding, this year’s recipient of Radio World’s Excellence in Engineering Award; and then Andy and I will be joined by Bud Williamson for a conversation called “Radio — the New Boutique Business?”

[Related: “Andy Gladding Champions Radio’s Future”]

We’ll explore the idea that owning a radio station is a great fit for Gen-X and Millennial professionals.

“People my age are looking for an opportunity to do something outstanding in their communities,” Andy told me recently. 

“They’re investing in traditional small business, they’re buying farms, opening retail establishments and generally looking for an opportunity to succeed while having the power to have an impact at the community level and create lasting interpersonal business relationships.”

He believes that for media professionals who are competent with and trained in radio workflows and understand how to market local business, “radio can be a perfect fit for personal satisfaction and growth.”

Andy is an engineer with Salem Media Network and an educator at Hofstra University. He and his wife Katie recently acquired WKZE(FM) in Red Hook, N.Y. His friend and colleague Bud Williamson is also an engineer and station owner.

You can stop in while you’re browsing the booths of the Central Hall. Our talk is 3 p.m. Sunday at C2450, the TV and Radio HQ Theater.

[For more coverage of the convention see our NAB Show page.]

The post Catch Up With Me at NAB appeared first on Radio World.

Codecs Serve Increasingly Diverse Needs

April 5, 2026 at 14:00

This is one in a series about trends and best practices in codecs for radio.

Chris Crump

Chris Crump is senior director of sales and marketing for Comrex. He has experience as a radio producer and remote broadcast engineer, and has held technical sales roles for several manufacturers.

Radio World: Chris, can you give us your perspective on the most important current or recent trend in codecs?

Chris Crump: I don’t think we are seeing just one trend but maybe a few. 

As we see younger broadcast talent entering the industry, they’re wanting to depend more on their personal mobile phones whenever and wherever possible. There’s an increasing dependence on apps and, of course, social media as an extension of their terrestrial broadcast. 

On the corporate side, we are being asked for large-scale virtualization to address centralized infrastructure or disaster recovery plans on an enterprise level. So while we see talent wanting the freedom that mobility offers, we also see a need for the cost savings that centralization can offer. 

RW: How has the expanding use of the cloud changed the role and use of codecs?

Crump: “Use of the cloud” always kind of makes me chuckle because it basically just means “somebody else’s computer that’s connected to the internet.” 

But some of our biggest customers require a large-scale, virtualized codec that can live “in the cloud” to address their need for cost-savings, DRP and ease of routing programming within and between very large facilities. Some of our biggest customers have or will be moving to our ACCESS VM platform for centralized distribution of programming and streaming content. This is especially important in scenarios where having 100 or more hardware codecs is no longer tenable in terms of both cost and rack space. 

RW: How well do today’s codecs integrate with today’s AoIP networks and infrastructures; what issues do they present?

Crump: Luckily, or perhaps out of necessity, standards exist that help facilitate these integrations. Most professional audio codecs on the market support some or all of the various flavors of AoIP — Livewire, WheatNet, Dante, Ravenna — or the AES67 standard for AoIP interoperability. 

We’ve also pulled some standards from the video side of our business such as SMPTE 2022-7 Seamless Production Switching and NMOS, which are critical for our key distribution customers that provide both audio and video content. 

Our company philosophy has always been to support free, unlicensed, open-source standards and platforms to allow for easier integration of products with AoIP systems and to keep the costs of our products reasonable for our customers. 

For example, we love the idea of AES70 for control and monitoring of media devices over AoIP networks, but it’s unlikely that we’ll see it implemented by console manufacturers because that’s really their “secret sauce,” if you will.

RW: What considerations should be taken into account to allow radio talent to do shows using their phones?

Crump: Mobile phones have improved drastically as processors have gotten faster and storage more efficient. But today’s smartphones are very personal objects, and users have their own unique ways of using them. 

Getting someone to use a phone for reliable broadcasting requires them to understand that they need to turn off resource-draining background apps and take measures to ensure an uninterrupted broadcast — perhaps even using a specific phone configured specifically for broadcast use.

There’s so much that can go wrong if someone is running a bunch of background apps and if they forget to put the device in “Do Not Disturb” or Airplane mode before they go on air. 

As developers, we must make sure our apps work on about a gazillion mobile devices, with new devices being introduced all the time. As with any broadcast, having a backup plan is key. We really like the concept of apps, but for reliability, we still encourage the use of our purpose-built, hardware codecs like the ACCESS NX Portable.

Comrex has developed several products and applications that take advantage of mobile phones. Our free FieldTap can be used to connect to our ACCESS and BRIC-Link codecs using a wireless internet connection like 4G/5G or Wi-Fi, and it can also be used with our new FieldLink sideline reporter codec on a private Wi-Fi connection. 

Our Gagl + Hotline subscription-based service utilizes a web browser on a mobile device but it also allows users to call a 10-digit phone number in the U.S. from a Verizon, AT&T or T-Mobile device. This special phone line maintains HD Voice near-studio quality all the way to the hardware codec in the studio. 

Our Opal IP Audio gateway uses the same WebRTC technology from a mobile device’s web browser to a dedicated hardware device in the studio. We’ve also seen customers having success using USB-C microphone/headphone interfaces from Shure, IK and others with mobile devices, to make the experience more professional and broadcast-like.

RW: Can you tell us about a recent installation or application for codecs that you found notable?

Crump: We recently shipped our very first FieldLink Sideline reporter codec, which uses mobile phones to get audio from courtside or the sidelines up to the press box. FM station KPGZ(LP) in Kearney, Mo., was the first to use it, at the Missouri High School Football state championships, with great success. 

This product was developed for our customers who were requesting a simple and affordable way to do sideline reporting. So, for it to deliver such great results right out of the box and to generate comments like, “This thing is friggin’ cool” was a great feeling for everyone at Comrex.

Read more on this topic in the free ebook “Trends in Codecs 2026.”

The post Codecs Serve Increasingly Diverse Needs appeared first on Radio World.

“Control What We Can, and Propel Forward”

April 4, 2026 at 16:01

We’re previewing the spring NAB Show in this series of articles. Here we consider challenges for radio sales departments.

Mike Hulvey is president and CEO of RAB and Dave Casper is its SVP, digital services. The RAB is a trade association that supports U.S. radio broadcasters in generating revenue.

Mike Hulvey headshot
Mike Hulvey

Radio World: As radio broadcast companies prepare to head to the convention, what do you consider the most important challenge facing their businesses?

Mike Hulvey: There’s a lot of uncertainty in the marketplace that we broadcasters cannot control. However, I tend to look at the challenge differently: Control what we can, and propel forward. With that, it’s critically important that broadcasters look ahead, innovate and plan for the future. And above all, keep our eye on the ball for our customers, listeners and advertisers alike.

RW: You’ve recently published a report on the growing percentage of radio’s revenue that comes from digital sources. In the bigger picture, commercial U.S. radio revenue is down markedly from 15 or 20 years ago. What are the obstacles to radio companies returning to larger-scale growth in revenue?

Hulvey: In an increasingly fragmented media marketplace, radio continues to shine. We’re collaborating in ways we never have before, especially on measurement and on addressing our advertisers’ needs under the “One Voice, Better Together” initiative, which tackles some of the obstacles to driving more revenue and increasing our share.

Above all, we must tell our story and ensure we’re responsible for the narrative.

RW: How will AI change how radio runs its businesses and workflows, beyond what we’ve seen to date?

Dave Casper: AI is transforming every corner of business and our economy. It will undoubtedly have a profound impact on how radio operates.

Dave Casper

As to how, like so many other companies, I think broadcasters are still working through this. Speaking from a sales standpoint, for the moment, it’s a workforce multiplier, allowing our sales teams to work faster and smarter, uncovering new sales opportunities and providing AEs with an unprecedented level of information they can use to help the local advertisers grow their business.

From copywriting and prospecting tools to enhanced CRM interactions and tools to help AEs plan and execute more effective audio and digital marketing strategies, AI is already helping radio AEs drive revenue.

However, I think this is just the beginning. Without a doubt, agentic AI will be the next big thing, as broadcasters start linking systems and information sources to drive further innovation and more effective solutions for our advertising clients.

RW: What sessions will you be participating in at the NAB Show, and what will you discuss?

Hulvey: We’re excited about RAB’s upcoming sessions, whether it’s our roundtable participation at the Small and Medium Market Radio Forum around RAB’s AI resources for broadcasters, or our two-session series on digital sales with Gordon Borrell highlighting our 14th annual benchmark report.

Lastly, the midterms are around the corner, so we’re going to have our good friend Steve Passwaiter join us for a session dedicated to political advertising, and specifically how broadcasters can create more value and opportunities for local candidates.

RW: What other trends or technologies will you be watching for in the exhibits, the sessions or the hallways?

Casper: Isn’t that the exciting thing about NAB? Around every corner, there is something new to learn.

I’m excited to visit the Xperi booth and look in on DTS AutoStage. It’s such an exciting technology.

Getting back to AI, I’m also curious to see how broadcasting’s many vendors and partners are integrating AI into their product lines.

When you think about the intersection of broadcasting and AI, much of the heavy lifting will be done by the companies supporting our industry. How are they using AI to create smarter technology and tools? In turn, how can we leverage their work to drive the industry forward?

[Read more observations about current radio business trends in “Radio Managers Navigate the Rivers of Digital.”]


Don’t Try This at Home

April 3, 2026, at 12:00

Alan Spindel is president of the Radio Club of America and senior electrical engineer for Ten-Tec/Alpha RF Systems. He develops hardware and firmware for amateur and professional radio systems.

He will moderate a Monday morning session at the NAB Show’s BEIT conference called “War Stories From the Front Lines of Broadcasting” featuring Bob Orban and Mike Pappas of Orban Labs, and William Harrison of WETA(FM) in Washington.

Alan Spindel

Radio World: How did this session come about?

Alan Spindel: The Radio Club of America honored Bob Orban with the Jack Poppele Award, which is named after broadcast pioneer and VOA director Jack Poppele, at the club’s 116th annual awards banquet. The award recognizes individuals who have made important, long-term contributions to radio broadcasting. 

Bob was unable to attend, so RCA Fellow Mike Pappas of Orban Labs accepted on his behalf. It is customary for recipients to give a talk at the technical symposium that accompanies the banquet; I asked Mike if he could share some practical field experience, as the symposium was heavy on theory this year. 

Mike gave a great presentation that included an exploding transfer switch blown to bits on security camera footage; screwdrivers jammed in to hold RF contactors closed; and the results of someone accidentally running full daytime power into a nighttime low-power tuning unit. 

A transfer switch explodes as seen on security video.

When NAB asked RCA as a partner organization to host a panel at the BEIT, I asked Mike if he could reprise his talk and bring in other panelists. He agreed on the condition that I add some of my own war stories, a few of which involved Mike. We agreed the format would be irreverent and lighthearted.

RW: Can you give a few more examples? 

Spindel: Mike has great stories and photos from a recent AM site renovation in Utah. Other tales include a six-figure hardline burnout caused by a 19-cent zener diode, and a DJ who panicked when the fire alarm annunciator panel caught fire in the control room and emptied an entire dry-chemical extinguisher into the on-air console and cart library.

A screwdriver has been used to hold RF contactors closed.

RW: What kind of practical knowledge are you looking to impart?

Spindel: There is a common misperception that young, up-and-coming broadcast engineers lack adequate RF knowledge or experience. I believe it is a misperception because if you lack RF knowledge, you will gain it quickly on the job. 

Much of the procedural and troubleshooting knowledge that exists in a modern broadcast plant is not in any textbook. These hard-won lessons must be passed down to each new generation. 

Our goal is that practitioners of all experience levels take away something useful to apply or pass along. The takeaway: You must survive and thrive where failure is not an option. The show must always go on. We hope every attendee will be both enlightened and entertained.

RW: What else should we know?

Spindel: A station GM, himself a former engineer, once asked me what I thought about a person he was considering hiring after meeting him for the first time.  I said, “He’s like us: someone who would never leave the transmitter site in the middle of the night while the station was still off the air.”

Broadcast engineering is a unique field with no formal academic path. It encompasses high power, RF, towers, generators, audio, video, microwaves, winches and four-wheel drives, to name a few disciplines. Knowledge is gained almost entirely through on-the-job experience. 

If this forum imparts even a small measure of that knowledge to the next generation through lessons learned, it will be a great success.

[Do you receive Radio World’s daily SmartBrief email every weekday morning? It’s free here.]


Monitoring and Control Go Far Beyond the Transmitter

April 2, 2026, at 13:12

This is one in a series of articles about trends in remote control and RF facility management for radio enterprises.

Edwin Bukont is a longtime consultant and broadcast engineer. He runs E2 Technical Services and Solutions.

Ed Bukont

Radio World: Ed, what do you consider the most important trend in how broadcasters control and monitor their transmission facilities?

Ed Bukont: HTML5 — a markup language that allows the creation of browser-accessed user interfaces, liberating them from the specifics of an OS, a PC, installed applications and dedicated connections that become too cumbersome to maintain or access when time is of the essence.

RW: To what extent have radio companies created centralized infrastructures for monitoring and control of their transmitter sites?

Bukont: Monitoring and control is becoming ubiquitous, and the centralization isn’t just for transmitter sites. It is for the transmission system, which may include ingest such as media receivers (increasingly via IP), the automation system, the audio network, the STL and the transmitter.

But wait, we also need to include RDS and PAD data, as well as the stream, which originates at the studio and goes to a CDN.

The “transmitter site” may have backup automation. The centralization is not just at a “Master Control” or NOC. 

The real benefit of centralization is to pull together the monitoring and control of all systems to a central point that offers access to any of the sites from any of the broadcast centers. Centralization has been tried on and off since at least the 1990s, with varying levels of success. 

RW: Can you recommend best practices for setting notifications and alarms?

Bukont: Are notifications providing useful info in a timely manner?

First, does the system provide the alarms you desire? This may require a third-party device. Peripheral gear may offer a useful interface to your monitoring and control hardware.

Second, does the infrastructure allow alarms and notifications to be sent and received via the technology of the remote-control device’s interfaces? These may include hardware limitations, network security policies, operating bias (phone call vs. email) and limitations of your ISP, which may not support various protocols. I find this is often overlooked when choosing options for integration.

Third, what is the precedence of alarm notifications? Who will be notified first, second and third, in what manner (text, email, phone call, etc.) and for what types of alarms?
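One way to make that precedence concrete is a simple ordered table mapping alarm types to a notification order. This is an illustrative sketch only; the roles, contact methods and alarm names below are hypothetical, not drawn from any remote-control product.

```python
# Sketch of an alarm-notification precedence table. All roles, methods
# and alarm types are hypothetical examples, not real station data.
ESCALATION = {
    "TRANSMITTER": [
        ("engineer on call", "phone"),
        ("chief engineer", "text"),
        ("operations manager", "email"),
    ],
    "STUDIO": [
        ("operations manager", "phone"),
        ("program director", "text"),
    ],
}

# Fallback contact for alarm types not listed above.
DEFAULT_CONTACTS = [("engineer on call", "phone")]


def notification_order(alarm_type):
    """Return who is notified for an alarm, in order, and by what method."""
    return ESCALATION.get(alarm_type, DEFAULT_CONTACTS)
```

A notifier would walk the returned list in order, falling through to the next contact if the previous one does not acknowledge within a set time.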

Have more than one monitoring point for a type of failure, especially if a failure may be within a chain of devices — as an example, air chain silence sense at three points: the STL input, the STL output and the transmitter output, with a way to discriminate between them. (See Fig. 1.)

Logic Truth Table for Air Chain Silence Alarms (Fig. 1)

STL Input  | STL Output | TX Output Audio | TX Output Carrier | Alarm
-----------|------------|-----------------|-------------------|------------------------------
Audio OK   | Audio OK   | Audio OK        | Carrier OK        | No alarm
Audio loss | Audio loss | Audio loss      | Carrier OK        | Studio alarm
Audio OK   | Audio OK   | Audio loss      | Carrier OK        | Transmitter alarm
Audio OK   | Audio OK   | Audio loss      | Carrier loss      | Transmitter alarm
Audio OK   | Audio loss | Audio loss      | Carrier OK        | STL alarm
Audio OK   | Audio loss | Audio loss      | Carrier loss      | STL and transmitter alarms
Audio loss | Audio loss | Audio loss      | Carrier loss      | Studio and transmitter alarms
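The discrimination among the three sense points in Fig. 1 reduces to a few ordered checks: the first silent point upstream identifies the faulty stage, and carrier loss implicates the transmitter regardless of the audio path. A minimal sketch, with the function name and labels my own invention rather than code from any monitoring product:

```python
def classify_silence(stl_in_ok, stl_out_ok, tx_audio_ok, carrier_ok):
    """Map the three audio sense points plus carrier status (Fig. 1)
    to the subsystem(s) that should alarm."""
    alarms = []
    if not stl_in_ok:
        # Silence is already present at the STL input: studio fault.
        alarms.append("STUDIO")
    elif not stl_out_ok:
        # Audio enters the STL but does not leave it: STL fault.
        alarms.append("STL")
    elif not tx_audio_ok:
        # Audio survives the STL but is missing at the transmitter output.
        alarms.append("TRANSMITTER")
    if not carrier_ok and "TRANSMITTER" not in alarms:
        # Carrier loss is a transmitter fault no matter where audio died.
        alarms.append("TRANSMITTER")
    return alarms or ["NO ALARM"]
```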

Take advantage of screen grab and streaming options that can provide confidence audio or a display of user interface settings. Beyond transmitter readings, we may want to see the console or hear audio from the studio.

And don’t forget to monitor that stream! It may generate revenue and give an ability to monitor the studio audio. 

AoIP platforms such as Livewire and WheatNet offer software-based monitoring and control (M&C) devices that make the creation of such alarms quick and easy, provided your peripheral devices and IT department cooperate. 

Fig. 2 is an example of an HTML5 facility control panel, monitoring an automation system, accessed by browser from the PD’s machine. It was created using Axia Pathfinder and is courtesy of Megan Amoss at Baltimore Public Media.

Fig. 2: An example of an HTML5 facility control panel, monitoring an automation system, accessed by browser from the PD’s machine.
Credit: Megan Amoss

RW: How can an engineer protect these systems and related infrastructure from cyberattacks?

Bukont: The advice of a transmitter manufacturer’s support tech comes to mind: “Nobody ever died from a lack of rock and roll.”

Making your remote control easy for you to access makes it easy for others to access. 

Solutions should “dial out” rather than “dial in.” The ability to remotely restore operations should not be an excuse to shortcut security protocols and best practices just to save a minute of down time. 

Control of access should limit the ingress necessary. Ingress and egress do not have to have complementary network configurations. 

Most products designed to be accessed remotely for monitoring allow more than one level of access, from admin (setup only) to monitor (observe readings) to control (make adjustments). Each level should have some distinction and unique passwords. 

The generic admin account should have its password changed to something quite long, and a secondary admin user created for setup and administration. 

Nautel and GatesAir have both published guidelines on site network security, and free education is available via SBE meetings, webinars and NAB Shows. 

The information to secure your network is out there, often available for free. Join a users’ group such as “Broadcast Engineers” on Facebook to access the body of knowledge. 

Do not put your devices directly on the internet; use a jump box. Yes, learn such IT lingo so you can ask for the support you need in a way that IT will understand. 

Nothing about broadcast is unique in a way that cannot be properly managed according to recognized best practices, no exceptions. If you have an exception, it probably means you have a vulnerability. 

RW: What misconceptions do people have about this topic?

Bukont: Just because you are “off the air” doesn’t mean it is the engineer’s problem!

Not every fault deserves a truck roll, especially if the engineer is driving his or her privately owned vehicle. 

Nothing coming out of the speaker at the GM’s home? First call the OM and be sure the automation is really playing. If not, ask, “Is the log created and loaded?” Who handles that, and can they be contacted “outside of business hours”? Check with the A/P person that the ISP bill was paid. 

If the automation is playing and the STL is intact, now call the engineer. What, then, can the engineer diagnose or service remotely?

Then, if a dispatch is in fact needed, is there someone with a brain who can arrive at the studio or transmitter faster than the engineer or the OM, and be guided by the knowledgeable person?

At one station the contract engineer had a strict limit of hours. The OM was afraid of the transmitter site and did not have a privately owned vehicle. But the news person, who was the daughter of an electrical transmission engineer, was happy to assist, providing a fine set of eyes, ears and hands under remote direction. 

This is supposed to be teamwork. When you send all of the alarms to the engineer, you aren’t using the team to do the work.

RW: What else should we know?

Bukont: A system is only as good as its weakest link. 

Remote control systems should be on a UPS AND a backup generator, not just one or the other. 

Switches can take many minutes to reboot; you don’t want an alert to be delayed because the switch is rebooting after a minor power bump. 

Power to the components should come from the outlet closest to a known good power source, not at the end of three daisy-chained outlet strips.

Many devices now have either two AC power sources, or an AC and DC power source. Use both, from different sources. 

On a recent project, we knew that the site’s primary power was unreliable. Devices with dual power inlets each have one inlet connected to each of two local UPS units, and the two UPS units are fed from different panels. One set of panels has a generator feed. 

TV people know this, but it is new to many in radio: Central clock sources should be on a UPS. This may include both PTP and NTP. 

NTP time should be sent to remote control central systems to provide a reliable time stamp for events. 
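The value of NTP here is simply that event logs from different sites agree on when things happened. A minimal sketch of such timestamping, assuming the host clock is already NTP-disciplined (the function name and log format are illustrative, not from any remote-control product):

```python
from datetime import datetime, timezone

def log_event(event, log):
    """Record a remote-control event with a UTC timestamp.
    Assumes the host clock is disciplined by NTP, so timestamps
    from different sites can be compared reliably."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    entry = f"{stamp} {event}"
    log.append(entry)
    return entry
```

Stamping in UTC rather than local time avoids ambiguity when correlating events across sites in different time zones.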

Learn from other experts in the free Radio World ebook “Trends in Remote Control & Facility Management.”


NPR Distribution Highlights New Adaptable Receiver

April 1, 2026, at 11:00
Badri Munipalla (Photo by Jim Peck)

Representatives of NPR Distribution will speak during the Public Radio Engineering Conference at the Tuscany Suites and Casino in Las Vegas, preceding the NAB Show.

Badri Munipalla is vice president, NPR Distribution, which manages and operates the Public Radio Satellite System. It distributes approximately 400,000 hours of content annually to some 1,200 stations.

Radio World: Which members of the NPR team are presenting at PREC?

Badri Munipalla: Thank you, Paul, for taking the time to talk with me. We appreciate the opportunity to speak with our public radio colleagues each year to talk about how we’ve listened to their needs and developed solutions to address them.

Three members of NPR Distribution will be speaking at the conference. Joining me will be Jon Cyphers, senior manager, product and product support, Distribution, and Mike Pilone, enterprise architect, Distribution. They have been instrumental in working with our team on ContentDepot and other recent developments.

RW: What is the topic?

Munipalla: We’ll provide updates about the PRSS and ContentDepot, public radio’s broadcast distribution and management platform. 

This year, in addition to discussing improvements to ContentDepot, we will provide updates on the next-generation ContentDepot Edge receiver that the NPR Distribution team has developed. It is a best-in-class terrestrial distribution receiver for live broadcast that will transform how public radio stations receive programming across America.

The integration of local and national news and programming is essential to the public radio experience. To advance innovations in broadcast distribution, NPR Distribution is introducing this low-latency terrestrial receiver. ContentDepot Edge, currently being piloted, enables stations to retain the core functionality of their existing distribution platform while adding new capabilities, including station-to-station content sharing, geo-targeted delivery and enhanced metadata, monitoring, and playback functionality. This is a significant development in broadcast distribution.

RW: The past year has brought huge changes to public radio, specifically in how the ecosystem is funded. How has this affected the work you and your team do to support public radio engineers and stations?

Munipalla: NPR remains committed to supporting the PRSS and public radio stations, and providing best-in-class reliable broadcast content management and distribution services, as we have for decades. 

No changes are planned in that commitment, despite the loss of federal funding. In fact, underscoring our commitment to stations, NPR announced to the public radio system in November that we would immediately dedicate additional resources toward fortifying the public radio system. NPR secured five years of federal interconnection funding prior to the dissolution of CPB. 

The NPR Board of Directors approved full and total relief of PRSS interconnection fees for two years for all interconnected public radio stations. 

The defunding of public media by Congress has brought uncertainty to a system that is vitally important to many Americans. We remain committed to advocating for public sources of funding to support the public radio system.

RW: How does NPR Distribution’s relationship with its users change now that Public Media Infrastructure is on the scene?

Munipalla: NPR Distribution’s support for interconnected stations hasn’t changed. We have multi-year funding, and have provided relief to all interconnected stations during these challenging times. 

We continue to deliver reliable distribution services through PRSS, and we are delivering on future technologies that will respond to the needs of tomorrow’s public media. Our hardware and software are already developed, and operating both in pilots and in production. No other provider can offer that, and none has a track record of delivering like PRSS. 

We are working with our users to launch the ContentDepot Edge hardware and software solution designed to operate over public internet connections including fiber, 5G and satellite internet service providers.

Promotional image of the new NPR receiver.

What will change is that we will work with them to help ensure this new receiver will smoothly integrate with the existing ContentDepot platform to provide higher audio quality, faster file transfers and plug-and-play usability — all while maintaining NPR’s high standards for uptime and dependability. 

One example is NPR’s recent collaboration with KCUR in Kansas City, Mo., to keep the station on the air when KCUR urgently needed to relocate its core broadcast equipment to its transmitter building in a single weekend. KCUR and NPR took the opportunity to work together to deploy ContentDepot Edge rapidly without interrupting its broadcasts. 

Because the technology is designed to be highly adaptable and run over standard broadband, KCUR didn’t have to wait to rebuild complex satellite downlink infrastructure at its temporary setup.

Station engineers and general managers are already enthusiastic about this 21st-century solution for the system. This is consistent with our mission as we led the way to satellite delivery, to online content management through deploying the ContentDepot platform and now to a reliable terrestrial solution.

Through 2026 and beyond, we’ll be working closely with our public media colleagues at stations and producing organizations to address their needs and exceed their expectations in providing reliable, accessible and affordable broadcast content management and distribution.

Info: www.nprdistribution.org

[For News Like This See Our Show News Page]


How Boosters Can Help AM Stations

March 30, 2026, at 20:20
David Layer

We’re previewing technical sessions and trends of the upcoming NAB Show.

NAB Vice President, Advanced Engineering David Layer will give talks about AM radio and hybrid platforms before and during the convention.

Radio World: What will your sessions be about?

David Layer: My presentation to the Public Radio Engineering Conference will focus solely on AM radio, and I plan to spend most of my time telling the audience about all the interesting AM radio-related work ongoing within the National Radio Systems Committee.

As it turns out, my colleague and good friend John Kean is presenting at the PREC as well, also about AM radio, so he and I will be coordinating our presentations as we work together on the NRSC projects.  It’s fair to say that John is the brains behind a lot of this work, and we’re fortunate that he is “on the job” here.

Also I’ll be speaking on the NAB Show floor on Tuesday, in the TV and Radio HQ Theater, about “Improving AM Coverage and the Future of Digital Radio Listening.” 

This talk will include some of the material I’m discussing at the PREC, in particular on the NRSC’s AM booster project, targeted to a different audience. I also plan to discuss my thoughts on the importance of broadcasters using digital radio signals and why digital plus hybrid — over-the-air plus internet — technology is the best combination to keep their stations “looking as good as they sound.”

RW: The NRSC has been conducting research about AM single-frequency networks. What is the status of that work?

Layer: AM broadcasters are disadvantaged compared to FM and TV broadcasters in that they are not authorized by the FCC to make use of on-channel booster stations.  

Also known as single-frequency networks or SFNs, combinations of a main signal and one or more booster signals can help broadcasters reach listeners within a station’s service area who experience poor reception. Modern transmission technologies, including RF channel simulation tools that accurately model SFNs and precise timing control between main and booster stations, are being used successfully to support SFNs in FM radio and broadcast TV services.

It stands to reason that AM broadcasters should also be able to employ these techniques and improve their coverage and service to listeners.

The NRSC is pursuing an AM booster project with the ultimate goal of developing a technical record to support adoption of a petition for rulemaking at the FCC that establishes rules for AM booster stations. 

Station lists in a Hyundai Ioniq 5 with DTS AutoStage.

This project is expected to consist of a number of phases including laboratory testing of AM co-channel interference to develop parameters for booster station design; investigation into small antennas suitable for booster station operation; and ultimately construction and field testing of an AM radio SFN utilizing the learnings of the earlier work, under experimental authorization.

Our current challenge is identifying a full-service AM station that we can work with on booster experiments. We hope to identify a station in the Washington, D.C., area as that is where our testing resources are located. Once a plan is in place for conducting tests on a specific station, I expect the other parts of the project will move forward. 

RW: Hybrid radio systems like DTS AutoStage are becoming more prevalent in automobiles. What do they portend for the way radio uses metadata and its broader user experience?

Layer: I am a big fan of hybrid radio systems, and DTS AutoStage is clearly leading in this technology. Well-designed hybrid radio systems give the AM and FM radio bands a totally consistent user experience with respect to metadata, where all stations in the band look great with station logos and station information.

NAB recommends that all broadcasters participate in hybrid radio and make the necessary investments to provide great metadata to listeners. At the same time, many broadcasters should also be thinking about how they can support digital radio (i.e., HD Radio) technology and start broadcasting in digital.  

There are far more vehicles with HD Radio than with hybrid radio, and the radio “product” on the dash will look better and better as more broadcasters consistently transmit metadata using the HD Radio system. 

RW: The number of AM stations in the United States has been declining, slowly but consistently, for some time. What role do you see the band playing in American life in another few years?

Layer: AM radio continues to play a vital role in the emergency infrastructure of the U.S. as the backbone of the Emergency Alert System. This is a role not easily replaced by other technologies, and NAB has been a strong supporter of the AM Radio for Every Vehicle Act, which recognizes this and would keep AM radio in vehicles for the safety of all Americans.  

As an audio service, both AM and FM face challenges due to the increased competition that internet-delivered audio represents. I primarily focus on the technical aspect of these services in my role at NAB, and I expect NAB to continue to investigate and encourage use of technologies, like AM boosters and the use of digital radio, that help broadcasters to stand out in this ever more crowded field of choices.

[For more coverage of the convention see our NAB Show page.]


Is Your Signal Secure?

March 29, 2026, at 16:00
During the session “Securing the Signal,” panelists discuss how threat intelligence platforms enable broadcasters to anticipate risks and safeguard field operations.

A panel in the Broadcast Engineering & IT Conference of the upcoming NAB Show will explore “Securing the Signal: Field Operations, Site Safety and Security Protocols for Modern Broadcasters.” It includes experts from Fox, Dataminr, Smith Entertainment Group and Verkada.

Steve Shultis, the CTO of New York Public Radio, is the moderator.

Radio World: Steve, what’s this session about?

Steve Shultis

Steve Shultis: “Securing the Signal” examines how broadcasters can modernize their approach to field safety and site security as operations become increasingly distributed.

Today’s broadcast signals travel through our HQs, transmitter sites, remote field production environments, rooftop positions and shared infrastructure. Each of these locations introduces operational risk that often falls outside traditional security planning.

This panel brings together broadcast leaders responsible for safety and operations, along with technology experts to discuss practical frameworks for protecting both infrastructure and personnel. 

The focus is on helping stations shift their security posture from reactive response toward proactive risk identification and prevention — using clear protocols, real-time intelligence and modern monitoring tools.

RW: Can you give an example of best practices that stations should consider to secure their signals?

Shultis: First, integrating real-time threat awareness into field operations. This includes the use of threat intelligence aggregation and alerting platforms that provide situational awareness within defined geographic areas, allowing organizations to anticipate and respond to emerging risks affecting their personnel or sites.

And then enhancing physical site monitoring with intelligent video analytics. AI-assisted analysis can help identify unusual behavior, perimeter breaches or developing threats earlier, enabling faster escalation to internal teams or law enforcement when necessary.

RW: “Field operations” is part of the description. What should stations know about securing their signals in this area?

Shultis: Field operations introduce unique challenges because teams are often working alone, in remote locations or in temporary environments such as live event sites.

Stations should ensure that:

  • Clear check-in, “all-clear” and escalation protocols are in place for remote engineers and field crews.
  • Site access procedures are standardized and documented.
  • Risk assessments are conducted before deployments, particularly for high-profile events or in areas experiencing heightened activity.
  • Safety planning is integrated into routine maintenance and upgrade workflows, not treated as a separate function.

Securing the signal in the field ultimately means securing the people responsible for keeping it on air.

RW: Are there common misconceptions you’d like to dispel?

Shultis: One is that serious physical threats are rare or limited to large markets. In reality, incidents affecting broadcast personnel and infrastructure have occurred across market sizes, often in environments that were previously considered low risk.

Another misconception is that security is solely a facilities issue. In today’s distributed and IP-centric workflows, responsibility spans engineering, operations, IT, HR and leadership. Security must be integrated into everyday operational planning rather than addressed only after an incident.

RW: What else should we know?

Shultis: This session is designed to be practical. Attendees will leave with actionable insights they can adapt immediately, whether they operate a single-site station or a large, distributed network. Our goal is to foster collaboration between engineering, IT and security teams to strengthen resilience without creating unnecessary operational friction.

[For more coverage of the convention see our NAB Show page.]


Prism Quattro Is a New Distribution Option

March 26, 2026, at 17:01
Adrian Berkovits

Supply Side is a series of occasional articles about companies in the radio broadcast supply ecosystem.

Visitors to the MaxxKonnect booth at the 2026 NAB Show will learn about a new option for distribution called Prism. Adrian Berkovits is founder and president.

Radio World: What is Prism?

Adrian Berkovits: It is a purpose-built, global audio broadcasting ecosystem designed to replace traditional satellite distribution.

Its hardware and software are engineered from the ground up to work in unison. The result is a cost-effective solution that offers far more ease, control, insight and flexibility than traditional satellite distribution.

Our flagship receiver, the Quattro, is a four-channel stereo 1RU appliance designed and built in collaboration with Angry Audio.

RW: Who founded the company and where is it based?

Berkovits: I founded Adventure 33 and own it 100%. Prism is a new product and service offered by us at Adventure 33. We’re based in Toronto and we are a team of 18 people, a mix of employees and contractors.

Prism Quattro, a piece of electronic equipment
Prism Quattro

RW: The website describes “a resilient omnidirectional IP network for broadcast-grade audio delivery from studios to affiliates.” Who developed this technology?

Berkovits: We developed it in-house, from scratch, because we're obsessed with uptime and we knew that to do it right, we needed to start from zero.

When we say “omnidirectional,” we mean it literally: There is no single point of failure anywhere in the architecture. Prism routes audio simultaneously across five independent infrastructure layers and multiple cloud and dedicated providers, each with different network paths among multiple geographic regions.

If a vendor has an outage, for example, your audio is already flowing through the other layers. If a fiber cut takes down one network path, traffic keeps flowing via the other layers.

We’ve watched Prism maintain uninterrupted service during major cloud provider outages that took down thousands of websites and services. While other systems went dark, our audio delivery stayed on air because the architecture simply routed around the problem. It’s resilient by design.

RW: Why does the radio broadcast marketplace need this, compared to what’s available?

Berkovits: The radio industry is facing a critical infrastructure crisis. C-band satellite spectrum is being reclaimed by wireless carriers for 5G deployment. Satellite capacity is literally shrinking, and what remains is becoming prohibitively expensive and difficult to manage.

Broadcasters need a migration path off satellites, but early IP-based alternatives were typically built on a single cloud provider that just trades one single point of failure for another. We’ve been listening, and the marketplace has been waiting for a solution that’s both truly resilient and actually practical to deploy.

Prism solves this with proven, leading-edge technology that is affordable and far easier to use, configure and deploy than traditional broadcast infrastructure. Stations can provision new receivers remotely in minutes and even configure their audio channels and closures from their phones.

RW: What are the terms of purchase — is this a one-time buy, a lease, a monthly subscription?

Berkovits: Encoders and receivers are a one-time purchase with no recurring fees. The Prism network infrastructure, web portal and support operate on a flat monthly or annual subscription model. No surprises, no usage charges, just predictable and affordable operational costs.

RW: Do you have any clients using the system?

Berkovits: Yes, we have several large early adopters already using Prism in both Canada and the United States. We’re working with them under NDA during the initial deployment phase, so we can’t release their names just yet, but we’ll be announcing those partnerships shortly.

RW: What else should broadcasters know?

Berkovits: At the end of the day, we are an agile team that deeply cares about audio and is driven by a hunger to solve problems. We've spent a great deal of time focusing on contact relay closures for automation triggers, for example.

This has been a sticking point for the industry for quite some time. Commercial copy-splits, for example, and audio/metadata timing are all seamlessly managed and precisely synchronized within Prism. Events fire exactly when they’re supposed to. We even support cross-fading and ducking if desired for the broadcaster’s use case.

Info: www.prism18.com

NAB Show Booth: C2038 (MaxxKonnect)

The post Prism Quattro Is a New Distribution Option appeared first on Radio World.

GBS Welcomes Findings of India Government Study

25 March 2026 at 19:42

GeoBroadcast Solutions said its MaxxCasting and ZoneCasting technologies received a vote of confidence from a study by the government of India.

The company’s international arm Geo Global said the research, which was conducted in 2024 and recently made public, “validated” the performance of those systems.

“The study, conducted by Prasar Bharati’s Research Department at the government-owned All India Radio (AIR) FM station in Bengaluru, confirmed that the technologies deliver enhanced coverage and seamless listener experiences across a single-frequency network (SFN),” Geo Global said in a press release.

It quoted Dev Viswanath, managing partner of Geo Global, saying the study “sets the stage for final approvals and broader deployment. We are now positioned to support activation across hundreds of government-owned and commercial radio stations in one of the world’s largest and most dynamic broadcast markets.”

It said the report found that synchronized booster transmitters enabled smooth, uninterrupted transitions between coverage areas, even in dense urban environments and challenging terrain.

“In fully synchronous (MaxxCasting) mode, transitions between the primary signal and boosters were seamless, with no perceptible impact to audio quality,” according to Geo Global.

“The study also evaluated ZoneCasting capabilities, demonstrating precise geographic content delivery with minimal transition zones. Field testing conducted by Prasar Bharati engineers, on foot and in vehicles, confirmed consistent audio quality and reliable performance throughout the station’s coverage area.”

It said the report concluded that Geo Global’s technology claims were “fully substantiated.”

The company posted a link to the findings.

The post GBS Welcomes Findings of India Government Study appeared first on Radio World.

Path Redundancy Is Now Very Cost-Effective

25 March 2026 at 14:55
Abstract image of data center with flowchart.
Credit: Yurichiro Chino/Getty Images

This is one in a series about trends and best practices in codecs for radio.

Robbie Green is product manager, communications products for Telos Alliance. He is the former senior director, enterprise technology for Audacy and has held engineering management roles at Cumulus and Clear Channel, among other broadcast companies.

Robbie Green headshot
Robbie Green

Radio World: Robbie, what is the most important trend in the design or use of codecs for radio broadcasting?

Robbie Green: Hands down, it’s path redundancy. To ensure 100% uptime, you need to send your critical audio across more than one path. The good news is IP path redundancy is very cost-effective these days. You could pair two different wired ISPs, or an ISP and a low-cost IP radio option. As long as one link remains up, you are still on the air.

RW: More and more parts of the broadcast air chain now are performed in software rather than hardware. How has this affected broadcast codecs?

Green: If anything, it’s made codecs even more ubiquitous. As air chains have become increasingly virtualized and coding algorithms standardized, deploying codecs has become much easier. 

Telos offers several virtual codec choices tailored to different tasks; all of them integrate into modern studio systems and AoIP networks quite nicely. Things have come a long way since the days of ISDN and 66 blocks!

RW: How well do today’s codecs integrate with today’s AoIP networks and infrastructures; what issues do they present?

Green: I’d say they integrate extremely well. As the people who invented AoIP for broadcast, we’d better have codec integration nailed down! Telos Zephyr Connect and iPort codecs integrate directly with Axia Livewire and AES67 networks. It’s very seamless.

RW: How widespread are IP-based systems for STL applications now?

Green: I’d say they’re very widespread. In many parts of the world, IP-based STLs are more the rule than the exception these days.

RW: What tools are available for sending audio to multiple locations at once?

Green: Telos Zephyr Connect and iPort are designed to send audio to over 64 locations using both a main and backup path for each link. As long as half the packets arrive on one path, and half the packets arrive on the other path, audio is seamless on the air. These paths could be two different internet connections, or a combination of some sort of wired or fiber path and an IP radio link.
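The dual-path idea Green describes can be illustrated in a few lines: the sender transmits every packet on both links with a sequence number, and the receiver keeps the first copy to arrive and discards the duplicate. The sketch below is illustrative only, with hypothetical names that do not reflect any Telos API; real systems also handle jitter buffering, reordering and sequence-number wraparound.

```python
class DualPathReceiver:
    """Merges two redundant packet streams (e.g., two ISPs) into one.

    Each packet carries a sequence number; the sender transmits every
    packet on both paths, and the receiver delivers the first copy that
    arrives and drops the duplicate from the other path.
    """

    def __init__(self):
        self.seen = set()    # sequence numbers already delivered
        self.delivered = []  # (seq, payload) in arrival order

    def on_packet(self, seq, payload):
        if seq in self.seen:  # duplicate already arrived via the other path
            return
        self.seen.add(seq)
        self.delivered.append((seq, payload))

# Simulate two lossy paths: path A loses the even-numbered packets,
# path B loses the odd-numbered ones. Together they cover the stream.
rx = DualPathReceiver()
for seq in range(8):
    if seq % 2 == 1:               # only odd packets survive path A
        rx.on_packet(seq, f"A{seq}")
    if seq % 2 == 0:               # only even packets survive path B
        rx.on_packet(seq, f"B{seq}")

# Every packet arrived via one path or the other: no gaps on the air.
assert sorted(s for s, _ in rx.delivered) == list(range(8))
```

Even with 50% loss on each simulated link, the merged stream is complete, which is the property that makes pairing two inexpensive, individually unreliable links so effective.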

RW: How are manufacturers assuring reliable transmission with low delay over marginal IP networks?

Green: The gold standard is really path diversity. When it comes to any RF or wired link, it’s not a matter of if the link will fail, but when. Eventually, a backhoe will cut the fiber, or some other radio in your vicinity will become spurious and disrupt your microwave link. 

Fortunately, these days, there are very cost-effective options to achieve path redundancy. You can pair a business-class cable modem or fiber link with inexpensive IP radios to create an STL that won’t go down, even if you experience a fiber cut or interference issues with a microwave link. 

RW: How can an engineer protect codecs and their related infrastructure from cyber attacks?

Green: Three words: “change the defaults.” These are crimes of opportunity, and factory logins are there to get you started, not for long-term use. 

Changing usernames and passwords should be a routine part of any new equipment installation. There are lots of excellent password manager apps out there; pick one and let it generate unique secure passwords for your gear. It's just good practice.

Ideally, you should also put any codec that is sending audio over the public internet behind a firewall, and send the traffic through a VPN tunnel.

By creating a VPN, you effectively create a link extension from location A to location B over the public internet that nobody else can access — think of it as a very long Ethernet cable. If cost and complexity are an issue, Ubiquiti offers inexpensive solutions like the Gateway Lite and Gateway Max that feature easy-to-use setup wizards.

RW: Is availability of parts for legacy codecs a serious problem? 

Green: Many old hardware codecs soldier on reliably, and some units in the field are 10 to 15 years old, sometimes older. But I don't think parts availability for units this old is a serious problem, because advances in algorithms, connectivity and link reliability, plus the migration to software codecs for many applications, make replacing these old devices with new, modern solutions a wiser monetary choice than repairing them.

RW: What misconceptions do people have about codecs that you’d like to dispel?

Green: That the public internet isn't ready for prime time when it comes to audio transport, and that 950 MHz STLs are the only way to guarantee reliability.

While 950 MHz links have traditionally performed well, they have several single points of failure — a dish can be damaged by falling ice or water intrusion over time, large coax cables are an attractive target for copper thieves, etc.

Inexpensive IP links offer transport redundancy that 950 MHz links just can't beat.

Read more on this topic in the free ebook “Trends in Codecs 2026.”


The post Path Redundancy Is Now Very Cost-Effective appeared first on Radio World.

Sage, Orban to Demo Virtualized EAS at NAB

24 March 2026 at 22:29
Screenshot of the Sage v-ENDEC Audio Channels Panel.
Screenshot of the Sage v-ENDEC Audio Channels Panel. (Click to enlarge.)

Sage Alerting Systems will demonstrate virtualized Emergency Alert System alerting at the Orban Labs booth at the NAB Show.

The company is working with Orban and DNAV to conduct the demo, which is intended to show real-world applications for virtualized EAS.

It will feature a transmission chain with a pair of Optimod 5950 HD processors, with analog FM plus HD-1, HD-2, HD-3 and HD-4 feeding an importer/exporter and exciter into a dummy load.

“The demonstration emphasizes a minimal hardware approach to EAS alerting,” the companies said in a press release.

“The Sage technology uses next-generation EAS software, hosted on an industrial PC the size of a deck of cards. Using only power and LAN connections, Orban and Sage will demonstrate the flow of normal audio plus EAS alerts.”

Sage will use AES67 for audio output and AAC-LC streams for live EAS audio input, with direct control of an SAS transmission router managed via the Sage software. Live EAS “alerts” will be generated from the SAS booth.

The Orban Labs VP of sales noted in the announcement that such technology is not yet approved for deployment, but “it’s well on its way.”

He said customers have been asking for this type of technology, with an easy way to integrate EAS alerts into the audio chain without extra hardware and wiring.

NAB Show Booth: C1459

The post Sage, Orban to Demo Virtualized EAS at NAB appeared first on Radio World.

Exhibitor Viewpoint: Broadcast Bionics at the NAB Show

24 March 2026 at 17:58
Matt Collison

With the 2026 NAB Show approaching, we’re providing a series of previews asking exhibitors about their plans and expectations.

Matt Collison is brand and marketing lead at Broadcast Bionics.

Radio World: What is Broadcast Bionics and what kind of solutions does it provide?

Matt Collison: Broadcast Bionics is the name behind the leading audience engagement software, BionicStudio, which handles on-air calls, messages, social media and visualization. With 30 years of broadcast software innovation behind it, Bionics continues to lead the industry through constantly changing landscapes with innovative, forward-thinking solutions.

While audience engagement remains central to what it does, Bionics’ work increasingly spans wider studio workflow and technology integration, helping broadcasters streamline software, broadcast infrastructure and workflows to create cohesive and efficient production environments.

Through careful design and consideration, Bionics has introduced Augmented Intelligence (Bionics’ take on AI) tools into BionicStudio that automate transcription, summarization and topic detection. The aim is not to replace talent, but to support it, handling routine production tasks while freeing human teams to focus on the content. These tools are designed with security in mind, ensuring any data processed remains protected.

RW: What do you feel is the most significant technology trend in your part of the broadcast supplier industry?

Collison: Leveraging the hyperscale computing methods that drive AI and data centers will change how broadcast facilities are built and operated. The move toward software-defined facilities shifts the focus away from discrete hardware boxes that each perform a single function, toward centralized infrastructure running containerized products.

This allows broadcasters to scale rapidly, launch new services, operate from anywhere and build resilience into their systems, while reducing long-term costs and dependency on fixed hardware.

Alongside the adoption of AI, this transition represents one of the most significant shifts the broadcast industry will experience over the next decade, changing not just the technology stack but the way broadcasters think about infrastructure, agility and growth.

RW: What will be your most important product news?

Collison: Reflecting the wider shift toward software-defined infrastructure, BionicStudio will soon be available within VirtualRack (adding to the library of products from various manufacturers). This will enable BionicStudio to be hosted on scalable, centralized infrastructure and further expands the role of VirtualRack within modern broadcast facilities.

Bionics is also adding WhatsApp support to CallerOne, its affordable call screening solution. The integration enables broadcasters to receive and make WhatsApp audio calls directly within CallerOne, eliminating clunky desktop applications and workarounds.

Stations using BionicStudio can already add WhatsApp voice calls alongside other call types. Adding WhatsApp to CallerOne allows smaller or standalone operations to benefit from the same convenience and high quality from this popular platform.

RW: How is your product different from what’s available on the market?

Collison: While generic server environments or public cloud platforms may be able to host some broadcast software, this typically requires significant engineering expertise to deploy and manage. VirtualRack is purpose-built for low-latency broadcast audio and supports products from multiple manufacturers, delivering scalable deployment and flexibility without requiring deep Linux knowledge.

WhatsApp integration in CallerOne replaces improvised studio setups, such as cell phones connected via cables or Bluetooth, with a native broadcast workflow. Calls and messages are handled within a unified interface, reducing clutter and complexity and providing a more reliable, simplified workflow.

RW: What brings you back to the spring show each year?

Collison: NAB has been instrumental in establishing some of Bionics’ most strategic partnerships, both with customers and suppliers. It brings together existing relationships and new opportunities in a way few other events can.

For Bionics, NAB is more than a product showcase. It is a place to listen, learn and contribute to meaningful discussions, while drawing energy from the ideas shaping the future of broadcast technology.

NAB Show Booth: C3016

[For more coverage of the convention see our NAB Show page.]

The post Exhibitor Viewpoint: Broadcast Bionics at the NAB Show appeared first on Radio World.
