From Video Switchers to Event Processors

How Live Production Systems Evolved, and How to Learn Them Without Getting Lost

Live production technology didn't evolve randomly. Each major category of system emerged to solve a specific limitation of the one before it. The confusion many operators feel today isn't caused by too many tools; it's caused by skipping steps in how we explain them.

When taught in the order they appeared, these systems form a logical stack of ideas. Each one adds a new layer of responsibility without fully replacing what came before. This is that story.

1. Video Switchers: Composition in Time

The video switcher is where modern live production begins.

Originally developed for broadcast television, video switchers (often called vision mixers outside the US) are designed to combine multiple video sources into a single composed output. Cameras, playback, and graphics are mixed using cuts, dissolves, wipes, and keys via Mix/Effects (M/E) buses. Systems such as the Blackmagic Design ATEM line, Ross Video Carbonite switchers, and Panasonic broadcast switchers are all built around this same core model.

What defines a video switcher is not the hardware; it's the cue-based mental model.

Key traits:

- One primary program output (1 M/E)

- Fixed M/E structures

- Deterministic transitions

- Video-first assumptions

Video switchers answer one core question:

What is the next shot?

This model works beautifully for storytelling, but it assumes a single frame of reference. As soon as productions needed multiple destinations at once, the cracks began to show.

2. Video Routers and Matrices: Distribution Without Interpretation

Before screen-focused systems existed, the industry needed a way to move signals without changing them. That need gave us video routers (also called video matrices). Routers do not process video. They do not scale, key, or transition. They simply connect any input to any output, cleanly, repeatably, and indefinitely. Router and matrix systems from manufacturers like Extron, Crestron, AJA, and others exist to distribute signals predictably, without altering them.

This introduces a critical conceptual split that still defines the industry: Routing vs Processing

Routers bring:

- Infrastructure thinking

- One-to-many distribution

- Redundancy and failover

- Signal abstraction

They answer a different question entirely: Where does this signal need to go?

From a learning standpoint, routers are essential because they teach signal-flow discipline, and because every advanced system quietly contains one.
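The routing-vs-processing split can be made concrete with a toy model. In this hedged sketch (the class and method names are illustrative, not any vendor's API), a router is nothing more than a crosspoint table: its entire truth is state, with no processing anywhere in the path.

```python
class CrosspointRouter:
    """Toy model of a video matrix: pure routing, no processing.

    The router's entire "truth" is the crosspoint table mapping
    each output to the input currently feeding it.
    """

    def __init__(self, num_inputs: int, num_outputs: int):
        self.num_inputs = num_inputs
        # Every output starts unrouted (None = no source connected).
        self.crosspoints = {out: None for out in range(num_outputs)}

    def take(self, input_num: int, output_num: int) -> None:
        """Connect one input to one output. Nothing about the
        signal changes; only the crosspoint state does."""
        if input_num >= self.num_inputs:
            raise ValueError(f"no such input: {input_num}")
        if output_num not in self.crosspoints:
            raise ValueError(f"no such output: {output_num}")
        self.crosspoints[output_num] = input_num

    def fan_out(self, input_num: int, outputs: list) -> None:
        """One-to-many distribution: the same input can feed
        any number of outputs simultaneously."""
        for out in outputs:
            self.take(input_num, out)


router = CrosspointRouter(num_inputs=8, num_outputs=4)
router.fan_out(2, [0, 1, 3])   # camera on input 2 feeds three outputs
router.take(5, 2)              # playback on input 5 feeds output 2
print(router.crosspoints)      # {0: 2, 1: 2, 2: 5, 3: 2}
```

Note what the model cannot do: there is no method for scaling, keying, or dissolving. That absence is the point of the routing layer.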

3. Screen Switchers: Routing Applied to Displays

As AV expanded into corporate spaces, classrooms, and venues, routing logic began to attach directly to displays. This is where the informal term screen switcher emerged, often describing compact systems that decide which source appears on which screen.

Screen switchers choose which input goes to which display, sometimes with simple transitions. Inputs are typically laptops or media players. Outputs are projectors, flat panels, or LED displays.

Defining characteristics:

- Display-centric thinking

- Minimal processing

- Simple transitions

- No compositing hierarchy

Screen switchers answer: Which screen shows what right now?

This is still routing, just closer to the human experience of the room.

4. Presentation Switchers: Structured Layouts and Polish

As expectations grew, screen switchers began absorbing features from video switchers: scaling, layering, picture-in-picture, and basic keying. This middle ground became known as the presentation switcher.

Systems in this category remain screen-focused but introduce compositional logic. Layouts become intentional. Transitions become repeatable. Operator predictability matters.

Presentation switchers introduce:

- Layered outputs

- Deterministic signal paths

- Layout-based thinking

- Controlled visual polish

They answer: How should this screen be composed?

From a learning perspective, this is where routing becomes design.
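One way to picture "routing becomes design" is that a layout is a small, deterministic recipe rather than a live decision. A minimal sketch, with invented names and pixel values (no real product's layout format):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Layer:
    """One scaled, positioned window on an output canvas."""
    source: str   # which input fills this layer
    x: int        # position on the canvas, in pixels
    y: int
    width: int
    height: int
    z: int        # stacking order: higher draws on top

# A layout is just an ordered stack of layers. Recalling it always
# produces the same composition, which is what makes the operator's
# job predictable: a picture-in-picture over full-screen slides.
PIP_LAYOUT = [
    Layer(source="slides",  x=0,    y=0,   width=1920, height=1080, z=0),
    Layer(source="camera1", x=1344, y=756, width=512,  height=288,  z=1),
]

def render_order(layout):
    """Layers are composited bottom-up, sorted by z."""
    return [layer.source for layer in sorted(layout, key=lambda l: l.z)]

print(render_order(PIP_LAYOUT))  # ['slides', 'camera1']
```

The design choice worth noticing: the layout is data, not a sequence of operator actions, which is exactly the shift this section describes.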

It’s also where the boundaries start to blur. Modern systems increasingly mix traits across categories, and some high-end presentation switchers edge toward event-processor capabilities. The distinctions are best understood as conceptual models rather than rigid product boxes.

5. Event Processors: Screens Become Space

At large scale, displays stop behaving like frames with standard SMPTE resolutions and start behaving like architecture.

LED walls with irregular resolutions, multiple canvases, IMAG, broadcast feeds, confidence monitors, and redundancy requirements all converge. At this point, calling the system a “screen switcher” undersells its role. This is where the industry adopted the term event processor (sometimes called Hi-Res), not just as a marketing label, but as a practical one used by engineers, designers, and rental houses.

Systems like Analog Way Aquilon, Barco E2 / Encore, Christie Spyder, and similar platforms share a defining architecture. Newer platforms such as Pixelhue demonstrate how this category continues to evolve, borrowing increasingly from its predecessors' workflows while remaining purpose-built for live events.

They are layer-based, state-driven, resource-limited compositing systems.

They can perform:

- Cuts, dissolves, and transitions

- Multiple keyers and cut/fill

- Borders, shadows, masks, and crops

- Independent transitions across multiple canvases

But unlike traditional video switchers, they do not have fixed M/Es. Every visual decision consumes finite processing resources. Layer count, not input count, becomes the real limitation.
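That trade-off, layers drawing from a finite pool rather than living in fixed M/Es, can be sketched as a simple budget. The capacity number and canvas names below are invented for illustration; real systems meter resources in vendor-specific ways.

```python
class LayerBudget:
    """Toy model of an event processor's finite compositing pool.

    Unlike a fixed-M/E switcher, there is no per-bus structure:
    every layer placed on any canvas draws from one shared budget.
    """

    def __init__(self, total_layers: int):
        self.total = total_layers
        self.allocated = {}  # canvas name -> layers in use

    def request(self, canvas: str, layers: int) -> bool:
        """Try to place layers on a canvas; fail when the pool is spent."""
        if sum(self.allocated.values()) + layers > self.total:
            return False  # the show design exceeds the hardware
        self.allocated[canvas] = self.allocated.get(canvas, 0) + layers
        return True


pool = LayerBudget(total_layers=8)        # invented capacity
print(pool.request("main LED wall", 4))   # True
print(pool.request("IMAG left", 2))       # True
print(pool.request("confidence", 3))      # False: only 2 layers remain
```

This is why show design on an event processor starts with a layer count, not an input list: the failure above happens at design time, not during the show.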

Event processors answer the biggest question yet: What does the entire visual environment look like right now?

Event processors introduce a fundamentally different way of thinking about live video. Instead of focusing on individual actions or layouts, operators manage system state, a living configuration of layers, canvases, and destinations that exists continuously until changed. Transitions don’t “happen” so much as the system moves from one valid state to another.

The Hidden Shift: From Actions to States

One of the least-discussed but most important changes in this evolution is the move from action-based systems to state-based systems.

- Video switchers are action-driven (cut, take, auto).

- Routers are purely stateful (the current crosspoint is the truth).

- Event processors are state-based systems that simulate actions for operator comfort.

At scale, snapshots (presets) and recall matter more than individual transitions, and mistakes feel architectural rather than operational.

Understanding this explains why broadcast-trained operators sometimes struggle on event processors: the system isn't asking what to do next, it's asking what reality (preset) should be true.
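The action/state distinction can be made concrete in code. In this hedged sketch (invented names, not a real control protocol), the action-based API mutates one thing at a time, while the state-based API declares the whole target reality and lets the system work out the difference:

```python
# Action-based thinking: each call is an imperative step.
class ActionSwitcher:
    def __init__(self):
        self.program = None

    def take(self, source: str) -> None:
        """'What is the next shot?' -- one decision at a time."""
        self.program = source


# State-based thinking: the operator recalls a complete preset,
# and the system computes whatever changes get it there.
class StateProcessor:
    def __init__(self):
        self.state = {}  # destination -> source

    def recall(self, preset: dict) -> dict:
        """Return only the crosspoints that actually change, then
        adopt the preset as the new reality."""
        changes = {dest: src for dest, src in preset.items()
                   if self.state.get(dest) != src}
        self.state = dict(preset)
        return changes


proc = StateProcessor()
proc.recall({"main": "slides", "imag": "cam1"})
delta = proc.recall({"main": "slides", "imag": "cam2"})
print(delta)  # {'imag': 'cam2'} -- only the difference is executed
```

The "simulated action" the article mentions is visible here: the operator experiences a single recall, but the system expresses it as a diff between two valid states.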

Who Bought These Systems and Why That Matters

Another reason terminology fractured is simple: different buyers.

- Broadcast engineers bought video switchers.

- Facilities teams bought routers.

- IT and AV departments bought presentation switchers.

- Rental and production companies demanded event processors.

Each group brought different priorities, vocabulary, and tolerance for ambiguity. Manufacturers adapted language accordingly, often choosing market comfort over technical clarity.

In today’s market, many products are hybrids that span categories, but these historical “buying centers” still influence how systems are packaged and described.

What Comes Next: Software, Hardware, and Who Holds the Risk

Software platforms like vMix and OBS point toward the next phase of live production, not because they replace event processors, but because they reveal something event processors tend to hide:

The limits aren’t in the software. They’re in the machine underneath it.

Hardware-centric event processors are resource-partitioned: fixed limits, consistent behavior, and guardrails that protect the operator.

Software-centric production systems are compute-limited: flexible, scalable, and ultimately dependent on CPU/GPU power, drivers, capture hardware, and the operator's ability to design within real-world constraints.

In other words, hardware tends to enforce boundaries. Software tends to transfer responsibility.

So the big question for the industry isn’t “which is better?” It’s this:

Do we want systems that guarantee outcomes or systems that maximize freedom and make the operator responsible for the consequences?

That decision changes everything:

- How we train operators

- How we spec shows

- How we design redundancy

- What “reliable” even means in a world where the production switcher might also be one OS update away from chaos

Closing Thought

The evolution from video switchers to event processors isn't about better effects or higher resolution. It's about a bigger shift in what we're actually trying to control, beyond just pixels:

- Show this camera

- Route this signal

- Fill this screen

- Define this visual state

- Describe this environment

Once you see the progression clearly, the tools stop competing; they start making sense.

And that’s the difference between operating a system and truly designing a show.
