
Podcast w/ Bennett Liles: Signals from Space

Managing I/O and display at NASA Payload Ops

When you have a spacecraft flying by a planet, there's no time for a video system failure, so NASA recently outfitted its Payload Operations Center with new video display routing. In this interview we talk to RGB Spectrum's Bob Ehlers about how the company helped install the mission-critical video system.

Bennett Liles: What kind of a place is the Payload Operations Center?

Bob Ehlers: Well, that's at the Marshall Space Flight Center down in Alabama, and it's one of the main locations where flight control for a lot of satellites and exploratory programs is run. It's a state-of-the-art facility for managing ground support for the International Space Station. It was involved most recently with the New Horizons flyby going out past Pluto, and there was a lot of telemetry and control going on there. It really is the showcase for advanced audio and video technologies. They have an extremely large wall and many, many operators. The information that is up on the wall is time-critical, mission-sensitive, and requires very high reliability, very high image quality, and fast switching. These people are scrutinizing the imagery coming back from remote vehicles that might be orbiting the Earth or flying away from it, looking for minute details, and they don't want to lose any of those details while they're looking at it. It's also used for abnormal situation management and control in those types of instances.

I know that they have a lot of workers there who need to be coordinated in what they’re seeing and how they’re switching various sources. They’ve got computers, video feeds … What else are they watching?

They have everything from computer outputs, which might be SCADA human-machine interface systems, where they're actually looking at the machinery being operated either on ground control systems or up on the remote devices. They're looking at news feeds from different sources. They may be looking at weather information. They may be looking at video feeds that are coming in as IP streams, H.264 streams, etc. Generally, the OmniWall processor, which is our 32-port processor running the room, allows them to take those 32 input signals, combine them, and display them simultaneously on the wall as windows side by side, or scale them across multiple monitors so that they can get the resolution they need and the detail they want from those individual systems. They can also put the systems up there next to each other to do comparative analysis. One of the main things our systems offer is very, very low latency, so they can look at it frame by frame and see all the data in real time to make the real-time comparisons they need as they're monitoring experiments and crew in space.
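To picture what that windowing looks like in practice, here is a minimal sketch of a wall layout model: a pixel canvas spanning the monitor grid, with each input assigned a window rectangle that can sit beside others or scale across several monitors. The class, field names, and example placements are illustrative assumptions, not RGB Spectrum's actual configuration schema.

```python
from dataclasses import dataclass

@dataclass
class Window:
    source_id: int   # which of the 32 inputs feeds this window
    x: int           # left edge on the wall canvas, in pixels
    y: int           # top edge
    width: int       # scaled width; may span several physical monitors
    height: int

# Assume a wall of 1920x1080 monitors arranged 2 rows x 12 columns.
CANVAS_W, CANVAS_H = 12 * 1920, 2 * 1080

layout = [
    Window(source_id=1, x=0,    y=0, width=3840, height=2160),  # telemetry scaled across a 2x2 block
    Window(source_id=7, x=3840, y=0, width=1920, height=1080),  # news feed on a single monitor
    Window(source_id=9, x=5760, y=0, width=1920, height=1080),  # H.264 IP stream beside it for comparison
]

for w in layout:
    # Basic sanity check: every window must fit on the wall canvas.
    assert 0 <= w.x and w.x + w.width <= CANVAS_W, "window falls off the wall"
    assert 0 <= w.y and w.y + w.height <= CANVAS_H, "window falls off the wall"
```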

What all does the processor do?

It accepts up to 32 inputs and can have 32 outputs. I believe in this installation it's, for the most part, standard DVI inputs that they're routing to the system, since all the computer systems are in close proximity. But the system also supports inputs from HDBaseT sources over extenders. We have fiber sources that can be put into it, and we can take those as inputs and outputs to extend the distances, so if we needed to run to a remote display somewhere we could certainly do that.

OK, and those links are coming in a lot of different ways. They have direct downlinks for sure from the Space Station, but there are signals coming in from other terrestrial locations and things like that?

Yeah. I mean, really, we're agnostic to where the sources are coming from. We're taking standard video signals and allowing multiple disparate systems to be integrated together. The only thing the sources have to share is that they're outputting a video signal of some type, whether it's VGA all the way up to 1080p. We even do 1920×1200, a higher resolution than 1080p. We can combine all those sources and output them to the contiguous wall, and then we can also output them to side monitors for monitoring and viewing that may not be contiguous to the wall. So we can define smaller walls within the system.
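As a rough illustration of defining smaller walls within one system, the sketch below groups output ports into a main wall plus non-contiguous side-monitor groups. The port numbers, group names, and structure are hypothetical, not the processor's real configuration format.

```python
# Hypothetical grouping of a processor's outputs into independent "walls".
output_groups = {
    "main_wall":  {"outputs": list(range(1, 9)), "grid": (2, 4)},  # 8 displays, 2 rows x 4 cols
    "side_left":  {"outputs": [9, 10],           "grid": (1, 2)},  # 2 monitors beside the wall
    "side_right": {"outputs": [11, 12],          "grid": (1, 2)},
}

for name, group in output_groups.items():
    rows, cols = group["grid"]
    # Each group's grid geometry must account for exactly its output ports.
    assert rows * cols == len(group["outputs"]), f"{name}: grid does not match output count"
```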

Does this system have any kind of redundant power or a power fail-over feature?

Yeah. The OmniWall 32 has redundant power supplies. They're hot-swappable and fed from disparate power sources, so they can be removed and reinserted as needed, and I'm sure they've got all kinds of backup power for everything there. That said, this is not your typical stressful environment where people are really putting the hardware to the test. We have other customers who've installed our OmniWall or our Linx switchers, our media wall processors, in places that certainly aren't as friendly. We've had them taken out by the military into tents in the sand, and of course, that cakes the air filters on the devices right up. The units are designed so the air filters can be extracted, blown clean, washed, and reused right away. And of course power is always an issue, whether it's being stabilized or not. Our systems are designed for a wide range of operating conditions; whether they're getting sags or surges on the supply, they clean up pretty well, and those aren't reflected in any of the video elements up on the wall.

I wanted to get a little more into the OmniWall itself. I know it has a setup routine and once you’ve done a lot of these installations you can probably breeze right through it, but how is it initially set up and configured?

The OmniWall is an embedded product. It uses field-programmable gate arrays (FPGAs) and a processing architecture that's designed for video. It has a client-server interface, basically run from a web browser, that allows you to go in and configure the inputs and the outputs, the wall layouts, and create presets for the system operation. That's all done through the GUI and what's called the Web configuration program, or WCP. Once the system is set up and you've configured the timings and the layouts for your wall designs, we have a product we call VIEW Controller, which is the end-user control interface. So the integrator would use our WCP interface to set up the OmniWall, and then the operator, the end user, would use the VIEW Controller, which is really simplified point-and-click and allows the user to recall presets and move windows around up on the wall, etc. So there's one design for the power user for setup and configuration, and then there's the end-user tool, which is greatly simplified and designed around a non-technical user.
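The division of labor he describes, a power-user tool that defines layouts and an operator tool that only recalls them, can be sketched as follows. The function names and preset structure are purely illustrative, not the actual WCP or VIEW Controller API.

```python
# Illustrative preset store/recall mirroring the setup-vs-operation split.
presets: dict[str, dict] = {}

def save_preset(name: str, layout: dict) -> None:
    """Integrator side: store a complete, validated wall layout under a name."""
    presets[name] = dict(layout)

def recall_preset(name: str) -> dict:
    """Operator side: one-click recall; no layout editing is exposed."""
    return presets[name]

# Hypothetical usage: the integrator saves a layout once, the operator recalls it.
save_preset("launch_monitoring", {"windows": [{"source": 1, "rect": (0, 0, 3840, 2160)}]})
print(recall_preset("launch_monitoring"))
```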

I was just thinking about what it must be like to work in NASA's Payload Operations Center, with the people you would be talking to and what they do. It must have been a real blast for somebody to get in there.

This was really a fairly complex installation. They had a combination of lots of SDI video inputs; they had DVI inputs and even some dual-link DVI inputs that were coming from all the different computer systems they used to monitor the launch activities. Then they routed that out through some extender systems to a pretty large wall. It was a 2×12 wall that kind of wrapped across the entire front of the room, and that in and of itself, just the scope of the project, is pretty large and pretty challenging by anybody's estimation. Before our installers and integrators take on work like this, we generally have them come out here to our headquarters in Alameda and get trained. We have courses that are certified by InfoComm for continuing-education credits on using and configuring all of our products. We have 101, 201, and 301 classes. By the time they leave here they've had hands-on experience configuring and programming. They know how to replace the boards and the cards, and they know how to work with our design and engineering team. On a large project like this, we actually sent some of our support engineers out for a couple of days to assist the integrator in doing the setup. We generally will quote and include what we call commissioning: working side by side with the integrator to get the configuration just right and making sure that the system is working as designed before we step back.
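For a sense of scale: assuming each panel in that 2×12 wall runs at 1920×1080 (the interview doesn't give per-panel resolution, so this is an assumption), the combined canvas works out to roughly 50 megapixels.

```python
rows, cols = 2, 12              # wall geometry from the interview
panel_w, panel_h = 1920, 1080   # assumed per-panel resolution

canvas_w, canvas_h = cols * panel_w, rows * panel_h
total_pixels = canvas_w * canvas_h

print(f"canvas: {canvas_w}x{canvas_h} = {total_pixels / 1e6:.1f} Mpx")
# canvas: 23040x2160 = 49.8 Mpx
```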

What would be the most important aspect in getting this system going? From what you’ve said here I would think that it would be just having well-trained people setting it up and operating it.

Well, training and knowing what you're doing, what to look for. That's where our partnerships with our integrators really matter. This particular installation had distribution amplifiers involved where they were splitting the signals. There was fiber-optic cable on the inputs. There were HDBaseT extenders going out. There were monitor groupings, an AMX controller, and a whole litany of things that went into the design. Making sure that you're able to do proper mullion compensation is kind of a fine art of counting pixels: knowing how to offset the space so that you don't lose any pixels between your images, and getting the compensation right so that when people look at the wall they don't see gaps and their eyes aren't taken aback by discontinuities in the image. You also have to get the EDIDs set correctly, and make sure that all the timings and resolutions being passed from the sources into the processor, and from the wall processor up to the displays, are correct. And then, of course, there are also the challenges of things like HDCP, which include matching all of the sources and making sure that you don't have encoding and security conflicts in the system.
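The pixel counting behind mullion compensation can be made concrete. The idea is to convert the physical bezel gap between adjacent panels into an equivalent number of source pixels, using the panel's pixel pitch, so content appears to pass behind the bezel instead of stretching across it. The panel dimensions below are assumed values for illustration only.

```python
# Rough mullion-compensation arithmetic (assumed panel specs).
panel_px_w   = 1920    # horizontal resolution of one panel
panel_mm_w   = 940.0   # assumed active-area width in millimetres
bezel_gap_mm = 3.5     # assumed bezel-to-bezel gap between adjacent panels

pixel_pitch = panel_mm_w / panel_px_w            # mm per pixel, ~0.49 mm here
hidden_px   = round(bezel_gap_mm / pixel_pitch)  # source pixels "behind" the bezel

print(f"offset each panel's content by {hidden_px} px per seam")
# With these numbers: 3.5 / 0.49 ~= 7 px hidden at each mullion
```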

There’s got to be a lot of testing to do. How long would it normally take to wring out a system like that and tweak everything to be working exactly right?

Well, in a typical operation, you're looking at a provisioning time of maybe two days; two to three days. That comes down to making sure that you've planned this appropriately and identified all of your sources in as much detail as possible. You've identified the cable distances that you're trying to run. You're making sure that all the products you've selected in the mix are interoperable, and that you don't have, as we said, timing issues or latency issues that might cause problems, particularly when we start getting into HDCP. When you have protected content, there are latency limitations that the cryptography will allow, so there are a lot of different variables that go into designing the system well. If you can gather all those things up front and reduce the number of unknowns before you go into the project, your chances of being successful and having it be a short installation go way up. If you miss things, that's what will catch you; you'll often spend most of your time on the 10 percent of exceptions you find, the things that weren't documented when you went into the project. Our team here that does technical support and design services spends a lot of time asking all of the right questions.
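In the spirit of reducing unknowns up front, a planning tool might run simple pre-checks like the toy sketch below over the documented source list. The distance limits here are illustrative placeholders, not vendor specifications.

```python
# Toy pre-provisioning checks: flag sources whose planned cabling or content
# protection needs attention before installation day. Limits are assumptions.
MAX_DVI_COPPER_M = 5     # assumed safe single-link DVI copper run
MAX_HDBASET_M    = 100   # nominal HDBaseT reach

def check_source(name: str, link: str, cable_m: float, hdcp: bool) -> list[str]:
    issues = []
    if link == "dvi" and cable_m > MAX_DVI_COPPER_M:
        issues.append(f"{name}: {cable_m} m exceeds bare DVI reach; plan an extender")
    if link == "hdbaset" and cable_m > MAX_HDBASET_M:
        issues.append(f"{name}: beyond HDBaseT reach; consider fiber")
    if hdcp:
        issues.append(f"{name}: protected content; verify end-to-end HDCP handling")
    return issues

for problem in check_source("ops_pc_3", "dvi", 22.0, hdcp=False):
    print(problem)
```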

There must be a transitional period where your people have it up and running and they’re handing it off to the actual operators and holding training sessions for them, too.

That’s correct. We operate typically in a “train the trainer” type of model, where the integrator or integrator partner will be trained on how the system is installed. We make sure that they fully understand it, that they know how all the end-user tools, the controllers, etc., all work so that the end-user can get useful training and they don’t spend a lot of time struggling with the operation of the system.

In a facility like this where the stakes are very high, things evolve and technology marches on, and there will eventually be upgrades needed. What do you think is going to be coming along in the future? New video formats or other features that aren’t here yet?

Well, obviously we've released our next-generation product, our MediaWall V processor, and that processor is currently focused on HDMI. So we've kind of moved the focus of the input and the output from being DVI-centric and SDI-centric to HDMI. That will be transitioning up to DisplayPort. There are going to be signaling types and connectorization types that change, and the resolutions keep growing: where the OmniWall was capable of doing 1080p, and with dual-link could actually get up to 2560×1600, which is pretty high resolution, the MediaWall V goes up to full UHD and 4K resolution. There are also new standards for HDCP coming out, like HDCP 2.2. All of our products are modular, so we can support transitions of technology.
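The pressure behind those changing signal and connector types is mostly bandwidth. A back-of-envelope calculation for the resolutions he mentions, assuming 60 Hz refresh, 24-bit color, and roughly 25 percent blanking overhead (rough figures, not a specific standard's timing):

```python
# Approximate uncompressed link bandwidth per resolution.
def gbps(w: int, h: int, hz: int = 60, bpp: int = 24, blanking: float = 1.25) -> float:
    return w * h * hz * bpp * blanking / 1e9

for label, (w, h) in {"1080p": (1920, 1080),
                      "2560x1600": (2560, 1600),
                      "UHD 4K": (3840, 2160)}.items():
    print(f"{label:>9}: ~{gbps(w, h):.1f} Gbit/s")
# 1080p ~3.7, 2560x1600 ~7.4, UHD ~14.9 -- roughly why connector standards keep changing
```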
