
A Campus-wide IP Video Network for a State University, Part 2

At North Carolina State University, the campus-wide video system was upgraded to IP video with Haivision's Video Furnace, zero-footprint InStream player and Stingray set-top boxes


Jun 8, 2010 12:01 PM

Listen to the Podcasts: Part 1 | Part 2

Editor’s note: For your convenience, this transcription of the podcast includes timestamps. If you are listening to the podcast while reading its accompanying transcription, you can jump to any part of the audio by simply dragging the slider on the podcast to the time indicated in the transcription.

At North Carolina State University, the campus-wide video system was upgraded to IP video with Haivision’s Video Furnace, zero-footprint InStream player and Stingray set-top boxes. Peter Maag is here to get into the details on how these features of the system work and how NC State is using them.

OK, Peter, in Part 1 we were talking about the Video Furnace installation at NC State. And there are so many different things on a university campus you can use video-over-IP for that you could talk for an hour about the applications alone. How many channels are they currently using down there? Do you know?
Peter Maag:
I think they have 20 channels lit up with the expansion room for 10 more.

And one of these is what they called the “Wolf Channel.” Do you know what that is?
I believe so. I’m not very familiar with the content that’s being pumped through the system, but that’s a very interesting point, because as I was saying in Part 1, many of the universities installed the system straight on the cost benefit of pumping live TV around. But once they have an IP delivery system, it’s so easy for them to add on the real power of video over IP, which is to take prerecorded, created content and launch their own channels against a schedule, or allow content to be accessed via video on demand. So there are really three elements in what I would call push video technology for distributing video: one is live channel distribution, the second is video on demand, and the third is creating your own TV channels and setting them up to play on a scheduled program plan. [Timestamp: 2:13]

And I guess some of the sources on this could be video production classes or the university communications people and you mentioned before, I believe—network emergency notices. In that case, I guess, it would be public safety who would have to have some sort of input to the server to put these things on.
Yes, for the emergency systems, there would be actually API level conduits between an EAS system, or an emergency alert system, and the Video Furnace. So it would kind of be an automatic push for the warnings, storm warnings, or whatever emergency alert comes across. But the ability to ingest and organize and schedule playouts of content is actually very powerful because a lot of these institutions want to launch their own TV stations. And some of it is live for live events but in the non-live times they need to fill it up with their own content. [Timestamp: 3:07]
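To make the idea of an API-level conduit concrete, here is a minimal sketch of what an EAS bridge might send to the video server to pre-empt channels. The endpoint shape, field names, and “override” action are assumptions for illustration, not Haivision’s actual API.

```python
import json

def build_alert_payload(alert_text, channels):
    """Build the JSON body a hypothetical EAS bridge might POST
    to the video server to interrupt normal programming."""
    return json.dumps({
        "action": "override",     # pre-empt the scheduled content
        "message": alert_text,    # e.g. a storm-warning crawl
        "channels": channels,     # channel IDs to interrupt
    })
```

In practice the bridge would POST this to an authenticated endpoint on the server; the sketch stops at the payload so the push itself stays out of scope.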

At the user end, what kind of bit rates are we talking about at various points of viewing on this?
The bit rates of the video are typically dictated by the encoders and our encoders support anywhere from about 300k up to 15Mbps; 15Mbps would be very high-quality, high definition. Typically [with] standard-definition H.264 people will set at around 1.5Mbps to 2Mbps, and high definition people will set around 4Mbps, 5Mbps, or 6Mbps for traditional TV viewing; different market segments have their different sweet spots depending on the complexity of the content or whatever. But yeah, so the typical standard definition would be going out around maybe 2Mbps. [Timestamp: 3:56]
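Those per-channel figures are what network planners sum when provisioning the backbone. A rough back-of-the-envelope sketch, using the typical bitrates mentioned above (the lineup itself is illustrative, not NC State’s actual configuration):

```python
# Illustrative lineup using the typical bitrates discussed above.
lineup_mbps = {
    "sd_channel": 2.0,      # standard-definition H.264, ~1.5-2 Mbps
    "hd_channel": 6.0,      # high-definition, ~4-6 Mbps
    "hd_low_variant": 2.0,  # reduced HD stream for soft players
}

def multicast_backbone_load(lineup):
    """With multicast, each channel traverses a shared link once,
    so backbone load is the sum of channel bitrates, independent
    of how many viewers tune in."""
    return sum(lineup.values())
```

The key property is that adding viewers does not add backbone load; adding channels does.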

OK, and you mentioned before the server controls the players and the set-top boxes. What is the high/low streaming feature?
Yeah, that’s a very interesting feature, and it’s something that we introduced. It’s similar to a controlled version of what the industry would call “adaptive streaming.” But there are a lot of circumstances around a university. Let’s say you’re pumping a high-def channel and you want it received by both the set-top boxes and the soft players. A high-def channel has a comfort area of 4Mbps, 5Mbps, or 6Mbps, and that’s the type of bandwidth that you want to direct to dedicated devices such as set-top boxes. But at the same time, you might want to take that live input source and make it available to PCs or Macintoshes that—perhaps because they’re older—don’t have the horsepower to decode such a large high-definition stream, because decoding takes up CPU power. So you might want to reduce the frame rate a little bit, reduce the resolution a little bit, and hit a 2Mbps high-definition stream—which is very beautiful full-screen on your laptop or even in a window—and have the laptop viewers access the lower-bit-rate stream, which is less intrusive to their device. [Timestamp: 5:29]
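The high/low decision reduces to directing each device class to the right variant of the same source. A minimal sketch, with assumed stream names and bitrates:

```python
# Two variants of the same live source; figures follow the "comfort
# areas" described above and are illustrative.
STREAMS = {
    "high": {"bitrate_mbps": 6.0, "resolution": "1920x1080"},
    "low":  {"bitrate_mbps": 2.0, "resolution": "1280x720"},
}

def select_stream(device_class):
    """Set-top boxes can decode the heavy stream; soft players on
    older PCs and Macs get the lighter variant of the same source."""
    if device_class == "set_top_box":
        return STREAMS["high"]
    return STREAMS["low"]
```

Unlike per-segment adaptive streaming, the choice here is made once per device class, under the server’s control.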

In a university environment like NC State, who is controlling all this?
Both the IT and media department would be organizing all of that. In many cases, it’s in tight cooperation with the curriculum departments as they launch their course reserve material and make it available through video on demand. So you’ll have a number of different departments involved. But when push comes to shove, it’s a network device. It’s a video network, and the IT department is absolutely in full control of that, so they would provide the infrastructure, they would tune the network. And in some cases they would drive the administrative interface, and in other cases they would allow sections of the administrative interface to be driven by other people—perhaps the audiovisual department that wants to capture and log material. So through different rights access, we can segment different administrative zones of the Furnace for the particular user. [Timestamp: 6:31]

And how does the network video recorder work?
Network video recording—that’s quite a hot topic these days. People really like to know that if they’re investing and putting media onto the network, they have the ability to record it, edit it, classify it, associate metadata with it, and make it available for retrieval. Typically this is used for capturing, let’s say, classrooms. And the network video recorder is actually quite flexible; it can be triggered a number of different ways. If you have content that’s coming in on a schedule, such as programmed content, you can assign network video recorder resources to capture that in the future. Kind of like TiVo—“I want to capture this show next week between 2 p.m. and 3 p.m., or every week going forward on a Wednesday”—that’s a scheduled recording. You also have the ability to do crash recording, and a lot of that can be done either through third-party devices in the classroom, such as a Crestron or some type of room controller, or through the web interface. That’s where you could actually start, stop, and pause a recording. And we actually have a very new feature coming out that’s designed to help people retrieve points of interest and areas of interest within their recordings, and we call that feature HotMarks. That’s the ability to inject metadata into the recording in real time, while it happens. So you could be going through a class and there could be a particular moment—“OK, we’re starting the Q and A period”—and the user could bookmark in real time that that is when that section of the class was initiated. So it’s some pretty interesting technology that allows video to be captured, but also allows video to be organized and searched through with great efficiency going forward. [Timestamp: 8:35]
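The HotMarks idea—metadata stamped at the current offset while a recording is in progress—can be modeled in a few lines. The class and field names below are assumptions for illustration, not the Furnace’s actual data model:

```python
class Recording:
    """Toy model of an in-progress network video recording."""

    def __init__(self, title):
        self.title = title
        self.elapsed = 0.0   # seconds since recording started
        self.hotmarks = []   # (offset_seconds, label) pairs

    def tick(self, seconds):
        """Advance the recording clock (stands in for real capture time)."""
        self.elapsed += seconds

    def hotmark(self, label):
        """Bookmark the current moment in real time,
        e.g. 'Q&A period begins'."""
        self.hotmarks.append((self.elapsed, label))
```

Because each mark carries its offset, a viewer can later jump straight to the Q&A period instead of scrubbing through the whole class.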

And that’s a pretty big deal too because that’s one key area where print has an advantage over streaming audio or podcast—in that certain information within those programs is much easier to zero in on and find. So anything that would speed up being able to find things on a video or audio stream is a really good deal.
Well, it’s our vision—and here I’m putting on my marketing hat; that’s what I’m supposed to do, right? We’re bringing forward the company’s tag of intelligent video, and it’s certainly my firm belief that all video content going forward is going to be as easy to search as text is. And because there is going to be so much noise generated by that, we are really going to have to add user-generated information on top of it to make the retrieval of the video much more powerful. [Timestamp: 9:28]

And this thing has, I would assume, pretty sophisticated reporting features?
Don’t even get me started on that. It’s really quite amazing: because of the client-server architecture of InStream that I referred to earlier, the server knows exactly what every user is doing at all times. So from a server perspective, we can collect information on whether the video is minimized, whether it’s muted, whether there are windows overlaid on top of it—we can report on who accessed what, when, to the absolute greatest amount of detail. But with our commanding feature, which is related to that, we can also—if there’s a campus-wide broadcast, for example—make all active players full-screen and turn up the volume, or set or limit the volume on all of the players. So that’s the type of control that people are looking for when they’re putting out systems that deliver such a vast amount of media. [Timestamp: 10:33]
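Because the server tracks every active player, a campus-wide command is just a state change fanned out to all of them. A minimal sketch, with assumed player fields:

```python
class Player:
    """Toy model of an active InStream-style soft player's state."""
    def __init__(self):
        self.fullscreen = False
        self.volume = 50
        self.muted = False

def command_all(players, fullscreen=None, volume=None):
    """Push a server-side command to every active player,
    e.g. full-screen at full volume for a campus-wide broadcast."""
    for p in players:
        if fullscreen is not None:
            p.fullscreen = fullscreen
        if volume is not None:
            p.volume = volume
            p.muted = False  # an audible alert overrides mute
```

The real system would push these changes over its control channel; the sketch only shows the fan-out logic.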



I would think the IT people who are going to be the primary ones in charge of this really gravitate to those reporting features. And being able to not only tell what’s going on for, say, security reasons and organizational things, but just for their own peace of mind as to being able to take a snapshot of what’s happening and see if they can tailor the system for better use once they have an idea of who’s using it, what for, and how much.
Oh, exactly. We have detailed graphs and reports on what videos are watched and when, and the amount of information is really amazing. When I use those types of systems personally—let’s say for a website—it’s very important because it allows the people who are creating the content, and the people who care about how their content is consumed, to do a much better job. When you know how the users are behaving, you can start tweaking and fiddling and make that experience so much richer. [Timestamp: 11:32]

It’s your own built-in Nielsen rating system.
Exactly. That’s exactly it.

So what’s Slide Caster all about?
Slide Caster is a pretty important tool. When you do a recording—let’s say from the classroom—we have the ability to capture not only the video, but we can take a computer input at the same time and associate that firmly with the timeline of the video. So you can replay the camera that might be on the teacher and the whiteboard, but as well, let’s say, his PowerPoint presentation. And we don’t do that the way that most systems do. Most systems would say, “OK, launch the camera, upload your PowerPoint, and every time you hit the forward key we’ll advance it on the server.” We’re actually capturing the physical output of his computer and associating that with the captured video, so it’s almost a multistream video capture type of environment. So the elements are always tied together, and we do that using some specific synchronization technology that we actually implemented throughout the system. [Timestamp: 12:42]
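The synchronization Peter describes boils down to stamping each slide change with its offset on the video timeline, then looking up which slide was on screen at any playback time. A minimal sketch, with illustrative event values:

```python
def slide_at(slide_events, t):
    """Return the slide on screen at video time t, given
    (offset_seconds, slide) change events."""
    current = None
    for offset, slide in sorted(slide_events):
        if offset <= t:
            current = slide
        else:
            break
    return current

# Illustrative capture: slide changed at 0s, 95s, and 240s.
events = [(0.0, "title"), (95.0, "agenda"), (240.0, "results")]
```

Because the slide track and the video share one timeline, scrubbing the video automatically lands the viewer on the right slide.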

And when you’ve got a big installation like this going into a place like NC State, what are the kind of steps that you have to go through on the installation—I mean from the time that you know what you’re going to do to the time that you get in there and have it all ready to show people?
The IT department has to do the network planning. If you’re deploying video, you have to make sure there are no choke points, and you have to make sure that bandwidth to all of the anticipated end points is provisioned—and it doesn’t take an Internet architect to do that; that’s fairly simple. The next step is to make sure that multicast is enabled—sometimes that’s a hurdle in some organizations, but at universities it’s typically no problem whatsoever. The next thing that you have to do—and the installation team for Video Furnace is very adept at this—is a site survey, and within the site survey we work very closely with the IT department to identify beforehand all of the port numbers, all of the IP addresses associated with the system, and all of the multicast addresses that the channels are going to be broadcast out on. We work with them to make sure that their firewalls are adapted to those streams as necessary, and we take all of this information back to our manufacturing, or systems staging, plant in Chicago, where we “kick” the system—that’s our term for it; in normal words, we configure a server or a server cluster as the client needs, with the exact information that the university has provided. So when we ship and install a Furnace system, there is not a lengthy on-site process for determining all of this. It’s all been predetermined; it’s kicked into the system. They plug in the power and the Ethernet, they flick the power switch, and it’s a very wonderful out-of-the-box experience. So typically we can launch a system within a few hours at a university. [Timestamp: 14:53]
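The output of such a site survey is essentially a channel plan: multicast group, port, and bitrate per channel, agreed on before the server is staged. A sketch of what that data might look like and a sanity check on it—the addresses, ports, and bitrates are placeholders, not the university’s actual values:

```python
# Placeholder channel plan as it might come out of a site survey.
channel_plan = [
    {"name": "Campus News",  "group": "239.1.1.1", "port": 5004, "mbps": 2.0},
    {"name": "Wolf Channel", "group": "239.1.1.2", "port": 5004, "mbps": 4.0},
]

def validate_plan(plan):
    """Catch duplicate multicast group/port pairs before the system
    is configured off-site; return total provisioned Mbps."""
    seen = set()
    for ch in plan:
        key = (ch["group"], ch["port"])
        if key in seen:
            raise ValueError(f"duplicate multicast endpoint: {key}")
        seen.add(key)
    return sum(ch["mbps"] for ch in plan)
```

Checking the plan before staging is exactly what makes the on-site step a plug-in-and-power-on affair.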

So what’s been the reaction from the university so far to this thing?
Everybody at the university is extremely positive about the installation. I think we have yet another university that’s a great fan of the technology and the ease of deployment. The students have access, where they’re allowed, to the videos that they need to get to, as well as the free-to-air TV that’s being broadcast, and of course the administrators love the reliability and the ease of use of the system. So I think, overall, they’re very happy clients. And like I said before, it’s amazing talking to the IT people who have implemented this system about what a hands-off type of deployment it actually is. They don’t have to issue system configuration guides; they don’t have to install software anywhere. The system comes preconfigured, and people can actually get down to what they need to do—which is, aside from establishing the live content across the site, ingesting the video-on-demand assets and establishing the playback channels for their internal TV stations, etc. [Timestamp: 16:03]

All right, Peter Maag with Haivision. The NC State IP video network that they’re breaking ground on now, it looks like they’ve got a lot of different uses for this that they’re going to be expanding into. And thanks very much Peter for being here to give us the details on this.
My pleasure, Bennett. Thank you very much.

