Bridging the Gap: Understanding the fundamentals of distance learning is the key to tapping this lucrative market.
May 1, 1998 12:00 PM, Allan Lakey
Strolling by the conference room, you overhear, "Give me 3x BRI, H.320, H.281, G.711-722-728, 12-point MCU, FCIF, QCIF, Fractional T1, H.323, T.120 data collaboration and H.261." You might assume the dialog is a scene from a television hospital drama. Chances are, it was a group of very experienced engineers discussing the world of video teleconferencing.
Complicated? It may be hard to believe, but today's videoconferencing technology is easier to use than ever before. The videoconferencing market is segmented into two main arenas: corporate communications and distance learning. We are, however, seeing the two areas merge. Training in the corporate market really is distance learning. As a result, the corporate market has driven distance learning to new levels.
It was not long ago that the industry had limited choices of products, manufacturers and price points, which limited the potential applications. This is not the case in today's marketplace. The implementation of standards-based protocol has brought tremendous growth to the videoconferencing industry and increased product variety. This standardization has also allowed standards-based products to be developed and become more cost effective to fit an increased customer base.
One such application that has benefited tremendously from these advancements is distance learning. A couple of years ago, only a few of the largest universities benefited from distance learning. The two main causes for the delay in wide acceptance of videoconferencing in distance-learning applications were the equipment costs and the service provider fees. In the early days of videoconferencing, most systems operated with transmission connectivity of Half T1 (768 kbps, six BRI ISDN lines) or Full T1 (1,536 kbps, 12 BRI ISDN lines). The cost to have these high-speed digital lines installed and maintained was high with respect to the rate of return on use. Prohibitive costs were not limited to the educational environment; they pushed the limit on the cost-savings benefit for the corporate marketplace as well.
How has the videoconferencing industry progressed to make the distance-learning market such an area of growth? There are several reasons, one of which would have to be the acceptance and usage of worldwide minimum standards-based protocol. It was just a couple of years ago that a videoconferencing system from one manufacturer could not communicate with another, and most systems were completely proprietary. The ITU (International Telecommunication Union, whose standardization sector was formerly known as CCITT) worked with the industry manufacturers to develop a standards-based set of communication protocols. The ITU, one of the specialized agencies of the United Nations, was founded in 1865 (before telephones were invented) as a telegraphy standards body. Some important and commonly discussed standards are:
Communication: H.320: ITU-T minimum standards-based communication protocol.
H.281: ITU-T recommendation for far-end camera control in A-V communications.
H.231: Multipoint control units for A-V systems using digital channels up to 1,920 kbps.
H.243: Multipoint control unit chair control for A-V services.
Video: H.263: Increased resolution video compression and decompression.
H.261: Bandwidth-efficient video compression and decompression.
CIF: Common Intermediate Format (352 x 288 pixels).
QCIF: Quarter Common Intermediate Format (176 x 144 pixels).
4CIF: 4 x Common Intermediate Format (704 x 576 pixels), still image (H.261 Annex D).
SQCIF: Sub-Quarter Common Intermediate Format (128 x 96 pixels), decode only.
Audio (H.320 specifies three types of audio): G.711: 48 to 64 kbps narrow band.
G.722: 48 to 64 kbps wide band.
G.728: 16 kbps narrow band.
Data: T.120: Protocol standards for data collaboration using videoconference bandwidth.
This is a small sample of standards, which raises another important issue. The standards-based protocol is but a minimum specification for manufacturers. H.320 actually contains three levels of implementation. Class 1 is the minimum level of support; Class 2 is Class 1 with some optional features, and Class 3 is Class 2 with all optional features. The importance of these classifications is that not all systems are created equal. A product can claim to be H.320-compliant but not have the performance and features of a higher-level H.320 class of product.
One of the items affected by the H.320 classifications is the frame rate, or frames per second (fps). The ITU-T H.320 standard requires support of frame rates of 7.5, 10, 15 or 30 fps, dependent upon class position. The lower frame rates do not provide the smooth motion inherent in broadcast television; the higher the frame rate, the better the image. The specification calls for each classification to meet particular frame-rate requirements: Class 1 denotes a frame rate of 7.5 fps; Class 2 denotes a frame rate of 15 fps, and Class 3 denotes a frame rate of 30 fps.
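The class-to-frame-rate relationship above amounts to a small lookup table. The sketch below captures it in Python; the class numbers and rates come from the article, while the function name is our own:

```python
# Frame rate (fps) associated with each H.320 implementation class,
# per the classification described above.
H320_CLASS_FPS = {1: 7.5, 2: 15, 3: 30}

def minimum_fps(h320_class: int) -> float:
    """Return the frame rate an H.320 system of the given class supports."""
    try:
        return H320_CLASS_FPS[h320_class]
    except KeyError:
        raise ValueError(f"Unknown H.320 class: {h320_class}")

print(minimum_fps(3))  # a Class 3 system delivers full-motion 30 fps
```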
Using a standards-based classification, however, does not imply simplicity. The use of H.320 opens up communication compatibility among various manufacturers' products. Unfortunately, even manufacturers who deliver Class 3 systems are affected by manufacturers who are not providing products of equal performance. Systems that communicate are always forced to the lowest common denominator. This applies to both transmission speeds and H.320 features: the higher-performing system will revert to the lower standard.
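The lowest-common-denominator rule can be sketched as a simple negotiation: the call runs at the lesser of each endpoint's capabilities. The Endpoint fields and function below are illustrative only, not part of any ITU-T recommendation:

```python
# Sketch of the "lowest common denominator" behavior described above: when
# two endpoints connect, the call is forced down to the weaker system.
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    h320_class: int   # 1, 2 or 3
    max_kbps: int     # highest transmission speed supported

def negotiate(a: Endpoint, b: Endpoint) -> dict:
    """Return the common operating parameters for a point-to-point call."""
    fps_by_class = {1: 7.5, 2: 15, 3: 30}
    common_class = min(a.h320_class, b.h320_class)
    return {
        "h320_class": common_class,
        "fps": fps_by_class[common_class],
        "kbps": min(a.max_kbps, b.max_kbps),
    }

room = Endpoint("university", h320_class=3, max_kbps=384)
cart = Endpoint("community college", h320_class=1, max_kbps=128)
print(negotiate(room, cart))  # the Class 3 system reverts to Class 1 at 128 kbps
```

The point of the sketch is only the min() in each field: no matter how capable one side is, the connection inherits the weaker side's class and speed.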
This collaboration in communication development created the potential for manufacturers to develop systems that could communicate on an open platform across manufacturers and still implement their own improved proprietary set of algorithms for one-to-one communications. Industry leaders quickly adopted this new set of standards.
The early days
The inability of products from different manufacturers, communicating via different protocols, to talk to one another caused severe limitations in how, and by whom, a distance-learning network could be used and joined. If one state university chose one product line, and several community colleges chose another, they could not communicate. This did not, however, slow the interest in the potential benefits videoconferencing could bring to distance learning.
One example occurred only a few years ago at a school district that wanted to offer foreign language instruction to its high school students but did not have the budget for a full-time teacher. The district could afford the cost of equipping a classroom with videoconferencing, along with the link to another school district within the state that already had videoconferencing and a curriculum that included foreign languages. The district actually purchased the videoconferencing hardware and had the ISDN service installed, only to find out that the system was incompatible with the communications format of the main district. Why didn't they just find out what the other district had and purchase the same equipment? The answer may have depended on whether the equipment was purchased from a service provider, an A-V vendor or through a state-run agency.
Again, many schools, as well as the corporate world, realized the benefit and potential applications. One analogy that could sum up the videoconferencing industry during this time: think of the problems you would encounter on a daily basis if Microsoft Word and WordPerfect could not open each other's files.
These early systems were already providing quality images at 30 fps, audio echo cancellation and the ability to employ control systems. However, one fact was very evident in the early stages: to obtain optimal audio and video performance, an integrator was a necessity to combine technologies. The fact remains that no one manufacturer builds every product to the identical specification, but several leaders emerged to bring the communications standards together and allow cross-platform communication.
Technology today
Today's technology provides advantages on several levels. While the cost of ISDN service has declined, the availability of and applications for ISDN have increased. Today, many service providers promote ISDN service for business and residential use with Internet access and videoconferencing. This rise in popularity has caused the cost of ISDN service to drop, making videoconferencing available to more users.
Hardware then had to rise to the occasion of affordability. Equipment that could provide the required image and sound quality without the necessity of Half T1 or Full T1 was needed. Not surprisingly, the industry has seen growth in systems that use transmission connectivity of 56 to 384 kbps. A system that functions at 384 kbps requires only three BRI ISDN lines while still providing image quality exceeding that of the older systems running Half T1 and Full T1. A system with today's technology running at 384 kbps can provide full-motion video at 30 fps, echo cancellation and increased use of data collaboration.
Another major step forward is the technological advancement in cameras. Enhanced image quality came with a reduction in system cost. Several manufacturers recognized the growing demand for videoconference cameras and began development specifically for these applications.
In the early days, most distance-learning networks or systems were custom built by integrators. These systems were complicated and difficult to operate, and the integration of a control system was almost a necessity. The majority of these offerings were developed as systems with the main focus being a closed-loop application, which typically meant that the main teaching site was the sole control center. This was before the ITU-T standards addressed remote site control. Distance-learning applications expanded when these communication standards fell into use.
Proper support
One of the most important aspects of distance-learning systems is the support of the networks and individual sites. This major element should never be overlooked. These systems typically have multiple users who have a tendency to work with individual flair, meaning that the users will adjust the systems and leave them in various configurations. This often requires a service call from a technician. Most codec manufacturers were developing or using remote diagnostics for software upgrades and system diagnostics. The only drawback to this engineering feature was that it did not allow remote troubleshooting of other products in the system.
Forcing a technician to make a service call every time something is not working properly can be quite costly, but this is all about to change. On a visit to the new Crestron Electronics facility in New Jersey, I saw a preview of the new products to be shown at INFOCOMM '98 in Dallas. You might ask yourself what could be so revolutionary about a control system.
Crestron has designed the next generation of control systems that are fully Ethernet compatible. (I know we have not even touched on Ethernet; that is another article.) This opens up a new design concept of control system integration and use. Imagine the possibilities of being able to call into any distance-learning site to check any problem as well as modify system programming. A control system that is truly Ethernet capable will allow you to use dedicated TCP/IP addresses and even permits dialing into systems via the Internet. Once you are connected to the Crestron control system, you can monitor, configure and analyze any product in the system. You can even set up a conference for a non-technical user from your office.
Control systems have always had the capability to operate in Ethernet communication networks. Until now, the drawback has been the method of communication: via an external box that simply communicates to an RS-232 port on the control system. This is similar to taking a Ferrari engine and using a transmission from a Yugo to deliver power to the wheels.
The use of an Ethernet control system will allow a network to be monitored worldwide from one or several locations. If a given site has a problem, it could even alert the monitoring control system via a help button. This would then allow the technician to call directly into the location requesting assistance and see if it is a hardware or operator error. It would even allow the monitoring technician to control the system from the remote site.
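To make the idea concrete, here is a minimal sketch of querying a remote site's status over plain TCP/IP. Everything in it is hypothetical: the one-line "STATUS" request/reply protocol, the port and the replies are invented for illustration and are not Crestron's actual interface. A stand-in "control system" runs in a thread so the example is self-contained:

```python
# Hypothetical sketch of polling a remote site's control system over TCP/IP.
# The "STATUS" request/reply protocol is invented for illustration only.
import socket
import threading

def fake_control_system(server: socket.socket) -> None:
    """Stand-in for a remote site's control system: answers one status query."""
    conn, _ = server.accept()
    with conn:
        if conn.recv(1024).strip() == b"STATUS":
            conn.sendall(b"codec=online camera=online vcr=fault\n")

# Start the pretend remote site on an ephemeral local port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=fake_control_system, args=(server,), daemon=True).start()

# The monitoring technician's side: dial in, ask for status, read the reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"STATUS\n")
    reply = client.recv(1024).decode().strip()
server.close()

print(reply)  # e.g. "codec=online camera=online vcr=fault"
```

A real Ethernet control system would speak its manufacturer's own protocol; the point is only that an ordinary TCP connection is enough to interrogate a remote site's hardware from anywhere on the network.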
Additionally, system capability has grown rapidly. One of the first signs of this growth would be the expanded list of manufacturers offering complete product lines for videoconferencing. Not only has the list of manufacturers grown, but so have the product offerings. Today, systems are available as dedicated codec units, roll-about systems, desktop PC configurations, desktop videophones, briefcase videophones, portable set-top systems and packaged systems designed for distance learning and telemedicine.
Several manufacturers have developed dedicated systems for the distance-learning marketplace. Such systems may have a specially designed teaching podium that houses most, if not all, of the system hardware. These companies have attempted to provide the complete turnkey solution of integrated hardware, thereby presenting a nice package that will meet many of the system requirements seen today.
NEC developed the Teaching Pro 5000 system to meet the growing demand in distance learning. The Teaching Pro 5000 provides a system featuring a codec with transmission speeds up to 1,536 kbps, echo cancellation, PC transmission with T.120 data collaboration, a VCR, a slide-to-video converter, a graphics illustration pen for writing on top of display feeds, preview-and-send capability and ParkerVision camera systems. A Crestron touchscreen controls the codec and all its components. PictureTel, Tandberg and VTEL have also developed systems for this market. The Tandberg unit is designed as a presentation/teaching podium similar to NEC's. The PictureTel Socrates is a smaller system housed in a portable rack.
Getting started
First, you will need to determine if the system will require an MCU (multipoint control unit) or a service provider that can provide this equipment. The MCU provides the ability to have multiple locations communicating as one. A standard system operates as a one-to-one call; the MCU acts as the bridge to allow multiple sites to communicate and be seen by each other or just by one. The bridge also supports ITU-T communications standards that allow the host (teaching) site to control the activity of bridged calls. This can allow multiple windows on a single screen and can lock out the audio of various sites unless activated. This is the first step in the design of a distance-learning network. Once this has been chosen, you can analyze the particular needs of each site. The primary site will require the most intensive control package. In this example, we are working on a system with only one primary teaching area; the remote sites are classrooms that will be joining into the main site.
Next, ask these questions:
Will there be connection to multiple remote sites at one time? If yes, how many could be online together? This will determine the size and capability of an MCU.
Will the remote sites require control of the host site equipment? This answer will determine the control complexity of the remote site.
Will the remote sites be permanently installed rooms or portable? If some rooms will not have installed systems, they might be able to use the newer low-cost portable systems.
Do you know what transmission speeds will be acceptable for the network? (For 128 kbps with a maximum of 15 fps, one BRI ISDN line is required; for 384 kbps with a maximum of 30 fps, three BRI ISDN lines are required; for 768 kbps with a maximum of 30 fps and increased bandwidth, six BRI ISDN lines are required, and for 1,536 kbps with a maximum of 30 fps and increased bandwidth, 12 BRI ISDN lines are required. This will determine the type of codec you will require. Not all codec manufacturers have systems capable of transmission speeds higher than 384 kbps.)
How many students will each location support? (This could determine how many mics will be required. This, in turn, could dictate using an external echo cancellation system to handle the increased requirements better.)
How many locations would benefit from camera systems that use tracking features?
Does the instructor site require PC data collaboration? (This will determine the design specification for the PC display and data collaboration to be used within the codec.)
Does the instructor site require a touchscreen interface? If yes, would it benefit from a color display? Would they like a touchscreen that could provide video and computer images on screen for preview and control?
Does the instructor site require wireless touchscreen control?
Would the system benefit from using a wireless mouse for the PC data collaboration? (If the instructor might need to move around the room while teaching, an RF rather than infrared mouse is a necessity. Many things, such as line of sight and fluorescent lights, can affect the infrared units. The RF wireless mouse can operate without these limitations.)
Will the main instructor site require the system to be moved for operation from multiple rooms? (This will determine the type of teaching podium required to house the equipment and teaching control.)
These questions will provide a solid foundation of information to begin specification, and their answers might dictate the use of one of the many pre-packaged distance-learning systems available today. If the system's needs move beyond these capabilities, the products available will provide tremendous performance and options.
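The bandwidth question in the checklist above reduces to simple arithmetic: each BRI ISDN line carries two 64 kbps B channels, or 128 kbps, so a 384 kbps call needs three lines and a 1,536 kbps call needs 12. A quick sketch (the function name is ours; the speeds are those from the checklist):

```python
# Each BRI ISDN line provides two 64 kbps B channels = 128 kbps.
import math

BRI_KBPS = 128

def bri_lines_needed(kbps: int) -> int:
    """Number of BRI ISDN lines required for a given transmission speed."""
    return math.ceil(kbps / BRI_KBPS)

# The speeds from the checklist and their line counts:
for speed in (128, 384, 768, 1536):
    print(f"{speed} kbps -> {bri_lines_needed(speed)} BRI line(s)")
# 128 -> 1, 384 -> 3, 768 -> 6, 1536 -> 12
```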
Glossary of terms
Analog transmission: The way information is transmitted over a continuously changing electrical wave. All telephone calls used to be transmitted in an analog format. Today, they are translated to digital pulses for both local and long-distance transmission.
Application sharing: A feature that allows two people to work together without the same application/software. This allows multiple participants to make changes to a shared document.
ATM (Asynchronous Transfer Mode): A standard implementation of cell relay, which is a packet-switching technique using packets (cells) of a fixed length. It is asynchronous in the sense that the recurrence of cells containing information from an individual is not periodic.
bps (bits per second): A unit of measurement for the speed of data transfer.
Bandwidth: Describes how much information can be pushed through an electronic pipe at any given time.
Baud: Rate of data transmission.
BRI (Basic Rate Interface): ISDN standard that governs how phones and other electronic devices are connected to the ISDN switch.
Bridge: Usually made up of back-to-back codecs from different manufacturers to convert signals from one proprietary system to another.
Carrier: Refers to various telephone companies that provide local, long-distance or value-added services.
CCD (Charge Coupled Device): Used in cameras as an optical scanning mechanism.
CCITT (Consultative Committee for International Telegraphy and Telephony): Now called the International Telecommunication Union's Telecommunication Standardization Sector, or TSS. An international body responsible for establishing interoperability standards for communications systems. The world's leading telecommunications standards organization.
CIF (Common Intermediate Format): An international standard for video display formats developed by the TSS.
Codec: Short for coder/decoder. This device compresses (for transmission) and decompresses (once received) digital video and analog audio signals so that they occupy less bandwidth during transmission.
Compression: Any of several techniques that reduce the number of bits required to represent information in data transmission or storage, thereby conserving bandwidth and/or memory.
Continuous presence: The transmission of two or more simultaneous images.
Demodulator: A videoconference receiver circuit that extracts or demodulates the wanted signals from the received carrier.
Distance learning: The implementation of video and audio technologies into the educational process so that students can attend classes and training sessions in a location distant from the one where the course is being presented.
DS-1: The Level 1 standard for digital systems operating at 1.544 Mbps (24 DS-0 channels). Also known as T1.
DS-3: Digital Signal Level 3. This term is used to refer to the 45 Mbps digital signal carried on a T3 facility.
Echo cancellation: An electronic circuit that attenuates or eliminates the echo effect on videoconference telephony links.
Echo effect: A time-delayed electronic reflection of a speaker's voice.
Encoder: A device used to alter a signal electronically so that it can only be viewed on a receiver equipped with a special decoder.
FCIF (Full Common Intermediate Format): Describes the type of video format transmitted using TSS standard coding methods.
Fractional T-1: FT-1, or fractional T-1, refers to any data transmission rate between 56 kbps and 1.544 Mbps.
Full-motion video: Video reproduction at 30 fps for NTSC signals or 25 fps for PAL signals. Also known as continuous-motion video. Videoconferencing systems cannot provide 30 fps for all resolutions at all times, nor is that rate always needed for a high-quality, satisfying video image.
H.320: A recommendation of the ITU-T based on Discrete Cosine Transform, CCM and motion compensation techniques. It can be a video system's sole compression method or a supplementary algorithm used instead of a proprietary algorithm when two dissimilar codecs need to interoperate. H.320 includes a number of individual recommendations for coding, framing, signaling and establishing connections. It also includes three audio algorithms: G.711, G.722 and G.728.
ISDN (Integrated Services Digital Network): A telecommunications standard allowing communications channels to carry voice, video and data simultaneously.
ISDN Ordering Code: A predefined number that tells the phone company how to provision your ISDN line based on the requirements of your ISDN hardware.
ITU (International Telecommunication Union): One of the specialized agencies of the United Nations, founded in 1865 (before telephones were invented) as a telegraphy standards body.
Jitter: The deviation of a transmission signal in time or phase. It can introduce errors and loss of synchronization in high-speed synchronous communications.
Kbps (Kilobits per second): Refers to a transmission speed of 1,000 bits per second.
Kilohertz (kHz): Refers to a unit of frequency equal to 1,000 Hertz.
LAN (Local Area Network): A computer network linking workstations, file servers, printers and other devices within a local area, such as an office. LANs allow the sharing of resources and the exchange of both video and data.
Mbps: Megabits per second.
Megahertz (MHz): Refers to a frequency equal to one million Hertz, or cycles per second.
Microwave: Line-of-sight, point-to-point transmission of signals at high frequency.
Modulation: The process of manipulating the frequency or amplitude of a carrier in relation to an incoming video, voice or data signal.
Modulator: A device that modulates a carrier. Modulators are found as components in broadcasting transmitters and in videoconference transponders.
MPEG (Moving Picture Experts Group): MPEG has established standards for compression and storage of motion video.
Multiplexing: Techniques that allow a number of simultaneous transmissions over a single circuit.
Multipoint: Communication configuration in which several terminals or stations are connected. Compare to point-to-point, where communication is between two stations only.
Multipoint Control Unit (MCU): A device that bridges multiple inputs so that three or more parties can participate in a videoconference. The MCU uses fast switching techniques to patch the presenter's or speaker's input to the output ports representing the other participants.
NT-1 (Network Termination Type 1): A device that converts the two-wire line coming from your telephone company into a four-wire line. The NT-1 is physically connected between the ISDN board of your videoconferencing system and your ISDN phone line.
NTSC (National Television Standards Committee): A video standard established by the United States (RCA/NBC) and adopted by numerous other countries. This is a 525-line video with a 3.58 MHz chroma subcarrier and 60 cycles per second. Frames are displayed at 30 fps.
Packet switching: Data transmission method that divides messages into standard-sized packets for greater efficiency of routing and transport through a network.
Pan: To pivot a camera in a horizontal direction; tilt is to pivot in the vertical direction.
PBX (Private Branch Exchange): A telephone switch, usually located on a customer's premises, connected to the telephone network but operated by the customer.
Pixel: The smallest element of the computer or television display on the raster scale.
POTS (Plain Old Telephone Service): Conventional analog telephone lines using twisted-pair copper wire. This is used to provide residential service.
Real time: The processing of information that returns a result so rapidly that the interaction appears to be instantaneous. Telephone calls and videoconferencing are examples of real-time applications.
RJ-11: The most common telephone jack in the world. This is a six-conductor modular jack wired with four wires. You probably have RJ-11 jacks in your house.
RJ-45: An 8-pin connector jack used with standard telephone lines and required by some ISDN hardware. A little larger than an RJ-11 jack.
Scrambler: A device used to alter a signal electronically so that it can only be viewed or heard on a receiver equipped with a special decoder.
Service Profile Identifier (SPID): A number or set of numbers assigned to your ISDN line by your phone company. In the United States, one SPID is assigned to each channel. The switch uses SPIDs as unique identification numbers for each ISDN line so that it can determine where to send calls and signals.
Signal-to-Noise Ratio (S/N Ratio): The ratio of the signal power to the noise power. A video S/N ratio of 54 dB to 56 dB is considered to be excellent, that is, of broadcast quality.
Splitter: A passive device (one with no active electronic components) that distributes a television signal carried on a cable into two or more paths and sends it to a number of receivers simultaneously.
Spread spectrum: The transmission of a signal using a much wider bandwidth and power than would normally be required.
Synchronization (sync): The process of aligning the transmitter and receiver circuits so that they operate in step with one another.
10Base-T: Standard Ethernet; a variant of IEEE 802.3 that allows stations to be attached via twisted-pair cable.
T1: The transmission bit rate of 1.544 Mbps.
T.120: A standard for audiographics exchange. Although H.320 does provide a basic means of graphics transfer, T.120 supports higher resolutions, pointing and annotation. Users can share and manipulate information much as they would if they were in the same room, even though they are working over distance and using a PC platform.
TELCO: Generic term for telephone company.
Telemedicine: The practice of using videoconferencing technologies to diagnose illness and provide medical treatment over a distance.
WAN: Wide Area Network.
Whiteboarding: A term used to describe the placement of shared documents on an on-screen shared notebook or whiteboard. Desktop videoconferencing software includes snapshot tools that enable you to capture entire windows or portions of windows and place them on the whiteboard.
X.25: A set of packet switching standards published by the CCITT.
Y/C: In component video, the "Y" (luminance) signal is kept separate from the "C" (hue and color saturation) signal to allow greater control and enhance the image.