MPEG2 for DBS

by Henry Davis

© Copyright Henry Davis 1995,
First magazine rights Miller Freeman, Inc. (Communications Systems Design)

This article first appeared in the September 1995 issue of Communications Systems Design

MPEG video compression has been the subject of much media hype in the computer press during the past two years. Even though PC use of MPEG has lagged, we believe that the continuing convergence of computers with TV appliances will fuel the adoption of MPEG in the computer segment. While the computer industry has sung the praises of multimedia and MPEG, broadcast entrepreneurs have been methodically installing an infrastructure that now regularly delivers video via MPEG technology. Consumers in many markets can directly receive MPEG encoded video transmissions from providers like DirecTV. The direct satellite broadcast approach has resulted in one of the most rapid diffusions of technology into the consumer marketplace ever. In the first year of commercial availability, DirecTV sold over 350,000 units. That's over ten times the penetration rate of the VHS VCR. In addition, some broadcasters, premium channels, and cable companies use MPEG encoded streams as a part of their transmission network.

MPEG (Motion Pictures Experts Group) was an outgrowth of earlier standards work for digital compression of still pictures. The international MPEG committee started in 1988 with the goal of standardizing video and audio for compact discs. The compression work for MPEG 1 was based upon film or other progressive sources. By 1990 the MPEG committee had created a data structure syntax for Source Input Format (SIF) video and compact disc audio using a combined data rate of 1.5 Mbit/Sec. This system approximated the perceptual quality of VHS consumer video tape, making MPEG video compression a visually acceptable technology. Although this MPEG standard was viable for progressive sources like film, it lacked the techniques to deal with standard broadcast interlaced video with good compression.

In 1992 over 200 companies from around the world were involved in the MPEG draft development, demonstrating strong support for the technology specification. Today, MPEG 2 syntax has been adopted for the United States Grand Alliance HDTV specification, the European Digital Video Broadcasting Group, and for the high density compact disc. MPEG 2 and MPEG 2 "near compliant" is also the backbone of commercially operational Direct Broadcast Satellite systems like DirecTV. Since the standard was not finalized before the DBS systems had to begin their infrastructure development, there is a lag in making the video stream of commercial systems fully MPEG 2 compliant. This marketing choice has some implications for set top decoders that we'll discuss later. MPEG encoded video offers broadcasters the ability to transmit more programs on a given transmission channel. This means more channels at a lower infrastructure cost. In addition, digitally encoded streams eliminate drift, remove certain kinds of analog distortion, and reduce system wide maintenance.

If MPEG is so widely embraced by diverse groups and has so many benefits, then why isn't it more pervasive in personal computer systems? We categorize the issues into three groups: cost to the consumer, title availability, and misunderstandings about MPEG capability.

Cost to the Consumer

DBS customers must purchase set top decoders specific to their preferred broadcaster. These set top units have retailed in the $700 to $800 range. With stabilization of the broadcast standard and increased consumer volume, we expect these prices to drop by 50% by the end of the year. End consumers of motion video products for PCs have been presented with two MPEG 1 decoder alternatives. The first has been a low update rate software decoder capable of a few frames per second when running on higher performance personal computers. The resulting jerky motion is not satisfactory for most consumers, even for video games. Software-only decoders are now capable of near real-time operation for meaningful sized images when hosted on a Pentium-class machine. This means that MPEG video clips can now replace the lower resolution, slow speed alternatives. Microsoft has announced that it will deliver an MPEG software decoder with Windows 95 and NT, IBM will support MPEG in OS/2 Warp 3.0, and Apple bundles MPEG playback in QuickTime 2.0. Achieving adequate performance consumes nearly the entire machine, meaning that application developers and users must choose between good video and high performance applications. Although the software decoders are resource intensive, this alternative provides a very low cost entry for consumers wishing to experience full motion video using MPEG. Most importantly, the mainstream operating system vendors have explicitly endorsed MPEG 1 as a video decoder for PCs.

The second PC alternative is based on hardware decoders, which have been fairly expensive. MPEG decoder integrated chips remained at a high price due to the relative lack of competition. Two years ago Sigma Designs provided the sole PC based MPEG hardware decoder. Today there are about a dozen suppliers of add-in MPEG decoder boards. Now that most of the major semiconductor firms have produced or announced MPEG decoder devices, we expect prices to decline. We believe that the cost to the consumer for a hardware accelerated MPEG solution will likely fall below $50 within the next three years, based on a scaleable hardware/software decoder architecture that can be embedded either on the motherboard or in a high performance video card.

Title Availability

DBS as provided by DirecTV began operations with 50 programming channels and can grow to over 200. Since their source material comes from standard progressive and interlaced systems and is encoded at their uplink facility, there is a virtually unlimited array of choices. Further, since the distribution is via wireless broadcast, there are no market supply/demand issues for the distribution channel; discs and tapes are stocked only when demand has been proven. Broadcasting happens when the provider is willing to risk their money to add capability.

For computer video, consumers buy applications to satisfy a number of wants and needs. But, the technology is never the application. For motion video, applications include the control software to create a game, educational program, video player, or other video related product. Title availability had been held back by a lack of industry cooperation on the Application Programming Interface (API). Since no standards existed for the API in 1993, Sigma Designs created their own interface to play Philips CD-i discs and CDs containing proprietary extensions for interactive titles. Concern over Sigma Designs' ownership of the API and the long term potential for competitive conflicts of interest led to the formation of the Open PC MPEG Consortium (OM1). The group of 90 hardware, software, and content suppliers created a Windows MPEG 1 MCI Command Set that is very similar to Sigma's but eliminates the proprietary features. The existence of an open industry standard for MPEG 1 applications makes developing titles a less risky proposition, and when combined with direct OS API support will encourage title providers to embrace the technology.

Applications also require the availability of appropriate source materials. Since the broadcast industry is using MPEG, it is reasonable to expect that the library of source material will expand both for PC based interactive titles and pure playback products. Unlike many other technologies, MPEG has multiple market segments that will use the underlying technology. If for no other reason than access to the converted feature length films produced for other markets, personal computer video will include MPEG decode as an option. The limitation of source material may become an ongoing issue for PCs based on the broadcast industry's choice of MPEG 2 as compared to the current PC choice of MPEG 1. MPEG 1 is not a perfect subset of MPEG 2 even though the two standards use many of the same compression approaches. In general, MPEG 1 streams created by a full featured encoder will not be compatible with MPEG 2.

MPEG Capability

MPEG compression relies on lossy techniques. This means that the compressed image will always have less resolution and detail than the original source. However, MPEG exploits characteristics of the human visual system to minimize the viewer's ability to perceive errors and distortion. It is important to keep in mind that all MPEG techniques have been judged on a subjective basis to arrive at a compromise that is most satisfactory to those people who participated in the draft specification of the international standard. While objective statistical tests are specified by the ISO document, the algorithms themselves were not subjected exclusively to objective measurement.

MPEG currently consists of two operating specifications, MPEG 1 and MPEG 2. MPEG 1 was developed for progressive source materials like film, while MPEG 2 was enhanced to address the interlaced materials common in broadcast TV. Both standards include video, audio, and systems components such as time stamping for synchronization. MPEG 1 defines a bit stream syntax for compressed audio and video optimized to not exceed a bandwidth of 1.5 Mbit/Sec. The bandwidth restrictions fit the capabilities of single speed uncompressed CD ROM and Digital Audio Tape. Many people have taken the bandwidth design goal as a fundamental limit to MPEG 1 capability, but that is not the case. MPEG 1 defines the ability to process fields up to 4095 x 4095 and bit rates of 100 Mbit/Sec. As a practical systems tradeoff, many suppliers produce systems capable of much lower levels of resolution. Table 1 details some common MPEG digital image resolutions and their existing counterparts.

180x120  QSIF (Quarter SIF), video clips (PC quasi standard image size)  
352x240  SIF, CD Whitebook Movies, video games 
352x480  HHR, VHS equivalent
480x480  Bandlimited 4.2 MHz broadcast NTSC 
544x480  Laser disc, Bandlimited PAL/SECAM 
640x480  Square pixel NTSC 
720x480  CCIR 601, Studio D-1, upper limit of Main level 

	Table 1
The syntax of MPEG 1 and 2 provides efficient ways to represent image sequences in compact coded data form. Figure 1 depicts a taxonomy of image sources from progressive, such as movie film, to interlaced such as television. In addition to the structure of the bit stream, the MPEG specification defines the reconstruction process. Algorithms that make up the decoding process are determined in large part by the semantics of the MPEG bit stream definition. The semantics of MPEG can be used to exploit video characteristics such as spatial redundancy, uniform motion, spatial masking, and temporal redundancy. Each of these characteristics can be the basis for image compression and data reduction. While the precise mechanism for decoding is not rigorously defined, the results are. The specification includes maximal permitted error in the reconstructed image. Some of these video characteristics and their exploitation are detailed in Table 2.
Condition                       Coding technique
---------                       ----------------
Spatial correlation             transform coding with 8x8 DCT (Discrete
                                Cosine Transform)
Visual response of eye          lossy scalar quantization of DCT
                                coefficients exploits reduced visual acuity
                                at higher spatial frequencies
Wide area correlation           prediction of DC coefficients in the 8x8
                                DCT block
Spatial masking                 macroblock quantization scale factor
Content dependent coding        macroblock quantization scale factor
Bit stream token encoding       variable length coding of macroblock
                                address increment, macroblock type, coded
                                block pattern, error magnitude of motion
                                vector prediction, and DC coefficient error
Sparse matrix of DCT coefs      end of block token
Local picture characteristics   adaptive quantization, block based coding,
                                macroblock type
Constant step sizes in
adaptive quantization           special macroblock type codes provide new
                                quantization
Temporal redundancy             motion vectors and forward/backward
                                macroblock use with 16x16 granularity
Smooth optical flow areas       prediction of motion vectors
Occlusion                       forward or backward temporal prediction in
                                B pictures
Non-integer pel boundaries      half-pel interpolation
Limited motion in P pictures    skipped macroblocks
Co-planar motion in P pictures  skipped macroblocks


	Table 2
While much attention has been placed on the decoding process by articles in the popular press, we wondered why there was so much variation in the quality of the images that we viewed. Since the decoders are defined to produce specific outputs, we were able to test several decoder implementations to verify that they were within specification. Still, we noticed artifacts and other annoying features appearing in the playback of some MPEG streams. We tried adjusting various parameters of the decoders, but the problems remained. We then turned our attention to the encoding process. Our experiments in encoding show that the source of video data and the quality of the encoding process have a very strong impact on the quality of the decoded image. MPEG encoding appears simple, but has many areas where preprocessing and compression choices will have a substantial impact both on compression efficiency and playback quality. Conceptually, MPEG encoding is a seven step process:
1. Preliminary Scaling and Color Conversion
Many sources are in NTSC format. In order to make the information fit the SIF specification, encoders scale NTSC's 704x480 active area to 352x240. In addition, NTSC color information has to be translated from RGB to YUV. The YUV model separates the color information into independent luminance (brightness) and chrominance (hue) values, and this separation is what makes the chrominance reduction in the next step possible.
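As a sketch of the color-space step (the function name and these full-range BT.601-style coefficients are our illustration, not taken from the MPEG document itself):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to a luminance/chrominance triple using
    the common BT.601 full-range coefficients (an illustrative choice)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

# Pure white: maximum luminance, neutral chroma (approximately 255, 128, 128).
print(rgb_to_ycbcr(255, 255, 255))
```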
2. Color Subsampling
Luminance is the dominant component the human eye perceives when looking at a color image; hue is far less important. As a result, the encoder eliminates 75% of the chrominance values with little visible loss.
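A minimal sketch of this chroma reduction by 2x2 averaging (one common approach; real encoders may use other decimation filters):

```python
def subsample_420(chroma):
    """Average each 2x2 block of a chroma plane, halving both dimensions.
    Keeping 1 of every 4 chroma samples is the 75% reduction described above."""
    h, w = len(chroma), len(chroma[0])
    return [[(chroma[y][x] + chroma[y][x + 1]
              + chroma[y + 1][x] + chroma[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

full = [[100, 102], [98, 100]]     # a 2x2 chroma patch
print(subsample_420(full))         # one averaged sample: [[100.0]]
```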
3. Discrete Cosine Transformation
Each 8x8 block of image pixels is transformed into 64 coefficients, each representing the energy at one spatial frequency. Coarse patterns will result in many zeros, while patterns with much fine detail or high contrast lines will have fewer zero values. Images with little detail will have more zero values at the higher frequencies. No compression occurs in this step, but the results will be used later to achieve more compression.
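The transform can be written straight from the DCT-II definition; this naive version is for illustration only, since real encoders use fast factorizations:

```python
import math

def dct_8x8(block):
    """2-D DCT-II of an 8x8 block, computed directly from the textbook
    definition -- O(N^4), chosen for clarity rather than speed."""
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            cu = math.sqrt(0.5) if u == 0 else 1.0
            cv = math.sqrt(0.5) if v == 0 else 1.0
            s = 0.0
            for y in range(8):
                for x in range(8):
                    s += (block[y][x]
                          * math.cos((2 * x + 1) * v * math.pi / 16)
                          * math.cos((2 * y + 1) * u * math.pi / 16))
            out[u][v] = 0.25 * cu * cv * s
    return out

flat = [[50] * 8 for _ in range(8)]   # a block with no detail at all
coefs = dct_8x8(flat)
print(round(coefs[0][0]))             # all energy lands in the DC term: 400
```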
4. Quantization
The results of the DCT are divided by a constant and then rounded off. This accomplishes two things. First, it reduces the number of possible values. It also makes it more likely that adjoining values will be identical. Quantization has a major impact on the size of the final encoded video stream. Larger constants yield fewer possible values and increase the compression ratio. The encoder must trade off lost information (and hence video quality) against the density of compression. In general, quantization is a key method of ensuring that the video stream data rate never exceeds the throughput of the target output device.
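A toy quantizer showing how a larger step size manufactures the runs of zeros that later steps exploit (the step value 16 and the sample coefficients are arbitrary):

```python
def quantize(coefs, q):
    """Divide DCT coefficients by a quantizer step and round -- the lossy
    step: larger q means fewer distinct values and longer runs of zeros."""
    return [[int(round(c / q)) for c in row] for row in coefs]

row = [311, 28, -14, 6, 3, -2, 1, 0]   # one row of DCT coefficients
print(quantize([row], 16))             # -> [[19, 2, -1, 0, 0, 0, 0, 0]]
```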
5. Run-length Encoding
The quantized DCT values are read out in zigzag order from the DC coefficient to the highest frequency. The quantization already performed yields strings of identical values that can be encoded into a single token giving the value and the number of times it was repeated. Frames with little detail will generally compress more than those with a high degree of detail.
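Continuing the toy example, the run-length step might look like this (the `EOB` token mirrors MPEG's end-of-block idea, but the exact token set here is ours):

```python
def run_length(coefs):
    """Collapse already zigzag-ordered coefficients into (value, run)
    tokens; a trailing run of zeros becomes a single end-of-block token."""
    tokens, i = [], 0
    while i < len(coefs):
        j = i
        while j < len(coefs) and coefs[j] == coefs[i]:
            j += 1
        if coefs[i] == 0 and j == len(coefs):
            tokens.append(("EOB",))    # end-of-block: the rest is all zeros
        else:
            tokens.append((coefs[i], j - i))
        i = j
    return tokens

print(run_length([19, 2, -1, 0, 0, 0, 0, 0]))
# -> [(19, 1), (2, 1), (-1, 1), ('EOB',)]
```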
6. Huffman Coding
The tokens are then encoded by assigning the most common tokens to symbols that are as short as possible. This further compresses the image data by exploiting the distribution of token frequency.
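A sketch of the idea using a standard Huffman construction (real MPEG streams use fixed, pre-defined variable-length code tables rather than per-stream trees, so this is the principle rather than the format):

```python
import heapq
from collections import Counter

def huffman_code_lengths(tokens):
    """Build a Huffman tree over token frequencies and return each token's
    code length; the most frequent tokens receive the shortest codes."""
    freq = Counter(tokens)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries: (frequency, tiebreak, {token: depth_so_far}).
    heap = [(f, i, {t: 0}) for i, (t, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {t: d + 1 for t, d in {**a, **b}.items()}
        count += 1
        heapq.heappush(heap, (fa + fb, count, merged))
    return heap[0][2]

lengths = huffman_code_lengths(["EOB"] * 8 + ["(19,1)"] * 2 + ["(2,1)"])
print(lengths["EOB"] < lengths["(2,1)"])   # common token gets a shorter code
```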
7. Interframe Compression
Up to this point, the compression is very much like that performed by JPEG compression on still images. MPEG adds more compression by eliminating redundant data that appears on more than one frame. MPEG encoders may look ahead as many frames as desired to exploit repetitive blocks. Identical blocks of pixels common to two or more successive frames are replaced by a pointer that references a single copy of the block. The major complication has to do with the order of the frames. If the match occurs in a frame that has already appeared then the encoder replaces it with a pointer. If the frame hasn't appeared then the encoder replaces the block with a pointer and also reverses the order of the frames. During decoding the sequence will be put into the right order. Other interframe coding may be achieved by sophisticated motion detection techniques that describe the path of an object. This eliminates additional redundancy.
The encoding function is the part of an MPEG system most open to technological innovation. While there is potential for reduction in computational complexity in the decoder by clever restructuring, the encoding operation has the most latitude in how the transformations take place. For realtime conversion to an MPEG stream, there are limitations to how much processing can be accomplished without introducing delays in transmission. But, for archived materials like films and videotapes, there are no real limits to the extent of preprocessing that can be used. In our investigations, we discovered that the range of motion estimation has a significant impact on final image quality. Doubling the search range quadruples the search area, so there is a performance penalty associated with deeper searches. Further, it is possible for large search ranges to degrade the image. This appears to be the result of creating large differential motion vectors. Rate control is the other obvious impact on image quality. Rate control is achieved by different mechanisms in different encoders, but is a significant area for research and improvement. You need to pay careful attention to rate control when considering commercial encoding devices.

So Is It As Good As VHS, and How Much Compression to Expect?

Like many newly popularized technologies, MPEG's capability suffers from misstatement. We've already seen how MPEG 1 can handle images bigger than the SIF standard. That's just one of the limiting misstatements made about MPEG. Generally, these errors are simple misinterpretations of the standard based on a single implementation. Pick up any of the articles in the popular press about MPEG and you'll likely learn how MPEG compresses video an amazing 235 to 1. If this compression ratio were actually achieved, the 500 channel TV would be easily accommodated in the same bandwidth that three CCIR 601 streams consume. During our experiments we generally achieved compression of about 30 to 1. The huge difference between our results and the reported compression figures led us to evaluate how these high compression numbers could be reported.

In our discussions with various companies, we discovered how the very high compression numbers were generated. Companies wrongly assume that quoting a higher compression ratio is preferable. The fact is that bit streams either conform to the MPEG standard or they don't. And, the compressed stream either meets the delivery channels' bandwidth limits or it doesn't. With the goal of quoting the highest possible compression numbers, people begin with the most popular studio signal known as D-1 or CCIR 601 digital video. This signal is coded at 270 Mbit/sec. We derive the 270 Mbit/sec using the following:

luminance 858 samples/line * 525 lines/frame * 30 frames/sec * 10 bits/sample ~= 135 Mbit/sec
R-Y 429 samples/line * 525 lines/frame * 30 frames/sec * 10 bits/sample ~= 68 Mbit/sec
B-Y 429 samples/line * 525 lines/frame * 30 frames/sec * 10 bits/sample ~= 68 Mbit/sec
Total 27 Msamples/sec * 10 bits/sample = 270 Mbit/sec

Using this simplistic approach to defining compression, we come up with an amazing 235:1 by dividing 270 by the defined CD ROM data rate of 1.15.
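The arithmetic behind these figures is easy to check:

```python
# Reproduce the raw CCIR 601 rate and the headline "235:1" figure.
def mbit_per_sec(samples_per_line, lines, fps, bits):
    return samples_per_line * lines * fps * bits / 1e6

y  = mbit_per_sec(858, 525, 30, 10)    # luminance
ry = mbit_per_sec(429, 525, 30, 10)    # R-Y chrominance
by = mbit_per_sec(429, 525, 30, 10)    # B-Y chrominance
total = y + ry + by
print(round(total))                    # ~270 Mbit/sec
print(round(total / 1.15))             # naive "compression": ~235
```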

Let's look at this calculation a little closer. Television broadcast contains active information during the non-blanking intervals. So, only 720 out of the 858 luminance samples per line contain information. Actually, the number 720 is in debate among TV engineers, with a consensus that the actual number is somewhere between 704 and 720. In a similar manner, there are only 480 lines that contain picture information, with some debate about an upper limit of 496. For MPEG 1 and MPEG 2 conformance points, Constrained Parameters Bitstreams and Main Level respectively, the numbers are chosen to be 704 samples * 480 lines for luminance and 352 * 480 lines for each of the two chrominance images. Now we compute a new compression rate:

luminance 704 samples/line * 480 lines * 30 fps * 10 bits/sample ~= 104 Mbit/sec
chrominance 2 channels * 352 samples/line * 480 lines * 30 fps * 10 bits/sample ~= 104 Mbit/sec
Total ~207 Mbit/sec for a compression of 180:1 (207/1.15).

All of our calculations are based on studio quality equipment that uses 10 bit samples. MPEG defines 8 bit samples, so the actual compression ratio is 180 * (8/10) = 144:1. The additional 2 bits of quantization in studio equipment are used to suppress noise in multi-generation video. In our tests of alternative quantization levels, the additional 2 bits did not provide discernible improvement for single generation video.

So far we've considered the obvious source representation issues. Now there are a number of important but easily hidden signal qualities to consider.

The studio standard CCIR 601 represents the chroma signals with half the horizontal samples as the luminance signal. At the same time, it employs full vertical resolution. This ratio of subsampled components is designated 4:2:2. MPEG 1 and MPEG 2 both define the use of 4:2:0 for consumer applications. In this case both chrominance signals have half the resolution of the luminance signal. By reducing the resolution in the vertical direction, we now have a chrominance frame of 352 x 240. This gives an average of 1.5 samples per pixel, 1 for Y, 0.25 for Cr, and 0.25 for Cb. Now we recompute the compression as:

704 pixels * 480 lines * 30 fps * 8 bits/sample * 1.5 samples/pixel = 122 Mbit/sec yielding a compression ratio of 108:1.

The next question to address is the basic frame size. CCIR 601 is converted to a SIF image by subsampling 2:1 in both the horizontal and vertical directions. Overall, this results in a 4:1 drop in the compression ratio. Subsampling is an important pre-processing step since it affects the quality of the compressed image. Computationally simple reduction is done by discarding every other line or sample. Higher quality compressed images can be developed from pre-processed images that are the result of FIR filtering or other decimation techniques. Improper decimation will guarantee the presence of image artifacts. Regardless of the quality of the subsampling, the compression ratio gets reduced one more time:

352 pixels * 240 lines * 30 fps * 8 bits/sample * 1.5 samples/pixel ~= 30 Mbit/sec for a ratio of 26:1.

This reduced ratio is more in line with our experience and suggests that MPEG is still a very aggressive compression scheme, but not at the more than 200:1 level often referenced. While the reduced ratio applies to broadcast images in North America, many source images are converted from film at a frame rate of 24 frames per second. This last factor can reduce the compression factor to 21:1 in many cases.
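The end of the chain can be verified the same way:

```python
# Walk the final corrections down to the ratios we measured in practice.
CD_RATE = 1.15e6                       # CD-ROM video rate, bit/sec

sif = 352 * 240 * 30 * 8 * 1.5         # SIF, 8-bit, 4:2:0 -> bit/sec
print(round(sif / 1e6))                # ~30 Mbit/sec uncompressed
print(round(sif / CD_RATE))            # 30 fps broadcast source: ~26
print(round(sif * 24 / 30 / CD_RATE))  # 24 fps film source: ~21
```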

There is actually quite good news in the reduced compression ratio. Film sources have about 20% more bits available in comparison to broadcast sources. This means that film sources can have better quality reconstructed images, which meets with consumer expectations.

Comparing MPEG to TV, VHS, and Laserdisc

Many articles compare MPEG to common broadcast video. We have found that it is possible to encode a VHS quality picture within the MPEG 1 CD ROM video rate of 1.15Mbit/sec using a great deal of care and significant preprocessing. While this statement is based on subjective tests, we undertook an objective comparison of VHS and MPEG.

The luminance response curve of VHS places -3dB near 2 MHz. This VHS bandlimit is equivalent to 200 samples per line. VHS chroma is equivalent to about 80 samples per line. If we consider only the sampling density, MPEG is superior to all VHS parameters except vertical resolution, where VHS wins 480:240. When other real world analog factors like interfield crosstalk and the TV monitor's Kell factor are taken into account, the difference becomes less than 2:1, and may in fact be equal in many real world systems. In addition, timing errors and other tape related issues further reduce the real resolution. Regardless of the offsetting factors, a nominal VHS tape and monitor will outperform MPEG SIF for high complexity sources. For "normal" viewing, MPEG 1 SIF and VHS are very close in resolution, but MPEG will always suffer from the loss of detail for high spatial frequency materials. In particular, the loss of detail is readily discernible for images that contain text.

Progressive source broadcast NTSC quality can be approximated with a bit stream of about 3 Mbit/sec. PAL requires a higher rate of about 4 Mbit/sec. High spatial complexity sources like sports may require a higher bit rate of 5 to 6 Mbit/sec. Material broadcast from a 30 fps source will, in addition, require a proportionately higher bit rate to achieve similar results.

Laserdisc bandlimited signals are often defined by manufacturers to be capable of 425 lines and 567 samples per line. An equivalent digital representation can be approximated by a 567 * 480 * 30 fps system. Regardless of the superior theoretical representation, well encoded progressive sources with medium detail can achieve Laserdisc and SVHS clarity. As with the VHS analysis, higher MPEG bit rates are required to match Laserdisc capabilities.

From a purely theoretical point of view, MPEG can never achieve the same quality as an uncompressed signal. However, for a very large portion of the consumer segment, MPEG at SIF size and 1.15 Mbit/sec can be nearly impossible to discern from an analog VHS image. Likewise, with appropriate selections for image size and compressed bit rate, MPEG can approximate SVHS, broadcast NTSC, and Laserdisc. From our evaluation, MPEG 1 SIF format does not meet SVHS or Laserdisc quality. Moreover, MPEG 2 does nothing to improve the fundamental underlying issues that affect image quality.

MPEG DBS Impact on Consumer Broadcast

MPEG DBS relies on three specific components: a digital uplink control center, the satellite, and a set top decoder. Figure 2 illustrates the whole system alternatives for MPEG DBS. Traditional analog signals can be delivered to the uplink center via satellite, microwave, direct cabling, fiber optics, or video tape. These sources are then encoded with a variable number of bits based on the source type and transmitted to the geosynchronous DBS satellite. The transponder then rebroadcasts directly to the consumer's set top decoder.

The fact that multiple channels are broadcast using the same transponder allows yet another level of compression tradeoff to be made. Unlike single source MPEG compression, a broadcast service can adapt the rate control for several image and audio sources simultaneously. Adaptive rate control spanning multiple programs can give the highest bandwidth to complex sources while reducing the bandwidth to another source, but requires a more complex uplink control facility. This type of statistical multiplexing will sometimes lead to blurs or artifacts when the combined ideal bandwidth of the programs on a single transponder is too high. In these cases, one or more programs will have their bit rates dropped to meet the real bandwidth of the transponder. By careful program scheduling, broadcasters can minimize the impact of the fixed transponder bandwidth on the broadcast quality. As a complicating factor, adaptive rate control of multiple sources requires real time encoding for some of the programs. Thus, compression and quality will not be as high as possible with non-real time compression. In addition, broadcasters also encrypt the signal at the uplink facility to reduce piracy of programs. Regardless of these complications, the real world results shown by DBS providers are excellent, and, from the viewer's perspective, closely match what we've been able to produce in the laboratory.
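The rate allocation idea can be sketched as follows; the proportional-scaling policy here is a deliberately simplified stand-in for whatever a real uplink controller does:

```python
def allocate(transponder_bps, ideal_bps):
    """Toy statistical multiplexer: if the programs' ideal rates exceed the
    transponder capacity, scale every program down proportionally
    (a hypothetical policy chosen only to illustrate the tradeoff)."""
    want = sum(ideal_bps)
    if want <= transponder_bps:
        return list(ideal_bps)
    scale = transponder_bps / want
    return [r * scale for r in ideal_bps]

# A sports feed asking for 6 Mbit/sec alongside two 3 Mbit/sec movies,
# sharing a 10 Mbit/sec slice of the transponder.
print([round(r / 1e6, 2) for r in allocate(10e6, [6e6, 3e6, 3e6])])
# -> [5.0, 2.5, 2.5]
```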

MPEG requires that satellite transponders have digital capabilities. Hughes' new generation of satellites for digital communications is the most powerful ever launched. The HS601 satellites for DBS operate in the BSS portion of the Ku band (12.2-12.7 GHz). The DirecTV satellites have 16 transponders each capable of 120 watts. This higher output is needed to reach the 18 inch dish antennas installed at consumer sites. Radiated power from the transponders is between 48 and 53 dBW depending on the configuration of the transponders. While the requirements of satellites may seem simple, the actual command and control for these sophisticated systems is quite involved and requires a significant ground based infrastructure apart from the MPEG services.

Set top requirements have spawned much debate, but the reality of commercial DBS services has established at least the first few series of set top products. Set top decoders perform several functions in addition to the pure MPEG decode. These additional functions include decryption and pay per view billing. Commercial premium programs require some sort of encryption in order to prohibit signal piracy. Digital encryption is a stronger method than many of the existing quasi analog techniques, making the entire delivery system more secure. Set top units for DirecTV include a low speed modem capability for communicating information such as billing details to the DBS provider. In this approach to pay per view, the consumer chooses items to view and is billed monthly. This contrasts with current pay per view systems that require the consumer to make a phone call to activate the reception of the program. Marketing experts contend that eliminating the phone call will increase orders for premium programs. The set top box makes a toll free phone call once per month to download the pay per view information.

Architecturally, set top boxes are based on processors commonly found in computer workstations. In addition, these set top units are reprogrammable via the satellite broadcast. This decision was made for some very basic reasons. At the time that DBS originated in 1994, the only standard for MPEG was MPEG 1. Suppliers who participated in the MPEG standards committee anticipated MPEG 2 and in fact adopted some of the MPEG 2 capabilities before the standard was approved. The decision to enter the market before MPEG 2 was ratified meant that either a pseudo standard would be used for many years, or the set top units needed to be reprogrammable. Reprogrammability allows the broadcaster to download new code into the set top automatically. This permits bug fixes, standards upgrades, and inclusion of new services. Reprogrammability also means a higher unit cost as compared to non-reprogrammable approaches. As a rule of thumb, RAM is about four times as area intensive as ROM. This translates into at least a four-fold increase in program memory cost. On the other hand, memory requirements for MPEG decode are high. The set top must be able to store several frames of image in order to complete the decode process. As DBS based MPEG gains in maturity, we expect fixed function devices to replace the generally reprogrammable processor. When combined with ROM coded control programs, there is a lot of potential for cost reduction in the set top unit.

Distribution channels for MPEG encoded programs are varied and pose different tradeoffs. When developing an image compression system based on MPEG you will need to consider the source of your images: progressive or interlaced, the amount of time and budget that you will have for image encoding, the nature of the scenes to be encoded, your end customers' familiarity with studio quality video, the allowable cost at the point of decode, and any interactivity. Finally, you will need to examine the bandwidth capabilities of your delivery vehicle. Single speed CD drives will limit bandwidth to at most 1.5 Mbit/sec if used solely for video, or 1.15 Mbit/sec for a combined stream. Satellite transmission, fiber optics, and high speed networks will provide you with capacity to meet SVHS and higher quality requirements. MPEG provides you with a syntax to implement high performance compressed video systems. This toolkit approach permits many alternatives; you don't have to live with the solutions described in the popular press. The solution that you choose will be in large part determined by where your product line fits in the convergence of TVs and PCs.