Additional Details on Video Encoding Standards

Efficient video encoding is required for 3DTV/3DV and for FVT/FVV. 3DTV/3DV provide a 3D depth impression of the observed scenery, while FVT/FVV additionally allow for an interactive selection of viewpoint and direction within a certain operating range. Hence, a common feature of 3DV and FVV systems is the use of multiple views of the same scene that are transmitted to the user. Multi-view 3D video can be encoded implicitly, in the V + D representation, or, as is more often the case, explicitly.

In implicit coding one seeks to use (implicit) shape coding in combination with MPEG-2/MPEG-4. Implicit shape coding means that the shape can be extracted easily at the decoder, without explicit shape information being present in the bitstream. These image compression schemes do not rely on the usual additive decomposition of an input image into a set of predefined spanning functions; they encode only implicit properties of the image and reconstruct an estimate of the scene at the decoding end. This has particular advantages when one seeks very low bitrate, perceptually oriented image compression [32]. The literature on this topic is relatively scanty. Chroma Key might be useful in this context: Chroma Key, or green screen, allows one to place a subject anywhere in a scene or environment by using the Chroma Key as the background. One can then import the image into digital editing software, extract the Chroma Key, and replace it with another image or video. Chroma Key shape coding as a form of implicit shape coding (for medium-quality shape extraction) has been proposed and demonstrated in the recent past.
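As an illustration of the extraction step, the sketch below pulls a mask from a keyed image and composites a replacement background. It is a minimal sketch only: the RGB key color, tolerance, and function names are illustrative choices rather than part of any standard, and production keyers typically work in YUV and produce soft mattes.

```python
import numpy as np

def chroma_key_mask(rgb, key=(0, 255, 0), tol=80):
    """Boolean mask of pixels within tol of the key color (HxWx3 uint8 in)."""
    diff = rgb.astype(np.int32) - np.array(key, dtype=np.int32)
    return (diff ** 2).sum(axis=2) < tol ** 2

def composite(fg, bg, key=(0, 255, 0)):
    """Replace keyed (background) pixels of fg with the corresponding bg pixels."""
    mask = chroma_key_mask(fg, key)
    out = fg.copy()
    out[mask] = bg[mask]
    return out
```

In the implicit-shape-coding reading, the mask itself is never transmitted; the decoder recovers the shape from the key color already present in the decoded image.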

On the other hand, there are a number of strategies for explicit coding of multi-view video: (i) simulcast coding, (ii) scalable simulcast coding, (iii) Multi-View Coding (MVC), and (iv) Scalable Multi-View Coding (SMVC).

Simulcast coding is the separate encoding (and transmission) of the two video scenes in the CSV format; clearly, the bitrate will typically be roughly double that of 2DTV. V + D is more bandwidth efficient not only in the abstract but also in practice. At the practical level, in a V + D environment the quality of the compressed depth map is not a significant factor in the final quality of the rendered stereoscopic 3D video. This follows from the fact that the depth map is not directly viewed but is employed to warp the 2D color image to two stereoscopic views. Studies show that the depth map can typically be compressed to 10%–20% of the bitrate of the color information.
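A back-of-the-envelope comparison, using the 10%–20% figure above and writing R for the rate of one compressed color view:

```latex
R_{\mathrm{simulcast}} \approx 2R, \qquad
R_{\mathrm{V+D}} \approx R + (0.1\text{--}0.2)\,R \approx 1.1R\text{--}1.2R
```

That is, V + D needs only roughly 55%–60% of the simulcast rate for a comparable stereoscopic result.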

V + D (also called 2D plus depth, or 2D + depth, or color plus depth) has been standardized by MPEG as an extension for 3D, filed under ISO/IEC FDIS 23002-3:2007(E). In 2007, MPEG specified a container format, “ISO/IEC 23002-3 Representation of Auxiliary Video and Supplemental Information” (also known as MPEG-C Part 3), that can be utilized for V + D data. 2D + depth, as specified by ISO/IEC 23002-3, supports the inclusion of depth for the generation of an increased number of views. While it has the advantage of being backward compatible with legacy devices and agnostic of coding formats, it is capable of rendering only a limited depth range since it does not directly handle occlusions [33]. Transport of this data is defined in a separate MPEG systems specification, “ISO/IEC 13818-1:2003 Carriage of Auxiliary Data.”

There is also major interest in MV + D. Applicable coding schemes of interest here include the following:

  • Multi-View Video Coding (MVC)
  • Scalable Video Coding (SVC)
  • Scalable Multi-View Video Coding (SMVC)

From a test/test-bed implementation perspective, for the first two options each view can be independently coded using the public-domain H.264 and SVC codecs, respectively. Test implementations of MVC and preliminary implementations of an SMVC codec have been documented recently in the literature.

Multiple-View Video Coding (MVC)

It has been recognized that MVC is a key technology for a wide variety of future applications, including FVV/FTV, 3DTV, immersive teleconferencing, and surveillance. An MPEG standard, “Multi-View Video Coding (MVC),” to support MV + D (and also V + D) encoded representation inside the MPEG-2 transport stream, has been developed by the JVT of ISO/IEC MPEG and ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6). MVC allows the construction of bitstreams that represent multiple views [34]; MVC supports efficient encoding of video sequences captured simultaneously from multiple cameras in a single video stream. MVC can be used for encoding stereoscopic (two-view) and multi-view 3DTV, and for FVV/FVT.

MVC (ISO/IEC 14496-10:2008 Amendment 1 and ITU-T Recommendation H.264) is an extension of the AVC standard that provides efficient coding of multi-view video. The encoder receives N temporally synchronized video streams and generates one bitstream; the decoder receives the bitstream, decodes it, and outputs the N video signals. Multi-view video contains a large amount of inter-view statistical dependency, since all cameras capture the same scene from different viewpoints. Combined temporal and inter-view prediction is therefore the key to efficient MVC: pictures of neighboring cameras can be used for efficient prediction [35]. MVC supports the direct coding of multiple views and exploits inter-camera redundancy to reduce the bitrate. Although MVC is more efficient than simulcast, the rate of MVC-encoded video remains proportional to the number of views.
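To make the combined prediction concrete, the sketch below enumerates candidate reference pictures for a picture at position (view v, time t). The anchor period and the restriction to one temporal plus one inter-view reference are simplifications for illustration; this is not the normative MVC prediction structure, which uses hierarchical B-pictures and richer reference lists.

```python
# Candidate references under combined temporal and inter-view prediction.
# Conventions here (anchor_period, view 0 as base view) are illustrative.

def mvc_reference_candidates(v, t, anchor_period=8):
    refs = []
    if t % anchor_period != 0:      # non-anchor picture: temporal prediction
        refs.append((v, t - 1))     # previous picture of the same view
    if v > 0:                       # inter-view prediction from the already
        refs.append((v - 1, t))     # coded neighboring view
    return refs                     # empty list -> intra-coded anchor of view 0

# Example: view 2 at time 5 may predict from (2, 4) and (1, 5).
print(mvc_reference_candidates(2, 5))
```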

The MVC group in the JVT has chosen the H.264/MPEG-4 AVC-based multi-view video method as its MVC reference model, since this method achieves better coding efficiency than H.264/AVC simulcast coding. H.264/MPEG-4 AVC was developed jointly by ITU-T and ISO through the JVT in the early 2000s (the ITU-T H.264 standard and ISO/IEC MPEG-4 AVC, ISO/IEC 14496-10, MPEG-4 Part 10, are jointly maintained to retain identical technical content). H.264 is used with Blu-ray Disc and with videos from the iTunes Store. The standardization of H.264/AVC was completed in 2003, but additional extensions have been added since then; for example, SVC, as specified in Annex G of H.264/AVC, was added in 2007.

Owing to the increased data volume of multi-view video, highly efficient compression is needed. In addition to the redundancy exploited in 2D video compression, the common idea in MVC is to further exploit the redundancy between adjacent views: because multi-view video is captured by multiple cameras at different positions, significant correlations exist between neighboring views [36]. As hinted elsewhere, there is interest in being able to synthesize novel views from virtual cameras in multi-view camera configurations; however, the occlusion problem can significantly affect the quality of virtual view rendering [37]. Also, for FVV the depth map quality is important because it is used to render virtual views that are further apart than in the stereoscopic case: when the views are further apart, distortion in the depth map has a greater effect on the final rendered quality, which implies that the data rate of the depth map has to be higher than in the CSV case.
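The baseline argument can be made precise with the standard rectified pinhole model (an assumption for illustration; f is the focal length, b the baseline between the original and the virtual camera, Z the depth). A point at depth Z warps with disparity

```latex
d = \frac{f\,b}{Z}, \qquad
\Delta d \;\approx\; \left|\frac{\partial d}{\partial Z}\right| \Delta Z
        \;=\; \frac{f\,b}{Z^{2}}\,\Delta Z
```

so a depth-map coding error ΔZ displaces the warped pixel by an amount that grows linearly with the baseline b; distant virtual views (the FVV case) are therefore more sensitive to depth distortion than a narrow-baseline stereo pair.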

Note: Most existing MVC techniques are based on traditional hybrid DCT-based video coding schemes, which neither fully exploit the redundancy among different views nor lend themselves easily to scalable implementations. A fundamental problem with DCT-based block coding is that scalability is not convenient to achieve, and scalability has become an increasingly important feature for video coding and communications. As a research topic, wavelet-based image and video coding has proved to be a good way to achieve both good coding performance and full scalability, including spatial, temporal, and Signal-to-Noise Ratio (SNR) scalability. In the past, MVC has been included in several video coding standards such as MPEG-2 MVP and MPEG-4 MAC (Multiple Auxiliary Component). More recently, an H.264-based MVC scheme has been developed that utilizes the multiple-reference structure of H.264. Although this method does exploit the correlations between adjacent views through inter-view prediction, it has some constraints for practical applications compared to a method that uses, say, wavelets [36].

As just noted, MPEG has developed a suite of international standards to support 3D services and devices. In 2009, MPEG initiated a new phase of standardization, to be completed by 2011. MPEG’s vision is a new 3DV format that goes beyond the capabilities of existing standards to enable both advanced stereoscopic display processing and improved support for autostereoscopic N-view displays, while enabling interoperable 3D services. 3DV aims to improve the rendering capability of the 2D + depth format while reducing the bitrate requirements relative to simulcast and MVC. Figure B3.1 illustrates ISO MPEG’s target for the 3DV format: limited camera inputs and constrained-rate transmission according to a distribution environment.

Figure B3.1 Target of 3D video format for ongoing MPEG standardization initiatives.

The 3DV data format aims to be capable of rendering a large number of output views for autostereoscopic N-view displays and to support advanced stereoscopic processing. Owing to limitations in the production environment, the 3DV data format is assumed to be based on limited camera inputs; stereo content is most likely, but more views might also be available. In order to support a wide range of autostereoscopic displays, it should be possible to generate a large number of views from this data format. Additionally, the rate required for transmitting the 3DV format should be fixed by the distribution constraints; that is, there should not be an increase in the rate simply because the display requires a higher number of views to cover a larger viewing angle. In this way, the transmission rate and the number of output views are decoupled. Advanced stereoscopic processing that requires view generation at the display would also be supported by this format [33].

Compared to the existing coding formats, the 3DV format has several advantages in terms of bitrate and 3D rendering capability, as illustrated in Fig. B3.2 [33]:

  • 2D + depth, as specified by ISO/IEC 23002-3, is only capable of rendering a limited depth range since it does not directly handle occlusions. The 3DV format is expected to enhance the 3D rendering capabilities beyond this format.
  • MVC is more efficient than simulcast but the rate of MVC encoded video is proportional to the number of views. The 3DV format is expected to significantly reduce the bitrate needed to generate the required views at the receiver.

Figure B3.2 Illustration of 3D rendering capability versus bit rate for different formats.

Scalable Video Coding (SVC)

The concept of SVC is to encode a video stream so that it contains one or several subset bitstreams of lower spatial or temporal resolution, or of lower quality (each separately or in combination), compared to the bitstream from which they are derived. A subset bitstream is typically obtained by dropping packets from the larger bitstream, and it can itself be decoded with a complexity and reconstruction quality comparable to that achieved by using an existing coder (e.g., H.264/MPEG-4 AVC) with the same quantity of data as in the subset bitstream. A standard for SVC was developed by the ISO MPEG Group and completed in 2008. The SVC project was undertaken under the auspices of the JVT of ISO/IEC MPEG and ITU-T VCEG; in January 2005, MPEG and VCEG agreed to develop the standard as an amendment of the H.264/MPEG-4 AVC standard. It is now an extension, Annex G, of the H.264/MPEG-4 AVC video compression standard.

A subset bitstream may represent a lower temporal or spatial resolution, or a lower quality video signal, compared to the bitstream from which it is derived. Three forms of scalability follow:

  • Temporal (Frame Rate) Scalability: the motion compensation dependencies are structured so that complete pictures (specifically, the packets associated with these pictures) can be dropped from the bitstream; see the sketch after this list. (Temporal scalability is already available in H.264/MPEG-4 AVC; SVC provides supplemental information to improve its usage.)
  • Spatial (Picture Size) Scalability: video is coded at multiple spatial resolutions. The data and decoded samples of lower resolutions can be used to predict data or samples of higher resolutions, in order to reduce the bitrate required to code the higher resolutions.
  • Quality Scalability: video is coded at a single spatial resolution but at different qualities. The data and samples of lower qualities can be utilized to predict data or samples of higher qualities, in order to reduce the bitrate required to code the higher qualities.
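The sketch below illustrates the packet-dropping idea behind temporal scalability with a dyadic layer assignment. The layer rule here is an illustrative stand-in for the temporal-layer identifiers that SVC actually signals in its NAL-unit headers.

```python
# Dyadic temporal layers: layer 0 is the always-kept base layer; dropping
# each higher layer halves the frame rate.

def temporal_layer(frame_idx, num_layers=3):
    period = 2 ** (num_layers - 1)            # e.g., 4 for three layers
    for layer in range(num_layers):
        if frame_idx % (period >> layer) == 0:
            return layer
    return num_layers - 1                     # unreachable; kept for clarity

def thin_bitstream(frames, max_layer):
    """Drop every picture above max_layer (simulates dropping its packets)."""
    return [f for f in frames if temporal_layer(f) <= max_layer]

frames = list(range(16))
print(thin_bitstream(frames, 0))   # quarter rate: [0, 4, 8, 12]
print(thin_bitstream(frames, 1))   # half rate:    [0, 2, 4, 6, ...]
```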

Products supporting the standard (e.g., for video conferencing) started to appear in 2008.

Scalable Multi-View Video Coding (SMVC)

Although many approaches have been published on SVC and on MVC, no current work has been reported on scalable multi-view video coding (SMVC). SMVC can be used for the transport of multi-view video over IP for interactive 3DTV by dynamically and adaptively combining temporal, spatial, and SNR scalability according to network conditions [38]; a sketch of this adaptation idea follows.
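What such adaptation could look like, under stated assumptions: the operating points and their rates below are invented for illustration (an actual SMVC design would define which layer combinations are extractable).

```python
# Hypothetical extraction points, best quality first: layer combination -> Mbit/s.
OPERATING_POINTS = [
    ({"spatial": 2, "temporal": 3, "snr": 2}, 12.0),
    ({"spatial": 2, "temporal": 2, "snr": 1}, 8.0),
    ({"spatial": 1, "temporal": 2, "snr": 1}, 5.0),
    ({"spatial": 1, "temporal": 1, "snr": 0}, 2.5),
]

def select_operating_point(available_rate_mbps):
    """Pick the highest-quality extraction point that fits the channel."""
    for layers, rate in OPERATING_POINTS:
        if rate <= available_rate_mbps:
            return layers
    return OPERATING_POINTS[-1][0]            # fall back to the base layer

print(select_operating_point(6.0))  # -> {'spatial': 1, 'temporal': 2, 'snr': 1}
```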

Conclusion

Table B3.1, based on Ref. [39], indicates how the “better-known” compression algorithms can be applied and what some of the trade-offs in quality are (this study was done in the context of mobile delivery of 3DTV, but the concepts apply in general). In this study, four methods for transmission and compression/coding of stereo video content were analyzed. Subjective ratings show that the mixed-resolution approach and the video-plus-depth approach do not impair video quality at high bitrates; at low bitrates, simulcast transmission is outperformed by the other methods. Objective quality metrics, utilizing the blurred or rendered view from uncompressed data as reference, can be used for the optimization of individual methods (they cannot be used for comparison across methods since they carry a positive or negative bias). Further research on individual methods will include combinations such as inter-view prediction for mixed-resolution coding and depth representation at reduced resolution.

In conclusion, the V + D format is considered by researchers to be a good candidate for representing stereoscopic video and is suitable for most of the 3D displays currently available; MV + D (and the MVC standard) can be used for holographic displays and for FVV, where the user, as noted, can interactively select his or her viewpoint and where the view is then synthesized from the closest spatially located captured views [40]. However, for the initial deployment one will likely see (in order of likelihood):

  • spatial compression in conjunction with MPEG-4/AVC;
  • H.264/AVC stereo SEI message;
  • MVC, which is an H.264/MPEG-4 AVC extension.

Table B3.1 Application of Compression Algorithms

More Advanced Methods

Other methods have been discussed in the industry, known generally as 2D in conjunction with metadata (2D + M). The basic concept here is to transmit 2D images and to capture the stereoscopic data from the “other eye” image in the form of an additional package, the metadata; the metadata is transmitted as part of the video stream (Fig. 3.12). This approach is consistent with MPEG multiplexing; therefore, to a degree, it is compatible with embedded systems. The requirement to transmit the metadata increases the bandwidth needed in the channel: the added bandwidth ranges from 60% to 80%, depending on quality goals and the techniques used. As implied, a set-top box employed in a traditional 2D environment would be able to use the 2D content, ignoring the metadata, and properly display the 2D image; in a 3D environment, the set-top box would be able to render the 3D signal.

Some variations of this scheme have already appeared. One approach is to capture a delta file that represents the difference between the left and right images.

Figure 3.12 2D in conjunction with metadata.

A delta file is usually smaller than the raw file because of intrinsic redundancies. The delta file is then transmitted as metadata. Companies such as Panasonic and TDVision use this approach, which can also be used for stored media. For example, Panasonic has advanced (and the Blu-ray Disc Association has studied) the use of metadata to achieve a full-resolution 3D Blu-ray Disc standard; a 1920 × 1080p, 24-fps resolution per eye is achievable. This standard would make Blu-ray Disc a high-quality 3D content (storage) system. The goal was to agree on the standard by early 2010 and have 3D Blu-ray Disc players emerge by the end-of-year shopping season of 2010. Another approach entails transmitting the 2D image in conjunction with a depth map of each scene.
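A minimal sketch of the delta-file idea described above (illustrative only; a real system would entropy-code, and likely disparity-compensate, the residual rather than transmit raw differences):

```python
import numpy as np

def make_delta(left, right):
    """Signed residual between the two views (int16, range -255..255).

    Because the left and right views are highly correlated, this residual
    is mostly near zero and compresses far better than the raw right view.
    """
    return right.astype(np.int16) - left.astype(np.int16)

def reconstruct_right(left, delta):
    """Invert make_delta at the receiver, clipping to the valid pixel range."""
    return np.clip(left.astype(np.int16) + delta, 0, 255).astype(np.uint8)
```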

Video Plus Depth (V + D)

As noted above, many 3DTV proposals rely on the basic concept of “stereoscopic” video, that is, the capture, transmission, and display of two separate video streams (one for the left eye and one for the right eye). More recently, specific proposals have been made for a flexible joint transmission of monoscopic color video and associated per-pixel depth information [24, 25]. The V + D representation is the next notch up in complexity.

From this data representation, one or more “virtual” views of the 3D scene can be generated in real time at the receiver side by means of Depth-Image-Based Rendering (DIBR) techniques [26]. Such a system provides important features, including backward compatibility with today’s 2D digital TV, scalability in terms of receiver complexity, and easy adaptability to a wide range of different 2D and 3D displays. DIBR is the process of synthesizing “virtual” views of a scene from still or moving color images and associated per-pixel depth information. Conceptually, this novel view generation can be understood as a two-step process: first, the original image points are re-projected into the 3D world, utilizing the respective depth data; thereafter, these 3D space points are projected into the image plane of a “virtual” camera located at the required viewing position. The concatenation of re-projection (2D to 3D) and subsequent projection (3D to 2D) is usually called 3D image warping in the Computer Graphics (CG) literature; a compact version is sketched below. The signal processing and data transmission chain of this kind of 3DTV concept is illustrated in Fig. 3.13; it consists of four functional building blocks: (i) 3D content creation, (ii) 3D video coding, (iii) transmission, and (iv) “virtual” view generation and 3D display.
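The warping sketch, in standard pinhole notation (the textbook form, not the notation of any particular standard): K and K' are the intrinsic matrices of the real and virtual cameras, R and t the rotation and translation between them, and the congruence sign denotes equality up to scale. A pixel m = (u, v, 1)^T with depth Z is re-projected to a 3D point M and then projected into the virtual camera:

```latex
M = Z\,K^{-1}m, \qquad
m' \;\cong\; K'\,(R\,M + t) \;=\; K'\!\left(Z\,R\,K^{-1}m + t\right)
```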

As can be seen in Fig. 3.14, a video signal and a per-pixel depth map are captured and eventually transmitted to the viewer. The per-pixel depth data can be considered a monochromatic luminance signal with a restricted range spanning the interval [Znear, Zfar], representing, respectively, the minimum and maximum distance of the corresponding 3D point from the camera. The depth range is quantized with 8 bits, the closest point having the value 255 and the most distant point having the value 0. Effectively, the depth map is specified as a grayscale image; these values can be carried in the luminance channel of a video signal, with the chrominance set to a constant value. In summary, this representation uses a regular video stream enriched with so-called depth maps providing a Z-value for each pixel. Note that V + D enjoys backward compatibility because a 2D receiver will display only the V portion of the V + D signal. Studies by the European ATTEST (Advanced Three-dimensional Television System Technologies) project indicate that depth data can be compressed very efficiently and still be of good quality; namely, the depth needs only around 20% of the bitrate that would otherwise be needed to encode the color video (the qualitative results were confirmed by means of subjective testing). This approach can be placed in the category of Depth-Enhanced Stereo (DES).

Figure 3.13 Depth-image-based rendering (DIBR) system.

Figure 3.14 Video plus depth (V + D) representation for 3D video.
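The exact mapping between the 8-bit value v and metric depth is not fixed by the description above; one common convention in the DIBR literature (an assumption here, chosen because it is linear in inverse depth, giving finer quantization to nearby geometry where disparity is most sensitive) is

```latex
v = \operatorname{round}\!\left(255 \cdot
      \frac{1/Z - 1/Z_{\mathrm{far}}}{\,1/Z_{\mathrm{near}} - 1/Z_{\mathrm{far}}\,}\right),
\qquad
Z = \left(\frac{v}{255}\left(\frac{1}{Z_{\mathrm{near}}}-\frac{1}{Z_{\mathrm{far}}}\right)
      + \frac{1}{Z_{\mathrm{far}}}\right)^{-1}
```

which indeed yields v = 255 at Znear and v = 0 at Zfar; a plain linear mapping over [Znear, Zfar] is also encountered.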

A stereo pair can be rendered from the V + D information by 3D warping at the decoder. A general warping algorithm takes a layer and deforms it in many ways: for example, it twists the layer along any axis, bends the layer around itself, or applies arbitrary deformation with a displacement map. The generation of the stereo pair from a V + D signal at the decoder is illustrated in Fig. 3.15. This reconstruction affords extended functionality compared to CSV because the stereo image can be adjusted and customized after transmission. Note that, in principle, more than two views can be generated at the decoder, thus enabling support of multi-view displays (and, within reason, head-motion parallax viewing).

Figure 3.15 Regeneration of stereo video from V + D signals.
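A minimal sketch of this decoder-side warping for a rectified, purely horizontal camera shift, tying together the warping equation and the depth quantization above. The focal-length-times-baseline constant, the absence of a z-buffer, and the unfilled (black) holes are all simplifications; the function and parameter names are hypothetical.

```python
import numpy as np

def render_virtual_view(color, depth8, f_times_b=30.0, z_near=1.0, z_far=100.0):
    """Warp an HxWx3 color image to a horizontally shifted virtual camera.

    depth8 is the 8-bit depth map (255 = near, 0 = far).
    """
    h, w = depth8.shape
    # 8-bit depth value -> inverse depth, per the quantization note above
    inv_z = depth8 / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    disparity = np.rint(f_times_b * inv_z).astype(np.int64)   # d = f*b/Z
    out = np.zeros_like(color)        # pixels never written remain as holes
    xs = np.arange(w)
    for y in range(h):
        xt = np.clip(xs + disparity[y], 0, w - 1)
        out[y, xt] = color[y, xs]     # no z-buffering: a real renderer must
    return out                        # resolve overlaps and fill the holes
```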

V + D enjoys backward compatibility, compression efficiency, extended functionality, and the ability to use existing coding algorithms. It is only necessary to specify high-level syntax that allows a decoder to interpret two incoming video streams correctly as color and depth. The specifications “ISO/IEC 23002-3 Representation of Auxiliary Video and Supplemental Information” and “ISO/IEC 13818-1:2003 Carriage of Auxiliary Data” enable V + D-based 3D video to be deployed in a standardized fashion by broadcasters interested in adopting this method.

It should be noted, however, that the advantages of V + D over CSV entail increased complexity for both sender and receiver. At the receiver side, view synthesis has to be performed after decoding to generate the second view of the stereo pair. At the sender (capture) side, the depth data have to be generated before encoding can take place. This is usually done by depth/disparity estimation from a captured stereo pair; these algorithms are complex and still error-prone. Thus, in the near future V + D might be more suitable for applications with playback functionality, where depth estimation can be performed offline on powerful machines, for example in a production studio or a home 3D editing suite, enabling viewing of downloaded 3D video clips and 3DTV broadcasting [16].

Multi-View Video Plus Depth (MV + D)

There are some advanced 3D video applications that are not properly supported by any existing standards and where work by the ITU-R or ISO/MPEG is needed. Two such applications are given below:

  • wide-range multi-view autostereoscopic displays (say, nine or more views);
  • FVV (environments where the user can choose his/her own viewpoint).

These 3D video applications require a 3D video format that allows the rendering of a continuum and/or a large number of output views at the decoder. There really are no available alternatives: MVC, discussed above, does not support a continuum and becomes inefficient for a large number of views; and, as noted, V + D could in principle generate more than two views at the decoder, but in practice it supports only a limited continuum around the original view (artifacts increase significantly with the distance of the virtual viewpoint). In response, MPEG started an activity to develop a new 3D video standard that would support these requirements.

The MV + D concept is illustrated in Fig. 3.16. MV + D involves a number of complex processing steps: (i) depth has to be estimated for the N views at the capture point, and then (ii) N color and N depth video streams have to be encoded and transmitted. At the receiver, the data have to be decoded and the virtual views rendered (reconstructed).

Figure 3.16 Multi-view video plus depth (MV + D) concept.

As implied just above, MV + D can be used to support multi-view autostereoscopic displays in a relatively efficient manner. Consider a display that supports nine views (V1–V9) simultaneously (e.g., a lenticular display manufactured by Philips; Fig. 3.17). From any specific position, a viewer can see only a stereo pair of these views, and which pair depends on the viewer’s position. Transmitting all nine display views directly (e.g., by using MVC) would be taxing from a bandwidth perspective; in this illustrative example, only three original views (V1, V5, and V9), along with the corresponding depth maps (D1, D5, and D9), are in the decoded stream, and the remaining views can be synthesized from these decoded data by using DIBR techniques, as in the sketch below.

Figure 3.17 Multi-view autostereoscopic displays based on MV + D.
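Illustrative decoder-side logic for this nine-view example (a sketch under the assumptions above: synthesize() is a stand-in for a DIBR warp such as the render_virtual_view() sketch shown earlier, and the view indexing is hypothetical):

```python
TRANSMITTED = {1, 5, 9}   # only these views arrive, each with its depth map

def synthesize(color, depth, view_offset):
    return color          # placeholder: a real implementation warps by depth

def output_view(n, decoded):
    """decoded: dict mapping a transmitted view index -> (color, depth)."""
    if n in TRANSMITTED:
        return decoded[n][0]                             # use it directly
    anchor = min(TRANSMITTED, key=lambda k: abs(k - n))  # nearest real view
    color, depth = decoded[anchor]
    return synthesize(color, depth, view_offset=n - anchor)
```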

Layered Depth Video (LDV)

LDV is a derivative of, and also an alternative to, MV + D. LDV is believed to be more efficient than MV + D because less information has to be transmitted; however, additional error-prone vision processing tasks are required that operate on partially unreliable depth data. These efficiency assessments remain to be fully validated as of press time.

LDV uses (i) one color video with an associated depth map and (ii) a background layer with an associated depth map; the background layer includes image content that is covered by foreground objects in the main layer. This is illustrated in Figs 3.18 and 3.19. The occlusion information is constructed by warping two or more neighboring V + D views from the MV + D representation onto a defined center view. The LDV stream or substreams can then be encoded by a suitable LDV coding profile.

Figure 3.18 Layered depth video (LDV) concept.

Figure 3.19 Layered depth video (LDV) example.

Note that LDV can be generated from MV + D by warping the main layer image onto the other contributing input images (e.g., an additional left and right view). By subtraction, it is then determined which parts of the other contributing input images are covered in the main layer image; these parts are assigned to residual images and transmitted, while the rest is omitted [16]. A sketch of this construction follows.
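A hedged sketch of that subtraction step (warp_to_side() is a caller-supplied stand-in for a DIBR warp that also reports which target pixels it covered; names and shapes are illustrative):

```python
import numpy as np

def ldv_residual(side_color, main_color, main_depth, warp_to_side):
    """Keep, as the residual, only what the warped main layer cannot explain."""
    warped, covered = warp_to_side(main_color, main_depth)  # covered: HxW bool
    residual = np.zeros_like(side_color)
    holes = ~covered                      # disocclusions behind foreground objects
    residual[holes] = side_color[holes]   # transmit only these pixels;
    return residual                       # the rest is omitted, as in [16]
```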

Figure 3.18 is based on a recent presentation at the 3D Media Workshop, Heinrich Hertz Institut (HHI) Berlin, October 15–16, 2009 [27, 28]. LDV provides a single view with depth and occlusion information. The goal is to achieve automatic acquisition of 3DTV content, especially to obtain depth and occlusion information from video and to extrapolate a new view without error.

Table 3.2, composed from technical details in Ref. [29], provides a summary of the issues associated with the various representation methods.

Summary of Formats
