Subsystems include elements for the capture, representation/definition, compression, distribution, and display of the signals. Figure 3.1 depicts a logical end-to-end view of a 3DTV signal management system; Fig. 3.2 provides additional details. Figure 3.3 provides a more physical perspective. 3D approaches are an extension of traditional video capture and distribution approaches. We focus here on the representation/definition of the signals and on compression, and provide only a brief discussion of capture and display technology. The reader may refer to [1–4] for more details on capture and display methods and technologies.
The availability of content will be critical to the successful introduction of the 3DTV service; note that 3D content is
more demanding in terms of production. Real-time capture of 3D content almost invariably requires a pair of cameras placed side-by-side in what is called a 3D rig to yield left-eye and right-eye views of a scene. The lenses on the left and right cameras in a 3D rig must match each other precisely, and the alignment of the two cameras is critical; misaligned 3D video is uncomfortable to watch and stressful to the eyes. Two parameters of interest for 3D camera acquisition are camera separation and toe-in; we covered these issues in the previous chapter. Toe-in mimics how the human eyes work: as one focuses on a nearby object the eyes toe in; as one focuses on distant objects the visual axes are parallel. Interaxial distance (also known as interaxial separation) is the distance between the axes of the camera lenses; it can also be defined as the distance between the two taking positions of a stereo photograph.
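To make the role of interaxial distance concrete, the following toy sketch (not from the text) shows the standard pinhole relationship for a parallel, non-toed-in rig: the horizontal disparity between the left and right images falls off inversely with a point's depth. The function name and all numeric values (a focal length of 1000 px, an eye-like 65 mm interaxial) are illustrative assumptions.

```python
# Toy sketch, assuming a parallel (non-toed-in) stereo rig:
#   focal_px   - lens focal length expressed in pixels (assumed value)
#   interaxial - distance between the two lens axes, in metres
#   depth      - distance from the rig to the scene point, in metres

def disparity_px(focal_px, interaxial, depth):
    """Horizontal left/right image disparity, in pixels, for a point
    at the given depth in front of a parallel stereo rig."""
    return focal_px * interaxial / depth

# Eye-like 65 mm interaxial: a subject 2 m away vs. one 20 m away.
near = disparity_px(1000, 0.065, 2.0)   # ~32.5 px
far  = disparity_px(1000, 0.065, 20.0)  # ~3.25 px
```

The tenfold drop in disparity between the two depths is one reason stereographers widen the interaxial for distant subjects; a common rule of thumb (not stated in the text) is to keep the baseline around 1/30 of the distance to the nearest subject.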
The baseline distance between the visual axes (separation) of the eyes is around 2.5 in. (65 mm), although there is a distribution of values, as shown in Fig. 3.4, that content producers may have to take into account. 3D camera rigs often use the same separation as a baseline, but the separation can be made smaller or larger to attenuate or accentuate the 3D effect of the displayed material. The separation will also need to be varied for different focal length lenses and with the distance from the cameras to the subject. A number of measures can (or better yet, must) be taken to reduce eye fatigue
in 3D during content development/creation. Some are considering the creation of 3D material by converting a 2D movie to a stereoscopic product with left-/right-eye tracks; in some instances, non-real-time conversion of 2D to 3D may yield (marginally) satisfactory results. It remains a fact, however, that creating a stereo pair from 2D content is not straightforward (the issues relate to estimating object depth and to reconstructing parts of the image that are occluded in the first eye's view). Nonetheless, conversion from 2D may play a role in the short term.
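The occlusion problem mentioned above can be illustrated with a minimal, single-row sketch of depth-based view synthesis (a simplified stand-in for the conversion techniques the text alludes to; the function, the pixel values, and the disparities are all invented for illustration). Shifting each pixel by a depth-derived disparity to fabricate a second eye's view leaves "holes" where background was hidden behind a foreground object in the original 2D image, and no source data exists to fill them.

```python
# Toy illustration (assumed, not from the text) of why 2D-to-3D
# conversion is hard: synthesize a "right-eye" row by shifting each
# pixel left by its disparity; dis-occluded positions become None
# because the original 2D image never recorded what was behind
# the foreground object.

def synthesize_right_view(row, disparities):
    """Shift each pixel left by its per-pixel disparity; positions
    with no source pixel stay None (holes to be inpainted)."""
    out = [None] * len(row)
    for x, (pixel, d) in enumerate(zip(row, disparities)):
        nx = x - d
        if 0 <= nx < len(out):
            out[nx] = pixel
    return out

# A 'B' foreground object (disparity 2) over an 'a' background (disparity 0).
row         = ['a', 'a', 'B', 'B', 'a', 'a']
disparities = [ 0,   0,   2,   2,   0,   0 ]

right = synthesize_right_view(row, disparities)
print(right)  # ['B', 'B', None, None, 'a', 'a'] -- holes where 'B' moved away
```

The two `None` entries are exactly the "obscured parts of the image" the paragraph above refers to: a real converter must hallucinate plausible background there, which is why automated 2D-to-3D results are often only marginally satisfactory.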