Nikon D7000, Playback

There are a couple of options for reviewing your video once you have finished recording. The first, and probably the easiest, is to press the Image Review button to bring up the recorded image on the rear LCD, and then use the OK button to start playing the video. The Multi-selector acts as the video controller and allows you to rewind and fast-forward as well as stop the video altogether.

If you would like to get a larger look at things, you will need to either watch the video on your TV or move the video files to your computer. To watch low-res video on your TV, you can use the video cable that came with your camera and plug it into the small port on the side of the camera body (Figure 10.3). To get the full effect from your HD video, you will need to buy an HDMI cable (your TV needs to support at least 720p and have an HDMI port to use this option). Once you have the cable hooked up, simply use the same camera controls that you use for watching the video on the rear LCD.

If you want to watch a video on your computer, you will need to download it using Nikon software or an SD card reader attached to the computer. The video file will have the extension .avi at the end of the filename. These files should play on either a Mac or a PC using software that came with your operating system (QuickTime for Mac and Windows Media Player for PC).

Figure 10.3 Plug your cable into this port to watch videos on your television.

 

Canon EOS 60D, Tips for Shooting Video

Transitioning from being a still photographer to making movies might seem like a piece of cake, but you’ll find that there are still a few things to keep in mind to make those videos shine.

SEE DIFFERENTLY

When I first started creating videos with my DSLR, I really started to pay attention to the cinematography of TV and movies. I noticed that the camera was usually still while the world around it moved. Subjects moved into the frame and out of the frame, and the camera didn’t always try to follow them. It can be tempting to move the camera to follow your subject, but sometimes keeping still can add more impact and drama to your scene (plus a lot of movement might make your viewers dizzy!). So let your subjects move in and out of the frame while you take a deep breath, relax, and keep your camera pointed in the same, unchanging direction.

DON’T RUSH

As still photographers, we tend to see things “in the moment.” When recording videos those moments last longer, and they need to flow through from one scene to the next. A common mistake that new video photographers tend to make is that they cut their videos short, meaning they stop the recordings too soon. It’s important that you have extra time before and after each scene not only to allow for smooth transitions in and out of the video, but also for editing purposes. It’s always good to have more than you need when piecing video clips together in postproduction.

So, when you think you are done with your video clip and you want to turn it off… don’t! Count to three, or four or five, and then stop your recording. It will feel odd at first, but don’t worry; you’ll get the hang of it. Those extra couple of seconds can make a world of difference.

VIDEO EDITING

Once you have recorded your movies, you might want to do a little bit more with them, such as assemble several video clips into one movie, or add sound or additional graphics and text. If so, you’ll probably want to learn a thing or two about how to edit your videos using video-editing software. Many different software programs are available for you to choose from. With some of the free or inexpensive programs, like iMovie (for Mac) or QuickTime Pro, you can do basic editing on your video clips. Other programs, such as Final Cut Pro or Adobe Premiere Pro, will allow you to do even more advanced editing and to add creative effects to your movies. Using editing software is not required to play back and share movies created with your 60D, but it is a fun way to take your movies to the next level.

 

Canon PowerShot G12, Watching Your Videos

There are a couple of different options for you to review your video once you have finished recording. The first is probably the easiest: Press the Playback button to bring up the recorded image on the LCD screen, and then use the Set button to start playing the video. The Left/Right buttons act as the video controller and allow you to rewind and fast-forward as well as stop the video altogether.

If you would like to get a larger look at things, you will need to either watch the video on your TV or move the video files to your computer. To watch on your TV, connect an HDMI cable to the HDMI Out port, or connect the video cable that came with the camera to the A/V Out/Digital port (Figure 11.4).

Figure 11.4 The video ports on the G12.

Once you’ve connected to your TV, simply use the same camera controls that you used for watching the video on the LCD screen.

If you want to watch or use the videos on your computer, you need to download them using the Canon software or by using an SD card reader attached to your computer. The video files will have the extension “.mov” at the end of the file name. These files should play on either a Mac or a PC using software that came with your operating system or that can be downloaded for free (Apple’s QuickTime for Mac and Windows is available at www.apple.com/quicktime/download/).

Canon 7D, Tips for Shooting Video

Transitioning from being a still photographer to making movies might seem like a piece of cake, but you’ll find that there are still a few things to keep in mind to make those videos shine.

SEE DIFFERENTLY

When I first started creating videos with my DSLR, I really started to pay attention to the cinematography of TV and movies. I noticed that the camera was usually still while the world around it moved. Subjects moved into the frame and out of the frame, and the camera didn’t always try to follow them. It can be tempting to move the camera to follow your subject, but sometimes keeping still can add more impact and drama to your scene (plus a lot of movement might make your viewers dizzy!). So let your subjects move in and out of the frame while you take a deep breath, relax, and keep your camera pointed in the same, unchanging direction.

DON’T RUSH

As still photographers, we tend to see things “in the moment.” When recording videos those moments last longer, and they need to flow through from one scene to the next. A common mistake that new video photographers tend to make is that they cut their videos short, meaning they stop the recordings too soon. It’s important that you have extra time before and after each scene not only to allow for smooth transitions in and out of the video, but also for editing purposes. It’s always good to have more than you need when piecing video clips together in postproduction.

So, when you think you are done with your video clip and you want to turn it off… don’t! Count to three, or four or five, and then stop your recording. It will feel odd at first, but don’t worry; you’ll get the hang of it. Those extra couple of seconds can make a world of difference.

VIDEO EDITING

Once you have recorded your movies, you might want to do a little bit more with them, such as assemble several video clips into one movie, or add sound or additional graphics and text. If so, you’ll probably want to learn a thing or two about how to edit your videos using video-editing software. Many different software programs are available for you to choose from. With some of the free or inexpensive programs, like iMovie (for Mac) or QuickTime Pro, you can do basic editing on your video clips. Other programs, such as Final Cut Pro or Adobe Premiere Pro, will allow you to do even more advanced editing and to add creative effects to your movies. Using editing software is not required to play back and share movies created with your 7D, but it is a fun way to take your movies to the next level.

 

Alarming Statistics on Rising Crime Rates – Why is Crime So Popular?

Not only are the continually rising crime rates in the U.S. terribly alarming and valid cause for great concern, but equally curious and unfortunate is the public’s perception that crime and its related drama are somehow entertaining. The most popular television show in the United States is currently ‘CSI’ (Crime Scene Investigations). Drawing in an estimated 84 million viewers, the show has been a fan favorite since its 2000 debut. Now in its ninth season, CSI continues to pull in regular viewers. Criminal Minds, a televised series now in its fourth season, also receives praise from regular TV watchers. The fascination with crime series and the topic of violence and unlawful activity on film is nothing new; people have crowded into theaters and gathered in front of their TV screens to watch this type of action play out for decades.

As life imitates art (or is it the other way around?), cities across America continue to see occurrences of crime increase. New Orleans, Louisiana, tops the list as the most dangerous American city, leading in areas of both violent crime (over 200 murders in 2008) and property crime. The small town of Gary, Indiana, was a close contender in second place. Los Angeles, California; New York, New York; and Atlanta, Georgia all had disturbingly high numbers of violent crimes, including murder, as well as thefts, break-ins and burglaries, car thefts, and other incidents of property crime.

American citizens, despite their inexplicable acceptance of criminal subject matter as entertainment, spend a good deal of money to protect themselves, their homes, and their families. Reliable security systems become more common every year, as people realize they need far more comprehensive home safety plans than a dog and a peephole on the front door. Citizens arm themselves with mace and personal alarms and take classes on self-defense, making very deliberate efforts to devise potentially life-saving responses to an attack. Car alarms and ‘club’ devices (implements that attach to a steering wheel to deter auto theft) are also regularly purchased.

Obviously, people are aware of the threatening danger of crime, and most people are responsible enough to make efforts to protect themselves, and yet they almost promote it by accepting crime-related subject matter as an acceptable form of entertainment. Isn’t it possible that, if crime were not given top billing on television or in the cinema, it would not be such a top-rated pastime amongst real-life citizens? It’s not unreasonable to imagine that, were the media to cease glorifying violence and unsavory behavior, it would become less attractive to act out in this manner in society. Actions speak louder than words. If our culture stops acting as if criminal lifestyles are essentially fascinating, we could very well witness a welcome decrease in the negative statistics that currently pervade the U.S. crime scene.

Other Advocacy Entities

This section provides a short survey of industry advocacy and activities in support of 3DTV.

3D@Home Consortium

Recently (in 2008), the 3D@Home Consortium was formed with the mission to speed the commercialization of 3D into homes worldwide and provide the best possible viewing experience by facilitating the development of standards, roadmaps, and education for the entire 3D industry—from content, hardware, and software providers to consumers.

3D Consortium (3DC)

The 3D Consortium (3DC) aims at developing 3D stereoscopic display devices and increasing their take-up, promoting the expansion of 3D content, improving distribution, and contributing to the expansion and development of the 3D market. It was established in Japan in 2003 by five founding companies and 65 other companies, including hardware manufacturers, software vendors, content vendors, content providers, systems integrators, image producers, broadcasting agencies, and academic organizations.

European Information Society Technologies (IST) Project ‘‘Advanced Three-Dimensional Television System Technologies’’ (ATTEST)

In this project, industries, research centers, and universities have joined forces to design a novel, backwards-compatible, flexible, and modular broadcast 3DTV system. In contrast to former proposals that often relied on the basic concept of “stereoscopic” video, that is, the capturing, transmission, and display of two separate video streams (one for the left eye and one for the right eye), this activity focuses on a data-in-conjunction-with-metadata approach. At the very heart of the new concept is the generation and distribution of a novel data representation format that consists of monoscopic color video and associated per-pixel depth information. From these data, one or more “virtual” views of a real-world scene can be synthesized in real time at the receiver side (i.e., a 3DTV STB) by means of depth-image-based rendering (DIBR) techniques. The modular architecture of the proposed system provides important features, such as backwards-compatibility to today’s 2D DTV, scalability in terms of receiver complexity, and adaptability to a wide range of different 2D and 3D displays.

3D Content Creation. For the generation of future 3D content, novel three-dimensional material is created by simultaneously capturing video and associated per-pixel depth information with an active range camera such as the so-called ZCam™ developed by 3DV Systems. Such devices usually integrate a high-speed pulsed infrared light source into a conventional broadcast TV camera, and they relate the time of flight of the emitted and reflected light walls to direct measurements of the depth of the scene. However, it seems clear that the need for sufficient high-quality three-dimensional content can only partially be satisfied with new recordings. It will therefore be necessary (especially in the introductory phase of the new broadcast technology) to also convert already existing 2D video material into 3D using so-called “structure from motion” algorithms. In principle, such (offline or online) methods process one or more monoscopic color video sequences to (i) establish a dense set of image point correspondences from which information about the recording camera, as well as the 3D structure of the scene, can be derived, or (ii) infer approximate depth information from the relative movements of automatically tracked image segments. Whatever 3D content generation approach is used in the end, the outcome in all cases consists of regular 2D color video in European DTV format (720 × 576 luminance pels, 25 Hz, interlaced) and an accompanying depth-image sequence with the same spatiotemporal resolution. Each of these depth-images stores depth information as 8-bit gray values, with the gray level 0 specifying the furthest value and the gray level 255 defining the closest value. To translate this data representation format into real, metric depth values (required for the “virtual” view generation), and to be flexible with respect to 3D scenes with different depth characteristics, the gray values are normalized to two main depth clipping planes.
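To make the depth representation concrete, the sketch below converts the 8-bit gray values described above back into metric depth between two clipping planes. The clipping-plane values are illustrative assumptions, and the inverse-depth variant is shown only as a common alternative convention, not something stated in the ATTEST description.

```python
# Illustrative conversion of the 8-bit depth representation described above
# (gray level 0 = farthest plane, 255 = nearest plane) to metric depth.
# z_near and z_far are example clipping planes, not values from the project.

def gray_to_depth_linear(v, z_near=1.0, z_far=10.0):
    """Linear interpolation in Z between the far (v=0) and near (v=255) planes."""
    return z_far + (v / 255.0) * (z_near - z_far)

def gray_to_depth_inverse(v, z_near=1.0, z_far=10.0):
    """Linear interpolation in 1/Z, which allocates more gray levels to nearby
    objects; a common alternative convention in later depth formats."""
    return 1.0 / ((v / 255.0) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)

print(gray_to_depth_linear(0), gray_to_depth_linear(255))    # 10.0 (far), 1.0 (near)
print(gray_to_depth_inverse(0), gray_to_depth_inverse(255))  # 10.0 (far), 1.0 (near)
```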

3DV Coding. To provide future 3DTV viewers with three-dimensional content, the monoscopic color video and the associated per-pixel depth information have to be compressed and transmitted over the conventional 2D DTV broadcast infrastructure. To ensure the required backwards-compatibility with existing 2D-TV STBs, the basic 2D color video has to be encoded using the standard MPEG-2, MPEG-4 Visual, or AVC tools currently required by the DVB Project in Europe.

Transmission. The DVB Project, a consortium of industries and academia responsible for the definition of today’s 2D DTV broadcast infrastructure in Europe, requires the use of the MPEG-2 systems layer specifications for the distribution of audiovisual data via cable (DVB-C), satellite (DVB-S), or terrestrial (DVB-T) transmitters.

‘‘Virtual’’ View Generation and 3D Display. At the receiver side of the proposed ATTEST system, the transmitted data is decoded in a 3DTV STB to retrieve the decompressed color video and depth-image sequences (as well as the additional metadata). From this data representation format, a DIBR algorithm generates “virtual” left- and right-eye views for the three-dimensional reproduction of a real-world scene on a stereoscopic or autostereoscopic, single- or multiple-user 3DTV display. The backwards-compatible design of the system ensures that viewers who do not want to invest in a full 3DTV set are still able to watch the two-dimensional color video without any degradation in quality using their existing digital 2DTV STBs and displays.
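As an illustration of the DIBR step just described, the sketch below forward-warps a color image into one “virtual” view by shifting each pixel horizontally according to its depth and keeping the nearest surface where pixels collide. The focal length and baseline are illustrative assumptions, and the depth pre-filtering and hole-filling used in practical DIBR systems are omitted.

```python
# Minimal depth-image-based rendering (DIBR) sketch for one virtual view,
# assuming a parallel (shift-sensor) camera setup. focal_px and baseline_m
# are illustrative parameters, not values from the ATTEST specification.
import numpy as np

def render_virtual_view(color, depth_m, focal_px=1000.0, baseline_m=0.03):
    """color: HxWx3 uint8 image; depth_m: HxW metric depth in metres.
    Returns a forward-warped virtual view; disocclusion holes stay black."""
    h, w, _ = color.shape
    view = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)               # keep the closest pixel per target column
    disparity = focal_px * baseline_m / depth_m  # horizontal pixel shift from depth
    xs = np.arange(w)
    for y in range(h):
        tx = np.round(xs - disparity[y]).astype(int)  # shift toward the virtual eye
        valid = (tx >= 0) & (tx < w)
        for x in xs[valid]:
            t = tx[x]
            if depth_m[y, x] < zbuf[y, t]:       # nearer surfaces win (occlusion handling)
                zbuf[y, t] = depth_m[y, x]
                view[y, t] = color[y, x]
    return view
```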

3D4YOU

3D4YOU is funded under the ICT Work Programme 2007–2008, a thematic priority for research and development under the specific program “Cooperation” of the Seventh Framework Programme (2007–2013). The objectives of the project are as follows:

  1. to deliver an end-to-end system for 3D high-quality media;
  2. to develop practical multi-view and depth capture techniques;
  3. to convert captured 3D content into a 3D broadcasting format;
  4. to demonstrate the viability of the format in production and over broadcast chains;
  5. to show reception of 3D content on 3D displays via the delivery chains;
  6. to assess the project results in terms of human factors via perception tests;
  7. to produce guidelines for 3D capturing to aid in the generation of 3D media production rules;
  8. to propose exploitation plans for different 3D applications.

The 3D4YOU project aims at developing the key elements of a practical 3D television system, particularly, the definition of a 3D delivery format and guidelines for a 3D content creation process.

The 3D4YOU project will develop 3D capture techniques, convert captured content for broadcasting, and develop 3D coding suitable for delivery to the public over broadcast chains. 3D broadcasting is seen as the next major step in home entertainment. The cinema and computer games industries have already shown that there is considerable public demand for 3D content, but the special glasses that are needed limit their appeal. 3D4YOU will address the consumer market that coexists with digital cinema and computer games. The 3D4YOU project aims to pave the way for the introduction of a 3D TV system. The project will build on previous European research on 3D, such as the FP5 project ATTEST, which has enabled European organizations to become leaders in this field.

3D4YOU endeavors to establish practical 3DTV. The key success factor is 3D content. The project seeks to define a 3D delivery format and a content creation process. Establishing practical 3DTV will then be demonstrated by embedding this content creation process into a 3DTV production and delivery chain, including capture, image processing, delivery, and then display in the home. The project will adapt and improve on these elements of the chain so that every part integrates into a coherent, interoperable delivery system. A key objective of the project is to provide a 3D content format that is independent of display technology and backward compatible with 2D broadcasting. 3D images will be commonplace in mass communication in the near future. Also, several major consumer electronics companies have made demonstrations of 3DTV displays that could be in the market within two years. The public’s potential interest in 3DTV can be seen by the success of 3D movies in recent years. 3D imaging is already present in many graphics applications (architecture, mechanical design, games, cartoons, and special effects for TV and movie production).

In recent years, multi-view display technologies have appeared that improve the immersive experience of 3D imaging, leading to the vision that 3DTV or similar services might become a reality in the near future. In the United States, the number of 3D-enabled digital cinemas is rapidly growing. By 2010, about 4300 theaters are expected to be equipped with 3D digital projectors, with the number increasing every month. Also in Europe, the number of 3D theaters is growing. Several digital 3D films will surface in the months and years to come, and several prominent filmmakers have committed to making their next productions in stereo 3D. The movie industry creates a platform for 3D movies, but there is no established solution to bring these movies to the domestic market. Therefore, the next challenge is to bring these 3D productions to the living room. 2D-to-3D conversion and a flexible 3D format are important strategic areas. It has been recognized that multi-view video is a key technology that serves a wide variety of applications, including free-viewpoint and 3DV applications for the home entertainment and surveillance business fields. Multi-view video coding and transmission systems are most likely to form the basis for next-generation TV broadcasting applications and facilities. Multi-view video coding will greatly improve efficiency compared with current video coding solutions that simulcast independent views. This project builds on the wealth of experience of the major players in European 3DTV and intends to bring the date of the start of 3D broadcasting a step closer by combining their expertise to define a 3D delivery format and a content creation process.

The key technical problems that currently hamper the introduction of 3DTV to the mass market are as follows:

  1. It is difficult to capture 3DV directly using current camera technology. At least two cameras need to operate simultaneously with an adjustable but known geometry. The offset of the stereo cameras needs to be adjustable to capture depth both close by and far away.
  2. Stereo video (acquired with two cameras) is currently not sufficient input for glasses-free, multi-view autostereoscopic displays. The required processing, such as disparity estimation, is noise-sensitive, resulting in low 3D picture quality (a minimal sketch of such an estimator follows this list).
  3. 3D postproduction methods and 3DV standards are largely absent or immature.
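The block-matching sketch below, referenced in item 2, illustrates why naive disparity estimation is noise-sensitive: each disparity is chosen by minimizing a per-block sum of absolute differences, which is easily perturbed by sensor noise and textureless regions. The window size and search range are illustrative choices, not project parameters.

```python
# Minimal block-matching disparity estimator for rectified stereo pairs.
# Shown only to illustrate the noise sensitivity mentioned above; real systems
# add regularization, sub-pixel refinement, and consistency checks.
import numpy as np

def block_matching_disparity(left, right, block=8, max_disp=64):
    """left, right: HxW grayscale float arrays (rectified). Returns HxW disparity."""
    h, w = left.shape
    disp = np.zeros((h, w))
    for y in range(0, h - block, block):
        for x in range(0, w - block, block):
            ref = left[y:y + block, x:x + block]
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x) + 1):
                cand = right[y:y + block, x - d:x - d + block]
                sad = np.abs(ref - cand).sum()   # sum of absolute differences
                if sad < best:
                    best, best_d = sad, d
            disp[y:y + block, x:x + block] = best_d
    return disp
```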

The 3D4YOU project will tackle these three problems. For instance, a creative combination of two or three high-resolution video cameras with one or two low-resolution depth range sensors may make it possible to create 3DV of good quality without the need for an excessive investment in equipment. This is in contrast to installing, say, 100 cameras for acquisition, where the expense may hamper the introduction of such a system.

Developing tools for conversion of 3D formats will stimulate content creation companies to produce 3DV content at acceptable cost. The cost at which 3DV should be produced for commercial operation is not yet known. However, 3DV production currently requires almost per-frame user interaction in the video, which is certainly unacceptable. This immediately indicates the issue that needs to be solved: currently, fully automated generation of high-quality 3DV is difficult; in the future it needs to be fully or at least semi-automatic, with an acceptable minimum of manual supervision during postproduction. 3D4YOU will research how to convert 3D content into a 3D broadcasting format and prove the viability of the format in production and over broadcast chains.

Once 3DV production becomes commercially attractive because acquisition techniques and standards mature, this will impact the activities of content producers, broadcasters, and telecom companies. As a result, these companies may adopt new techniques for video production simply because the output needs to be in 3D. Also, new companies could be founded that focus on acquiring 3DV and preparing it for postproduction. Here, there is room for differentiation since, for instance, the acquisition of a sport event will require large baselines between cameras and real-time transmission, whereas the shooting of narrative stories will require both small and large baselines and will allow some manual postproduction for achieving optimal quality. These activities will require new equipment (or a creative combination of existing equipment) and new expertise.

3D4YOU will develop practical multi-view and depth capture techniques. Currently, the stereo video format is the de facto 3D standard used by the cinemas. Stereo acquisition may, for this reason, become widespread as an acquisition technique. Cinemas operate with glasses-based systems and can therefore use a theater-specific stereo format. This is not the case for the glasses-free autostereoscopic 3DTV that 3D4YOU foresees for the home. To allow glasses-free viewing by multiple people at home, a wide baseline is needed to cover the total range of viewing angles. The current stereo video that is intended for the cinema will need considerable postproduction to be suitable for viewing on a multi-view autostereoscopic display. Producing visual content will therefore become more complex and may provide new opportunities for companies currently active in (3D) movie postproduction. According to the Networked and Electronic Media (NEM) Strategic Research Agenda, multi-view coding will form the basis for next-generation TV broadcast applications. Multi-view video has the advantage that it can serve different purposes. On the one hand, the multi-view input can be used for 3DTV. On the other hand, it can be shown on a normal TV where the viewer can select his or her preferred viewpoint of the action. Of course, a combination is possible where the viewer selects his or her preferred viewpoint on a 3DTV. However, multi-view acquisition with 30 views, for example, will require 30 cameras to operate simultaneously. This initially requires a large investment. 3D4YOU therefore sees a gradual transition from stereo capture to systems with many views. 3D4YOU will investigate a mixture of 3DV acquisition techniques to produce an extended center-view-plus-depth format (possibly with one or two extra views) that is, in principle, easier to produce, edit, and distribute. The success of such a simpler format relies on the ease (read: cost!) with which it can be produced. One can conclude that the introduction of 3DTV to the mass market is hampered by (i) the lack of high-quality 3DV content; (ii) the lack of suitable 3D formats; and (iii) the lack of appropriate format conversion techniques. The variety of new distribution media further complicates this.

Hence, one can identify the following major challenges that are expected to be overcome by the project:

  1. Video Acquisition for 3D Content: Here, the practicalities of multi-view and depth capture techniques are of primary importance; the challenge is to find trade-offs, such as the number of views to be recorded, and to determine how to optimally integrate depth capture with multi-view capture. A further challenge is to define which shooting styles are most appropriate.
  2. Conversion of Captured Multi-View Video to a 3D Broadcasting Format: The captured format needs new postproduction tools (like enhancement and regularization of depth maps or editing, mixing, fading, and compositing of V+D representations from different sources) and a conversion step generating a suitable transmission format that is compatible with used postproduction formats before the 3D content can be broadcast and displayed.
  3. Coding Schemes for Compression and Transmission: A last challenge is to provide suitable coding schemes for compression and transmission that are based on the 3D broadcasting format under study and to demonstrate their feasibility in field trials under real distribution conditions.

By addressing these three challenges from an end-to-end systems point of view, the 3D4YOU project aims to pave the way to the definition of a 3D TV system suitable for a series of applications. Different requirements could be set depending on the application, but the basic underlying technologies (capture, format, and encoding) should maintain as much commonality as possible so as to favor the emergence of an industry based on those technologies.

3DPHONE

The 3DPHONE project aims to develop technologies and core applications enabling a new level of user experience by developing an end-to-end, all-3D imaging mobile phone. Its aim is to have all fundamental functions of the phone—media display, User Interface (UI), and personal information management (PIM) applications—realized in 3D. The project will develop techniques for an all-3D phone experience: mobile stereoscopic video, 3D UIs, 3D capture/content creation, compression, rendering, and 3D display. Research and development of algorithms for 3D audiovisual applications, including personal communication, 3D visualization, and content management, will also be carried out.

The 3DPhone Project started on February 11, 2008. The duration of the project is 3 years and there are six participants from Turkey, Germany, Hungary, Spain, and Finland. The partners are Bilkent University, Fraunhofer, Holografika, TAT, Telefonica, and University of Helsinki. 3DPhone is funded by the European Community’s ICT programme in Framework Programme Seven.

The goal is to enable users to

  • capture memories in 3D and communicate with others in 3D virtual spaces;
  • interact with their device and applications in 3D;
  • manage their personal media content in 3D.

The expected outcome will be simpler use and a more personalized look and feel. The project will deliver state-of-the-art advances in mobile 3D technologies through the following activities:

  • A mobile hardware and software platform will be implemented with both 3D image capture and 3D display capability, featuring both 3D displays and multiple cameras. The project will evaluate different 3D display
    and capture solutions and will implement the most suitable solution for hardware–software integration.
  • UIs and applications that will capitalize on the 3D autostereoscopic illusion in the mobile handheld environment will be developed. The project will design and implement 3D and zoomable UI metaphors suitable for autostereoscopic displays.
  • End-to-end 3DV algorithms and 3D data representation formats, targeted for 3D recording, 3D playback, and real-time 3DV communication, will be investigated and implemented.
  • Ergonomics and experience testing to measure any possible negative symptoms, such as eye strain created by stereoscopic content, will be performed. The project will research ergonomic conditions specific to the mobile handheld usage: in particular, the small screen, one hand holding the device, absence of complete keyboard, and limited input modalities.

In summary, the general requirements on 3DV algorithms on mobile phones are as follows:

  • low power consumption,
  • low complexity of algorithms,
  • limited memory/storage for both RAM and mass storage,
  • low memory bandwidth,
  • low video resolution,
  • limited data transmission rates and limited bitrates for 3DV signal.

These strong restrictions, derived from terminal capabilities and from transmission bandwidth limitations, usually result in relatively simple video processing algorithms running on mobile phone devices. Typically, video coding standards address this with specific profiles and levels that use only a restricted and simple set of video coding algorithms and low-resolution video. The H.264/AVC Baseline Profile, for instance, uses only a simple subset of the rich video coding algorithms that the standard provides in general (an encoding sketch follows the list below). For 3DV, the equivalent of such a low-complexity baseline profile for mobile phone devices still needs to be defined and developed. Obvious requirements of video processing and coding apply to 3DV on mobile phones as well, such as

  • high coding efficiency (taking bitrate and quality into account);
  • requirements specific to 3DV that apply to 3DV algorithms on mobile phones, including
    • flexibility with regard to different 3D display types,
    • flexibility for individual adjustment of 3D impression.
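To make the profile/level restriction mentioned before this list concrete, the sketch below encodes a low-resolution H.264 Baseline Profile stream of the kind a mobile-class decoder could handle. ffmpeg is used here only as a readily available encoder, and the resolution, level, and bitrate values are assumptions for illustration; none of this is specified by the 3DPHONE project.

```python
# Illustrative only: produce a low-complexity, low-resolution H.264 Baseline
# Profile encode suitable for a constrained mobile device.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "libx264",
    "-profile:v", "baseline",   # Baseline disables B-frames and CABAC
    "-level:v", "3.0",          # caps resolution, frame rate, and bitrate
    "-vf", "scale=640:360",     # low resolution for small screens
    "-b:v", "500k",             # modest bitrate for mobile links
    "output_mobile.mp4",
], check=True)
```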

 

 

HDMI Licensing, LLC

HDMI Licensing, LLC, of Sunnyvale, California, promulgates the HDMI specifications. In 2009, it published the new HDMI 1.4 specification, which was discussed in Appendix A5. HDMI cabling is typically used between the STB or BD player and the TV display. This upgrade has been viewed as one of the key developments enabling 3DTV. Of all the new HDMI 1.4 features, 3D is reportedly getting the most interest from broadcasters.

The HDMI 1.4 work grew out of interactions between the HDMI Licensing group and a related working group in the CEA that owns CEA 861. There are improvements expected with new silicon interface chips as these support higher transfer rates on the interface, but the short-term goal is also to have existing equipment be as functional as possible because without HDMI support, one cannot readily deploy 3DTV. The HDMI Licensing group is also relaxing its specifications so that many existing STBs and TVs do not have to handle a variety of previously mandatory formats, often beyond their processing capabilities or needs. Instead, they can handle stereo 3D broadcasts in the top/bottom format with a firmware upgrade.
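As a rough illustration of the top/bottom format mentioned above, the sketch below squeezes the left- and right-eye images to half height and stacks them into a single conventional frame, which legacy transmission paths can carry unchanged; a 3D-capable display then unpacks and rescales the two halves. Pillow is used purely for convenience and is an assumption, not something implied by the HDMI specification.

```python
# Minimal top/bottom frame-packing sketch: two full frames become one.
from PIL import Image

def pack_top_bottom(left_path, right_path, out_path):
    left, right = Image.open(left_path), Image.open(right_path)
    w, h = left.size
    half = h // 2
    top = left.resize((w, half))        # vertically squeezed left-eye view
    bottom = right.resize((w, half))    # vertically squeezed right-eye view
    frame = Image.new(left.mode, (w, half * 2))
    frame.paste(top, (0, 0))
    frame.paste(bottom, (0, half))
    frame.save(out_path)

# Example usage (hypothetical file names):
# pack_top_bottom("left_eye.png", "right_eye.png", "top_bottom_frame.png")
```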

Consumer Electronics Association (CEA)

The CEA is the preeminent trade association promoting growth in the $172 billion US consumer electronics industry. More than 2000 companies enjoy the benefits of CEA membership, including legislative advocacy, market research, technical training and education, industry promotion, and the fostering of business and strategic relationships.

At recent CEA Industry Forums (2009), the focus has been on consumer electronics retail trends (e.g., changes in channel dynamics), 3DTV technology, green technology, and social media. CEA takes the (tentative) position that the 3DTV technology is demonstrating clear success at movie theaters and will gradually evolve into other facets of consumers’ viewing habits. But the guidance is that the industry needs to have reasonable expectations for 3DTV. 3DTV is gaining momentum, as covered in this text, but may not completely reach critical mass for several years. CEA recently observed that the top trends and technologies likely to prominently feature at upcoming international CES events are as follows: interactive TV topped the list as a trend to watch with a variety of partnerships, widgets, menus, and new ways to manage content across screens likely to generate “buzz” at upcoming CES trade shows; and 3DTV also will be a big trend, with the question of whether 3D glasses or an alternative solution will emerge as the most viable option. E-books and Netbooks were also highlighted as top 2010-and-beyond CES trends [17].

CEA is developing standards for an uncompressed digital interface between (say) the STB (called the source) and the 3D display (called the sink); these standards will need to include signaling details, 3D format support, and other interoperability requirements between sources and sinks. In 2008, CEA started standards work aimed at enabling home systems to play stereoscopic 3DTV. The group’s first step was to upgrade the interconnect standard used in the High-Definition Multimedia Interface (HDMI) to enable the cable/interface to carry stereo 3D data. Specifically, this entailed an upgrade of the CEA 861 standard (A DTV Profile for Uncompressed High-Speed Digital Interfaces, March 2008) that defines an uncompressed video interconnect for HDMI. The standard defines video timing requirements, discovery structures, and a data transfer structure (InfoPacket) that is used for building uncompressed, baseband, digital interfaces on DTVs or DTV monitors. A single physical interface is not specified, but any interface implemented must use Video Electronics Standards Association Enhanced Extended Display Identification Data (VESA E-EDID) for format discovery. CEA-861-E establishes protocols, requirements, and recommendations for the utilization of uncompressed digital interfaces by consumer electronics devices such as DTVs, digital cable, satellite, or terrestrial STBs, and related peripheral devices including, but not limited to, DVD players/recorders and other related sources or sinks. CEA-861-E is applicable to a variety of standard DTV-related high-speed digital physical interfaces such as the Digital Visual Interface (DVI) 1.0, Open Low Voltage Differential Signaling Display Interface (LDI), and HDMI specifications. Protocols, requirements, and recommendations that are defined include video formats and waveforms; colorimetry and quantization; transport of compressed and uncompressed, as well as Linear Pulse Code Modulation (LPCM), audio; carriage of auxiliary data; and implementations of the VESA E-EDID, which is used by sinks to declare display capabilities and characteristics.
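The following is a minimal sketch of the E-EDID format discovery mentioned above: it reads a raw EDID blob, checks the fixed 8-byte header, and reports whether a CEA-861 extension block (tag 0x02) is present. The Linux sysfs path is an assumption for illustration, and the parsing stops well short of the full capability declarations a real sink would expose.

```python
# Read and sanity-check an EDID blob, then look for a CEA-861 extension block.
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def inspect_edid(path="/sys/class/drm/card0-HDMI-A-1/edid"):
    with open(path, "rb") as f:
        edid = f.read()
    if edid[:8] != EDID_HEADER:
        raise ValueError("not a valid EDID block")
    n_ext = edid[126]                      # byte 126: number of extension blocks
    has_cea = any(edid[128 * (i + 1)] == 0x02 for i in range(n_ext))
    return {"extension_blocks": n_ext, "has_cea861_extension": has_cea}

# Example usage (path is a Linux-specific assumption):
# print(inspect_edid())
```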

At press time, CEA was also working on creating standards for 3DTV active and passive eyeglasses, metadata, on-screen displays, and user controls. A CEA group set up in 2009 was working on a standard for infrared signals used to control active shutter glasses; the group developed a requirements document and published a broad call for proposals in early 2010. The CEA also has a task group studying how to place captions in 3D space; the group was expected to issue a call for proposals in early 2010.

Rapporteur Group on 3DTV of ITU-R Study Group 6

Arranging a television system so that viewers can see 3D pictures is both simple and complex. ITU-R has agreed on a new study topic on 3D television, and in 2010 it expects to be building up knowledge of the options. Proponents had proposed to the ITU-R in 2008 that the time was ripe for worldwide agreements on 3DTV, and ITU-R Study Group 6 has agreed on a “new Study Question” on 3D television, which will be submitted for approval by the ITU-R membership.

Though there are different views about whether current technology can provide a system which is entirely free of eyestrain, for those who wish to start such services, there could be advantages in having a worldwide common solution, or at least interoperable solutions, and the ITU-R Study Group 6 specialists have been gathering information, which might lead to such a result.

Therefore, the Question from the ITU-R calls for contributions on systems that include, but also go beyond, stereoscopy, and include technology that may record what physicists call the “object wave.” Clearly, this is a more futuristic version of 3DTV. Holograms record the “object wave” in a limited way. Will there be a way of broadcasting to record an “object wave”? This remains to be seen. No approaches are excluded at this stage. The “Question” is essentially a call for proposals for 3DTV. Journals and individuals are asked to “spread the word” about this, and to invite contributions. Such contributions are normally channeled via national administrations, or via the other Members of the ITU, the so-called Sector Members. Which proposals will be made and which may be the subject of agreement remains to be seen, but the ITU-R sector has launched, in its own words, “an exciting new issue, which may have a profound impact on television in the years ahead.”

The Question is included below to give the readers perspective on the ITU-R work.

QUESTION ITU-R 128/6
Digital three-dimensional (3D) TV broadcasting

The ITU Radiocommunication Assembly

considering

a) that existing TV broadcasting systems do not provide complete perception of reproduced pictures as natural three-dimensional scenes;

b) that viewers’ experience of presence in reproduced pictures may be enhanced by 3D TV, which is anticipated to be an important future application of digital TV broadcasting;

c) that the cinema industry is moving quickly towards production and display in 3D;

d) that research into various applications of new technologies (for example, holographic imaging) that could be used in 3D TV broadcasting is taking place in many countries;

e) that progress in new methods of digital TV signal compression and processing is opening the door to the practical realization of multifunctional 3D TV broadcasting systems;

f) that the development of uniform world standards for 3D TV systems, covering various aspects of digital TV broadcasting, would encourage adoption across the digital divide and prevent a multiplicity of standards;

g) that the harmonization of broadcast and non-broadcast applications of 3D TV is desirable,

decides that the following Questions should be studied

  1. What are the user requirements for digital 3D TV broadcasting systems?
  2. What are the requirements for image viewing and sound listening conditions for 3D TV?
  3. What 3D TV broadcasting systems currently exist or are being developed for the purposes of TV program production, post-production, television recording, archiving, distribution and transmission for realization of 3D TV broadcasting?
  4. What new methods of image capture and recording would be suitable for the effective representation of three-dimensional scenes?
  5. What are the possible solutions (and their limitations) for the broadcasting of 3D TV digital signals via the existing terrestrial 6, 7, and 8 MHz bandwidth channels or broadcast satellite services, for fixed and mobile reception?
  6. What methods for providing 3D TV broadcasts would be compatible with existing television systems?
  7. What are the digital signal compression and modulation methods that may be recommended for 3D TV broadcasting?
  8. What are the requirements for the 3D TV studio digital interfaces?
  9. What are appropriate picture and sound quality levels for various broadcast applications of 3D TV?
  10. What methodologies of subjective and objective assessment of picture and sound quality may be used in 3D TV broadcasting?

also decides

  1. that results of the above-mentioned studies should be analyzed for the purpose of the preparation of new Reports and Recommendation(s);
  2. that the above-mentioned studies should be completed by 2012.

It should be noted that the ITU-R has already published some standards and reports on 3DTV in the past, including the following:

  • Rec. ITU-R BT.1198 (1995) Stereoscopic television based on R- and L-eye two-channel signals
  • Rec. ITU-R BT.1438 (2000) Subjective assessment of stereoscopic television pictures
  • Report ITU-R BT.312-5 (1990) Constitution of stereoscopic television
  • Report ITU-R BT.2017 (1998) Stereoscopic television MPEG-2 multi-view profile
  • Report ITU-R BT.2088 (2006) Stereoscopic Television.

ITU-R BT.1198, Stereoscopic television based on R- and L-eye two-channel signals, suggests some general principles to be followed in development of stereoscopic television systems to maximize their compatibility with existing monoscopic systems. It contains

  • requirements for compatibility with monoscopic signal;
  • requirement for a discrete two-channel digital video coding scheme;
  • requirement for a discrete channel plus difference channel digital video coding scheme.

Obviously, these are “old” standards, but they point to the fact that transmission of 3DTV signals is not a completely new concept.

Society of Motion Picture and Television Engineers (SMPTE) 3D Home Entertainment Task Force

There is a need for a single mastering standard for viewing stereo 3D content on TVs, PCs, and mobile phones, where the content could originate from optical disks, broadcast networks, or the Internet. To that end, SMPTE formed a 3D Home Entertainment Task Force in 2008 to work on the issue, and a standards effort was launched in 2009 via an SMPTE 3D Standards Working Group to define a content format for stereo 3D. The SMPTE 3D Standards Working Group had about 200 participants at press time; the Home Master standard was expected to become available in mid-2010. The group favors basing the Home Master specification on 1920 × 1080 pixel resolution at 60 fps per eye. The specification is expected to support an option for falling back to a 2D image. The standard is also expected to support hybrid products, such as BDs that can support either 2D or stereo 3D displays.
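Some back-of-the-envelope arithmetic, assuming 8-bit RGB (three bytes per pixel, an assumption since the pixel format of the Home Master is not given here), shows the scale of the uncompressed data rate implied by a 1920 × 1080, 60 fps-per-eye stereo master and why efficient distribution coding matters.

```python
# Illustrative uncompressed data-rate estimate for the stereo Home Master.
width, height, fps, eyes, bytes_per_pixel = 1920, 1080, 60, 2, 3
rate_bytes_per_s = width * height * fps * eyes * bytes_per_pixel
print(rate_bytes_per_s / 1e6, "MB/s")   # ~746 MB/s, hence the need for compression
```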

SMPTE’s 3D Home Master defines high-level image formatting requirements that impact 3DTV designs, but the bulk of the 3DTV hardware standards are expected to come from other organizations, such as CEA. Studios or game publishers would deliver the master as source material for uses ranging from DVD and BD players to terrestrial and satellite broadcasts and Internet downloadable or streaming files.

As we have seen throughout this text, 3DTV systems must support multiple delivery channels, multiple coding techniques, and multiple display technologies. Digital cinema, for example, is addressed with a relatively simple left–right sequence approach; residential TV displays involve a greater variety of technologies, necessitating more complex encoding. Content transmission and delivery is also supported by a variety of physical media such as BDs, as well as broadcasting, satellite, and cable delivery. The SMPTE 3D Group has been considering what kind of compression should be supported. One of the key goals of the standardization process is defining and/or identifying schemes that minimize the total bandwidth required to support the service; the MVC extension to MPEG-4/H.264 discussed earlier is being considered by the group. Preliminary studies have shown, however, that relatively little bandwidth may be saved when compared to simulcast, because high-quality images require 75–100% overhead and images of medium quality require 65–98% overhead.

In addition to defining the representation and encoding standards (which clearly drive the amount of channel bandwidth for the additional image stream), 3DTV service entails other requirements; for example, there is the issue of graphics overlay, captions and subtitles, and metadata. 3D programming guides have to be rethought, according to industry observers; the goal is to avoid floating the guide in front of the action and instead to push the guide behind the screen and let the action play over it, because practical research shows that people found it jarring when the programming guide was brought to the forefront of 3DV images [13]. The SMPTE Group is also looking at format wrappers, such as the Material eXchange Format (MXF; a container format for professional digital video and audio media defined by a set of SMPTE standards), whether an electrical interface should be specified, and whether depth representation is needed for an early version of the 3DTV service, among other factors [14]. As we have noted earlier in the text, 3DTV has the added consideration of physiological effects, because disjoint stereoscopic images can adversely impact the viewer.
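Using the overhead figures quoted in the compression discussion above, a quick calculation shows why the savings over simulcast are modest: simulcasting two views costs roughly twice the base rate, while joint coding at 75–100% overhead costs 1.75 to 2.0 times the base rate.

```python
# Illustrative arithmetic based on the quoted 75-100% overhead figures.
R = 1.0                                     # normalised rate of one 2D view
simulcast = 2.0 * R                         # two independent views
mvc_best, mvc_worst = 1.75 * R, 2.00 * R    # joint coding at 75% and 100% overhead
print("saving vs simulcast: %.1f%% to %.1f%%" %
      ((1 - mvc_worst / simulcast) * 100, (1 - mvc_best / simulcast) * 100))
# -> saving vs simulcast: 0.0% to 12.5%
```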