Proper Main Light Meter Placement

Where you place the light meter determines whether your exposure is correct. There's a reason incident light meters use a plastic dome to see the light before it hits the meter's sensor: the shape of the dome mimics the shape of the face, reading the strength of light and shadow relative to a face and giving you a correct f-stop for your camera. It makes no difference what color of skin you're working with, because a correctly metered light will be a perfectly exposed light, and any skin color will be properly represented.

So, what is the correct position of the meter's dome? In the vast majority of circumstances, the place to hold the meter is directly under the subject's chin, aimed at the camera. This guarantees the meter reads the three zones of light—the specular highlight, diffused highlight, and transition zone—with an equal balance. The result, if your meter is calibrated, is an exposure so accurate you can go straight to proofs without tweaking Levels at all. When you're using a calibrated light meter, the amount of time you'll save is enormous. You will be confident enough in your metering technique to avoid shooting RAW files entirely, if you wish.

Let's back up for just a minute. RAW files are the digital equivalent of a film negative, allowing you up to 2 stops of exposure compensation. RAW files are a digital gift because in tough shooting situations they can really save your bacon. Each RAW file is worthless by itself, however. It must be "processed" via digital software before it can be used as a JPEG, TIFF, or any other format.

As you know or can imagine, the amount of time required to process a large batch of RAW files can be enormous. Imagine a wedding photographer who shoots two or three thousand shots in RAW format over the course of the big day. Each shot may take 3 to 4 minutes to tweak, plus the time necessary to process the files into a TIFF or JPEG format (depending on the speed of your computer). Think about the amount of time you might save if you nail the exposure, especially in JPEG format, straight out of the gate.
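To put a number on that, here's a quick back-of-the-envelope sketch (the shot count and per-shot minutes are the rough figures from the example above, not measurements):

[code]

using System;

class RawBatchEstimate
{
    static void Main()
    {
        int shots = 2500;             // mid-range of the wedding example above
        double minutesPerShot = 3.5;  // tweaking time only, before batch conversion

        double hours = shots * minutesPerShot / 60;
        Console.WriteLine($"Roughly {hours:F0} hours of tweaking"); // about 146 hours
    }
}

[/code]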

If you feel you must shoot RAW, set your camera to shoot large JPEG files and RAW files at the same time. When you download, separate the RAWs and JPEGs into separate folders. Look at the JPEGs first. If you find JPEGs that need to be tweaked, use the RAW files to do so. If the JPEGs are fine, trash the RAW files or just burn them to a disc. Your time savings will be huge.
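If you'd rather script that sorting step than drag files around by hand, a small program can do it. Here's a minimal C# sketch; the folder path and the .cr2/.nef RAW extensions are assumptions for illustration, so adjust them to your own camera and download layout:

[code]

using System;
using System.IO;

class RawJpegSorter
{
    static void Main()
    {
        // Hypothetical download folder; change to match your workflow
        string source = @"C:\Downloads\Wedding";
        string rawDir = Path.Combine(source, "RAW");
        string jpegDir = Path.Combine(source, "JPEG");
        Directory.CreateDirectory(rawDir);
        Directory.CreateDirectory(jpegDir);

        foreach (string file in Directory.GetFiles(source))
        {
            string ext = Path.GetExtension(file).ToLowerInvariant();
            // .cr2 is Canon's RAW extension; .nef is Nikon's
            string target = (ext == ".cr2" || ext == ".nef") ? rawDir
                          : (ext == ".jpg" || ext == ".jpeg") ? jpegDir
                          : null;
            if (target != null)
                File.Move(file, Path.Combine(target, Path.GetFileName(file)));
        }
    }
}

[/code]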

Metering for correct exposure in the studio is quite easy. I’ve photographed a few test images just to show you how foolproof it can be.

Let’s begin with the light set at zero degrees to the lens axis (i.e., directly over the lens). I used a simple parabolic reflector as a modifier for this test. It’s rather contrasty, but it will demonstrate the principle nicely.

In image 2.1, you’ll see there is a full range of tones, from the bright whites of the subject’s teeth through the shadows of her hair.

The correct position of the light meter, in at least 90 percent of all situations, is directly under the chin and aimed directly at the lens. This will guarantee the meter will read all three zones and deliver an average reading that will give you a proper representation of those zones. See image 2.2.

At 22.5 degrees from center, the angle of incidence begins to change. You may have been taught in photo class that the "angle of incidence equals the angle of reflection," and this is absolutely true. The meter angle stays the same, straight on to camera, but you begin to see some changes in the specularity of the light because it's now aimed at different planes on the face and reflecting directly into the camera from some of them. See image 2.3.

At 45 degrees, the shadows deepen because there is no fill on the shadow side. Metering with the dome still aimed at the camera still produced a perfect exposure once the strobe generator was adjusted to the target f-stop, f/10 for this example. See image 2.4.


At 60 degrees, which is more than most attractive portraits will tolerate, a meter reading aimed at the camera still yields a beautiful result. Shadows and highlights are properly represented, even though the image is very contrasty. See image 2.5.

So, what happens if we aim the meter at the light? At 60 degrees, what can the difference be, after all? Interestingly, the difference can be quite major. When you aim the meter at the light, you measure only the brightest part of the light, not the average of highlights and shadows we've been measuring so far. With the meter aimed at the light, note the difference in shadow density and highlight brilliance compared to the previous examples. The inference is clear: most circumstances do not require the meter to be aimed at the light. Aiming it at the camera will produce more consistent results almost all of the time. The first image was made at the previous aperture; the second was made with the reading obtained by aiming the meter at the light rather than the camera, a 1/4-stop difference. See images 2.6 and 2.7.

The easiest way to add fill light to your image is to bring in a white bookend or any other kind of white fill to add light to the shadow side. I’ve never been a fan of adding another strobe as fill. I much prefer a fill card of some kind because it will not add any shadows of its own. Be advised that, even at 3 feet away from the subject, the extra light that bounces in will affect the overall exposure. In this case, introducing the bookend added 1/3 stop of light to the overall exposure, which meant I had to either take the exposure down at the source (as I would recommend) or move the main light straight back a few inches. Either approach will maintain the ratio of any other lights that may have been set. This image, metered with the dome aimed at the camera, is a perfect example of how bounce fill can open up the shadows without looking like a second source of light. See image 2.8.
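Since each stop represents a doubling of light, compensating for that extra 1/3 stop at the source means scaling the strobe's power by 2 raised to the -1/3. A quick sketch of the arithmetic (illustration only, not strobe-pack API code):

[code]

using System;

class StopMath
{
    static void Main()
    {
        double stopsAdded = 1.0 / 3.0;                // the fill card added 1/3 stop
        double powerScale = Math.Pow(2, -stopsAdded); // multiply pack power by this

        // Prints 0.79 -- i.e., take the pack down to about 79% of its power
        Console.WriteLine($"Scale strobe power by {powerScale:F2}x");
    }
}

[/code]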

Metering a profile is different in that it's one of the few times you'll need to aim the meter at the light rather than the camera. This assumes that the light is coming from in front of the profile (and from the side, relative to the camera). When the light is coming from any direction less than the 60 degrees we previously discussed, you can meter with confidence with the dome facing the camera.

When the light is coming from 90 degrees to the side, if we were to meter with the dome aimed at the camera, the amount of shadow would throw off the accuracy of the reading, causing a poor exposure. See images 2.9 and 2.10.

Once you’re satisfied with the reading and f-stop, set and meter any other lights you wish to use. I reintroduced the white bookend and added a hair light, powered to the same f-stop as the main light. A small piece of black foamcore, mounted on an accessory arm, created a flag that blocked off part of the light striking the background. The result (image 2.11) is a visually interesting and lovely image. This is a perfect technique for many portrait applications, from beauty and glamour to graduation portraiture.

Canon EOS 60D, Picture Styles

Picture styles on the 60D will allow you to enhance your images in-camera depending on the type of photo you are taking. The picture style is automatically selected when you are using any of the Basic Zone modes. When using a Creative Zone shooting mode, you decide which style to use.

There are a few things to keep in mind when using picture styles. The first is that when you are shooting in RAW, the picture style doesn't really "stick." When previewing your images on the LCD Monitor, you'll see it applied to the image; but once you bring the file into your RAW editing software, you can change it to any of the other styles. Shooting JPEG images or video, however, permanently embeds the picture style in the image or movie, and it can't be changed. This is extremely important to keep in mind when using styles such as the Monochrome picture style, since you will be discarding all color from your image.

These styles can be applied in the menu, while shooting in Live View, or while editing your RAW images in-camera. There are six styles to choose from, along with three additional user-defined styles:

  • Standard: This general-purpose style is used to create crisp images with bold, vibrant colors. It is suitable for most scenes.
  • Portrait: This style enhances the colors in skin tones and is used for a softer-looking image.
  • Landscape: This style enhances blues and greens, two colors that are typically visible in a landscape image.
  • Neutral: This style creates natural colors and subdued images, and it is a good choice if you want to do a lot of editing to your photos on the computer.
  • Faithful: This picture style is similar to the neutral style but creates better color when shooting in daylight-balanced light (color temperature of 5200K). It’s also a good option if you prefer to edit your photos on the computer.
  • Monochrome: This style creates black and white images. It’s important to note that if you use the Monochrome style and shoot in JPEG, you cannot revert the image to color.

SETTING THE PICTURE STYLE IN THE MENU

  1. Press the Menu button on the back of the camera, and then use the Multi-Controller to get to the second menu tab.
  2. Using the Quick Control dial, scroll down to the Picture Style menu item. Press the Set button.
  3. Use either the Main dial or the Quick Control dial to scroll through the styles. When you’ve selected the one you want to use, press the Set button.
  4. To edit any of these styles, select the one you want to change, and then press the Info button. To edit a specific setting, select the setting, press Set, and then use the Quick Control dial to make the changes.

SETTING THE PICTURE STYLE WITH LIVE VIEW

  1. Press the Live View shooting button to get into the Live View shooting mode.
  2. With Live View activated, press the Quick Control button on the back of the camera, and then use the Multi-Controller to scroll down to the Picture Style icon. Press Set.
  3. Use the Main dial on the top of the camera to select from among the different base picture style choices (A).
  4. Once you’ve selected a picture style, you can change any of its four parameters by using the Multi-Controller or Quick Control dial to select them (sharpness, contrast, saturation, and color tone), and then use the Main dial to make the changes (B).
  5. Press the Set button to lock in your changes.

 

Canon EOS 60D, In-Camera Image Editing

The 60D has image-editing features that allow you to quickly process images in-camera and save those files as a JPEG on your SD card. This feature is not a replacement for editing images on your computer, but it is a useful and fun way to create quick, ready-to-use images directly from your memory card.

CREATIVE FILTERS

The Creative filters are a fun way to add different effects to your images. The 60D comes with four different filters, each with settings you can change to customize the look of your image. One thing to note is that you cannot apply these effects to images photographed in the mRAW or sRAW quality settings.

  • Grainy B/W: This will make the image black and white and also add grain to the image. You can control the amount of contrast in the image—the contrast setting in Figure 10.2 was set to “low.”
  • Soft Focus: This adds a classic “soft glow” to an image by adding blur (Figure 10.3). You have control over the amount of blur you would like to add to your image.
  • Toy Camera effect: This effect adds a color cast and also vignettes the corners of the image to make it look as though it was photographed with a toy camera (Figure 10.4).
  • Miniature effect: If you want to mimic the look of a tilt-shift lens, then this is really fun to use. This filter adds contrast and blur to the image to make your scene look like a diorama, and it allows you to select the area of focus. It looks best when applied to photos taken from high up, like from a cliff or balcony (Figure 10.5).
FIGURE 10.2 Grainy B/W
FIGURE 10.3 Soft Focus
FIGURE 10.4 Toy Camera effect
FIGURE 10.5 Miniature effect

APPLYING A CREATIVE FILTER TO AN IMAGE

  1. Press the Menu button and use the Main dial to go to the fifth tab from the left. Scroll down to the Creative Filters option using the Quick Control dial and press Set (A).
  2. Use the Quick Control dial to select an image to edit (your camera will only display compatible images at this point). Press the Set button.
  3. Use the Quick Control dial to select the Creative filter you would like to apply, and then press Set (B).
  4. Use the Quick Control dial to adjust the filter (the options are different for each filter) (C). When you are finished editing, press the Set button. (You can also exit any of the filters at any time by pressing the Menu button to go to the previous screen.)
  5. Select OK on the next screen, and your image is now saved as a JPEG on your memory card. Press OK to confirm, and press the Menu button to exit.

RAW PROCESSING

Along with the Creative filters, you can also do basic adjustments to RAW files on your 60D. This feature is helpful if you need to quickly edit a file and save it as a JPEG, and you don’t have access or time to do so on a computer. Just like with the Creative filters, you cannot process images photographed in the mRAW and sRAW quality settings.

PROCESSING RAW IMAGES WITH THE 60D

  1. Press the Menu button and use the Main dial to go to the fifth tab from the left. Scroll down to the RAW Image Processing option using the Quick Control dial, and press Set (A).
  2. Use the Quick Control dial to select an image to edit (your camera will only display compatible images at this point). Press the Set button.
  3. Use the Multi-Controller to select an option to edit. Then use the Quick Control dial to make changes.
  4. Continue making changes to each setting as necessary, and when you are finished processing the image, scroll down to the Save option (B). Press Set.
  5. Select OK on the next screen, and your image is now saved as a JPEG on your memory card. Press OK to confirm, and press the Menu button to exit.

RESIZING IMAGES

Sometimes you might want to quickly resize an image, and the 60D has a feature that makes this very easy. You can resize JPEG L/M/S1 and S2 images, but not RAW and JPEG S3 files. This feature is perfect if you edited an image using a Creative filter discussed earlier in this section and need to use the image on the Web or send it as an email attachment.

RESIZING IMAGES ON THE 60D

  1. Press the Menu button and use the Main dial to go to the fifth tab from the left. Scroll down to the Resize option using the Quick Control dial and press Set (A).
  2. Use the Quick Control dial to select an image to resize (your camera will only display compatible images at this point). Press the Set button.
  3. Use the Quick Control dial to select the size you would like your image to be, and then press the Set button (B).
  4. Select OK on the next screen, and your image is now saved as a JPEG on your memory card. Press OK to confirm, and press the Menu button to exit.

 

VARI-ANGLE LCD MONITOR

One really cool feature of the 60D is its Vari-angle LCD Monitor (commonly called an “articulating screen”), which can be really handy in certain situations. Benefits of using this feature are very apparent when shooting in Live View or video mode, since you can angle the display so that it’s shaded from the sun. You can also angle the display when you want to lower or raise the camera beyond your field of view by moving the LCD Monitor so that it’s always facing in your direction. You can also swivel the display so that it’s flipped completely around, making it possible to do self-portraits or videos of yourself.

Another nice benefit of the Vari-angle LCD Monitor is that you can turn the display so that it’s flush against the camera, protecting the LCD Monitor from scratches while not in use. This is a good option when packing the camera in a camera bag or while using it in a harsh environment where damage to the monitor can easily occur.


Nikon D7000, Reducing Red-Eye

We’ve all seen the result of using on-camera flashes when shooting people: the dreaded red-eye! This demonic effect is the result of the light from the flash entering the pupil and then reflecting back as an eerie red glow. The closer the flash is to the lens, the greater the chance that you will get red-eye. This is especially true when it is dark and the subject’s pupils are fully dilated. There are two ways to combat this problem. The first is to get the flash away from the lens. That’s not really an option, though, if you are using the pop-up flash. Therefore, you will need to turn to the Red-eye Reduction feature.

This is a simple feature that shines a light from the camera at the subject, causing his or her pupils to shrink, thus eliminating or reducing the effects of red-eye (Figure 8.13).

Figure 8.13 Notice that the pupils on the image without red-eye are smaller as a result.

This small adjustment can make a big difference in time spent post-processing. You don’t want to have to go back and remove red-eye from both eyes of every subject!

The feature is set to Off by default and needs to be turned on by using the information screen or by using a combination of the flash button and the Command dial.

Turn on the lights!

When shooting indoors, another way to reduce red-eye, or just shorten the length of time that the reduction lamp needs to be shining into your subject’s eyes, is to turn on a lot of lights. The brighter the ambient light levels, the smaller the subject’s pupils will be. This will reduce the time necessary for the red-eye reduction lamp to shine. It will also allow you to take more candid pictures because your subjects won’t be required to stare at the red-eye lamp while waiting for their pupils to reduce.

Turning on the Red-eye Reduction feature

  1. Press and hold the flash button while viewing the control panel.
  2. While holding the flash button, rotate the Command dial until the small eye appears in the box. This means Red-eye Reduction is on. Release the flash button.
  3. With Red-eye Reduction activated, compose your photo and then press the shutter release button to take the picture.

When Red-eye Reduction is activated, the camera will not fire the instant you press the shutter release button. Instead, the red-eye reduction lamp will illuminate for a second or two, and then the flash will fire for the exposure. This is important to remember, as people have a tendency to move around, so you will need to instruct them to hold still for a moment while the lamp works its magic.

Truth be told, I rarely shoot with Red-eye Reduction turned on because of the time it takes before being able to take a picture. If I am after candid shots and have to use the flash, I will take my chances on red-eye and try to fix the problem in my image processing software or even in the camera’s retouching menu. The Nikon Picture Project software that comes with your D7000 has a feature to reduce red-eye that works really well, although only on JPEG images.

 

Canon EOS 60D, Exposure Settings for Video

Setting the exposure for video is similar to setting exposure for still photographs, but you will notice a few differences that will only apply when recording movies. One obvious difference is that you can only view your scene in Live View, and the LCD Monitor will display a simulated exposure for what your video will look like during the recording process. There are also some limitations on shutter speed and exposure—keep on reading to learn more about them.

AUTOEXPOSURE VS. MANUAL EXPOSURE

When shooting movies on the 60D, you have two options for exposure: Auto and Manual. When shooting in Auto, the camera determines all exposure settings (aperture, shutter speed, and ISO), whereas with Manual, you have control over these settings just as you would when shooting still images. Auto is a simple setting to use if you want to get a quick video and don’t have the time to change the settings manually. However, with autoexposure you have limited control, and if you want to take full advantage of your DSLR and lenses when shooting video, you’ll probably want to give the Manual mode a try.

The Manual mode for video functions in the same way as it does for still photography: You pick the aperture, shutter speed, and ISO. You can even change your settings while you are recording (although the microphone might pick up camera noises— read more about audio later in this chapter). I prefer to use the Manual mode when shooting video because I like to have control over all of my settings, and I also like to use the largest aperture possible to decrease the depth of field in the scene.

One important thing to note when shooting video is that you have some shutter speed limitations, depending on your frames-per-second setting. The slowest shutter speed when shooting with a frame rate of 50 or 60 fps is 1/60 of a second, and for 24, 25, or 30 fps, you can go down to 1/30 of a second. You can’t go any faster than 1/4000 of a second, but it’s recommended that you keep your shutter speed between 1/30 and 1/125 of a second, especially when photographing a moving subject. The slower your shutter speed is, the smoother and less choppy the movement in your video will be.
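Expressed as a quick sketch, the shutter-speed floor from the paragraph above depends only on the frame rate (this illustrates the rule described in the text; it is not a Canon API):

[code]

using System;

class VideoShutterLimits
{
    // Slowest allowed shutter speed (as a denominator) per frame rate:
    // 1/60 sec at 50/60 fps, 1/30 sec at 24/25/30 fps
    static int SlowestShutterDenominator(int fps)
    {
        switch (fps)
        {
            case 50:
            case 60: return 60;
            case 24:
            case 25:
            case 30: return 30;
            default: throw new ArgumentException("Unsupported frame rate");
        }
    }

    static void Main()
    {
        Console.WriteLine($"At 60 fps, slowest shutter: 1/{SlowestShutterDenominator(60)} sec");
        Console.WriteLine($"At 24 fps, slowest shutter: 1/{SlowestShutterDenominator(24)} sec");
    }
}

[/code]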

CHANGING THE MOVIE EXPOSURE SETTING

  1. Set the camera to video mode using the Mode dial on the top of the camera.
  2. Press the Menu button and use the Main dial to get to the first menu tab, and then select the Movie Exposure option at the top (A). Press the Set button.
  3. Make your selection (Auto or Manual), and then press the Set button once again to lock in your changes (B).

WHITE BALANCE AND PICTURE STYLES

When shooting video, you want to be sure to get the white balance right. Remember the difference between RAW and JPEG? Well, think of a video file as a JPEG. If you were to edit the video file on your computer, it would be difficult to change the white balance without damaging the pixels, and if the white balance is completely off, you might not even be able to salvage the video's original colors.

What's neat about shooting video is that you can see what the video quality will be like before you start recording. This means that you can set the white balance and see it change right in front of you.

Picture styles are also a very useful tool when shooting video. They work the same way as with still photography, and you can preview your scene with the changes while in the video Live View mode. Just remember that once you record with one of these settings, the effect can't be removed from the video. For example, when using the Monochrome (black and white) picture style, once you've recorded a movie, there is no way to go back and retrieve the color information.

 

 

Canon PowerShot G12, Shadow Correction and Dynamic Range (DR) Correction

Your camera provides two functions that can automatically make your pictures look a little better: Shadow Correction and Dynamic Range (DR) Correction. With Shadow Correction, the camera evaluates the tones in your image after you take the shot and then lightens any areas that it believes are too dark or lacking contrast (Figures 10.10 and 10.11). Dynamic Range Correction works in the other direction, attempting to prevent highlights from blowing out to white. These correction modes work only when shooting JPEG images (not RAW or RAW+JPEG).

The two controls are accessed from the same menu, even though they perform opposite actions.

Figure 10.10 Without Shadow Correction, the shadows on the chair and the fireplace are dark.
Figure 10.11 Although the exposure hasn't changed, the shadows are brighter after enabling the Shadow Correction feature.

Setting up Shadow Correction and Dynamic Range Correction

  1. Press the Function/Set button.
  2. Use the Up or Down button to highlight the DR Correction menu item (just above the white balance setting).
  3. To enable DR Correction, press the Right or Left button to choose Auto (only if the camera is in Auto mode), 200%, or 400%. The higher the setting, the more correction is applied.
    To enable Shadow Correction, press the Display button and then press the Right or Left button to choose the Auto setting.
  4. Press the Function/Set button to return to the shooting mode.

Canon PowerShot G12, Advanced Techniques to Explore

For most of this book, I’ve focused on how to take a great shot—one exposure, one image. But shooting digital opens other options that combine several shots into one better photo. The following two sections, covering panoramas and high dynamic range (HDR) images, require you to use image-processing software to complete the photograph. They are, however, important enough that you should know how to correctly shoot for success, should you choose to explore these two popular techniques.

Shooting panoramas

If you have ever visited the Grand Canyon, you know just how large and wide open it truly is—so much so that it’s difficult to capture its splendor in just one frame. The same can be said for a mountain range, or a cityscape, or any extremely wide vista. Two methods can help you capture the feeling of this type of scene.

The “fake” panorama

The first method is to shoot as wide as you can and then crop out the top and bottom portion of the frame. Panoramic images are generally two or three times wider than a normal image.

Creating a fake panorama

  1. To create the look of the panorama, zoom out to the camera’s widest focal length, 6.1mm.
  2. Using the guidelines discussed earlier, compose and focus your scene, and select the smallest aperture possible.
  3. Shoot your image. That’s all there is to it, from a photography standpoint.
  4. Open the image in your favorite image-processing software and crop the extraneous foreground and sky from the image, leaving you with a wide panorama of the scene.

Figure 7.16 isn’t a terrible photo, but the amount of sky at the top of the image detracts from the dramatic clouds below. This isn’t a problem, though, because it was shot for the purpose of creating a “fake” panorama. Now look at the same image, cropped for panoramic view (Figure 7.17). As you can see, it makes a huge difference and gives much higher visual impact by drawing your eyes across the length of the image.

Figure 7.16 This is an okay image, but the sky occupying the top half detracts from the clouds.
Figure 7.17 Cropping adds more visual impact and makes for a more appealing image.

The multiple-image panorama

The reason the previous method is sometimes referred to as a “fake” panorama is because it is made with a standard-size frame and then cropped down to a narrow perspective. To shoot a true panorama, you need to combine several frames. Although the camera can’t stitch the photos together, it does contain a Stitch Assist mode that aids in lining up the images to be merged together later.

The multiple-image pano has grown in popularity in the past few years, principally due to advances in image-processing software. Many software options are available now that will take multiple images, align them, and then "stitch" them into a single panoramic image (Figures 7.18 and 7.19). The real key to shooting a multiple-image pano is to overlap your shots by about 30 percent from one frame to the next. I'll cover the Stitch Assist mode, but I've also included instructions for doing the job manually. Also, it's possible to handhold the camera while capturing your images, but you'll get much better results if you use a tripod.
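That 30 percent overlap also lets you estimate how many frames a scene will need. Here's a rough planning sketch; the 180-degree scene and the 40-degree per-frame field of view are assumed numbers for illustration only:

[code]

using System;

class PanoramaPlanner
{
    static void Main()
    {
        double sceneDegrees = 180;  // total sweep to cover (assumed)
        double frameDegrees = 40;   // horizontal field of view of one frame (assumed)
        double overlap = 0.30;      // 30% overlap between neighboring frames

        // Each new frame adds only the non-overlapping 70% of its width
        double step = frameDegrees * (1 - overlap);
        int frames = 1 + (int)Math.Ceiling((sceneDegrees - frameDegrees) / step);
        Console.WriteLine($"Frames needed: {frames}"); // 6 for these numbers
    }
}

[/code]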

Using the Stitch Assist Mode

  1. Mount your camera on your tripod and make sure it is level.
  2. Choose a focal length for your lens that is somewhere in the middle of the zoom range (a wide angle can distort the edges, making it harder to stitch together).
  3. Turn the Mode dial to SCN and then turn the Control dial until you’ve selected the Stitch Assist scene. (There are actually two Stitch Assist scenes: One helps you shoot left to right, the other helps you shoot right to left.)
  4. Take the first photo.
  5. Carefully pan the camera, using the portion of the previous shot as a guide to align the next shot (see below). When the two images overlap, capture another photo.
  6. Repeat steps 4 and 5 until you’ve captured the entire panorama. Then switch to another mode to exit Stitch Assist.

Shooting properly for a multiple-image panorama

  1. Mount your camera on a tripod and make sure it is level.
  2. Choose a focal length for the lens.
  3. In Av mode, use a very small aperture for the greatest depth of field. Take a meter reading of a bright part of the scene, and make note of it.
  4. Now change your camera to Manual mode (M), and dial in the aperture and shutter speed that you obtained in the previous step.
  5. Switch to manual focus, and then focus your lens for the area of interest. (If you use autofocus, you risk getting different points of focus from image to image, which makes the stitching more difficult for the software.) Or, use the autofocus and remember to set the lens to MF before shooting your images.
  6. While carefully panning your camera, shoot your images to cover the entire area of the scene from one end to the other, leaving a 30 percent overlap from one frame to the next.
Figure 7.18 Here you see the makings of a panorama, with four shots overlapping by about 30 percent from frame to frame.
Figure 7.19 I used Adobe Photoshop Elements to combine the exposures into one large panoramic image. I also cropped and adjusted the color of the final image.

Now that you have your series of overlapping images, you can import them into your image-processing software to stitch them together and create a single image.

Shooting high dynamic range (HDR) images

One of the more recent trends in digital photography is the use of high dynamic range (HDR) to capture the full range of tonal values in your final image. Typically, when you photograph a scene that has a wide range of tones from shadows to highlights, you have to make a decision regarding which tonal values you are going to emphasize, and then adjust your exposure accordingly. This is because your camera has a limited dynamic range, at least as compared to the human eye. HDR photography allows you to capture multiple exposures for the highlights, shadows, and midtones, and then combine them into a single image (Figures 7.20–7.23).

There are two ways to get an HDR image with the G12. Switch to the SCN mode and choose the HDR option. Be sure to stabilize the camera on a tripod or other solid surface and press the shutter button to take the shot. The camera shoots and combines multiple exposures into one HDR shot with a complete range of exposures using a process called “tonemapping.”

For more control over the HDR photo’s appearance, capture multiple shots at different exposures and use third-party software to process them. I will not be covering the software applications, but I will explore the process of shooting a scene to help you render properly captured images for the HDR process. Note that using a tripod is absolutely necessary for this technique, since you need to have perfect alignment of each image when they are combined.

Sorting your shots for the multi-image panorama

If you shoot more than one series of shots for your panoramas, it can sometimes be difficult to know when one series of images ends and the other begins. Here is a quick tip for separating your images.

Set up your camera using the steps listed here. Now, before you take your first good exposure in the series, hold up one finger in front of the camera and take a shot. Then move your hand away and begin taking your overlapping images. When you have taken your last shot, hold two fingers in front of the camera and take another shot.

Now, when you go to review your images, use the series of shots that falls between the frames with one and two fingers in them. Then just repeat the process for your next panorama series.

Figure 7.20 Underexposing one stop renders more detail in the highlight areas of the sky.
Figure 7.21 This is the normal exposure as dictated by the camera meter.
Figure 7.22 Overexposing by two stops ensures that the darker areas are exposed to get detail in the shadows.
Figure 7.23 This is the final HDR image that was rendered from the three other exposures.

Setting up for shooting an HDR image

  1. Set your ISO to 80 to ensure clean, noise-free images.
  2. Set your program mode to Av. During the shooting process, you will be taking three shots of the same scene, creating an overexposed image, an underexposed image, and a normal exposure. Since the camera is going to be adjusting the exposure, you want it to make changes to the shutter speed, not the aperture, so that your depth of field is consistent.
  3. Set your camera file format to RAW. This is extremely important because the RAW format contains a much larger range of exposure values than a JPEG file, and the HDR software will need this information.
  4. Adjust the auto exposure bracket (AEB) mode to shoot three exposures in two-stop increments. To do this, press the Function/Set button and highlight the Bracket setting (third from the top). Next, use the Control dial or press the Right button to select the AEB option (A).
  5. Press the Display button to access the exposure control setting.
  6. Turn the Control dial to the right until the AEB indicators move all the way out to –2 and +2 (B). Press the Set button to lock in your changes.
  7. Focus the camera using the manual focus method discussed earlier, compose your shot, secure the tripod, and press the shutter button once; the camera fires all three shots automatically.

A software program such as Adobe Photoshop or Photomatix Pro can now process your exposure-bracketed images into a single HDR file.

Bracketing your exposures

In HDR, bracketing is the process of capturing a series of exposures at different stop intervals. You can bracket your exposures even if you aren’t going to be using HDR. Sometimes this is helpful when you have a tricky lighting situation and you want to ensure that you have just the right exposure to capture the look you’re after. You can bracket in increments as small as a third of a stop. This means that you can capture several images with very subtle exposure variances and then decide later which one is best.
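Because each stop doubles or halves the light, a bracket expressed in stops maps directly to shutter times at a fixed aperture. A minimal sketch, assuming a metered base exposure of 1/125 second and the ±2-stop AEB setting described above:

[code]

using System;

class BracketCalculator
{
    static void Main()
    {
        double baseShutter = 1.0 / 125.0;  // metered exposure (assumed)
        double[] stops = { -2, 0, 2 };     // AEB set to two-stop increments

        foreach (double s in stops)
        {
            // Each stop doubles (or halves) the exposure time at a fixed aperture
            double t = baseShutter * Math.Pow(2, s);
            Console.WriteLine($"{s,2} EV -> 1/{Math.Round(1 / t)} sec");
        }
        // Output: -2 EV -> 1/500, 0 EV -> 1/125, 2 EV -> 1/31 (about 1/30)
    }
}

[/code]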

Nikon D7000, Classic Black-and-White Portraits

There is something timeless about a black-and-white portrait. It eliminates the distraction of color and puts all the emphasis on the subject. To get great black-and-whites without having to resort to any image-processing software, set your picture control to Monochrome (Figure 6.10).

The picture controls are automatically applied when shooting with the JPEG file format. If you are shooting in RAW, the picture that shows up on your rear LCD display will look black and white, but it will appear as a color image when you open it in your software. You can then use the software to apply Monochrome, or any other picture control, to your RAW image.

The real key to using the Monochrome picture control is to customize it for your portrait subject. The control can be changed to alter the sharpness and contrast. For women, children, puppies, or anyone else you want to look somewhat soft, set the Sharpness setting to 0 or 1. For old cowboys, longshoremen, and anyone else who you want to look really detailed, including the wrinkles, try a setting of 6 or 7. I typically like to leave Contrast at a setting of around –1 or –2. This gives me a nice range of tones throughout the image.

The other adjustment that you should try is to change the picture control’s Filter effect from None to one of the four available settings (Yellow, Orange, Red, and Green). Using the filters will have the effect of either lightening or darkening the skin tones. The Red and Yellow filters usually lighten skin, while the Green filter can make skin appear a bit darker. Experiment to see which one works best for your subject.

Figure 6.10 Getting high-quality black-and-white portraits is as simple as setting the picture control to Monochrome.

 

Canon EOS 60D, Setting Up Your Camera for Continuous Shooting and Autofocus

In order to photograph fast-moving subjects, get several shots at a time, and stay focused on the subject through the entire process, you’ll need to make a few changes to your camera settings. The 60D makes the process simple, but it can be a bit confusing when you first start to work with it. Here, I briefly explain the two areas that are addressed in this section: drive modes and AF (autofocus) modes.

DRIVE MODES

The 60D’s drive mode determines how quickly each photo is taken and how many photos it will take continuously. The drive modes available on your camera include the following:

  • Single shooting: With this setting you will take only one photo each time you press and hold the Shutter button.
  • High-speed continuous shooting: When you press and hold the Shutter button, your camera will continuously take photos very quickly until you release the Shutter button, at up to approximately 5.3 frames per second.
  • Low-speed continuous shooting: When you press and hold the Shutter button, your camera will continuously take photos at a slower pace until you release the Shutter button, up to 3 frames per second. You can also easily take just one shot by quickly pressing and releasing the Shutter button.
  • 10-sec self-timer: The camera waits 10 seconds to take a photo once the Shutter button is pressed. This mode can also be used when shooting with a wireless remote control.
  • 2-sec self-timer: The camera waits 2 seconds to take a photo once the Shutter button is pressed. This mode can also be used when shooting with a wireless remote control.

For action and sports photography, the best option is High-speed continuous shooting. In this mode you will take several consecutive photos very quickly and are more likely to capture a good image of your fast-moving subject. Keep in mind that taking this many images at a time will fill up your memory card much more quickly than taking just one image at a time. The speed of your SD (Secure Digital) card also limits how many images you can take in a row.

Within your camera is a buffer, memory that holds image data before it is written to the SD card. When you take a photo, you'll see a red light on the back of your camera (the Card Busy indicator)—you usually won't notice anything is happening, because the buffer is big enough to hold data from several photos at a time. When you take a lot of photos in a row with the High-speed continuous drive mode, however, the buffer fills up more quickly, and if it completely fills up while you are shooting, your camera will "freeze" momentarily while the images are written to the card. Shooting in RAW fills the buffer much faster when shooting several images in a row—sports photographers who shoot in JPEG can get more images written to the card much more quickly.

One way to stay on top of this while you are shooting is to look inside the viewfinder—in the lower-right corner you’ll see a number. This number tells you how many photos you can take before the buffer is full (Figure 6.7). In general, it’s a good idea to do short bursts of photos instead of holding the Shutter button down for several seconds. This will help keep the buffer cleared, and the card won’t fill up as quickly.

FIGURE 6.7 The number on the far-right side of the viewfinder shows you how many shots you have left (max bursts) before the buffer is full.

USE THE CONTINUOUS MODE TO CAPTURE EXPRESSIONS

Using a fast shutter speed is not just for fast-moving subjects, but also for catching the ever-changing expressions of people, especially small children. This image (Figure 6.8) shows how an expression can go from happy to sad in a matter of seconds. Taking several consecutive shots allowed me to capture each moment as it happened without missing a thing.

FIGURE 6.8 This baby changed her expression from a smile to a frown in less than 10 seconds, and I was able to capture this change by taking several consecutive photographs.

SELECTING AND SHOOTING IN HIGH-SPEED CONTINUOUS DRIVE MODE

  1. Press the DRIVE button on the top of the camera.
  2. Rotate the Main dial until you see the drive setting that shows an “H.”
  3. Locate and focus on your subject in the viewfinder, and then press and hold the Shutter button to take several continuous images.

FOCUS MODES

Now that your drive setting is ready to go, let’s move on to focusing. The 60D allows you to shoot in three different autofocus modes: One Shot, AI Focus, and AI Servo (AI stands for Artificial Intelligence). The One Shot mode is designed for photographing stationary objects, or subjects that don’t move around very much; this setting is typically not very useful with action photography. You will be photographing subjects that move often and quickly, so you’ll need a focus mode that can keep up with them. The AI Servo mode will probably be your best bet. This setting will continue to find focus when you have your Shutter button pressed halfway, allowing you to keep the focus on your moving target.

SELECTING AND SHOOTING IN AI SERVO FOCUS MODE

  1. Press the AF button on the top of the camera.
  2. Use your index finger to rotate the Main dial until AI SERVO appears in the top LCD Panel.
  3. Locate your subject in the viewfinder, then press and hold the Shutter button halfway to activate the focus mechanism. You’ll notice that while in this mode you won’t hear a beep when the camera finds focus.
  4. The camera will maintain focus on your subject as long as the subject remains within your focus point(s) in the viewfinder, or until you take a picture.

The AI Focus mode is another setting that can be useful when you have a subject that is stationary at first but then starts to move—it’s the “best of both worlds” when it comes to focusing on your subject. Imagine that you are photographing a runner about to sprint in a race—you want to focus on the person’s eyes as they take the “ready” position and don’t want your camera to change focus. But just as the runner starts running down the track, the camera will kick into AI Servo mode to track and focus on the runner as they are moving.

FIGURE 6.9 The AF-ON button will activate the autofocusing system in your 60D without your having to use the Shutter button. Note that this button will not work when shooting in one of the fully automatic modes.

You should note that holding down the Shutter button for long periods of time will quickly drain your battery, because the camera is constantly focusing on the subject. You can also activate the focus by pressing the AF-ON button on the back of the camera (Figure 6.9). This is a great way to get used to the focusing system without worrying about taking unwanted pictures.

AF POINTS

The Canon 60D has a total of nine focus points and two different settings for autofocus: Manual selection and Automatic selection. Manual selection lets you choose one of the nine focus points within the viewfinder to set your autofocus to (A), while Automatic selection allows the camera to decide which autofocus points to focus on for each shot (B).

When you are photographing something and are able to set the focus point on the part of the image you want in focus all the time, such as when it’s focused on the eyes of a person, then it’s best to use Manual selection. If you’re photographing something where your subject will be near the center of the screen and moving around quickly, such as children running around on a soccer field, then you might want to give the Automatic selection a try. Experiment with each setting to find out which one works best with your shooting style.


Jigsaw Puzzle (Drag Gesture & WriteableBitmap)

Jigsaw Puzzle enables you to turn any picture into a challenging 30-piece jigsaw puzzle. You can use one of the included pictures, or choose a photo from your camera or pictures library. You can even zoom and crop the photo to get it just right. Drag pieces up from a scrollable tray at the bottom of the screen and place them where you think they belong. As you drag a piece, it snaps to each of the 30 possible correct positions to reduce your frustration when arranging pieces. Jigsaw Puzzle also can solve the puzzle for you, or reshuffle the pieces, both with fun animations.

Other than the instructions page, Jigsaw Puzzle contains a main page and a page for cropping an imported picture. This app leverages the gesture listener's drag events for a few different purposes. On the main page, dragging is used for moving puzzle pieces and for scrolling the tray of unused pieces at the bottom of the screen. On the page for cropping imported pictures, dragging is used to pan the picture.

Are you thinking of ways to increase the difficulty of the puzzles in this app? Although the puzzle pieces would become too difficult to drag if you made them much smaller (without also enabling zooming), you could enable pieces to be rotated. The next chapter demonstrates how to implement a rotation gesture.

The Main Page

Jigsaw Puzzle's main page contains the 30 pieces arranged in 6 rows of 5. Each piece is a 96×96 canvas that contains a vector drawing represented as a Path element. Fourteen distinct shapes are used (4 if you consider rotated/flipped versions as equivalent), shown in Figure 42.1.

FIGURE 42.1 The 14 shapes consist of 4 corner pieces, 8 edge pieces, and 2 middle pieces.

Each piece is actually larger than 96 pixels in at least one dimension, which is fine because each Path can render outside the bounds of its parent 96×96 canvas. Each Path is given an appropriate offset inside its parent canvas to produce the appropriate interlocking pattern, as illustrated in Figure 42.2. Every puzzle presented by this app uses these exact 30 pieces in the exact same spots; only the image on the pieces changes.

The choice of a vector-based path to represent each piece is important because it enables the nonrectangular shapes to interlock and retain precise hit-testing. If puzzle-piece-shaped images were instead used as an opacity mask on rectangular elements, the bounding box of each piece would respond to gestures on the entire area that overlaps the bounding box of any pieces underneath. This would cause the wrong piece to move in many areas of the puzzle. The use of paths also enables us to apply a custom stroke to each piece to highlight its edges.

The User Interface

Listing 42.1 contains the XAML for the main page.

FIGURE 42.2 The 30 vector-based shapes, each shown with its parent canvas represented as a yellow square outline.

LISTING 42.1 MainPage.xaml—The User Interface for Jigsaw Puzzle’s Main Page

[code]

<phone:PhoneApplicationPage x:Class="WindowsPhoneApp.MainPage"
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
  xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
  xmlns:shell="clr-namespace:Microsoft.Phone.Shell;assembly=Microsoft.Phone"
  xmlns:toolkit="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit"
  SupportedOrientations="Portrait">

  <!-- Listen for drag events anywhere on the page -->
  <toolkit:GestureService.GestureListener>
    <toolkit:GestureListener DragStarted="GestureListener_DragStarted"
                             DragDelta="GestureListener_DragDelta"
                             DragCompleted="GestureListener_DragCompleted"/>
  </toolkit:GestureService.GestureListener>

  <!-- The application bar, with 4 buttons and 5 menu items -->
  <phone:PhoneApplicationPage.ApplicationBar>
    <shell:ApplicationBar Opacity=".5" ForegroundColor="White"
                          BackgroundColor="#443225">
      <shell:ApplicationBarIconButton Text="picture"
        IconUri="/Shared/Images/appbar.picture.png" Click="PictureButton_Click"/>
      <shell:ApplicationBarIconButton Text="start over"
        IconUri="/Shared/Images/appbar.delete.png" Click="StartOverButton_Click"/>
      <shell:ApplicationBarIconButton Text="solve" IsEnabled="False"
        IconUri="/Images/appbar.solve.png" Click="SolveButton_Click"/>
      <shell:ApplicationBarIconButton Text="instructions"
        IconUri="/Shared/Images/appbar.instructions.png"
        Click="InstructionsButton_Click"/>
      <shell:ApplicationBar.MenuItems>
        <shell:ApplicationBarMenuItem Text="cat and fish"
                                      Click="ApplicationBarMenuItem_Click"/>
        <shell:ApplicationBarMenuItem Text="city"
                                      Click="ApplicationBarMenuItem_Click"/>
        <shell:ApplicationBarMenuItem Text="statue of liberty"
                                      Click="ApplicationBarMenuItem_Click"/>
        <shell:ApplicationBarMenuItem Text="traffic"
                                      Click="ApplicationBarMenuItem_Click"/>
        <shell:ApplicationBarMenuItem Text="under water"
                                      Click="ApplicationBarMenuItem_Click"/>
      </shell:ApplicationBar.MenuItems>
    </shell:ApplicationBar>
  </phone:PhoneApplicationPage.ApplicationBar>

  <!-- Prevent off-screen pieces from appearing during a page transition -->
  <phone:PhoneApplicationPage.Clip>
    <RectangleGeometry Rect="0,0,480,800"/>
  </phone:PhoneApplicationPage.Clip>

  <Canvas Background="#655">
    <!-- The tray at the bottom -->
    <Rectangle x:Name="Tray" Fill="#443225" Width="480" Height="224"
               Canvas.Top="576"/>
    <!-- All 30 pieces placed where they belong -->
    <Canvas x:Name="PiecesCanvas">
      <!-- Row 1 -->
      <Canvas Width="96" Height="96">
        <Path Data="F1M312.63,0L385.7,0C385.48,…" Height="129" Stretch="Fill"
              Width="96" Stroke="#2000">
          <Path.Fill>
            <ImageBrush Stretch="None" AlignmentX="Left" AlignmentY="Top"/>
          </Path.Fill>
        </Path>
        <Canvas.RenderTransform><CompositeTransform/></Canvas.RenderTransform>
      </Canvas>
      <Canvas Canvas.Left="96" Width="96" Height="96">
        <Path Data="F1M25.12,909.28C49.47,909.27,…" Height="96"
              Canvas.Left="-33" Stretch="Fill" Width="162" Stroke="#2000">
          <Path.Fill>
            <ImageBrush Stretch="None" AlignmentX="Left" AlignmentY="Top">
              <ImageBrush.Transform>
                <TranslateTransform X="-63"/>
              </ImageBrush.Transform>
            </ImageBrush>
          </Path.Fill>
        </Path>
        <Canvas.RenderTransform><CompositeTransform/></Canvas.RenderTransform>
      </Canvas>
      … 27 pieces omitted …
      <Canvas Canvas.Left="384" Canvas.Top="480" Width="96" Height="96">
        <Path Data="F1M777.45,0L800.25,0C802.63,…" Canvas.Left="-33"
              Height="96" Stretch="Fill" Width="129" Stroke="#2000">
          <Path.Fill>
            <ImageBrush Stretch="None" AlignmentX="Left" AlignmentY="Top">
              <ImageBrush.Transform>
                <TranslateTransform X="-351" Y="-480"/>
              </ImageBrush.Transform>
            </ImageBrush>
          </Path.Fill>
        </Path>
        <Canvas.RenderTransform><CompositeTransform/></Canvas.RenderTransform>
      </Canvas>
    </Canvas>
    <!-- The image without visible piece boundaries, shown when solved -->
    <Image x:Name="CompleteImage" Visibility="Collapsed" IsHitTestVisible="False"
           Stretch="None"/>
  </Canvas>
</phone:PhoneApplicationPage>

[/code]

  • A gesture listener is attached to the entire page to listen for the three drag events: DragStarted, DragDelta, and DragCompleted.

My favorite way to create vector artwork based on an illustration is with Vector Magic (http://vectormagic.com). It’s not free, but it does a fantastic job of converting image files to a variety of vector formats. If you download the result as a PDF file and then rename the file extension to .ai, you can import it into Expression Blend, which converts it to XAML.

  • Although each piece is placed in its final “solved” position, the code-behind adjusts each position by modifying TranslateX and TranslateY properties on the CompositeTransform assigned to each piece. This gives us the nice property that no matter where a piece is moved, it can be returned to its solved position by setting both of these properties to zero.
  • The magic behind making each puzzle piece contain a portion of a photo is enabled by the image brush that fills each path. (The actual image is set in code-behind.) To make each piece contain the correct portion of the photo, each image brush (except the one used on the piece in the top left corner) is given a TranslateTransform. This shifts its rendering by the distance that the piece is from the top-left corner. (To make this work, each image brush is marked with top-left alignment, rather than its default center alignment.)

Fortunately for apps such as Jigsaw Puzzle, using many image brushes that point to the same image is efficient. Silverlight shares the underlying image rather than creating a separate copy for each brush.
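The code-behind in Listing 42.2 calls a RefreshPuzzleImage method whose body isn't shown in this excerpt. Here is a minimal sketch of what such a method might look like, assuming the chosen photo has already been scaled to the 480×576 puzzle area; the file name is hypothetical:

[code]

void RefreshPuzzleImage()
{
    // One bitmap, shared by all 30 image brushes (and by the "solved" image);
    // Silverlight shares the underlying image rather than copying it per brush
    BitmapImage image = new BitmapImage(
        new Uri("Images/catAndFish.jpg", UriKind.Relative)); // hypothetical path

    foreach (Canvas pieceCanvas in this.PiecesCanvas.Children)
    {
        Path path = (Path)pieceCanvas.Children[0];
        ImageBrush brush = (ImageBrush)path.Fill;
        brush.ImageSource = image; // each brush's TranslateTransform picks its window
    }
    this.CompleteImage.Source = image;
}

[/code]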

  • The scrolling tray at the bottom isn’t an actual scroll viewer; it’s just a simple rectangle. The code-behind manually scrolls puzzle pieces when they sufficiently overlap this rectangle. This is done for two reasons: It’s convenient to keep the pieces on the same canvas at all times, and the gesture listener currently interferes with Silverlight elements such as scroll viewers.
  • The CompleteImage element at the bottom of the listing is used to show the complete image once the puzzle is solved, without the puzzle piece borders and tiny gaps between pieces obscuring it. Because it is aligned with the puzzle, showing this image simply makes it seem like the puzzle edges have faded away. Figure 42.3 shows what this looks like for the cat-and-fish puzzle shown at the beginning of this chapter. Because CompleteImage is not hit-testable, the user can still drag a piece while it is showing. As soon as any piece moves out of its correct position, the code-behind hides the image once again (a sketch of such a check appears after the figure caption below).
FIGURE 42.3 Once the puzzle is solved, the puzzle piece borders are no longer visible.
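AreAllPiecesCorrect is also used in Listing 42.2 without its body appearing in this excerpt. Given that each piece's drag offset returns to (0,0) in its solved position, a minimal sketch might look like this:

[code]

bool AreAllPiecesCorrect()
{
    foreach (FrameworkElement piece in this.PiecesCanvas.Children)
    {
        CompositeTransform t = (CompositeTransform)piece.RenderTransform;
        // Snapping stores exact values, so an exact zero test is sufficient
        if (t.TranslateX != 0 || t.TranslateY != 0)
            return false;
    }
    return true;
}

[/code]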

The Code-Behind

Listing 42.2 contains the code-behind for the main page.

LISTING 42.2 MainPage.xaml.cs—The Code-Behind for Jigsaw Puzzle’s Main Page

[code]

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using System.Windows.Threading;
using Microsoft.Phone.Controls;
using Microsoft.Phone.Shell;
namespace WindowsPhoneApp
{
public partial class MainPage : PhoneApplicationPage
{
bool isDraggingTray;
bool isDraggingPiece;
double cumulativeDeltaX;
double cumulativeDeltaY;
int topmostZIndex;
List<FrameworkElement> piecesOnTray = new List<FrameworkElement>();
Random random = new Random();
IApplicationBarIconButton solveButton;
public MainPage()
{
InitializeComponent();
this.solveButton = this.ApplicationBar.Buttons[2]
as IApplicationBarIconButton;
}
protected override void OnNavigatedFrom(NavigationEventArgs e)
{
base.OnNavigatedFrom(e);
// Persist the offset currently being applied to each piece, so
// they can appear in the same locations next time
Settings.PieceOffsets.Value.Clear();
foreach (FrameworkElement piece in this.PiecesCanvas.Children)
{
Settings.PieceOffsets.Value.Add(new Point(
(piece.RenderTransform as CompositeTransform).TranslateX,
(piece.RenderTransform as CompositeTransform).TranslateY));
}
}
protected override void OnNavigatedTo(NavigationEventArgs e)
{
base.OnNavigatedTo(e);
RefreshPuzzleImage();
bool arePiecesCorrect = false;
if (Settings.PieceOffsets.Value.Count == this.PiecesCanvas.Children.Count)
{
// Restore the persisted position of each piece
for (int i = 0; i < this.PiecesCanvas.Children.Count; i++)
{
UIElement piece = this.PiecesCanvas.Children[i];
CompositeTransform t = piece.RenderTransform as CompositeTransform;
t.TranslateX = Settings.PieceOffsets.Value[i].X;
t.TranslateY = Settings.PieceOffsets.Value[i].Y;
}
arePiecesCorrect = AreAllPiecesCorrect();
}
else
{
// This is the first run. After a 1-second delay, animate the pieces
// from their solved positions to random positions on the tray.
DispatcherTimer timer = new DispatcherTimer {
Interval = TimeSpan.FromSeconds(1) };
timer.Tick += delegate(object sender, EventArgs args)
{
StartOver();
timer.Stop();
};
timer.Start();
}
if (arePiecesCorrect)
ShowAsSolved();
else
ShowAsUnsolved();
}
// The three drag event handlers
void GestureListener_DragStarted(object sender, DragStartedGestureEventArgs e)
{
// Determine if we’re dragging the tray, a piece, or neither
FrameworkElement source = e.OriginalSource as FrameworkElement;
if (source == this.Tray)
{
// An empty spot on the tray is being dragged
if (e.Direction == System.Windows.Controls.Orientation.Horizontal)
this.isDraggingTray = true;
return;
}
FrameworkElement piece = GetPieceFromDraggedSource(source);
if (piece == null)
return;
if (e.Direction == System.Windows.Controls.Orientation.Horizontal &&
GetPieceTop(piece) > Constants.ON_TRAY_Y)
{
// Although a piece is being dragged, the piece is on the tray and the
// drag is horizontal, so consider this to be a tray drag instead
this.isDraggingTray = true;
}
else
{
this.isDraggingPiece = true;
// A piece is being dragged, so record its pre-drag position
CompositeTransform t = piece.RenderTransform as CompositeTransform;
this.cumulativeDeltaX = t.TranslateX;
this.cumulativeDeltaY = t.TranslateY;
}
}
void GestureListener_DragDelta(object sender, DragDeltaGestureEventArgs e)
{
if (this.isDraggingTray)
{
// Scroll the tray
ScrollTray(e.HorizontalChange);
}
else if (this.isDraggingPiece)
{
FrameworkElement piece = GetPieceFromDraggedSource(
e.OriginalSource as FrameworkElement);
if (piece == null)
return;
CompositeTransform t = piece.RenderTransform as CompositeTransform;
// Apply the position change caused by dragging.
// We’re keeping track of the total change from DragStarted so the piece
// remains in the right spot after repeated snapping and unsnapping.
this.cumulativeDeltaX += e.HorizontalChange;
this.cumulativeDeltaY += e.VerticalChange;
t.TranslateX = this.cumulativeDeltaX;
t.TranslateY = this.cumulativeDeltaY;
// Ensure that this piece is on top of all others
this.topmostZIndex++;
Canvas.SetZIndex(piece, this.topmostZIndex);
// Ensure that the puzzle is no longer in the solved state
ShowAsUnsolved();
// If the piece is not on the tray, snap it to a solved horizontal
// and/or vertical boundary if it’s close enough
double left = GetPieceLeft(piece);
double top = GetPieceTop(piece);
if (top > Constants.ON_TRAY_Y)
return; // The piece is on the tray, so never mind
// Snapping to a horizontal boundary
if (left % Constants.PIECE_WIDTH < Constants.SNAPPING_MARGIN)
t.TranslateX -= left % Constants.PIECE_WIDTH;
else if (left % Constants.PIECE_WIDTH >
Constants.PIECE_WIDTH - Constants.SNAPPING_MARGIN)
t.TranslateX += Constants.PIECE_WIDTH - left % Constants.PIECE_WIDTH;
// Snapping to a vertical boundary
if (top % Constants.PIECE_HEIGHT < Constants.SNAPPING_MARGIN)
t.TranslateY -= top % Constants.PIECE_HEIGHT;
else if (top % Constants.PIECE_HEIGHT >
Constants.PIECE_HEIGHT - Constants.SNAPPING_MARGIN)
t.TranslateY += Constants.PIECE_HEIGHT - top % Constants.PIECE_HEIGHT;
}
}
void GestureListener_DragCompleted(object sender,
DragCompletedGestureEventArgs e)
{
// Give the tray an extra push (simulating inertia) based on
// the final dragging horizontal velocity
if (this.isDraggingTray && e.HorizontalVelocity != 0)
ScrollTray(e.HorizontalVelocity / 10);
this.isDraggingTray = this.isDraggingPiece = false;
if (AreAllPiecesCorrect())
ShowAsSolved();
}
FrameworkElement GetPieceFromDraggedSource(FrameworkElement source)
{
// When a piece is dragged, the source is the path,
// but we want to return its parent canvas
if (source == null || source.Parent == null ||
(source.Parent as FrameworkElement).Parent == null ||
(source.Parent as FrameworkElement).Parent != this.PiecesCanvas)
return null;
else
return source.Parent as FrameworkElement;
}
double GetPieceTop(FrameworkElement piece)
{
return Canvas.GetTop(piece) +
(piece.RenderTransform as CompositeTransform).TranslateY;
}
double GetPieceLeft(FrameworkElement piece)
{
return Canvas.GetLeft(piece) +
(piece.RenderTransform as CompositeTransform).TranslateX;
}
void ScrollTray(double amount)
{
// Retrieve the minimum and maximum horizontal positions among all
// pieces in the tray, to provide bounds on how far it can scroll
double minX = double.MaxValue;
double maxX = double.MinValue;
this.piecesOnTray.Clear();
foreach (FrameworkElement piece in this.PiecesCanvas.Children)
{
if (GetPieceTop(piece) > Constants.ON_TRAY_Y)
{
this.piecesOnTray.Add(piece);
double left = GetPieceLeft(piece);
if (left < minX) minX = left;
if (left > maxX) maxX = left;
}
}
if (this.piecesOnTray.Count == 0)
return;
// Change the amount if it would make the tray scroll too far
if (amount < 0 && (maxX + amount < this.ActualWidth -
Constants.MAX_PIECE_WIDTH || minX < Constants.NEGATIVE_SCROLL_BOUNDARY))
amount = Math.Max(-maxX + this.ActualWidth - Constants.MAX_PIECE_WIDTH,
Constants.NEGATIVE_SCROLL_BOUNDARY - minX);
if (amount > 0 && minX + amount > Constants.TRAY_LEFT_MARGIN)
amount = Constants.TRAY_LEFT_MARGIN - minX;
// “Scroll” the tray by moving each piece on the tray the same amount
foreach (FrameworkElement piece in this.piecesOnTray)
(piece.RenderTransform as CompositeTransform).TranslateX += amount;
}
// Move each piece to the tray in a random order
void StartOver()
{
// Copy the children to an array so their order
// in the collection is preserved
UIElement[] pieces = this.PiecesCanvas.Children.ToArray();
// Shuffle the children in place
for (int i = pieces.Length - 1; i > 0; i--)
{
int r = this.random.Next(0, i);
// Swap the current child with the randomly-chosen one
UIElement temp = pieces[i]; pieces[i] = pieces[r]; pieces[r] = temp;
}
// Now move the pieces to the bottom in their random order
for (int i = 0; i < pieces.Length; i++)
{
UIElement piece = pieces[i];
// Alternate the pieces between two rows
CreatePieceMovingStoryboard(piece, TimeSpan.Zero, TimeSpan.FromSeconds(1),
(i % 2 * Constants.TRAY_2ND_ROW_HORIZONTAL_OFFSET) +
(i / 2) * Constants.TRAY_HORIZONTAL_SPACING - Canvas.GetLeft(piece),
(i % 2 * Constants.TRAY_VERTICAL_SPACING) + Constants.TRAY_TOP_MARGIN
- Canvas.GetTop(piece)).Begin();
// Reset the z-index of each piece
Canvas.SetZIndex(piece, 0);
}
this.topmostZIndex = 0;
ShowAsUnsolved();
}
// Create a storyboard that animates the piece to the specified position
Storyboard CreatePieceMovingStoryboard(UIElement piece, TimeSpan beginTime,
TimeSpan duration, double finalX, double finalY)
{
DoubleAnimation xAnimation = new DoubleAnimation { To = finalX,
Duration = duration, EasingFunction = new QuinticEase() };
DoubleAnimation yAnimation = new DoubleAnimation { To = finalY,
Duration = duration, EasingFunction = new QuinticEase() };
Storyboard.SetTargetProperty(xAnimation, new PropertyPath("TranslateX"));
Storyboard.SetTargetProperty(yAnimation, new PropertyPath("TranslateY"));
Storyboard storyboard = new Storyboard { BeginTime = beginTime };
Storyboard.SetTarget(storyboard, piece.RenderTransform);
storyboard.Children.Add(xAnimation);
storyboard.Children.Add(yAnimation);
return storyboard;
}
bool AreAllPiecesCorrect()
{
for (int i = 0; i < this.PiecesCanvas.Children.Count; i++)
{
UIElement piece = this.PiecesCanvas.Children[i];
CompositeTransform t = piece.RenderTransform as CompositeTransform;
if (t.TranslateX != 0 || t.TranslateY != 0)
return false; // This piece is in the wrong place
}
// All pieces are in the right place
return true;
}
void ShowAsSolved()
{
this.solveButton.IsEnabled = false;
int piecesToMove = 0;
Storyboard storyboard = null;
// For any piece that’s out of place, animate it to the solved position
for (int i = 0; i < this.PiecesCanvas.Children.Count; i++)
{
UIElement piece = this.PiecesCanvas.Children[i];
CompositeTransform t = piece.RenderTransform as CompositeTransform;
if (t.TranslateX == 0 && t.TranslateY == 0)
continue; // This piece is already in the right place
// Animate it to a (0,0) offset, which is its natural position
storyboard = CreatePieceMovingStoryboard(piece,
TimeSpan.FromSeconds(.3 * piecesToMove), // Spread out the animations
TimeSpan.FromSeconds(1), 0, 0);
storyboard.Begin();
// Ensure each piece moves on top of pieces already in the right place
this.topmostZIndex++;
Canvas.SetZIndex(piece, this.topmostZIndex);
piecesToMove++;
}
if (storyboard == null)
{
// Everything is in the right place
this.CompleteImage.Visibility = Visibility.Visible;
}
else
{
// Delay the showing of CompleteImage until the last storyboard
// has completed
storyboard.Completed += delegate(object sender, EventArgs e)
{
// Ensure that the user didn’t unsolve the puzzle during the animation
if (!this.solveButton.IsEnabled)
this.CompleteImage.Visibility = Visibility.Visible;
};
}
}
void ShowAsUnsolved()
{
this.solveButton.IsEnabled = true;
this.CompleteImage.Visibility = Visibility.Collapsed;
}
void RefreshPuzzleImage()
{
ImageSource imageSource = null;
// Choose the right image based on the setting
switch (Settings.PhotoIndex.Value)
{
// The first case is for a custom photo saved
// from CroppedPictureChooserPage
case -1:
try { imageSource = IsolatedStorageHelper.LoadFile("custom.jpg"); }
catch { imageSource = new BitmapImage(new Uri("Images/catAndFish.jpg",
UriKind.Relative)); }
break;
// The remaining cases match the indices in the application bar menu
case 0:
imageSource = new BitmapImage(new Uri("Images/catAndFish.jpg",
UriKind.Relative));
break;
case 1:
imageSource = new BitmapImage(new Uri("Images/city.jpg",
UriKind.Relative));
break;
case 2:
imageSource = new BitmapImage(new Uri("Images/statueOfLiberty.jpg",
UriKind.Relative));
break;
case 3:
imageSource = new BitmapImage(new Uri("Images/traffic.jpg",
UriKind.Relative));
break;
case 4:
imageSource = new BitmapImage(new Uri("Images/underWater.jpg",
UriKind.Relative));
break;
break;
}
if (imageSource != null)
{
this.CompleteImage.Source = imageSource;
// Each of the 30 pieces needs to be filled with the right image
foreach (Canvas piece in this.PiecesCanvas.Children)
((piece.Children[0] as Shape).Fill as ImageBrush).ImageSource =
imageSource;
}
}
// Application bar handlers
void PictureButton_Click(object sender, EventArgs e)
{
this.NavigationService.Navigate(new Uri("/CroppedPictureChooserPage.xaml",
UriKind.Relative));
}
void StartOverButton_Click(object sender, EventArgs e)
{
if (MessageBox.Show("Are you sure you want to dismantle the puzzle and " +
"start from scratch?", "Start over", MessageBoxButton.OKCancel)
== MessageBoxResult.OK)
StartOver();
}
void SolveButton_Click(object sender, EventArgs e)
{
if (MessageBox.Show("Do you give up? Are you sure you want the puzzle to "
+ "be solved for you?", "Solve", MessageBoxButton.OKCancel)
!= MessageBoxResult.OK)
return;
ShowAsSolved();
}
void InstructionsButton_Click(object sender, EventArgs e)
{
this.NavigationService.Navigate(new Uri("/InstructionsPage.xaml",
UriKind.Relative));
}
void ApplicationBarMenuItem_Click(object sender, EventArgs e)
{
for (int i = 0; i < this.ApplicationBar.MenuItems.Count; i++)
{
// Set the persisted photo index to match the menu item index
if (sender == this.ApplicationBar.MenuItems[i])
Settings.PhotoIndex.Value = i;
}
RefreshPuzzleImage();
}
}
}

[/code]

  • This app uses two settings defined in a separate Settings.cs file for remembering the user’s chosen photo and for remembering the position of every puzzle piece (a sketch of the Setting<T> wrapper follows the snippet):

    [code]
    public static class Settings
    {
    public static Setting<int> PhotoIndex = new Setting<int>("PhotoIndex", 2);
    public static Setting<List<Point>> PieceOffsets =
    new Setting<List<Point>>("PieceOffsets", new List<Point>());
    }
    [/code]
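
    The Setting class itself comes from this book’s shared source code, so it isn’t repeated here. As a rough sketch, under the assumption that it wraps IsolatedStorageSettings, it can be thought of along these lines:

    [code]
    // A minimal sketch of a Setting<T> wrapper; the real class ships with
    // the book's shared code and may differ. It lazily loads its value
    // from IsolatedStorageSettings and writes it back on assignment.
    public class Setting<T>
    {
        string name;
        T value;
        T defaultValue;
        bool hasValue;

        public Setting(string name, T defaultValue)
        {
            this.name = name;
            this.defaultValue = defaultValue;
        }

        public T Value
        {
            get
            {
                // On first access, fall back to the default if nothing
                // has been persisted yet
                if (!this.hasValue &&
                    !System.IO.IsolatedStorage.IsolatedStorageSettings
                        .ApplicationSettings.TryGetValue(this.name, out this.value))
                    this.value = this.defaultValue;
                this.hasValue = true;
                return this.value;
            }
            set
            {
                this.value = value;
                this.hasValue = true;
                System.IO.IsolatedStorage.IsolatedStorageSettings
                    .ApplicationSettings[this.name] = value;
            }
        }
    }
    [/code]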

  • This app also uses many constants, defined as follows in a Constants.cs file:

    [code]
    public static class Constants
    {
    public const int PUZZLE_WIDTH = 480;
    public const int PUZZLE_HEIGHT = 576;
    public const int PIECE_WIDTH = 96;
    public const int PIECE_HEIGHT = 96;
    public const int MAX_PIECE_WIDTH = 162;
    public const int SNAPPING_MARGIN = 15;
    public const int NEGATIVE_SCROLL_BOUNDARY = -1550;
    public const int TRAY_HORIZONTAL_SPACING = 110;
    public const int TRAY_VERTICAL_SPACING = 80;
    public const int TRAY_LEFT_MARGIN = 24;
    public const int TRAY_TOP_MARGIN = 590;
    public const int TRAY_2ND_ROW_HORIZONTAL_OFFSET = 50;
    public const int ON_TRAY_Y = 528;
    }
    [/code]

  • The first time Jigsaw Puzzle is run, the pieces animate from their solved positions to a random ordering on the tray. (This condition is detected in OnNavigatedTo because the PieceOffsets list does not initially contain the same number of elements as pieces in PiecesCanvas.) This ordering of pieces on the tray is shown in Figure 42.4. Every other time, the pieces are placed exactly where they were previously left by reapplying their persisted TranslateX and TranslateY values.
FIGURE 42.4 The puzzle pieces are arranged on the tray once they animate away from their solved positions.
  • The three drag event handlers act differently depending on whether a puzzle piece is being dragged or the tray is being dragged. When the tray is dragged horizontally, we want it to scroll and reveal off-screen pieces. When a piece is dragged, it should move wherever the user’s finger takes it.
  • The DragStarted event handler (GestureListener_DragStarted) determines which type of dragging is occurring and sets either isDraggingTray or isDraggingPiece. (This handler can be called in cases where neither is true, such as dragging on an empty upper part of the screen, because these handlers are attached to the whole page.)

    DragStarted isn’t raised as soon as a finger touches the screen and starts moving; the gesture listener waits for the finger to move more than 12 pixels away to ensure that the gesture is a drag rather than a tap, and to determine the primary direction of the drag. DragStartedGestureEventArgs exposes this primary direction as a Direction property that is either Horizontal or Vertical.

    GestureListener_DragStarted leverages the Direction property to determine which kind of drag is happening. If the element reporting the event is the tray and the direction is horizontal, it considers the gesture to be a tray drag. If a piece is being dragged horizontally and the vertical position of the piece visually makes it look like it’s on the tray, it also considers the gesture to be a tray drag. This avoids the requirement that the tray be dragged only on an empty spot. If a piece is being dragged vertically, or if it’s dragged in any direction far enough from the tray, it’s considered a piece drag.

    Although this scheme is easy to implement, users might find the requirement to drag pieces off the tray in a mostly vertical fashion to be confusing and/or inconvenient. A more flexible approach would be to perform your own math and use a wider angle range.
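
    For example, a handler could classify the gesture from the cumulative drag deltas rather than from the initial Direction property. The following is a sketch of that idea, not code from the app, and the 30-degree threshold is an arbitrary choice:

    [code]
    // A sketch of classifying a drag by its actual angle: any drag within
    // 30 degrees of horizontal counts as a tray drag here. The threshold
    // is arbitrary and could be tuned.
    bool IsMostlyHorizontal(double cumulativeDeltaX, double cumulativeDeltaY)
    {
        // Angle of the drag vector from the horizontal axis, in degrees
        double angle = Math.Atan2(Math.Abs(cumulativeDeltaY),
            Math.Abs(cumulativeDeltaX)) * 180 / Math.PI;
        return angle < 30;
    }
    [/code]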

The Direction property passed to drag events never changes until a new drag is initiated!

Although the Direction property exposed to DragStarted handlers is also exposed to DragDelta and DragCompleted handlers, its value never changes until the drag has completed and a new drag has started. This is true even if the actual direction of the finger motion changes to be completely vertical instead of horizontal, or vice versa. This makes it easy to implement panning or other motion that is locked to one axis, although it also means that detecting more flexible motion requires you to interpret the finger motion manually.

In Jigsaw Puzzle, this fact can cause frustration if a user tries to drag a piece from the tray directly to its final position, yet the straight-line path is more horizontal than it is vertical. To help combat this, the instructions page explains that pieces must be dragged upward to leave the tray.

  • The DragDelta event exposes two more properties than DragStarted: HorizontalChange and VerticalChange. For tray dragging, the HorizontalChange value is passed to a ScrollTray helper method. This method provides the illusion of scrolling by manually updating the horizontal position of every piece whose vertical position makes it appear to be on the tray. Keeping all the pieces in the same canvas at all times (instead of moving pieces on the tray to a separate panel inside an actual scroll viewer) makes the logic throughout this page easier.

    For piece dragging, both HorizontalChange and VerticalChange are applied to the current piece’s transform, and then the piece is snapped to one or two solved-piece boundary locations if it’s close enough to a horizontal and/or vertical boundary. (For example, with a PIECE_WIDTH of 96 and a SNAPPING_MARGIN of 15, a piece whose left edge lands at 101 has a remainder of 5, so it snaps 5 pixels left onto the boundary at 96.) Ordinarily, the values of HorizontalChange and VerticalChange would be directly added to TranslateX and TranslateY, respectively, but this doesn’t work well when snapping is done. Because each snap moves the piece by as much as 14 pixels away from its natural position, continued snapping would cause the piece to drift further from the user’s finger if we continued to add the HorizontalChange and VerticalChange values. Instead, by manually tracking the total cumulative distance from the beginning of the drag, the piece is returned to its natural position after it breaks free of a snapping boundary.

The HorizontalChange and VerticalChange properties exposed by DragDelta are relative to the previous raising of DragDelta!

Unlike PinchGestureEventArgs, whose DistanceRatio and TotalAngleDelta properties are relative to the values when pinching or stretching started, the HorizontalChange and VerticalChange properties exposed by DragDeltaGestureEventArgs and DragCompletedGestureEventArgs do not accumulate as dragging proceeds.

  • The DragCompleted event exposes all the properties from DragDelta plus two more: HorizontalVelocity and VerticalVelocity. These values are the same ones exposed to the Flick event, and enable inertial flicking motion at the end of a drag. Just like in Chapter 40, “Darts,” the velocity is scaled down and then used to continue the dragging motion a bit. This is done for tray dragging only, to make it better mimic a real scroll viewer. Therefore, only the horizontal component of the velocity is used. At the end of every drag action, the location of each piece is checked to see whether the puzzle has been solved. We know that the pieces are all in the correct spots if their transforms all have TranslateX and TranslateY values of zero.
  • The “picture” and “instructions” button click handlers navigate to other pages, and the “start over” and “solve” button click handlers trigger animations that either move the pieces to random spots on the tray or to their solved positions. The solve animation is performed by the ShowAsSolved method, which animates each out-of-place piece to its correct position over the course of one second, spaced .3 seconds apart. The resulting effect smoothly fills in the pieces in row-major order, as pictured in Figure 42.5.
FIGURE 42.5 The automatic solving animation makes the pieces float into place according to their order in the canvas.

The Cropped Picture Chooser Page

Because the photo chooser can sometimes be slow to launch, and because decoding the chosen picture can be slow, this page shows a “loading” message during these actions. Figure 42.6 demonstrates the user flow through this page.

FIGURE 42.6 The sequence of events when navigating to the cropped photo chooser page.

The User Interface

Listing 42.3 contains the XAML for this page.

LISTING 42.3 CroppedPictureChooserPage.xaml—The User Interface for Jigsaw Puzzle’s Cropped Picture Chooser Page

[code]

<phone:PhoneApplicationPage x:Class="WindowsPhoneApp.CroppedPictureChooserPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
xmlns:shell="clr-namespace:Microsoft.Phone.Shell;assembly=Microsoft.Phone"
xmlns:toolkit="clr-namespace:Microsoft.Phone.Controls;
➥assembly=Microsoft.Phone.Controls.Toolkit"
FontFamily="{StaticResource PhoneFontFamilyNormal}"
FontSize="{StaticResource PhoneFontSizeNormal}"
Foreground="{StaticResource PhoneForegroundBrush}"
SupportedOrientations="Portrait">
<!-- The 2-button application bar, shown on return from the PhotoChooserTask -->
<phone:PhoneApplicationPage.ApplicationBar>
<shell:ApplicationBar Opacity="0" IsVisible="False" ForegroundColor="White">
<shell:ApplicationBarIconButton Text="done"
IconUri="/Shared/Images/appbar.done.png" Click="DoneButton_Click"/>
<shell:ApplicationBarIconButton Text="cancel"
IconUri="/Shared/Images/appbar.cancel.png" Click="CancelButton_Click"/>
</shell:ApplicationBar>
</phone:PhoneApplicationPage.ApplicationBar>
<!-- Listen for drag events anywhere on the page -->
<toolkit:GestureService.GestureListener>
<toolkit:GestureListener DragDelta="GestureListener_DragDelta"
PinchStarted="GestureListener_PinchStarted"
PinchDelta="GestureListener_PinchDelta"/>
</toolkit:GestureService.GestureListener>
<!-- Prevent a zoomed-in photo from making the screen go blank -->
<phone:PhoneApplicationPage.Clip>
<RectangleGeometry Rect="0,0,480,800"/>
</phone:PhoneApplicationPage.Clip>
<Canvas Background="#443225">
<!-- Shown on return from the PhotoChooserTask -->
<Canvas x:Name="CropPanel" Visibility="Collapsed">
<!-- Designate the puzzle boundary -->
<Rectangle Fill="White" Canvas.Top="112" Width="480" Height="576"/>
<!-- The canvas provides screen-centered zooming -->
<Canvas Canvas.Top="112" Width="480" Height="576"
RenderTransformOrigin=".5,.5">
<Canvas.RenderTransform>
<!-- For zooming -->
<CompositeTransform x:Name="CanvasTransform"/>
</Canvas.RenderTransform>
<Image x:Name="Image" Stretch="None" CacheMode="BitmapCache">
<Image.RenderTransform>
<!-- For panning -->
<CompositeTransform x:Name="ImageTransform"/>
</Image.RenderTransform>
</Image>
</Canvas>
<!-- Top and bottom borders that let the image show through slightly -->
<Rectangle Opacity=".8" Fill="#443225" Width="480" Height="112"/>
<Rectangle Canvas.Top="688" Opacity=".8" Fill="#443225" Width="480"
Height="112"/>
<!-- The title and instructions -->
<StackPanel Style="{StaticResource PhoneTitlePanelStyle}">
<TextBlock Text="CROP PICTURE" Foreground="White"
Style="{StaticResource PhoneTextTitle0Style}"/>
<TextBlock Foreground="White" TextWrapping="Wrap" Width="432">
Pinch &amp; stretch your fingers to zoom.<LineBreak/>
Drag to move the picture.
</TextBlock>
</StackPanel>
</Canvas>
<!-- Shown while launching the PhotoChooserTask -->
<Canvas x:Name="LoadingPanel">
<StackPanel Style="{StaticResource PhoneTitlePanelStyle}">
<TextBlock x:Name="LoadingTextBlock" Text="LOADING…" Foreground="White"
Style="{StaticResource PhoneTextTitle0Style}" Width="432"
TextWrapping="Wrap"/>
</StackPanel>
</Canvas>
</Canvas>
</phone:PhoneApplicationPage>

[/code]

  • Drag gestures (and pinch/stretch gestures) are detected with a page-level gesture listener, just like on the main page.
  • The white rectangle not only reveals the puzzle’s dimensions if the picture doesn’t completely cover the area, but it also ends up giving the puzzle a white background when this happens. This is demonstrated in Figure 42.7.
  • Two different transforms are used to make the user’s gestures feel natural. Whereas drag gestures adjust ImageTransform much like the dragging of puzzle pieces on the main page, pinch and stretch gestures are applied to CanvasTransform. Because the canvas represents the puzzle area and it’s marked with a centered render transform origin of (.5,.5), the user’s gesture always zooms the image centered around the middle of the puzzle. If this were applied to the image instead, zooming would occur around the image’s middle, which might be far off-screen as the image gets zoomed and panned.
  • Two subtle things on this page prevent its performance from being disastrous. The bitmap caching on the image makes the panning and zooming much smoother than it would be otherwise, as does the clip applied to the page. (The page’s clip also prevents the entire screen from going blank if the photo is zoomed in to an extreme amount, caused by Silverlight failing to render a surface that is too big.)
FIGURE 42.7 A white rectangle serves as the puzzle’s background if the picture doesn’t completely cover it.

If an image has transparent regions, those regions are not hit-testable when used to fill a shape!

This app ensures that the picture used to fill the puzzle pieces never contains any transparency; every photo source it uses is a JPEG, a format that cannot store transparency. Transparent regions would not respond to any touch events, making it difficult or impossible to drag the affected pieces. Note that this is different behavior compared to giving an otherwise-rectangular element transparent regions with an opacity mask. With an opacity mask, the transparent regions are still hit-testable (just like an element marked with an opacity of 0).

The Code-Behind

Listing 42.4 contains the code-behind for this page.

LISTING 42.4 CroppedPictureChooserPage.xaml.cs—The Code-Behind for Jigsaw Puzzle’s Cropped Picture Chooser Page

[code]

using System;
using System.IO;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Microsoft.Phone;
using Microsoft.Phone.Controls;
using Microsoft.Phone.Tasks;
namespace WindowsPhoneApp
{
public partial class CroppedPictureChooserPage : PhoneApplicationPage
{
bool loaded;
double scaleWhenPinchStarted;
public CroppedPictureChooserPage()
{
InitializeComponent();
this.Loaded += CroppedPictureChooserPage_Loaded;
}
void CroppedPictureChooserPage_Loaded(object sender, RoutedEventArgs e)
{
if (this.loaded)
return;
this.loaded = true;
// When navigating to this page in the forward direction only (from the
// main page or when reactivating the app), launch the photo chooser task
Microsoft.Phone.Tasks.PhotoChooserTask task = new PhotoChooserTask();
task.ShowCamera = true;
task.Completed += delegate(object s, PhotoResult args)
{
if (args.TaskResult == TaskResult.OK)
{
WriteableBitmap imageSource = PictureDecoder.DecodeJpeg(
args.ChosenPhoto);
// Perform manual "uniform to fill" scaling by choosing the larger
// of the two scales that make the image just fit in one dimension
double scale = Math.Max(
(double)Constants.PUZZLE_WIDTH / imageSource.PixelWidth,
(double)Constants.PUZZLE_HEIGHT / imageSource.PixelHeight);
this.CanvasTransform.ScaleX = this.CanvasTransform.ScaleY = scale;
// Center the image in the puzzle
this.ImageTransform.TranslateY =
-(imageSource.PixelHeight - Constants.PUZZLE_HEIGHT) / 2;
this.ImageTransform.TranslateX =
-(imageSource.PixelWidth - Constants.PUZZLE_WIDTH) / 2;
// Show the cropping user interface
this.Image.Source = imageSource;
this.LoadingPanel.Visibility = Visibility.Collapsed;
this.CropPanel.Visibility = Visibility.Visible;
this.ApplicationBar.IsVisible = true;
}
else
{
// The user cancelled from the photo chooser, but we can’t automatically
// navigate back right here, so update “LOADING…” with instructions
this.LoadingTextBlock.Text =
"Press the Back button again to return to the puzzle.";
}
};
task.Show();
}
// Raised for single-finger dragging
void GestureListener_DragDelta(object sender, DragDeltaGestureEventArgs e)
{
// Pan the image based on the drag
this.ImageTransform.TranslateX +=
e.HorizontalChange / this.CanvasTransform.ScaleX;
this.ImageTransform.TranslateY +=
e.VerticalChange / this.CanvasTransform.ScaleY;
}
// Raised when two fingers touch the screen (likely to begin a pinch/stretch)
void GestureListener_PinchStarted(object sender,
PinchStartedGestureEventArgs e)
{
this.scaleWhenPinchStarted = this.CanvasTransform.ScaleX;
}
// Raised continually as either or both fingers move
void GestureListener_PinchDelta(object sender, PinchGestureEventArgs e)
{
// The distance ratio is always relative to when the pinch/stretch started,
// so be sure to apply it to the ORIGINAL zoom level, not the CURRENT
double scale = this.scaleWhenPinchStarted * e.DistanceRatio;
this.CanvasTransform.ScaleX = this.CanvasTransform.ScaleY = scale;
}
// Application bar handlers
void DoneButton_Click(object sender, EventArgs e)
{
// Create a new bitmap with the puzzle’s dimensions
WriteableBitmap wb = new WriteableBitmap(Constants.PUZZLE_WIDTH,
Constants.PUZZLE_HEIGHT);
// Render the page’s contents to the puzzle, but shift it upward
// so only the region intended to be for the puzzle is used
wb.Render(this, new TranslateTransform { Y = -112 });
// We must explicitly tell the bitmap to draw its new contents
wb.Invalidate();
using (MemoryStream stream = new MemoryStream())
{
// Fill the stream with a JPEG representation of this bitmap
wb.SaveJpeg(stream, Constants.PUZZLE_WIDTH, Constants.PUZZLE_HEIGHT,
0 /* orientation */, 100 /* quality */);
// Seek back to the beginning of the stream
stream.Seek(0, SeekOrigin.Begin);
// Save the file to isolated storage.
// This overwrites the file if it already exists.
IsolatedStorageHelper.SaveFile("custom.jpg", stream);
}
// Indicate that the user has chosen to use a custom image
Settings.PhotoIndex.Value = -1;
// Return to the puzzle
if (this.NavigationService.CanGoBack)
this.NavigationService.GoBack();
}
void CancelButton_Click(object sender, EventArgs e)
{
// Don’t do anything, just return to the puzzle
if (this.NavigationService.CanGoBack)
this.NavigationService.GoBack();
}
}
}

[/code]

  • Inside DragDelta, the HorizontalChange and VerticalChange values are directly added to the image transform each time, although they must be divided by any scale applied to the parent canvas so the distance the image travels remains consistent with the finger’s on-screen motion. (If the canvas is scaled by 2, a 10-pixel finger movement should translate the image by only 5 of its own pixels, because the scale doubles that motion on screen.) This direct application works well because TranslateX and TranslateY are not changed through any other means, such as the snapping logic used on the main page.

    The PinchStarted handler records the scale when two fingers touch the screen, arbitrarily choosing ScaleX because both ScaleX and ScaleY are always set to the same value. The PinchDelta handler multiplies the initial scale by the finger distance ratio and then applies it to the canvas transform. For example, if the scale was 1.5 when the pinch started and the fingers move twice as far apart (a DistanceRatio of 2), the new scale becomes 3.

  • The handler for the done button’s click event, DoneButton_Click, leverages WriteableBitmap’s killer feature—the ability to capture the contents of any element and write it to a JPEG file. A new WriteableBitmap is created with the puzzle’s dimensions, and then this (the entire page) is rendered into it with a transform that shifts it 112 pixels upward. This is necessary to avoid rendering the page’s header into the captured image. Figure 42.8 demonstrates what happens if null is passed for the second parameter of Render.

    After refreshing the bitmap with a call to Invalidate, SaveJpeg writes the contents in JPEG format to a memory stream, which can then be written to isolated storage.

FIGURE 42.8 If the page is rendered to the puzzle image from its top-left corner, the page header becomes part of the puzzle!

If you want to take a screenshot of your app for your marketplace submission on a real phone rather than the emulator, you can temporarily put code in your page that captures the screen and saves it to your pictures library as follows:

[code]

void CaptureScreen()
{
// Create a new bitmap with the page’s dimensions
WriteableBitmap wb = new WriteableBitmap((int)this.ActualWidth,
(int)this.ActualHeight);
// Render the page’s contents with no transform applied
wb.Render(this, null);
// We must explicitly tell the bitmap to draw its new contents
wb.Invalidate();
using (MemoryStream stream = new MemoryStream())
{
// Fill the stream with a JPEG representation of this bitmap
wb.SaveJpeg(stream, (int)this.ActualWidth, (int)this.ActualHeight,
0 /* orientation */, 100 /* quality */);
// Seek back to the beginning of the stream
stream.Seek(0, SeekOrigin.Begin);
// Requires referencing Microsoft.Xna.Framework.dll
// and the ID_CAP_MEDIALIB capability, and only works
// when the phone is not connected to Zune
new Microsoft.Xna.Framework.Media.MediaLibrary().SavePicture(
"screenshot.jpg", stream);
}
}

[/code]

Once the screenshot is in the pictures library, you can sync it to your desktop with Zune to retrieve the photo in its full resolution. This has a few important limitations, however:

  • It doesn’t capture parts of the user interface outside of the Silverlight visual tree—the application bar, status bar, and message boxes.
  • It doesn’t capture any popups, even ones that are attached to an element on the page.
  • It doesn’t capture any WebBrowser instances.
  • It doesn’t capture any MediaElement instances.

Also, you need to determine a way to invoke this code without impacting what you’re capturing.
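
One approach, sketched below, is to trigger the capture from a short timer so that your finger (and any temporary trigger UI) is out of the way when the shot is taken. This sketch assumes CaptureScreen is defined on your page; the 5-second delay is an arbitrary choice.

[code]

// A sketch: invoke CaptureScreen after a delay so the gesture that
// triggered it doesn't affect what gets captured.
// Requires a using directive for System.Windows.Threading.
void DelayedCaptureScreen()
{
    DispatcherTimer timer = new DispatcherTimer {
        Interval = TimeSpan.FromSeconds(5) };
    timer.Tick += delegate(object sender, EventArgs args)
    {
        timer.Stop();
        CaptureScreen();
    };
    timer.Start();
}

[/code]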

Listing 42.4 doesn’t make any attempt to preserve the page’s state in the face of deactivation and reactivation; it simply relaunches the photo chooser. There are a few strategies for preserving the state of this page. The most natural would be to persist the image to a separate temporary file in isolated storage, along with values in page state that remember the current zoom and panning values.
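
The page-state half of that strategy could look like the following sketch. This is an assumption rather than code from the app, and persisting the decoded image itself to a temporary file is omitted.

[code]

// A sketch of preserving this page's zoom and pan values in page state
// across deactivation and reactivation. The image itself would still
// need to be persisted separately.
protected override void OnNavigatedFrom(NavigationEventArgs e)
{
    base.OnNavigatedFrom(e);
    this.State["scale"] = this.CanvasTransform.ScaleX;
    this.State["panX"] = this.ImageTransform.TranslateX;
    this.State["panY"] = this.ImageTransform.TranslateY;
}

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);
    if (this.State.ContainsKey("scale"))
    {
        this.CanvasTransform.ScaleX = this.CanvasTransform.ScaleY =
            (double)this.State["scale"];
        this.ImageTransform.TranslateX = (double)this.State["panX"];
        this.ImageTransform.TranslateY = (double)this.State["panY"];
    }
}

[/code]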

What’s the difference between detecting dragging with gesture listener drag events versus using mouse down, mouse move, and mouse up events?

One major difference is that the drag events are only raised when one finger is in contact with the screen. With the mouse events (or with the multi-touch FrameReported event), you can base dragging on the primary finger and simply ignore additional touch points. This may or may not be a good thing, depending on your app. Because the preceding chapter uses the mouse events for panning, it gives the user the ability to do zooming and panning as a combined gesture, which mimics the behavior of the built-in Maps app. In Jigsaw Puzzle’s cropped photo chooser page, on the other hand, the user must lift their second finger if they wish to pan right after zooming the picture.

Another difference is that the drag events are not raised until the gesture listener is sure that a drag is occurring, i.e., a finger has made contact with the screen and has already moved a little bit. In contrast, the mouse move event is raised as soon as the finger moves at all. For Jigsaw Puzzle, the delayed behavior of the drag events is beneficial for helping to avoid accidental dragging.

A clear benefit of the drag events, if applicable to your app, is that the finger velocity at the end of the gesture is exposed to your code. However, you could still get this information when using the mouse approach if you also attach a handler to the gesture listener’s Flick event, as sketched below. The Direction property exposed to the drag events, discussed earlier, also enables interesting behavior that is tedious to replicate on your own.
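
A sketch of that hybrid approach follows; ApplyInertia is a hypothetical method standing in for whatever inertia logic your mouse-event code uses.

[code]

// A sketch of capturing the final velocity alongside mouse-based
// dragging, via the gesture listener's Flick event. ApplyInertia is a
// hypothetical method; substitute your own inertia logic.
void GestureListener_Flick(object sender, FlickGestureEventArgs e)
{
    ApplyInertia(e.HorizontalVelocity, e.VerticalVelocity);
}

[/code]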

The Finished Product

Jigsaw Puzzle (Drag Gesture & WriteableBitmap)