GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS

Written by thoughtbot

Designing for iOS: Blending Modes

Let's say that we have an app that makes use of image assets for icons, custom progress bars, etc. Now we want to allow users to theme the app, and these images need to conform to the new color scheme. The obvious solution is to add pre-tinted 1x and 2x versions of every image asset for every possible color scheme, right?

No, of course not. Duplication is bad, and duplicating images is just another form of duplication. But we're clever folks, I'm sure we can find a nice way around this!

We'll start out with a simple image that we need to tint to our new color scheme. It has an alpha channel, but it's a flat color and the new color is going to be flat as well.

The smart way to do this is to redraw the image with the new color using Core Graphics. Let's consider the steps needed to make this happen:

  1. Create a new image context to draw into.
  2. Set the new tint color to be the fill.
  3. Get the image size.
  4. Draw the source image with the new color while retaining the alpha information.
  5. Get the image out of the current context and return it.

Looks pretty straightforward, but retaining the alpha channel will likely be the trickiest part of the exercise. The UIImage documentation tells us we'll want -drawInRect:blendMode:alpha: to draw the new image, but a look at the available blending modes leaves us a little… confused:

kCGBlendModeSourceAtop
R = S*Da + D*(1 - Sa)
Available in iOS 2.0 and later.
Declared in CGContext.h.

That's not actually super helpful. Let's keep reading, I guess.

The blend mode constants introduced in OS X v10.5 represent the Porter-Duff blend modes.
The symbols in the equations for these blend modes are:
R is the premultiplied result
S is the source color, and includes alpha
D is the destination color, and includes alpha
Ra, Sa, and Da are the alpha components of R, S, and D

Ah, ok. So we need to do some translation here. We want a blending mode that draws the new color as the destination (represented by D) while keeping only the alpha components of the source image (represented by Sa). Looking through the available options armed with this new knowledge, we find one that looks promising:

kCGBlendModeDestinationIn
R = D*Sa
Available in iOS 2.0 and later.
Declared in CGContext.h.

That looks like it's using the right components, but there's only one way to be completely sure. Let's get ready for some good old-fashioned trial and error. First, we'll go ahead and create a category on UIImage:

// UIImage+Tint.h

@interface UIImage (Tint)

- (UIImage *)tintedImageWithColor:(UIColor *)tintColor;

@end

This feels like a nice interface. We'll either already have the image that needs tinting, or we can chain the call with -imageNamed: to grab the image from the bundle. Now for the initial implementation:

// UIImage+Tint.m

#import "UIImage+Tint.h"

@implementation UIImage (Tint)

- (UIImage *)tintedImageWithColor:(UIColor *)tintColor
{
    // Passing 0.0f as the scale draws the image at the scale of the device's screen
    UIGraphicsBeginImageContextWithOptions(self.size, NO, 0.0f);
    [tintColor setFill];
    CGRect bounds = CGRectMake(0, 0, self.size.width, self.size.height);
    UIRectFill(bounds);
    [self drawInRect:bounds blendMode:kCGBlendModeDestinationIn alpha:1.0f];

    UIImage *tintedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return tintedImage;
}

@end

Plugging this into our app is as simple as [[UIImage imageNamed:@"ralph"] tintedImageWithColor:[UIColor thoughtbotRed]];

That was simple enough. But what if we want to retain a gradient from the image? Let's say we have a nice grayscale button that should be skinned as well:

All we have to do is pass it through our tint method, and voilà!

Oops. Looking back at the blending mode, this won't work at all. kCGBlendModeDestinationIn only takes the destination color and the source alpha into account, so we lose all of the grayscale information from the source image. Looks like we still have more work to do. Once again we turn to the documentation, and find kCGBlendModeOverlay:

kCGBlendModeOverlay
Either multiplies or screens the source image samples with the background image samples,
depending on the background color. The result is to overlay the existing image samples
while preserving the highlights and shadows of the background. The background color mixes
with the source image to reflect the lightness or darkness of the background.

Available in iOS 2.0 and later.
Declared in CGContext.h.

That sounds like it should tint our gradient nicely. But we don't want to change the current behavior. This calls for some refactoring. The new interface becomes:

// UIImage+Tint.h

@interface UIImage (Tint)

- (UIImage *)tintedGradientImageWithColor:(UIColor *)tintColor;
- (UIImage *)tintedImageWithColor:(UIColor *)tintColor;

@end

And the new implementation:

// UIImage+Tint.m

#import "UIImage+Tint.h"

@implementation UIImage (Tint)

#pragma mark - Public methods

- (UIImage *)tintedGradientImageWithColor:(UIColor *)tintColor
{
    return [self tintedImageWithColor:tintColor blendingMode:kCGBlendModeOverlay];
}

- (UIImage *)tintedImageWithColor:(UIColor *)tintColor
{
    return [self tintedImageWithColor:tintColor blendingMode:kCGBlendModeDestinationIn];
}

#pragma mark - Private methods

- (UIImage *)tintedImageWithColor:(UIColor *)tintColor blendingMode:(CGBlendMode)blendMode
{
    UIGraphicsBeginImageContextWithOptions(self.size, NO, 0.0f);
    [tintColor setFill];
    CGRect bounds = CGRectMake(0, 0, self.size.width, self.size.height);
    UIRectFill(bounds);
    [self drawInRect:bounds blendMode:blendMode alpha:1.0f];

    UIImage *tintedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return tintedImage;
}

@end

This gets us closer, but you can see that now we've lost our alpha information.

Since kCGBlendModeOverlay doesn't take the source alpha into account, we need to get that back. There's a very simple way to accomplish this: we already know that kCGBlendModeDestinationIn draws the destination image within the source's alpha, so we can conditionally redraw the image using that mode:

// UIImage+Tint.m

- (UIImage *)tintedImageWithColor:(UIColor *)tintColor blendingMode:(CGBlendMode)blendMode
{
    UIGraphicsBeginImageContextWithOptions(self.size, NO, 0.0f);
    [tintColor setFill];
    CGRect bounds = CGRectMake(0, 0, self.size.width, self.size.height);
    UIRectFill(bounds);
    [self drawInRect:bounds blendMode:blendMode alpha:1.0f];

    if (blendMode != kCGBlendModeDestinationIn) {
        [self drawInRect:bounds blendMode:kCGBlendModeDestinationIn alpha:1.0f];
    }

    UIImage *tintedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return tintedImage;
}

@end

And we finally have the desired image:

The resulting image can be cached in a static variable and reused app-wide, so we only pay the drawing cost once.

By using Core Graphics to tint our images for us, we open ourselves up to a much wider range of theming options, and we get the added benefits of a smaller bundle size and reduced asset duplication.

Be sure to explore the many other blending modes available in Apple's documentation. The sample app with the code used here is available on GitHub.