Have you ever seen the image split view in Kaleidoscope? It's easy to implement a similar one on iOS with a bit of Core Animation code.

# Basic

1. Stack two UIImageViews on top of each other.
2. Use a CAShapeLayer to generate a mask that hides part of the top one.
3. The mask is a simple triangle whose extents change as we move our finger.

We only need to build a simple triangle path that conveys this.
To reduce or increase the mask size, we manipulate the topLeft and bottomRight vertices of that triangle,
moving them proportionally to the view's width/height ratio like so:

``````
- (CGPathRef)pathForMaskingUpToPercentage:(CGFloat)percentage
{
    //! 1.
    const CGFloat width = CGRectGetWidth(self.bounds);
    const CGFloat height = CGRectGetHeight(self.bounds);
    const CGFloat ratio = width / height;

    //! 2.
    const CGFloat maxSide = MAX(height, width);
    const CGFloat offset = -maxSide + 2 * maxSide * percentage;

    //! 3.
    UIBezierPath *bezierPath = [UIBezierPath bezierPath];
    [bezierPath moveToPoint:CGPointMake(0, height)];
    [bezierPath addLineToPoint:CGPointMake(width + offset * ratio, height)];
    [bezierPath addLineToPoint:CGPointMake(0, -offset)];
    [bezierPath closePath];
    return bezierPath.CGPath;
}
``````
1. Calculate the view's width/height ratio.
2. Grab the larger side; an offset of `-maxSide` corresponds to masking being fully turned off.
3. Build the triangle, moving the bottomRight vertex along the bottom edge (scaled by the ratio) and the topLeft vertex along the left edge.

# Driving the UI with a Gesture Recognizer

We want to drive the masking with a simple pan gesture recognizer, which we can implement as follows:

``````
- (void)handlePanGesture:(UIPanGestureRecognizer *)gestureRecognizer
{
    //! 1.
    const CGPoint location = [gestureRecognizer locationInView:self];
    const CGFloat width = CGRectGetWidth(self.bounds);
    const CGFloat height = CGRectGetHeight(self.bounds);

    //! 2.
    const CGFloat distance = hypot(location.x, height - location.y);
    const CGFloat maxDistance = hypot(width, height);

    //! 3.
    CGFloat fraction = distance / maxDistance;
    const CGPathRef newPath = [self pathForMaskingUpToPercentage:fraction];
    self.shapeLayer.path = newPath;
}
``````
1. Grab the values we need for the calculation.
2. Use simple vector math: the distance from the bottom-left corner (the triangle's fixed vertex) to the user's finger, and the maximum possible distance (the view's diagonal).
3. Calculate how far along that normalized distance the finger is, then grab a mask path for that percentage and set it on the shape layer.

It would be nice if, when we lifted our finger close to the screen boundaries, the mask snapped into place with an animation. We can add that with Core Animation and a bit of simple math:

``````
//! 1.
const BOOL isEnding = gestureRecognizer.state == UIGestureRecognizerStateEnded;
const CGFloat snapMargin = kPathSnapMarginPercentage;
if (isEnding && fraction > 1.0 - snapMargin) {
    fraction = 1;
}

if (isEnding && fraction < snapMargin) {
    fraction = 0;
}

const CGPathRef newPath = [self pathForMaskingUpToPercentage:fraction];

//! 2.
if (isEnding) {
    CABasicAnimation *pathAnimation = [CABasicAnimation animationWithKeyPath:@"path"];
    //! 3.
    pathAnimation.fromValue = (__bridge id)self.shapeLayer.path;
    pathAnimation.toValue = (__bridge id)newPath;
    pathAnimation.duration = kPathSnappingDuration;
    pathAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
    [self.shapeLayer addAnimation:pathAnimation forKey:@"pathSnapping"];
}
//! 4.
self.shapeLayer.path = newPath;
``````
1. Verify that the user lifted their finger, and snap the fraction to 0 or 1 when it's within the margin of either extreme.
2. Add a simple path animation to our shape layer.
3. Remember to set fromValue, as we change the model layer right after this animation block; otherwise the model and presentation layers would diverge.
4. Update the model layer.

For more masking examples, check out my older article about the Pinch to Reveal effect.

Full source code is available on GitHub.
