In 2011 Facebook purchased Push Pop Press, a company founded by former Apple employees Kimon Tsinteris and Mike Matas, aimed at creating a platform for building engaging digital books and publications on iOS. Push Pop Press’ technology was initially used to create Al Gore’s book Our Choice, which would become the flagship example of the platform. At the time, it was unclear whether Facebook had plans for the platform Push Pop Press had developed or whether this was purely an “acquihire”.
When Facebook announced Paper alongside Facebook Creative Labs, a previously unknown skunkworks within Facebook, it became apparent that, at the very least, the ethos behind Push Pop Press’ innovative digital creation tool has not fallen by the wayside.
While Paper is laden with noteworthy interactions, we’re going to be looking at the panoramic photo panner. In particular, this control typifies an incredibly articulate use of device motion and touch-free interaction. Unless you’ve used Paper in person, it’s hard to convey the level of immersion you get when viewing photos, especially panoramic content. It feels like what Apple were trying to do when they implemented their parallax effect for icons and dialogs in iOS 7.
There is an inherent risk when including motion-based controls in an app: often they serve no real purpose, and at worst they can hinder engagement rather than improve it. The Facebook Creative Labs team have done a superb job of treading these boundaries to create a control that is there when you need it and gets out of your way when you don’t. In short, it just “feels right”.
If we think of the photo panner without the associated motion control, we can break it down into a view to display the image (UIImageView), a view for the image view to live in which handles the panning (UIScrollView) and a layer to display the scroll bar (CAShapeLayer). Displaying the image at the full height of our device can be handled by UIScrollView’s zoom functionality; we just need to provide the appropriate zoom scale based on the aspect ratios of the device and image.
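A sketch of that calculation might look like the following (it assumes the scroll view’s delegate returns the image view from viewForZoomingInScrollView:, and the property names are illustrative rather than Paper’s own):

```objc
// Scale the image so it always fills the height of the scroll view,
// leaving the excess width available for panning.
CGFloat zoomScale = CGRectGetHeight(self.scrollView.bounds) /
                    self.imageView.image.size.height;

// Pin both ends of the zoom range so the user can't pinch away from it.
self.scrollView.minimumZoomScale = zoomScale;
self.scrollView.maximumZoomScale = zoomScale;
[self.scrollView setZoomScale:zoomScale animated:NO];
```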
From these components, we can build much of the photo panner functionality. To react to device motion, we’ll need to make use of the many device sensors that we have at our disposal.
The last few generations of iOS devices have shipped with a bunch of different sensors for measuring device orientation and acceleration; the one in particular that we’ll be looking at is the gyroscope. To quote the tome of all human knowledge:
A gyroscope is a device for measuring or maintaining orientation, based on the principles of angular momentum.
For us, this means we can accurately determine, based on a reference point, what direction our device is facing.
To be notified when the gyroscope orientation has changed, we’ll be using an instance of
CMMotionManager, a class that encapsulates interaction with the device motion sensors.
CMMotionManager allows you to effectively subscribe (using a block) to changes in accelerometer, gyroscope or magnetometer data. It also provides something called device motion updates, which encompass the attitude, rotation rate and acceleration of a device. We’ll be using these as they provide the same data as the raw gyroscope callbacks, but with the bias removed by Core Motion’s algorithms.
To start receiving callbacks for device motion, we call
startDeviceMotionUpdatesToQueue:withHandler: and pass in an
NSOperationQueue and a block to perform on the change.
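A minimal setup might look like this (the panner update method is a stand-in of my own, and the update interval is a guess):

```objc
@import CoreMotion;

// The motion manager must be retained (here as a property); Apple also
// recommends creating only one CMMotionManager instance per app.
self.motionManager = [[CMMotionManager alloc] init];
self.motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;

[self.motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                        withHandler:^(CMDeviceMotion *motion, NSError *error) {
    if (motion) {
        // rotationRate is in radians per second, bias-corrected by Core Motion.
        [self updatePannerWithRotationRate:motion.rotationRate];
    }
}];
```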
Now that we’re receiving callbacks from our CMMotionManager when the device orientation changes, we need to translate the gyroscope data into something that can adjust our UIScrollView’s contentOffset. The data we care about the most is the rotationRate, which will give us the x, y and z rotation of our device.
In particular, we’ll be using the yRotationRate to determine our device tilt; however, we’ll also be using the xRotationRate and zRotationRate. If we take a look at Paper, much of what makes it “feel right” is that there’s little to no accidental movement triggered when you rotate the device along an axis other than the y axis. To accomplish (or at the very least approximate) this, we’ll use the xRotationRate and zRotationRate as a threshold for responding to our yRotationRate. If we have a yRotationRate that is greater than the sum of the xRotationRate and zRotationRate, we’ll assume that the movement was intentional and adjust our contentOffset accordingly.
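A sketch of that check, inside the device motion handler (comparing absolute magnitudes is an assumption on my part, as is the helper method name):

```objc
CMRotationRate rate = motion.rotationRate;

// Only respond when the rotation is predominantly around the y axis;
// otherwise treat the movement as accidental and ignore it.
if (fabs(rate.y) > fabs(rate.x) + fabs(rate.z)) {
    [self panWithYRotationRate:rate.y]; // hypothetical helper
}
```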
Translating the device movement into scroll position is an instance where the dreaded magic number becomes a necessity. There’s no direct analog we can use to translate between device motion and scroll position; it’s a matter of playing with a multiplier until you find something that, again, “feels right”. We also have to factor in the zoom scale of the image we’re displaying so that, regardless of image dimensions, device motion translates into the same relative change in scroll position.
clampedContentOffsetForHorizontalOffset is a simple method that takes a horizontal offset and returns a
CGPoint representing an offset for the
UIScrollView that centers the content vertically and restricts it from exceeding the horizontal bounds.
We’ve also inverted the rotation rate to mimic Paper’s scroll direction when rotating the device (a detail I overlooked for far too long).
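A guess at what such a method could look like (this is my own sketch, not Paper’s implementation):

```objc
- (CGPoint)clampedContentOffsetForHorizontalOffset:(CGFloat)horizontalOffset {
    CGFloat maximumXOffset = self.scrollView.contentSize.width -
                             CGRectGetWidth(self.scrollView.bounds);

    // Restrict the offset from exceeding the horizontal bounds of the content.
    CGFloat clampedXOffset = fmax(0.0, fmin(horizontalOffset, maximumXOffset));

    // Centre the content vertically.
    CGFloat centeredY = (self.scrollView.contentSize.height / 2.0) -
                        (CGRectGetHeight(self.scrollView.bounds) / 2.0);

    return CGPointMake(clampedXOffset, centeredY);
}
```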
At this stage, we’d have something that on Paper does the job, but if we build and run our code now, we’ll quickly notice a disconcerting jitter on movement. This is because we’ve done nothing to smooth or filter the changes.
It’s a bit too accurate.
If we take another look at Paper, rotating the device causes the image to glide across the screen, coming to a slow rest when it reaches its apex. In our current state, we glide with the finesse of a giraffe on ice. We not only need to smooth out the general movement but also incorporate an ease-out function so we don’t come to an abrupt stop.
Choosing the highest level of abstraction and only dropping down a level when the original doesn’t meet the requirements or isn’t performant is a mindset that’s prevalent throughout Cocoa and Core Foundation. We’re provided with a wealth of options for common problems: Grand Central Dispatch and NSOperationQueue for asynchronous tasks; CALayer for displaying content on the screen; UIView block-based animation and CAAnimation for animating that content. Each of these overlaps in functionality to a large extent, but exists to tackle problems in a different way, depending on the requirement.
By using UIView block based animation, we can tap into the power of Core Animation, provide an ease out function, not block user interaction and have our changes be relative to our current state, all with one call.
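Something along these lines (the duration is a guess, and clampedOffset is assumed to come from the motion handler above):

```objc
[UIView animateWithDuration:0.3
                      delay:0
                    options:UIViewAnimationOptionBeginFromCurrentState |
                            UIViewAnimationOptionAllowUserInteraction |
                            UIViewAnimationOptionCurveEaseOut
                 animations:^{
                     // Animating contentOffset lets Core Animation smooth
                     // successive gyroscope updates into one gliding motion.
                     self.scrollView.contentOffset = clampedOffset;
                 }
                 completion:nil];
```

BeginFromCurrentState keeps each new update relative to wherever the previous animation has got to, and AllowUserInteraction means touches aren’t blocked mid-animation.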
Now if we build and run our code, we’ll notice the jitter gone and the whole interaction feeling much more natural.
To implement the scroll bar, we’re going to be using CAShapeLayer and the strokeStart and strokeEnd properties to adjust the apparent length and location of the scroll bar. We won’t go into too much detail on this technique as we’ve covered it previously; instead, we’ll be delving into a way to keep it in lock-step with the contentOffset of the UIScrollView.
A CADisplayLink object is a timer object that allows your application to synchronize its drawing to the refresh rate of the display.
We’ll be using our
CADisplayLink object to provide us with a display synchronized callback in which to poll the current position of our
UIScrollView content and adjust our scroll bar accordingly. The benefit of using a
CADisplayLink over an
NSTimer for this kind of operation is that we can align our scroll bar changes to the potentially varying frame rate of the display.
Setting up a CADisplayLink object is similar to setting up an NSTimer; we create the CADisplayLink, set the target callback to fire at every screen refresh and add it to a run loop.
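That might look like the following (the selector name is assumed):

```objc
// Retained as a property so the display link isn't deallocated.
self.displayLink = [CADisplayLink displayLinkWithTarget:self
                                               selector:@selector(updateScrollBar:)];
[self.displayLink addToRunLoop:[NSRunLoop mainRunLoop]
                       forMode:NSRunLoopCommonModes];
```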
You may notice we’re adding our CADisplayLink object to the run loop for the NSRunLoopCommonModes mode. While it’s not covered in this post, we’ll also want to support touch tracking, which will place our UIScrollView in the UITrackingRunLoopMode tracking mode. If we were to use NSDefaultRunLoopMode in this instance, we’d lose visual feedback of the scroll bar while tracking touches.
As we’re using block based animation to update our
UIScrollView, we can’t rely on the
contentOffset property to be 1:1 with what is being displayed onscreen. Instead we’ll be polling the
presentationLayer which will provide us with a close approximation of what is being displayed.
Unfortunately the properties we need aren’t as easily identified on the
presentationLayer as they are on
UIScrollView, but it’s not difficult to translate what we have to what we understand.
To retrieve the current contentOffset and contentSize, we’ll be using the presentationLayers from our UIScrollView and UIImageView. Once we have these, we need to calculate, based on our UIScrollView width, what percentage of the content is visible and the position of the content relative to the size of the content.
All that is left now is to pass these values along to our CAShapeLayer, which in this case is a sublayer of another view’s layer, every time we receive the callback from our CADisplayLink.
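Put together, the display link callback might be sketched like this (the property names are illustrative, and a scroll view’s bounds origin tracks its contentOffset):

```objc
- (void)updateScrollBar:(CADisplayLink *)displayLink {
    // Poll the presentation layers for a close approximation of
    // what is currently on screen, mid-animation.
    CALayer *scrollLayer = (CALayer *)self.scrollView.layer.presentationLayer;
    CALayer *imageLayer  = (CALayer *)self.imageView.layer.presentationLayer;

    CGFloat contentOffsetX = scrollLayer.bounds.origin.x;
    CGFloat contentWidth   = CGRectGetWidth(imageLayer.frame);
    CGFloat visibleWidth   = CGRectGetWidth(self.scrollView.bounds);
    CGFloat scrollableWidth = MAX(contentWidth - visibleWidth, 1.0);

    // Fraction of the content that is visible, and how far through it we are.
    CGFloat visibleFraction = visibleWidth / contentWidth;
    CGFloat progress = contentOffsetX / scrollableWidth;

    self.scrollBarLayer.strokeStart = progress * (1.0 - visibleFraction);
    self.scrollBarLayer.strokeEnd   = self.scrollBarLayer.strokeStart + visibleFraction;
}
```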
At this point, there’s one more thing to take into account. It wasn’t mentioned in the previous post, but many CALayer properties provide an implicit animation that we conveniently overrode by providing our own explicit animations. In our case, this would cause our scroll bar updates to lag behind the actual UIScrollView updates while the implicit animations completed.
To get around this implicit animation, we need to remove the actions for the strokeStart and strokeEnd properties from our CAShapeLayer.
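This can be done by overriding the layer’s actions dictionary with NSNull, a standard Core Animation technique (though not necessarily exactly how Paper does it):

```objc
// NSNull in the actions dictionary disables the implicit animation
// for these keys, so our display-link-driven updates apply instantly.
self.scrollBarLayer.actions = @{ @"strokeStart" : [NSNull null],
                                 @"strokeEnd"   : [NSNull null] };
```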
Hopefully this post has provided some appreciation for the thought and finesse that has gone into this and other iOS controls. We’ve only just scratched the surface.
You can check out this project on GitHub.