Android Animations - interacting with the user
Anders Skaalsveen

One part of the Android framework has more in common with body language than you might think.
Make it move
Animation makes still images appear to move. To create an animation, all you need is multiple images of the same object, shown in a fast sequence. If you make small changes between each image, the objects appear to be moving. Just like that.
A physical screen is made up of tiny pixels that emit red, green and blue light. So, in reality, what we look at when we admire a beautiful animation on a phone is just an array of pixels, precisely controlled and continuously updated to create the illusion of moving objects.
On top of seemingly basic physical components, like the light-emitting pixels, we build more and more complex ideas and abstract concepts in software to control what is visible on the screen. We trick our brains into perceiving depth on 2-dimensional surfaces, and it doesn’t really matter that it’s just an illusion, as long as it looks real enough.
The illusion
When we talk about animation on Android, what we mean is generally screen elements, or views, that are moved or gradually altered in other ways, to make changes to the layout.
By borrowing concepts from physics, such as gravity and friction, we can animate objects to make them appear with the familiar and predictable behavior that we all know from the real world.
When we animate a view to move, we need a frequent rate of new images to be created and displayed on the screen, in order to uphold the illusion of movement. Devices with higher frame rates generally show more fluid animations, but even with fast displays, the processors can still become too busy to render our images, and we start to see lag from frames being dropped.
Fortunately, all the calculations related to frame rate are handled for you by Android when you use the built-in animation libraries. To demonstrate this, we can look at how to use the ValueAnimator. All you need to do to set it up is to define a start value, an end value, and what to do during each update. The animated value, between the start and the end value, is passed to us in the callback.
val startValue = 0f
val endValue = 100f

ValueAnimator.ofFloat(startValue, endValue).apply {
    addUpdateListener { animator ->
        val animatedValue = animator.animatedValue
        if (animatedValue is Float) {
            view.x = animatedValue
        }
    }
    start()
}
Move-animation, triggered by a click, that reverses
There are also many more options we can set on the value animator. Some examples are the duration of the animation and the start delay, but there are also more advanced options that I will try to cover in one of the upcoming posts.
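As a brief sketch of those two options, duration and start delay can be set directly on the animator before starting it (the values below are arbitrary, and `view` is assumed to be in scope, as in the example above):

```kotlin
import android.animation.ValueAnimator

ValueAnimator.ofFloat(0f, 100f).apply {
    duration = 300L    // run the animation over 300 ms (default is 300)
    startDelay = 100L  // wait 100 ms before the first frame
    addUpdateListener { animator ->
        view.x = animator.animatedValue as Float
    }
    start()
}
```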
A language of motion
To shape illusions to be perceived by others is a very powerful thing. Sometimes, it can be more effective as a way to communicate than using words. Words can mean different things to people, but with animation, we can tap into a universal language of motion and mimic known things, such as physics and body-language.
Animation can be used to make it look like views have gravity and other physical properties, but it can also express the attitude and personality of our apps. Spring motion for instance, can make objects appear as more energetic and alive, similar to how people move differently, based on their mood and energy level.
To apply spring motion to views, we can use a library called DynamicAnimation. This library contains classes for both spring animation and fling animation that take velocity into account, so you can base the motion on interaction with the touch screen. The example below is not as advanced, but it demonstrates how to make it look like we are shaking a view.
fun View.shake(endValue: Float) {
    SpringAnimation(this, DynamicAnimation.TRANSLATION_X, endValue).apply {
        spring.stiffness = SpringForce.STIFFNESS_MEDIUM
        spring.dampingRatio = SpringForce.DAMPING_RATIO_HIGH_BOUNCY
        start()
    }
}
Spring-animation followed by a move-animation
When used sparingly, spring-animations provide an effective way of directing the users’ attention to a specific view on the screen.
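Since spring and fling animations can take velocity into account, here is a sketch of passing a gesture's velocity into a spring. The function name `releaseWithVelocity` is hypothetical, and the velocity is assumed to come from touch handling (for example a VelocityTracker read on ACTION_UP):

```kotlin
import android.view.View
import androidx.dynamicanimation.animation.DynamicAnimation
import androidx.dynamicanimation.animation.SpringAnimation
import androidx.dynamicanimation.animation.SpringForce

// Spring the view back to x = 0, starting with the velocity of the
// user's gesture so the motion continues naturally after release.
fun View.releaseWithVelocity(velocityX: Float) {
    SpringAnimation(this, DynamicAnimation.TRANSLATION_X, 0f).apply {
        setStartVelocity(velocityX)  // carry the gesture's momentum into the spring
        spring.stiffness = SpringForce.STIFFNESS_LOW
        spring.dampingRatio = SpringForce.DAMPING_RATIO_MEDIUM_BOUNCY
        start()
    }
}
```

Because the spring starts with the same velocity the finger had, the hand-off from dragging to animating feels seamless.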
Getting feedback
The importance of non-verbal communication and our instant reaction to it should be in the back of our minds when we strive to create the best possible apps.
To let users know that their actions have been registered by the device, they need instant recognition. With instant feedback on their actions, the flow of communication improves and we reduce uncertainty.
Interactive screen elements that look the same all the time can prevent users from getting the instant feedback they want. While button views come with simple animation built in, we can make other elements respond to touch as well.
By making sure that the views we can interact with provide visual feedback when touched, we let the user know that the app still works, even when it takes 50 or 100 milliseconds too long to open the next screen.
To add different states to a view based on interaction, such as selection and press, we start by creating a new file in the drawable folder. Then, we add a selector tag, linking the states we want to different looking drawables like this:
<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item
        android:drawable="@drawable/button"
        android:state_selected="true" />
    <item
        android:drawable="@drawable/button_pressed"
        android:state_pressed="true" />
    <item
        android:drawable="@drawable/button" />
</selector>
Finally, we use it as the background resource for the view we want to change.
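As a sketch, assuming the selector above is saved as `res/drawable/button_states.xml` (the file name is an assumption, not from the original post), applying it could look like this:

```kotlin
// Set the state-list drawable as the view's background.
view.setBackgroundResource(R.drawable.button_states)

// The view must be clickable for the pressed state to trigger,
// and isSelected can be toggled from code to show the selected state.
view.isClickable = true
view.isSelected = true
```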
A view that responds to click
The examples I have shown so far are very basic, but before we go into more complex and interesting tools for animation, I think it makes sense to take a step back and reflect on why all the small details of animation matter.
Visual cues
To feel heard and acknowledged are some of our deepest longings as humans. That’s why we truly appreciate when it happens. And why we react so strongly when we are being ignored, misread, misunderstood or not heard at all.
Waiting for an app that didn’t register your input can feel frustrating, especially when the app doesn’t provide any visual cues to indicate the tap of a button or other interactions with it.
Perhaps you no longer notice the small signs of acknowledgment and acceptance when you talk to people on a regular basis, but we constantly pick up on visual cues and hand gestures. Facial expressions and body language express our immediate reactions and opinions about what the other person is saying. They also reveal if we’re not really listening.
To improve our apps, we can think about where visual cues are needed and how to make animations correspond to body language, so we can communicate more with motion.