A couple of interesting things from the Telegram Android app's source code
As an Android developer, I love finding a neat trick or piece of code in an Android application's codebase. It not only extends my knowledge, it is also really interesting to see how other developers approach particular problems.
One of the most convenient, smooth and solid applications I use as my daily driver is the Telegram messenger. Since the Android app's source code is available on GitHub, I sometimes like to dig into the codebase to see how the Telegram devs implemented a feature. The approaches they use are often really interesting, so I decided to share a couple of things I encountered in their codebase.
Splitting devices by performance classes
The first interesting thing is splitting devices into performance classes. Android fragmentation is huge, and if you want your app to be as smooth as possible on any device, it makes sense to adapt some behavior to the hardware power of the device.
Telegram splits all devices into three performance classes, LOW, AVERAGE and HIGH, and the performance class is determined from the device's hardware information, such as the CPU core count, CPU frequencies and the memory class.
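Here is a minimal Kotlin sketch of the idea. The function name and every threshold below are illustrative choices of my own, not Telegram's actual values:

```kotlin
import android.app.ActivityManager
import android.content.Context
import java.io.File

enum class PerformanceClass { LOW, AVERAGE, HIGH }

fun devicePerformanceClass(context: Context): PerformanceClass {
    val cpuCount = Runtime.getRuntime().availableProcessors()
    // Per-app heap budget in MB, a rough proxy for how much RAM the device has.
    val activityManager =
        context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val memoryClass = activityManager.memoryClass
    // Max CPU frequency in MHz, averaged over cores. Read from sysfs, which
    // may be inaccessible on some devices, hence the -1.0 fallback.
    val maxFreqMhz = (0 until cpuCount).mapNotNull { core ->
        runCatching {
            File("/sys/devices/system/cpu/cpu$core/cpufreq/cpuinfo_max_freq")
                .readText().trim().toInt() / 1000 // kHz -> MHz
        }.getOrNull()
    }.takeIf { it.isNotEmpty() }?.average() ?: -1.0

    return when {
        cpuCount <= 2 || memoryClass <= 100 -> PerformanceClass.LOW
        cpuCount < 8 || memoryClass <= 160 ||
            (maxFreqMhz > 0 && maxFreqMhz <= 2050) -> PerformanceClass.AVERAGE
        else -> PerformanceClass.HIGH
    }
}
```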
Based on the device's performance class, Telegram decides whether to show certain animations, sets blur parameters, scales the particle count in particle animations and defines the size of the area in which the camera stream is drawn.
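The class is then consulted wherever quality can be traded for smoothness. A hypothetical example, using the sketch above:

```kotlin
// Hypothetical usage: scale the particle count with the device's power.
val particleCount = when (devicePerformanceClass(context)) {
    PerformanceClass.LOW -> 10
    PerformanceClass.AVERAGE -> 30
    PerformanceClass.HIGH -> 60
}
```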
Actually, the idea of splitting devices into performance classes is not unique, and there are already a few existing solutions for it. There is a library from Meta that does something similar, and Google recently released an alpha version of its own performance-class library.
An interesting approach to animation
There are quite a few ways to run an animation on Android, each with its pros and cons, but there is one that I had never encountered before. It is simple yet pretty elegant, and the idea behind it can be shown with a couple of lines of code.
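Here is a minimal sketch of it (DynamicView is my own name for the custom view):

```kotlin
import android.content.Context
import android.graphics.Canvas
import android.view.View

class DynamicView(context: Context) : View(context) {

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        // ...draw the current frame here...
        invalidate() // schedule another draw pass, forming an endless draw loop
    }
}
```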
Now we've got our animation loop: each time onDraw() is called, we invalidate the view, so it will be called again on the next draw pass. OK, but where is the animation? You're right, it isn't an animation yet. For it to become one, the animated value (which could be anything: a color, a translation, whatever) should change slightly from one draw pass to the next, so to the user it looks like an animation.
A good use case for this is animating sound amplitudes (a voice recording, for example), since the hardware just emits a stream of amplitudes and the view should be able to animate between them quickly.
A stream of amplitudes looks just like an array of float values in the range 0f to 1200f: [0f, 5f, 646.5f … 700f, 400f, 200f, … ]. The idea is that each time a new value is dispatched to the view, we set a new target amplitude to animate towards, and until the next value arrives, each onDraw() call slightly adjusts the current amplitude towards that target.
Let's split this concept into a few parts.
The first part is updating the view with a new amplitude value that came from the hardware. At this step we also calculate a small amplitude delta: the value that will be added to or subtracted from the current amplitude on each onDraw() call. The greater this delta, the faster the drawn amplitude changes.
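A sketch of what that first step might look like inside DynamicView; maxAmplitude, animationDurationMs and onNewAmplitude are names of my own, not Telegram's:

```kotlin
// Inside DynamicView
private val maxAmplitude = 1200f
// Roughly how long, in ms, it takes to reach a freshly dispatched target.
var animationDurationMs = 500f

private var currentAmplitude = 0f
private var targetAmplitude = 0f
private var deltaAmplitude = 0f

// Called every time a new amplitude value arrives from the hardware.
fun onNewAmplitude(value: Float) {
    targetAmplitude = value.coerceIn(0f, maxAmplitude)
    // Per-millisecond step added to (or subtracted from) the current amplitude.
    deltaAmplitude = (targetAmplitude - currentAmplitude) / animationDurationMs
}
```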
Don't mind all the magic numbers in the sketch above; they are just picked so that the deltaAmplitude variable stays relatively small.
The second part is to actually update the current amplitude value using this deltaAmplitude variable and draw it on the canvas. For this example I will just draw a circle whose size represents the current amplitude; Telegram draws something called a blob instead.
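Continuing the same sketch (again my own approximation, not Telegram's actual code):

```kotlin
// Inside DynamicView
private val paint = Paint(Paint.ANTI_ALIAS_FLAG).apply { color = Color.CYAN }
private var lastFrameTime = 0L

override fun onDraw(canvas: Canvas) {
    super.onDraw(canvas)
    val now = SystemClock.elapsedRealtime()
    val dt = if (lastFrameTime == 0L) 0L else now - lastFrameTime
    lastFrameTime = now

    calculateNextFrame(dt)

    // The circle's radius visualizes the current amplitude.
    val radius = currentAmplitude / maxAmplitude * (width / 2f)
    canvas.drawCircle(width / 2f, height / 2f, radius, paint)

    invalidate() // keep the draw loop running
}

// Moves currentAmplitude towards targetAmplitude based on the elapsed time.
private fun calculateNextFrame(dt: Long) {
    if (currentAmplitude == targetAmplitude) return
    currentAmplitude += deltaAmplitude * dt
    // Clamp so we don't overshoot the target.
    val overshot = (deltaAmplitude > 0 && currentAmplitude > targetAmplitude) ||
        (deltaAmplitude < 0 && currentAmplitude < targetAmplitude)
    if (overshot) currentAmplitude = targetAmplitude
}
```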
The key thing in the sketch above is the calculateNextFrame function: it takes dt, the delta time between subsequent onDraw() calls, and based on it and deltaAmplitude calculates the next amplitude to be drawn on the canvas.
The last thing is to dispatch some random amplitude values to the view and see how it handles them.
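A quick way to do that with a Handler, assuming a dynamicView reference to the view from the sketches above:

```kotlin
import android.os.Handler
import android.os.Looper
import kotlin.random.Random

// Dispatch a random amplitude to the view every 150 ms.
val handler = Handler(Looper.getMainLooper())
handler.post(object : Runnable {
    override fun run() {
        dynamicView.onNewAmplitude(Random.nextFloat() * 1200f)
        handler.postDelayed(this, 150L)
    }
})
```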
And by combining two DynamicViews in one layout, each with a different speed, we can see pretty nice results.
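With the sketch above, that boils down to giving each view its own animationDurationMs:

```kotlin
// The same amplitude stream, consumed at two different speeds.
val fastView = DynamicView(context).apply { animationDurationMs = 250f }
val slowView = DynamicView(context).apply { animationDurationMs = 800f }
```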
I hope you enjoyed the read; as always, all the code is available on GitHub. Cheers!