Evolution of vulnerabilities in Android apps


The history of Android app development has gone through several notable stages: from small apps running locally, to client-server apps, to app ecosystems and super-apps. Each of these stages raised the bar of complexity, created new vulnerabilities, and increased developers’ concern about the security of both the applications and the data they handle. The operating system itself has evolved, providing developers with more options and security mechanisms. But there are always a few more unknowns in this system of equations than meets the eye. This article will cover how mobile app vulnerabilities have evolved, what influenced them, which vulnerabilities are relevant now, and what’s in store for the future.

Android apps’ main vulnerabilities

There are quite a few types of mobile app vulnerabilities, but we can highlight some generalized types that cover the main landscape. The most frequent vulnerabilities are related to insecure storage of user and app data. The developer doesn’t even need to do anything for those to appear: just storing sensitive information unencrypted does it. Some developers, when thinking about security, store this data in the application’s internal directory, known as the sandbox. But in many cases this is not enough. One example is a device on which commands can be executed as the superuser (root). This function is not usually included in the standard OS, but advanced users add it themselves to run certain applications or to improve the operating system’s UX. Then the following scenario becomes possible: a seemingly legitimate app requests elevated permissions to perform its main function and, once they have been granted, starts behaving in ways the user doesn’t expect, for example copying data from the sandboxes of other applications. Another example is an OS vulnerability that allows the contents of one app’s sandbox to be read by another application. Here the malicious app doesn’t need elevated permissions: it exploits the vulnerability and gains access to unencrypted data in the internal directory of the target application. This is why the data needs to be encrypted. Fortunately, that is very easy to do these days, and you don’t need to be an expert in cryptography: you can just use the vendor’s solutions and follow the practices described in the official documentation.
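As a minimal sketch of what this looks like in practice, here is encrypted key-value storage using the Jetpack Security library (androidx.security:security-crypto); the file name and the stored key are illustrative:

```kotlin
import android.content.Context
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKey

// Values written here are encrypted before they ever reach disk, so even a
// process that can read the app's sandbox sees only ciphertext.
fun securePrefs(context: Context) = EncryptedSharedPreferences.create(
    context,
    "secure_prefs",                                   // illustrative file name
    MasterKey.Builder(context)
        .setKeyScheme(MasterKey.KeyScheme.AES256_GCM) // key lives in the Android Keystore
        .build(),
    EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
    EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
)

// Usage:
// securePrefs(context).edit().putString("auth_token", token).apply()
```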

Another, no less interesting, type of vulnerability is the absence of executable file integrity control and protection against modification. Here, if the developer does nothing, there is no protection, which allows attackers to modify the original application and distribute the result as if it were genuine. Surely nobody would want to download a non-original application, would they? In fact, many people do. Beyond commonplace demands like cutting out advertising and the mechanisms that gate paid features, users may need to run applications on devices with modified firmware. Such firmware very often allows commands to be executed as the superuser, and banking applications containing the appropriate security mechanisms refuse to run on those devices. As a consequence, all these checks have to be removed from a banking app before it will work on the firmware. These modifications are usually made by enthusiasts just for the sheer sport of it. But attackers can do the same thing, and then not only do the checks disappear from the banking app, it also gains code that steals login credentials. Protecting mobile applications from such modifications is quite difficult. As a rule, it requires the additional purchase of specialized packer utilities that complicate reverse engineering and force an attacker to waste a lot of time researching the security mechanisms. You can try to write the required security mechanisms yourself, but this demands qualifications far beyond the competence of ordinary mobile app developers.
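As a minimal sketch of the kind of check such tools automate, an app can compare its own signing certificate with a known digest at runtime. The expected value below is a placeholder, and a determined attacker can of course patch out any single in-process check, which is exactly why packers layer many of them:

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import java.security.MessageDigest

// Placeholder: SHA-256 digest of the genuine release signing certificate.
const val EXPECTED_CERT_SHA256 = "3f:ab:00:11:22:..."

fun signatureLooksGenuine(context: Context): Boolean {
    val info = context.packageManager.getPackageInfo(
        context.packageName, PackageManager.GET_SIGNING_CERTIFICATES
    )
    val signers = info.signingInfo?.apkContentsSigners ?: return false
    // Hash each signing certificate and compare against the expected digest.
    return signers.any { signature ->
        MessageDigest.getInstance("SHA-256")
            .digest(signature.toByteArray())
            .joinToString(":") { "%02x".format(it) } == EXPECTED_CERT_SHA256
    }
}
```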

Vulnerabilities related to network communication deserve a separate note. Many developers settle for the secure HTTPS protocol without adding any further protection. Under certain conditions, this allows an attacker who controls the communication channel to perform a man-in-the-middle (MITM) attack on the application and obtain confidential information. A basic scenario of such an attack runs as follows: when connecting to an untrusted Wi-Fi network, the user is shown a fake captive portal and asked to install an SSL certificate onto the device, after which the attacker can intercept all traffic generated by the user’s smartphone. Certificate pinning is usually employed to protect against this attack: the certificate, or certificate chain, of the legitimate server is hard-coded into the mobile app. There are other variations of this protection, but they are all aimed at preventing data exchange with any other server.
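A minimal sketch of pinning with OkHttp’s CertificatePinner (the host and the pin are placeholders; a real pin is the base64-encoded SHA-256 hash of the server certificate’s public key):

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Requests to api.example.com will fail unless the server presents a
// certificate whose public-key hash matches a pinned value.
val client = OkHttpClient.Builder()
    .certificatePinner(
        CertificatePinner.Builder()
            .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
            .build()
    )
    .build()
```

Remember to pin a backup certificate as well, or a routine certificate rotation on the server will lock users out of the app.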

Vulnerabilities related to inter-process communication and inappropriate use of OS and framework features are also very common on Android, especially in the early versions (4.1.1 and below). For a long time, the documentation for these features left much to be desired, and some parts were not documented at all. Together with a lack of clear guidelines and best-practice descriptions, this forced developers to write peculiar code, often reinventing mechanisms the OS already had. A particularly telling example is the ‘android:exported’ flag, which controls whether a component of an application can be called by other apps. On Android 4.1.1 and below, this flag defaulted to ’true’ (for content providers in particular), which means that any component where the developer had not set the flag was available for other applications to call. This can lead to bypassing authentication mechanisms, such as a PIN screen, or to exploiting other vulnerabilities by interacting directly with components the developer intended to be internal and inaccessible from outside. This is by design: Android apps are not required to have a single mandatory entry point and may have several. It is therefore very important to reduce the number of externally reachable components, and those that remain should strictly control any communication with the outside world.
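A minimal manifest sketch of that principle (component names are hypothetical): every component declares android:exported explicitly, and only the launcher activity is exported:

```xml
<!-- Hypothetical manifest fragment. -->
<activity
    android:name=".MainActivity"
    android:exported="true">
    <!-- Launcher entry point: must be exported so the system can start it. -->
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

<!-- Internal screen, e.g. the PIN screen: never callable by other apps. -->
<activity
    android:name=".PinLockActivity"
    android:exported="false" />
```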

Another distinct type of vulnerability is storing API access keys for technical services in the code. This includes analytics and crash reporting systems, cloud databases, and other external services. These services often issue keys with different access levels, because their developers understand the keys will be used in an untrusted environment. Yet app developers still leave keys with “extra” privileges in the code for various reasons. The risk of leaking such keys depends on the situation, but obtaining a server key for Firebase Cloud Messaging, for example, would allow an attacker to send arbitrary push messages to all registered users of the app.
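A made-up illustration of the anti-pattern: anything compiled into the APK can be recovered by decompiling it, so only keys that are safe to expose, with privileges restricted server-side, belong in the client:

```kotlin
// Both values are fictional. The first must never ship in a client binary;
// anyone who downloads the APK can extract it with off-the-shelf tools.
object ApiKeys {
    const val FCM_SERVER_KEY = "AAAA-hypothetical-server-key"       // server-side only
    const val ANALYTICS_CLIENT_KEY = "pub-hypothetical-client-key"  // restricted client key
}
```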

Fading vulnerability types

As operating systems evolve, so do vulnerabilities. Some disappear altogether, while others become increasingly difficult, though not impossible, to exploit. New OS mechanisms also create new vulnerabilities, or resurrect old ones that start working again because of bugs in the implementation of those mechanisms. One such vulnerability is CVE-2020-0188, which allowed files to be read from the internal directory of the standard Settings app via the Slices mechanism introduced in Android 9. As for vulnerabilities that are becoming increasingly rare in applications, it is worth returning to the bypass of the PIN screen by directly launching the app’s main screen. Why did this become possible? There are several factors:

  1. At some point, Google changed the default value of the ‘android:exported’ flag, and all components became unavailable to other applications by default unless the developer set the flag explicitly. Later (in Android 12), Google made declaring this flag mandatory for components with intent filters.
  2. Sections on application security that describe practices for the correct use of such important mechanisms were included in the official documentation.
  3. Single activity architecture became popular in application development. It is worth dwelling on this architecture, because it has affected more than just this vulnerability. As noted earlier, Android apps usually have more than one entry point and can be launched in several different ways. This happens because an app can have multiple “screens” (activities, in framework terms), and if a screen is exported, it can be launched independently of the others. Single activity architecture dictates avoiding multiple activities in favor of a single one, inside which all other screens live as fragments (in framework terms). Beyond purely technical convenience, this reduces the number of entry points into the app and allows input control to be organized at a single point rather than on each individual screen (see the sketch at the end of this section). Other architectural principles applied alongside it also reduce the number of Android components used, so developers generally no longer need to introduce services, broadcast receivers, and content providers in the volume previously required. They are still needed for various specific tasks, though, and sometimes you simply can’t do without them; in those cases, the vendor documentation on using particular components securely is helpful. And every year the operating system itself becomes less and less tolerant of all kinds of abuse.

A more trivial example of a fading vulnerability is insecure broadcast message handling. We haven’t seen it in our customers’ applications for three years, mostly because applications no longer need to process special message types: what remains are standard mechanisms, usually provided by standard libraries, that work correctly in most cases. Push notification spoofing met the same fate: developers settled on the standard mechanisms built according to the documentation, while service vendors restricted the privileges of the API access keys used for push notifications. Developers have also finally realized that everything inside an app can become available to attackers, and have practically stopped leaving debugging features in release builds.
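To make the single entry point from item 3 concrete, here is a minimal sketch of a single-activity app validating every external Intent in one place before routing to internal screens. The host, route, NewsFragment, and layout IDs are all hypothetical:

```kotlin
import android.content.Intent
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.fragment.app.Fragment

// Hypothetical single-activity app: every deep link arrives here and is
// validated once before any internal screen (fragment) is shown.
class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)   // assumed layout with a fragment container
        handleExternalIntent(intent)
    }

    private fun handleExternalIntent(intent: Intent) {
        val uri = intent.data ?: return               // plain launch: nothing to validate
        if (uri.host != "app.example.com") return     // reject unknown deep-link hosts
        when (uri.path) {
            "/news" -> show(NewsFragment())           // known route (hypothetical fragment)
            else -> Unit                              // unknown routes are silently dropped
        }
    }

    private fun show(fragment: Fragment) {
        supportFragmentManager.beginTransaction()
            .replace(R.id.container, fragment)
            .commit()
    }
}
```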

Current vulnerability types

Despite the best efforts of Google and the community toward secure development, vulnerabilities can still be found in applications. In addition to the vulnerabilities already described above, which might be called “simple” because each exists on its own, “complex” vulnerabilities are now becoming more common. These are no longer single vulnerabilities but full-blown attacks that chain together multiple vulnerabilities and/or features of the application and the Android framework. There are several reasons for this. Besides the increasing security of the platform itself, the complexity of applications is growing, and data entering them from outside often passes through a rather long chain of transformations. This, in turn, means the chain may break at some stage of exploitation simply because the developers happened to transform the data in a way that rendered a vulnerability unexploitable, even if they weren’t thinking about security at all.

A good example is an attack on an insecure OAuth implementation in an application. Developers have understood well that they should use the PKCE extension in untrusted environments, but errors still occur because the implementation is complex. Three parties are involved in the protocol: the mobile app, the mobile app’s server, and the OAuth provider’s server, so there are three points where something can go wrong. For example, if the OAuth provider’s server does not properly validate the redirect_uri (the parameter used to redirect the user back to the mobile app), an attacker can substitute their own value and intercept the code required to obtain the authorization token from the mobile app’s server. Alternatively, the mobile app may not sufficiently control the data it sends to the OAuth provider’s server, in which case an attacker can intervene and force the user to enter their credentials on a fake site. There are many ways to attack this scheme, and some scenarios are quite complex. This year, in bug bounty programs, I came across a 10-step attack involving all three parties that ultimately led to a full takeover of the user’s account on the target service, as well as to extracting extra information about the user from the OAuth provider by manipulating the list of data requested during authentication.
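A minimal sketch of the client-side PKCE pieces mentioned above: the app generates a random code_verifier, sends its SHA-256 hash as the code_challenge with the authorization request, and reveals the verifier only when exchanging the code, so an intercepted code alone is useless:

```kotlin
import java.security.MessageDigest
import java.security.SecureRandom
import java.util.Base64

// Random, high-entropy secret kept on the device until the token exchange.
fun generateCodeVerifier(): String {
    val bytes = ByteArray(32).also { SecureRandom().nextBytes(it) }
    return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes)
}

// Sent with the authorization request (code_challenge_method=S256).
fun codeChallenge(verifier: String): String {
    val digest = MessageDigest.getInstance("SHA-256")
        .digest(verifier.toByteArray(Charsets.US_ASCII))
    return Base64.getUrlEncoder().withoutPadding().encodeToString(digest)
}
```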
The increasing complexity of apps has also given rise to vulnerabilities related to app ecosystems. Why check carefully when you are passing data to an app written by another team and you know for a fact that everything there is fine? The problem is that it might be the wrong app, for a variety of reasons. For example, a malicious app may carry the same identifier as a legitimate one, say “com.news.app”. If another application in that ecosystem performs no further checks, simply relies on the presence of that identifier on the system, and sends it sensitive data, we have an ecosystem vulnerability. It also works the other way around: receiving data from “trusted” applications without additional checks can have fatal consequences for the user. An example from my own experience is an application that checked for a certain identifier on the system and, if it found it, requested a configuration from that app. This allowed the first app to set a debug flag and make the second one save user data in a location accessible to all applications.

Local authentication vulnerabilities also remain relevant. PINs, biometrics, and 2FA can be bypassed because of implementation bugs or the developers’ poor understanding of framework concepts. With local PIN login, developers sometimes forget to persist the number of login attempts used, so the attempt counter can be reset by simply restarting the application. This is more common than it might seem. In a slightly more sophisticated variant, changing the system time helps, as the application logic may detect it poorly, which again resets the number of login attempts. Bypassing biometrics is a bit harder, but still possible if the application displays a biometric dialog merely to verify that biometrics were presented. Under certain conditions this window can be hidden, letting an attacker into the application: because no cryptographic operations on application data are tied to the biometric check, canceling the dialog does not affect any internal authentication process. Whether 2FA can be bypassed depends heavily on the app’s logic. A recent example is a 2FA bypass in TikTok caused by a random server timeout when several incorrect login attempts are made in a particular sequence.
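A minimal sketch of the stronger pattern (the Cipher is assumed to be initialized elsewhere with a Keystore key created with setUserAuthenticationRequired(true)): the biometric prompt unlocks a real cryptographic object, so hiding or canceling the dialog leaves the protected data unreadable instead of merely skipping a UI step:

```kotlin
import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import androidx.fragment.app.FragmentActivity
import javax.crypto.Cipher

fun promptBiometrics(
    activity: FragmentActivity,
    cipher: Cipher,                      // tied to a user-authentication-required key
    onUnlocked: (Cipher) -> Unit
) {
    val prompt = BiometricPrompt(
        activity,
        ContextCompat.getMainExecutor(activity),
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                // The cipher becomes usable only after genuine biometric success;
                // decrypt the sensitive data with it here.
                result.cryptoObject?.cipher?.let(onUnlocked)
            }
        }
    )
    val info = BiometricPrompt.PromptInfo.Builder()
        .setTitle("Unlock")
        .setNegativeButtonText("Cancel")
        .build()
    prompt.authenticate(info, BiometricPrompt.CryptoObject(cipher))
}
```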

Where things are headed

Android is constantly advancing, and its security mechanisms are continually improving. But not every problem can be solved technically; some have to be managed instead. For example, starting with Android 14, applications targeting SDK versions below 23 (Android 6.0) cannot be installed. The reason is that attackers deliberately lower the target SDK version of malicious apps in order to exploit the system’s well-known flaws through its backward compatibility mechanism.

Applications are changing too. More and more cross-platform applications are appearing, and developing an app for several operating systems at once keeps getting easier. But everything comes at a price. On top of platform-specific bugs, cross-platform applications add behavioral quirks of their own, which attackers can also exploit. The problem is that the tools and libraries for building such applications are far from mature, or simply don’t exist, so developers have to implement some functions themselves, which is also fraught with errors, especially around cryptographic operations or certain protocols. Development of these applications always happens at a certain layer of abstraction, with the mechanisms of a particular platform hidden from the developer. Developers can, of course, reach those mechanisms and interact with them directly, but then another problem arises: a good Android developer is unlikely to have a deep understanding of iOS security mechanisms, and vice versa. All this, combined with a lack of well-documented best practices for secure cross-platform development, leads to rather simple and obvious vulnerabilities. For example, in one cross-platform application I managed to find several API access keys to external systems that shouldn’t have been there at all; they simply couldn’t have ended up in the application in that form had it been built natively.

An example of this tooling immaturity is support for the Hermes format in React Native applications. Hermes is a binary format into which the resulting JavaScript code containing the application logic is compiled. The lack of decent tools for decompiling this format made such mobile applications very difficult to examine. For a while, however, the format was supported only for Android apps, and the standard trick (which still works today) was to pull the resulting JavaScript code out of the iOS app whenever the Android version was compiled to Hermes.

In short, the contest between armor and projectile continues. New OS features appear, and vulnerabilities are discovered in them; those vulnerabilities get closed, and then ways to bypass the defenses are found. It is all like a constantly evolving living organism. I have described only a small part of what is going on, to show the path that vulnerabilities in Android apps have taken and the impact they have had on the development of the operating system. I would recommend that app developers keep a close eye on the new security mechanisms appearing in Android and start applying them as early as possible to protect users. Users, in turn, need to look at what is happening on their device with a critical eye and remember: if you think for even a second that something is wrong, then something really is wrong. There are simply too many dimensions to this issue, so the best thing we can do as mobile app security specialists is to keep looking for vulnerabilities in mobile apps and operating systems so as to improve the ways of protecting against them, and to educate developers, making that aspect of life a little bit safer.

