How Android Developers Can Mitigate Risks of Pixnapping-style Attacks

Pixnapping is a style of attack that uses a hardware side-channel vulnerability to stealthily leak information displayed by other Android apps. Published as CVE-2025-48561, it is an excellent example of in-depth security research, and the team that discovered it has described how the attack works in detail at https://pixnapping.com.
As described by the researchers who discovered this technique, a malicious app mounts a Pixnapping attack in three steps:
- Invoking a target app (e.g., Google Authenticator) to cause sensitive information to be submitted for rendering. This step is described in Section 3.1 of the paper.
- Inducing graphical operations on individual sensitive pixels rendered by the target app (e.g., the pixels that are part of the screen region where a 2FA character is known to be rendered by Google Authenticator). This step is described in Section 3.2 of the paper.
- Using a side channel (e.g., GPU.zip) to steal the pixels operated on during Step 2, one pixel at a time. This step is described in Section 3.3 of the paper.
Steps 2 and 3 are repeated for as many pixels as needed to run OCR over the recovered pixels and recover the original content. Conceptually, it is as if the malicious app was taking a screenshot of screen contents it should not have access to.
The most novel aspect of the attack is that it does not require any special permissions, giving malware developers an advantage against typical defenses.
Our research
Our aim is to share our investigation into how app developers may be able to mitigate some of the effects of Pixnapping-style attacks and protect the sensitive information in their application. Note that at the time of writing, the paper describing the attack is available, but the actual proof of concept implementation is still under embargo. Because of this, we have evaluated the proposed defenses against our own reproduction of the attack, which, to the best of our knowledge, reproduces the researchers' results. We have reached out to the researchers for feedback on the proposed defenses.
Visualization of the reproduced attack. An attacker application recovers a Guardsquare logo displayed by a victim activity. For a better viewing experience, the flickering of the attacker's overlays has been reduced in this video.
Will Google or OEMs address the vulnerability?
The researchers disclosed their findings to Google and Samsung early in 2025. Google has patched part of the vulnerability, but workarounds remain that still leave users vulnerable. Google continues to work on additional patches, which are expected to resolve the vulnerability fully in the coming months. While a Google patch is expected in the December Android security bulletin, other OEMs may take several months to integrate it into their distributions of Android. Large populations of Android devices should also be expected to remain unpatched, either because updates are not applied or because devices are outside their long-term support window.
Additionally, there is no known patching strategy from the GPU vendors that would further mitigate or prevent this style of attack.
Risks for Android apps
The main risk for Android apps relates to displaying sensitive data on screen, especially if that screen can be reached through an intent that another application can launch. While unoptimized attacks may require hours to recover the information, the researchers show that knowledge of the exact layout and formatting of the victim application can lead to optimized versions that run in under 30 seconds.
The researchers demonstrated this risk most clearly with the Google Authenticator app, which displays 2FA authentication codes on screen for up to 30 seconds. While this is too short a window for an unoptimized attack, precise knowledge of the specific fonts and positioning of the digits that make up the code makes the attack fast enough to capture the 2FA code within that window.
If your application displays authentication codes or other sensitive information for such periods of time, it may therefore be vulnerable to this style of attack and should be evaluated for potential mitigations.
Guardsquare research on mitigations
In order to protect our customers’ Android applications, we continuously research protections against common Android malware techniques. This results in both regular product updates for our customers as well as additions to our Malware Security Research Center. This free resource describes various protection strategies and provides the relevant code snippets to further harden your application.
Continuing this tradition, we have studied the Pixnapping attack, tested our current mitigations, and identified an additional countermeasure that provides a higher level of security. Below, we show that existing mitigations (FLAG_SECURE and activity injection defenses) are ineffective or only partially effective against Pixnapping-style attacks, and then describe and demonstrate a new, effective mitigation.
Reducing display of sensitive information by design
A design principle that you should consider when producing an app is ensuring that sensitive information is only displayed on screen when absolutely necessary and for a minimal amount of time. Reducing how long the information is displayed on screen can help reduce the potential for a side-channel attack being able to extract complete information. In cases where the information displayed does not change and can be invoked again, this design principle may not be enough.
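As a minimal sketch of this principle (the class, view, and timeout below are illustrative and not taken from any specific app), a sensitive value could be revealed only on request and cleared again after a short delay:

```kotlin
import android.os.Handler
import android.os.Looper
import android.widget.TextView

// Illustrative only: show a sensitive value on request and clear it again
// after a short delay, so it spends as little time on screen as possible.
class SensitiveValuePresenter(private val codeView: TextView) {

    private val handler = Handler(Looper.getMainLooper())
    private val clearRunnable = Runnable { codeView.text = "••••••" }

    fun showTemporarily(code: String, visibleForMillis: Long = 5_000L) {
        codeView.text = code
        handler.removeCallbacks(clearRunnable)
        handler.postDelayed(clearRunnable, visibleForMillis)
    }
}
```

The exact timeout is a trade-off between usability and exposure; the shorter the value stays on screen, the less time a side channel has to recover it.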
Use of FLAG_SECURE
FLAG_SECURE is an Android window flag designed to prevent fraud and abuse of screens containing sensitive information. It prevents the displayed information from being captured in screenshots, keeps it out of the recent apps view when the app is in the background, and blocks it from being shown on non-secure external screens or projectors.
In theory, FLAG_SECURE should have helped against this type of attack, since it makes the screen appear blank in the recent apps view. We’ve found, however, that FLAG_SECURE only prevents the content from being displayed: the underlying rendering still occurs, leaving it vulnerable to side-channel inspection.
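For reference, this is how FLAG_SECURE is typically applied to an activity’s window (the activity name below is a placeholder); as noted above, in our tests this alone did not stop the attack:

```kotlin
import android.os.Bundle
import android.view.WindowManager
import androidx.appcompat.app.AppCompatActivity

class SecureActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Mark the window as secure before any content is shown. This blocks
        // screenshots, screen recording and display on non-secure screens,
        // but it does not prevent the content from being rendered, so it does
        // not stop the side channel used by Pixnapping.
        window.setFlags(
            WindowManager.LayoutParams.FLAG_SECURE,
            WindowManager.LayoutParams.FLAG_SECURE
        )
        // setContentView(...) and the rest of onCreate as usual.
    }
}
```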
Traditional activity injection defenses
We’ve tested activity injection defenses, a feature of DexGuard that can be automatically applied to targeted activities at build time. These defenses are also detailed as code samples in our Security Research Center. Activity injection defenses break trivial attempts at this attack, but we believe an attacker could modify their approach to work around this mitigation.
In this scenario the victim activity is brought to the front, which breaks the activity stack that the attacker tries to set up. Without changes to the approach outlined in the original research paper, this stops the attack from taking place. This also makes it less stealthy, as the victim application can warn the user about what’s taking place and the potential presence of malware.
However, let us assume that with some additional effort the attacker finds a way to evade the detection and get the activities injected, so the attack continues. This means that app developers who implement activity injection malware defenses have some initial protection against the attack, but they may need to strengthen it further to completely mitigate the risks.
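As a simplified illustration of the kind of signal such defenses can build on (this is not the DexGuard implementation, and a robust defense combines multiple checks), an activity can react when it loses window focus while sensitive data is on screen, for example because another activity or overlay has been placed on top of it:

```kotlin
import android.view.View
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity

class ProtectedActivity : AppCompatActivity() {

    // One possible signal: the activity loses window focus while it is still
    // on screen, e.g. because another activity or overlay was placed on top.
    override fun onWindowFocusChanged(hasFocus: Boolean) {
        super.onWindowFocusChanged(hasFocus)
        val content = findViewById<View>(android.R.id.content)
        if (hasFocus) {
            content.visibility = View.VISIBLE
        } else if (!isFinishing) {
            // Hide the sensitive content and warn the user.
            content.visibility = View.INVISIBLE
            Toast.makeText(this, "Another window is covering this screen.", Toast.LENGTH_LONG).show()
        }
    }
}
```

Note that focus loss also happens for benign reasons such as dialogs, so a real defense needs additional context before warning the user.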
Limiting background visibility of Activities
Our research indicates that you can protect sensitive views in your app by hiding them whenever the app is not in the foreground.
Visualization of the mitigation. When the attacker layers are overlaid on top of the victim activity, its views are hidden. Because of this, no information is leaked to the attacker.
In the video just above, you can see a demo of this protection applied to Google Authenticator:
- The beginning of the video shows the Activity Injector app. The hide button is pressed, so you can see what the mitigation will look like once it is applied to Google Authenticator.
- After three seconds, the activity views are restored.
- Next, the inject button is pressed:
  - Google Authenticator is launched. Note that its onResume method is called at this point, so the views are made visible. This can be seen in the log: “Restoring all the views.”
  - After five seconds, the malicious app injects an activity on top of the target. As the Google Authenticator app’s activity has gone to the background, its onPause method has been called and all the views have been hidden (this simply applies view.setVisibility(View.INVISIBLE) recursively to all the views). We can see in the log: “Hiding all the views.” At this point, the attack should not work, as there is no information visible on the screen.
  - After five seconds, the injected activity is removed, so we can see the Google Authenticator activity again (and “Restoring all the views” in the log).
You can find an example implementation of this mitigation technique in our Security Research Center:
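For illustration, a minimal sketch of the approach described above (not the Security Research Center implementation itself, and assuming an AndroidX activity) could look like this:

```kotlin
import android.view.View
import android.view.ViewGroup
import androidx.appcompat.app.AppCompatActivity

// Rough sketch: hide every view while the activity is not in the foreground,
// so nothing sensitive is rendered while another activity is layered on top.
class SensitiveActivity : AppCompatActivity() {

    override fun onResume() {
        super.onResume()
        // Back in the foreground: restore all the views.
        setVisibilityRecursively(findViewById<View>(android.R.id.content), View.VISIBLE)
    }

    override fun onPause() {
        // Losing the foreground (possibly to an injected activity): hide all the views.
        // A production version should remember and restore each view's original visibility.
        setVisibilityRecursively(findViewById<View>(android.R.id.content), View.INVISIBLE)
        super.onPause()
    }

    private fun setVisibilityRecursively(view: View, visibility: Int) {
        view.visibility = visibility
        if (view is ViewGroup) {
            for (i in 0 until view.childCount) {
                setVisibilityRecursively(view.getChildAt(i), visibility)
            }
        }
    }
}
```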
Additional recommendations when considering malware defenses
When considering any malware defense, it is also important to ensure that the implementation of these defenses is hardened against modification and repackaging. Attackers targeting your app can be persistent: if they observe how your app implements malware defenses, they will attempt to modify and repackage a version of your application that disables these defenses. Using sophisticated phishing campaigns, they will then push users to install the modified version of your app so that they can successfully complete their malware campaign.
For more information on application protections that prevent this kind of reverse engineering and tampering of Android applications, you can learn more about DexGuard.
Additional resources: