November 18, 2025

Why Mobile App Security Needs Both Scanning and Protection

The uncomfortable truth about secrets in apps

Mobile apps need to talk to the world. They fetch maps, process payments, call APIs, and sync data to cloud services. To do that securely, they often rely on API keys, OAuth tokens, or cryptographic secrets to prove the authenticity of the app or user.

The problem is that those keys are often hard-coded in the app. Once the app ships, anyone reverse engineering it can find them. And attackers do.

In the 2025 study “Leaky Apps: Large-scale Analysis of Secrets Distributed in Android and iOS Apps”, researchers from the University of Vienna unpacked 10,331 apps and discovered 416 valid credentials across 65 different services. These ranged from payment gateway and cloud API credentials to Git repository tokens that exposed private source code.

Even more striking, the team found that iOS apps leaked secrets slightly more often than Android apps.

Additional 2025 research, led by Alecci et al., tested whether large language models (LLMs) could detect secrets in Android apps. Their prototype, SecretLoc, scanned thousands of APKs and identified 4,800+ hidden secrets, many missed by traditional regex scanners. The takeaway is that even modern apps with only basic obfuscation can be mined for credentials by anyone with the right tools, and AI makes that process faster.

Why do hardcoded secrets keep happening?

Hardcoded secrets appear for two main reasons:

  1. Intentional inclusion. Some secrets are genuinely required, like a Google Maps API key or a payment provider token that must reside within the app in order for it to function offline. These are included on purpose to support required functionality, often because the developer doesn’t know there is a safer way to handle them, or assumes the associated risk is low.
  2. Accidental leakage. More often, keys slip in by mistake: a developer leaves a test credential in the code, a build script pulls in an environment variable, or an unused config file containing internal keys is bundled into the final binary.

The Leaky Apps team even found iOS apps shipping with .gitlab-ci.yml files and shell scripts that contained private tokens. In one case, a mobile banking app included its entire Swift source directory inside the production package.

From an engineering manager’s point of view, this can look like negligence, but the reality is more mundane: modern mobile pipelines are complex, third-party SDKs are opaque, and release cycles are fast. Without explicit checks, secrets sneak in.

That’s where scanning comes in.

Scanning catches the secrets that shouldn’t be there

Scanning provides the answer to the question: “Does this app contain anything that doesn’t belong in the hands of the public?”

Static scanners unpack the app (APK or IPA), extract text and code, and search for patterns that look like secrets: AWS keys, JWT tokens, Google API keys, cryptographic material, and more.
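
To make this concrete, here is a minimal Kotlin sketch of the pattern-matching core of such a scanner. The regexes, file handling, and size limit are illustrative assumptions, not the rule set of any particular product; real scanners combine far larger pattern libraries with entropy analysis and live validation of candidate credentials.

```kotlin
import java.io.File

// Illustrative patterns only; production scanners use far larger rule sets.
val secretPatterns = mapOf(
    "AWS access key ID" to Regex("AKIA[0-9A-Z]{16}"),
    "Google API key" to Regex("AIza[0-9A-Za-z_-]{35}"),
    "JWT-shaped token" to Regex("eyJ[A-Za-z0-9_-]{8,}\\.[A-Za-z0-9_-]{8,}\\.[A-Za-z0-9_-]{8,}"),
    "Private key block" to Regex("-----BEGIN (RSA |EC )?PRIVATE KEY-----")
)

// Walk a directory of files extracted from an unpacked APK/IPA and report matches.
fun scanUnpackedApp(root: File): List<String> {
    val findings = mutableListOf<String>()
    root.walkTopDown()
        .filter { it.isFile && it.length() < 5_000_000 } // skip very large blobs in this sketch
        .forEach { file ->
            val text = file.readText(Charsets.ISO_8859_1) // tolerate binary content
            for ((name, pattern) in secretPatterns) {
                if (pattern.containsMatchIn(text)) {
                    findings += "$name in ${file.relativeTo(root)}"
                }
            }
        }
    return findings
}

fun main(args: Array<String>) {
    scanUnpackedApp(File(args.firstOrNull() ?: "unpacked-app")).forEach(::println)
}
```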

What the research shows

  • The Leaky Apps study confirmed 416 valid secrets across more than 10,000 apps, including 13 Git credentials that opened access to 2,440 private repositories.
  • The LLM-based SecretLoc approach went further, finding that 42 percent of the Android apps it analyzed contained at least one secret, many in formats no regex rule could match.

These findings highlight two realities:

  1. Attackers are already scanning public apps at scale.
  2. Developers need to scan their apps before attackers do.

Real-world perspective with AppSweep

At Guardsquare, we have run automated analyses of 5,480 public Android apps through AppSweep, our Android and iOS app analysis product, since January 2025, and found 164 keys flagged as potential secrets.

The dataset skews toward security-sensitive institutions, which rely heavily on AppSweep’s auto-analysis pipeline. The security stakes for these organizations are already high and protections are common, yet secrets still appeared. That mirrors the academic finding that even mature teams struggle to keep credentials out of release builds.

All the above findings highlight that scanning isn’t just a compliance checkbox. The same automation that powers attackers’ discovery tools is available to you, but only if you build it into your process. Treat scanning as a continuous security feedback loop, not a one-off code audit.

How to integrate application scanning

  • Make it part of CI/CD. Run a scan on every build, just like unit tests. A minimal sketch of such a build gate follows this list.
  • Use multiple detectors. Combine pattern-based scanners (Gitleaks, truffleHog, AppSweep) with AI-assisted tools as they mature.
  • Review false positives but don’t ignore them. It is worse to over-filter than to occasionally get noise.
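
As a sketch of the first point, the Gradle Kotlin DSL snippet below wires a scanner into every build. The task name and the use of Gitleaks’ “detect --source” interface are assumptions for illustration; substitute whichever scanner your team actually uses. The important property is that the build fails whenever the scanner reports findings.

```kotlin
// build.gradle.kts -- hypothetical "scanForSecrets" task; assumes a scanner CLI
// (here Gitleaks) is installed on the build machine and exits non-zero on findings.
tasks.register("scanForSecrets") {
    group = "verification"
    description = "Runs a secret scanner over the project and fails the build on findings."

    doLast {
        // A non-zero exit code from the scanner fails this task, and therefore the build.
        project.exec {
            commandLine("gitleaks", "detect", "--source", project.projectDir.absolutePath)
        }
    }
}

// Attach it to the verification lifecycle (the "check" task exists once the
// Android or Java plugin is applied) so every CI build runs it.
tasks.named("check") { dependsOn("scanForSecrets") }
```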

Scanning is about visibility. It ensures you know what’s in your app before the world does. But visibility alone doesn’t equal safety—some secrets can’t be removed. That’s where protection comes in.

Under secure architecture practices, most mobile apps should not embed secrets at all. When exceptional cases require a client-side key, the aim is not to rely on obfuscation as a hiding mechanism, but to limit the key’s privileges, lifespan, and extractability.

Why hiding isn’t enough

Once an app is downloaded on a device, it’s in an attacker’s hands. They can decompile, debug, hook APIs, or inspect memory. Simple tricks to hide the keys, like Base64 encoding or string splitting, tend to delay rather than truly stop anyone. The goal isn’t to make reverse-engineering impossible (because it isn’t), but to make it difficult, time-consuming, and ultimately unattractive.
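
A tiny Kotlin sketch shows how little these tricks buy. The “hidden” key below is a made-up placeholder, split in two and Base64-encoded the way apps sometimes do; one standard-library call recovers it, and an attacker reading the decompiled code can write the same two lines.

```kotlin
import java.util.Base64

// A made-up "protected" API key: split into parts and Base64-encoded.
private val parts = listOf("ZmFrZV9hcGlf", "a2V5XzEyMzQ1")

fun revealKey(): String {
    val encoded = parts.joinToString("")                 // undo the string splitting
    return String(Base64.getDecoder().decode(encoded))   // undo the Base64 "protection"
}

fun main() {
    println(revealKey()) // prints "fake_api_key_12345"
}
```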

One of the more sobering results from the SecretLoc paper was that its LLM easily decoded Base64-obfuscated keys. LLMs can identify secrets even when they don’t match known patterns. They detect intent and context, not just string shapes. This lets them uncover secrets that tools like LeakScope or traditional regex-driven scanners consistently miss.

While scanning reduces the number of unnecessary secrets in the binary, it does not eliminate the need to embed certain operational secrets. These embedded credentials remain attractive targets and require strong in-app defenses. In practice, this means applying multiple complementary layers of protection.

Layers of protection

  1. Code obfuscation. While obfuscation is not a substitute for proper secret management, it directly impacts how easily secrets can be extracted from a deployed app. Advanced protection tools such as DexGuard (Android) and iXGuard (iOS) apply code-flow obfuscation, string encryption, and class renaming to ensure that any secrets present, and the code paths that use them, cannot be trivially located by static or dynamic analysis.
  2. Secure storage. Tokens or encryption keys need to be stored in OS-level keystores (Android KeyStore, iOS Keychain), not plaintext constants; a minimal Android sketch follows this list.
  3. Runtime integrity checks. Obfuscation increases the difficulty of static extraction, but attackers frequently pivot to runtime analysis to capture secrets as they are loaded into memory. Runtime protections (RASP) detect debugging, hooking, and other forms of instrumentation, and can take action when such activity is observed. This significantly constrains the feasibility of straightforward runtime secret extraction.
  4. Backend controls. Limit what an embedded key can do: apply rate limits, IP restrictions, or tie keys to app signatures.
  5. Key rotation and monitoring. Assume secrets will leak eventually. Track their use and revoke them quickly.
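
For the secure-storage layer, here is a minimal Android sketch of the idea (API level 23+ assumed; the key alias and function names are invented for this example). The AES key is generated inside the Android KeyStore, so it can be used but never exported, and only ciphertext produced with it is written to app storage.

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyStore
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

// Fetch the app's token-encryption key from the Android KeyStore, creating it on first use.
// "app_token_key" is a hypothetical alias chosen for this sketch.
fun getOrCreateTokenKey(): SecretKey {
    val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    (keyStore.getKey("app_token_key", null) as? SecretKey)?.let { return it }

    val keyGen = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
    keyGen.init(
        KeyGenParameterSpec.Builder(
            "app_token_key",
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
        )
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .build()
    )
    return keyGen.generateKey() // key material never leaves the KeyStore
}

// Encrypt a backend-issued token before persisting it; store the IV alongside the ciphertext.
fun encryptToken(token: ByteArray): Pair<ByteArray, ByteArray> {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, getOrCreateTokenKey())
    return cipher.iv to cipher.doFinal(token)
}
```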

The role of perception and policy

Some developers assume iOS apps are inherently safer because IPAs are encrypted on the App Store. The Leaky Apps study proved otherwise: more iOS apps than Android ones contained live credentials. Neither security through obscurity nor security through Apple’s encryption is actual protection against this threat.

Protection is about buying time and increasing the attacker's cost. You cannot make reverse-engineering impossible, but you can make it unprofitable.

Scanning vs. protection isn’t a debate, it’s a duet

Teams sometimes present this as a binary choice:

“If we obfuscate the app, do we still need scanning?”
“If we scan everything, do we still need to protect it?”

This framing misses the point. These are not interchangeable strategies; they address different stages of mobile security, and you need them together to build secure apps.

Scanning identifies accidental issues: Hardcoded secrets that never should have been committed, leftover test files, configuration artifacts, or third-party SDK regressions. It makes sure the app is clean before release.

Architecture minimizes inherent risk: Ideally, most mobile apps should not ship with secrets at all. By pushing sensitive logic to the backend, using short-lived tokens, or relying on hardware-backed storage, teams can reduce or eliminate the need for client-side secrets. When good architecture removes the secret entirely, the attack surface shrinks dramatically.

Protection (obfuscation, encryption, RASP) serves a different purpose. It hardens the delivered app against reverse engineering and manipulation. Even when all secrets are handled correctly on the server, attackers can still target the app’s logic, API flows, or client-side checks. Protection ensures it is significantly harder to analyze, tamper with, or repurpose the app in ways that undermine backend security.

A useful way to understand the relationship is in terms of offense, design, and defense:

  • Offense: scanning finds vulnerabilities early and keeps mistakes from reaching production.
  • Design: strong architecture removes the need for secrets on-device and reduces what must be protected.
  • Defense: protection hardens the running app, ensuring that even if something does slip through, or if attackers target logic instead of secrets, it’s far more difficult to exploit.

Relying on scanning alone is insufficient. Even with scans on every build, developers may need to include a limited-scope key for a legitimate use case, or attackers may go after the app’s logic rather than its secrets. Likewise, relying only on protection is flawed. Obfuscation and runtime checks cannot compensate for fundamentally insecure architectural decisions.

The healthy pipeline does all three: it scans continuously to catch mistakes early, architects to avoid shipping secrets whenever possible, and protects the final binary to resist tampering and reverse engineering. These layers reinforce each other and create genuine defense-in-depth.

Lessons from the research

Both academic papers offer insights worth repeating for anyone managing mobile app security programs.

From Leaky Apps (University of Vienna, 2025):

  • The paper found 416 functional credentials across 65 services, including 13 Git credentials that granted access to 218 public and 2,440 private repositories.
  • Secrets aren’t limited to code—they lurk in configs, scripts, and even docs inside app bundles.
  • Developers often removed secrets in later versions but didn’t revoke them on the backend, leaving them exposed in older builds.
  • iOS apps leaked slightly more than Android apps, challenging assumptions about platform security.

From Evaluating LLMs in Detecting Secrets (Alecci et al., 2025):

  • LLM-powered scanners dramatically outperformed pattern-based tools, discovering new secret types like OpenAI API keys and JWT tokens.
  • 42% of newly analyzed Android apps contained at least one secret.
  • The same AI capability can be used by attackers to automate secret discovery across app stores.

Together, these studies make an uncomfortable but constructive point: the ecosystem leaks. Every app store contains hundreds of live secrets. Attackers don’t need zero-days; they can simply harvest misconfigurations.

For developers and engineering managers, the response shouldn’t be fear, but process: adopt continuous scanning and robust protection so your app isn’t one of the easy targets.

Practical best practices for teams

Below are nine practical recommendations distilled from research, industry experience, and AppSweep data.

  1. Integrate scanning early and often
    Add a secret-scan stage to every CI/CD pipeline. Treat failures like build breakers.
  2. Re-scan release artifacts
    Scan the compiled APK/IPA, not just source code. Secrets often slip in through third-party SDKs.
  3. Externalize configuration
    Keep secrets off the device when possible. Use backend-issued tokens or configuration services.
  4. Apply least privilege to all keys
    Narrow their scope, add quotas, restrict origins, and monitor usage anomalies.
  5. Obfuscate aggressively
    Use commercially available advanced obfuscation products that provide string encryption, resource hiding, and class renaming.
  6. Use the platform’s secure storage
    Android KeyStore and iOS Keychain exist for a reason. Let them hold your sensitive material.
  7. Enable runtime self-protection (RASP)
    Detect tampering, rooting, and debugging; a minimal environment check is sketched after this list. Consider terminating sessions or disabling functionality if the app environment looks hostile.
  8. Plan for rotation and revocation
    Keep an inventory of secrets per release. Practice revoking a key and pushing a patched build quickly.
  9. Educate and codify
    Train developers that “compiled” doesn’t mean “hidden.” Publish internal guidelines on secret handling and scanning procedures.
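
For point 7, the Kotlin sketch below shows the kind of environment check a RASP layer automates. The indicators (an attached debugger, common su binary paths, a test-keys build) are well-known heuristics and easy to bypass in isolation, which is why commercial RASP combines many detections with configurable response policies rather than a single boolean.

```kotlin
import android.os.Build
import android.os.Debug
import java.io.File

// Naive environment checks: illustrative heuristics, not a full RASP implementation.
fun environmentLooksHostile(): Boolean {
    val debuggerAttached = Debug.isDebuggerConnected()

    val suBinaryPresent = listOf("/system/bin/su", "/system/xbin/su", "/sbin/su")
        .any { File(it).exists() }

    val testKeysBuild = Build.TAGS?.contains("test-keys") == true

    return debuggerAttached || suBinaryPresent || testKeysBuild
}

// Example policy: degrade gracefully (e.g. refuse to load secrets, invalidate the
// session server-side) instead of simply crashing when the check trips.
fun onSensitiveAction(proceed: () -> Unit, degrade: () -> Unit) {
    if (environmentLooksHostile()) degrade() else proceed()
}
```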

When teams adopt these habits, scanning and protection stop being bolt-ons and become invisible hygiene, just part of shipping a secure app.

Why this matters for engineering managers and skeptics

Security teams often face skepticism from product or engineering leaders:

“Our app doesn’t handle sensitive data.”
“Nobody would bother reverse-engineering us.”

The research proves otherwise. Attackers don’t need your user data; they can exploit your backend through an exposed API key or abuse your paid services using a leaked token. The Leaky Apps team found that a single Git credential in a single app granted access to more than 1,000 private repositories.

Moreover, these issues aren’t limited to giant enterprises. The dataset included small startups and hobby apps—anyone can leak secrets, and anyone’s secrets can be valuable.

By embedding scanning and protection into normal development workflows, you’re not adding bureaucracy—you’re reducing future firefights. The cost of adding one scanning step is negligible compared to the disruption of an unplanned, emergency key rotation after a leak.

Final takeaway

Every mobile app carries a small universe of data inside it—code, assets, and sometimes secrets. Attackers know this and have automated ways to dig them out.

The 2025 academic research and our AppSweep findings both tell the same story:

  • Secrets are still leaking into production.
  • Basic obfuscation no longer fools modern scanners.
  • Teams that pair continuous scanning with secure-by-design architecture and strong app protection dramatically reduce their attack surface.

If you’re leading an engineering or security team, the path forward is clear:

  1. Detect what shouldn’t be there. Run scanners as part of your standard CI.
  2. Defend what must be there. Obfuscate, encrypt, and limit its blast radius.
  3. Monitor and rotate. Assume every secret is temporary.

TL;DR

  • Recent research on more than 10,000 Android and iOS apps found hundreds of live API keys and credentials hidden inside production builds.
  • Scanning and protection solve different problems. Scanning finds secrets that shouldn’t be there; protection defends the app’s logic and behavior against tampering. Each covers what the other cannot.
  • Guardsquare’s AppSweep automatically analyzed 5,480 Android apps (mostly financial) and detected 164 hardcoded keys, proof that even security-aware teams slip up.
  • Best practices: integrate proactive scanning into CI/CD, minimize what you embed, harden what remains, and plan for rotation and incident response.

