
As the global volume of mobile applications continues to grow, the attack surface they present becomes a broader and more lucrative target for threat actors. A recent Enterprise Strategy Group survey of mobile app developers and security professionals found that a typical organization released 13 unique mobile applications last year. Over the same period, organizations suffered an average of nine mobile app security incidents, at a cost of nearly $7 million per incident.
The report also revealed that nearly 40% of organizations rely only on default operating system (OS)-level protections or in-house security solutions to prevent incidents. Despite this, 93% of survey respondents believe their current mobile app protections are sufficient. Taken together, these findings suggest pervasive overconfidence in the mobile app security posture of most organizations.
Leaving mobile applications unprotected makes reverse engineering, unauthorized modding, and malicious tampering attacks much easier. A few of the industries where mobile apps are specifically targeted with reverse engineering and tampering attacks include:
- Financial Services – The “SeaFlower” campaign maliciously cloned both iOS and Android wallet apps to steal cryptocurrencies.
- Gaming – Clones of popular mobile games (like Minecraft) featuring cheat mods are common. Very often, unofficial game versions may also contain malware, as was the case with Hamster Kombat.
- Retail – Fake “impersonator” retail apps abound (from airlines to fast food), spreading malware or stealing credentials. A cloned retail app that functions like the real thing can be used to perpetrate many types of retail fraud, including unauthorized transactions and theft of loyalty program rewards points.
While the current state of mobile app exploitation is bad enough, another variable has the potential to make these kinds of attacks even easier for threat actors.
AI Enters the Chat
Today, AI is increasingly being used as a tool for mobile application and SDK development. Developers around the world are turning to GenAI tools to expedite processes and accelerate delivery cycles. Gartner predicts that by 2028, 90% of software engineers will use AI code assistants, up from less than 14% in early 2024. Microsoft and Google executives have both recently estimated that around 30% of their code is already AI-generated.
Despite AI’s ever-widening acceptance, there are some well-documented security issues associated with these tools, both in terms of code quality and responsible usage. For example:
- Stanford researchers found that seasoned app developers using AI tools were 80% more likely to produce less secure code than equally experienced devs coding the old-fashioned way. Even worse, those same AI-assisted developers were 3.5 times more likely to think their code was actually secure.
- LLM package hallucinations are another common issue. One university study of 500,000+ LLM-generated code samples found that almost 20% of suggested packages didn’t actually exist.
- Another recent study showed that as AI-generated code becomes more mainstream, 81% of organizations knowingly ship vulnerable code.
- About one-third of developers admit that they don’t review AI-generated code before every deployment, despite most believing that AI will exacerbate open-source malware threats.
From vibecoding to vibe hacking
Practically any tool developed for productive use cases can be abused for selfish or malicious purposes. Probably the most well-known examples of AI-assisted attacks to date have been against banking apps. Financial institutions have seen an onslaught of deepfake-enabled fraud attacks targeting “know your customer” (KYC) systems. Fraudsters combine AI-powered face-swapping with virtual-camera tools to spoof liveness-detection controls and other security features. While we’re just at the beginning of the timeline for AI-enabled malfeasance, new cases seem to pop up every week.
“Vibecoding” opened a pathway to another type of AI-related threat. For the unfamiliar, the term refers to developers describing a desired mobile app outcome in informal, natural-language GenAI prompts; the tool then generates code intended to fulfill the spirit of the vibed request.
Vibecoding started gaining popularity in February 2025. By April, it had achieved buzzword status at RSAC 2025, followed almost immediately by a flurry of alarmist think pieces about the imminent threat posed by its evil twin: “vibe hacking.” The theory was that a bad actor could apply the same intuitive prompting techniques to develop advanced malware variants or to refine techniques for a sophisticated attack. As of July 2025, some experts were confident that a true vibe hack was still not possible. But things can change very quickly in the 21st century.
In August 2025, Anthropic’s most recent Threat Intelligence Report described attacks in which its “Claude” family of AI assistants was misused, including a large-scale vibe hacking extortion operation built on Claude Code, a fraudulent North Korean employment scheme, and the sale of AI-generated ransomware by a cybercriminal with only basic coding skills.
According to Anthropic’s report, “Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate and how to craft psychologically targeted extortion demands. Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts and generated visually alarming ransom notes that were displayed on victim machines.”
While the lifecycle from novelty buzzword to verified nation-state threat only took about six months in these instances, it’s still fair to question how much of a threat AI-based tools actually pose to everyday mobile applications.
Is AI-assisted reverse engineering and tampering even a real thing?
Are attackers using AI against your mobile apps? Unless a credible threat actor voluntarily confesses the intimate details of their process, we may not know for certain anytime soon. But if software engineers and developers are widely using GenAI copilots and similar tools to build software, it’s fair to assume that adversaries are using the same class of tools, in the same ways, to exploit it. Reverse engineering and tampering threats were a problem for mobile app developers long before the general availability of ChatGPT. Today, there are realistic scenarios in which AI coding assistants and LLMs make these attacks faster and easier.
Reverse engineering: Code is hard to read. If a mobile app is left unprotected, a threat actor can decompile it, feed the output to an LLM, and ask “what is this code doing?” The model then annotates and translates the decompiled code into plainer, more human-readable terms. Some semantic information is typically lost in that translation, so these tools are most useful, and most dangerous, in the hands of a skilled attacker with existing experience and knowledge. The approach works best on small samples of unobfuscated or poorly obfuscated code. With this in mind, apps need strong, multi-layered protection that combines several forms of code obfuscation.
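To see why, consider the kind of cues an LLM leans on. The snippet below is a hypothetical, self-contained Kotlin illustration (every name and the toy XOR “encryption” are invented for this sketch): the first version reads almost like documentation, while the version with name obfuscation and string encryption strips away most of the semantic hints that an attacker, or a model, would use to orient itself.

```kotlin
// Hypothetical, simplified illustration; all names and the XOR "encryption"
// are invented for this sketch and do not come from any real app or product.

// What a decompiler might recover from an UNPROTECTED app: meaningful
// identifiers and string literals tell an LLM (or a human) exactly what
// the code does and where the sensitive logic lives.
fun unlockPremiumFeatures(licenseStatus: String): Boolean {
    return licenseStatus == "PREMIUM_ACTIVE"
}

// Roughly the same logic after name obfuscation and string encryption:
// the structure survives, but the semantic cues that make LLM-assisted
// analysis cheap are gone. The status string exists only at runtime,
// after decryption.
private val k = byteArrayOf(
    0x4B, 0x49, 0x5E, 0x56, 0x52, 0x4E, 0x56,
    0x44, 0x5A, 0x58, 0x4F, 0x52, 0x4D, 0x5E
)

// Toy stand-in for a runtime string-decryption routine.
private fun d(b: ByteArray): String =
    b.map { (it.toInt() xor 0x1B).toChar() }.joinToString("")

fun a(s: String): Boolean = s == d(k)

fun main() {
    check(unlockPremiumFeatures("PREMIUM_ACTIVE") == a("PREMIUM_ACTIVE"))
    println("Both versions behave identically; only readability differs.")
}
```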
Tampering: Once an app is reverse engineered, an attacker could use an AI assistant to generate hooks and scripts for a dynamic code instrumentation tool (like Frida), accelerating malicious code modification. AI is an accelerator; it helps skilled people go faster. The most reasonable threat scenario here closely mirrors how these tools are used in “normal” coding workflows.
A question of protection
Bottom line: it doesn’t matter whether your adversaries are using the latest AI tools or relying on old-fashioned manual methods to rip and run with your code; the corrective best practices remain the same. Don’t leave targeted mobile apps unprotected.
Proactive protection against reverse engineering, modding, and tampering requires a purpose-built, multi-layered approach to mobile app security:
Multi-layered code hardening defends against malicious static analysis attempts and helps prevent reverse engineering or extraction of secrets and sensitive information. These protections hide sensitive code related to authentication, transactions, and in-app purchases to protect user privacy. An effective solution keeps logic hidden by applying multiple techniques, such as control flow obfuscation, code virtualization, name obfuscation, and data encryption.
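As a toy illustration of one of these layers, the sketch below shows a drastically simplified form of control flow obfuscation (the function names and the trivial discount logic are invented for this example); commercial tools apply flattening far more aggressively and layer it with the other techniques listed above.

```kotlin
// Straightforward, readable control flow.
fun discount(total: Double): Double =
    if (total > 100.0) total * 0.9 else total

// The same logic after simple control-flow flattening: the branch
// structure is replaced by a state machine, so the original decision
// tree no longer appears directly in the decompiled output.
fun discountFlattened(total: Double): Double {
    var state = 0
    var result = total
    while (true) {
        when (state) {
            0 -> state = if (total > 100.0) 1 else 2
            1 -> { result = total * 0.9; state = 2 }
            2 -> return result
        }
    }
}

fun main() {
    // Behavior is identical; only the shape of the code changes.
    check(discount(150.0) == discountFlattened(150.0))
    check(discount(50.0) == discountFlattened(50.0))
}
```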
Almost 70% of organizations are not currently using obfuscation, leaving their apps vulnerable to static analysis.
Runtime application self-protection (RASP) capabilities monitor a mobile app’s behavior in real time to detect tampering, modding, or jailbreak attempts. Anti-debugging capabilities can respond to an attack by automatically terminating the app or restricting its functionality. Surprisingly, 60% of organizations don’t have RASP enabled, leaving their apps open to dynamic analysis.
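To make the idea concrete, here is a minimal sketch of a single anti-debugging check on Android using the platform’s android.os.Debug API (terminating the process is just one possible response); a production RASP solution layers many such detections, hides them with obfuscation, and corroborates them server-side.

```kotlin
import android.os.Debug
import kotlin.system.exitProcess

// Minimal illustration of one runtime self-protection check:
// if a debugger is attached, stop rather than continue running
// sensitive logic. Real RASP combines many detections (root/jailbreak,
// hooking frameworks, emulators, repackaging) and protects the checks
// themselves from being patched out.
fun failIfDebuggerAttached() {
    if (Debug.isDebuggerConnected() || Debug.waitingForDebugger()) {
        // Alternatives: degrade functionality or report silently
        // to a threat-monitoring backend instead of exiting.
        exitProcess(1)
    }
}
```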
Application attestation is another kind of runtime protection that prevents API abuse by verifying the authenticity of any frontend applications in the wild that want to interact with backend services. This ensures the app is authentic, unmodified, and running on a secure device. New research shows that API threats surged in the first half of 2025.
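As one example of the client-side half of attestation, the sketch below requests a signed verdict token with Google’s Play Integrity API; the nonce handling, error handling, and the backend verification step (which is where the real enforcement happens) are deliberately simplified here.

```kotlin
import android.content.Context
import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

// Simplified sketch of client-side attestation with the Play Integrity API.
// In a real deployment the nonce comes from your backend, and the backend
// (never the client) decodes and verifies the returned token before the
// API request it accompanies is honored.
fun requestIntegrityToken(
    context: Context,
    serverNonce: String,
    onToken: (String) -> Unit
) {
    val integrityManager = IntegrityManagerFactory.create(context.applicationContext)
    integrityManager
        .requestIntegrityToken(
            IntegrityTokenRequest.builder()
                .setNonce(serverNonce) // binds the verdict to this specific request
                .build()
        )
        .addOnSuccessListener { response ->
            // Attach the token to the protected API call (e.g., as a header);
            // the backend verifies it before responding.
            onToken(response.token())
        }
}
```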
In addition, threat monitoring can provide real-time visibility that helps developers track emerging threats, stay ahead of malicious actors, and limit the impact of attacks by stopping them at the outset. A fully featured threat monitoring solution will also provide real-world validation of the information gathered by RASP checks to reinforce protection.
A comprehensive approach to mobile app security should also include purpose-built mobile application security testing (MAST) capabilities that are integrated and continuously referenced throughout the design, development, and testing phases to verify secure coding best practices in accordance with OWASP standards. This is especially critical if the developers themselves are using some kind of intelligent coding assistant. NYU research shows exploitable security vulnerabilities in up to 40% of the code generated by AI co-pilots.
The security story: same as it ever was
AI-based coding tools make certain tasks faster and easier – for everyone. Any advantages offered by GenAI and LLMs to legitimate developers must reasonably be assumed to offer an equal and opposite advantage to malicious actors. While these tools may also lower the barrier to entry for baby hackers, emerging fraudsters, and novice game cheats to explore and educate themselves through experimentation, the most viable risks come from making it faster and easier for people with skills and experience to exploit your mobile application code.
These tools are only going to get better and faster over time. So don’t leave vulnerable mobile applications unprotected. Ensure comprehensive security across the mobile SDLC.
Jason Cortlund is a Mobile App Security Evangelist at Guardsquare.



