Navigating Security Concerns, Preparedness, and Adoption Trends in the GenAI Era

By: Michael Callahan, cybersecurity specialist at Salt Security

The most recent State of API Security Report was expanded to encompass one of the world's biggest current talking points: Artificial Intelligence (AI). Security leaders were asked pressing questions about GenAI, providing valuable insight into how different industries perceive the security risks associated with GenAI, their confidence in detecting and responding to AI-driven attacks, and their current level of GenAI adoption in Application Programming Interface (API) development. The report found mounting enthusiasm for AI-driven innovation coexisting with legitimate security concerns and varying levels of readiness to address emerging threats.

The Growing Concern Over Generative AI and Security Risks

Generative AI has captured the attention of organizations across multiple industries, but with its rise comes a new wave of security concerns. The survey results show that perception of risk varies widely by industry. Retail (59%) and Financial Services & Insurance (52%) lead the pack in recognizing GenAI as a growing security risk. Given that these industries handle sensitive customer data and financial transactions, it's no surprise that organizations in these sectors are more attuned to potential AI-driven threats such as fraud, deepfake manipulation, and automated cyberattacks.
Technology companies are also cautious, with 41% of respondents acknowledging GenAI as a security concern. While this sector is at the forefront of AI advancements, its practitioners are likely well aware of the vulnerabilities that AI can introduce.
On the other end of the spectrum, some industries do not yet see GenAI as a significant risk; in Education, 57% of respondents took this view. This could stem from a lack of direct AI applications in critical security areas or a limited understanding of potential threats. It is also possible that the sector views AI more as an opportunity to assist with learning or to provide useful teaching resources. However, as AI-generated content and deepfake technology continue to evolve, educational institutions may find themselves reassessing this stance in the near future.

Are Organizations Ready to Address AI-Driven Cyber Threats?

The survey results indicate that confidence levels vary significantly across industries when it comes to how prepared organizations are to detect and respond to attacks leveraging GenAI.
The technology sector, with 26% of respondents feeling “very confident” and 52% “somewhat confident,” appears relatively prepared. Given their familiarity with AI and access to cutting-edge security technologies, this sector is expected to lead the way in developing robust countermeasures against AI-driven attacks.
Government agencies and educational institutions, however, tell a different story. While 80% of government respondents claimed to be “somewhat confident,” not a single respondent expressed strong confidence. Similarly, in education, 71% were “somewhat confident,” but none felt “very confident.” This suggests that while these sectors recognize the risks, they may lack the necessary expertise, resources, or infrastructure to fully address them.
The healthcare sector presents a particularly concerning picture, with half of respondents stating they are “not very confident” in their ability to respond to AI-driven threats. Given the increasing reliance on AI for diagnostics, patient data management, and medical research, this gap in preparedness could have serious implications. Healthcare organizations must prioritize investment in AI security measures, ensuring that sensitive medical data is protected from potential manipulation or exploitation.

Who’s Leading the Way in Generative AI in API Development?

Beyond security concerns, the survey explored how organizations are utilizing GenAI for API development. While some industries are already leveraging GenAI to streamline development processes, others remain hesitant or are in the early stages of adoption.
Industries such as Energy & Utilities and Entertainment & Media report the highest adoption rates for using GenAI in all API development efforts. This suggests a strong belief in AI’s ability to enhance efficiency and innovation within these fields.
Meanwhile, the technology sector, often seen as the driving force behind AI innovation, shows significant but more measured adoption, with 48% using GenAI for some API development rather than all.
Other industries, such as Healthcare (39%) and Financial Services & Insurance (31%), show a clear intention to adopt GenAI within the next 6–12 months. Their cautious approach may stem from regulatory considerations, security risks, or the complexity of integrating AI-driven solutions into legacy systems. However, their willingness to embrace GenAI in the near future signals a broader trend toward AI-driven automation across sectors.
Interestingly, some industries remain resistant to adopting GenAI for API development. In Education (36%) and Manufacturing (25%), significant portions of respondents indicated that they do not plan to use GenAI in API development at all. This reluctance could stem from a lack of immediate applicability, limited technical expertise, or concerns about the reliability and security of AI-generated code.

What These Findings Mean for the Future of AI Adoption

The survey results highlight both the opportunities and challenges associated with the rise of Generative AI. While certain industries are embracing AI-driven solutions with enthusiasm, others remain cautious, either due to security concerns, regulatory constraints, or uncertainties about AI’s long-term impact.
For industries that already recognize GenAI as a security risk, proactive measures must be taken to mitigate potential threats. Organizations should invest in security solutions that can detect and respond to AI-generated attacks in real time. This includes deploying advanced anomaly detection systems, enhancing AI auditing capabilities, establishing strong governance, and fostering a security-first culture among employees.
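To make the anomaly-detection recommendation concrete, the core idea can be sketched as a rolling statistical baseline over API traffic that flags sudden deviations. The sketch below is purely illustrative and not drawn from the report or any particular vendor's product; the window size, warm-up length, and z-score threshold are arbitrary assumptions, and production systems use far richer behavioral baselines than per-minute request counts.

```python
from collections import deque

def make_rate_monitor(window=60, warmup=10, threshold=3.0):
    """Return a checker that flags anomalous per-minute API request
    counts using a rolling z-score over the last `window` samples.
    Illustrative sketch only; parameters are arbitrary assumptions."""
    history = deque(maxlen=window)

    def check(count):
        anomalous = False
        if len(history) >= warmup:  # require a minimal baseline first
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = var ** 0.5
            # Flag counts more than `threshold` standard deviations
            # away from the recent baseline.
            anomalous = std > 0 and abs(count - mean) / std > threshold
        history.append(count)
        return anomalous

    return check
```

A steady stream of roughly 100 requests per minute would build the baseline, after which an abrupt spike (say, an automated attack driving traffic into the thousands) would exceed the z-score threshold and be flagged.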
For those industries that lack confidence in their ability to address AI-driven threats, collaboration and knowledge-sharing will be key. Governments, healthcare providers, and educational institutions must work together with AI researchers and cybersecurity experts to build comprehensive security frameworks that address the unique challenges posed by AI-powered cyberattacks.
Meanwhile, organizations that are hesitant about integrating GenAI into API development should consider piloting small-scale AI projects to assess feasibility and benefits. By starting with controlled, low-risk implementations, companies can gradually scale up their use of AI while ensuring that proper safeguards are in place.

Balancing Innovation with Security in the AI Era

As Generative AI continues to evolve, organizations across all industries will find themselves navigating the fine line between innovation and security. While AI offers transformative potential in areas like API development, customer service automation, and data analysis, it also introduces new risks that cannot be ignored.
The survey findings indicate that while many organizations acknowledge these risks, confidence in AI security preparedness remains uneven. Moving forward, businesses and government institutions must take a proactive approach that includes investing in AI-driven security tools, fostering AI literacy among employees, and establishing regulatory guidelines to ensure responsible AI adoption.
The future of AI is undoubtedly promising, but success will depend on how well organizations adapt to its challenges. By balancing enthusiasm for AI-driven innovation with a strong commitment to security, industries can harness the power of Generative AI while safeguarding their digital ecosystems against emerging threats.
