
People want to be first – first to test, adopt, and play with the newest tools and technologies. Look at the queues outside a flagship store on launch day for the latest iPhone and it’s clear there’s always demand for the most exciting shiny new toy. While that appetite for novelty is deeply human, in a business setting it can create serious security and operational risks if the fundamentals aren’t in place.
The solution isn’t to block experimentation, but to channel it. Organisations can give employees and IT teams the freedom to explore emerging technologies without putting the company at major risk. The key is to be thoughtful and considered about use cases and to implement core security practices from the outset.
Building a Secure Foundation for AI
As AI matures and new platforms, solutions, and iterations emerge (on a near-hourly basis), organisations are still grappling with what AI use cases look like in practice.
A recent survey from Gigamon found that mass AI adoption is causing businesses to overlook the essentials, often inadvertently providing adversaries with new opportunities to strike. The study found that 91% of organisations make risky security compromises due to AI. And while AI investment surged by 62% in 2024, the WEF Global Cybersecurity Outlook Report found only 37% of companies had a process in place to assess the security of AI tools before implementation.
If organisations want their AI investments to scale, they need to prioritise security just as highly. AI deployments usually start with the best intentions. Automating tasks, analysing threat patterns, and enhancing user experiences are just a few of the benefits AI can offer internal IT and security teams. However, without proper guardrails or controls in place, these systems can introduce new risks. Integrating AI into an IT ecosystem with unpatched systems, weak access controls, a lack of training, or poor visibility is like building a skyscraper on sand.
To build on more stable ground, organisations have to prioritise thoughtful AI adoption. This means thoroughly vetting and testing new AI-enabled solutions before integrating them into operational workflows, establishing clear, robust cybersecurity frameworks to underpin AI integration, and putting foundational security practices in place before deploying new AI tools. Comprehensive data management strategies, regular vulnerability assessments, automated backups, strict access controls, and effective endpoint management are just a few components that go into laying a firm security foundation – and they’re essential for trying out the latest shiny new toys.
Managing Diverse Workstyles and Devices
Personalisation is the name of the game for the new digital employee experience. Today’s employees expect to be able to work from anywhere across a wide variety of devices (some folks do their best work on MacBooks, while others prefer Windows devices), and they expect to have a wide variety of tools and solutions at their disposal to make doing that work as easy as possible.
But supporting and securing employees across a diverse range of locations, devices, and applications is a tall order for IT and security teams. Employees, often eager to play with the latest tools, are infamous for bypassing security protocols to access and install their favourite new app – often failing to recognise that they’re introducing a wide swathe of new (and potentially unknown) risks to their organisation. In fact, recent research from Software AG found that half of all employees are using non-company-issued AI tools.
This is particularly true in environments where endpoints aren’t being effectively managed: 90% of successful cyberattacks start at the endpoint. And users experimenting with unauthorised AI tools (think: generative assistants) without proper IT or security oversight run the risk of leaking sensitive data or creating unintended openings for malicious actors. This can expose organisations to additional risk, regulatory fines, and real reputational damage.
While AI represents a generational opportunity for organisations and individuals alike, its promise can only be realised if organisations manage the technology thoughtfully and securely. Organisations need to set parameters with employees around internal AI use and roll out awareness training and education programmes around any emerging technology, so that their teams understand the expectations and risks that come with new tools.
Preparing for a Secure and Sustainable Future
It falls to IT and security teams working together to provide employees with the tools they need to work effectively, without exposing the organisation to unnecessary risk. It’s important for organisations to assess every major technology investment carefully, rather than rushing to adopt the latest solution without fully understanding its implications. That’s why a strong cybersecurity foundation, reinforced by continuous employee awareness programmes, is essential. It allows businesses to embrace innovation intentionally, while staying ahead of emerging threats.
The future of secure innovation and sustainable growth relies on matching enthusiasm for new technology with disciplined IT and security practices. New tools will continue to pop up constantly, and businesses that invest in their ability to monitor and control these tools will be ready to adopt and experiment on their own terms. With the right foundation in place, IT and security leaders can unlock new opportunities, deliver a stronger employee experience, and maintain the visibility and control they need to stay one step ahead.
