
AI as the investigator’s ally: Why AI should amplify human judgement in OSINT investigations

By Stuart Clarke, CEO of Blackdot Solutions

Open source intelligence (OSINT) has become a go-to method of gathering intel for organisations investigating crime or adhering to compliance requirements.  

First developed as a military intelligence discipline, OSINT has spread across the public and private sectors as the breadth and value of open source data have grown: law enforcement agencies, financial institutions and corporations all need techniques like OSINT to deal with a rising range of threats.

But what exactly is OSINT? And why is there an increasing need for it?  

The growing demand for OSINT and AI  

In short, OSINT is the targeted collection and analysis of publicly available data to produce actionable insights. This data can be gathered from any publicly available source, ranging from news publications, public social media and corporate registries such as Companies House to harder-to-reach locations like the dark web.

Crucially, whether investigators are looking into organised crime networks or identifying fraudsters, OSINT enables them to gain valuable context and reveal insights that are not always apparent in internal or privileged data. They can shine a light on hidden risks and uncover connections between companies and people. For example, they might discover a person of interest is connected to an entity engaging in money laundering.  

The evolution of the internet and technology means it has never been easier for criminals to mask themselves behind fake identities, and information has never been scattered across so many sources. In this data-saturated world, AI is becoming indispensable for modern OSINT investigations, especially as criminals can use the technology themselves to create synthetic identities and spread fake information.

But there is an important caveat. AI only works when it is used to augment, not automate, the investigator’s role. 

AI as a data analyst 

Scouring the web and finding information is AI’s bread and butter. Where it thrives is in delivering major efficiency gains, significantly reducing the need for investigators to manually collect and sift through data. 

Teams can use AI to build automated workflows that collect and process vast amounts of open source data and then summarise it in a digestible format. In particular, AI can transform processes such as collating corporate records or mapping connections between individuals, which are typically laborious, time-consuming and error-prone.
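
To make this concrete, the following is a minimal sketch in Python of what such a workflow might look like. It is purely illustrative: the sources, entity names and keyword-matching logic are hypothetical stand-ins rather than any particular product’s pipeline, and a real deployment would plug in proper source connectors and language models instead of simple string matching.

from collections import Counter
from dataclasses import dataclass


@dataclass
class Record:
    source: str   # e.g. "news", "corporate_registry", "social_media"
    text: str


def collect(sources: dict) -> list:
    # Stand-in for the automated collection step: a real workflow would call
    # source-specific connectors or APIs here rather than read canned data.
    return [Record(name, doc) for name, docs in sources.items() for doc in docs]


def summarise(records: list, entities: list) -> dict:
    # Produce a digest: how often each entity of interest appears, per source,
    # so the investigator reviews a summary instead of every raw document.
    digest = {entity: Counter() for entity in entities}
    for record in records:
        for entity in entities:
            if entity.lower() in record.text.lower():
                digest[entity][record.source] += 1
    return digest


if __name__ == "__main__":
    sources = {
        "news": ["Acme Ltd named in money laundering probe"],
        "corporate_registry": ["Acme Ltd director appointed: J. Doe",
                               "J. Doe resigns as director of Acme Ltd"],
    }
    print(summarise(collect(sources), entities=["Acme Ltd", "J. Doe"]))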

With repetitive tasks automated, investigators can spend more time analysing the information and insights presented to them, and credible intelligence can be shared seamlessly across the organisation. AI tools could even enhance the quality of analytical insights, which in turn could improve investigation outcomes. Ultimately, if teams can spend more time analysing and evaluating data, they can draw sharper actionable insights that drive better decisions.

However, what if organisations decide that AI can do the job for them? The idea might be tempting for some. But if AI is relied upon too heavily, it will generate far more risk than it mitigates.

AI as an ally, not the lead detective 

Critical thinking, accuracy and applied ethics are the very reasons we need a human investigator, and they are precisely the things AI is not necessarily good at. If cases were run entirely by AI, we could end up with prospective hires wrongly flagged as risks or criminal proceedings brought against innocent people.

This is because, while AI can collate and analyse information very well, it falls short in making judgements. There are plenty of examples in the news of AI models ‘hallucinating’, reporting incorrect information as fact or reflecting biases present in their training data; the false news headlines generated by Apple’s AI summaries were one such instance. For OSINT, this becomes a high-risk problem.

As outlined previously, AI’s benefits lie in aiding human investigators in their work, not replacing them entirely. Humans still need to review AI-generated insights, both to confirm the models are working as they should and for quality assurance.

Ultimately, AI itself won’t suffer the consequences of breaching laws and regulations, but humans, even if they do so unknowingly, will. That’s why a level of human oversight is always required to ensure the process is accurate and that insights are acted upon effectively and compliantly.

OSINT is a human and AI activity 

AI and humans are truly complementary in OSINT investigations. AI can drive efficiency and surface key information that human investigators might miss, but humans remain integral, applying their expertise, experience and judgement to the data.

The optimum model leaves decision making in the hands of humans but uses AI to augment OSINT: AI tools can automate data collection, then analyse and present relevant insights to investigators in a clear, visual way. Consequently, users remain in control, workflows become more efficient and teams can better tackle the vast and varied nature of open source data.
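
As a rough illustration of that division of labour (using hypothetical field names rather than any specific tool’s data model), the sketch below treats the AI output purely as a suggestion: the final decision is only ever recorded by the human analyst.

from dataclasses import dataclass


@dataclass
class Finding:
    subject: str
    ai_assessment: str     # what the model surfaced
    ai_confidence: float   # the model's own score, treated as a hint only


def investigator_review(finding: Finding, analyst_decision: str) -> dict:
    # The AI output is kept for the audit trail, but the outcome is whatever
    # the human decides; nothing is actioned automatically on the model's say-so.
    return {
        "subject": finding.subject,
        "ai_assessment": finding.ai_assessment,
        "ai_confidence": finding.ai_confidence,
        "final_decision": analyst_decision,   # always set by a person
        "decided_by": "human_analyst",
    }


if __name__ == "__main__":
    flagged = Finding("Acme Ltd", "possible link to a sanctioned entity", 0.62)
    # The analyst weighs the evidence and records their own conclusion.
    print(investigator_review(flagged, analyst_decision="escalate for enhanced due diligence"))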

Above all, from streamlining repetitive tasks to surfacing credible information at scale, AI must be deployed to amplify human decision making, protect operational security and help investigators focus on what truly matters: context, critical thinking and credible conclusions. 
