
Power with principles: why AI needs ethics to survive

By Mats Thulin, Director of AI and Video Analytics, Axis Communications 

Axis Communications’ Director of AI & Analytics Solutions, Mats Thulin, explores how AI is changing video surveillance – and why ethical design is now mission critical.

Artificial Intelligence has moved through the hype cycle. Its initially inflated expectations were followed by inevitable disillusionment – but it has now proven itself ready to deliver. AI is transforming the video surveillance industry. The analytics, automations and insights made possible by hardware upgrades and smart software give surveillance an edge it has never had before. AI delivers real advantages, but it also magnifies longstanding privacy concerns in a high-stakes industry.  

As its deployment has surged, AI’s utility has grown – yet without direction and understanding, the perceived risks could outweigh its benefits. AI deployment processes and best practices are still evolving, causing end users to worry: 61% of those surveyed across the channel and the end-user market highlight cybersecurity, risk and privacy as their industry’s most significant trends. So how can the world deploy and maintain surveillance AI in a responsible and meaningful way?

Building in ethics 

Primarily, the industry must prioritise ethics. Ethical AI principles must be embedded from the design stage, woven together with fairness, transparency, and accountability. AI needs human oversight and explainability to maintain full compliance and accountability. AI is a tool to make us sharper and more efficient: it is not a replacement for humanity. 

Facial recognition is a key example. It can enhance watch lists, reduce crimes like shoplifting, and protect workers. But if its algorithms don’t align with privacy regulations and aren’t deployed with full transparency, they risk reputational damage for both the technology and its users.

Bias mitigation is equally critical. AI trained on limited datasets can reinforce stereotypes and generate false positives, so models must be rigorously tested to ensure they perform fairly. And trust, ultimately, is the currency of AI adoption. Privacy and compliance are non-negotiable – top-to-bottom ethical innovation is the only way to ensure public trust and acceptance.

Smarter cameras, broader horizons 

Surveillance now supports diverse business functions far beyond its roots in crime prevention. The camera is a powerful digital sensor, a computer and, with AI, it is a source of curated intelligence which can align with other data sources.   

In a smart city, video analytics can combine with environmental data to manage traffic flow and cut congestion. In retail, AI analysis of customer movement helps improve store layouts and staffing arrangements. In manufacturing, aligning camera and machine sensor data highlights production issues, ensures safety compliance, and improves operational workflows.

Surveillance systems now support strategic decision making far beyond their original scope. But this technology shift requires a mindset shift: embracing data intelligence as a force for good, guided by ethical guardrails. More data means more responsibility – it is up to stakeholders to maintain privacy and civil liberties even as use cases expand.

A smarter surveillance stack 

Security’s architecture has evolved along with its move to a general intelligence role. The client/server model has been joined by a hybrid model, using edge processing and the scalability of the cloud to enable real-time response and efficient data management. Placing AI on the network edge cuts latency; the cloud allows for storage, distributed analytics and integration with other enterprise systems. Designing systems around a hybrid architecture means improved reliability, scalability, and cost efficiency. 

The hybrid model also requires additional ethical vigilance. A hybrid system must be developed and tuned to stay within regulatory and moral boundaries. Exploiting cloud infrastructure, often managed by third parties, introduces new cyber and data risks to manage. Mitigating these risks starts with good system design: securing – and, where appropriate, anonymising – data at the point of collection, and taking a system-level approach to security.

Ethics is a team sport 

Ethical AI is not just the concern of those developing or deploying AI engines. It is a unifier which applies to everyone, from technology vendors and system integrators to regulators and end users. We must all play our role in getting AI right. We do this by collaboratively developing standards which prevent fragmented and inconsistent integration, and through strong cybersecurity principles which protect against corrupted AI data and the rise of AI-driven malware.

Regulation is catching up – governments globally are establishing boundaries around high-risk AI use, making proactive compliance essential. And cross-discipline collaboration is just as vital: engineers need input from ethicists, designers should learn from regulators and civil activists, vendors must educate users. We need to talk, freely and openly, about both the benefits and pitfalls of AI.  

The transformation starts now 

We may feel that we’ve already gone through a major change, but the truth is that the real transformation is just beginning. AI could drive technological change and societal progress if – and only if – society implements it thoughtfully. Getting it right means greater efficiency, improved safety, and more informed decisions. Get it wrong and every industry, not just security, risks eroding public trust and violating fundamental human rights.   

AI in surveillance requires us to find and achieve a balance between security and freedom, to build intelligence while maintaining integrity. The question is no longer what AI can do – it’s what it should do to earn its place in a smarter, safer world. 
