
Robots have already been granted a license to kill. The real question is where society draws the line.
Nearly a decade after Dallas police used a bomb-disposal robot packed with explosives to end a deadly standoff, U.S. cities and the U.N. are still struggling with how much power and autonomy machines should get (especially when lives are at stake). At the same time, unarmed patrol robots are quietly spreading into everyday public spaces.
A line crossed in Dallas, a line redrawn in San Francisco
In July 2016, Dallas officers ended a siege by sending a bomb-disposal robot carrying about a pound of C-4 toward gunman Micah Johnson. They detonated it, killing him. It was the first recorded case of U.S. police using a robot to deliver lethal force. Chief David Brown defended the move: “I approved it… and I’ll do it again if presented with the same circumstances.”
Six years later, San Francisco went right up to that same line, and then stepped back. On November 29, 2022, the city’s Board of Supervisors voted 8-3 to let police use teleoperated robots to deliver deadly force in “extreme” cases. The public response was swift and angry. Within a week, the Board reversed course, sent the policy back to committee, and temporarily banned lethal use of robots.
That quick U-turn showed how fragile public support becomes when robotics meets state violence.
The back-and-forth was possible because of California’s AB 481, a law that forces police departments to publicly spell out how they acquire and use “military equipment,” including robots that are armed or could be armed. Civil-liberties groups said SFPD’s first policy was far too broad. The fight over that document became an early model for how local governments might oversee police robots across the state.
“We just stopped the use of killer robots in SF… Common sense prevailed,” Supervisor Hillary Ronen said after the reversal.
Battlefield autonomy is sprinting ahead
While U.S. city councils hesitate, deployment of autonomous weapons in war zones is scaling largely unchecked.
A 2021 U.N. Panel of Experts report on Libya described Turkish-made Kargu-2 loitering munitions that “hunted down and remotely engaged” retreating fighters and were “programmed to attack targets without requiring data connectivity… ‘fire, forget and find’.”
Even though experts still argue about how fully autonomous the systems were, the report marked a disturbing milestone: weapons that can search for and attack people with very little human input.
Diplomatic pressure is growing in response. In 2023, the U.N. General Assembly passed its first resolution on autonomous weapons. In 2025, the U.N. Secretary-General repeated the message that “machines that have the power and discretion to take human lives… are politically unacceptable [and] morally repugnant” and should be banned.
Groups like Human Rights Watch and the Stop Killer Robots coalition are still pushing for a binding treaty that would require “meaningful human control” over any weapon that can kill.
What “meaningful human control” means for policing
Turning high-level ideas like “human in the loop” into rules for city policing is much harder in practice.
On one hand, teleoperated robots can keep officers away from gunfire, explosives, and other direct danger. On the other, distance can mean losing context (you see the scene through a camera, not your own eyes), and it can normalize extreme tactics over time.
San Francisco’s debate showed this tension clearly. So did earlier corporate experiments, like Axon’s 2022 idea to use Taser-equipped drones for school safety. That proposal triggered the resignation of most of the company’s AI ethics board and forced Axon to pause the plan.
Robot makers are now trying to set their own limits. In 2022, Boston Dynamics, Agility Robotics, and several other manufacturers publicly pledged not to weaponize their general-purpose robots and called on policymakers to prevent misuse.
That pledge now shapes the marketing and compliance strategies of startups that want to sell robots to police and other public-safety agencies.
The unarmed path is already mainstream
Away from the headlines, unarmed robots are quietly becoming more common.
The NYPD reintroduced Boston Dynamics’ Spot in 2023 for high-risk situations such as bomb threats or barricaded suspects. In 2024, a Massachusetts State Police Spot, nicknamed “Roscoe,” was shot while scouting a barricaded gunman (a dramatic example of a robot absorbing physical risk instead of an officer). Older bomb-disposal robots, such as Remotec’s Andros line, remain standard gear in explosive ordnance disposal units across the country.
Security “rovers” such as Knightscope’s K5 now patrol malls, parking lots, and campuses. These machines (basically moving sensor towers with cameras, LIDAR, and two-way audio) show both the promise and the problems of automation in public spaces. They can extend surveillance and reduce some risks, but they also raise questions about edge-case failures, privacy, and whether communities really accept being watched by robots at all times.
Micropolis shows what a non-lethal model can look like
Micropolis Robotics, a Dubai-based company, is taking a clearly non-lethal path with its DPR-02 autonomous patrol cars, now deployed at Dubai’s Global Village. Dubai Police and Micropolis say the vehicles are unarmed, watch for unusual behavior, and then escalate to human operators when needed. They are presented as “friendly, AI-powered sentries” meant to help and talk to visitors, not scare them. The rollout was timed with GITEX 2025 and plugged directly into the city’s central command systems.
“This patrol vehicle was fully developed in collaboration between Dubai Police and Micropolis Robotics,” said Major General Khalid Alrazooqi, announcing the DPR-02’s official start of service on October 15, 2025, at Global Village. The project turns Micropolis into a real-world example of non-weaponized public-safety robotics: fleet management, remote teleoperation, and detection models, with no trigger at the end of the workflow. For police departments nervous about headlines and political backlash, that is a much easier story to defend than “killer robots.”
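To make that “no trigger” workflow concrete, here is a minimal sketch of a detect-and-escalate patrol loop. It assumes a hypothetical detection model and operator console; the names (`Detection`, `patrol_step`, `console.escalate`) and the threshold value are illustrative, not Micropolis’s actual stack. The point is structural: the only action available to the machine is handing the decision to a person.

```python
# Minimal sketch of a detect-and-escalate patrol loop. The detector and
# operator console are hypothetical stand-ins, not any vendor's real API.
# Key property: the robot's only "action" is escalation to a human.

from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Detection:
    label: str          # e.g. "loitering", "abandoned_object"
    confidence: float   # model score in [0, 1]
    location: tuple     # (lat, lon) from the vehicle's GPS


ESCALATION_THRESHOLD = 0.8  # assumed tuning value, set per deployment


def patrol_step(frame, detector, console):
    """Run one perception cycle; escalate to a human, never act alone."""
    for det in detector.detect(frame):
        if det.confidence >= ESCALATION_THRESHOLD:
            # Hand off to a human operator with full context and a
            # timestamped record; the robot takes no further action.
            console.escalate(
                detection=det,
                timestamp=datetime.now(timezone.utc),
                clip=frame,
            )
```

The design choice worth noting is that lethality is absent by construction: there is simply no code path from a detection to the use of force.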
Micropolis also highlights a basic commercial truth for founders and investors: you can sell capability without adding lethality. In tourist destinations and crowded urban venues, the visitor experience matters as much as security itself. Patrol robots that can deter trouble, give directions, and call for help extend the reach of human officers and can show clear returns (more coverage hours, faster incident response) while avoiding the risks and legal overhead that come with weapons.
For startups and buyers: where the risk (and opportunity) sits
Anyone building or buying patrol robots today has to think beyond features and specs.
Policy risk is real
California’s AB 481 forces police departments to publish inventories, usage rules, and regular approvals for “military equipment” – a label that can include weaponized robots and even some accessories. A 2025 state Senate bill, SB 93, pushed further and tried to limit when weapon-equipped robots could be used at all. Founders should expect similar laws to appear in other states and countries.
Norms are shifting
The U.N. process and the Secretary-General’s language put reputational pressure on any company that leans toward lethal use. Investors should look closely at whether a startup’s roadmap fits with public non-weaponization pledges, or has a clear and defensible reason if it does not.
Capabilities will converge
Many robot platforms are “dual-use” by nature: the same sensors and software that support patrol can also aim less-lethal devices or control a breaching tool. That’s why clear policy, technical safeguards, audit logs, and explicit “no weaponization” clauses are likely to become standard requirements in public-sector contracts.
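As a rough illustration (not any vendor’s real API), one of those technical safeguards can be as simple as an allow-list enforced at payload-attachment time, with every attempt written to an append-only audit log. The `ALLOWED_PAYLOADS` set and the log format below are assumptions for the sketch.

```python
# Illustrative "no weaponization" payload gate with an audit trail,
# assuming a platform checks payloads at attach time. The allow-list
# and audit format are hypothetical, not a real vendor interface.

import json
from datetime import datetime, timezone

ALLOWED_PAYLOADS = {"camera", "lidar", "two_way_audio", "spotlight"}


def attach_payload(payload_type: str, operator_id: str, audit_path="audit.log"):
    """Reject anything outside the allow-list and log every attempt."""
    permitted = payload_type in ALLOWED_PAYLOADS
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "operator": operator_id,
        "payload": payload_type,
        "permitted": permitted,
    }
    with open(audit_path, "a") as f:  # append-only record for auditors
        f.write(json.dumps(entry) + "\n")
    if not permitted:
        raise PermissionError(f"payload '{payload_type}' is not allowed")
```

The value of the log is that refusals are evidence: a contracting agency can verify not just that weapons were never attached, but that the system rejected every attempt.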
Community trust is a feature
San Francisco’s reversal is a reminder of how quickly approvals can vanish if police seem to move faster than public oversight. Vendors that lead with transparency (publishing model cards, data-retention policies, and results from bias testing) will have an advantage when cities decide whom to trust.
Challenges ahead
Even if robots stay unarmed, the hardest questions are still ahead.
Accountability
When a robot is involved in a harmful incident, who is responsible? The remote operator, the police chief who approved the deployment, or the vendor whose software misread the situation? Oversight laws like AB 481 answer part of this, but not all of it.
Data drift and bias
AI models trained in one environment might overreact or mislabel behavior in another, which raises fairness and equity concerns. Clear rules for when a human must take over (and good records of those handoffs) will matter as much as the model’s raw accuracy.
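A hedged sketch of what such a takeover rule might look like, with hard-coded thresholds standing in for values a real deployment would derive from ongoing monitoring rather than fix in advance:

```python
# Sketch of a human-takeover rule with handoff records. The confidence
# floor, drift ceiling, and drift score are assumptions for illustration.

from datetime import datetime, timezone

CONFIDENCE_FLOOR = 0.6   # below this, the model's output is not trusted
DRIFT_CEILING = 0.3      # above this, the scene looks unlike training data

handoff_log = []  # in practice, a durable, tamper-evident store


def needs_human(confidence: float, drift_score: float) -> bool:
    """Trigger takeover on low confidence or out-of-distribution input."""
    return confidence < CONFIDENCE_FLOOR or drift_score > DRIFT_CEILING


def decide(confidence: float, drift_score: float, operator_id: str) -> str:
    if needs_human(confidence, drift_score):
        # Record why and to whom control was handed off.
        handoff_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "reason": "low_confidence" if confidence < CONFIDENCE_FLOOR else "drift",
            "operator": operator_id,
        })
        return "HUMAN_TAKEOVER"
    return "AUTOMATED_MONITORING"
```

The handoff record is as important as the rule itself: it is what lets an auditor reconstruct who was in control at each moment.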
Arms-race dynamics
Libya shows that advanced autonomy often appears first in war zones and then spreads. Police agencies will feel pressure to “keep up” with whatever tools are seen as modern or effective. That urgency is exactly why global norms around “meaningful human control,” including for domestic security, are being fought over now.
The takeaway
Ultimately, the technology is moving fast, but public acceptance is lagging.
Cities may eventually be comfortable with robots acting as eyes, ears, and wheels (tools that buy time and distance for human officers) while keeping final decisions in human hands. Dallas created a precedent for lethal use in a crisis that still divides opinion. San Francisco showed how quickly a community can push back.
For now, the main commercial momentum is with unarmed systems like Micropolis’s DPR-02, which extend situational awareness and response without crossing a moral line that many policymakers and citizens are not prepared to redraw.



