Machine-driven weapons need an international system of accountability
In March 2014, hundreds of mysterious gunmen in camouflage appeared on the streets of Crimea and began taking over local government buildings. While Russia initially denied any link to the “little green men”, as they came to be known, President Vladimir Putin admitted on the first anniversary of the Crimean occupation that they were Russian military. Ethical issues and clear violations of the Geneva Conventions notwithstanding, the logic behind the tactic was straightforward and compelling: Aid pro-Russian forces while creating enough uncertainty about Russian involvement to forestall NATO retaliation and global backlash.
The tactic itself has been deployed by rogue states before, though usually through proxies. Pakistan, for instance, has a history of supporting terrorist and insurgent groups in Jammu and Kashmir. Yet with few remaining strategic partners and few of the resources that once tied it to more powerful states, Pakistan no longer possesses the leverage needed to shield itself from international backlash the way Russia did. In such an environment, states like Pakistan may be tempted to turn to new methods to achieve their goals.
Lethal Autonomous Weapons Systems (LAWS) — which can detect, select and attack targets without human intervention — are one such avenue. International rules around LAWS are relatively underdeveloped, and in the absence of clear norms on human accountability and attribution for autonomous weapons, we could see states like Pakistan deploy LAWS for operations outside their borders.
LAWS present several benefits for “middle powers”: They increase the reach and effectiveness of forces, reduce casualties and enable a persistent presence in vast, inaccessible terrain. Countries like India or South Korea, which operate in complicated geostrategic contexts, could therefore use LAWS to police and protect their territory effectively. On the flip side, LAWS can be used by state and non-state actors to pursue asymmetric tactics. This could take three forms: A state could deploy LAWS directly against an adversary state; a state could equip proxies such as insurgent or terrorist groups with autonomous weapons; or a non-state actor could steal or otherwise illegally acquire autonomous systems.
Given this destabilising potential, external state actors that actively aid insurgencies and terrorist organisations may be tempted to deploy autonomous systems and then claim they were stolen or rogue units.
LAWS are still at the development stage and remain out of reach for most states, let alone non-state groups, owing to high costs and a shortage of skilled AI talent and operators. Even so, it is not a stretch to envision a future in which autonomous weapons are within the reach of any state or non-state actor that wants them.
Four of the five permanent members of the UN Security Council (the US, France, Russia and the UK) have explicitly rejected moving toward a new international law on autonomous weapons. The US and Russia are actively pursuing AI-driven military systems, the UK Ministry of Defence was recently revealed to be funding a previously secret programme, and while China has called for a ban, its military continues to research and develop LAWS. It therefore seems likely that these powers would support a regime on LAWS, if at all, only after they have developed and perfected the technology themselves.
However, even in the absence of a comprehensive international framework agreement on LAWS, stakeholders in the emerging LAWS ecosystem need to push for export controls and rules. Relevant private technology companies, some of which (like Google) have already taken the lead in developing internal ethical guidelines for AI technologies, should establish an export control group, with buy-in from state actors, to create guidelines for sales of LAWS and their component technologies. These guidelines must include basic stipulations on accountability in cases of theft or hacking: Suppliers must be able to prove that they have the physical and non-physical safeguards in place to protect their LAWS technologies, and the AI and weapons industry must craft specific standards for such safeguards. Autonomous systems themselves could assist with export controls through persistent surveillance of LAWS manufacturing facilities, although it may be difficult to get actors to agree to such measures.
To tilt the future of LAWS toward a positive outcome, we need to start building a flexible, evolvable framework of rules that accounts for the myriad ways the technology can be misused, not just in traditional state-to-state conflicts but also in the many non-traditional modes of conflict.
This article first appeared in the print edition on February 22, 2019, under the title ‘Lethal And Autonomous’. The writer is a junior fellow with the Cyber Initiative, ORF.