SPACE & DEFENSE

Can Laws Control Military AI Before It's Too Late?

Rapid advances in autonomous weapons spark worries over ethics and accountability.

By Donna Joseph
Oct 24, 2023 10:12 PM Updated November 22, 2023

The use of artificial intelligence (AI) in the defense industry raises legal and ethical concerns. The rise of autonomous vehicles, such as drones, has sparked debate over how to keep pace with rapid AI advances and ensure the technology's ethical military use.

AI in defense describes a set of technologies that enable machines to perform tasks normally requiring human intelligence. There is, however, no legal definition of AI. It is typically characterized by adaptivity, the ability to infer patterns from data, and autonomy, the ability to make decisions without direct human control.

Determining accountability when AI technology fails is a challenge. Complex AI systems make it difficult to understand how they reach their conclusions, which in turn makes legal responsibility hard to assign. This "responsibility gap" between AI systems and human operators complicates holding organizations accountable for AI actions, potentially leaving crimes unpunished and weakening the deterrent effect of the law.

Bias and discrimination are also concerns. AI tools are only as good as their training data, which can be flawed or tampered with. Without legislation that addresses such biases, AI systems may perpetuate discrimination or unequal treatment.

Experts propose an "ethics by design" approach to AI development, establishing rules for both development and engagement. This approach could shift legal responsibility to developers. However, implementing this approach presents challenges for the legal profession.

In 2021, the European Commission proposed a legal framework on AI, aiming to establish harmonized rules for its development and use in the European Union. The framework categorizes AI systems based on risk levels, subjecting each category to different regulatory scrutiny and compliance requirements. The concept of an "AI Liability Directive" has also been introduced to address legal proof and accountability difficulties related to AI.

While these initiatives are steps in the right direction, they do not solve all legal challenges associated with AI. Policy papers, like the UK's AI Strategy and the US Department of Defense's Responsible AI Strategy, provide guidance on adhering to international law and ethical principles in AI development and use in defense.

Aligning AI development with legal and regulatory frameworks is crucial for safe and ethical deployment in defense. Establishing clear accountability and addressing bias will allow us to harness AI's potential while upholding legal and ethical standards.
