SPACE & DEFENSE

Can Laws Control Military AI Before It's Too Late?

Rapid advances in autonomous weapons spark worries over ethics and accountability.

By Donna Joseph
Oct 24, 2023 10:12 PM Updated November 22, 2023
Photo by SBR

The use of artificial intelligence (AI) in the defense industry raises legal and ethical concerns. The rise of autonomous systems, such as drones, has sparked debate over how legislation can keep pace with rapid AI advances and ensure the technology's ethical military use.

In defense contexts, AI is described as a set of technologies that enable machines to perform tasks normally requiring human intelligence. There is, however, no legal definition of AI. It is typically characterized by adaptivity (inferring patterns from data) and autonomy (making decisions without direct human control).

Determining accountability when AI technology fails is a challenge. The complexity of AI systems makes their reasoning difficult to understand, which in turn makes legal responsibility hard to assign. This "responsibility gap" between AI systems and their human operators complicates holding organizations accountable for AI-driven actions, potentially leaving crimes unpunished and weakening the deterrent force of the law.

Bias and discrimination are also concerns in AI systems. AI tools are only as good as the data they are trained on, which can be flawed or tampered with. Without legislation addressing these biases, AI systems may perpetuate discrimination or unequal treatment.

Experts propose an "ethics by design" approach to AI development, establishing rules for both development and engagement. This approach could shift legal responsibility to developers. However, implementing this approach presents challenges for the legal profession.

In 2021, the European Commission proposed a legal framework on AI, aiming to establish harmonized rules for its development and use in the European Union. The framework categorizes AI systems based on risk levels, subjecting each category to different regulatory scrutiny and compliance requirements. The concept of an "AI Liability Directive" has also been introduced to address legal proof and accountability difficulties related to AI.

While these initiatives are steps in the right direction, they do not solve all legal challenges associated with AI. Policy papers, like the UK's AI Strategy and the US Department of Defense's Responsible AI Strategy, provide guidance on adhering to international law and ethical principles in AI development and use in defense.

Aligning AI development with legal and regulatory frameworks is crucial for safe and ethical deployment in defense. Clear accountability and addressing biases will allow us to harness AI's potential while upholding legal and ethical standards.

