Defence & Security Newsletter
This mini-arc on the military application of artificial intelligence begins in Washington because the decisions taken there are already reshaping how wars are fought. Africa’s future battlefields will be influenced by this shift — whether African states recognise it early and adapt, or are forced to react after the fact.
This is not a technology experiment. It is a declaration of how future wars will be fought — and it carries significant implications for Africa’s security environment.

What the U.S. Has Actually Announced
The new U.S. military AI strategy rests on four core pillars.
First, the Department of War will fast-track seven priority AI projects designed to reshape battlefield operations. These projects focus on compressing decision-making timelines, automating intelligence processing, improving weapons development through AI-driven simulations, and modernising command-and-control systems.
In practical terms, this means shrinking the time between detection and action — a critical advantage in modern warfare.
Second, Washington plans to massively expand access to AI computing power, explicitly leveraging what Hegseth described as “hundreds of billions of dollars in private capital flowing into American AI.” This signals a deep alignment between the U.S. military and Silicon Valley-scale technology ecosystems, blurring the line between civilian innovation and military capability.
Third, the Pentagon will integrate leading commercial AI tools directly into classified and unclassified networks. Department of War personnel will soon be able to access Elon Musk’s Grok AI alongside government-adapted versions of Google’s Gemini via GenAI.mil. Hegseth described this as “long overdue,” signalling frustration with bureaucratic inertia.
Fourth — and most controversially — the strategy explicitly orders the removal of “woke DEI” and social-ideology constraints from military AI systems. In Hegseth’s words:
“We will not employ AI models that won’t allow you to fight wars.”
This represents a decisive prioritisation of operational lethality over ethical experimentation.
A Wartime Approach to AI Bureaucracy
Perhaps the most telling element of the announcement is not technological but institutional. The Department of War has adopted what Hegseth called a “wartime approach” to the people and policies blocking AI implementation.
A special “barrier removal SWAT team” has been established with authority to waive non-statutory requirements, fast-track approvals, and publicly identify bureaucratic roadblocks.
This is a clear message: the era of cautious, incremental AI adoption inside the U.S. military is over.

From Human-in-the-Loop to Human-on-the-Loop
Strategically, the U.S. shift accelerates a deeper transformation in warfare.
AI is no longer framed merely as a decision aid. It is becoming the central nervous system of military power. Commanders increasingly supervise algorithmic outputs rather than generate every decision themselves — moving from “human-in-the-loop” to “human-on-the-loop” control.
In this model:
- Algorithms fuse intelligence feeds
- Systems flag targets and anomalies
- Humans approve or override decisions under time pressure
The advantage is speed. The risk is over-reliance.
Wars are increasingly won not by who has more troops, but by who can observe, decide, and act faster.
Russia’s Parallel Track — and the Emerging AI Arms Race
The U.S. is not acting in isolation. Russia has also placed AI at the centre of its military strategy. President Vladimir Putin has repeatedly described AI and robotics as essential to national, technological, and value sovereignty.
Moscow’s approach emphasises:
- domestic AI development
- reduced reliance on foreign platforms
- integration of AI into command systems and robotics
The result is an emerging AI arms race, not just in weapons, but in data dominance, compute capacity, and algorithmic doctrine.
For countries outside these power blocs, the consequences are profound.
Why Africa Cannot Ignore This Shift
Africa does not sit outside this transformation. It sits directly in its operational path.
The continent’s most persistent security threats — insurgency, terrorism, banditry, piracy, and organised crime — are intelligence-driven conflicts, not conventional wars. They require rapid detection, pattern analysis, and predictive response — exactly where AI excels.
Yet most African militaries remain structurally unprepared for AI-enabled warfare.
Key weaknesses include:
- fragmented national security databases
- limited ISR (intelligence, surveillance, reconnaissance) integration
- weak data governance and cyber security
- dependence on foreign AI platforms
- severe shortages of technical talent
This creates what analysts increasingly describe as algorithmic vulnerability — a condition where states possess forces and weapons, but lack decision speed.
Nigeria as a Case Study
Nigeria illustrates both the danger and the opportunity.
On one hand, Nigeria has extensive operational experience, capable personnel, and growing use of drones and surveillance platforms. On the other, its intelligence architecture remains siloed. Data sharing between agencies is inconsistent. AI adoption remains tactical rather than doctrinal.
By contrast, countries such as Egypt, Morocco, and Algeria have invested more systematically in integrated command systems, defence technology partnerships, and domestic technical capacity.
The difference is not simply budgetary. It is strategic intent.
The DEI Question — and Africa’s Governance Risk
The U.S. decision to strip DEI considerations from military AI systems will resonate globally.
For African states, the issue is not ideological alignment. It is governance capacity. AI systems trained without safeguards can amplify bias, misclassify civilians, and misinterpret social behaviour — especially in complex human terrain.
With weaker legal frameworks and limited civilian oversight, African militaries risk deploying AI tools without accountability, increasing the danger of civilian harm, political misuse, and international backlash.
Ironically, the very speed that makes AI attractive in combat makes it dangerous in fragile democracies.
AI Will Not Replace Soldiers — But It Will Expose Weak States
One misconception must be addressed: AI does not replace human fighters. It replaces slow institutions.
States with:
- poor data discipline
- politicised command structures
- fragmented intelligence systems
- opaque procurement processes
will not benefit from AI. They will be overwhelmed by it.
For Nigeria and its peers, adopting AI without reforming institutions risks deepening existing failures rather than fixing them.
What African Militaries Should Watch Now
Five indicators will determine whether Africa adapts or falls behind:
- Integration of national security databases
- Creation of joint intelligence fusion centres
- Retention of technical talent within defence institutions
- Clear doctrine on human control and accountability
- Data sovereignty over AI systems and vendors
These are not future concerns. They are present strategic choices.
Bottom Line
The U.S. decision to go “AI-first” marks the end of the experimental phase of military artificial intelligence. Warfare is now being reorganised around algorithms.
For Africa, neutrality is not an option. States will either adapt deliberately and responsibly, or become technologically shaped by external powers.
AI will not decide Africa’s wars on its own. But it will decide who controls the tempo, the battlespace, and the outcome.
Majemite Jaboro is a London-based Defence Analyst