Moroccan Military Forum alias FAR-MAROC


Royal Moroccan Armed Forces Royal Moroccan Navy Royal Moroccan Air Forces Forces Armées Royales Forces Royales Air Marine Royale Marocaine
 

 

 Artificial Intelligence

3 participants
Shugan188
Moderator

Messages: 5531
Joined: 12/05/2015
Location: Maroc
Nationality: Maroc

Subject: Artificial Intelligence (Sun 29 Sep 2019 - 16:23)

https://warontherocks.com/2019/09/terrorist-groups-artificial-intelligence-and-killer-drones/

Quote:


Terrorist Groups, Artificial Intelligence, and Killer Drones

Jacob Ware


Editor’s Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It is based on a chapter by the authors in the forthcoming book ‘AI at War’ and addresses the first question (part a. and b.) which asks how AI will impact competition below the threshold of armed conflict, and what might happen if the United States fails to develop robust AI capabilities that address national security issues.

In 2016, the Islamic State of Iraq and the Levant (ISIL) carried out its first successful drone attack in combat, killing two Peshmerga fighters in northern Iraq. The attack continued the group’s record of employing increasingly sophisticated technologies against its enemies, a trend mimicked by other nonstate armed groups around the world. The following year, the group announced the formation of the “Unmanned Aircraft of the Mujahedeen,” a division dedicated to the development and use of drones, and a more formal step toward the long-term weaponization of drone technology.

Terrorist groups are increasingly using 21st-century technologies, including drones and elementary artificial intelligence (AI), in attacks. As it continues to be weaponized, AI could prove a formidable threat, allowing adversaries — including nonstate actors — to automate killing on a massive scale. The combination of drone expertise and more sophisticated AI could allow terrorist groups to acquire or develop lethal autonomous weapons, or “killer robots,” which would dramatically increase their capacity to create incidents of mass destruction in Western cities. As it expands its artificial intelligence capabilities, the U.S. government should also strengthen its anti-AI capacity, paying particular attention to nonstate actors and the enduring threats they pose. For the purposes of this article, I define artificial intelligence as technology capable of “mimicking human brain patterns,” including by learning and making decisions.

AI Could Turn Drones into Killer Robots

The aforementioned ISIL attack was not the first case of nonstate actors employing drones in combat. In January 2018, an unidentified Syrian rebel group deployed a swarm of 13 homemade drones carrying small submunitions to attack Russian bases at Khmeimim and Tartus, while an August 2018 assassination attempt against Venezuela’s Nicolas Maduro used exploding drones. Iran and its militia proxies have deployed drone-carried explosives several times, most notably in the September 2019 attack on Saudi oil facilities near the country’s eastern coast.

Pundits fear that the drone’s debut as a terrorist tool against the West is not far off, and that “the long-term implications for civilian populations are sobering,” as James Phillips and Nathaniel DeBevoise note in a Heritage Foundation commentary. In September 2017, FBI Director Christopher Wray told the Senate that drones constituted an “imminent” terrorist threat to American cities, while the Department of Homeland Security warned of terrorist groups applying “battlefield experiences to pursue new technologies and tactics, such as unmanned aerial systems.” Meanwhile, ISIL’s success in deploying drones has been met with great excitement in jihadist circles. The group’s al-Naba newsletter celebrated a 2017 attack by declaring “a new source of horror for the apostates!”

The use of drones in combat indicates an intent and capability to innovate and use increasingly savvy technologies for terrorist purposes, a process sure to continue with more advanced forms of AI. Modern drones possess fairly elementary forms of artificial intelligence, but the technology is advancing: Self-piloted drones are in development, and the European Union is funding projects to develop autonomous swarms to patrol its borders.

AI will enable terrorist groups to threaten physical security in new ways, making the current terrorism challenge even more difficult to address. According to a February 2018 report, terrorists could benefit from commercially available AI systems in several ways. The report predicts that autonomous vehicles will be used to deliver explosives; low-skill terrorists will be endowed with widely available high-tech products; attacks will cause far more damage; terrorists will create swarms of weapons to “execute rapid, coordinated attacks”; and, finally, attackers will be farther removed from their targets in both time and location. As AI technology continues to develop and begins to proliferate, “AI [will] expand the set of actors who are capable of carrying out the attack, the rate at which these actors can carry it out, and the set of plausible targets.”

For many military experts and commentators, lethal autonomous weapon systems, or “killer robots,” are the most feared application of artificial intelligence in military technology. In the words of the American Conservative magazine, the difference between killer robots and current AI-drone technology is that, with killer robots, “the software running the drone will decide who lives and who dies.” Thus, killer robots, combining drone technology with more advanced AI, will possess the means and power to autonomously and independently engage humans. The lethal autonomous weapon has been called the “third revolution in warfare,” following gunpowder and nuclear weapons, and is expected to reinvent conflict, not least terrorist tactics.

Although completely autonomous weapons have not yet reached the world’s battlefields, current weapons are on the cusp. South Korea, for instance, has developed and deployed the Samsung SGR-A1 sentry gun to its border with North Korea. The gun supposedly can track movement and fire without human intervention. Robots train alongside marines in the California desert. Israel’s flying Harpy munition can loiter for hours before detecting and engaging targets, while the United States and Russia are developing tanks capable of operating autonomously. And the drones involved in the aforementioned rebel attack on Russian bases in Syria were equipped with altitude and leveling sensors, as well as preprogrammed GPS to guide them to a predetermined target.

Of particular concern is the possibility of swarming attacks, composed of thousands or millions of tiny killer robots, each capable of engaging its own target. The potentially devastating terrorist application of swarming autonomous drones is best summarized by Max Tegmark, who has said that “if a million such killer drones can be dispatched from the back of a single truck, then one has a horrifying weapon of mass destruction of a whole new kind: one that can selectively kill only a prescribed category of people, leaving everybody and everything else unscathed.” Precisely that hypothetical scenario was illustrated in a recent viral YouTube video, “Slaughterbots,” which depicted the release of thousands of small munitions into British university lecture halls. The drones then pursued and attacked individuals who had shared certain political social media posts. The video also depicted an attack targeting sitting U.S. policymakers on Capitol Hill. It has been viewed over three million times, and was met with increasing concern about potential terrorist applications of inevitable autonomous weapons technology. So far, nonstate actors have deployed “swarmed” drones only sparingly, but such attacks point to a worrying innovation: swarming, weaponized killer robots aimed at civilian crowds would be nearly impossible to defend against and, if effective, would cause massive casualties.

Terrorists Will Be Interested in Acquiring Lethal Autonomous Weapons

Terrorist groups will be interested in artificial intelligence and lethal autonomous weapons for three reasons — cost, traceability, and effectiveness.

Firstly, killer robots are likely to be extremely cheap, while still maintaining lethality. Experts agree that lethal autonomous weapons, once fully developed, will provide a cost-effective alternative to terrorist groups looking to maximize damage, with Tegmark arguing that “small AI-powered killer drones are likely to cost little more than a smartphone.” Additionally, killer robots will minimize the human investment required for terrorist attacks, with scholars arguing that “greater degrees of autonomy enable a greater amount of damage to be done by a single person.” Artificial intelligence could make terrorist activity cheaper financially and in terms of human capital, lowering the organizational costs required to commit attacks.

Secondly, using autonomous weapons will reduce the trace left by terrorists. A large number of munitions could be launched — and a large amount of damage done — by a small number of people operating at considerable distance from the target, reducing the signature left behind. In Tegmark’s words, for “a terrorist wanting to assassinate a politician … all they need to do is upload their target’s photo and address into the killer robot: it can then fly to the destination, identify and eliminate the person, and self-destruct to ensure nobody knows who was responsible.” With autonomous weapons technology, terrorist groups will be able to launch increasingly complex attacks, and, when they want to, escape without detection.

Finally, killer robots could reduce, if not eliminate, the physical costs and dangers of terrorism, rendering the operative “essentially invulnerable.” Raising the possibility of “fly and forget” missions, lethal autonomous weapons might simply be deployed toward a target, and engage that target without further human intervention. As P. W. Singer noted in 2012, “one [will] not have to be suicidal to carry out attacks that previously might have required one to be so. This allows new players into the game, making al-Qaeda 2.0 and the next-generation version of the Unabomber or Timothy McVeigh far more lethal.” Additionally, lethal autonomous weapons could potentially reduce human aversion to killing, making terrorism even more palatable as a tactic for political groups. According to the aforementioned February 2018 report, “AI systems can allow the actors who would otherwise be performing the tasks to retain their anonymity and experience a greater degree of psychological distance from the people they impact”; this would not only improve a terrorist’s chances of escape, as mentioned, but reduce or even eliminate the moral or psychological barriers to murder.

Terrorist Acquisition of Lethal Autonomous Weapons Is Realistic

The proliferation of artificial intelligence and killer robot technology to terrorist organizations is realistic and likely to occur through three avenues — internal development, sales, and leaks.

Firstly, modern terrorist organizations have advanced scientific and engineering departments, and actively seek out skilled scientists for recruitment. ISIL, for example, has appealed for scientists to trek to the caliphate to work on drone and AI technology. The individual technologies behind swarming killer robots — including unmanned aerial vehicles, facial recognition, and machine-to-machine communication — already exist, and have been adapted by terrorist organizations for other means. According to a French defense industry executive, “the technological challenge of scaling it up to swarms and things like that doesn’t need any inventive step. It’s just a question of time and scale and I think that’s an absolute certainty that we should worry about.”

Secondly, autonomous weapons technology will likely proliferate through sales. Because AI research is led by private firms, advanced AI technology will be publicly sold on the open market. As Michael Horowitz argues, “militant groups and less-capable states may already have what they need to produce some simple autonomous weapon systems, and that capability is likely to spread even further for purely commercial reasons.” The current framework controlling high-tech weapons proliferation — the Wassenaar Arrangement and Missile Technology Control Regime — is voluntary, and is constantly tested by great-power weapons development. Given interest in developing AI-guided weapons, this seems unlikely to change. Ultimately, as AI expert Toby Walsh notes, the world’s weapons companies can, and will, “make a killing (pun very much intended) selling autonomous weapons to all sides of every conflict.”

Finally, autonomous weapons technology is likely to leak. Innovation in the AI field is led by the private sector, not the military, because of the myriad commercial applications of the technology. This will make it more difficult to contain the technology, and prevent it from proliferating to nonstate actors. Perhaps the starkest warning has been issued by Paul Scharre, a former U.S. defense official: “We are entering a world where the technology to build lethal autonomous weapons is available not only to nation-states but to individuals as well. That world is not in the distant future. It’s already here.”

Counter-Terrorism Options

Drones and AI provide a particularly daunting counter-terrorism challenge, simply because effective counter-drone or anti-AI expertise does not yet exist. That said, as Daveed Gartenstein-Ross has noted, “in recent years, we have seen multiple failures in imagination as analysts tried to discern what terrorists will do with emerging technologies. A failure in imagination as artificial intelligence becomes cheaper and more widely available could be even costlier.” Action is urgently needed, and for now, counter-terrorism policies are likely to fit into two categories, each with flaws: defenses and bans.

Firstly, and most likely, Western states could strengthen their defenses against drones and weaponized AI. This might involve strengthening current counter-drone and anti-AI capabilities, improving training for local law enforcement, and establishing plans for mitigating drone or autonomous weapons incidents. AI technology and systems will surely play an important role in this space, including in the development of anti-AI tools. However, anti-AI defenses will be costly, and will need to be implemented across countless cities throughout the entire Western world, something Michael Horton calls “a daunting challenge that will require spending billions of dollars on electronic and kinetic countermeasures.” Swarms, Scharre notes, will prove “devilishly hard to target,” given the number of munitions and their ability to spread over a wide area. In addition, defenses will likely take a long time to erect effectively and will leave citizens exposed in the meantime. Beyond defenses, AI will also be used in counter-terrorism intelligence and online content moderation, although this will surely spark civil liberties challenges.

Secondly, the international community could look to ban AI use in the military through an international treaty sanctioned by the United Nations. This has been the strategy pursued by activist groups such as the Campaign to Stop Killer Robots, while leading artificial intelligence researchers and scientific commentators have published open letters warning of the risk of weaponized AI. That said, great powers are not likely to refrain from AI weapons development, and a ban might outlaw positive uses of militarized AI. The international community could also look to stigmatize, or delegitimize, weaponized AI and lethal autonomous weapons sufficiently to deter terrorist use. Although modern terrorist groups have proven extremely willing to improvise and innovate, and effective at doing so, there is an extensive list of weapons — chemical weapons, biological weapons, cluster munitions, barrel bombs, and more — accessible to terrorist organizations, but rarely used. This is partly down to the international stigma associated with those munitions — if a norm is strong enough, terrorists might avoid using a weapon. However, norms take a long time to develop, and are fragile and untrustworthy solutions. Evidently, good counter-terrorism options are limited.

The U.S. government and its intelligence agencies should continue to treat AI and lethal autonomous weapons as priorities, and identify new possible counter-terrorism measures. Fortunately, some progress has been made: Nicholas Rasmussen, former director of the National Counterterrorism Center, admitted at a Senate Homeland Security and Governmental Affairs Committee hearing in September 2017 that “there is a community of experts that has emerged inside the federal government that is focused on this pretty much full time. Two years ago this was not a concern … We are trying to up our game.”

Nonstate actors are already deploying drones to attack their enemies. Lethal autonomous weapon systems are likely to proliferate to terrorist groups, with potentially devastating consequences. The United States and its allies should urgently address the rising threat by preparing stronger defenses against possible drone and swarm attacks, engaging with the defense industry and AI experts warning of the threat, and supporting realistic international efforts to ban or stigmatize military applications of artificial intelligence. Although the likelihood of such an event is low, a killer robot attack could cause massive casualties, strike a devastating blow to the U.S. homeland, and cause widespread panic. The threat is imminent, and the time has come to act.

Jacob Ware holds a master’s in security studies from Georgetown University and an MA (Hons) in international relations and modern history from the University of St Andrews. His research has previously appeared with the International Centre for Counter-Terrorism – The Hague.

Shugan188
Moderator

Messages: 5531
Joined: 12/05/2015
Location: Maroc
Nationality: Maroc

Subject: Re: Artificial Intelligence (Tue 26 Nov 2019 - 20:52)

https://breakingdefense.com/2019/11/exclusive-pentagons-ai-problem-is-dirty-data-lt-gen-shanahan/

Quote:

EXCLUSIVE Pentagon’s AI Problem Is ‘Dirty’ Data: Lt. Gen. Shanahan

Sydney J. Freedberg Jr.

CRYSTAL CITY: “Some people say data is the new oil. I don’t like that,” the Defense Department’s AI director told me in his office here. “I treat it as mineral ore: There’s a lot of crap. You have to filter out the impurities from the raw material to get the gold nuggets.”

Lt. Gen. Jack Shanahan learned this the hard way as head of the much-debated Project Maven, which he led for two years before becoming the founding director of the Joint Artificial Intelligence Center last year. The lessons from that often-painful process – discussed in detail below – now shape Shanahan’s approach to the new and ever-more ambitious projects the Defense Department is taking on. They range from the relatively low-risk, non-combat applications that JAIC got warmed up with in 2019, like predicting helicopter engine breakdowns before they happen, to the joint warfighting efforts Shanahan wants to ramp up to in 2020:

Joint All-Domain Command & Control: This is a pilot project working towards what’s also called Multi-Domain C2, a vision of plugging all services, across all five domains — land, sea, air, space, and cyberspace — into a single seamless network. It’s a tremendous task to connect all the different and often incompatible technologies, organizations, and cultures.
Autonomous Ground Reconnaissance & Surveillance: This involves adding Maven-style analysis algorithms to more kinds of scout drones and even ground robots, so the software can call humans’ attention to potential threats and targets without someone having to watch every frame of video.
Operations Center Cognitive Assistant: This project aims to streamline the flow of information through the force. It will start with using natural-language processing to sort through radio chatter, turning troops’ urgent verbal calls for airstrikes and artillery support into target data in seconds instead of minutes.
Sensor To Shooter: This will build on Maven to develop algorithms that can shrink the time to locate potential targets, prioritize them, and present them to a human, who will decide what action to take. In keeping with Pentagon policy, Shanahan assured me, “this is about making humans faster, more efficient, and more effective. Humans are still going to have to make the big decisions about weapons employment.”
Dynamic & Deliberate Targeting: The idea here is to take targets (for example, ones found by the Sensor To Shooter software) and figure out which aircraft is best positioned to strike each one, with which weapons, along which flight path – much like how Uber matches you with a driver and route.

“The data’s there in all the cases I described, but what’s the quality? Who’s the owner of the data?” Shanahan said. “There’s a lot of proprietary data that exists in weapons systems” – from maintenance diagnostics to targeting data – “and unlocking that becomes harder than anybody expected. Sometimes the best data is treated as engine exhaust rather than potential raw materials for algorithms.

“What has stymied most of the services when they dive into AI is data,” he said. “They realize how hard it is to get the right data to the right place, get it cleaned up, and train algorithms on it.”

Today’s military has vast amounts of data, Shanahan said, but “I can’t think of anything that is really truly AI-ready. In legacy systems we’re essentially playing the data as it lies, which gets complicated, because it’s messy, it’s dirty. You have certain challenges of data quality, data provenance, and data fidelity, and every one of those throws a curve ball.”

While the Pentagon needs solid data for lots of different purposes, not just AI, large amounts of good data are especially essential for machine learning. Fighting wars is only going to get more complex in the future: Military leaders see huge opportunities to use AI to comb through that complexity to make operations more efficient, reduce collateral damage, and bring the troops home safely.

Lessons From Maven: Show Me The Camel

Project Maven showed Shanahan just how hard the data wrangling could get. The aim of Maven was to analyze huge amounts of drone surveillance video that human analysts couldn’t keep up with, training machine-learning algorithms to recognize hints of terrorist activity and report it.

“We thought it would be easier than it was, because we had tens of thousands of hours of full motion video from real missions,” Shanahan told me. “But it was on tapes somewhere that someone had stored, and a lot of the video gets stored for a certain amount of time and then gets dumped. We had to physically go out and pick tapes up.”

While the military data was patchy and dirty, open-source image libraries and other civilian sources were too clean to teach an algorithm how to understand a war zone, Shanahan told me. “If you train against a very clean, gold-standard data set, it will not work in real world conditions,” he said. “It’s much more challenging — smoke, haze, fog, clouds — fill in the blank.

“Then you have the edge cases, something that is so unusual that you just didn’t have enough data to train against it,” Shanahan said. “For example, we may not have had enough camel imagery.” That sounds comical – until the first few hundred times your algorithm glitches because it can’t figure out what this strange lumpy object is that it’s seeing from 10,000 feet overhead.
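
Shanahan's point that models trained only on clean, "gold-standard" imagery fail amid smoke, haze and obscured objects is, in machine-learning terms, a domain-gap problem that teams commonly attack with data augmentation. The sketch below is only an illustration of that general idea; the transforms, parameters and library choices (Pillow, NumPy) are assumptions for the example and do not describe anything Maven actually used.

```python
# Illustrative data augmentation: degrade clean training images so a model
# also sees haze-, blur- and occlusion-like conditions. All parameters are
# arbitrary examples, not values from any real military pipeline.
import numpy as np
from PIL import Image, ImageFilter

def add_haze(img: Image.Image, strength: float = 0.4) -> Image.Image:
    """Blend the image toward a flat grey layer to mimic haze or smoke."""
    arr = np.asarray(img).astype(np.float32)
    haze = np.full_like(arr, 200.0)          # light grey veil
    out = (1 - strength) * arr + strength * haze
    return Image.fromarray(out.clip(0, 255).astype(np.uint8))

def add_occlusion(img: Image.Image, frac: float = 0.2) -> Image.Image:
    """Black out a random rectangle to mimic a partially obscured object."""
    arr = np.asarray(img).copy()
    h, w = arr.shape[:2]
    oh, ow = int(h * frac), int(w * frac)
    y = np.random.randint(0, h - oh)
    x = np.random.randint(0, w - ow)
    arr[y:y + oh, x:x + ow] = 0
    return Image.fromarray(arr)

def augment(img: Image.Image) -> list:
    """Return several degraded variants of one clean training image."""
    return [
        add_haze(img, 0.3),
        add_haze(img, 0.6),
        img.filter(ImageFilter.GaussianBlur(radius=2)),  # atmospheric blur
        add_occlusion(img, 0.25),
    ]
```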

Even once you had the data in usable form, Shanahan continued, you needed humans to categorize “tens of thousands, if not millions, of images” so the algorithm could learn, for example, what camels look like as opposed to pickup trucks, people, buildings and weapons. Machine learning algorithms need to see millions of clearly-labeled examples before they can figure out how to deal with new, unlabeled data. So it takes a huge amount of human labor, doing tasks that require little intelligence, to get the data in a form the machine can actually learn from.
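
The labelling burden Shanahan describes feeds a conventional supervised-learning loop. As a rough illustration of what "learning from clearly-labeled examples" means in code, here is a minimal fine-tuning sketch in PyTorch; the two classes (borrowing the article's camel-versus-pickup example), the folder layout and the hyperparameters are invented for the example and say nothing about Maven's actual models.

```python
# Minimal supervised fine-tuning sketch: a pretrained backbone is retrained
# on a folder of labelled images (e.g. "camel" vs "pickup_truck").
# Folder layout, classes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects data/train/camel/*.jpg and data/train/pickup_truck/*.jpg
train_ds = datasets.ImageFolder("data/train", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                      # a real run would use many more
    for images, labels in train_dl:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```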

On Maven, intelligence community analysts helped with data labeling a lot. The Intelligence Systems Support Office down in Tampa, near Special Operations Command’s SOFWERX, even spun off a dedicated subunit just to support Shanahan. (This Algorithmic Warfare Provisional Program Activity Office now helps JAIC as well).

Even so, manpower was a problem. “We never got the numbers we needed, so we had to get contractor support,” Shanahan said. Unlike a commercial company outsourcing data-labeling to, say, China, the Defense Department had sensitive operational information that could only be worked on by US nationals with security clearances. And before handing the video to the cleared contractors, Shanahan said, “you had to get rid of some sensitive things and some extreme potentially graphic things you didn’t necessarily want data labelers to look at.”

All told, it was a huge amount of work – and it’s never really done. “When you fly it for the first time, the algorithm is going to find things you didn’t train it on,” Shanahan said. “They’re constantly updated through what we call dynamic retraining.”

Even civilian algorithms require continual tweaking, because the world keeps changing. And many military algorithms have to deal with an adversary who’s actively trying to deceive them. The cycle of countermeasure and counter-countermeasures is as old as warfare, but the rise of machine learning has spawned a whole science of adversarial AI to deceive the algorithms.
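
The "adversarial AI" mentioned here often starts from very simple attacks, such as the fast gradient sign method (FGSM), which nudges every pixel in the direction that most increases a classifier's error. The sketch below shows the generic technique only; the model, input and epsilon are placeholders and nothing here describes any fielded military system or countermeasure.

```python
# Fast Gradient Sign Method (FGSM): craft a small perturbation that pushes a
# classifier toward a wrong answer. Model, input and epsilon are placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module,
                image: torch.Tensor,      # shape (1, C, H, W), values in [0, 1]
                label: torch.Tensor,      # shape (1,), true class index
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon along the gradient sign, then clamp.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```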

“We learned in Maven, even if you fielded a decent algorithm, if you don’t update that algorithm in the next six months, [users] become cynical and disillusioned that it’s not keeping up with what they’re seeing in the real world,” Shanahan told me. Today, after much streamlining of processes, Maven is updated regularly at a pace unobtainable even a year ago, Shanahan said, but it’s still far short of the almost-daily updates achievable in civilian software.

[Image. SOURCE: Army Multi-Domain Operations Concept, December 2018]

Beyond Maven: AI For Joint Warfighting

Maven solved its problems – mostly. The head of Air Combat Command has publicly said he doesn’t entirely trust its analysis, not yet, and Shanahan himself admitted its accuracy was initially about 50-50. But the entire basis for Maven was to deliver initial capabilities – a minimum viable product – as quickly as possible to the field, then get real-world feedback, improve it, field the upgrade, and repeat.

But the tools for tackling full motion video don’t necessarily translate to other tasks that the new Joint AI Center is taking on.

Even when JAIC is seeking to apply Maven-style video analysis to other kinds of surveillance footage, the algorithms need to be retrained to recognize different targets in different landscapes and weather conditions, all seen from different angles and altitudes through different kinds of cameras. “You can’t just train an algorithm on electro-optical data and expect it to perform in infrared,” Shanahan said. “We tried that.”

And many of JAIC’s projects don’t involve video at all: They range from predicting helicopter engine breakdowns to using natural-language processing to turn troops’ radio calls for air support into unambiguous targeting data.

This is another reason why Shanahan prefers to think of data as mineral ore rather than petroleum, he told me: “It’s not fungible like oil is.” You can think of full motion video, for example, as palladium: an essential catalyst for some applications, irrelevant for others. And like rare minerals, all the different kinds of data are out there – somewhere – if you can find them, get permission to exploit them from whoever currently owns them, and separate them from the junk that they’re embedded in.

There’s no simple silver bullet solution, Shanahan said. Some suggest rigorously imposing some kind of top-down standard for formatting and handling data, but he argues the Defense Department has too many standards already, and they are inconsistently applied.

“There are a lot of people who want to just jump to data standards. I don’t,” he told me. “Every weapons system that we have, and every piece of data that we have, conforms to some standard. There are over a thousand different standards related to data today. They’re just not all enforced.”

“It’s less a question of standards and more of policies and governance,” he told me. “We now have to think about data as a strategic asset in its own right. Now, a much better approach to drive interoperability is to start with a discussion of metadata standards that are as lightweight as possible, as well as a Modular Open Systems Architecture. Or put another way, we need to agree on the definition of ‘AI Ready’ when it comes to our weapon systems.”

That includes getting acquisition program managers, traditionally focused on the physical performance of the weapons they are developing, fielding, and sustaining, to consider data as “part of the life-cycle management process just as much as the hardware is,” Shanahan said. “I see signs of the services beginning to have that conversation about future weapons systems.”

The fundamental issue: “The Department of Defense is different from Amazon, Google, Microsoft, which were born as digital companies,” he said. “The Department of Defense was not. It started as a hardware company. It’s an industrial age environment and we’re trying to make this transformation to an information-age, software-driven environment.”

One of JAIC’s key contributions here will be to build a “common foundation” that pulls together usable data and proven algorithms from across the Defense Department for any DoD user to access and apply to their specific needs. (This will require a DoD-wide cloud computing system, he noted).

“We want to have APIs [Application Program Interfaces] that allow anyone to come in and access our common foundation or platform. We will publish API definitions, what you need to write to,” Shanahan said. But the sheer diversity of the data and the different purposes it can be put to, he said, means that “there is never going to be a single standard API.”

Likewise, he said, while there will be “minimum common denominator” standards for tagging metadata with various categories and labels, “you will have lots of flexibility for mission-specific tagging.”
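
Shanahan's split between a small mandatory core of metadata and flexible mission-specific tags maps naturally onto a thin validation layer. The field names and values below are invented purely to illustrate the pattern; they are not an actual DoD or JAIC schema.

```python
# Hypothetical metadata pattern: a minimal set of mandatory common fields plus
# free-form mission-specific tags. Field names are invented for illustration.
REQUIRED_COMMON_FIELDS = {"source", "collection_time", "classification", "format"}

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    common = record.get("common", {})
    missing = REQUIRED_COMMON_FIELDS - common.keys()
    if missing:
        problems.append(f"missing common fields: {sorted(missing)}")
    if not isinstance(record.get("mission_tags", {}), dict):
        problems.append("mission_tags must be a key/value mapping")
    return problems

record = {
    "common": {
        "source": "sonobuoy-acoustic",
        "collection_time": "2019-11-26T20:52:00Z",
        "classification": "UNCLASSIFIED//EXAMPLE",
        "format": "wav",
    },
    "mission_tags": {"sea_state": "3", "sensor_depth_m": "120"},
}
print(validate_record(record))   # [] -> passes the minimal checks
```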

It’s a tremendous task, but one with equally tremendous potential benefits. Working with Chief Data Officer Michael Conlin, “we are trying to fix all sorts of problems with data across the Department of Defense,” not just for AI, Shanahan told me. “I am optimistic.”

“AI will likely become the driving force of change in how the department treats data,” Shanahan told me. “And technology is changing so fast that the painful data wrangling processes we endure today may well be transformed into something entirely more user-friendly a year from now.”
AIT
Victime

Messages: 612
Joined: 02/02/2019
Location: Ait Hdiddou
Nationality: maroc-espagne

Subject: Re: Artificial Intelligence (Sat 22 Feb 2020 - 13:18)

A report from the RAND Corporation on the impact of AI on deterrence

https://twitter.com/RANDCorporation/status/1231088478466826240?s=19

_________________
ⵜⴰⵢⵔⵉ ⵏ'ⵜⵎⴰⵣⵉⵔⵜ
Shugan188
Moderator

Messages: 5531
Joined: 12/05/2015
Location: Maroc
Nationality: Maroc

Subject: Re: Artificial Intelligence (Tue 20 Apr 2021 - 7:16)

[Images attached]

Adam likes this message

Shugan188
Moderator

Messages: 5531
Joined: 12/05/2015
Location: Maroc
Nationality: Maroc

Subject: Re: Artificial Intelligence (Sun 19 Sep 2021 - 0:42)

Adam
Moderator

Messages: 6300
Joined: 25/03/2009
Location: A kingdom for all Moroccans
Nationality: Maroc

Subject: Re: Artificial Intelligence (Wed 29 Sep 2021 - 21:33)


_________________
Peoples never die of hunger, but of shame.

Shugan188 likes this message

Adam
Moderator

Messages: 6300
Joined: 25/03/2009
Location: A kingdom for all Moroccans
Nationality: Maroc

Subject: Re: Artificial Intelligence (Thu 7 Oct 2021 - 0:06)


_________________
Peoples never die of hunger, but of shame.
Adam
Moderator

Messages: 6300
Joined: 25/03/2009
Location: A kingdom for all Moroccans
Nationality: Maroc

Subject: Re: Artificial Intelligence (Mon 25 Oct 2021 - 16:12)


_________________
Peoples never die of hunger, but of shame.
Shugan188
Moderator

Messages: 5531
Joined: 12/05/2015
Location: Maroc
Nationality: Maroc

Subject: Re: Artificial Intelligence (Mon 22 Nov 2021 - 18:18)

https://gjia.georgetown.edu/2021/11/22/swords-and-shields-autonomy-ai-and-the-offense-defense-balance/

Quote:



Swords and Shields: Autonomy, AI, and the Offense-Defense Balance

Georgetown Journal of International Affairs

Introduction

Military machines can increasingly move, search for targets, and even kill without human control. Growing computer power coupled with advances in artificial intelligence empower autonomous weapons and platforms to carry out more sophisticated behaviors and activities. Autonomy fundamentally means reducing human involvement in command and control, which theoretically means that virtually every military platform and weapon can be made autonomous. Whether this autonomy is sensible or ethical is another question.

Current autonomous weapons have typically been used for tactical defense. Land and sea mines are extremely simple autonomous weapons, based on mechanical triggers to keep an enemy from crossing or holding a particular piece of territory. Other autonomous weapons like close-in weapon systems and active protection systems are used to defend military platforms against incoming projectiles. Of course, that tactical defense may support strategic offense, protecting invading forces from defender attack. Some systems like loitering munitions, particularly radar-hunting missiles, are designed primarily for offense, to destroy defending air and missile defenses.

The future of autonomous weapons is unclear. A broad range of factors influence whether autonomy will ultimately favor offense or defense and the degree of this impact. These factors include the type and nature of the application, reliability in face of adversary interference, affordability and availability of application types, and overall effects on the cost of war.

Differing Applications of Autonomous Weapons

Autonomy can be applied to virtually any weapon system or platform, so the net effects for offense and defense depend on which applications prove most significant. For example, militaries are developing drone swarms to target air and missile defenses. Cheap drones may overwhelm and destroy defenses to ensure more expensive manned aircraft are safe from reprisal. But autonomy and artificial intelligence can also improve those same defenses, increasing the risks to manned aircraft. The need to protect against aerial drone swarm attacks is already driving improvements to those defenses.

The nature of the application matters too. Offense-defense theorists argue nuclear weapons are the ultimate defensive weapon, because they ensure nuclear-armed states can retaliate with overwhelming destruction. If so, then any technology weakening that deterrent must favor the offense. Some researchers have argued that the creation of massive underwater sensor networks may render the ocean transparent, effectively eliminating undersea nuclear second-strike capabilities. Hypothetically, a mixture of unmanned undersea, surface, and aerial vehicles; sensors; and manned anti-submarine warfare systems would comb the ocean to find nuclear submarines. The argument goes that if an adversary can locate every nuclear submarine, they may be able to destroy them all in a single strike. The reality is likely more complex, given the challenge of processing and managing such a huge network and carrying out strikes against identified targets. Nonetheless, any meaningful risk to nuclear stability certainly would have outsized global effects.

Autonomy and artificial intelligence can also play a support role for manned forces. Autonomous vehicles can provide logistical support by helping transport supplies and forces to the battlefield, which would favor offensive operations. But what happens if an autonomous convoy comes under attack? Limits on autonomous cognition may inhibit their response, making them more of a liability than an asset. At the same time, autonomous systems can help collect intelligence to identify targets, assess enemy defenses, and plan military actions, and artificial intelligence can help sift and process that information. This can help attackers identify vulnerabilities and plan attacks, while also granting defenders better situational awareness to monitor movements of attackers and plan ambushes.

Reliability of Autonomous Weapons

The reliability of autonomous weapons and platforms also affects the net impact of autonomy on offense and defense. Current machine vision systems are heavily dependent on training data. Machines require large amounts of data to know the difference between a car, a tank, a cruise ship, and a naval destroyer. Training data may exhibit structural biases that affect how and when the weapons are used. For example, an autonomous weapon might be able to recognize an unobstructed tank on a sunny day, but what about a foggy or snowy day, or if the tank is partially obscured by a tree or a building? Militaries cannot always anticipate the environmental characteristics of where the autonomous systems are deployed. Differences in climate, flora, architecture, and other factors may create reliability problems in deployed areas. For example, Russia deployed the unmanned combat vehicle Uran-9 in Syria, but it worked poorly: it struggled to spot enemies farther away than 1.25 miles, its sensors and weapons were useless when moving, the tracked suspension had unreliable parts, and the remote control system had a much shorter range than expected. For autonomous weapons to be meaningful in combat, these challenges need to be resolved.

Enemy action also affects reliability. In the civilian domain, mere stickers on a stop sign have been enough to manipulate an autonomous car. Hopefully, a military autonomous system would use multiple sensors to prevent such a simple manipulation, but interference is still possible. At an extreme level, manipulation could cause autonomous weapons to fire on friendly forces. The unreliability of autonomous weapons may mean that they tend to favor the defense, as defending militaries can more readily control and influence the environment to cause problems for attacking forces.

Effects on the Cost of War

Conversely, autonomous weapons may make wars easier to start, benefiting the offense. Autonomous weapons necessarily reduce the immediate risk to human operators by removing humans from the battlefield, potentially reducing the perceived risk of military conflict. If war is cheaper to start, then theoretically, war will happen more readily. A military composed of numerous autonomous and unmanned systems would exacerbate those concerns. A drone support force is one thing; an army of them is another entirely.

Affordability of Autonomous Weapons

The affordability of autonomous weapons may favor defense. In theory, autonomous weapons should be cheaper than manned systems. They can be more readily mass produced since they do not need to support human life, and may be as simple as a small drone with a bomb strapped onto it. And defensive autonomous weapons are likely to be cheaper than offensive weapons, because defensive weapons do not require mobility and may be designed to control a known, expected battlefield. If so, autonomous weapons and platforms may be deployed in larger numbers by defending states, allowing them to impose greater costs on an attacking state.

Conclusion

Autonomous weapons and artificial intelligence are clearly growing features of global conflict, but what that means for global stability is unclear. Researchers and policy-makers need to better understand what weapons are used, how they are used, and by whom. That requires research and analysis at all levels of warfare, across all domains—land, sea, air, and space. States and civil society need to exploit the opportunities autonomy offers, while identifying and countering the risks. Global security depends on it.



Zachary Kallenborn is a Policy Fellow at the Schar School of Policy and Government, a Research Affiliate with the Unconventional Weapons and Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism (START), the Master Coordination Director for Project Exodus Relief helping evacuate high-risk Afghan refugees, an officially proclaimed U.S. Army “Mad Scientist,” and national security consultant. His research on autonomous weapons, drone swarms, weapons of mass destruction (WMD) and WMD terrorism has been published in a wide range of peer-reviewed, wonky, and popular outlets, including the Brookings Institution, Foreign Policy, Slate, War on the Rocks, and the Nonproliferation Review. Journalists have written about and shared that research in the New York Times, NPR, Forbes, the New Scientist and WIRED, among numerous others.


Shugan188
Moderator

Messages: 5531
Joined: 12/05/2015
Location: Maroc
Nationality: Maroc

Subject: Re: Artificial Intelligence (Mon 10 Jul 2023 - 2:38)

Shugan188
Moderator

Messages: 5531
Joined: 12/05/2015
Location: Maroc
Nationality: Maroc

Subject: Re: Artificial Intelligence (Thu 20 Jun 2024 - 23:43)

Quote:


How AI is changing warfare


IN LATE 2021 the Royal Navy approached Microsoft and Amazon Web Services, a pair of American tech giants, with a question: Was there a better way to wage war? More specifically, could they find a more effective way to co-ordinate between a hypothetical commando strike team in the Caribbean and the missile systems of a frigate? The tech firms collaborated with BAE Systems, a giant armsmaker, and Anduril, a smaller upstart, among other military contractors. Within 12 weeks—unfathomably fast in the world of defence procurement—the consortium gathered in Somerset in Britain for a demonstration of what was dubbed StormCloud.

Marines on the ground, drones in the air and many other sensors were connected over a “mesh” network of advanced radios that allowed each to see, seamlessly, what was happening elsewhere—a set-up that had already allowed the marines to run circles around much larger forces in previous exercises. The data they collected were processed both on the “edge” of the network, aboard small, rugged computers strapped to commando vehicles with bungee cables—and on distant cloud servers, where they had been sent by satellite. Command-and-control software monitored a designated area, decided which drones should fly where, identified objects on the ground and suggested which weapon to strike which target.
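
The StormCloud description above, with sensors feeding a shared network and software suggesting which weapon should strike which target, is essentially a scoring loop over detections and available effectors, with a human reviewing the suggestions. The toy sketch below illustrates only that pairing logic; the data structures, ranges and scoring weights are invented and do not describe how StormCloud actually works.

```python
# Toy weapon-target pairing: score each (weapon, detection) pair and suggest
# the best match for human review. All data, ranges and weights are invented.
from dataclasses import dataclass
from math import hypot

@dataclass
class Detection:
    ident: str
    kind: str          # e.g. "vehicle", "boat"
    x: float
    y: float
    confidence: float  # 0..1 from the object recogniser

@dataclass
class Weapon:
    name: str
    x: float
    y: float
    max_range: float
    suitable_for: set

def suggest_pairings(detections, weapons):
    """Return (detection, weapon, score) suggestions, best first."""
    suggestions = []
    for d in detections:
        best = None
        for w in weapons:
            dist = hypot(d.x - w.x, d.y - w.y)
            if dist > w.max_range or d.kind not in w.suitable_for:
                continue
            # Favour confident detections and shorter engagement distances.
            score = d.confidence * (1.0 - dist / w.max_range)
            if best is None or score > best[2]:
                best = (d, w, score)
        if best:
            suggestions.append(best)
    return sorted(suggestions, key=lambda s: s[2], reverse=True)

dets = [Detection("D1", "vehicle", 4.0, 2.0, 0.9), Detection("D2", "boat", 9.0, 9.0, 0.6)]
wpns = [Weapon("frigate-missile", 0.0, 0.0, 30.0, {"boat", "vehicle"})]
for d, w, s in suggest_pairings(dets, wpns):
    print(f"{w.name} -> {d.ident} (score {s:.2f})")
```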

The results were impressive. It was apparent that StormCloud was the “world’s most advanced kill chain”, says an officer involved in the experiment, referring to a web of sensors (like drones) and weapons (like missiles) knitted together with digital networks and software to make sense of the data flowing to and fro. Even two years ago, he says, it was “miles ahead”, in terms of speed and reliability, of human officers in a conventional headquarters.

AI-enabled tools and weapons are not just being deployed in exercises. They are also in use on a growing scale in places like Gaza and Ukraine. Armed forces spy remarkable opportunities. They also fear being left behind by their adversaries. Spending is rising fast (see chart 1). But lawyers and ethicists worry that AI will make war faster, more opaque and less humane. The gap between the two groups is growing bigger, even as the prospect of a war between great powers looms larger.

There is no single definition of AI. Things that would once have merited the term, such as the terrain-matching navigation of Tomahawk missiles in the 1980s or the tank-spotting capabilities of Brimstone missiles in the early 2000s, are now seen as workaday software. And many cutting-edge capabilities described as AI do not involve the sort of “deep learning” and large language models underpinning services such as ChatGPT. But in various guises, AI is trickling into every aspect of war.
ProsAIc but gAInful

That begins with the boring stuff: maintenance, logistics, personnel and other tasks necessary to keep armies staffed, fed and fuelled. A recent study by the RAND Corporation, a think-tank, found that AI, by predicting when maintenance would be needed on A-10C warplanes, could save America’s air force $25m a month by avoiding breakdowns and overstocking of parts (although the AI did worse with parts that rarely failed). Logistics is another promising area. The US Army is using algorithms to predict when Ukrainian howitzers will need new barrels, for instance. AI is also starting to trickle into HR. The army is using a model trained on 140,000 personnel files to help score soldiers for promotion.
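
Predictive maintenance of the kind the RAND study examined is, at heart, supervised prediction on usage and sensor logs. The scikit-learn sketch below is a hedged illustration of that workflow on synthetic data; the features and the failure relationship are invented and bear no relation to the A-10C study's actual models.

```python
# Predictive-maintenance pattern: learn failure risk from usage/sensor logs.
# The synthetic data and features are invented to show the workflow only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
flight_hours = rng.uniform(0, 400, n)        # hours since last overhaul
vibration = rng.normal(1.0, 0.3, n)          # arbitrary vibration index
oil_temp = rng.normal(80, 10, n)             # degrees C

# Synthetic ground truth: failures get likelier with hours and vibration.
p_fail = 1 / (1 + np.exp(-(0.01 * flight_hours + 2.0 * vibration - 6.0)))
failed = rng.random(n) < p_fail

X = np.column_stack([flight_hours, vibration, oil_temp])
X_tr, X_te, y_tr, y_te = train_test_split(X, failed, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))

# Flag the highest-risk airframes for early inspection.
risk = model.predict_proba(X_te)[:, 1]
print("inspect first (test-set indices):", np.argsort(risk)[::-1][:5])
```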

At the other extreme is the sharp end of things. Both Russia and Ukraine have been rushing to develop software to make drones capable of navigating to and homing in on a target autonomously, even if jamming disrupts the link between pilot and drone. Both sides typically use small chips for this purpose, which can cost as little as $100. Videos of drone strikes in Ukraine increasingly show “bounding boxes” appearing around objects, suggesting that the drone is identifying and locking on to a target. The technology remains immature, with the targeting algorithms confronting many of the same problems faced by self-driving cars, such as cluttered environments and obscured objects, and some unique to the battlefield, such as smoke and decoys. But it is improving fast.
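
Those on-screen "bounding boxes" reflect a simple loop: an onboard detector returns boxes every frame, the guidance code picks the box it believes is the target and steers to centre it. The sketch below is schematic only; the `detect` function is a placeholder for whatever model a drone might run, and nothing here describes a real Ukrainian or Russian system.

```python
# Schematic terminal-guidance loop: pick the most confident detection of the
# wanted class and compute a steering offset that centres it in the frame.
# `detect` is a placeholder stub for an onboard object-detection model.
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]  # x_min, y_min, x_max, y_max (pixels)

def detect(frame) -> List[Tuple[str, float, Box]]:
    """Placeholder: a real system would run a neural detector here."""
    return [("vehicle", 0.87, (300.0, 200.0, 360.0, 250.0))]

def steering_command(frame, frame_w: int, frame_h: int,
                     wanted: str = "vehicle") -> Optional[Tuple[float, float]]:
    """Return (horizontal, vertical) offsets in [-1, 1], or None if no lock."""
    candidates = [(conf, box) for cls, conf, box in detect(frame) if cls == wanted]
    if not candidates:
        return None                      # lost lock: hold course or re-acquire
    conf, (x0, y0, x1, y1) = max(candidates)
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    return ((cx - frame_w / 2) / (frame_w / 2),
            (cy - frame_h / 2) / (frame_h / 2))

print(steering_command(None, 640, 480))  # e.g. (0.03, -0.06): target right of and above centre
```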

Between AI at the back-end and AI inside munitions lies a vast realm of innovation, experimentation and technological advances. Drones, on their own, are merely disrupting, rather than transforming, war, argue Clint Hinote, a retired American air-force general, and Mick Ryan, a retired Australian general. But when combined with “digitised command and control systems” (think StormCloud) and “new-era meshed networks of civilian and military sensors” the result, they say, is a “transformative trinity” that allows soldiers on the front lines to see and act on real-time information that would once have been confined to a distant headquarters.

AI is a prerequisite for this. Start with the mesh of sensors. Imagine data from drones, satellites, social media and other sources sloshing around a military network. There is too much to process manually. Tamir Hayman, a general who led Israeli military intelligence until 2021, points to two big breakthroughs. The “fundamental leap”, he says, eight or nine years ago, was in speech-to-text software that enabled voice intercepts to be searched for keywords. The other was in computer vision. Project Spotter, in Britain’s defence ministry, is already using neural networks for the “automated detection and identification of objects” in satellite images, allowing places to be “automatically monitored 24/7 for changes in activity”. As of February, a private company had labelled 25,000 objects to train the model.
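
General Hayman's "fundamental leap", speech-to-text that makes voice intercepts searchable, reduces at its simplest to keyword matching over transcripts. A minimal sketch of that second step follows, assuming the transcripts have already been produced by some upstream speech-to-text engine; the watch list and sample data are invented for illustration.

```python
# Minimal keyword flagging over already-transcribed intercepts. Assumes an
# upstream speech-to-text engine produced the transcripts; the keywords and
# sample data are invented for illustration.
import re
from collections import defaultdict

KEYWORDS = {"convoy", "bridge", "launcher"}          # illustrative watch list

def flag_transcripts(transcripts: dict) -> dict:
    """Map each keyword to the transcript IDs in which it appears."""
    hits = defaultdict(list)
    for tid, text in transcripts.items():
        tokens = set(re.findall(r"[a-z']+", text.lower()))
        for kw in KEYWORDS & tokens:
            hits[kw].append(tid)
    return dict(hits)

sample = {
    "call-001": "The convoy leaves at dawn and crosses the old bridge.",
    "call-002": "Nothing to report, weather is poor.",
}
print(flag_transcripts(sample))   # {'convoy': ['call-001'], 'bridge': ['call-001']}
```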

Tom Copinger-Symes, a British general, told the House of Lords last year that such systems were “still in the upper ends of research and development rather than in full-scale deployment”, though he pointed to the use of commercial tools to identify, for instance, clusters of civilians during Britain’s evacuation of its citizens from Sudan in early 2023. America seems further along. It began Project Maven in 2017 to deal with the deluge of photos and videos taken by drones in Afghanistan and Iraq.

Maven is “already producing large volumes of computer-vision detections for warfighter requirements”, noted the director of the National Geospatial-Intelligence Agency, which runs the project, in May. The stated aim is for Maven “to meet or exceed human detection, classification, and tracking performance”. It is not there yet—it struggles with tricky cases, such as partly hidden weapons. But The Economist’s tracker of war-related fires in Ukraine is based on machine learning, is entirely automated and operates at a scale that journalists could not match. It has already detected 93,000 probable war-related blazes.

AI can process more than phone calls or pictures. In March the Royal Navy announced that its mine-hunting unit had completed a year of experimentation in the Persian Gulf using a small self-driving boat, the Harrier, whose towed sonar system could search for mines on the seabed and alert other ships or units on land. And Michael Horowitz, a Pentagon official, recently told Defense News, a website, that America, Australia and Britain, as part of their AUKUS pact, had developed a “trilateral algorithm” that could be used to process the acoustic data collected by sonobuoys dropped from each country’s submarine-hunting P-8 aircraft.

In most of these cases, AI is identifying a signal amid the noise or an object amid some clutter: Is that a truck or a tank? An anchor or a mine? A trawler or a submarine? Identifying human combatants is perhaps more complicated and certainly more contentious. In April +972 Magazine, an Israeli outlet, claimed that the Israel Defence Forces (IDF) were using an AI tool known as Lavender to identify thousands of Palestinians as targets, with human operators giving only cursory scrutiny to the system’s output before ordering strikes. The IDF retorted that Lavender was “simply a database whose purpose is to cross-reference intelligence sources”.

In practice, Lavender is likely to be what experts call a decision-support system (DSS), a tool to fuse different data such as phone records, satellite images and other intelligence. America’s use of computer systems to process acoustic and smell data from sensors in Vietnam might count as a primitive DSS. So too, notes Rupert Barrett-Taylor of the Alan Turing Institute in London, would the software used by American spies and special forces in the war on terror, which turned phone records and other data into huge spidery charts that visualised the connections between people and places, with the aim of identifying insurgents or terrorists.
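
The "spidery charts" built from phone records are link analysis on a contact graph; centrality measures then highlight the most connected nodes. The small illustration below uses networkx on made-up call records; both the data and the idea that a single centrality score identifies anyone of interest are deliberate simplifications.

```python
# Link analysis on made-up call records: build a contact graph and rank
# nodes by degree centrality. Data are invented; real analysis fuses many
# more sources and far more judgement than one metric.
import networkx as nx

calls = [
    ("A", "B"), ("A", "C"), ("A", "D"),
    ("B", "C"), ("D", "E"), ("E", "F"), ("A", "F"),
]

g = nx.Graph()
g.add_edges_from(calls)

centrality = nx.degree_centrality(g)
ranked = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)
for node, score in ranked:
    print(f"{node}: degree centrality {score:.2f}")
```
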
ExplAIn or ordAIn?

What is different is that today’s software benefits from greater computing power, whizzier algorithms (the breakthroughs in neural networks occurred only in 2012) and more data, owing to the proliferation of sensors. The result is not just more or better intelligence. It is a blurring of the line between intelligence, surveillance and reconnaissance (ISR) and command and control (C2)—between making sense of data and acting on it.

Consider Ukraine’s GIS Arta software, which collates data on Russian targets, typically for artillery batteries. It can already generate lists of potential targets “according to commander priorities”, write Generals Hinote and Ryan. One of the reasons that Russian targeting in Ukraine has improved in recent months, say officials, is that Russia’s C2 systems are getting better at processing information from drones and sending it to guns. “By some estimates,” writes Arthur Holland Michel in a paper for the International Committee of the Red Cross (ICRC), a humanitarian organisation, “a target search, recognition and analysis activity that previously took hours could be reduced…to minutes.”
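
Generating target lists "according to commander priorities", as GIS Arta reportedly does, can be thought of as weighted scoring over candidate targets. The toy prioritiser below shows only that general idea; the categories, weights, staleness rule and data are invented and say nothing about the real software.

```python
# Toy target prioritiser: rank candidate targets by commander-set weights.
# Categories, weights and candidate data are invented for illustration.
COMMANDER_WEIGHTS = {          # higher = more important right now
    "artillery": 1.0,
    "air_defence": 0.9,
    "logistics": 0.6,
    "infantry": 0.3,
}

def prioritise(targets, weights=COMMANDER_WEIGHTS, max_age_min=30):
    """Score fresh targets by category weight, confidence and staleness."""
    scored = []
    for t in targets:
        if t["age_min"] > max_age_min:
            continue                               # too stale to engage
        freshness = 1.0 - t["age_min"] / max_age_min
        score = weights.get(t["category"], 0.1) * t["confidence"] * freshness
        scored.append((score, t["ident"]))
    return sorted(scored, reverse=True)

targets = [
    {"ident": "T1", "category": "artillery", "confidence": 0.8, "age_min": 5},
    {"ident": "T2", "category": "logistics", "confidence": 0.9, "age_min": 20},
    {"ident": "T3", "category": "infantry", "confidence": 0.95, "age_min": 40},
]
print(prioritise(targets))   # T1 first; T3 dropped as stale
```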

The US Air Force recently asked the RAND Corporation to assess whether AI tools could provide options to a “space warfighter” dealing with an incoming threat to a satellite. The conclusion was that AI could indeed recommend “high-quality” responses. Similarly, DARPA, the Pentagon’s blue-sky research arm, is working on a programme named, with tongue firmly in cheek, the Strategic Chaos Engine for Planning, Tactics, Experimentation and Resiliency (SCEPTER)—to produce recommended actions for commanders during “military engagements at high machine speeds”. In essence, it can generate novel war plans on the fly.

“A lot of the methods that are being employed” in SCEPTER and similar DARPA projects “didn’t even exist two to five years ago”, says Eric Davis, a programme manager at the agency. He points to the example of “Koopman operator theory”, an old and obscure mathematical framework that can be used to analyse complex and non-linear systems—like those encountered in war—in terms of simpler linear algebra. Recent breakthroughs in applying it have made a number of AI problems more tractable.
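
For readers unfamiliar with it, the Koopman idea can be stated compactly. The sketch below is the standard textbook formulation of the operator and its common data-driven approximation, not DARPA's specific implementation.

```latex
% Koopman operator: linear evolution of observables of a nonlinear system.
% Standard formulation, not DARPA's specific implementation.
Given discrete-time dynamics $x_{k+1} = F(x_k)$ with nonlinear $F$, the Koopman
operator $\mathcal{K}$ acts on scalar observables $g$ by composition:
\[
  (\mathcal{K} g)(x) = g(F(x)), \qquad \text{so} \qquad g(x_{k+1}) = (\mathcal{K} g)(x_k).
\]
$\mathcal{K}$ is linear even when $F$ is not, at the price of being
infinite-dimensional. In practice one chooses a finite dictionary of observables
$\Phi(x) = [\phi_1(x), \dots, \phi_n(x)]^{\top}$ and fits a matrix approximation
$K$ from data snapshots by least squares (as in extended dynamic mode
decomposition):
\[
  K = \arg\min_{A \in \mathbb{R}^{n \times n}} \sum_{k} \bigl\| \Phi(x_{k+1}) - A\,\Phi(x_k) \bigr\|^2 ,
\]
after which the nonlinear dynamics can be analysed and predicted with ordinary
linear algebra, since $\Phi(x_{k+1}) \approx K\,\Phi(x_k)$.
```
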
PrAIse and complAInts

The result of all this is a growing intellectual chasm between those whose job it is to wage war and those who seek to tame it. Legal experts and ethicists argue that the growing role of AI in war is fraught with danger. “The systems we have now cannot recognise hostile intent,” argues Noam Lubell of the University of Essex. “They cannot tell the difference between a short soldier with a real gun and a child with a toy gun…or between a wounded soldier lying slumped over a rifle and a sniper ready to shoot with a sniper rifle.” Such algorithms “cannot be used lawfully”, he concludes. Neural networks can also be fooled too easily, says Stuart Russell, a computer scientist: “You could then take perfectly innocent objects, like lampposts, and print patterns on them that would convince the weapon that this is a tank.”

Advocates of military AI retort that the sceptics have an overly rosy view of war. A strike drone hunting for a particular object might not be able to recognise, let alone respect, an effort at surrender, acknowledges a former British officer involved in policy on AI. But if the alternative is intense shellfire, “There is no surrendering in that circumstance anyway.” Keith Dear, a former officer in the Royal Air Force who now works for Fujitsu, a Japanese firm, goes further. “If…machines produce a lower false positive and false negative rate than humans, particularly under pressure, it would be unethical not to delegate authority,” he argues. “We did various kinds of tests where we compared the capabilities and the achievements of the machine and compared to that of the human,” says the IDF’s General Hayman. “Most tests reveal that the machine is far, far, far more accurate…in most cases it’s no comparison.”
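
Dear's argument turns on two numbers that are easy to define precisely: the false-positive rate (engaging something that was not a lawful target) and the false-negative rate (missing one that was). A toy comparison on invented labels, purely to show the arithmetic:
Code:
import numpy as np

truth   = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])   # 1 = lawful target (invented ground truth)
machine = np.array([1, 0, 1, 1, 0, 0, 1, 1, 0, 1])   # hypothetical machine calls
human   = np.array([1, 0, 0, 1, 1, 0, 1, 1, 0, 0])   # hypothetical human calls

def rates(pred, truth):
    fpr = np.sum((pred == 1) & (truth == 0)) / np.sum(truth == 0)   # false-positive rate
    fnr = np.sum((pred == 0) & (truth == 1)) / np.sum(truth == 1)   # false-negative rate
    return fpr, fnr

print("machine FPR/FNR:", rates(machine, truth))
print("human   FPR/FNR:", rates(human, truth))
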
[Illustration: an AI system analysing the scene of a soldier putting his gun on the ground and raising his hands]

One fallacy involves extrapolating from the anti-terror campaigns of the 2000s. “The future’s not about facial recognition-ing a guy and shooting him from 10,000 feet,” argues Palmer Luckey, the founder of Anduril, one of the firms involved in StormCloud. “It’s about trying to shoot down a fleet of amphibious landing craft in the Taiwan Strait.” If an object has the visual, electronic and thermal signature of a missile launcher, he argues, “You just can’t be wrong…it’s so incredibly unique.” Pre-war modelling further reduces uncertainty: “99% of what you see happening in the China conflict will have been run in a simulation multiple times,” Mr Luckey says, “long before it ever happens.”

“The problem is when the machine does make mistakes, those are horrible mistakes,” says General Hayman. “If accepted, they would lead to traumatic events.” He therefore opposes taking the human “out of the loop” and automating strikes. “It is really tempting,” he acknowledges. “You will accelerate the procedure in an unprecedented manner. But you can breach international law.” Mr Luckey concedes that AI will be least relevant in the “dirty, messy, awful” job of Gaza-style urban warfare. “If people imagine there’s going to be Terminator robots looking for the right Muhammad and shooting him… that’s not how it’s going to work out.”

For its part, the ICRC warns that AI systems are potentially unpredictable, opaque and subject to bias, but accepts they “can facilitate faster and broader collection and analysis of available information…minimising risks for civilians”. Much depends on how the tools are used. If the IDF employed Lavender as reported, it suggests the problem was over-expansive rules of engagement and lax operators, rather than any pathology of the software itself.

For many years experts and diplomats have been wrangling at the United Nations over whether to restrict or ban autonomous weapon systems (AWS). But even defining them is difficult. The ICRC says AWS are those which choose a target based on a general profile—any tank, say, rather than a specific tank. That would include many of the drones being used in Ukraine. The ICRC favours a ban on AWS which target people or behave unpredictably. Britain retorts that “fully” autonomous weapons are those which identify, select and attack targets without “context-appropriate human involvement”, a much higher bar. The Pentagon takes a similar view, emphasising “appropriate levels of human judgment”.

Defining that, in turn, is fiendishly hard. And it is not just to do with the lethal act, but what comes before it. A highly autonomous attack drone may seem to lack human control. But if its behaviour is well understood and it is used in an area where there are known to be legitimate military targets and no civilians, it might pose few problems. Conversely, a tool which merely suggests targets may appear more benign. But commanders who manually approve individual targets suggested by the tool “without cognitive clarity or awareness”, as Article 36, an advocacy group, puts it—mindlessly pushing the red button, in other words—have abdicated moral responsibility to a machine.

The quandary is likely to worsen for two reasons. One is that AI begets AI. If one army is using AI to locate and hit targets more rapidly, the other side may be forced to turn to AI to keep up. That is already the case when it comes to air-defence, where advanced software has been essential for tracking approaching threats since the dawn of the computer age. The other reason is that it will become harder for human users to grasp the behaviour and limitations of military systems. Modern machine learning is not yet widely used in “critical” decision-support systems, notes Mr Holland Michel. But it will be. And those systems will undertake “less mathematically definable tasks”, he notes, such as predicting the future intent of an adversary or even his or her emotional state.

There is even talk of using AI in nuclear decision-making. The idea is that countries could not only fuse data to keep track of incoming threats (as has happened since the 1950s) but also retaliate automatically if the political leadership is killed in a first strike. The Soviet Union worked on this sort of “dead hand” concept during the cold war as part of its “Perimetr” system. It remains in use and is now rumoured to be reliant on AI-driven software, notes Leonid Ryabikhin, a former Soviet air-force officer and arms-control expert. In 2023 a group of American senators even introduced a new bill: the “Block Nuclear Launch by Autonomous Artificial Intelligence Act”. This is naturally a secretive area and little is known about how far different countries want to go. But the issue is important enough to have been high up the agenda for presidential talks last year between Joe Biden and Xi Jinping.
RemAIning in the loop

For the moment, in conventional wars, “there’s just about always time for somebody to say yes or no,” says a British officer. “There’s no automation of the whole kill chain needed or being pushed.” Whether that would be true in a high-intensity war with Russia or China is less clear. In “The Human Machine Team”, a book published under a pseudonym in 2021, Brigadier-General Yossi Sariel, the head of an elite Israeli military-intelligence unit, wrote that an AI-enabled “human-machine team” could generate “thousands of new targets every day” in a war. “There is a human bottleneck,” he argued, “for both locating the new targets and decision-making to approve the targets.”

In practice, all these debates are being superseded by events. Neither Russia nor Ukraine pays much heed to whether a drone is an “autonomous” weapon system or merely an “automated” one. Their priority is to build weapons that can evade jamming and destroy as much enemy armour as possible. False positives are not a big concern for a Russian army that has bombed more than 1,000 Ukrainian health facilities to date, nor for a Ukrainian army that is fighting for its survival.

Hanging over this debate is also the spectre of a war involving great powers. NATO countries know that, once this war ends, they may have to contend with a Russian army with extensive experience of building AI weapons and testing them on the battlefield. China, too, is pursuing many of the same technologies as America. Chinese firms make the vast majority of drones sold in America, whether as consumer goods or for industrial purposes. The Pentagon’s annual report on Chinese military power observes that in 2022 the People’s Liberation Army (PLA) began discussing “Multi-Domain Precision Warfare”: the use of “big data and artificial intelligence to rapidly identify key vulnerabilities” in American military systems, such as satellites or computer networks, which could then be attacked.

The question is who has the upper hand. American officials once fretted that China’s lax privacy rules and control over the private sector would give the PLA access to more and better data, which would result in superior algorithms and weapons. Those concerns have mellowed. A recent study of procurement data by the Centre for Security and Emerging Technology (CSET) at Georgetown University found that America and China are “devoting comparable levels of attention to a similar suite of AI applications”.

Moreover, America has pulled ahead in cutting-edge models, thanks in part to its chip restrictions. In 2023 it produced 61 notable machine-learning models and Europe 25, according to Epoch AI, a data firm. China produced 15. These are not the models in current military systems, but they will inform future ones. “China faces significant headwinds in…military AI,” argues Sam Bresnick of CSET. It is unclear whether the PLA has the tech talent to create world-class systems, he points out, and its centralised decision-making might impede AI decision-support. Many Chinese experts are also worried about “untrustworthy” AI. “The PLA possesses plenty of lethal military power,” notes Jacob Stokes of CNAS, another think-tank, “but right now none of it appears to have meaningful levels of autonomy enabled by AI”.

China’s apparent sluggishness is part of a broader pattern. Some, like Kenneth Payne of King’s College London, think AI will transform not just the conduct of war, but its essential nature. “This fused machine-human intelligence would herald a genuinely new era of decision-making in war,” he predicts. “Perhaps the most revolutionary change since the discovery of writing, several thousand years ago.” But even as such claims grow more plausible, the transformation remains stubbornly distant in many respects.

“The irony here is that we talk as if AI is everywhere in defence, when it is almost nowhere,” notes Sir Chris Deverell, a retired British general. “The penetration of AI in the UK Ministry of Defence is almost zero…There is a lot of innovation theatre.” A senior Pentagon official says that the department has made serious progress in improving its data infrastructure—the pipes along which data move—and in unmanned aircraft that work alongside warplanes with crews. Even so, the Pentagon spends less than 1% of its budget on software—a statistic frequently trotted out by executives at defence-tech startups. “What is unique to the [Pentagon] is that our mission involves the use of force, so the stakes are high,” says the official. “We have to adopt AI both quickly and safely.”

Meanwhile, Britain’s StormCloud is getting “better and better”, says an officer involved in its development, but the project has moved slowly because of internal politics and red tape around the accreditation of new technology. Funding for its second iteration was a paltry £10m, pocket money in the world of defence. The plan is to use it on several exercises this year. “If we were Ukraine or genuinely worried about going to war any time soon,” the officer says, “we’d have spent £100m-plus and had it deployed in weeks or months.” ■


https://www.economist.com/briefing/2024/06/20/how-ai-is-changing-warfare
