Moroccan Military Forum alias FAR-MAROC

Artificial Intelligence
Shugan188 (Moderator)
Posts: 5198 | Joined: 12/05/2015 | Location: Morocco | Nationality: Moroccan
Subject: Artificial Intelligence   Sun 29 Sep 2019 - 16:23

https://warontherocks.com/2019/09/terrorist-groups-artificial-intelligence-and-killer-drones/

Quote:


Terrorist Groups, Artificial Intelligence, and Killer Drones

Jacob Ware


Editor’s Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It is based on a chapter by the authors in the forthcoming book ‘AI at War’ and addresses the first question (part a. and b.) which asks how AI will impact competition below the threshold of armed conflict, and what might happen if the United States fails to develop robust AI capabilities that address national security issues.

In 2016, the Islamic State of Iraq and the Levant (ISIL) carried out its first successful drone attack in combat, killing two Peshmerga fighters in northern Iraq. The attack continued the group’s record of employing increasingly sophisticated technologies against its enemies, a trend mimicked by other nonstate armed groups around the world. The following year, the group announced the formation of the “Unmanned Aircraft of the Mujahedeen,” a division dedicated to the development and use of drones, marking a more formal step toward the long-term weaponization of drone technology.

Terrorist groups are increasingly using 21st-century technologies, including drones and elementary artificial intelligence (AI), in attacks. As it continues to be weaponized, AI could prove a formidable threat, allowing adversaries — including nonstate actors — to automate killing on a massive scale. The combination of drone expertise and more sophisticated AI could allow terrorist groups to acquire or develop lethal autonomous weapons, or “killer robots,” which would dramatically increase their capacity to create incidents of mass destruction in Western cities. As it expands its artificial intelligence capabilities, the U.S. government should also strengthen its anti-AI capacity, paying particular attention to nonstate actors and the enduring threats they pose. For the purposes of this article, I define artificial intelligence as technology capable of “mimicking human brain patterns,” including by learning and making decisions.

AI Could Turn Drones into Killer Robots

The aforementioned ISIL attack was not the first case of nonstate actors employing drones in combat. In January 2018, an unidentified Syrian rebel group deployed a swarm of 13 homemade drones carrying small submunitions to attack Russian bases at Khmeimim and Tartus, while an August 2018 assassination attempt against Venezuela’s Nicolas Maduro used exploding drones. Iran and its militia proxies have deployed drone-carried explosives several times, most notably in the September 2019 attack on Saudi oil facilities near the country’s eastern coast.

Pundits fear that the drone’s debut as a terrorist tool against the West is not far off, and that “the long-term implications for civilian populations are sobering,” as James Phillips and Nathaniel DeBevoise note in a Heritage Foundation commentary. In September 2017, FBI Director Christopher Wray told the Senate that drones constituted an “imminent” terrorist threat to American cities, while the Department of Homeland Security warned of terrorist groups applying “battlefield experiences to pursue new technologies and tactics, such as unmanned aerial systems.” Meanwhile, ISIL’s success in deploying drones has been met with great excitement in jihadist circles. The group’s al-Naba newsletter celebrated a 2017 attack by declaring “a new source of horror for the apostates!”

The use of drones in combat indicates an intent and capability to innovate and use increasingly savvy technologies for terrorist purposes, a process sure to continue with more advanced forms of AI. Modern drones possess fairly elementary forms of artificial intelligence, but the technology is advancing: Self-piloted drones are in development, and the European Union is funding projects to develop autonomous swarms to patrol its borders.

AI will enable terrorist groups to threaten physical security in new ways, making the current terrorism challenge even more difficult to address. According to a February 2018 report, terrorists could benefit from commercially available AI systems in several ways. The report predicts that autonomous vehicles will be used to deliver explosives; low-skill terrorists will be endowed with widely available high-tech products; attacks will cause far more damage; terrorists will create swarms of weapons to “execute rapid, coordinated attacks”; and, finally, attackers will be farther removed from their targets in both time and location. As AI technology continues to develop and begins to proliferate, “AI [will] expand the set of actors who are capable of carrying out the attack, the rate at which these actors can carry it out, and the set of plausible targets.”

For many military experts and commentators, lethal autonomous weapon systems, or “killer robots,” are the most feared application of artificial intelligence in military technology. In the words of the American Conservative magazine, the difference between killer robots and current AI-drone technology is that, with killer robots, “the software running the drone will decide who lives and who dies.” Thus, killer robots, combining drone technology with more advanced AI, will possess the means and power to autonomously and independently engage humans. The lethal autonomous weapon has been called the “third revolution in warfare,” following gunpowder and nuclear weapons, and is expected to reinvent conflict, not least terrorist tactics.

Although completely autonomous weapons have not yet reached the world’s battlefields, current weapons are on the cusp. South Korea, for instance, has developed and deployed the Samsung SGR-A1 sentry gun to its border with North Korea. The gun supposedly can track movement and fire without human intervention. Robots train alongside marines in the California desert. Israel’s flying Harpy munition can loiter for hours before detecting and engaging targets, while the United States and Russia are developing tanks capable of operating autonomously. And the drones involved in the aforementioned rebel attack on Russian bases in Syria were equipped with altitude and leveling sensors, as well as preprogrammed GPS to guide them to a predetermined target.

Of particular concern is the possibility of swarming attacks, composed of thousands or millions of tiny killer robots, each capable of engaging its own target. The potentially devastating terrorist application of swarming autonomous drones is best summarized by Max Tegmark, who has said that “if a million such killer drones can be dispatched from the back of a single truck, then one has a horrifying weapon of mass destruction of a whole new kind: one that can selectively kill only a prescribed category of people, leaving everybody and everything else unscathed.” Precisely that hypothetical scenario was illustrated in a recent viral YouTube video, “Slaughterbots,” which depicted the release of thousands of small munitions into British university lecture halls. The drones then pursued and attacked individuals who had shared certain political social media posts. The video also depicted an attack targeting sitting U.S. policymakers on Capitol Hill. The video has been viewed over three million times and has been met with growing concern about potential terrorist applications of inevitable autonomous weapons technology. So far, nonstate actors have deployed “swarmed” drones only sparingly, but their use points to a worrying innovation: Swarming, weaponized killer robots aimed at civilian crowds would be nearly impossible to defend against and, if effective, could cause massive casualties.

Terrorists Will Be Interested in Acquiring Lethal Autonomous Weapons

Terrorist groups will be interested in artificial intelligence and lethal autonomous weapons for three reasons — cost, traceability, and effectiveness.

Firstly, killer robots are likely to be extremely cheap, while still maintaining lethality. Experts agree that lethal autonomous weapons, once fully developed, will provide a cost-effective alternative to terrorist groups looking to maximize damage, with Tegmark arguing that “small AI-powered killer drones are likely to cost little more than a smartphone.” Additionally, killer robots will minimize the human investment required for terrorist attacks, with scholars arguing that “greater degrees of autonomy enable a greater amount of damage to be done by a single person.” Artificial intelligence could make terrorist activity cheaper financially and in terms of human capital, lowering the organizational costs required to commit attacks.

Secondly, using autonomous weapons will reduce the trace left by terrorists. A large number of munitions could be launched — and a large amount of damage done — by a small number of people operating at considerable distance from the target, reducing the signature left behind. In Tegmark’s words, for “a terrorist wanting to assassinate a politician … all they need to do is upload their target’s photo and address into the killer robot: it can then fly to the destination, identify and eliminate the person, and self-destruct to ensure nobody knows who was responsible.” With autonomous weapons technology, terrorist groups will be able to launch increasingly complex attacks, and, when they want to, escape without detection.

Finally, killer robots could reduce, if not eliminate, the physical costs and dangers of terrorism, rendering the operative “essentially invulnerable.” Raising the possibility of “fly and forget” missions, lethal autonomous weapons might simply be deployed toward a target, and engage that target without further human intervention. As P. W. Singer noted in 2012, “one [will] not have to be suicidal to carry out attacks that previously might have required one to be so. This allows new players into the game, making al-Qaeda 2.0 and the next-generation version of the Unabomber or Timothy McVeigh far more lethal.” Additionally, lethal autonomous weapons could potentially reduce human aversion to killing, making terrorism even more palatable as a tactic for political groups. According to the aforementioned February 2018 report, “AI systems can allow the actors who would otherwise be performing the tasks to retain their anonymity and experience a greater degree of psychological distance from the people they impact”; this would not only improve a terrorist’s chances of escape, as mentioned, but reduce or even eliminate the moral or psychological barriers to murder.

Terrorist Acquisition of Lethal Autonomous Weapons Is Realistic

The proliferation of artificial intelligence and killer robot technology to terrorist organizations is realistic and likely to occur through three avenues — internal development, sales, and leaks.

Firstly, modern terrorist organizations have advanced scientific and engineering departments, and actively seek out skilled scientists for recruitment. ISIL, for example, has appealed for scientists to trek to the caliphate to work on drone and AI technology. The individual technologies behind swarming killer robots — including unmanned aerial vehicles, facial recognition, and machine-to-machine communication — already exist, and have been adapted by terrorist organizations for other means. According to a French defense industry executive, “the technological challenge of scaling it up to swarms and things like that doesn’t need any inventive step. It’s just a question of time and scale and I think that’s an absolute certainty that we should worry about.”

Secondly, autonomous weapons technology will likely proliferate through sales. Because AI research is led by private firms, advanced AI technology will be publicly sold on the open market. As Michael Horowitz argues, “militant groups and less-capable states may already have what they need to produce some simple autonomous weapon systems, and that capability is likely to spread even further for purely commercial reasons.” The current frameworks controlling high-tech weapons proliferation — the Wassenaar Arrangement and the Missile Technology Control Regime — are voluntary, and are constantly tested by great-power weapons development. Given interest in developing AI-guided weapons, this seems unlikely to change. Ultimately, as AI expert Toby Walsh notes, the world’s weapons companies can, and will, “make a killing (pun very much intended) selling autonomous weapons to all sides of every conflict.”

Finally, autonomous weapons technology is likely to leak. Innovation in the AI field is led by the private sector, not the military, because of the myriad commercial applications of the technology. This will make it more difficult to contain the technology, and prevent it from proliferating to nonstate actors. Perhaps the starkest warning has been issued by Paul Scharre, a former U.S. defense official: “We are entering a world where the technology to build lethal autonomous weapons is available not only to nation-states but to individuals as well. That world is not in the distant future. It’s already here.”

Counter-Terrorism Options

Drones and AI provide a particularly daunting counter-terrorism challenge, simply because effective counter-drone or anti-AI expertise does not yet exist. That said, as Daveed Gartenstein-Ross has noted, “in recent years, we have seen multiple failures in imagination as analysts tried to discern what terrorists will do with emerging technologies. A failure in imagination as artificial intelligence becomes cheaper and more widely available could be even costlier.” Action is urgently needed, and for now, counter-terrorism policies are likely to fit into two categories, each with flaws: defenses and bans.

Firstly, and most likely, Western states could strengthen their defenses against drones and weaponized AI. This might involve strengthening current counter-drone and anti-AI capabilities, improving training for local law enforcement, and establishing plans for mitigating drone or autonomous weapons incidents. AI technology and systems will surely play an important role in this space, including in the development of anti-AI tools. However, anti-AI defenses will be costly, and will need to be implemented across countless cities throughout the entire Western world, something Michael Horton calls “a daunting challenge that will require spending billions of dollars on electronic and kinetic countermeasures.” Swarms, Scharre notes, will prove “devilishly hard to target,” given the number of munitions and their ability to spread over a wide area. In addition, defenses will likely take a long time to erect effectively and will leave citizens exposed in the meantime. Beyond defenses, AI will also be used in counter-terrorism intelligence and online content moderation, although this will surely spark civil liberties challenges.

Secondly, the international community could look to ban AI use in the military through an international treaty sanctioned by the United Nations. This has been the strategy pursued by activist groups such as the Campaign to Stop Killer Robots, while leading artificial intelligence researchers and scientific commentators have published open letters warning of the risk of weaponized AI. That said, great powers are not likely to refrain from AI weapons development, and a ban might outlaw positive uses of militarized AI. The international community could also look to stigmatize, or delegitimize, weaponized AI and lethal autonomous weapons sufficiently to deter terrorist use. Although modern terrorist groups have proven extremely willing to improvise and innovate, and effective at doing so, there is an extensive list of weapons — chemical weapons, biological weapons, cluster munitions, barrel bombs, and more — accessible to terrorist organizations, but rarely used. This is partly down to the international stigma associated with those munitions — if a norm is strong enough, terrorists might avoid using a weapon. However, norms take a long time to develop, and are fragile and untrustworthy solutions. Evidently, good counter-terrorism options are limited.

The U.S. government and its intelligence agencies should continue to treat AI and lethal autonomous weapons as priorities, and identify new possible counter-terrorism measures. Fortunately, some progress has been made: Nicholas Rasmussen, former director of the National Counterterrorism Center, admitted at a Senate Homeland Security and Governmental Affairs Committee hearing in September 2017 that “there is a community of experts that has emerged inside the federal government that is focused on this pretty much full time. Two years ago this was not a concern … We are trying to up our game.”

Nonstate actors are already deploying drones to attack their enemies. Lethal autonomous weapon systems are likely to proliferate to terrorist groups, with potentially devastating consequences. The United States and its allies should urgently address the rising threat by preparing stronger defenses against possible drone and swarm attacks, engaging with the defense industry and AI experts warning of the threat, and supporting realistic international efforts to ban or stigmatize military applications of artificial intelligence. Although the likelihood of such an event is low, a killer robot attack could cause massive casualties, strike a devastating blow to the U.S. homeland, and cause widespread panic. The threat is imminent, and the time has come to act.

Jacob Ware holds a master’s in security studies from Georgetown University and an MA (Hons) in international relations and modern history from the University of St Andrews. His research has previously appeared with the International Centre for Counter-Terrorism – The Hague.

Shugan188 (Moderator)
Subject: Re: Artificial Intelligence   Tue 26 Nov 2019 - 20:52

https://breakingdefense.com/2019/11/exclusive-pentagons-ai-problem-is-dirty-data-lt-gen-shanahan/

Quote:

EXCLUSIVE Pentagon’s AI Problem Is ‘Dirty’ Data: Lt. Gen. Shanahan

Sydney J. Freedberg Jr.

CRYSTAL CITY: “Some people say data is the new oil. I don’t like that,” the Defense Department’s AI director told me in his office here. “I treat it as mineral ore: There’s a lot of crap. You have to filter out the impurities from the raw material to get the gold nuggets.”

Lt. Gen. Jack Shanahan learned this the hard way as head of the much-debated Project Maven, which he led for two years before becoming the founding director of the Joint Artificial Intelligence Center last year. The lessons from that often-painful process – discussed in detail below – now shape Shanahan’s approach to the new and ever-more ambitious projects the Defense Department is taking on. They range from the relatively low-risk, non-combat applications that JAIC got warmed up with in 2019, like predicting helicopter engine breakdowns before they happen, to the joint warfighting efforts Shanahan wants to ramp up to in 2020:

Joint All-Domain Command & Control: This is a pilot project working towards what’s also called Multi-Domain C2, a vision of plugging all services, across all five domains — land, sea, air, space, and cyberspace — into a single seamless network. It’s a tremendous task to connect all the different and often incompatible technologies, organizations, and cultures.
Autonomous Ground Reconnaissance & Surveillance: This involves adding Maven-style analysis algorithms to more kinds of scout drones and even ground robots, so the software can call humans’ attention to potential threats and targets without someone having to watch every frame of video.
Operations Center Cognitive Assistant: This project aims to streamline the flow of information through the force. It will start with using natural-language processing to sort through radio chatter, turning troops’ urgent verbal calls for airstrikes and artillery support into target data in seconds instead of minutes.
Sensor To Shooter: This will build on Maven to develop algorithms that can shrink the time to locate potential targets, prioritize them, and present them to a human, who will decide what action to take. In keeping with Pentagon policy, Shanahan assured me, “this is about making humans faster, more efficient, and more effective. Humans are still going to have to make the big decisions about weapons employment.”
Dynamic & Deliberate Targeting: The idea here is to take targets (for example, ones found by the Sensor To Shooter software) and figure out which aircraft is best positioned to strike each one, with which weapons, along which flight path – much like how Uber matches you with a driver and route.
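
That last analogy maps onto a textbook assignment problem. The sketch below is a minimal illustration of the matching idea only, not the actual Dynamic & Deliberate Targeting software (which the article does not describe): an invented cost matrix of aircraft-target pairings is solved with SciPy's Hungarian-algorithm routine.

Code:
# Illustrative sketch: match strike aircraft to targets by minimizing a
# notional cost (time, fuel, risk). The numbers are invented placeholders,
# not real mission data, and this is not the JAIC implementation.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([
    [12.0,  7.5, 30.0],   # aircraft A vs. targets 0, 1, 2
    [ 9.0, 14.0, 11.0],   # aircraft B
    [20.0,  6.0,  8.5],   # aircraft C
])

aircraft_idx, target_idx = linear_sum_assignment(cost)  # Hungarian-style solver
for a, t in zip(aircraft_idx, target_idx):
    print(f"aircraft {a} -> target {t} (cost {cost[a, t]:.1f})")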

“The data’s there in all the cases I described, but what’s the quality? Who’s the owner of the data?” Shanahan said. “There’s a lot of proprietary data that exists in weapons systems” – from maintenance diagnostics to targeting data – “and unlocking that becomes harder than anybody expected. Sometimes the best data is treated as engine exhaust rather than potential raw materials for algorithms.

“What has stymied most of the services when they dive into AI is data,” he said. “They realize how hard it is to get the right data to the right place, get it cleaned up, and train algorithms on it.”

Today’s military has vast amounts of data, Shanahan said, but “I can’t think of anything that is really truly AI-ready. In legacy systems we’re essentially playing the data as it lies, which gets complicated, because it’s messy, it’s dirty. You have certain challenges of data quality, data provenance, and data fidelity, and every one of those throws a curve ball.”
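
In practice, the "filtering out the impurities" Shanahan describes usually starts with mundane triage long before any model sees the data. The following is a generic sketch with hypothetical field names, not any actual JAIC pipeline:

Code:
# Generic pre-training data triage: drop duplicates, incomplete rows, bad
# labels, and implausible timestamps, and keep a provenance trail.
# Field names are hypothetical, for illustration only.
import pandas as pd

VALID_LABELS = {"vehicle", "person", "building", "unknown"}

def triage(records: pd.DataFrame) -> pd.DataFrame:
    df = records.copy()
    df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")
    df = df.drop_duplicates(subset="record_id")                  # repeated entries
    df = df.dropna(subset=["sensor_id", "timestamp", "label"])   # incomplete rows
    df = df[df["label"].isin(VALID_LABELS)]                      # free-text or bad labels
    df = df[df["timestamp"] >= pd.Timestamp("2015-01-01")]       # implausible dates
    # keep a provenance string so later audits can trace where each row came from
    df["provenance"] = df["sensor_id"].astype(str) + ":" + df["source_file"].astype(str)
    return df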

While the Pentagon needs solid data for lots of different purposes, not just AI, large amounts of good data are especially essential for machine learning. Fighting wars is only going to get more complex in the future: Military leaders see huge opportunities to use AI to comb through that complexity to make operations more efficient, reduce collateral damage, and bring the troops home safely.

Lessons From Maven: Show Me The Camel

Project Maven showed Shanahan just how hard the data wrangling could get. The aim of Maven was to analyze huge amounts of drone surveillance video that human analysts couldn’t keep up with, training machine-learning algorithms to recognize hints of terrorist activity and report it.

“We thought it would be easier than it was, because we had tens of thousands of hours of full motion video from real missions,” Shanahan told me. “But it was on tapes somewhere that someone had stored, and a lot of the video gets stored for a certain amount of time and then gets dumped. We had to physically go out and pick tapes up.”

While the military data was patchy and dirty, open-source image libraries and other civilian sources were too clean to teach an algorithm how to understand a war zone, Shanahan told me. “If you train against a very clean, gold-standard data set, it will not work in real world conditions,” he said. “It’s much more challenging — smoke, haze, fog, clouds — fill in the blank.

“Then you have the edge cases, something that is so unusual that you just didn’t have enough data to train against it,” Shanahan said. “For example, we may not have had enough camel imagery.” That sounds comical – until the first few hundred times your algorithm glitches because it can’t figure out what this strange lumpy object is that it’s seeing from 10,000 feet overhead.
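
One common mitigation for the "too clean training set" problem is to synthetically degrade imagery during training so the model sees haze, blur, and partial occlusion before it meets them in the field. Below is a hedged sketch using standard torchvision transforms; it is illustrative only and not Maven's actual pipeline.

Code:
# Sketch: augment clean training imagery to mimic degraded real-world
# conditions. Illustrative only; parameters are arbitrary.
from torchvision import transforms

degrade = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.4),       # washed-out, hazy scenes
    transforms.GaussianBlur(kernel_size=5, sigma=(0.5, 3.0)),   # atmospheric blur
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.2)),         # partial occlusion (trees, buildings)
])

# e.g. dataset = torchvision.datasets.ImageFolder("train_imagery", transform=degrade)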

Even once you had the data in usable form, Shanahan continued, you needed humans to categorize “tens of thousands, if not millions, of images” so the algorithm could learn, for example, what camels look like as opposed to pickup trucks, people, buildings and weapons. Machine learning algorithms need to see millions of clearly-labeled examples before they can figure out how to deal with new, unlabeled data. So it takes a huge amount of human labor, doing tasks that require little intelligence, to get the data in a form the machine can actually learn from.

On Maven, intelligence community analysts helped with data labeling a lot. The Intelligence Systems Support Office down in Tampa, near Special Operations Command’s SOFWERX, even spun off a dedicated subunit just to support Shanahan. (This Algorithmic Warfare Provisional Program Activity Office now helps JAIC as well).

Even so, manpower was a problem. “We never got the numbers we needed, so we had to get contractor support,” Shanahan said. Unlike a commercial company outsourcing data-labeling to, say, China, the Defense Department had sensitive operational information that could only be worked on by US nationals with security clearances. And before handing the video to the cleared contractors, Shanahan said, “you had to get rid of some sensitive things and some extreme potentially graphic things you didn’t necessarily want data labelers to look at.”

All told, it was a huge amount of work – and it’s never really done. “When you fly it for the first time, the algorithm is going to find things you didn’t train it on,” Shanahan said. “They’re constantly updated through what we call dynamic retraining.”

Even civilian algorithms require continual tweaking, because the world keeps changing. And many military algorithms have to deal with an adversary who’s actively trying to deceive them. The cycle of countermeasure and counter-countermeasures is as old as warfare, but the rise of machine learning has spawned a whole science of adversarial AI to deceive the algorithms.
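
The best-known of those deception techniques is the adversarial example, and its core is compact. Here is a minimal sketch of the fast gradient sign method, which nudges an input in the direction that maximizes a classifier's loss; any differentiable PyTorch classifier could stand in for the model argument, and the epsilon value is an arbitrary example.

Code:
# Minimal fast gradient sign method (FGSM) sketch: perturb an image slightly
# so a classifier misreads it while a human still sees it correctly.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    # image: (1, C, H, W) tensor in [0, 1]; true_label: (1,) tensor of class index
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # step each pixel by +/- epsilon along the sign of the loss gradient
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()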

“We learned in Maven, even if you fielded a decent algorithm, if you don’t update that algorithm in the next six months, [users] become cynical and disillusioned that it’s not keeping up with what they’re seeing in the real world,” Shanahan told me. Today, after much streamlining of processes, Maven is updated regularly at a pace unobtainable even a year ago, Shanahan said, but it’s still far short of the almost-daily updates achievable in civilian software.

[Image: multi-domain operations graphic. SOURCE: Army Multi-Domain Operations Concept, December 2018.]

Beyond Maven: AI For Joint Warfighting

Maven solved its problems – mostly. The head of Air Combat Command has publicly said he doesn’t entirely trust its analysis, not yet, and Shanahan himself admitted its accuracy was initially about 50-50. But the entire basis for Maven was to deliver initial capabilities – a minimum viable product – as quickly as possible to the field, then get real-world feedback, improve it, field the upgrade, and repeat.

But the tools for tackling full motion video don’t necessarily translate to other tasks that the new Joint AI Center is taking on.

Even when JAIC is seeking to apply Maven-style video analysis to other kinds of surveillance footage, the algorithms need to be retrained to recognize different targets in different landscapes and weather conditions, all seen from different angles and altitudes through different kinds of cameras. “You can’t just train an algorithm on electro-optical data and expect it to perform in infrared,” Shanahan said. “We tried that.”

And many of JAIC’s projects don’t involve video at all: They range from predicting helicopter engine breakdowns to using natural-language processing to turn troops’ radio calls for air support into unambiguous targeting data.
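
What "radio calls into targeting data" could mean in practice is easiest to see with a toy extractor. The sketch below is rule-based and entirely hypothetical (real systems would pair speech-to-text with trained language models), but it shows the input and output shape of the problem.

Code:
# Toy sketch: pull structured fields out of a hypothetical transcribed call
# for fire. Rule-based, for illustration only; not the JAIC system.
import re

def parse_call(transcript: str) -> dict:
    grid = re.search(r"\bgrid\s+([A-Z]{2}\s?\d{4,10})\b", transcript, re.I)
    target = re.search(r"\b(troops in the open|vehicles?|bunker|mortar position)\b", transcript, re.I)
    urgent = bool(re.search(r"\b(immediate|troops in contact|danger close)\b", transcript, re.I))
    return {
        "grid": grid.group(1).upper() if grid else None,
        "target": target.group(1).lower() if target else None,
        "urgent": urgent,
    }

print(parse_call("Requesting immediate suppression, grid NB 12345678, mortar position, danger close"))
# -> {'grid': 'NB 12345678', 'target': 'mortar position', 'urgent': True}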

This is another reason why Shanahan prefers to think of data as mineral ore rather than petroleum, he told me: “It’s not fungible like oil is.” You can think of full motion video, for example, as palladium: an essential catalyst for some applications, irrelevant for others. And like rare minerals, all the different kinds of data are out there – somewhere – if you can find them, get permission to exploit them from whoever currently owns them, and separate them from the junk that they’re embedded in.

There’s no simple silver bullet solution, Shanahan said. Some suggest rigorously imposing some kind of top-down standard for formatting and handling data, but he argues the Defense Department has too many standards already, and they are inconsistently applied.

“There are a lot of people who want to just jump to data standards. I don’t,” he told me. “Every weapons system that we have, and every piece of data that we have, conforms to some standard. There are over a thousand different standards related to data today. They’re just not all enforced.”

“It’s less a question of standards and more of policies and governance,” he told me. “We now have to think about data as a strategic asset in its own right. Now, a much better approach to drive interoperability is to start with a discussion of metadata standards that are as lightweight as possible, as well as a Modular Open Systems Architecture. Or put another way, we need to agree on the definition of ‘AI Ready’ when it comes to our weapon systems.”

That includes getting acquisition program managers, traditionally focused on the physical performance of the weapons they are developing, fielding, and sustaining, to consider data as “part of the life-cycle management process just as much as the hardware is,” Shanahan said. “I see signs of the services beginning to have that conversation about future weapons systems.”

The fundamental issue: “The Department of Defense is different from Amazon, Google, Microsoft, which were born as digital companies,” he said. “The Department of Defense was not. It started as a hardware company. It’s an industrial age environment and we’re trying to make this transformation to an information-age, software-driven environment.”

One of JAIC’s key contributions here will be to build a “common foundation” that pulls together usable data and proven algorithms from across the Defense Department for any DoD user to access and apply to their specific needs. (This will require a DoD-wide cloud computing system, he noted).

“We want to have APIs [Application Program Interfaces] that allow anyone to come in and access our common foundation or platform. We will publish API definitions, what you need to write to,” Shanahan said. But the sheer diversity of the data and the different purposes it can be put to, he said, means that “there is never going to be a single standard API.”

Likewise, he said, while there will be “minimum common denominator” standards for tagging metadata with various categories and labels, “you will have lots of flexibility for mission-specific tagging.”
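
One way to picture a "minimum common denominator plus mission-specific tags" scheme is a small required core with a free-form extension, as in the sketch below. The field names are invented for illustration and are not an official DoD schema.

Code:
# Hedged sketch: a lightweight common metadata core every data asset carries,
# plus a free-form dict for mission-specific tags. Invented field names.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class AssetMetadata:
    asset_id: str                 # unique identifier
    source_system: str            # which platform or sensor produced it
    classification: str           # handling caveat
    collected_utc: str            # ISO-8601 collection time
    data_format: str              # e.g. "full-motion-video", "maintenance-log"
    mission_tags: Dict[str, str] = field(default_factory=dict)   # flexible, per-mission labels

clip = AssetMetadata(
    asset_id="fmv-000123",
    source_system="example-sensor-ball",
    classification="UNCLASSIFIED//EXAMPLE",
    collected_utc="2019-06-01T14:22:00Z",
    data_format="full-motion-video",
    mission_tags={"region": "test-range", "weather": "haze"},
)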

It’s a tremendous task, but one with equally tremendous potential benefits. Working with Chief Data Officer Michael Conlin, “we are trying to fix all sorts of problems with data across the Department of Defense,” not just for AI, Shanahan told me. “I am optimistic.”

“AI will likely become the driving force of change in how the department treats data,” Shanahan told me. “And technology is changing so fast that the painful data wrangling processes we endure today may well be transformed into something entirely more user-friendly a year from now.”
AIT
Posts: 612 | Joined: 02/02/2019 | Location: Ait Hdiddou | Nationality: Moroccan-Spanish
Subject: Re: Artificial Intelligence   Sat 22 Feb 2020 - 13:18

A RAND Corporation report on the impact of AI on deterrence:

https://twitter.com/RANDCorporation/status/1231088478466826240?s=19

Shugan188 (Moderator)
Subject: Re: Artificial Intelligence   Tue 20 Apr 2021 - 7:16
[image attachments; content not recoverable]
Shugan188 (Moderator)
Subject: Re: Artificial Intelligence   Mon 22 Nov 2021 - 18:18

https://gjia.georgetown.edu/2021/11/22/swords-and-shields-autonomy-ai-and-the-offense-defense-balance/

Quote:



Swords and Shields: Autonomy, AI, and the Offense-Defense Balance

Georgetown Journal of International Affairs

Introduction

Military machines can increasingly move, search for targets, and even kill without human control. Growing computer power coupled with advances in artificial intelligence empower autonomous weapons and platforms to carry out more sophisticated behaviors and activities. Autonomy fundamentally means reducing human involvement in command and control, which theoretically means that virtually every military platform and weapon can be made autonomous. Whether this autonomy is sensible or ethical is another question.

Current autonomous weapons have typically been used for tactical defense. Land and sea mines are extremely simple autonomous weapons, based on mechanical triggers to keep an enemy from crossing or holding a particular piece of territory. Other autonomous weapons like close-in weapon systems and active protection systems are used to defend military platforms against incoming projectiles. Of course, that tactical defense may support strategic offense, protecting invading forces from defender attack. Some systems like loitering munitions, particularly radar-hunting missiles, are designed primarily for offense, to destroy defending air and missile defenses.

The future of autonomous weapons is unclear. A broad range of factors influence whether autonomy will ultimately favor offense or defense and the degree of this impact. These factors include the type and nature of the application, reliability in face of adversary interference, affordability and availability of application types, and overall effects on the cost of war.

Differing Applications of Autonomous Weapons

Autonomy can be applied to virtually any weapon system or platform, so the net effects for offense and defense depend on which applications prove most significant. For example, militaries are developing drone swarms to target air and missile defenses. Cheap drones may overwhelm and destroy defenses to ensure more expensive manned aircraft are safe from reprisal. But autonomy and artificial intelligence can also improve those same defenses, increasing the risks to manned aircraft. The need to protect against aerial drone swarm attacks is already driving improvements to those defenses.

The nature of the application matters too. Offense-defense theorists argue nuclear weapons are the ultimate defensive weapon, because they ensure nuclear-armed states can retaliate with overwhelming destruction. If so, then any technology weakening that deterrent must favor the offense. Some researchers have argued that the creation of massive underwater sensor networks may render the ocean transparent, effectively eliminating undersea nuclear second-strike capabilities. Hypothetically, a mixture of unmanned undersea, surface, and aerial vehicles; sensors; and manned anti-submarine warfare systems would comb the ocean to find nuclear submarines. The argument goes that if an adversary can locate every nuclear submarine, they may be able to destroy them all in a single strike. The reality is likely more complex, given the challenge of processing and managing such a huge network and carrying out strikes against identified targets. Nonetheless, any meaningful risk to nuclear stability certainly would have outsized global effects.

Autonomy and artificial intelligence can also play a support role for manned forces. Autonomous vehicles can provide logistical support by helping transport supplies and forces to the battlefield, which would favor offensive operations. But what happens if an autonomous convoy comes under attack? Limits on autonomous cognition may inhibit their response, making them more of a liability than an asset. At the same time, autonomous systems can help collect intelligence to identify targets, assess enemy defenses, and plan military actions, and artificial intelligence can help sift and process that information. This can help attackers identify vulnerabilities and plan attacks, while also granting defenders better situational awareness to monitor movements of attackers and plan ambushes.

Reliability of Autonomous Weapons

The reliability of autonomous weapons and platforms also affects the net impact of autonomy on offense and defense. Current machine vision systems are heavily dependent on training data. Machines require large amounts of data to know the difference between a car, a tank, a cruise ship, and a naval destroyer. Training data may exhibit structural biases that affect how and when the weapons are used. For example, an autonomous weapon might be able to recognize an unobstructed tank on a sunny day, but what about a foggy or snowy day, or if the tank is partially obscured by a tree or a building? Militaries cannot always anticipate the environmental characteristics of where the autonomous systems are deployed. Differences in climate, flora, architecture, and other factors may create reliability problems in deployed areas. For example, Russia deployed the unmanned combat vehicle Uran-9 in Syria, but it worked poorly: it struggled to spot enemies farther away than 1.25 miles, its sensors and weapons were useless when moving, the tracked suspension had unreliable parts, and the remote control system had a much shorter range than expected. For autonomous weapons to be meaningful in combat, these challenges need to be resolved.

Enemy action also affects reliability. In the civilian domain, mere stickers on a stop sign have been enough to manipulate an autonomous car. Hopefully, a military autonomous system would use multiple sensors to prevent such a simple manipulation, but interference is still possible. At an extreme level, manipulation could cause autonomous weapons to fire on friendly forces. The unreliability of autonomous weapons may mean that they tend to favor the defense, as defending militaries can more readily control and influence the environment to cause problems for attacking forces.

Effects on the Cost of War

Conversely, autonomous weapons may make wars easier to start, benefiting the offense. Autonomous weapons necessarily reduce the immediate risk to human operators by removing humans from the battlefield, potentially reducing the perceived risk of military conflict. If war is cheaper to start, then theoretically, war will happen more readily. A military composed of numerous autonomous and unmanned systems would exacerbate those concerns. A drone support force is one thing; an army of them is another entirely.

Affordability of Autonomous Weapons

The affordability of autonomous weapons may favor defense. In theory, autonomous weapons should be cheaper than manned systems. They can be more readily mass produced since they do not need to support human life, and may be as simple as a small drone with a bomb strapped onto it. And defensive autonomous weapons are likely to be cheaper than offensive weapons, because defensive weapons do not require mobility and may be designed to control a known, expected battlefield. If so, autonomous weapons and platforms may be deployed in larger numbers by defending states, allowing them to impose greater costs on an attacking state.

Conclusion

Autonomous weapons and artificial intelligence are clearly growing features of global conflict, but what that means for global stability is unclear. Researchers and policy-makers need to better understand what weapons are used, how they are used, and by whom. That requires research and analysis at all levels of warfare, across all domains—land, sea, air, and space. States and civil society need to exploit the opportunities autonomy offers, while identifying and countering the risks. Global security depends on it.



Zachary Kallenborn is a Policy Fellow at the Schar School of Policy and Government, a Research Affiliate with the Unconventional Weapons and Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism (START), the Master Coordination Director for Project Exodus Relief helping evacuate high-risk Afghan refugees, an officially proclaimed U.S. Army “Mad Scientist,” and national security consultant. His research on autonomous weapons, drone swarms, weapons of mass destruction (WMD) and WMD terrorism has been published in a wide range of peer-reviewed, wonky, and popular outlets, including the Brookings Institution, Foreign Policy, Slate, War on the Rocks, and the Nonproliferation Review. Journalists have written about and shared that research in the New York Times, NPR, Forbes, the New Scientist and WIRED, among numerous others.

