
Can AI Unleash a World War? Experts Say Yes!

Posted on Sep 16, 2019

Dr. Bruce D. Jette, Assistant Secretary of the Army for Acquisition, Logistics, and Technology, recently acknowledged that certain types of weapons could be placed under the control of AI. Representatives of European and Chinese militaries quickly made similar statements. Earlier, the inventor Elon Musk had warned that such statements and actions could set off World War III.


A “brain in a jar” or a “black box”?

The military has dreamt of recruiting artificial intelligence ever since the first man-made neurons showed even minimal effectiveness. Armies around the world would love to have an impartial soldier on their side: one that follows orders, analyzes a situation quickly, and is the first to press the red button should a nuclear apocalypse begin. This military imagination is reflected in Hollywood blockbusters, where an independent and belligerent AI drives humanity back into the Stone Age.

Experts believe that the world is approaching the line beyond which machines, not humans, will wage wars.

Artificial intelligence can obviously be used for various military purposes. War is no longer what it used to be – it is a battle of technologies, largely invisible to the naked eye. A civilian programmer handling routine tasks at a large corporation or state-owned enterprise may never suspect that his work could be put to very different uses.

Managing missile launches, tracking the behavior of flying objects (including in the stratosphere), intercepting and decoding enemy signals, creating decoy targets, and hindering similar enemy activities are just some of the tasks that AI can perform.

And it is important to understand here that AI is not a human “brain in a jar.” It is a program, an algorithm or, as scientists call it, a “black box” that can be trained. It can observe, repeat, make mistakes, and learn. Show the black box photos of cats, for example, and by guessing and making mistakes it learns to recognize these furry pets, improving with every pass. In the same way, AI can find cancer in medical images, play complex games, analyze military maps, or control a tank.
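
To make that “guess, make mistakes, improve” loop concrete, here is a minimal sketch in Python (the synthetic two-number “photos” below are invented stand-ins for cat pictures, not data from any real system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for "photos": 200 two-number feature vectors, labeled
# 1 ("cat") or 0 ("not cat") by a rule the model does not know.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)   # the trainable "black box" is just these numbers...
b = 0.0           # ...plus a bias, adjusted a little on every pass
lr = 0.5

for epoch in range(5):
    # Guess: squash a linear score into a probability of "cat".
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    print(f"pass {epoch}: {int(((p > 0.5) != y).sum())} wrong out of {len(y)}")
    # Make mistakes, then learn: nudge the weights to shrink the errors.
    err = p - y
    w -= lr * (X.T @ err) / len(y)
    b -= lr * err.mean()
```

Nothing in the loop understands cats; each pass simply nudges a handful of numbers so that fewer guesses come out wrong.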

Experts say that, at the same time, the algorithm has no consciousness of its own and no understanding of what exactly it is doing. Moreover, developers believe that AI cannot suddenly become rational or set out to destroy people without human intervention.

AI as a soldier, general, commander?

It is still difficult to pin down the role of AI in the military. Politicians and futurists see artificial intelligence as a commander, a power that could destroy the whole world at once. Pavel Adylin, CEO of Artezio and the creator and curator of its AI lab, believes that AI cannot become dangerous without human intervention.

“Elon Musk says the same. He does not claim that something terrible can appear in this ‘black box’ on its own; he believes that a person with bad intentions, using this tool (this mainly refers to reinforcement learning algorithms), can cause trouble. That is, if you give the program a motivation that is harmful to people, the system will learn to solve a problem whose goal is evil, and it will achieve ever greater success along that path,” says Adylin.
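
A minimal sketch of that point (the one-dimensional “corridor” world and all of its numbers are invented here purely for illustration): reinforcement learning dutifully optimizes whatever reward it is handed, so swapping the reward swaps the behavior it learns.

```python
import numpy as np

def train(reward, n_states=5, episodes=2000, lr=0.5, gamma=0.9):
    """Tabular Q-learning on a tiny corridor; action 0 = left, 1 = right."""
    rng = np.random.default_rng(1)
    q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = n_states // 2              # start in the middle of the corridor
        for _ in range(20):
            a = int(rng.integers(2))   # behave randomly while learning;
                                       # off-policy Q-learning still
                                       # recovers the greedy policy
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = reward(s2)             # the human-supplied goal
            q[s, a] += lr * (r + gamma * q[s2].max() - q[s, a])
            s = s2
    return q

# The same algorithm, handed opposite goals: reward one end or the other.
goal_a = train(lambda s: 1.0 if s == 4 else 0.0)
goal_b = train(lambda s: 1.0 if s == 0 else 0.0)
print("greedy policy under goal A:", goal_a.argmax(axis=1))  # all 1s: go right
print("greedy policy under goal B:", goal_b.argmax(axis=1))  # all 0s: go left
```

The learning code is identical in both runs; only the reward function, written by a human, differs.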

For example, developers can create an AI capable of operating a battle tank. The tank will drive itself and shoot accurately at targets that a person designates. In other words, the AI focuses on predefined tasks set by a human, not goals chosen by self-awareness or the mythical “Machine God.” So we should not expect miracles from a commander that makes independent decisions – at least not yet.

According to Pavel Adylin, “If we talk about comprehensive command and control of troops – replacing the general staff with a neural network – then this is an AI-complete problem, that is, one that amounts to creating a computer as smart as a person. Such a problem will not be solved in the short term, but progress does not stand still. And if we talk about particular cases, precisely such tasks are already being solved by unified layered air-defense systems: the most suitable means of destruction are matched to targets depending on the degree of threat they pose and the probability of destroying them.”
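
As a toy sketch of the matching rule that quote describes (every name and number below is invented, and real air-defense assignment is far more involved), each available interceptor can be greedily paired with the target offering the highest expected threat reduction:

```python
# Invented threat scores and intercept probabilities, for illustration only.
threats = {"target_1": 0.9, "target_2": 0.5, "target_3": 0.2}
p_kill = {
    "interceptor_1": {"target_1": 0.8, "target_2": 0.6, "target_3": 0.9},
    "interceptor_2": {"target_1": 0.4, "target_2": 0.7, "target_3": 0.5},
}

assigned = set()
for interceptor, odds in p_kill.items():
    # Greedy rule: pick the unassigned target with the highest expected
    # threat reduction (threat score x probability of a successful intercept).
    best = max((t for t in threats if t not in assigned),
               key=lambda t: threats[t] * odds[t], default=None)
    if best is not None:
        assigned.add(best)
        print(interceptor, "->", best,
              f"(expected reduction {threats[best] * odds[best]:.2f})")
```

A real system would solve this as a global optimization rather than greedily, but the criterion – threat weighted by kill probability – is the same idea the quote sketches.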

A classic turning point in the development of AI is considered to be the moment when an AI-driven robot becomes able to clone itself from whatever material is at hand – soil, for example.

Therefore, software algorithms must contain restrictions that prevent robots from “multiplying” without limit.

Nanorobots and “smart” missiles

Assessing the potential of AI in the armed forces, almost no experts foresee global systems or solutions that could change the balance of power the way atomic technology once did.

The potential for use is very wide: from missiles that independently adjust their flight to increase accuracy, to decision-making systems that weigh the appropriateness of particular military operations based on simulations. The military sphere offers a huge number of places to apply AI – probably more than is economically feasible. For example, there has been some progress on self-guided bullets, but they are unlikely to see wide adoption in the near future because of their high cost.

The US is watching...

One can only guess how much the militaries of different countries spend on developing smart weapons. No one can give exact figures, and part of the AI research is conducted in the civilian, commercial sphere anyway.

It can be assumed that about 10 to 30 percent of the defense budget goes to AI research, plus hundreds of millions of civilian dollars. This suggests that the US defense strategy does not place high stakes on AI.

In June 2018, for example, the Pentagon launched its Joint Artificial Intelligence Center (JAIC). The center’s name is more of a PR move: experts believe that the Pentagon has not carried out AI development of its own and is waiting for proposals from civilian industry.

To support this conclusion, experts cite a speech by Captain Michael Kanaan, Co-Chair for Artificial Intelligence, U.S. Air Force.

“One of the problems that contractors may encounter is more expensive products focused on government-oriented, security-heavy programs, compared with the same, less expensive, commercial programs. A new approach to commercial off-the-shelf technology as part of the service can help,” says the captain, urging people not to dismiss non-classified civilian projects as unprotected.

Simply put, AI development in the commercial field is further along than in the secret laboratories of the US military industry. The United States has therefore set its military leadership a task: monitor the successes and adopt the best practices.

It turns out that the USA is watching the development of AI without strongly believing in its benefits for defense. This is easy to believe if you pay attention to a speech by Major Daniel Tadross, a US military official at the JAIC.

“We started exploring the Department of Defense’s aviation service ecosystem, which is huge and too complex for any artificial intelligence,” says Tadross. In other words, artificial intelligence is de facto assigned the role of an ordinary soldier, or of a component of defense systems – missiles, bullets, guidance systems – that remain under human control.


Meanwhile, Russia is creating smart weapons

In Russia, however, everything is exactly the opposite: breakthrough technologies are born in the defense industry and then find their application in civilian life. This was stated by Ruslan Tsalikov, First Deputy Minister of Defense of Russia. According to Tsalikov, the Russian Armed Forces "lead almost all the breakthrough technological areas that are being developed in the country."

The official believes that "this is a normal situation, since the development of military technologies is always ahead of the development of civilian technologies." It follows that advanced AI developments should be sought from the military.

In the Russian Ministry of Defense’s handbook, the section on the use of AI in defense states that research “is conducted in three main areas: the creation of knowledge-based systems; neurosystems; heuristic search systems, automated information systems, and military systems that serve as decision support systems for officials, as well as so-called intelligent systems and weapon models.”

In 2017, Russian Deputy Prime Minister Dmitry Rogozin announced that Russia would complete the development of new intelligent weapons – robots and drones – by 2025.

Just one year later, at the “Digital Industry of Industrial Russia 2018” conference, Sergey Abramov, Industrial Director of the Conventional Armament, Ammunition and Special Chemistry Cluster at the Rostec State Corporation, spoke about work “on the creation of AI military systems,” which involves self-learning weapons.

As an example, he mentioned a development at the Kalashnikov Concern – a combat module that can identify a threat and decide on its destruction.

“The module independently analyzes the environment, identifies threatening objects and decides on their destruction. The algorithm of the on-board computer is based on the algorithms of the human brain, so the module is able to self-learn during combat use,” say representatives of the concern.

Intelligence that does not exist?

It is impossible to use something that does not really exist. And the artificial intelligence currently offered on the market is not intelligence at all, scientists believe.

“The concept of the perceptron, implemented on the basis of neural networks and Bayesian networks, is an artificial eye, ear or nose in its functionality and essence, but not a brain. Therefore, such artificial intelligence (called ‘weak’ decades ago, and today referred to as ‘fast’) has serious limitations on its possible applications. The maximum that modern systems are capable of is autonomous action at a tactical level,” notes an independent expert.
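
To make the “artificial eye” comparison concrete, here is a sketch of the classic Rosenblatt perceptron on an invented two-pixel task (separating “bright” inputs from “dark” ones). It is a reflex that fires or stays silent, with no model of the world behind it:

```python
import numpy as np

# Classic perceptron: weights, a threshold, and a mistake-driven update rule.
X = np.array([[0.9, 0.8], [0.7, 0.9], [0.1, 0.2], [0.2, 0.1]])  # toy "pixels"
y = np.array([1, 1, 0, 0])  # 1 = "bright" pattern, 0 = "dark" pattern

w = np.zeros(2)
b = 0.0

for _ in range(10):                    # a few passes over the examples
    for xi, yi in zip(X, y):
        fired = int(w @ xi + b > 0)    # fire or stay silent
        if fired != yi:                # only mistakes change the weights
            w += (yi - fired) * xi
            b += (yi - fired)

print("weights:", w, "bias:", b)
print("responses:", [int(w @ xi + b > 0) for xi in X])
```

The update rule merely shifts a decision boundary; there is nothing in it that could reason about what its inputs mean.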

Neural network learning process

According to the expert, current systems are not capable of thinking on any more global scale. They cannot think at all, since this function is simply not built into them.

“To understand the basic algorithm of modern AIs, it is enough to know the ‘Chinese room’ thought experiment. The defense projects built on modern AI systems are more correctly called ‘autopilots,’ and then it becomes clear where they are used and what they are capable of. In the civilian sphere, a Boeing autopilot recently showed its elementary inability to adequately assess the situation when a sensor failed, which led to several disasters,” notes the specialist.

Is war a real possibility?

Speaking about the possibility of a new world war, Elon Musk starts from the fact that AI’s technical development currently outpaces the regulation of its various aspects. In other words, experiments with AI are not constrained by legislation in any way.

This is a dangerous problem, according to Vladimir Krylov, professor of mathematics, head of the AI laboratory, and a consultant at Artezio.

“I would limit the use of AI in the military industry the way biological and chemical weapons are limited, and control it the way nuclear weapons are controlled. The creation of intelligent systems that control weapons independently, without human intervention, should in my opinion be regarded as a criminal act and restricted by international treaties.”

Large technology companies such as Google and SpaceX understand this problem and have already signed a pledge not to participate in developing AI weapons.

“In his judgments, Elon Musk starts from the fact that the technical development of AI currently outpaces the regulation of its various aspects, since AI can be both a powerful tool and a powerful weapon. He calls for more effort to regulate AI in order to prevent a situation where AI controls people. However, history shows that restrictions on breakthrough technologies – nuclear or rocket technologies, for example – are accepted at the international level only after several countries have acquired them. Competition for national leadership in the AI field may even trigger the outbreak of World War III,” says Krylov.

Skynet and the Terminator

Is a war between AIs possible, or a global military cataclysm resulting from a decision made by an AI?

“The more perfect an AI is, the less likely it is to decide to destroy people, even if the task assigned to it carries such consequences. Military action by AI can only come from imperfect systems or from systems maliciously designed by people,” believes Professor Krylov.

Scientists and politicians all over the world argue about whether AI defense systems in different countries could be combined into a single system, so that they would not regard one another as enemies.

Proponents of such a merger argue that it would yield real synergy in development and create a single cybernetic space, which is much easier to manage than a dozen different complex systems.

Opponents believe that in the event of a failure, robotic systems would destroy each other, since they are programmed for war – and humanity would thus have a chance to survive. In any case, many scientists, politicians and businesspeople urge the scientific community in every country to build limits on AI’s omnipotence into their program code. Indeed, however fantastic it may look today, smart weapons could become not only the tools of future wars, but also their cause.