In recent years we have witnessed a rapid increase in the speed of computers, a fall in their prices, and ongoing miniaturization: ideal conditions for the development of robotics.
The word "robot" was first used by the Czech writer Karel Čapek in his play Rossum's Universal Robots. By definition, a robot is an electro-mechanical machine controlled by a computer program.
In his science-fiction stories, Isaac Asimov also formulated four laws of robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm (the Zeroth Law, which Asimov added later and which takes precedence over the other three).
Of course, since armies invest the most in the development of robots for war, modern warfare robots have a completely different purpose: to kill efficiently at the lowest possible cost.
Today's military robots possess a certain degree of autonomy. The Korean sentry robot Super aEgis can detect, track and destroy a moving target up to 4 km away with a weapon powerful enough to stop a truck, but it must obtain human permission before opening fire. This is not because the technology is incapable of firing on its own; it was a customer requirement. The US military's plan is to replace all soldiers with robots by 2050. Robots save lives, but they also ultimately cost less than a soldier, especially in large production runs. The goal is an autonomous robot that can distinguish an enemy soldier from a civilian, with the possibility of error (which means the death of innocent people) reduced to zero. How do you program such a computer, when a software error can mean destroying a school instead of a barracks? How do you program a robot to respond in morally questionable situations? That ethical and philosophical debate applies today not only to robot soldiers, but to all robots that have autonomy and interact with humans (self-driving cars, drones).
Drones and aircraft
Drones are not robots in the strictest sense, but they operate with a high degree of autonomy between the control station and the aircraft itself. Drones are flying machines that can fly autonomously or be controlled by radio, and their popularity is increasing. Their uses are wide-ranging, from monitoring large areas (early detection of fires), package delivery, and delivering aid to endangered areas, to negative ones. Drug trafficking via drones and drones that carry weapons are on the rise, and there is a well-founded fear of drones that can carry a bomb undetected, because they can move in the dark and at low altitudes.
Prompted by the crash of Germanwings flight 9525, in which co-pilot Andreas Lubitz, who was being medically treated for suicidal tendencies, deliberately flew the plane into the ground, a debate arose over whether commercial flights should be left to the autopilots that already handle most of the flight today. Pilots and airlines believe that pilots must still have the last word, especially in crisis situations, and that a computer must not be left to make decisions when human lives are at stake. Computers are also subject to hacking, errors and shutdowns. Yet independent analyses have shown that most plane crashes caused by human error would have been prevented if the computer had been allowed to take over control at those moments.
With the development of computers, GPS systems and various types of sensors and radars, we have reached a time when autonomous driverless cars cruise the roads of California, and do so quite successfully. The most famous among them is the Google self-driving car. Studies show that 94% of car accidents are caused by human error, and about 1,200,000 people a year die because of it. Google estimates that with autonomous vehicles that figure could be reduced to a staggering zero. Google's autonomous car has sensors that can detect objects up to 200 meters away in all directions: cyclists, pedestrians, animals by the roadside in the woods, and so on. When it detects such an object, it analyzes the object's motion and predicts what might happen next.
Say that at night the radar detects a deer in the woods, 200 meters away and moving toward the road; the car automatically slows down and lets the deer run across. If it detects children moving quickly next to the road and playing with a ball, it allows for the possibility that the ball could roll onto the road and a child could run after it, so it reduces speed and thus saves the child's life. That is something a human driver often cannot do in time.
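The anticipation logic described above can be sketched in a few lines. This is a minimal illustration, not Google's actual software: every name, class and speed threshold here is an assumption made for the example.

```python
# Sketch of hazard anticipation: if a detected object's predicted
# motion could bring it into the vehicle's path, slow down pre-emptively.
# All thresholds below are illustrative assumptions, not real parameters.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str               # e.g. "deer", "child", "cyclist"
    distance_m: float       # distance from the vehicle, in meters
    approaching_road: bool  # does its predicted motion lead toward the road?

def target_speed_kmh(current_kmh: float, obj: DetectedObject) -> float:
    """Return a reduced target speed if the object may enter our path."""
    if not obj.approaching_road or obj.distance_m > 200:
        return current_kmh               # out of sensor range or moving away
    if obj.kind == "child":
        return min(current_kmh, 20.0)    # assume worst case: child chases a ball
    if obj.kind in ("deer", "cyclist"):
        return min(current_kmh, 40.0)    # slow enough to stop if it crosses
    return min(current_kmh, 50.0)        # unknown hazard: stay conservative

# Example: a deer 150 m ahead, moving toward the road at night
print(target_speed_kmh(90.0, DetectedObject("deer", 150.0, True)))  # 40.0
```

The point of the sketch is the order of the checks: the car reacts to the *predicted* motion of the object, not only to an object already on the road.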
Currently 49 autonomous vehicles drive on California and Texas roads. Since 2009 they have covered 1,210,000 kilometers in fully autonomous mode. They have had 14 accidents, and the drivers of the other cars were always at fault. Eleven times they were hit from behind, with one person slightly injured: the driver of the car that hit a Google vehicle stopped at a pedestrian crossing. The vehicle fully respects traffic regulations (no speeding), and blind and disabled people can ride in it completely carefree. If one day all vehicles were autonomous, the chance of an accident would drop even further, because the vehicles would communicate with each other and know each other's locations. For example, a vehicle that runs into a landslide on an icy road and has an accident informs all the vehicles in the area so they can adjust their speed.
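The landslide scenario above amounts to a broadcast-and-react pattern. The sketch below is a toy illustration of that idea only; the message shape, the 2 km warning radius and the "halve your speed" reaction are assumptions for the example, not any real vehicle-to-vehicle protocol.

```python
# Toy sketch of vehicle-to-vehicle hazard warning: the car that hits
# the hazard broadcasts its position, and nearby cars slow down.
# Radius and reaction are illustrative assumptions.

import math

def distance_km(a: tuple, b: tuple) -> float:
    # Simplified flat-plane distance; real systems would use geodesic math.
    return math.hypot(a[0] - b[0], a[1] - b[1])

class Vehicle:
    def __init__(self, name: str, position_km: tuple, speed_kmh: float):
        self.name, self.position, self.speed = name, position_km, speed_kmh

    def receive_hazard(self, hazard_pos: tuple, radius_km: float = 2.0):
        """Halve speed if the reported hazard is within the warning radius."""
        if distance_km(self.position, hazard_pos) <= radius_km:
            self.speed = self.speed / 2

def broadcast_hazard(reporter: "Vehicle", fleet: list):
    """The reporting vehicle tells every other vehicle where the hazard is."""
    for v in fleet:
        if v is not reporter:
            v.receive_hazard(reporter.position)

fleet = [Vehicle("A", (0.0, 0.0), 0.0),     # crashed at the landslide
         Vehicle("B", (1.5, 0.0), 80.0),    # 1.5 km away: slows down
         Vehicle("C", (10.0, 0.0), 100.0)]  # far away: unaffected
broadcast_hazard(fleet[0], fleet)
print(fleet[1].speed, fleet[2].speed)  # 40.0 100.0
```

The key property is that the warning reaches every vehicle in range before any of them can see the hazard with its own sensors.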
As for the negative sides, two stand out. First, what if someone hacks the car's computer, which is connected to the Internet, and deliberately causes an accident? The second is ethical, and concerns situations where a collision is inevitable and the consequences fatal. Imagine a child on the road, with no possibility for the vehicle to stop, only to swerve off the road into an abyss, which means the death of the people in the vehicle. What should the computer choose? Whose life is worth more? Google's experts answer that their vehicle's sensors can notice a child at the side of the road and automatically reduce speed, but a child is not a computer, and its actions remain unpredictable.
Statistics are on the side of robots, because humans are imperfect. Robots are imperfect too, but they are more reliable than humans, so the future of driving is in the hands of computers, as is the future of warfare. The difference is that one kind of robot saves all lives, while the other is selective, saving lives only on the side it fights for.