In computer technology, there’s an increasingly common optimization technique called evolutionary algorithms. Real-world problems and tasks can be modeled and solved in much the same way nature solves her own. We all more or less understand the concepts of nature from elementary school, so let’s take a species of ancient fish-like creatures as an example. They live in swarms in the sea and are omnivores. We already know about natural selection: those fish have to face predators, diseases, and environmental hazards, living in a place where only the fittest can survive. Mostly only the fittest breed, which maintains the swarm’s biological quality. Furthermore, there’s always some mutation that helps later generations adapt better to these circumstances. The least capable entities of the swarm die, while the others breed and improve. They become more evasive to predators and diseases, and eventually conquer the land by becoming amphibious. All in all, they optimize a function: their quality of life. That’s a natural evolutionary algorithm, and EAs of the computer world work the same way.
Let’s take the following task: we have an unexplored ocean floor, and at its deepest point lies a very valuable food source we could access. We have a handful of robotic fish which have a limited range of vision, but can instantly exchange information over some long-range radio communication. The initial setup is that we randomly drop our robots into the ocean, give each one a random direction to explore, and give them a set of rules, for example:
- if you are close to another robot, keep your distance
- if you see a deeper point than you are currently on, start swimming towards it
- if you see an even deeper point along your way, you may change direction, but don’t make any radical changes to your path
- ask the other robots about their exploration, and if one of your buddies has found a point much deeper than yours, you may change direction and swim towards it
- if other robots ask you, report your depth
- don’t turn back unless the others confirm that the move is worth it
This is swarm intelligence in a nutshell. It can be modeled, programmed, and executed on a computer, and it works much the way fish swarms and ant colonies do. They optimize some target function, such as quality of life, which can be described by the level of safety, the amount of available food, and so on. They work together: each member collects information on its own but shares it with the community, and each takes advantage of the global information in its own life. Eventually, the swarm’s quality of life improves, just like in nature.
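The rules above can be sketched in a few lines of code. This is only a toy model: the seabed depth function, the swarm size, and the movement weights are all invented for illustration, not taken from any real robot’s control software.

```python
import random

def depth(x, y):
    """Toy seabed: an invented bowl whose deepest point is near (7, 3)."""
    return -((x - 7) ** 2 + (y - 3) ** 2)

random.seed(1)  # fixed seed so the run is repeatable

# Randomly drop 20 robots into a 10x10 patch of ocean.
robots = [{"pos": (random.uniform(0, 10), random.uniform(0, 10))}
          for _ in range(20)]

for step in range(50):
    # Every robot reports its depth; the swarm shares the best finding.
    best = max(robots, key=lambda r: depth(*r["pos"]))
    bx, by = best["pos"]
    for r in robots:
        x, y = r["pos"]
        # Drift toward the deepest reported point, but only gradually
        # ("no radical changes"), plus a little random exploration.
        x += 0.2 * (bx - x) + random.uniform(-0.3, 0.3)
        y += 0.2 * (by - y) + random.uniform(-0.3, 0.3)
        r["pos"] = (x, y)

best_robot = max(robots, key=lambda r: depth(*r["pos"]))
best_depth = depth(*best_robot["pos"])
print(round(best_depth, 2))
```

After a few dozen steps, the swarm clusters around the deepest point it has collectively found, even though no single robot could have surveyed the whole seabed on its own.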
Evolutionary algorithms are quite similar. They have a population of random starting entities and a set of rules, but the entities are periodically modified. Let the task be to extract gold from a mine. The programmer plays the role of God, but has a limited amount of resources: each robot can spend 100 units of energy on its sensors, engines, storage capacity, and the power of its extractor drill. The programmer randomizes the values so there are robots with different capabilities. For example, one robot might have 25 units of energy for each subsystem, while another has 50 units for sensors and 50 for engines, and a third has 50 units for storage and 50 for extraction power. The programmer lets all the entities work for a while, then measures their fitness. The first robot was able to see nearby minerals, move to them, mine them, and store them. The second robot could see minerals from a greater distance and move to them very quickly thanks to its engines; however, it had no drilling power and no storage space, so it failed at mineral collection. The third robot could mine minerals efficiently and store a large amount; however, it failed to sense them and to move close, since it had spent no energy on sensors or engines.
As that example shows, we usually have limited resources and many important factors to consider for the successful execution of a task. Each factor contributes to the task with a different weight: if the mine is dark, perception matters more than movement, but if we mine diamond instead of gold, drilling power matters more than storage space. Each task has its own requirements, and we don’t know their importance at the beginning. So the programmer evaluates the “fitness” of all robots, their capability to succeed at the task. After that, he can divide the population into two halves by fitness. The worse half of the robot population can be powered off, saving energy for the better half. We can also examine the robots of the better half and design new ones from their properties. For example, if there are two successful robots, one with 25 units of energy for each of the four attributes and another with (30, 20, 30, 20), we can design a robot with (27, 23, 27, 23) units of energy distributed among its subsystems. That new robot may perform better or worse; it will be evaluated in the next unit of time. It’s similar to natural selection: those who perform badly perish, and the strong pass their genes to the next generation. Just as in nature, weak children can be born to strong parents, but they will perish too. With this approach, if we let the system run long enough, the population becomes stronger and eventually approximates the perfect solution to the task.
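The whole cycle of random initialization, fitness evaluation, powering off the worse half, and averaging the survivors’ energy splits can be sketched as follows. The fitness rule here is invented purely for illustration (a robot starved in any subsystem fails, like robots two and three above, and otherwise the weakest subsystem limits the whole robot); a real task would measure actual mining success.

```python
import random

random.seed(2)
BUDGET = 100  # each robot splits 100 energy units over 4 subsystems

def random_robot():
    """Random split of the budget over (sensors, engines, storage, drill)."""
    cuts = sorted(random.uniform(0, BUDGET) for _ in range(3))
    return [cuts[0], cuts[1] - cuts[0], cuts[2] - cuts[1], BUDGET - cuts[2]]

def fitness(robot):
    # Invented rule: a starved subsystem means total failure; otherwise
    # the weakest subsystem is the bottleneck for the whole robot.
    if min(robot) < 5:
        return 0.0
    return min(robot)

def crossover(a, b):
    # Average the parents' splits, as in the (27, 23, 27, 23) example.
    return [(x + y) / 2 for x, y in zip(a, b)]

population = [random_robot() for _ in range(30)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]  # power off the worse half
    children = [crossover(random.choice(survivors), random.choice(survivors))
                for _ in range(15)]
    # A little mutation (shifting energy between two subsystems)
    # keeps the population exploring new designs.
    for child in children:
        i, j = random.sample(range(4), 2)
        delta = random.uniform(0, min(2.0, child[i]))
        child[i] -= delta
        child[j] += delta
    population = survivors + children

best = max(population, key=fitness)
print([round(v, 1) for v in best], round(fitness(best), 1))
```

Under this toy fitness rule the ideal design is a perfectly balanced (25, 25, 25, 25) robot, and the population drifts toward it generation by generation, without the programmer ever stating that target explicitly.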
We can never be entirely sure that we have approached perfection closely enough. Computer programmers usually just wait and see what happens: if the population shows no more significant improvement from one unit of time to the next, they shut down the algorithm and work with the best entities they could discover during the process. That’s how many areas of our life work right now.
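That “wait and see” stopping rule is easy to make precise. A common sketch, assuming hypothetical `evolve_one_generation` and `best_fitness` callbacks standing in for whatever the EA provides:

```python
def run_until_plateau(evolve_one_generation, best_fitness,
                      patience=10, min_gain=1e-3, max_generations=1000):
    """Run the EA until `patience` generations pass with no significant gain."""
    best = best_fitness()
    stale = 0  # generations since the last significant improvement
    for generation in range(max_generations):
        evolve_one_generation()
        current = best_fitness()
        if current > best + min_gain:
            best, stale = current, 0
        else:
            stale += 1
        if stale >= patience:
            break  # the population has plateaued; stop and keep the best
    return best

# Tiny demo with a fake EA whose fitness climbs for 5 generations, then stalls.
state = {"f": 0.0, "g": 0}
def step():
    state["g"] += 1
    if state["g"] <= 5:
        state["f"] += 1.0
result = run_until_plateau(step, lambda: state["f"], patience=3)
print(result, state["g"])  # stops 3 stale generations after the last gain
```

The `patience` and `min_gain` thresholds are a judgment call: too impatient and the EA stops before a late breakthrough, too patient and it burns compute on a population that has already converged.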