Artificial intelligence is artificial

The fascination of AI: what is artificial intelligence?

Neural AI

It was Geoffrey Hinton and two of his colleagues who revived neural AI research in 1986, and with it the research field of artificial intelligence. With their refinement of the backpropagation algorithm they created the basis for the deep learning that almost every AI works with today. Thanks to this learning algorithm, deep neural networks can constantly learn and grow independently - and thus master challenges at which symbolic AI fails.
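The core idea behind this learning algorithm can be illustrated on the smallest possible case: a single artificial neuron whose weights are nudged against the error gradient until its output matches the desired target. The following is a minimal sketch in pure Python, assuming the logical OR function as training data and a sigmoid neuron with a squared-error update - illustrative choices, not details from the research described above:

```python
import math

# Training data for logical OR: (input pair) -> target output
data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 1.0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One artificial neuron: two weights and a bias, all starting at zero
w = [0.0, 0.0]
b = 0.0
lr = 1.0  # learning rate

for epoch in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Error gradient "propagated back" to each weight
        delta = (out - target) * out * (1.0 - out)
        w[0] -= lr * delta * x1
        w[1] -= lr * delta * x2
        b    -= lr * delta

def predict(x1, x2):
    """Threshold the trained neuron's output at 0.5."""
    return round(sigmoid(w[0] * x1 + w[1] * x2 + b))
```

After training, `predict` reproduces the OR table. In a deep network the same gradient step is applied layer by layer, which is what the backpropagation algorithm organizes.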

Neural artificial intelligence (also: connectionist or subsymbolic AI) abandons the principle of symbolic knowledge representation. Instead, much as in the human brain, knowledge is segmented into tiny functional units, the artificial neurons, which are networked into ever larger groups (bottom-up approach). The result is a richly branched network of artificial neurons.

Neural AI tries to simulate the functional principle of the brain as precisely as possible by recreating its neural networks artificially. In contrast to symbolic AI, the neural network is “trained” - in robotics, for example, with sensorimotor data. From these experiences, the AI generates its constantly growing knowledge. This is where the great innovation lies: although the training takes a relatively long time, the system is ultimately able to learn independently. We therefore also speak of “self-learning systems” or “machine learning”. This makes neural artificial intelligences very dynamic, adaptable systems - which are sometimes no longer fully comprehensible to people.

The construction of an artificial neural network, however, almost always follows the same principles:

  1. Countless artificial neurons are arranged in layers stacked on top of each other. They are connected to one another via simulated lines.
  2. At present, mainly deep neural networks are in use. “Deep” means that the network works with more than two layers. The intermediate layers are stacked hierarchically - in some systems, information is passed up over millions of connections. For orientation: AlphaGo (Google DeepMind) has 13 intermediate layers, Inception (Google) already has 22.
  3. The top layer, or input layer, works like a sensor: it takes the input - such as text, images or sounds - into the system. From there, the input is passed through the network according to certain patterns and compared with previous input. The network is fed and trained via the input layer.
  4. The deepest layer, or output layer, usually has only a few neurons - one for each category to be classified (picture of a dog, picture of a cat, etc.). The output layer presents the result of the neural network to the user and can, for example, also recognize a picture of a cat that was previously unknown to it.
  5. There are three basic learning processes with which neural networks can be trained: supervised, unsupervised and reinforcement learning. These procedures regulate in different ways how an input leads to the desired output of a system.
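The layered structure described in the steps above can be sketched in a few lines: an input layer feeding an intermediate layer, which feeds an output layer with one neuron per category. This is a minimal forward pass in pure Python; the weights are hand-picked for illustration (a trained network would learn them), and the "cat"/"dog" categories are assumed examples, not from a real system:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One layer: each neuron combines all inputs over its weighted connections."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Illustrative weights: 3 input values, one intermediate layer of
# 2 neurons, and an output layer with one neuron per category.
hidden_w = [[ 1.0, -1.0,  0.5],
            [-0.5,  1.0,  1.0]]
hidden_b = [0.0, -0.5]
out_w = [[ 2.0, -1.0],   # output neuron for category "cat"
         [-1.0,  2.0]]   # output neuron for category "dog"
out_b = [0.0, 0.0]

def classify(inputs):
    hidden = layer(inputs, hidden_w, hidden_b)  # input -> intermediate layer
    output = layer(hidden, out_w, out_b)        # intermediate -> output layer
    # The most activated output neuron names the recognized category
    return ["cat", "dog"][output.index(max(output))]
```

A deep network differs from this toy only in scale: more intermediate layers (13 in AlphaGo, 22 in Inception) and millions of connections instead of a handful.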

The overwhelming majority of recent AI successes have come from neural networks. Under the catchphrase deep learning, innovation research relies on the extraordinary performance of self-learning systems - be it in speech and handwriting recognition or in autonomous driving. Thanks to deep neural networks, Google DeepMind's AlphaGo defeated the South Korean professional Go player Lee Sedol in 2016. Go is considered one of the most complex strategic board games in the world.

Google's Inception, on the other hand - actually a system for image recognition - creates amazing dream images that triggered a viral hype in 2015 under the hashtag #DeepDreams. This “side effect” of the system was discovered by chance by its developers: they wanted to find out exactly how the artificial intelligence they had created works.