Artificial intelligence is revolutionizing software development in the auto industry, an industry that has largely thought in terms of if-then relationships. The new algorithms are not just being used for highly automated driving – they are expected to solve nearly any problem that classic control engineering cannot handle.
A solid line could easily have thwarted Daimler’s grand spectacle. A few years ago, when Daimler engineers equipped an S-Class so it could retrace Bertha Benz’s historic ride automatically, the project largely went well. The vehicle moved cautiously through Ladenburg’s city traffic and kept its distance from other vehicles out on the highway. It was only when a delivery vehicle blocked the lane that the rolling supercomputer itself became an obstacle to traffic: it adhered strictly to the traffic rules envisioned by its creators, which stipulated that a continuous line must never be crossed. At that point, if not earlier, it must have been clear that autonomous driving will never work in an urban environment with classic control technology alone. It is simply impossible to program a machine in advance for every circumstance or hazard that can arise in city driving. This requires software that resolves rule conflicts based on experience. The algorithms needed for this are artificial neural networks that are trained with machine learning.
The use of artificial intelligence for highly automated driving starts with the unequivocal recognition of what the sensory “eyes” of the car see. Machines must laboriously learn something that comes naturally to small children. Deep, multilayered neural networks provide the key to computer-supported image recognition. They are built from large numbers of the smallest possible computing units, the so-called neurons, arranged in layers. Each neuron passes its result on to the neurons in the next layer, and the rules governing how those results are calculated and forwarded change continuously during training. Neural networks can do essentially nothing at first – they have to be trained. They can only differentiate a dog from a cat once they have seen images of many breeds of dogs, but this process can be extensively automated, for example by feeding the machine images and the associated image descriptions from a photo database. The more layers a neural network has, the more complex the learning processes it makes possible. That is where the much-used phrase “deep learning” comes from.
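The idea described above – layers of simple neurons passing results forward, with the calculation rules (the weights) adjusted by training – can be sketched in a few lines of code. This is an illustrative toy, not the software used in any production vehicle: the network, its weights, and the training data are all invented for the example, and real image-recognition networks contain millions of neurons rather than three.

```python
import math

def sigmoid(x):
    """Squashing function used by each neuron."""
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """A minimal two-layer network: 2 inputs -> 2 hidden neurons -> 1 output."""

    def __init__(self):
        # Fixed, arbitrary starting weights so the example is reproducible.
        self.w1 = [[0.5, -0.4], [0.3, 0.8]]   # input -> hidden layer
        self.w2 = [0.7, -0.6]                 # hidden -> output neuron

    def forward(self, x):
        # Each hidden neuron combines the inputs and passes its result
        # on to the next layer.
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
                  for row in self.w1]
        self.y = sigmoid(sum(w * hi for w, hi in zip(self.w2, self.h)))
        return self.y

    def train_step(self, x, target, lr=0.5):
        # One gradient-descent update: the rules governing how results
        # are calculated (the weights) change with every training example.
        y = self.forward(x)
        d_out = (y - target) * y * (1 - y)
        for i, hi in enumerate(self.h):
            d_hid = d_out * self.w2[i] * hi * (1 - hi)
            self.w2[i] -= lr * d_out * hi
            for j, xj in enumerate(x):
                self.w1[i][j] -= lr * d_hid * xj
        return 0.5 * (y - target) ** 2   # squared error before the update

net = TinyNet()
# Repeatedly show the network one example; the error shrinks as it "learns".
losses = [net.train_step([1.0, 0.0], target=1.0) for _ in range(20)]
```

Stacking many more such layers is what turns this into the "deep learning" the article refers to: each additional layer lets the network learn more complex intermediate representations.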