Artificial intelligence (AI) plays a fundamental role in a company’s future success, and its key technologies are being deployed across the board. These are among the findings of a study we recently published together with Crisp Research and Hewlett Packard Enterprise (HPE). Now let’s look at the details and the various facets of AI.
A quick update: According to the often-quoted study, almost two thirds of companies in Germany, Austria and Switzerland are actively engaging with machine learning. One fifth have already integrated corresponding technologies into their processes, and the same proportion of companies are now investing resources in deep learning.
What are the facets of AI and how do they differ from one another?
Depending on origin, perspective and processes, artificial intelligence systems can be categorized by the following terms:
- Machine learning
- Deep learning
- Cognitive computing
To distinguish these terms from one another, it helps to consider two dimensions: “clarity of purpose” and “degree of autonomy”. Most machine learning-based systems today are developed, trained and optimized for a specific task, for example the recognition of defective products during quality control in a production process. Such systems have a clearly defined purpose and little to no autonomy.
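How such a single-purpose system works can be sketched in a few lines. The example below is purely illustrative and uses synthetic data: a simple logistic-regression classifier is trained to flag defective parts from two made-up measurements (say, weight and diameter). The data, thresholds and function names are assumptions, not part of the study.

```python
import numpy as np

# Illustrative sketch: a single-purpose classifier that flags defective
# products from two synthetic measurements (e.g. weight and diameter).
rng = np.random.default_rng(0)

# Synthetic training data: "good" parts cluster around (1.0, 1.0),
# "defective" parts around (0.0, 0.0). Labels: 1 = defective, 0 = good.
good = rng.normal(loc=1.0, scale=0.1, size=(50, 2))
bad = rng.normal(loc=0.0, scale=0.1, size=(50, 2))
X = np.vstack([good, bad])
y = np.array([0] * 50 + [1] * 50)

# Train logistic regression with plain gradient descent on the log loss.
w = np.zeros(2)
b = 0.0
for _ in range(500):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))      # predicted defect probability
    grad_w = X.T @ (p - y) / len(y)   # gradient of the loss w.r.t. weights
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

def is_defective(measurement):
    """Return True if the model flags the part as defective."""
    p = 1.0 / (1.0 + np.exp(-(measurement @ w + b)))
    return p > 0.5

print(is_defective(np.array([0.05, 0.10])))  # sample near the defective cluster
print(is_defective(np.array([0.95, 1.05])))  # sample near the good cluster
```

The point of the sketch is the narrow scope: the model does exactly one thing it was trained for and nothing else, which is what “clearly defined purpose, little to no autonomy” means in practice.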
Deep learning-based systems go a step further and enable more independent learning. Suitable tasks include recognizing objects in images or speech recognition when interacting with a smartphone, for example. This technology, which is based on artificial neural networks, now allows machines to understand far more than was previously possible. Simulated neurons, loosely modelled on those in the human brain, are arranged in many successive layers. Each layer carries out a small task, such as detecting edges, and in doing so extracts features from its input; the output of one layer serves as the input for the next. In combination with a large volume of high-quality training data, the network learns to complete certain tasks.
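The layered structure described above can be illustrated with a minimal forward pass. This is a sketch only: the weights are random rather than trained, and the layer sizes are arbitrary assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    """Simple nonlinearity applied after each layer."""
    return np.maximum(0.0, x)

# Three layers with random (untrained) weights and zero biases. In a real
# system, these weights would be learned from large volumes of training data.
layers = [
    (rng.normal(size=(8, 16)), np.zeros(16)),   # layer 1: low-level features
    (rng.normal(size=(16, 16)), np.zeros(16)),  # layer 2: combined features
    (rng.normal(size=(16, 3)), np.zeros(3)),    # layer 3: 3 output values
]

def forward(x):
    """Pass the input through every layer in sequence: the output of one
    layer becomes the input of the next."""
    for w, b in layers:
        x = relu(x @ w + b)
    return x

sample = rng.normal(size=8)   # stand-in for e.g. extracted image features
print(forward(sample).shape)  # (3,)
```

Each tuple in `layers` plays the role of one layer in the text: it performs a small transformation and hands its result on, so features are extracted step by step.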
Humans have little insight into the inner layers of the network, and decisions are made solely by the trained machine. Deep learning-based systems therefore have a higher degree of autonomy and offer a wide range of possible applications. An early, very successful example comes from medicine, where cancer cells can now be detected in images significantly faster and more efficiently than before.
Cognitive computing describes systems that take on certain tasks or make decisions in order to assist or replace a human. Possible areas of application include insurance claims management, service hotlines and medical diagnostics.
Cognitive systems are characterized above all by taking on certain “human” traits and by their ability to deal with ambiguity and vagueness. The degree of autonomy these systems display can be very high: think of cognitive systems in medicine that suggest a specific treatment, or of applications in national security that decide on the preventive detention of a crime suspect.
Real artificial intelligence, finally, denotes machines that possess complete cognitive abilities and that humans can no longer identify as machines. In this final development stage, such systems achieve a very high degree of independence: they make their own decisions, determine their own strategies, and decide how they learn and communicate.
Achieving this level of autonomy is currently the focus of many researchers, while companies and their use cases are still exploring the other varieties of artificial intelligence described above.
This definition is part of our study “Machine Learning in Companies – Artificial Intelligence as a Foundation of Digital Transformation Processes”. The entire study is available here: