Following the German Digital Summit, the Research Center for Information Technology (FZI) has asked politicians, associations, industry, and science to form a united front to promote the responsible use of artificial intelligence (AI): a responsibility that we have always encouraged. As the use of AI technology becomes ever more widespread, we need not only to consider the technical and legal challenges of this rapid growth, but also to develop solutions that take the needs of society as a whole into account.
More specifically, the FZI called on all parties to collaborate more closely on solutions that allow companies and individuals to attain and maintain digital sovereignty. For technologies and devices with integrated AI, we need "new forms of certification that users can trust, and that protect the business secrets behind the company’s algorithms". The organization supports the creation of a legal framework for the reliable use of autonomous systems, and highlights the importance of ethical considerations.
The FZI’s ideas are based in part on its study "Competencies for Digital Sovereignty" (available in German only), compiled in partnership with BMWi, Accenture, and Bitkom. Many of the conclusions and recommended actions arising from the study are similar to the results of our "Machine Learning in Companies" study.
For some time, we have focused on the responsible use of AI, examining the sticking points from a range of perspectives and across the full spectrum of use cases. From public-cloud AI-as-a-Service offerings to the development of high-performance, complex autonomous systems, we understand the importance of taking responsibility for systems that combine human and artificial intelligence. This creates new challenges for our processes, our technologies, and our company as a whole, and requires us to think – and act – in a different way. We must consider questions such as:
AI is primed for general, company-wide use – the technology is enterprise-ready. Are companies – providers and users alike – ready for the use of AI?
AI is often regarded as a "black box" whose processes are not transparent. How can we create "white box AI" and explain the decision-making mechanisms of algorithms in a transparent way?
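To make the "white box" idea concrete, here is a minimal, purely illustrative sketch: a linear scoring model whose prediction can be decomposed into per-feature contributions, so that every decision can be explained and audited. The feature names, weights, and example applicant are assumptions invented for this sketch, not part of any real system.

```python
# Illustrative "white box" decision model: a linear score whose result
# can be broken down into the contribution of each individual feature.
# All names and numbers below are hypothetical.

WEIGHTS = {"income": 0.8, "debt": -1.2, "years_employed": 0.5}
BIAS = -0.3

def score(applicant: dict) -> float:
    """Overall score: bias plus the sum of per-feature contributions."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> dict:
    """Per-feature contribution to the score, making the decision transparent."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income": 1.0, "debt": 0.5, "years_employed": 2.0}
print(score(applicant))    # total score for this applicant
print(explain(applicant))  # how much each feature contributed
```

In contrast to a deep neural network, whose internal weights resist this kind of direct reading, the decomposition in `explain` shows exactly why the model reached its result – the transparency that "white box AI" demands.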
We will answer these questions – and more – in our upcoming series of posts dedicated to the individual challenges presented by AI and the corresponding solutions.