From unconscious use in everyday life to the automation of work: AI is used extensively. In an expert interview, Prof. Dr. René Peinl, Head of the Systems Integration Research Group at the affiliated Institute for Information Systems at Hof University of Applied Sciences, discusses the opportunities, changes, and risks behind the technology, and explains why legal regulation such as the AI Act is necessary.
AI is now much more than a technological trend; it has become a game changer in many areas. Whether for writing texts, creating images, or gathering knowledge in general, companies, students, private individuals, and the self-employed are increasingly using AI. It is also already integrated into many everyday applications, from navigation systems to shopping recommendations, and algorithmic decision-making (ADM) has become a permanent fixture in our daily lives. AI therefore offers great potential for improvement and simplification, but also for misuse. This is why the value-oriented and responsible use of AI, or digital ethics, is becoming increasingly important.
Expert interview with Prof. Dr. René Peinl
Head of Institute for Information Systems,
Hof University of Applied Sciences
Project Team Hof: Hello, Prof. Peinl. Let's start with the fundamental challenges and concerns that arise when using AI in connection with data protection, quality, and ethics. What can you tell us about these issues?
To regulate the use of AI, the EU developed the "AI Act" in 2022-23, the world's first comprehensive AI law. How demanding are its implementation and compliance?
And how can we protect ourselves and ensure that personal data is adequately protected when using AI algorithms?
It must also be ensured that the data is no longer personal or personally identifiable. Algorithms have even been developed that check the number of individuals in a particular area and, if necessary, deliberately distort the data to prevent it from being linked to individuals. Such mechanisms can contribute to the protection of personal data. Data protection principles such as data minimization should be observed, especially in applications that collect data during use. In the online media industry, however, especially in the US, collecting data by default is the norm unless the user explicitly objects, and objecting may restrict access to the service. This is often justified on the grounds that it is technically impossible to do otherwise, but that is usually not the case.
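The interview does not name a specific algorithm, but the mechanism described, checking how many individuals fall into an area and withholding or altering counts that are too small to be anonymous, resembles a k-anonymity-style suppression rule. The following is a minimal sketch under that assumption; the function name `anonymize_area_counts` and the threshold parameter `k` are illustrative, not from the original.

```python
def anonymize_area_counts(area_counts, k=5):
    """k-anonymity-style suppression (illustrative sketch).

    Publish an area's count only if at least k individuals are present;
    otherwise replace it with None, so a small group's data can no longer
    be linked back to specific people.
    """
    return {
        area: (count if count >= k else None)
        for area, count in area_counts.items()
    }

# Example: area "B" has only 3 individuals, so its count is suppressed.
published = anonymize_area_counts({"A": 12, "B": 3, "C": 7}, k=5)
```

In practice, suppression is only one option; production systems may instead aggregate small areas into larger ones or add calibrated statistical noise (as in differential privacy) rather than dropping values outright.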
This quickly raises the question of what is right and what is wrong. So, to what extent are ethical considerations important in the development of AI systems?
That is a rather dark perspective. Are there any specific ethical guidelines that you think should be taken into account when integrating AI in different application areas?
Beyond the Human Rights Convention, there is no generally recognized set of ethical rules, and the diversity of philosophical approaches makes creating a uniform code of ethics even more difficult. Social debates, such as the current discussions about financial decisions and social justice, show that ethical considerations often take a back seat to economic and political interests. The challenge is to bring ethical principles back to the fore and achieve a broader social consensus on basic ethical values. The promise of ethical guidelines, on the other hand, is that cooperative collaboration leads to a better world in which not only the richest benefit: the general population can also share in technological and economic progress, and society develops positively.
That sounds more hopeful. Progress then also includes ensuring the quality of AI-generated answers and data, especially in sensitive or critical fields of application, right?
Finally, a question that could also be asked by a management consultancy: What measures should companies and organizations take to ensure the safe use of AI technologies?
It is difficult to anticipate every possible misuse scenario, as users often find creative and unexpected uses for software. One strategy is to anticipate known risks and minimize them through security mechanisms that prevent unwanted or harmful use. Continuous monitoring of applications can help to identify unexpected developments and respond appropriately. Finally, data protection laws must be adhered to so that the application complies with legal requirements and users' privacy is protected.