
AI and digital ethics: the responsible use of the game changer

From unnoticed use in everyday life to the automation of work: AI is used extensively. In an expert interview, Prof. Dr. René Peinl, Head of the Systems Integration Research Group at the affiliated Institute for Information Systems at Hof University of Applied Sciences, explains the opportunities, changes, and risks behind the technology, and why legal regulation such as the AI Act is necessary.

AI is now much more than just a technological trend; it has become a game changer in many areas. Whether as an aid for writing texts, creating images, or gathering knowledge: companies, students, private individuals, and the self-employed are increasingly using AI. It is also already integrated into many everyday applications, from navigation systems to shopping recommendations, and algorithmic decision-making (ADM) is now a permanent fixture in our lives. AI therefore offers great potential for improvement and simplification, but also for misuse. This is why the value-oriented and responsible use of AI, in other words digital ethics, is becoming increasingly necessary.

The so-called AI Act, with which the EU wants to create the legal framework for the use of artificial intelligence, is intended to serve this purpose. The AI Act categorizes AI systems according to their potential risk:
  • Unacceptable risk: All AI systems that have a significant risk of violating human rights and fundamental principles are to be banned.
  • High risk: AI systems that pose a high risk to health, safety, or fundamental rights should be regulated.
  • Limited risk: Concerns AI systems that interact with humans. Users must be informed that AI is being used.
  • Low risk: AI systems that do not need to be restricted but for which documentation is required, such as spam filters.
Further points on the AI Act are currently being negotiated in the EU Parliament; it may be adopted as early as 2024. In the expert interview below, conducted as part of the practical project with Hof University of Applied Sciences, Prof. Dr. René Peinl gives us an overview of value-based management and digital ethics.

Expert interview with Prof. Dr. René Peinl

Head of the Institute for Information Systems,
Hof University of Applied Sciences

Project Team Hof: Hello, Prof. Peinl. Let's start with the fundamental challenges and concerns that arise when using AI in connection with data protection, quality, and ethics. What can you tell us about these issues?
Prof. Dr. Peinl: Legislation often neglects the importance of data in the development of AI. It is important to understand the impact that the data used has on the data subjects. For example, recordings of professional speakers can improve speech recognition and thus create new opportunities, while the same data, used for speech synthesis, can put those speakers out of work. The legal regulations do not differentiate between such scenarios and use cases; the GDPR in particular regards voice data as personal data and prohibits its use. This is a missed opportunity for AI companies in Germany compared to countries where the regulations are less strict. There are also difficulties with the use of generative AI, which can put artists out of work or lead to falsified images and fake news. It would therefore be necessary to treat generative AI differently from recognizing AI (except for surveillance), especially with regard to ethical issues.
In order to regulate the use of AI, the EU developed the "AI Act" in 2022-23, the world's first comprehensive AI law. How demanding is it to implement and comply with?
Prof. Dr. Peinl: The integrated risk assessment is a positive step, as it focuses on the effects and not only on data collection. There are examples and discussions in legal texts about the categorization of generative AI into risk classes, but no clear agreement. Companies now face the challenge of appropriately classifying their products and projects into risk classes and, in case of doubt, defending this classification in court. At the end of the day, the biggest challenge for them is to obtain a reliable assessment and to protect themselves accordingly.
And how can we protect ourselves and ensure that personal data is adequately protected when using AI algorithms?
Prof. Dr. Peinl: There are various proposals for dealing with data protection, such as “privacy by default” and “privacy by design”, which should be taken into account during software development. This includes the anonymization of data by removing non-essential personal information. Descriptive attributes about the data subject may be sufficient. For example, the exact address can be replaced by a zip code or an area cluster.
It must also be ensured that the data is no longer personal or personally identifiable. Algorithms have even been developed that check the number of individuals in a particular area and, if necessary, perturb the data to prevent it from being linked to individuals. Such mechanisms can contribute to the protection of personal data. Data protection principles such as data minimization should be observed, especially for applications that collect data during use. In the online media industry, however, especially in the US, collecting data as a matter of principle is the norm; users who explicitly object may find their access to the service restricted. This is often justified on the grounds that it is technically impossible to do otherwise, which is usually not the case.
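
To make this concrete, the following is a minimal Python sketch of the generalization and group-size check described above, in the spirit of k-anonymity. The record fields, thresholds, and helper names are purely illustrative, not a reference implementation:

```python
from collections import Counter

def generalize_zip(zip_code: str, keep: int = 2) -> str:
    """Coarsen a zip code to its first `keep` digits, e.g. 85276 -> 85xxx."""
    return zip_code[:keep] + "x" * (len(zip_code) - keep)

def satisfies_k_anonymity(records, quasi_identifiers, k):
    """True if every combination of quasi-identifier values occurs at least k times."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Illustrative records: the exact address has been dropped; only a coarsened
# zip code and an age band remain as quasi-identifiers.
records = [
    {"zip": generalize_zip("85276"), "age_band": "30-39"},
    {"zip": generalize_zip("85276"), "age_band": "30-39"},
    {"zip": generalize_zip("80331"), "age_band": "30-39"},
]

# The third record is alone in its group, so 2-anonymity fails and the data
# would need further coarsening or perturbation before release.
print(satisfies_k_anonymity(records, ["zip", "age_band"], k=2))  # -> False
```

A check like this is what the mechanisms Prof. Peinl mentions boil down to: if a group of records sharing the same quasi-identifier values is too small, the data must be coarsened or perturbed further before it can be considered anonymized.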
This quickly raises the question of what is right and what is wrong. So, to what extent are ethical considerations important in the development of AI systems?
Prof. Dr. Peinl: In our society, ethical issues are often associated with challenges, as legal norms and ethical standards do not always coincide. Defining ethical principles is harder still and is complicated by social differences of opinion. In practice, action is often guided by legal boundaries rather than by ethics, with gray areas and legal loopholes being exploited. The pursuit of profit, for example, can lead to business models that are based on ethical transgressions. The relationship between ethics and capitalism is a difficult one, and in some cases ethics is used more as a cover when it serves economic interests. Companies sometimes emphasize their data-saving practices, for instance, but this too can be motivated by economic benefits.
That is a rather dark perspective. Are there any specific ethical guidelines that you think should be taken into account when integrating AI in different application areas?
Prof. Dr. Peinl: It is difficult to define ethical guidelines as they are often related to laws or the absence of laws. Although the Convention on Human Rights provides certain ethical guidelines, signatory states often opt for different interpretations or selective compliance.
Beyond the Human Rights Convention, there is no generally recognized set of ethical rules, and the diversity of philosophical approaches makes the creation of a uniform code of ethics even more difficult. Social debates, such as the current discussions about financial decisions and social justice, show that ethical considerations often take a back seat to economic and political interests. The challenge is to bring ethical principles back to the fore and to achieve a broader social consensus on basic ethical values. The opportunity of ethical guidelines, on the other hand, lies in cooperation: a world in which not only the richest benefit, but the general population shares in technological and economic progress and society develops positively.
That sounds more hopeful. Progress then also includes ensuring the quality of AI-generated answers and data, especially in sensitive or critical fields of application, right?
Prof. Dr. Peinl: Yes, but the ethical evaluation of AI responses is complex and individual. The evaluation of a response often depends on individual sensitivity, and some people can react very emotionally. In image generation, for example, a response could be considered problematic if it is not diverse enough: an image of a heterosexual couple with a Caucasian phenotype might be seen as statistically appropriate by some, while others might interpret it negatively. The dilemma is how many variations must be considered to ensure diversity, and whether this is even practically feasible. Accurate descriptions are required to ensure the quality of AI-generated responses, which presents companies with the challenge of balancing ethical requirements against usability; usefulness and usability can collide with ethical and social demands. Developing options that take different ethical approaches into account could be a solution, but it requires effort and raises the question of whether users appreciate and support this distinction. Unfortunately, the market often tends to favor less elaborate and faster solutions, even if these are ethically questionable.
Finally, a question that could also be asked by a management consultancy: What measures should companies and organizations take to ensure the safe use of AI technologies?
Prof. Dr. Peinl: The issue of security in relation to AI can be very complex, depending on which aspects it relates to. Security can relate to data protection, to ensure that data does not end up in the wrong hands. It can also be about ensuring that AI is not misused for harmful purposes such as the development of weapons or terrorist activities.
It is difficult to anticipate all possible misuse scenarios, as users often find creative and unexpected uses for software. One strategy is to anticipate known risks and minimize them through security mechanisms to prevent unwanted or harmful use. Continuous monitoring of applications can help to identify unexpected developments and respond appropriately. Data protection laws must also be adhered to in order to ensure that the application complies with legal requirements and the privacy of users is protected.
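
As a concrete illustration of the strategy described here, anticipating known risks up front and monitoring the rest, the following is a minimal, hypothetical Python sketch of a guard placed in front of a generative model. The blocked-topic list and function names are invented for illustration; real systems would rely on far more robust classifiers than substring matching:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-usage-monitor")

# Illustrative list of known-risk topics; deliberately simplistic.
BLOCKED_TOPICS = ("build a weapon", "write malware")

def guarded_generate(prompt: str, model_call) -> str:
    """Block prompts matching known risks; log everything else for later review."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            log.warning("Blocked prompt mentioning %r", topic)
            return "Request declined: this use case is not supported."
    log.info("Prompt accepted (%d characters)", len(prompt))
    return model_call(prompt)

# Usage with a stand-in model function:
print(guarded_generate("Write a poem about autumn",
                       lambda p: f"[generated text for: {p}]"))
```

The logged prompts are what enables the continuous monitoring Prof. Peinl recommends: reviewing them periodically is one way to spot the creative and unexpected uses that the original risk analysis did not anticipate.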
Thank you, Prof. Peinl, for this detailed excursion on the topic of AI in the context of digital ethics. And many thanks to the team of students at Hof University for conducting the interview; their project will be reported on soon.