Prof. Dr. Laura Bechthold is a professor at the Technische Hochschule Ingolstadt, where she teaches Technology Assessment and Cultural Management. One of her main research areas is Corporate Digital Responsibility (CDR), which deals with technological developments and the responsibility that innovation entails. We therefore reached out to Laura for insight into CDR, the "digital responsibility" that AI already demands in its development, data provision and evaluation, and we also discussed its impact on employees and companies. Risks, opportunities and, above all, the responsibility that comes with the use of AI now and in the future are the topics of this interview, conducted by Michael Urban (Head of Knowledge & Learning) with Prof. Dr. Laura Bechthold.
Michael Urban: Laura, we're delighted to have a foresight specialist here today! We are talking about Artificial Intelligence (AI) on a large scale, so let's start with this: how would you define "AI"?
Prof. Dr. Laura Bechthold: Basically, I base my thinking on the common definitions. A first approach is to say that AI is about machine systems that perform tasks that would normally require human intelligence, transferred into the artificial or digital space. Examples of this are visual perception, speech recognition, decision-making or translation between languages. AI is also a very general term that can be broken down into sub-areas such as “machine learning” or “deep learning”. These refer more to the underlying technological solutions than to the fields of application.
Language models, autonomous driving, image recognition in medicine - these are all familiar fields of application. Are there also unexpected winners in this field? In other words, AI applications that nobody had anticipated?
In addition to the opportunities and possibilities, I wonder what the other side looks like. What risks do you identify in terms of AI misuse or dystopian applications?
These mechanisms work much more subliminally than physical applications and enter our everyday lives unnoticed. There is also a risk that existing social inequalities will be reinforced.
Accordingly, there are also some ethical challenges...
So why is that? Most algorithms are trained on existing data sets, i.e. on data that is currently available and that represents our society as it is today and as it has been digitally mapped to date: with all its shifts, distortions and possible injustices. Training on such a data set can give an algorithm a racist or gender-specific bias, for example, because the social distortions and injustices already present in the training data are unintentionally carried over into the machine learning model. One example of the individual and social consequences this can have is the child benefit scandal in the Netherlands in 2019. Thousands of families were falsely accused of fraud and ordered to pay back child benefit. The underlying problem was that the algorithms discriminated by automatically classifying people as fraud risks based on their nationality or dual citizenship. This led to unlawful accusations and serious financial and social consequences for the families concerned.
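To make the mechanism Laura describes tangible, here is a minimal Python sketch with entirely hypothetical data (it does not reproduce the Dutch case): if historical decisions were skewed against a group, any model trained to reproduce those decisions inherits the same skew.

```python
# Hypothetical records of past decisions: (dual_citizenship, flagged_as_fraud_risk).
# The numbers are invented purely to illustrate how a bias in historical labels
# is picked up by anything trained to imitate them.
historical_cases = [
    (1, 1), (1, 1), (1, 1), (1, 0),   # people with dual citizenship
    (0, 1), (0, 0), (0, 0), (0, 0),   # people without dual citizenship
]

def flag_rate(dual_citizenship: int) -> float:
    """Share of people in a group who were flagged in the historical data."""
    group = [flagged for dual, flagged in historical_cases if dual == dual_citizenship]
    return sum(group) / len(group)

# A model optimized to reproduce these labels will learn exactly this regularity
# and treat citizenship as a strong predictor of "fraud risk".
print(f"flag rate with dual citizenship:    {flag_rate(1):.0%}")  # 75%
print(f"flag rate without dual citizenship: {flag_rate(0):.0%}")  # 25%
```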
As an AI developer, I have to consider what data I use, how I train the algorithm and whether I can ensure ethical aspects such as fairness. In addition to the data set, the algorithm itself must then be analyzed to determine the basis on which it should make decisions: Which factors should be taken into account, for example, and which should be given special weight? This alone involves a value judgement. Let’s take the following thought experiment: How accurately should an algorithm detect cancer cells on MRIs?
A strictly scientifically optimized algorithm only flags clearly diseased cells and accepts that some diseased cells will fall through the cracks. This means that the algorithm may not recognize cancer cells that are actually present, so a patient who is actually ill is wrongly classified as healthy. Doctors, by contrast, would probably be intuitively more inclined to accept so-called “false positive” results, i.e. they would rather have the algorithm flag one case too many than one too few. However, if patients are mistakenly diagnosed as having cancer, this can of course also lead to unnecessary anxiety, further examinations and possibly unnecessary treatments. I am confident that such problems will become fewer as the technology advances, but for now we still have to think them through, and in different contexts.
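The trade-off behind this thought experiment can be sketched in a few lines of Python. The probabilities and labels below are invented for illustration only; the point is that moving the decision threshold shifts errors between missed patients (false negatives) and false alarms (false positives).

```python
# Invented example: predicted probability that a scan shows cancer,
# and the true label (1 = cancer present, 0 = healthy).
predictions = [0.95, 0.80, 0.62, 0.55, 0.40, 0.30, 0.20, 0.10]
labels      = [1,    1,    1,    0,    1,    0,    0,    0]

def confusion_counts(threshold: float) -> tuple[int, int]:
    """Return (false negatives, false positives) at a given decision threshold."""
    false_negatives = sum(1 for p, y in zip(predictions, labels) if p < threshold and y == 1)
    false_positives = sum(1 for p, y in zip(predictions, labels) if p >= threshold and y == 0)
    return false_negatives, false_positives

# A cautious threshold misses sick patients; a generous one alarms healthy ones.
for threshold in (0.9, 0.6, 0.3):
    fn, fp = confusion_counts(threshold)
    print(f"threshold {threshold:.1f}: {fn} sick patients missed, {fp} healthy patients flagged")
```

Where the threshold should sit cannot be derived from the data alone; it is precisely the value judgement described above.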
The final point that concerns me is the use of AI in everyday life. What happens when people use it without training? A good example is ChatGPT. If the results of the model are not questioned, false statements may be published as truths, because you are still not protected from receiving false information as output.
So there are a number of ethical dimensions to consider. What about the decision-making power of an AI, where should the limits be?
The other axis is the question of how much influence an AI result can have on society. For example, there is an algorithm in the USA that estimates the probability that an offender will reoffend. Judges use this information to decide how to proceed with the person. The same applies to scenarios in which AI plays a role in determining creditworthiness, hiring decisions or possibly even the choice of educational pathways. In my opinion, AI can serve as an additional source of information here, but should not be used as the sole basis for decision-making.
The ethical complications will therefore also have a huge impact on the business world. What will our companies be facing in the next five to ten years?
What is the status quo? Are companies currently well prepared for the AI factor?
There is also no way around the topic of "corporate digital responsibility". What concrete advantage do companies that are aware of their digital responsibility have?
It has become evident that companies that adapted to this at an earlier stage and took sustainability factors more seriously are more economically resilient in the long run and also find it easier to adapt to subsequent mandatory regulations. Just as CSRD reporting, for example, was introduced as a formalizing element in the area of sustainability, we are now seeing a similar development in the area of AI with the European AI Act, which will have far-reaching consequences for companies of all sizes that use AI in any way. Early preparation creates a cost advantage here, as everything can be set up appropriately from the start and no adjustments need to be made later on. People are also becoming increasingly aware of responsible companies and prefer them, which also creates brand value. Digital responsibility is a decisive competitive advantage that can – and should – be tackled now.
Well, let's hope that the companies have been listening carefully and are not sleeping on their future. In any case, thank you very much for the in-depth insight and for pointing out so many exciting connections, dear Laura!
Interested in finding out more about Laura?
Prof. Dr. Laura Bechthold works at the Bavarian Foresight Institute at the Technische Hochschule Ingolstadt, which focuses on technology-oriented futures research and the associated economic and social interdependencies. You can find more information about its areas of work and projects here.
Are you interested in her personal position on her research topics? Take a look at her LinkedIn profile.