AI and CDR: the digital responsibility behind the technology

Prof. Dr. Laura Bechthold is a professor at the Technische Hochschule Ingolstadt, where she teaches Technology Assessment and Cultural Management. One of her main research areas is Corporate Digital Responsibility (CDR), which examines technological developments and the responsibility that innovation entails. We therefore reached out to Laura for an insight into CDR: the "digital responsibility" that AI already involves in its development, data provision and evaluation. We also discussed the impact on employees and companies. Risks, opportunities and, above all, the responsibility that comes with the use of AI, now and in the future, are the topics of this interview by Michael Urban (Head of Knowledge & Learning) with Prof. Dr. Laura Bechthold.

Michael Urban: Laura, we're delighted to have a foresight specialist here today! We are talking about Artificial Intelligence (AI) on a large scale, so let's start with this: how would you define "AI"?

Prof. Dr. Laura Bechthold: Basically, I work from the common definitions. A first approach is to say that AI refers to machine systems that perform tasks which would normally require human intelligence, transferred into the artificial or digital space. Examples include visual perception, speech recognition, decision-making or translation between languages. AI is also a very general term that can be broken down into sub-areas such as “machine learning” or “deep learning”. These refer more to the underlying technological solutions than to the fields of application.

Language models, autonomous driving, image recognition in medicine - these are all familiar fields of application. Are there also unexpected winners in this field? In other words, AI applications that nobody had anticipated?

Prof. Dr. Laura Bechthold: There are indeed some remarkable developments. AI is a general-purpose technology that has a broad impact and will influence almost all areas of the economy through its versatile applications and transformative power. What is exciting in my view is not just to consider AI as a technology in its own right, but to look at it in combination and interaction with other emerging technologies. For example, there are many developments in combination with (nano)robotics or video technology. There are also incredible achievements in medicine, for example on the basis of synthetic biology, cell recognition or personalized (cancer) therapies. These different combinations can give rise to new things and open up huge, exciting fields of research. In my opinion, language models are only at the beginning and there is still a lot to come.

In addition to the opportunities and possibilities, I wonder what the other side looks like. What risks do you identify in terms of AI misuse or dystopian applications?

Prof. Dr. Laura Bechthold: Of course, there are also various use cases and possible scenarios. Everything that can be done with artificial intelligence in a positive sense can also be done in a negative sense. Let’s think about current global political developments and the use of autonomous weapons systems, for example. However, my personal horror scenarios are much more subtle and less obvious, for example when it comes to manipulating and influencing decisions. There are various techniques for manipulating people in the digital world. This starts with influencing user behavior through so-called “dark patterns”, i.e. design practices that aim to trick users into making decisions that are not in their best interest. In the worst case, however, it can go as far as the targeted manipulation of democratic processes. This was demonstrated a few years ago by the scandal surrounding Facebook and Cambridge Analytica, for example, in which specific microtargeting was used to significantly influence the US election campaign through extensive disinformation campaigns.

These mechanisms work much more subliminally than physical applications and enter our everyday lives unnoticed. There is also a risk that existing social inequalities will be reinforced.

Accordingly, there are also some ethical challenges...

Prof. Dr. Laura Bechthold: Yes, there are many ethical questions! It depends on what purpose an AI is intended to serve. As soon as an algorithm goes beyond a classic mathematical optimization problem, ethical considerations often come into play. Depending on the context, some questions are more important than others. One very important ethical question, for example, is where the training data for an algorithm comes from.

So why is that? Most algorithms are trained on existing data sets, i.e. data that is currently available and represents our society as it is today and as it has been digitally mapped to date, with all its shifts, distortions and possible injustices. An algorithm trained on such a database can then develop a racist or gender-specific bias, for example, because the social distortions and injustices already present in the training data are unconsciously transferred into the machine learning. One example of the individual and social consequences this can have is the child benefit scandal in the Netherlands in 2019. Thousands of families were falsely accused of fraud and ordered to pay back child benefit. The underlying problem was that the algorithms discriminated by automatically classifying people as fraud risks based on their nationality or dual citizenship. This led to unlawful accusations and serious financial and social consequences for the families concerned.
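To make this mechanism concrete, here is a minimal Python sketch on entirely synthetic, hypothetical data (group sizes, incomes and label rates are invented for illustration and are not taken from the Dutch case): a model trained on labels that were historically applied more often to one group simply reproduces that disparity in its predictions.

```python
# Minimal sketch: how historical bias in training labels carries over into a
# model's predictions. All numbers are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute: 0 = majority group, 1 = minority group.
group = rng.integers(0, 2, n)
income = rng.normal(50 - 5 * group, 10, n)  # historical inequality in the data

# Past "fraud flag" labels were applied more often to the minority group,
# independent of actual behaviour: a biased labelling process.
label = (rng.random(n) < 0.05 + 0.10 * group).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, label)

# The trained model reproduces the disparity it was taught.
pred = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted risk = {pred[group == g].mean():.3f}")
```

Nothing in the code itself is "racist"; the skew enters solely through the labelling history the model is fitted to, which is exactly why the provenance of training data is such a central ethical question.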
As an AI developer, I have to consider what data I use, how I train the algorithm and whether I can ensure ethical aspects such as fairness. In addition to the data set, the algorithm itself must then be analyzed to determine the basis on which it should make decisions: which factors, for example, should be taken into account, and which should be given special weighting? This alone involves an evaluation. Let’s use the following thought experiment, for example: How accurately should an algorithm detect cancer cells on MRIs?
A strictly scientifically optimized algorithm identifies only clearly diseased cells and accepts that some diseased cells will fall through the cracks. This means the algorithm may not recognize cancer cells that are actually present, so a patient who is actually ill is wrongly classified as healthy. At the same time, doctors would probably intuitively be more inclined to accept so-called “false positive” results, i.e. they would rather the algorithm flag one case too many than one too few. However, if patients are mistakenly diagnosed with cancer, this can of course also lead to unnecessary anxiety, further examinations and possibly unnecessary treatments. I am confident that such problems will become fewer as the technology advances, but for now we still have to think them through, and in different contexts.
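A small sketch can make this trade-off tangible. The model scores below are synthetic and purely illustrative; the point is only that moving a single decision threshold trades missed diseases (false negatives) against false alarms (false positives).

```python
# Sketch of the threshold trade-off: stricter thresholds miss more true
# cases, looser thresholds raise more false alarms. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
sick = rng.normal(0.7, 0.15, 200)     # model scores for truly diseased cases
healthy = rng.normal(0.4, 0.15, 800)  # model scores for truly healthy cases

for threshold in (0.65, 0.50, 0.35):
    false_negatives = (sick < threshold).mean()      # missed diseases
    false_positives = (healthy >= threshold).mean()  # false alarms
    print(f"threshold {threshold:.2f}: "
          f"{false_negatives:.1%} missed, {false_positives:.1%} false alarms")
```

Where to set the threshold is not a technical question but an evaluative one, which is precisely the point of the thought experiment.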

The final point that concerns me is the use of AI in everyday life. What happens when people use it without training? A good example is ChatGPT. If the model’s results are not questioned, false statements may be published as truths; there is still no protection against receiving false information as output.

So there are a number of ethical dimensions to consider. What about the decision-making power of an AI, where should the limits be?

Prof. Dr. Laura Bechthold: I don’t think there is one hard and fast limit; I think of it more as a spectrum. Let’s imagine, for example, a matrix with two axes: one is how much decision-making power the algorithm has (does it provide information, make a suggestion or really make the decision?), the other is how much influence the decision has on society or people. On one end, there are systems for pure technical optimization, such as maintenance work, on which no human fate depends and which are designed solely to make established processes faster and more efficient. Here, I don’t need to spend all my time on an ethical perspective; that would be an unnecessary waste of time and would mean giving away all the advantages of AI.

The other axis is the question of the possible influence of an AI result on society. For example, there is an algorithm in the USA that calculates the probability that a potential offender will reoffend. Judges use this information to decide how to proceed with the person. The same applies to scenarios in which AI plays a role in determining creditworthiness, hiring decisions or possibly even the choice of educational pathways. In my opinion, AI can serve as an additional source of information here, but should not be used as the sole basis for decision-making.
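As a toy illustration, one could encode this matrix in a few lines of Python. The scales and the oversight rule below are our own assumptions for the sketch, not a formal framework from the interview.

```python
# Toy encoding of the two-axis matrix: the algorithm's decision-making power
# vs. the societal impact of the decision. Scales and thresholds are assumed.
AUTONOMY = {"informs": 1, "recommends": 2, "decides": 3}
IMPACT = {"low": 1, "medium": 2, "high": 3}

def required_oversight(autonomy: str, impact: str) -> str:
    """Place a use case on the matrix and suggest a level of human oversight."""
    score = AUTONOMY[autonomy] * IMPACT[impact]
    if score <= 2:
        return "optimize freely (e.g. predictive maintenance)"
    if score <= 4:
        return "periodic ethical review"
    return "human must make the final decision"

print(required_oversight("decides", "low"))      # technical optimization
print(required_oversight("recommends", "high"))  # creditworthiness, hiring
```

High-impact cases always end up in the human-in-the-loop cell, which matches the point above: there, AI should inform rather than decide.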

The ethical complications will therefore also have a huge impact on the business world. What will our companies be facing in the next five to ten years?

Prof. Dr. Laura Bechthold: In my opinion, we are already in the midst of a major change; the typical Schumpeterian process of creative destruction is currently happening again. In other words: We are in a time in which established structures and old technologies are being successively rendered obsolete and replaced by innovative ventures and new technologies, which is leading to major economic change. This creates a huge opportunity space for companies to find out where AI data processing and other technologies can, should and will play a role in their business model and work processes. Seizing this opportunity can not only offer decisive competitive advantages in the future, but is also likely to determine the future viability of the company as a whole. That is why it is important not to shut yourself off and to think through the topic of AI at an early stage.

What is the status quo? Are companies currently well prepared for the AI factor?

Prof. Dr. Laura Bechthold: It varies considerably. There are two levels: the technology and AI level and the digital responsibility level. When it comes purely to technology, many large high-tech companies and start-ups are already at the forefront, with great examples such as Siemens AI Labs. Training and research on the topic complement this, and we already have a good ecosystem here, especially in Bavaria. But there are also some companies, particularly in the SME sector, that are not yet that advanced. Many companies are already sitting on large data sets, but these are often still unstructured, and the companies have no idea how to use them to their own advantage. So there are still major discrepancies, and it all depends on the digital mindset of the company.

There is also no way around the topic of "corporate digital responsibility". What concrete advantage do companies that are aware of their digital responsibility have?

Prof. Dr. Laura Bechthold: Corporate digital responsibility is an emerging collective term for the responsibility and voluntary self-commitment of companies in the digital world. It covers many areas, from data protection and cybersecurity to ensuring digital participation, inclusive design and the environmental impact of a company’s digital systems. I would actually venture to predict that CDR will become the next pillar alongside ESG. In my view, we are currently seeing a development in CDR similar to that of the CSR or corporate sustainability movement over the last 10-15 years.

It has become evident that companies that adapted at an earlier stage and took sustainability factors more seriously are more economically resilient in the long run and also find it easier to adapt to subsequent mandatory regulations. Just as CSRD reporting, for example, was introduced as a formalizing element in the area of sustainability, we are now seeing a similar development in the area of AI with the European AI Act, which will have far-reaching consequences for companies of all sizes that use AI in any way. Early preparation creates a cost advantage here, as everything can be set up appropriately from the start and no adjustments need to be made later on. People are also becoming increasingly aware that responsible companies are preferred, which also creates brand value. Digital responsibility is a decisive competitive advantage that can – and should – be tackled now.

Well, let's hope that the companies have been listening carefully and are not sleeping on their future. In any case, thank you very much for the in-depth insight and for pointing out so many exciting connections, dear Laura!

Our offer to you

If you don’t want to miss out on the competitive advantage of digitization and artificial intelligence – and also want to live up to the responsibility that comes with it – you are welcome to contact us for a personal consultation on the following topics:
  • Artificial Intelligence
  • Cybersecurity
  • Data Transformation
  • Internet of Things
  • Digital Ethics and CDR

Interested in finding out more about Laura?

Prof. Dr. Laura Bechthold works at the Bavarian Foresight Institute at the Technische Hochschule Ingolstadt, which focuses on technology-oriented futurology and the associated economic and social interdependencies. You can find more information about its areas of work and projects here.

Are you interested in her personal position on her research topics? Take a look at her LinkedIn profile.