Artificial intelligence raises big questions

Questions and answers about AI

Everything you always wanted to know about AI but didn't dare to ask. Here are the most important answers about artificial intelligence and ethics.

How important is artificial intelligence (AI)?

IT products and services that we use at work, but also in everyday life, increasingly rely on AI. The impact of this technology must therefore be shaped by applying ethical principles, both to cushion its social consequences and to build trust through responsible behavior.

Where are the limits of AI?

There are different views of what is meant by intelligence. Over time, researchers have therefore settled on this understanding: a machine is considered "intelligent" if it finds the best way to solve a problem set for it by humans. Examples include route planning on online maps and automatic image recognition. In principle, however, an AI cannot be "more" intelligent than the task it was trained for. A machine can solve abstract problems if it is taught rules, or if it is given material from which it can derive those rules itself.

Can man and machine work together?

In the future, collaboration between people and "intelligent" machines will be the central approach. An example from medicine illustrates this: tissue samples were analyzed by both a human pathologist and a machine. Using a method for evaluating analysis strategies, the machine achieved a score of 70.5 and the pathologist a score of 96.6; when the two approaches were combined, a score of 99.5 could be achieved. The two forms of intelligence, human and machine, complement each other so well that their interaction delivers better results than either approach in isolation.
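The article does not describe how the two approaches were combined in the study. One common pattern for such human-machine collaboration is a deferral pipeline: the machine decides the cases it is confident about and hands uncertain ones to the human expert. The following sketch is purely illustrative; the sample fields, labels, and confidence threshold are assumptions, not the study's method.

```python
def machine_classify(sample):
    """Hypothetical model output: (label, confidence)."""
    return sample["model_label"], sample["model_confidence"]

def human_classify(sample):
    """Hypothetical expert judgment."""
    return sample["expert_label"]

def combined_classify(sample, threshold=0.9):
    """Machine decides confident cases; uncertain ones go to the human."""
    label, confidence = machine_classify(sample)
    if confidence >= threshold:
        return label
    return human_classify(sample)

# Toy sample: the model is unsure, so the expert's call wins.
sample = {"model_label": "benign", "model_confidence": 0.6,
          "expert_label": "malignant"}
print(combined_classify(sample))  # malignant
```

The division of labor is the point: each intelligence handles the cases it is better at, which is one way the combination can outperform either alone.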

Can you trust a machine?

When people work with machines, the final decision rests with the people. But if the machine is to support people in making decisions, it has to follow certain rules. We therefore need systems that comply with ethical principles such as fairness. Ultimately, one expects the same degree of fairness from a machine-assisted decision as from a human one. Another aspect is explainability: if humans are to follow a machine's recommendation, they have to understand the underlying decision-making logic. For people to trust the machine, they need a satisfactory answer to the so-called "why" question.

Since the concept of trust is rather vague, we consider four aspects in this context. First, fairness: how can it be ensured that the technology follows the right values for the specific application? Second, the intelligibility of the technology. Third, reliability: can I count on the technology to make few mistakes? Fourth, safety: have responsibility for use and liability been clarified? Technical approaches ensure that AI applications have the right properties to explain decisions, prevent bias, and act fairly and safely. In addition, there are many instruments for the beneficial use of AI, such as guidelines, certificates, standards, and even laws for the purpose of regulation.
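The "why" question can be made concrete with a decision aid that returns not only its recommendation but also the rules that produced it. This is a minimal sketch; the loan criteria and thresholds below are invented for illustration, not taken from any real system.

```python
def assess_loan(income, debt):
    """Return a recommendation together with its reasons."""
    reasons = []
    if income >= 30000:
        reasons.append("income >= 30000")
    else:
        reasons.append("income below 30000")
    if debt / income < 0.4:
        reasons.append("debt ratio below 40%")
    else:
        reasons.append("debt ratio 40% or higher")
    approved = income >= 30000 and debt / income < 0.4
    return approved, reasons

approved, why = assess_loan(income=45000, debt=9000)
print(approved)  # True
print(why)       # ['income >= 30000', 'debt ratio below 40%']
```

A person affected by the decision can check each stated reason, which is exactly the kind of answer to the "why" question the text calls for.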

Why should AI decision-making bodies be multidisciplinary?

The most comprehensive possible evaluation of the use of AI requires multidisciplinary cooperation between experts in areas such as AI, sociology, psychology, economics, philosophy, and law. But the affected target groups also have to be part of the discussion. This quickly becomes apparent when one considers, for example, the concept of fairness and its many different interpretations: in one area, fairness means exactly equal distribution of resources; in another, it can mean an equal chance of using a resource. One should work with the respective target group to find the right interpretation of fairness.
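The two fairness readings from the text can be stated precisely. "Equal distribution" asks whether every group receives the same amount; "equal chance" asks whether every group has the same selection rate. The group names and numbers in this sketch are hypothetical.

```python
def equal_distribution(allocations):
    """Fair if every group receives the same amount."""
    return len(set(allocations.values())) == 1

def equal_chance(selected, applicants):
    """Fair if the selection rate is the same for every group."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    return len(set(rates.values())) == 1

# Hypothetical scenario: 10 grants split between two groups of unequal size.
alloc = {"group_a": 5, "group_b": 5}          # same amount for each group
applicants = {"group_a": 50, "group_b": 100}  # but group sizes differ
print(equal_distribution(alloc))              # True
print(equal_chance(alloc, applicants))        # False: 0.10 vs 0.05
```

The example shows why the interpretation matters: the same allocation is fair under one definition and unfair under the other, so the choice has to be negotiated with the affected target group.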

What is a holistic AI ethics approach?

The holistic approach to ethics includes principles on topics such as "trust and transparency", "data protection", and "guidelines for developers". EU-wide and national AI strategies, as well as research results on trustworthy AI, should also be incorporated. International cooperation is another important aspect. A current example is the multidisciplinary EU initiative "Trustworthy AI", in which 52 international experts are participating. Austria is represented in this body by Dr Sabine Köszegi, President of the Council for Robotics and AI. The BRZ has provided extensive feedback on this.

What questions do we still have to ask ourselves?

Learning systems require large amounts of data to draw conclusions. But is it guaranteed that the personal data I provide will be properly processed and stored? Who has my data, and do I still have control over it? Can I have my data deleted again? It is also a question of responsibility. Some wrong decisions by artificial intelligence, such as a poor book recommendation in an online shop, have no major impact. A rejected loan application or a wrong diagnosis in a hospital is a different matter. If an AI system makes a mistake, whom can I contact? Who is responsible for providing compensation? Who is responsible for the negative impact of machine decisions?