Undoubtedly, artificial intelligence is one of the most talked-about areas of technology today. While it provides answers to many questions, it also raises questions that have never been asked before, and these questions are not only technical. The intersection of ethics and artificial intelligence is one area where such questions and problems arise. In this article, we will briefly answer what artificial intelligence and artificial intelligence ethics are, draw the general framework of ethical problems in artificial intelligence, and concretize it with an example.
WHAT IS ARTIFICIAL INTELLIGENCE?
Artificial intelligence is an umbrella term that covers many subfields. In general terms, it is a form of programming that tries to enable computers to learn through various methods, without being explicitly told what to do. In classical programming, the steps to follow are stated explicitly for every situation; in artificial intelligence, by contrast, predictions are made from previous data or through various optimization methods.
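To make this contrast concrete, here is a minimal sketch in Python; the Celsius-to-Fahrenheit task, the sample data, and the one-variable linear model are illustrative assumptions rather than a real AI system. The classical function is told the exact rule, while the "learned" one estimates the same rule from previous data.

# Classical programming: the Celsius-to-Fahrenheit rule is stated explicitly.
def to_fahrenheit_classical(celsius):
    return celsius * 9 / 5 + 32

# Learning from data: only example (input, output) pairs are given; the
# coefficients are estimated with simple least squares (the data and the
# one-variable model here are illustrative assumptions).
samples = [(0.0, 32.0), (10.0, 50.0), (20.0, 68.0), (30.0, 86.0)]
n = len(samples)
mean_x = sum(x for x, _ in samples) / n
mean_y = sum(y for _, y in samples) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / \
        sum((x - mean_x) ** 2 for x, _ in samples)
intercept = mean_y - slope * mean_x

def to_fahrenheit_learned(celsius):
    # Uses the relationship inferred from the data, not a stated rule.
    return slope * celsius + intercept

print(to_fahrenheit_classical(25))  # 77.0, from the explicit rule
print(to_fahrenheit_learned(25))    # approximately 77.0, from the learned fit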
Artificial intelligence has gained such importance because it can be used in almost every field, from medical diagnosis to smart home appliances, and its effects can be observed in our daily lives. It is exactly at this point that the ethical problems it brings come to light.
ARTIFICIAL INTELLIGENCE ETHICS
As new technologies developed and began to take an important place in our lives, people began to ask many questions. Some of these stem from resistance to change, while others are predictions about the effects these technologies may have on our lives. Questions such as whether artificial intelligence will be the end of humanity, or whether robots will eliminate jobs entirely, lead to conceptual discussions about a technology far beyond the current state of artificial intelligence, and they can only be answered within the limits of that technology. Apart from these, there are also many concerns whose effects we are already seeing. Many events have shown the need to discuss and regulate these issues in greater depth, such as the bias that can arise in artificial intelligence (Biased AI), the power of artificial intelligence to manipulate people, and the choices that automated systems should make. All of these have given rise to "Artificial Intelligence Ethics" as a new field.
Ethical problems in artificial intelligence are not limited to the topics mentioned above. These problems and discussions arise in many areas, such as surveillance and tracking, human-robot interaction, the use of artificial intelligence in law, explainability, and many others. You can learn about all these areas and find examples in the suggestions in the further reading and references section.
DRIVERLESS CAR PROBLEM
Autonomous cars, better known as driverless cars, have recently begun to enter our lives after a long period of development. Many car manufacturers have increased their investments in this technology and started to launch such models on the market. Although a human presence in the driver's seat and at the steering wheel is still required, today's cars can drive themselves, park, and take control, leaving minimal responsibility to the driver. It is expected that this will reduce accidents, optimize driving costs, and help solve the traffic problem, but not everything about this technology is so positive. Who is responsible in the event of a possible accident, and which choices should be made, are perhaps the biggest question marks.
Let's imagine that a driverless car is moving in flowing traffic. Assume the road has three lanes and that a car, a truck, and a motorcycle are traveling in front of it, from left to right, respectively. As a result of some incident, whatever it may be, hitting one of the three vehicles in front is inevitable, but let's assume the vehicle itself gets to decide which one. At this point, a big problem comes to light: which one should it hit, and who should decide? If it hits the motorcycle on its right, the damage to the car and its occupants is minimized, but harm to the motorcyclist, that is, the other party, is probably inevitable. If it hits the truck, the car's own occupants will most likely be harmed while the damage to the other party is minimized. If it hits the car, both sides will suffer roughly the same damage, an average distribution.

Of course, the accuracy of this scenario is itself debatable. A motorcyclist wearing proper protective equipment may not be harmed at all, while a car driver who has not fastened their seat belt may suffer greater damage when hit from behind. Given all these variables and possibilities, who should make the choice, and based on what? Should this be a form of programming regulated by law, or should the companies that build the car decide when they develop their algorithms? Should the damage be minimized for the car's owner, or should the other party be taken into account? Should a situation where the damage is shared be considered fairer? Above all, these calculations are not perfectly accurate, and even "minimal" damage can cost someone their life or cause great harm. In such cases, who should be held responsible? All these questions await careful research and answers. Although it is difficult to make generalizable judgments and set rules here, the necessity of doing so is quite clear.
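To illustrate how strongly the outcome depends on whose harm the system is told to prioritize, here is a purely hypothetical sketch in Python. The harm scores and the weights are invented assumptions for the scenario above, not real crash data or any manufacturer's actual decision logic.

# Rough, made-up harm estimates for each option: (harm to the car's occupants,
# harm to the other party), on an arbitrary 0-10 scale.
OPTIONS = {
    "hit_motorcycle": (1, 9),  # minimal damage to the car, severe to the rider
    "hit_truck":      (9, 1),  # severe damage to the car, minimal to the truck
    "hit_car":        (4, 4),  # roughly shared damage (hypothetical values)
}

def choose(occupant_weight, other_weight):
    # Pick the option with the lowest weighted total harm.
    def cost(option):
        occupant_harm, other_harm = OPTIONS[option]
        return occupant_weight * occupant_harm + other_weight * other_harm
    return min(OPTIONS, key=cost)

# Prioritizing the car's occupants points one way...
print(choose(occupant_weight=1.0, other_weight=0.2))  # hit_motorcycle
# ...while weighting everyone equally points another.
print(choose(occupant_weight=1.0, other_weight=1.0))  # hit_car

Changing a single weight changes the "right" answer, which is precisely why deciding who sets these weights is an ethical and legal question rather than a purely technical one.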