Contents: Ethics is not programming · What is intelligence, anyway? · Laws as code? · Open source is not the same as freedom
What are the social and political consequences of decisions made automatically by computers? How do they affect our everyday lives? And what happens when machines discriminate? Any discussion of digitization sooner or later turns to these questions.
The answers are complicated, and the current debates are accordingly controversial, as they must be: after all, they turn on the question of how we shape the present and future of this society. Many of the participants in these debates, however, hold mistaken ideas and assumptions about the structure and properties of the digital world. Six central errors appear again and again in their arguments. Central, because they are found not just within one specific social group but across almost all the groups involved: policy-makers, technology skeptics and enthusiasts, journalists, scientists; the list could go on and on.
Identifying these key errors and raising awareness of them is more necessary today than ever: the federal government has just approved a support programme for artificial intelligence, and after the introduction of the EU data protection regulation, the first, not always intended, consequences are slowly emerging. At the same time, not a week goes by without someone denouncing one form of algorithm or another, or praising it to the skies. Unfortunately, this often happens without making really clear what exactly an algorithm is and what a software artifact can actually do.
Neither policy-makers nor the general public can put new technologies to work for the welfare of people and communities as long as the arguments continue to stand on quicksand. Addressing the following errors will, hopefully, provide a more solid foundation for the debates of the coming years.
Mistake 1: Ethics can be programmed into computer systems
Software systems are increasingly found in contexts in which ethical or moral behavior is required of them: self-driving cars will have to decide whether to swerve around an adult or a child. In Austria, a software system will in the future automatically evaluate job seekers' chances on the labor market. The discussion frequently turns to the question of how ethics can be programmed into these systems, that is, how the desired behavior can be automated.
This is generally referred to as the operationalization of ethics: the translation of abstract rules and models of good behavior into predictable, deterministic, and objective control systems. Independent parties could then audit these systems, and developers and programmers could ensure that a robot or software system behaves ethically.
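To make concrete what such an "operationalized ethics" would look like in practice, here is a deliberately minimal sketch. It is not any real system; all names and values are hypothetical illustrations of the deterministic rule table the text describes, and it shows where the problem hides: someone has to hard-code the value ranking.

```python
# A deliberately naive sketch of "operationalized ethics": abstract moral
# rules reduced to a fixed, deterministic lookup table.
# Hypothetical illustration only -- not any real system or proposal.

# One hard-coded ranking: whoever writes this table decides, once and
# for all, whose "value" counts more. That choice is baked in and
# invisible to anyone who merely uses the system.
PRIORITY = {"child": 3, "adult": 2, "dog": 1}

def choose_whom_to_spare(option_a: str, option_b: str) -> str:
    """Return whichever option the rule table ranks higher."""
    return option_a if PRIORITY[option_a] >= PRIORITY[option_b] else option_b

print(choose_whom_to_spare("adult", "child"))  # always "child", by fiat
```

The function is perfectly auditable and perfectly deterministic, which is exactly the appeal of operationalization; the argument in the following paragraphs is that this very rigidity is what makes it fail in a heterogeneous world.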
Unfortunately, this is a myth. Current approaches fail and produce either systems that simply do not work in the real world or, much worse, deeply discriminatory results. At the renowned Massachusetts Institute of Technology (MIT), for example, scientists used the platform Moral Machine to crowdsource important ethical decisions, such as how a car should behave on the road. Among other things, this led to the result that the car should rather run over criminals than dogs (however the car is supposed to come by a record of someone's convictions).
The idea that ethics can be captured in simple rules might work in video games. In the real world, however, ethical decision-making involves complex social and psychological processes that can lead to quite different results despite identical ethical rules, depending on the social, political, religious, or cultural background of the person deciding.
To press ethics into machine rules, one would have to strip it of all social ambiguity and all humanity. And that alone would likely be discriminatory, because exactly one set of values, that of the people who develop such systems, would be defined as normal.
It would also simplify the problem in a completely unacceptable way: why should decisions based on a simple model lead to acceptable ethical outcomes in this heterogeneous world? Ethical decisions are far more complex than anything that could be modeled in software systems. That is why the automation of ethics can only fail.