Prologue from 'Ethics in AI - Collection of Essays'
Sanjay Basu
The ethics of artificial intelligence (AI) is the study of the ethical issues that arise with AI systems.
Harari asks about the long-term consequences of AI: "When robots, the Internet of things and smart cities have become part of society, politics, and daily life, what will happen?"
The ethics of AI is often focused on "concerns," and throughout this book various authors will discuss the real issues and deflate the non-issues raised by new technologies in their respective domains, such as healthcare, education, and wealth management. In my humble opinion, for most major corporations AI ethics is an image- and public-relations-driven discussion of how to achieve a desired outcome. In this collection we focus on genuine problems of ethics. It is a young field within applied ethics, with significant dynamics and few well-established issues.
The notion of artificial intelligence is understood broadly and can include any kind of automaton that performs tasks based on inferences. Statistical and deep learning algorithms trained on past data enable these inferences or predictions. Policy is rarely a concern of AI enthusiasts, but ethicists understand that a "Policy for AI Ethics" covers not merely the intersection but the union of two sets of systems, one technical and the other social. Because AI systems are likely to have an ever greater impact on humanity, AI ethics policies are necessary.
One such issue with using AI (ML algorithms) concerns the privacy of individuals. Privacy-preserving techniques that largely conceal the identity of persons or groups are now a standard staple in data science. The ethical issues that arise when these techniques are used in surveillance go beyond the mere manipulation of behavior.
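The essay does not single out any one technique, but differential privacy is a common example of such a privacy-preserving approach. The sketch below is a minimal, illustrative Python rendering of the Laplace mechanism; the query, sensitivity, and epsilon values are assumptions chosen only to show the idea, not a production-ready implementation.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Add Laplace noise so that any single individual's contribution is masked."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical query: count of patients with a given condition.
# Counting queries have sensitivity 1 (one person changes the count by at most 1).
true_count = 128
private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
print(f"Noisy count released to analysts: {private_count:.1f}")
```

The smaller the epsilon, the more noise is added and the stronger the privacy guarantee, at the cost of less accurate answers.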
Data systems can be used to manipulate behavior, online and offline, in a way that undermines autonomous rational choice. With sufficient prior data, algorithms can target individuals or small groups with exactly the kind of input that is likely to influence them. The role of Cambridge Analytica and Facebook in the 2016 US presidential election is well documented; an exposé of the scandal is available here: https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
A machine learning system will reproduce the biases present in the data it is given, regardless of the intentions of its designers. There is a proposal to accompany each dataset with a standard description, a "datasheet," which could help make the limitations of machine learning systems easier to understand.
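For concreteness, here is a small, hypothetical sketch of the kind of information such a datasheet might record, loosely inspired by the "datasheets for datasets" proposal; the field names and the dataset described are illustrative assumptions, not an official schema.

```python
# Illustrative fields only; not an official datasheet format.
dataset_datasheet = {
    "name": "loan_applications_2015_2019",          # hypothetical dataset
    "motivation": "Collected to predict loan default risk.",
    "composition": {
        "rows": 250_000,
        "features": ["income", "zip_code", "age", "repayment_history"],
        "known_gaps": "Under-represents applicants under 25.",
    },
    "collection_process": "Exported from an internal CRM, 2015-2019.",
    "preprocessing": "Missing incomes imputed with median values.",
    "recommended_uses": ["credit-risk research"],
    "discouraged_uses": ["individual hiring or housing decisions"],
}

def print_limitations(datasheet):
    """Surface the caveats a modeling team should read before training."""
    print("Known gaps:", datasheet["composition"]["known_gaps"])
    print("Discouraged uses:", ", ".join(datasheet["discouraged_uses"]))

print_limitations(dataset_datasheet)
```

Even a simple record like this forces the team that trains on the data to confront its gaps and intended scope before a biased model reaches production.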
Henry Kissinger pointed out that we may have generated a potentially dominating technology in search of a guiding philosophy. To avoid an impenetrable system of algorithmic suppression, an "algocracy," we need a broader societal move toward more democratic decision-making. Robots and AI systems are problematic if their processes or appearance involve deception, threaten human dignity, or violate the Kantian requirement of "respect for humanity." Hiroshi Ishiguro's remote-controlled robots are problematic in this sense, as are some robot advertisements. Humans often form deep emotional attachments to objects, so companionship with a predictable android may be attractive to people who struggle with actual humans and already prefer dogs, cats, birds, a computer, or a robot. Without ethical guardrails, it is entirely possible that future AI systems will be used to influence humans in this way.
Another key area of discussion is the job market and the impact of AI systems and automatons. AI and robotics automation have produced a "dumbbell"-shaped labor market: highly skilled technical jobs are in demand and well paid, low-skilled service jobs are in demand but badly paid, and the majority of jobs in between are under pressure and shrinking. In general terms, the issue of unemployment is a question of how goods in a society should be justly distributed.
The next big impact of AI systems is in the automotive industry: some countries now take additional rules of politeness, and the interesting question of when to break the rules, into consideration when regulating autonomous vehicles. Interestingly, licensing for automated driving is much more restrictive than licensing for automated weapons, which are far harder to test without the informed consent of consumers or their possible victims. According to critics of AI-enabled weapon systems, automated and connected driving systems and remotely piloted or autonomous land, sea, and air vehicles increase the probability of civilian casualties during targeted military missions because of the asymmetry in who can be held accountable.
Now let’s discuss machine ethics. Machine ethics is ethics for machines, as opposed to ethics for humans. AI ethics is concerned with ensuring that the behavior of machines is ethically acceptable and with guaranteeing transparency. AI reasoning should reflect societal values and moral and ethical considerations, and it should weigh the respective priorities of values held by different stakeholders in various multicultural contexts.
Rules for ethical conduct can easily be modified to serve unethical ends, and the idea that machine ethics might take the form of "laws" was famously explored by Isaac Asimov, who proposed the "three laws of robotics." First Law: a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law: a robot must obey the orders given to it by human beings except where such orders would conflict with the First Law. Third Law: a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The Third Law raises some interesting points about the rights of AI systems or robots.
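As a toy illustration only, the sketch below encodes a set of prioritized "laws" that are checked in strict order, rejecting a candidate action at the first rule it violates. The predicates are placeholder assumptions; deciding whether an action actually harms a human is precisely the hard, unsolved part of machine ethics.

```python
# Toy sketch: prioritized "laws" checked in a fixed order of precedence.
# The predicates below are placeholders, not real perception or judgment.

def violates_first_law(action):
    return action.get("harms_human", False)

def violates_second_law(action):
    return not action.get("obeys_order", True)

def violates_third_law(action):
    return action.get("destroys_self", False)

PRIORITIZED_LAWS = [violates_first_law, violates_second_law, violates_third_law]

def is_permitted(action):
    """Reject an action at the first (highest-priority) law it violates."""
    for law in PRIORITIZED_LAWS:
        if law(action):
            return False
    return True

print(is_permitted({"harms_human": False, "obeys_order": True}))  # True
print(is_permitted({"harms_human": True}))                        # False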
Now let’s discuss some concerns around superintelligence, or Artificial General Intelligence (AGI).
In order to attribute responsibility for robot actions, the European Group on Ethics in Science and New Technologies (2018) proposes that responsibility be distributed across the hierarchy of people and organizations involved. The fear that "the robots we created will take over the world" captured the human imagination even before there were computers; its modern form is the fear of superintelligent AI systems becoming so intelligent that they can develop themselves into ever more intelligent systems.
The fear of an "intelligence explosion" was first formulated by Irving John Good in 1965 and was taken up by Ray Kurzweil (1999, 2005, 2012), who predicted that supercomputers would reach human computational capacity within a decade, that mind uploading would be possible by 2030, and that the "singularity" would occur by 2045. AI systems may develop beyond the human level, which could pose risks for the human species. The discussion is summarized in Eden et al. (2012), Armstrong (2014), and Shanahan (2015).
The argument from superintelligence to risk rests on the assumption that superintelligence does not imply benevolence; on this view, intelligence and benevolence are entirely independent dimensions. The assumption of superintelligence is itself rarely discussed in detail, and the question of whether such a singularity is even possible has not been thoroughly investigated.
Superintelligence is not likely to end human existence on Earth; but if there is an astronomical pattern in which every intelligent species is bound to discover AI at some point, then there may indeed be an existential risk that superintelligence ends human existence.
The singularity thus raises the problem of the very concept of AI again, because within a few decades the vision has swung from "AI is impossible" to "AI will solve all problems."