
This is a penultimate draft of a paper to appear in Robot Ethics: The Ethical and Social Implications of Robotics, edited by Patrick Lin, George Bekey and Keith Abney. Cambridge, Mass.: MIT Press, forthcoming.

Moral Machines and the Threat of Ethical Nihilism

Anthony F. Beavers

The University of Evansville

In his famous 1950 paper where he presents what became the benchmark for success in artificial intelligence, Turing notes that "at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" (Turing 1950, 442). Kurzweil (1990) suggests that Turing's prediction was correct, even if no machine has yet passed the Turing Test. In the wake of the computer revolution, research in artificial intelligence and cognitive science has pushed in the direction of interpreting "thinking" as some sort of computational process. On this understanding, thinking is something computers (in principle) and humans (in practice) can both do.

It is difficult to say precisely when in history the meaning of the term "thinking" headed in this direction. Signs are already present in the mechanistic and mathematical tendencies of the early Modern period, and maybe even glimmers are apparent in the ancient Greeks themselves. But over the long haul, we somehow now consider "thinking" as separate from the categories of "thoughtfulness" (in the general sense of wondering about things), "insight" and "wisdom." Intelligent machines are all around us, and the world is populated with smart cars, smart phones and even smart (robotic) appliances.

But, though my cell phone might be smart, I do not take that to mean that it is thoughtful, insightful or wise. So, what has become of these latter categories? They seem to be bygones left behind by scientific and computational conceptions of thinking and knowledge that no longer have much use for them.

In 2000, Allen, Varner and Zinser addressed the possibility of a Moral Turing Test (MTT) to judge the success of an automated moral agent (AMA), a theme that is repeated in Wallach and Allen (2009). While the authors are careful to note that a language-only test based on moral justifications, or reasons, would be inadequate, they consider a test based on moral behavior. "One way to shift the focus from reasons to actions," they write, "might be to restrict the information available to the human judge in some way. Suppose the human judge in the MTT is provided with descriptions of actual, morally significant actions of a human and an AMA, purged of all references that would identify the agents. If the judge correctly identifies the machine at a level above chance, then the machine has failed the test" (Wallach and Allen 2009, 206). While they are careful to note that indistinguishability between human and automated agents might set the bar for passing the test too low, such a test by its very nature decides the morality of an agent on the basis of appearances. Since there seems to be little else we could use to determine the success of an AMA, we may rightfully ask whether, analogous to the term "thinking" in other contexts, the term "moral" is headed for redescription here. Indeed, Wallach and Allen's survey of the problem space of machine ethics forces the question of whether in fifty years (or less) one will be able to speak of a machine as being moral without expecting to be contradicted. Supposing the answer were yes, why might this invite concern?

What is at stake? How might such a redescription of the term "moral" come about? These are the questions that drive this reflection, and I take the last one first.
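Before turning to these questions, the comparative MTT just described can be made concrete. The following is a minimal sketch of how its pass/fail criterion might be scored; the trial structure, the significance threshold, and every name in it are assumptions layered onto Wallach and Allen's brief description, not part of their proposal.

```python
from math import comb

def mtt_fails(judge_calls: list[bool], alpha: float = 0.05) -> bool:
    """Score a run of comparative-MTT trials (hypothetical protocol).

    judge_calls[i] is True when the judge correctly picked out the
    machine on trial i.  On Wallach and Allen's criterion, the machine
    fails if the judge identifies it at a level above chance -- read
    here as a one-sided binomial test against guessing (p = 0.5).
    """
    n, k = len(judge_calls), sum(judge_calls)
    # P(X >= k) when X ~ Binomial(n, 0.5), i.e., if the judge is guessing
    p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return p_value < alpha

# 30 paired trials, judge correct on 22: detectably machine-like
print(mtt_fails([True] * 22 + [False] * 8))   # -> True (the AMA fails)
```

Note that on this reading the machine is penalized only for being detectably non-human; a judge who does no better than coin-flipping leaves the AMA's status intact, which is exactly why the test decides morality on appearances alone.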

How Might a Redescription of the Term "Moral" Come About?

Before proceeding, it is important to note first that because the meanings of terms are situated within the broader evolution of language, they are constantly in flux. Thus, the following comments must be understood generally. Second, the following is one way a redescription of the term "moral" might come about, even though in places I will note that this is already happening to some extent. Not all machine ethicists can be plotted on this trajectory.

That said, the project of designing moral machines is complicated by the fact that even after more than two millennia of moral inquiry, there is still no consensus on how to determine moral right and wrong. Even though most mainstream moral theories agree from a big-picture perspective on which behaviors are morally permissible and which are not, there is little agreement on why they are so, that is, on what it is precisely about a moral behavior that makes it moral. For simplicity's sake, this question will here be designated as the hard problem of ethics. That it is a difficult problem is seen not only in the fact that it has been debated since philosophy's inception without any satisfactory resolution, but also in the fact that the candidates offered over the centuries as answers are still on the table today. Does moral action flow from a virtuous character operating according to right reason? Is it based on sentiment, or on application of the right rules? Perhaps it is mere conformance to some tried and tested principles embedded in our social codes, or based in self-interest, species-instinct, religiosity, and so forth.

The reason machine ethics cannot move forward in the wake of unsettled questions such as these is that engineering solutions are needed. Fuzzy intuitions on the nature of ethics do not lend themselves to implementation where automated decision procedures and behaviors are concerned. So, progress in this area requires working the details out in advance and testing them empirically. Such a task amounts to coping with the hard problem of ethics, though largely, perhaps, by rearranging the moral landscape so an implementable solution becomes tenable.

Some machine ethicists, thus, see research in this area as a great opportunity for ethics (Anderson and Anderson 2007; Anderson, S. forthcoming; Beavers 2009 and 2010; Wallach 2010). If it should turn out, for instance, that Kantian ethics cannot be implemented in a real working device, then so much the worse for Kantian ethics. It must have been ill-conceived in the first place, as now seems to be the case, and so also for utilitarianism, at least in its traditional form.

Quickly, though some have tried to save Kant's enterprise from death by failure to implement (Powers 2006), the cause looks grim. The application of Kant's categorical imperative in any real-world setting seems to fall dead before a moral version of the frame problem. This problem from research in artificial intelligence concerns our current inability to program an automated agent to determine the scope of reasoning necessary to engage in intelligent, goal-directed action in a rich environment without needing to be told how to manage possible contingencies (Dennett 1984). Respecting Kantian ethics, the problem is apparent in the universal law formulation of the categorical imperative, the one that would seem to hold the easiest prospects for rule-based implementation in a computational system: "Act as if the maxim of your action were to become through your will a universal law of nature" (30). One mainstream interpretation of this principle suggests that whatever rule (or maxim) I should use to determine my own behavior must be one that I can consistently will to be used to determine the behavior of everyone else. (Kant's most consistent example of this imperative in application concerns lying promises. I cannot make a lying promise without simultaneously willing a world in which lying is permissible, thereby also willing a world in which no one would believe a promise, particularly the very one I am trying to make. Thus, the lying promise fails the test and is morally impermissible.) Though at first the categorical imperative looks implementable from an engineering point of view, it suffers from a problem of scope, since any maxim that is defined narrowly enough (for instance, to include a class of one, anyone like me in my situation) must consistently universalize. Death by failure to implement looks imminent; so much the worse for Kant, and so much the better for ethics.
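The scope problem can be seen in miniature in code. The following toy universalizability check is not a serious formalization of Kant; the population model, the credibility threshold, and every name in it are invented for illustration. It shows only how a sufficiently gerrymandered maxim slips past the contradiction test.

```python
from typing import Callable

Agent = dict  # e.g. {"id": 3, "situation": "needs_money"}

def universalize(scope: Callable[[Agent], bool], population: list[Agent]) -> float:
    """Fraction of the population bound by the universalized maxim."""
    return sum(1 for a in population if scope(a)) / len(population)

def lying_promise_consistent(scope: Callable[[Agent], bool],
                             population: list[Agent],
                             credibility_floor: float = 0.1) -> bool:
    # The lying promise is self-defeating exactly when universalizing it
    # destroys the credibility the liar depends on -- modeled crudely here
    # as promises staying believable only while liars are rare.
    return universalize(scope, population) < credibility_floor

population = [{"id": i, "situation": "needs_money" if i == 0 else "other"}
              for i in range(100)]

# Broad maxim -- "anyone may make a lying promise" -- fails, as Kant intends.
print(lying_promise_consistent(lambda a: True, population))          # False

# Gerrymandered maxim -- "anyone exactly like me, in exactly my situation" --
# binds a class of one, so it universalizes consistently and passes.
print(lying_promise_consistent(lambda a: a["id"] == 0, population))  # True
```

Nothing in the formal test itself blocks the narrowing move; deciding which maxims are admissible is precisely the judgment the machine was supposed to supply.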

Classical utilitarianism meets a similar fate, even though unlike Kant, Mill casts internals, such as intentions, to the wind and considers just the consequences of an act for evaluating moral behavior. Here, "actions are right in proportion as they tend to promote happiness; wrong as they tend to produce the reverse of happiness. By happiness is intended pleasure and the absence of pain; by unhappiness, pain and the privation of pleasure" (7). That internals are incidental to utilitarian ethical assessment is evident in the fact that Mill does not require that one act for the right reasons. He explicitly says that most good actions are not done accordingly (18-19). Thus, acting good is indistinguishable from being good, or, at least, to be good is precisely to act good; and sympathetically we might be tempted to agree, asking what else could being good possibly mean.

Things again are complicated by problems of scope, though Mill, unlike Kant, is aware of them. He writes, "Again, defenders of utility often find themselves called upon to reply to such objections as this—that there is not enough time, previous to action, for calculating and weighing the effects of any line of conduct on the general happiness" (23). (In fact, the problem is computationally intractable when we consider the ever-extending ripple effects that any act can have on the happiness of others across both space and time.) Mill gets around the problem with a sleight of hand, noting that "all rational creatures go out upon the sea of life with their minds made up on the common questions of right and wrong" (24), suggesting that calculations are, in fact, unnecessary, if one has the proper forethought and upbringing. Again, the rule is of little help, and death by failure to implement looks imminent. So much the worse for Mill; again, so much the better for ethics.
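A back-of-envelope calculation shows why the intractability worry bites. Suppose, purely for illustration, that each act has some small number of plausible downstream effects and that ripples are traced outward step by step; the branching factor below is invented, but any value above one produces the same explosion.

```python
# If each act has `branching` plausible downstream effects and we trace
# ripples `depth` steps out, the consequence tree has branching**depth
# leaves whose happiness contributions would all need evaluating.

def consequence_leaves(branching: int, depth: int) -> int:
    return branching ** depth

for depth in (5, 10, 20, 30):
    print(depth, consequence_leaves(4, depth))
# 5 -> 1024;  10 -> ~1e6;  20 -> ~1e12;  30 -> ~1e18
# Exhaustive calculation is hopeless long before the ripples actually stop.
```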

Wallach and Allen agree that the prospects for a "top-down, theory driven approach to morality for AMAs" (83), such as we see in both instances above, do not look good, arguing instead that a hybrid approach that includes both "top-down" and "bottom-up" strategies is necessary to arrive at an implementable system (or set of systems). "Bottom-up" here refers to emergent approaches that might allow a machine to learn to exhibit moral behavior and could arise from research in "Alife [or artificial life], genetic algorithms, connectionism, learning algorithms, embodied or subsumptive architecture, evolutionary and epigenetic robotics, associative learning platforms, and even traditional symbolic AI" (112). While they advocate this hybrid approach, they also acknowledge the limitations of the bottom-up approach taken by itself. As one might imagine, any system that learns is going to require us to have a clear idea of moral behavior in order to evaluate goals and the success of our AMAs in achieving them. So, any bottom-up approach also requires solving the ethical hard problem in one way or another, and thus it too dies from failure to implement. We can set the bottom-up approach aside; again, so much the better for ethics.
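The point can be put in programming terms. Any bottom-up trainer, whatever learning machinery fills it in, needs an evaluation signal, and writing that signal just is taking a stand on the hard problem. The sketch below is schematic; every name, situation, and score in it is hypothetical.

```python
from typing import Callable

def train_ama(act: Callable[[str], str],
              update: Callable[[str, str, float], None],
              situations: list[str],
              moral_reward: Callable[[str, str], float]) -> None:
    """Generic bottom-up training loop (all names hypothetical).

    Whatever implements `act` and `update` -- a neural net, a genetic
    algorithm, an ALife population -- the caller must still supply
    `moral_reward`: a function saying how moral each action was.
    The hard problem of ethics lives inside that function.
    """
    for situation in situations:
        action = act(situation)
        update(situation, action, moral_reward(situation, action))

# The machinery runs fine; the ethics is smuggled in via the reward table.
scores = {("found_wallet", "return_it"): 1.0,
          ("found_wallet", "keep_it"): -1.0}   # who decided these numbers?

train_ama(act=lambda s: "return_it",
          update=lambda s, a, r: print(s, a, r),
          situations=["found_wallet"],
          moral_reward=lambda s, a: scores[(s, a)])
```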

If these generalizations are correct, that top-down theoretical approaches may run into some moral variant of the frame problem and that both the top-down and bottom-up approaches require knowing beforehand how to solve the hard problem of ethics, then where does that leave us? Wallach and Allen (and others, see Coleman 2001) find possible solutions in Aristotle and virtue ethics more generally. At first, this move might look surprising. Of the various ways to come at ethics for machines, virtue ethics would seem an unlikely candidate, since it is among the least formalistic. Nonetheless, it has the benefit of gaining something morally essential from both top-down and bottom-up approaches.

The top-down approach, Wallach and Allen argue, is directed externally toward others. Its "restraints reinforce cooperation, through the principle that moral behavior often requires limiting one's freedom of action and behavior for the good of society, in ways that may not be in one's short-term or self-centered interest" (117). Regardless of whether Kant, Mill and other formalists in ethics fall to a moral frame problem, they do nonetheless generally understand morality fundamentally as a necessary restraint on one's desire with the effect of, though not always for the sake of, promoting liberty and the public good.

But rules alone are insufficient without a motivating cause, Wallach and Allen rightly observe, noting further that "values that emerge through the bottom-up development of a system reflect the specific causal determinates of a system's behavior" (117). Bottom-up developmental approaches, in other words, can settle where, when, and how to take action and perhaps set restraints on the scope of theory-based approaches like those mentioned above. Since Wallach and Allen have already suggested that by "hybrid" they mean something more integrated than the mere addition of top to bottom, virtue ethics would seem after all a good candidate for implementation. Additionally, as Gips (1995) noted earlier, learning by habit or custom, a core ingredient of virtue ethics, is well-suited to connectionist networks and, thus, can support part of a hybrid architecture (see the sketch below).
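Gips's observation can be illustrated with a toy habituation loop: repeated practice strengthens a disposition until the response comes automatically, which is the connectionist-friendly core of habit that virtue ethics trades on. The update rule below is a plain saturating increment invented for illustration, not a claim about any published AMA architecture.

```python
# Habit as an acquired disposition: each repetition nudges the weight
# toward saturation, so the practiced response eventually dominates.

weights = {"tell_truth": 0.0, "lie": 0.0}

def habituate(practiced_act: str, repetitions: int, rate: float = 0.1) -> None:
    for _ in range(repetitions):
        weights[practiced_act] += rate * (1.0 - weights[practiced_act])

habituate("tell_truth", 50)             # virtue as cultivated second nature
print(max(weights, key=weights.get))    # -> "tell_truth"
```

A real connectionist system would learn from examples rather than a single rehearsed act, but the shape of the proposal is the same: the disposition is trained in, not deduced from a rule.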
