The Three Laws of Robotics in the Age of Big Data
dc.contributor.author | Balkin, Jack
dc.date | 2021-11-25T13:34:49.000
dc.date.accessioned | 2021-11-26T11:47:32Z
dc.date.available | 2021-11-26T11:47:32Z
dc.date.issued | 2017-01-01T00:00:00-08:00
dc.identifier | fss_papers/5159
dc.identifier.contextkey | 12190431
dc.identifier.uri | http://hdl.handle.net/20.500.13051/4697
dc.description.abstract | When I was a boy, I read all of Isaac Asimov's stories about robotics. In Asimov's world, robots were gradually integrated into every aspect of society. They had various degrees of similarity to humans, but as the stories and novels progressed, the most advanced robots were very human in appearance and form. The most famous feature of these robot stories is Asimov's three laws of robotics, which were built into every robot's positronic brain. The three laws are: First Law: "a robot may not injure a human being, or, through inaction, allow a human being to come to harm." Second Law: "a robot must obey the orders given it by human beings except where such orders would conflict with the First Law." Third Law: "a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws." These three laws have been very influential, and even today people imagine what it would be like--or whether it would even be possible--to build them into robots, including, for example, into self-driving cars.
dc.title | The Three Laws of Robotics in the Age of Big Data
dc.source.journaltitle | Faculty Scholarship Series
refterms.dateFOA | 2021-11-26T11:47:32Z
dc.identifier.legacycoverpage | https://digitalcommons.law.yale.edu/fss_papers/5159
dc.identifier.legacyfulltext | https://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?article=6160&context=fss_papers&unstamped=1