About This Blog

Automating Invention is Robert Plotkin's blog on the impact of computer-automated inventing on the future of invention and patent law.

November 10, 2009

The Need for Regulating Autonomous Machines

The Royal Academy of Engineering has issued a report calling on legislators and policy-makers to begin thinking about how to regulate autonomous machines that are now in development. The report cites two emerging examples: autonomous transport and smart homes. These systems promise great benefits, but they also raise issues of ethics and management, and at this point there is no legal framework in place to deal with those issues. Read the complete discussion of the report at The Engineer Online.

Posted by Robert at 2:23 AM | Comments (0)
category: Ethics

February 26, 2008

Upcoming talk on automating invention at the MIT Technology and Culture Forum

I will be giving a talk on computer-automated inventing and its philosophical and ethical implications at MIT on Thursday, March 6, from 4:30pm to 6pm in Room E51-315. In the talk I will preview some of the examples of computer-automated inventing that I describe in more detail in my upcoming book, and explain how the inventive processes behind them are already raising new questions about what it means to be an inventor and about the ethical responsibilities of inventors.

The talk is sponsored by the MIT Technology and Culture Forum.

Posted by Robert at 8:00 AM | Comments (0)
category: Ethics | Philosophy of Computing

January 12, 2006

Norbert Wiener on Computer Automation and Work

I've written several entries on how one economic effect of computer automation is to drive people to develop and market skills that have not yet been automated. Just to make clear that this topic is far from new, and that it has ethical as well as economic implications, consider the following statement about the human impact of computer automation from Norbert Wiener's 1950 book The Human Use of Human Beings: Cybernetics and Society:

We were here in the presence of another social potentiality of unheard-of importance for good and for evil.... It [technology] gives the human race a new and most effective collection of mechanical slaves to perform its labor.... However, any labor that accepts the conditions of competition with slave labor accepts the conditions of slave labor, and is essentially slave labor.... However, taking the second [industrial] revolution as accomplished, the average human being of mediocre attainments or less has nothing to sell that is worth anyone's money to buy. The answer, of course, is to have a society based on human values other than buying or selling.

Just some food for thought from a half-century ago.

Posted by Robert at 7:15 AM | Comments (0)
category: Ethics | Work

June 27, 2005

Biological Viruses Spread by Computer Viruses?

Drew Endy of the MIT Biological Engineering Division gave a great keynote address at GECCO 2005 today describing the work he and his team are doing. Instead of attempting to summarize the talk here, I encourage you to check out his web site to find out more about what he is doing.

At the tail end of his talk, Professor Endy discussed some of the risks of biological engineering, including the risk that lone "garage bio-hackers" could engineer harmful biological substances and unleash them on the world. As he acknowledged, this risk is not new, but it may be exacerbated by improvements in biological engineering.

One limitation on the risk posed by biological materials is that the harm they can cause is limited in geographic scope. Airborne anthrax can only travel so far. A computer virus, in contrast, can travel worldwide almost instantly. If (as Professor Endy described earlier in his talk) it becomes increasingly possible to engineer biological systems using abstractions that are decoupled from the underlying biological substrate, one can imagine designing a biological ("real") virus by writing a computer program. Couple this with desktop biological manufacturing and virtual "teleportation" (see my next post), and we could see biological viruses spread by computer viruses. That last sentence is pure science fiction, but it is still interesting to ponder.

Posted by Robert at 9:22 PM | Comments (0)
category: Design & Engineering | Ethics

June 23, 2005

Are programmers responsible for harm in virtual worlds?

The videogame 25 to Life is getting slammed by Senator Charles Schumer (D-NY) for allowing players to take the role of criminal gang members and kill police officers. The game, published by Eidos, offers scenarios in which players can act as either police or criminals.

This reminds me in some ways of the speculation after September 11, 2001 that terrorists had used Microsoft Flight Simulator to practice flying planes into the World Trade Center, and Microsoft's subsequent decision to remove images of the World Trade Center from the software. Both 25 to Life and Flight Simulator allow their players to perform actions in virtual worlds, including actions that would be clearly immoral and unlawful if committed in the real world.

One important difference between the two, however, is their generality. Flight Simulator is a general-purpose flight simulator, allowing the user a wide range of choice about whether to fly peacefully over the Atlantic Ocean or to crash into skyscrapers. 25 to Life provides the user with a much narrower range of choices. Based on the descriptions I've seen, the user who plays the role of criminal has to commit crimes to win the game.

I think this distinction has ethical implications for computer programmers. We don't typically consider the inventor of a device that has a wide variety of beneficial and harmful uses (such as a hammer) to be morally responsible for someone else's use of that device to harm someone. But we at least take seriously the ethical responsibility of the inventors of devices (such as weapons of mass destruction) whose primary purpose is to hurt people, even if the inventors themselves never use those devices in harmful ways. Even inventors of "general purpose" inventions have questioned their own responsibility for the harm caused by their inventions, as in the case of Alfred Nobel, who established the Nobel Peace Prize after seeing his invention -- dynamite -- being used to make war.

Artificial creativity software adds an additional layer of indirection between the inventor/programmer and the resulting (real or virtual) harm. If Eidos had released a general-purpose simulator in which players could live any life of their choice, would (or should) Eidos have received any flak if its customers decided to play as characters who killed police officers? Would Alfred Nobel have had a clearer conscience if he had written artificial creativity software that he -- or perhaps someone else -- then used to produce the design for dynamite?

I am curious to hear feedback about this, in particular about how views of the ethical responsibilities of inventors have varied over time and from culture to culture.

Posted by Robert at 5:27 PM | Comments (0)
category: Ethics