About This Blog
Automating Invention is Robert Plotkin's blog on the impact of computer-automated inventing on the future of invention and patent law.
June 29, 2005
Human-competitive software meets resistance from human competitors
There were quite a few presenters from private industry at GECCO this week. I had the chance to attend presentations by Erik Goodman (wearing his Red Cedar Technology hat), Thomas Baeck of NuTech Solutions, and Douglas J. Newell of Genalytics. Unfortunately, I could not attend the others but have been reading as many papers as I can.
One thing that struck me about some of these companies is that although they are able to use evolutionary computing to design products and solve other problems faster, cheaper, and better than ever before, they often face resistance from prospective clients. More specifically, they face resistance from the engineers, statisticians, and others who essentially fear losing their jobs to software.
In one sense, this is nothing new. New technology has always made it possible to displace human labor. What is worth noting is that technological automation is moving higher up the skill ladder, increasingly making it possible to automate functions -- such as those performed by engineers and computer programmers -- that were previously thought to require not only skill but also creativity and hence to be "safe" from the encroachment of automation. I believe this is one of the theses of A Whole New Mind by Daniel Pink, which I plan to read soon and comment on here.
June 28, 2005
Patent Law and Layers of Abstraction
In his talk yesterday (see my previous posting), Drew Endy talked about how some of his work is shifting towards designing biological systems at higher layers of abstraction. The idea of "layers" (or "levels") of abstraction is deeply ingrained in the way engineers, and computer scientists in particular, think about systems. I've argued elsewhere that some of the problems with software patents could be fixed if patent law recognized the distinction between different layers of abstraction and focused its attention on the layer(s) where innovation is actually occurring in a particular field.
If you're not familiar with the concept of "layers of abstraction," imagine that you're designing a toaster. You might design the toaster as follows:
- Start with the very high-level (i.e., very abstract) goal of designing a machine to toast bread.
- Then you might identify the functions that the toaster needs to be able to perform, such as holding bread, heating bread, detecting when the bread has been toasted to the desired darkness, and ejecting the bread. Identifying these functions is called "functional design."
- Then you might design the physical components for performing the functions described above, such as heating elements for heating the bread and springs for ejecting the bread. Designing these components is called "physical design" and is often what we think of as "inventing."
Once you've designed and built your toaster, there is of course only a single physical toaster. But you can still think about and describe the toaster in terms of its "physical layer" (its physical components) and its "functional layer" (the functions it performs). Machines and other systems are often designed and described in terms of many such layers in addition to the physical and functional.
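For readers who think in code, the functional/physical split above maps neatly onto the familiar separation of interface and implementation. The sketch below is my own illustration (the class and method names are invented for this example, not drawn from any real system): the abstract class captures the functional layer, and the concrete class commits to one physical design.

```python
from abc import ABC, abstractmethod

# Functional layer: WHAT the toaster must do, with no commitment
# to any particular physical implementation.
class ToasterFunctions(ABC):
    @abstractmethod
    def hold(self, bread: str) -> str: ...

    @abstractmethod
    def heat(self, bread: str) -> str: ...

    @abstractmethod
    def is_done(self, bread: str) -> bool: ...

    @abstractmethod
    def eject(self, bread: str) -> str: ...

# Physical layer: HOW one particular design performs those functions
# (heating elements, springs, a darkness sensor, etc.). A different
# physical design could implement the same functional layer.
class SpringToaster(ToasterFunctions):
    def hold(self, bread: str) -> str:
        return f"slot grips {bread}"

    def heat(self, bread: str) -> str:
        return f"nichrome elements heat {bread}"

    def is_done(self, bread: str) -> bool:
        return True  # stand-in for reading a darkness sensor

    def eject(self, bread: str) -> str:
        return f"spring ejects {bread}"
```

An innovation at the functional layer would change `ToasterFunctions`; an innovation at the physical layer would change only `SpringToaster` while leaving the functional description intact.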
It is possible to innovate in any of these layers. Mechanical engineers traditionally have innovated in the physical layer by designing new physical components or new combinations of existing physical components. Computer programmers traditionally have innovated in the functional layer by writing programs consisting of new functions or new combinations of existing functions.
But physical innovation holds a special place in patent law, as the result of assumptions that are no longer valid. One example of patent law's attachment to the physical layer is section 112, paragraph 6 of the U.S. patent statute, which says that patent claims written using functional language are to be limited in scope to the specific physical structure of the invention as described elsewhere in the patent. This section hard-wires the physical layer as the upper limit on the layer of abstraction at which an invention may be claimed.
This legal distinction between the physical and functional layers made sense when the focus of most innovation was the physical layer, and when there were no computers or other means available to automatically produce physical implementations of innovations described at higher levels of abstraction. But these assumptions are no longer valid, and patent law needs to become more flexible in response by focusing protection on the layer(s) of abstraction at which innovation is actually occurring in the real world. In future postings on this site I will explore ways in which patent law could be reformed to achieve this goal. I will also consider potential problems with such reforms.
June 27, 2005
First File Sharing, Then People Sharing
"Teleport" is in quotes because the technology, even if it existed, wouldn't really allow you to teleport yourself, but instead would create the illusion of doing so. To "teleport" yourself to your friend's house, you would need a camera connected to the Internet, and your friend would need a special machine with a stash of special synthetic atoms. The camera would take a picture of you and transmit the picture to your friend's machine, which would assemble the synthetic atoms into your shape. Capturing and transmitting images over time would produce a moving replica of you at your friend's house.
It seems that the same technology could be used to perform remote manufacturing. The manufacturer of a machine could transmit a 3D design for a machine anywhere in the world, and have the machine manufactured on location, thereby saving the cost of transportation. Unlike in the case of you and your friend, there wouldn't even need to be a physical "original" from which to make copies, just a CAD file generated using software. And any number of copies could be made, assuming a sufficient supply of raw materials.
This would bring the basic feature that has been causing so many problems for the "copyright industry" -- namely worldwide, instant, perfect, and (essentially) costless copying of audio and video -- to the manufacture of physical machines. Although automated manufacturing based on CAD-generated designs is old news in certain fields, it is unclear what its implications would be if it were to become ubiquitous.
Biological Viruses Spread by Computer Viruses?
Drew Endy of the MIT Biological Engineering Division gave a great keynote address at GECCO 2005 today describing the work he and his team are doing. Instead of attempting to summarize the talk here, I encourage you to check out his web site to find out more about what he is doing.
In the tail end of his talk, Professor Endy discussed some of the risks of biological engineering, including the risk of lone "garage bio-hackers" engineering harmful biological substances and unleashing them on the world. As he acknowledged, this risk is not new but may be exacerbated by improvements in biological engineering.
One limitation on the risk posed by biological materials is that the harm they can cause is limited in geographic scope. Airborne anthrax can only travel so far. A computer virus, in contrast, can travel worldwide almost instantly. If (as Professor Endy described earlier in his talk) it becomes increasingly possible to engineer biological systems using abstractions that are decoupled from the underlying biological substrate, one can imagine designing a biological ("real") virus by writing a computer program. Couple this with desktop biological manufacturing and virtual "teleportation" (see my next post) and we could see the spread of biological viruses using computer viruses. The previous sentence is complete science fiction, but still interesting to ponder.
June 23, 2005
Are programmers responsible for harm in virtual worlds?
The videogame 25 to Life is getting slammed by Senator Charles Schumer (D-NY) for allowing players to take the role of criminal gang members and kill police officers. The game, published by Eidos, lets players play scenarios as either police or criminals.
This reminds me in some ways of the speculation after September 11, 2001 that terrorists had used Microsoft Flight Simulator to practice flying planes into the World Trade Center, and Microsoft's subsequent decision to remove images of the World Trade Center from the software. Both 25 to Life and Flight Simulator allow their players to perform actions in virtual worlds, including actions that would be clearly immoral and unlawful if committed in the real world.
One important difference between the two, however, is their generality. Flight Simulator is a general-purpose flight simulator, allowing the user a wide range of choice about whether to fly peacefully over the Atlantic Ocean or to crash into skyscrapers. 25 to Life provides the user with a much narrower range of choices. Based on the descriptions I've seen, the user who plays the role of criminal has to commit crimes to win the game.
I think this distinction has ethical implications for computer programmers. We don't typically consider the inventor of a device that has a wide variety of beneficial and harmful uses (such as a hammer) to be morally responsible for someone else's use of that device to harm someone. But we at least take seriously the ethical responsibility of the inventors of devices (such as weapons of mass destruction) whose primary purpose is to hurt people, even if the inventors themselves never use those devices in harmful ways. Even inventors of "general purpose" inventions have questioned their own responsibility for the harm caused by their inventions, as in the case of Alfred Nobel, who established the Nobel Peace Prize after seeing his invention -- dynamite -- being used to make war.
Artificial creativity software adds an additional layer of indirection between the inventor/programmer and the resulting (real or virtual) harm. If Eidos had released a general-purpose simulator in which players could live any life of their choice, would (or should) Eidos have received any flak if its customers decided to play as characters who killed police officers? Would Alfred Nobel have had a clearer conscience if he had written artificial creativity software that he -- or perhaps someone else -- then used to produce the design for dynamite?
I am curious to hear feedback about this, in particular how views of the ethical responsibilities of inventors have varied over time and from culture to culture.
Who "writes" a reality TV show?
WBUR reported this morning (the same story is being covered by Reuters and others) that the Writers Guild of America (WGA) has launched a campaign to gain a labor contract for writers of reality TV shows. Reality TV producers are objecting to such a contract, in part on the basis that the people seeking a contract aren't "writers."
What's the connection between this and automated inventing? Consider the following (from the Reuters story):
Instead of writing dialogue, reality TV writers say they help craft the overall sense of story. According to the union, this includes casting, creating scenarios, conducting field interviews and guiding the postproduction process so hundreds of hours of video end up with a meaningful beginning, middle and end.
For that reason, video editors feel they are equally deserving of WGA coverage.
"These stories come together in post (-production) -- stories are pulled out by us, in collaboration of course with storytellers -- but we're in there creating stories so it's a logical conclusion to be part of the Writers Guild," said editor Donna Egan, who also is helping organize this campaign. "A lot of it is just about having basic benefits -- health and pension. We have to change the system because the system isn't going to change voluntarily."
Is someone who works on a reality TV show a "writer" because he or she creates the environment in which a reality TV show plays out? This is similar to the question whether someone who writes automatic script-writing software is the "author" of the resulting scripts, or whether someone who writes automatic machine-designing software is the "inventor" of the resulting machines.
Whatever the answer to these questions, now at least I can justify watching "Fear Factor" as a way of conducting research into automated inventing.
June 21, 2005
European Parliament Renames the "Software Patent Directive"
ZDNet UK has an article reporting that the European Parliament has proposed changing the name of the so-called "software patent directive" to use the term "computer-aided invention" in place of the old term, "computer-implemented invention." For those of you who can't see what difference this could possibly make, the stated reason for the proposed change is to make it clear that software is not patentable per se, but rather only as part of an innovation that uses software "to aid the performance of the invention."
These and other attempts over the years to find just the right term to define patents on software are, in my view, doomed to fail. One motivation for my "software patent puzzle" (see parts 1 and 2) is to demonstrate that there is no principled way to use the hardware/software distinction as a basis for distinguishing patentable inventions from unpatentable ones.
Instead, I propose that we focus our attention on the patentability of "computer-generated inventions." Examples of computer-generated inventions include software (a computer generates software when you program the computer) and any device whose design is generated by a genetic algorithm or other artificial creativity software.
It is the "computer-generated" feature of software that keeps causing problems for patent law, and that will continue to cause problems for patent law as computers automate the invention of things other than software. I can't justify that claim in a single blog posting, but I tried to make the basic argument here, and will continue to extend the argument in future postings.
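To make "computer-generated" concrete: the genetic algorithms mentioned above evolve candidate designs rather than having a human write them. The following is a deliberately toy sketch of my own (not taken from any system discussed here), evolving a bit string toward a target "design specification" by keeping the fitter half of each generation and mutating copies of it:

```python
import random

random.seed(0)  # make the run reproducible

TARGET = [1] * 20  # stand-in for a design specification

def fitness(genome):
    # How closely a candidate "design" matches the specification.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=30, generations=100):
    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the fitter half (elitism)
        # Next generation: the parents plus mutated copies of them.
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=fitness)

best = evolve()
```

No human "writes" the final bit string; the program generates it. Real artificial-creativity systems are vastly more sophisticated, but the legal question is the same: who, if anyone, invented `best`?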
Richard Stallman on Software Patents
Richard Stallman of the Free Software Foundation has written (another) article criticizing software patents. Stallman makes several arguments against software patents: (1) patents cover "ideas," and therefore are broader than copyrights and more easily used to stifle innovation than copyrights; (2) software patents are frequently granted on software that is at best a trivial improvement over existing software; and (3) the ability to obtain a software patent without writing any actual software makes it easy for patent trolls (he calls them "patent parasite companies") to extort money from true innovators. Stallman's solution: ban software patents.
Stallman's arguments strike me primarily as arguments against patents in general, although they may have some particular empirical force in the software context.
I wonder what Stallman would think about a patent on a mechanical device that was designed by software? Or a patent on software whose only function is to design hardware? Or a patent on an electrical machine whose only function is to write software? Would such patents be "software patents" and should they be allowed?
I agree with Stallman that software raises some difficult problems for patent law, but for different reasons. And attempting to address these problems by banning "software patents" outright will only cause as many problems as it solves.
June 17, 2005
Inventors Work Hard to be Lazy
"I don't think necessity is the mother of invention - invention, in my opinion, arises directly from idleness, possibly also from laziness. To save oneself trouble." -- Agatha Christie
The purpose of many inventions is to make life easier for the inventions' users. New toasters, cars, and lawn mowers make it easier for the people who use them to make toast, travel, and mow the lawn.
But the purpose of many other inventions is to make it easier for inventors to invent. An engineer might write a computer program to simulate new designs for automobile frames, thereby saving the time, money, and effort needed to build and test physical prototypes of the frames. Similarly, engineers have long invented measurement tools, ranging from calipers to electronic calculators, to make it easier to build and test their inventions more accurately.
Inventors invent such new devices to make their lives as inventors easier -- "[t]o save oneself trouble," in Christie's words. The use of such "invention-facilitating inventions" is usually transparent to the end user, who has no way to know whether his new toaster was designed by pure human ingenuity or by an automated computer program.
Invention-facilitating inventions have long been patentable. But such inventions arguably "promote the progress of useful arts" (the ultimate purpose of U.S. patent law) only indirectly, by reducing the resources (e.g., time, money, raw materials) required to invent. If this is a sufficient basis for patentability, then why not allow improvements in pure mathematics, or at least improvements in "pure software" that performs calculations more efficiently? Surely such improvements may be applied to facilitate the process of inventing.
Although I won't attempt to provide any answers to these questions here, I think that the debate over software patents is in part a debate over how direct the connection needs to be between the function performed by a computer program and some real-world ("practical" or "industrial") use for patent protection to be justified. Future improvements in automated inventing will only make resolution of this question more pressing.
June 16, 2005
Tutorials on AutomatingInvention.com
From time to time I will post tutorials on this site about relevant topics. Unlike a site about a specific field -- such as contract law or programming in C++ -- this site is interdisciplinary, cutting across the fields of law, computer science, philosophy, and economics, to name a few. Different readers come to this site from different backgrounds, speaking different languages.
Although I attempt to make this site understandable to as broad an audience as possible, sometimes providing links and a few sentences of background information just isn't enough. Therefore, I will post separate tutorials on important recurring topics. Each tutorial will have a primary category of Tutorials and a secondary category matching the topic of the tutorial. As a result, you will always be able to view all existing tutorials by viewing the Tutorials category.
If there is any topic for which you would like a tutorial to be posted, please let me know.
Blogging from GECCO and NASA/DOD EH Conferences
I will be attending and blogging from the Genetic and Evolutionary Computation Conference (GECCO 2005) and the NASA/DOD Conference on Evolvable Hardware from June 27-July 1. (Unfortunately I will miss the first two days of GECCO.) These conferences will provide tutorials and updates on the latest developments not only in computer-automated design, but more generally in evolution-inspired computing.
Instead of attempting to duplicate or compete with Illigal Blogging's live coverage of the GECCO conference, I plan to post my thoughts on the legal and broader social implications of the topics covered in the conference sessions. I will also interview presenters and attendees for a book I am writing on computer-automated invention and the law. (I may post excerpts/summaries of those interviews here, with the interviewees' permission.) If you would like to speak with me, just track me down at the conference (you can see what I look like here).
A software patent puzzle (part 2)
Glen Secor posted a thoughtful comment in response to part 1 of this entry. Now, as a result of my posting and Glen's additions, we have four x-ray clarification boxes that have the exact same externally-observable behavior, but that differ in the following ways:
(1) the first is powered by human-designed circuitry;
(2) the second is powered by a laptop running software written by a human programmer;
(3) the third is powered by the same software as the second, except that this software was "written" by other software (e.g., a genetic algorithm) rather than a human; and
(4) the fourth is powered by x-ray clarification gnomes.
Again, I'll ask: is there any basis for granting patent protection to some of these boxes but not all of them? Patent law protects (1) products and processes that are (2) new and (3) useful.
(1) All of the boxes are products (with the possible exception of the gnomes, whom I'll ignore for now).
(2) All of the boxes are new (based on the way I've set up the hypothetical).
(3) Are the boxes useful? (Do they satisfy patent law's "utility" requirement?) If clarifying x-rays is "useful," then all of the boxes are useful. If clarifying x-rays is not "useful," then none of the boxes is useful.
This hypothetical appears to demonstrate that whether a device is a new and useful machine for purposes of patent law can't depend on either: (1) what's inside the device (e.g., hardware or software); or (2) how the device was designed (e.g., by a human or by software). Therefore, if there's a reason for objecting to software patents and not to hardware patents, or for objecting to patents on computer-designed machines and not to patents on human-designed machines, it must lie elsewhere.
In other words, what is the controversy about software patents really about?
June 13, 2005
What's in a name? Computer "science" vs. "engineering"
Illigal Blogging has a nice post criticizing the initial choice to use the word "science" (as in "computer science") to describe what computer people do. (I'll use the term "computer people" in this post to avoid any bias toward "science" or "engineering.") I agree with the author's assertion that the use of "science" instead of "engineering" is unfortunate because it fails to capture the significant ways in which computer scientists/engineers use computers to develop new solutions to real-world problems.
There has been similar debate about the use of the term "machinery" in the name of the computer profession's preeminent association: The Association for Computing Machinery. R.W. Hamming, in his acceptance speech for the 1968 Turing Award (ACM Digital Library subscription required), said:
At the heart of computer science lies a technological device, the computing machine. Without the machine almost all of what we do would become idle speculation, hardly different from that of the notorious Scholastics of the Middle Ages. The founders of the ACM clearly recognized that most of what we did, or were going to do, rested on this technological device, and they deliberately included the word “machinery” in the title [of the ACM]. There are those who would like to eliminate the word, in a sense to symbolically free the field from reality, but so far these efforts have failed. I do not regret the initial choice. I still believe that it is important for us to recognize that the computer, the information processing machine, is the foundation of our field.
The focus on the "scientific" and theoretical aspects of what computer people do has affected how the law has viewed computer software. For example, an argument that continues to be raised against the patenting of software to this day (and particularly strongly in Europe at the moment) is that software is "abstract" or "intangible" and therefore lacking in the "practical" or "technical" nature required for patent protection. I think that this dispute about whether software is "abstract" (and hence not patentable) or "technical" (and hence susceptible of patent protection) has some of the same roots as the debate about whether computer people are doing science, mathematics, engineering, some combination of them, or something completely different.
Although much of what computer people do is reasonably classified as "science," many are using computers to design new solutions to practical problems -- in other words, to engage in engineering. The debate over legal protection for software needs to incorporate a more nuanced understanding of what computer people do if there is to be any hope of a rational resolution to that debate.
A software patent puzzle (part 1)
The controversy over software patents just won't die, with the row over the proposed European Directive on the Patentability of Computer-Implemented Inventions being the latest bit of evidence.
I think this debate is fundamentally misguided, and that the debate may be more fruitfully framed as one about the patentability of computer-generated inventions, not just software. To shed light on this alternative perspective, consider the following puzzle:
You are shown two black boxes that are identical in external appearance and behavior. Each box has an input slot into which an original X-ray print may be fed. After a short delay, each box produces a highly clarified X-ray print in which any tumors are highlighted. The clarified X-rays produced by both boxes are indistinguishable from each other. Assume that the quality of X-ray clarification produced by both boxes is better than that which may be obtained using any preexisting X-ray processing device.
Upon opening both boxes and peering inside, you find in the first box a complex jumble of circuitry and are informed that such circuitry was custom-designed by an expert electrical engineer. In the other box you find a small laptop computer running X-ray image processing software written by a computer programmer. The circuitry in the first box and the software in the second box implement precisely the same X-ray clarification algorithm.
Question: Is there any basis for deeming the circuit-implemented X-ray clarification device to be patentable subject matter, but not the software-implemented clarification device? Post your answers below.
(This hypothetical first appeared in an article that I wrote, but which is not yet available online. All the better -- now you can't cheat!)
Welcome to AutomatingInvention.com
Welcome to the Automating Invention blog, where I will explore the social implications of computer-automated invention. I've posted a more detailed description of this site here. If you have any questions or comments about the site, either post them in response to individual entries or send me an email.