About This Blog
Automating Invention is Robert Plotkin's blog on the impact of computer-automated inventing on the future of invention and patent law.
August 23, 2005
From computer programmers to "renaissance geeks"
News.com comments on a piece in the New York Times arguing that computer programmers increasingly need skills in other fields, such as biology, business management, entertainment, and marketing. Although offshore outsourcing may be partly responsible, computer automation is also driving the demand for new skills. As "lower level" skills become automated, human programmers must increasingly develop skills, and combinations of skills, that computers cannot yet replicate if they are to maintain demand for their labor.
One way to limit the scope of software patents
I/P Updates reports on a recent decision by the U.S. Court of Appeals for the Federal Circuit in a software patent case, Harris Corporation v. Ericsson, Inc. Read the summary at I/P Updates if you are a patent lawyer. If you aren't, and at the risk of overgeneralizing, the gist of the opinion is that if you write a software patent that describes a particular algorithm for performing a function, and you write the claims in your patent in a way that attempts to broadly cover any algorithm that performs the same function, you won't succeed. Instead, your claim will be interpreted to cover only the specific algorithm that you described in the patent.
Those who oppose software patents on the grounds that they provide legal protection that is overly broad should pay attention to this decision. There are several ways to protect against the problems caused by overly broad patents. One that is often cited is to ensure that good sources of prior art exist and that the Patent Office and litigants have easy access to those sources. Another, often overlooked outside of the patent bar, is for courts to develop appropriate rules for limiting the scope of claims in individual patents. I advocated for the further development of such rules in software patent cases in this paper (see Section V.C.2), and for the reasons provided therein I think more decisions like the one in the Harris case would be a good thing.
A few practical applications of genetic algorithms
IlliGAL Blogging has several recent postings on real-world applications of genetic algorithms.
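For readers who haven't encountered the technique, a genetic algorithm evolves a population of candidate solutions through repeated selection, crossover, and mutation. Here is a minimal sketch in Python; the problem it solves (maximizing the number of 1 bits in a string) is a textbook toy chosen for illustration, not one of the applications discussed in the postings:

```python
import random

def evolve(bits=20, pop_size=30, generations=60, seed=0):
    """Toy genetic algorithm: evolve a bit string toward all 1s."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)  # count of 1 bits

    # Start from a random population of bit strings.
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Tournament selection: keep the fitter of two random candidates.
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            # One-point crossover combines traits of both parents.
            cut = rng.randrange(1, bits)
            child = p1[:cut] + p2[cut:]
            # Occasional mutation: flip one random bit.
            if rng.random() < 0.2:
                i = rng.randrange(bits)
                child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # fitness of the best evolved individual
```

Real-world uses swap in a domain-specific encoding and fitness function (an antenna's measured gain, a schedule's cost, and so on), but the evolutionary loop is the same.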
Automation and Innovation
Hillel Levin at PrawfsBlog comments on the decreasing extent to which computer users need to understand how computers work. He asks whether "it follow[s] that the pool of potential programming innovators is likely to dwindle, since there are going to be fewer school-age kids who have a basic understanding" of how computers work.
I think the answer is "no" if we understand a "programming innovator" to be someone who uses a computer to create innovative computer programs. As genetic programming and other techniques for automating the creation of computer programs improve, we will likely see an increase in computer innovations even as the need for old-fashioned programming skills -- such as the ability to hand-code the individual instructions that make up an algorithm -- decreases.
Remember that in the early days of computing, the term "assembler" referred to a person (usually a lowly graduate student) who hand-translated instructions in an assembly language into binary instructions in a computer machine language. Now that software "assemblers" automate the process of assembly, few humans know how to perform this function. Rather than reduce innovation, this increased ignorance of low-level technical details has spurred innovation by enabling computer programmers to focus their efforts on high-level problem solving rather than low-level implementation details.
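To make the analogy concrete, here is a toy Python illustration of what those human "assemblers" once did by hand: mechanically translating mnemonic instructions into numeric machine code. The instruction set and opcodes are invented for illustration and correspond to no real machine:

```python
# Hypothetical 8-bit instruction format: opcode in the high nibble,
# operand (a small constant) in the low nibble.
OPCODES = {"LOAD": 0x1, "ADD": 0x2, "STORE": 0x3, "HALT": 0xF}

def assemble(source):
    """Translate assembly mnemonics into machine-code bytes."""
    program = []
    for line in source.strip().splitlines():
        parts = line.split()
        mnemonic = parts[0]
        operand = int(parts[1]) if len(parts) > 1 else 0
        program.append((OPCODES[mnemonic] << 4) | operand)
    return bytes(program)

code = assemble("""
LOAD 3
ADD 5
STORE 7
HALT
""")
print(code.hex())  # prints "132537f0"
```

The translation is pure rote lookup and bit-packing, which is exactly why it was among the first programming tasks to be automated.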
August 11, 2005
Never send a computer to do a human's job, especially if the human works for free
Although this blog is about computer automation, humans still outshine computers in the ability to make aesthetic judgments. Despite advances in automated image processing, for example, computers still have a long way to go in recognizing the contents of a photograph or judging whether a new clothing design would be visually appealing to customers.
Interactive evolutionary computation attempts to provide the best of both worlds by combining the ability of computers to generate new designs with the ability of humans to evaluate their aesthetics. Techdirt writes about a researcher at Carnegie Mellon University who has created two online games (here and here) that are fun to play in their own right, and which use the input of the games' users to improve the ability of computers to search for and recognize digital images.
The human players of these games provide descriptive labels for images they are shown and point out key portions of those images, two tasks that computers perform poorly. The human input can then be used by computers to better search for and recognize subsequent images. For example, if many human players of the first game have labeled images of elephants with the word "elephant," then when someone later searches for "elephant" images, computer software can easily pull up the right pictures just by searching through the human-provided labels, rather than by attempting to recognize the images themselves.
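The mechanism can be sketched in a few lines: once humans have attached labels to images, "image search" reduces to ordinary text lookup in an inverted index. The file names and labels below are, of course, invented for illustration:

```python
from collections import defaultdict

# Labels contributed by human players (hypothetical data).
player_labels = [
    ("photo1.jpg", "elephant"),
    ("photo2.jpg", "sunset"),
    ("photo1.jpg", "trunk"),
    ("photo3.jpg", "elephant"),
]

# Build an inverted index mapping each label to the images that carry it.
index = defaultdict(set)
for image, label in player_labels:
    index[label].add(image)

def search(query):
    """Find images by human-supplied labels, not by analyzing pixels."""
    return sorted(index.get(query, set()))

print(search("elephant"))  # ['photo1.jpg', 'photo3.jpg']
```

No computer vision is involved at query time; the hard perceptual work was done, once, by the human labelers.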
Although I don't believe that either of these games uses interactive evolutionary computation, the philosophy behind both is the same: to form a partnership between computers and humans, using each for what it does best. And when the humans provide input for free, deciding whether to incorporate their superior aesthetic judgments into computer software is a no-brainer.
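For the curious, the difference between an ordinary genetic algorithm and interactive evolutionary computation is simply where the fitness scores come from. Here is a minimal, hypothetical sketch in which a callback stands in for the human judge; a real system would display each candidate design and collect a rating from the user:

```python
import random

def interactive_evolve(rate, length=8, pop_size=6, generations=5, seed=1):
    """Evolve candidate designs, asking `rate` (a human, in a real
    system) to score each one instead of computing fitness in code."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=rate, reverse=True)
        parents = scored[: pop_size // 2]  # keep the best-rated designs
        pop = []
        for _ in range(pop_size):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.3:  # small chance of mutation
                i = rng.randrange(length)
                child[i] ^= 1
            pop.append(child)
    return max(pop, key=rate)

# Stand-in for human judgment, used here only so the sketch runs.
simulated_judge = lambda design: sum(design)
best = interactive_evolve(simulated_judge)
```

The small population and short run are deliberate: because every evaluation costs a human a moment of attention, interactive systems must evolve good designs from far fewer judgments than a fully automated algorithm would use.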
August 10, 2005
Digitizing know-how
IPcentral ponders the difficult question of who should own the technical know-how that is inside the heads of workers at high-tech companies. The posting was motivated by a recent court ruling that temporarily bars a former Microsoft employee from performing search-related work for his new employer, Google, because doing so would violate his non-compete agreement with Microsoft.
Trade secret law and non-compete agreements have long been used to control the movement of know-how and other information stored in the heads of human scientists, engineers, and programmers. But what happens when we "bottle" such know-how, or its equivalent, in the form of software that can design machines and write software? You might think that a company that develops an improved genetic algorithm that assists it in designing new machines should maintain that algorithm as a closely-guarded trade secret. After all, isn't the algorithm the functional equivalent of an engineer's know-how within the framework of the company's business model?
But I don't think the answer is entirely obvious. Perhaps the company should seek a patent on the algorithm, thereby obtaining a period of time in which it can block competitors from using the same algorithm even if they develop it independently. Or maybe it should use some combination of intellectual property protection and licensing mechanisms to secure the maximum value for the company.
The point is that transferring know-how from a human mind to software raises some tricky legal and business considerations that will need to be addressed as the automation of invention continues.
August 9, 2005
UK judge has problem defining "technical problem"
Axel Horns reports on a recent decision by Judge Prescott of the Royal Courts of Justice in London effectively rejecting two patent applications (here and here) on what I think can fairly be characterized as computer-implemented business methods.
Judge Prescott expounded at length on how difficult it is to define the terms "technology" and "technical" -- definitions that are essential if "technical contribution" is to serve as a legal standard for distinguishing patentable from non-patentable subject matter. Judge Prescott rightly observes that "technology" is "a horribly imprecise concept" and that "trying to define the words 'technical' or 'technology' is a dead-end."
Although I agree with his observations, they raise important questions:
- Why are the terms "technology" and "technical" so hard to define?
- Why does the problem of defining these terms arise only in certain kinds of cases -- such as those involving software and business methods -- but not others?
Patent law seems to do a pretty good job of separating patentable from non-patentable subject matter in many areas despite the vagueness of these terms. More generally, the law deals with vague terms such as "reasonable" every day without breaking down.
As you might have guessed, I have an explanation: computers are making it possible to automate processes that previously could be performed only by human beings engaged in the "liberal arts" (as opposed to the "useful" or "technical" arts). In the past, machines automated processes in the "useful arts," such as manufacturing, construction, and transportation. Only human beings, exercising their creativity and judgment, could perform services in the "liberal arts." Lawyers practiced law, doctors practiced medicine, businesspeople created and executed business plans, and authors wrote prose and poetry.
Intellectual property law reflected and respected this distinction. Copyright law protected works in the "liberal arts," while patent law protected creative works in the "useful" (technical) arts.
But now computers are automating processes in every field of human endeavor, including those falling within the "liberal arts": medicine, law, business, and the visual arts, to name a few. Although machines have always been used in these fields, software is the first technology to enable end-to-end automation, as software that can recognize and translate speech demonstrates.
It is the widespread end-to-end automation of business methods and other methods previously classified within the "liberal arts" that is forcing the law to confront the ambiguity of terms such as "technology," when previously such ambiguity could be overlooked safely enough, except in extremely rare instances.
Although I don't have an answer to the ultimate question of how far patent law should extend its reach, I do think the story I've just told helps to explain both why we are seeing so much controversy over the definition of "technology" and how computers are blurring the line between the technical and the non-technical.