About This Blog

Automating Invention is Robert Plotkin's blog on the impact of computer-automated inventing on the future of invention and patent law.


July 28, 2005

Are college students abandoning computer science too quickly?

FutureWire reports that enrollment in U.S. computer science undergraduate and graduate programs dropped by 23% in 2004, at least in part due to concern about outsourcing and downsizing.

I can't fault any college student for seriously rethinking the value of a computer science degree in light of current, and likely future, economic conditions. But "computer science" isn't monolithic. Evolutionary computation, and the skills required to excel at it, differ in many ways from "traditional" computer science. And there are good signs that evolutionary computation is finding a foothold in private industry, as indicated by recent postings here. Perhaps those new college enrollees should seek to reposition themselves within computer science, rather than jump ship completely.

But minoring in business as a hedge probably wouldn't hurt either.

Posted by Robert at 11:38 AM | Comments (2)
category: Work

Genetic algorithms optimize complex pipe design

Australian firm Optimatics reports that it has used genetic algorithms to help more than 80 major clients in Australia, the U.S., Canada, and Britain optimize the design of pipes for providing water through cities, towns, and new urban developments. Optimatics claims that its optimization techniques can produce solutions up to 20 percent less expensively than traditional engineering.

Posted by Robert at 11:17 AM | Comments (0)
category: Evolutionary Computation | Technology Industry

Evolutionary computation improves automobile design

NuTech Solutions has issued a press release describing how its ClearVu Engineering technology has been used to solve multidisciplinary optimization problems in car safety applications. Thomas Baeck gave an impressive presentation at GECCO this year describing NuTech's work with a German auto manufacturer to increase the speed and decrease the cost of design without compromising crash safety.

Posted by Robert at 9:28 AM | Comments (0)
category: Evolutionary Computation | Technology Industry

July 25, 2005

Hold onto your seats, baseball viewers, or you might infringe a patent

Techdirt reports that a Microsoft patent application is directed to a method for identifying when a segment of a baseball game on television is exciting. Although most of the claims are for software, claim 66 is for a method that doesn't require the use of a computer. One reasonable interpretation of such a claim is that it could be infringed by the mind of a person watching TV at home.

Some qualifiers: the claims include limitations that might make them unlikely to be infringed by a human brain. Also, remember that this is a patent application, not an issued patent. The claims of the application might never be granted, or they might be modified before they are granted.

Patent claims like these raise interesting questions about whether methods that are performed without the use of machines should be patentable. The opinion in State Street Bank v. Signature Financial Group, the case in the U.S. that announced that business methods may be patented, did not make clear whether a business method must be implemented using a machine to be patentable. This has led many to file patent applications for so-called "non-machine implemented business methods," my favorite of which is a "Method of Shared Erotic Experience and Facilities for Same." And, unlike the Microsoft application, this one actually issued as a patent.

Does this mean that "safe sex" now requires an opinion from a patent lawyer?

Posted by Robert at 7:49 AM | Comments (0)
category: Software Patents

July 22, 2005

Overcoming the software bottleneck

Hardware has always improved more quickly than our ability to write software for it. Although you might think that this "software crisis," as evidenced by increasingly buggy, bloated, and insecure software, is a recent phenomenon, the term "software crisis" was used at least as early as 1972. I remember reading about it in Fred Brooks' classic The Mythical Man Month, first published in 1975.

IPcentral describes one of the current incarnations of this crisis: improvements in hardware for parallel processing and the difficulty of writing software to run on that hardware.

Is this a job for genetic programming or some other automated programming technique? I don't know the answer, but when a technological "crisis" has lasted for 30+ years with no signs of abating, anything is worth a try.

Posted by Robert at 9:00 AM | Comments (0)
category: Design & Engineering | Evolutionary Computation

More on teleportation

ACD points to an article on Space.com describing the possibility of Star Trek-style teleportation in the future. See my previous postings here and here.

Although such "teleportation" (I put it in quotes because it seems more like "remote cloning" to me) would not actually automate "invention," it would help to automate manufacturing. Combine automated inventing with long-distance automated manufacturing and you start to get much closer to being able to automatically transform ideas into physical reality.

Posted by Robert at 8:37 AM | Comments (0)
category: Design & Engineering

Artificial inventions in simulated worlds

ACD links to an article in New Scientist describing an upcoming simulation of human behavior in an artificial world. One of the goals is to see whether culture will emerge in the simulation.

And what if the simulated humans discover fire, invent the wheel, or invent something completely new? Who would own the patent rights?

I suppose the next step would be for the simulated humans to write a (simulated) simulation of (simulated) humans, and so on, and so on . . .

Posted by Robert at 8:28 AM | Comments (0)
category: Artificial Invention | Intellectual Property Law

July 20, 2005

Isaac Asimov on Science, Technology, and the Future

Isaac Asimov, one of the "big three" science fiction authors (along with Arthur C. Clarke and Robert Heinlein), also had a lot to say about science and society generally. His Wikipedia entry attributes the following quotes, among others, to him:

  • A subtle thought that is in error may yet give rise to fruitful inquiry that can establish truths of great value.
  • Suppose that we are wise enough to learn and know — and yet not wise enough to control our learning and knowledge, so that we use it to destroy ourselves? Even if that is so, knowledge remains better than ignorance.
  • The most exciting phrase to hear in science, the one that heralds new discoveries, is not 'Eureka!' but 'That's funny...'
  • The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.
  • I do not fear computers. I fear lack of them.
  • Part of the inhumanity of the computer is that, once it is competently programmed and working smoothly, it is completely honest.
  • Individual science fiction stories may seem as trivial as ever to the blinder critics and philosophers of today — but the core of science fiction, its essence has become crucial to our salvation if we are to be saved at all.
  • It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be.
  • Science fiction writers foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not.

Posted by Robert at 8:39 AM | Comments (2)
category: Miscellaneous

July 19, 2005

Interspecies outsourcing

In response to my posting on the loss of programming jobs, Chad's Blog links to the logical extension of outsourcing. If you're like me, your laughter will be tinged with anxiety.

Posted by Robert at 11:36 AM | Comments (0)
category: Work

Is it harder to think in the abstract than in specifics?

Glenn Reynolds (a.k.a. "Instapundit") criticizes Daniel Pink's A Whole New Mind for encouraging people, perhaps indirectly, to seek out "holistic" and "right-brain" approaches to problems that will be appealing because they seem "easier than those tiresome traditional linear approaches with all their steps, increments, and well, work." Reynolds cautions that:

[G]enius . . . has more to do with perspiration than inspiration. And while our workplaces may be too unfriendly to right brain thinking, they're a lot friendlier than they used to be. . . . In fact, it's arguable that most business management could benefit from a more traditional approach to balance sheets and bottom lines: More thinking inside the Income Statement, and less effort to think "outside the box."

I think part of Reynolds' criticism stems from a problem with Pink's distinction between "logical" and "holistic" modes of thought. I've said before that I think Pink's analysis is insightful and well worth reading, but this distinction has limitations.

Consider instead a different distinction, that between thinking at different levels of abstraction (see previous posting). Imagine an engineer faced with the problem of designing an electronic calculator. She might start with low-level electronic components, such as resistors and capacitors, and attempt to combine them together into a calculator. This would require a detailed understanding of circuit design at a low level of abstraction (i.e., a high level of specificity).

If, however, the engineer had available existing components for adding, multiplying, and performing other arithmetic functions, she could design a calculator by combining those existing components together. She might not need to know anything about the internal guts (e.g., resistors and capacitors) of the components she used. This would require an understanding of circuit design at a higher level of abstraction.

Finally, if the engineer had access to an existing electronic calculator, she would not need to know anything about circuit design. But imagine that she programs the calculator to not only perform arithmetic, but also to solve equations. This would require a yet more abstract understanding of mathematics and programming.
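The three levels in the calculator example can be sketched in code. This is my own toy illustration (the function names and the equation-solving task are invented, not from the post); the point is only that each level builds on, and hides, the one below it:

```python
# Level 1: "low-level components" -- build addition out of single-bit logic,
# the software analogue of wiring resistors and capacitors by hand.
def full_adder(a, b, carry):
    return a ^ b ^ carry, (a & b) | (carry & (a ^ b))

def add_low_level(x, y, bits=8):
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

# Level 2: combine existing arithmetic components without knowing their guts.
def calculator(op, x, y):
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    return ops[op](x, y)

# Level 3: treat the finished calculator as a black box and use it to solve
# a more abstract problem -- here, finding x such that op(x, y) == target.
def solve(op, y, target):
    return next(x for x in range(1000) if calculator(op, x, y) == target)

print(add_low_level(2, 2))    # addition built from bit logic
print(calculator("+", 2, 2))  # addition built from components
print(solve("+", 2, 4))       # the calculator used to solve an equation
```

Each function poses real design problems at its own level, which is the "fractal" point made below: moving up a level hides detail without making the remaining work trivial.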

Is it any "harder" or "easier" to solve problems at one of these levels of abstraction than at the others? Yes, but only in the sense that it is easier to make an existing calculator add 2+2 than it is to design from scratch a calculator for adding 2+2.

But that is comparing apples and oranges. Once the calculator exists, it poses problems at a higher level of abstraction that are just as complex in their own right as the problems that existed at the lower level of abstraction before the calculator was built. Science and engineering are fractal in this way; there is no loss in resolution as you move among layers of abstraction.

Let me take a stab at using this analysis to harmonize Pink's original argument and Reynolds' criticism of it. We need to use both "logical" ("left-brain") thinking and "holistic" ("right-brain") thinking at every layer of abstraction. As Pink's Abundance, Asia, and Automation make it impossible for people in the U.S. to compete at their current level of abstraction using logical thinking alone, they will either need to use holistic thinking at that level of abstraction, or move up a level, where it will be possible for them to succeed using only logical thinking until the same forces kick in at that level at some point in the future. Then the whole game starts over again, and Pink will be able to write about the Neo-Conceptual Age and its progeny, ad infinitum.

Posted by Robert at 10:22 AM | Comments (0)
category: Human Creativity | Philosophy of Computing

Pink: Conceptual age causes programmers to reconceptualize career options

I just saw this posting by Dan Pink citing an AP story "on the dimming luster of programming jobs." According to the article, "many new entrants into the U.S. work force see info tech jobs as monotonous, uncreative and easily farmed out -- the equivalent of 1980s manufacturing jobs."

Automation tends to do that. I always thought it odd when programmers would claim (until recently) that computer programming was inherently more creative than other kinds of science and engineering. It should have been clear then, and the evidence is mounting now, that it was a peculiar combination of technological and economic circumstances that enabled the typical programmer to effectively demand such a large degree of on-the-job creative freedom as a condition of employment.

If programmers had any doubt that their jobs could be made less creative and even eliminated by the very technologies they brought into existence, they need only have asked their neighborhood blacksmith.

Posted by Robert at 9:57 AM | Comments (0)
category: Work

National CyberEducation Project Opens at University of Richmond School of Law

IPcentral reports that the University of Richmond School of Law Intellectual Property Institute has launched the National CyberEducation Project, "an interdisciplinary grass-roots effort to engage students, faculty, and administrators on college campuses in discussion of contemporary intellectual property issues. . . . The Project will produce conferences, articles, blogs, education kits, and other student-oriented, campus-centric programs and materials."

I like the "interdisciplinary" part. The time has long passed when it was possible for the legal profession to address contemporary intellectual property issues on its own.

Posted by Robert at 9:11 AM | Comments (0)
category: Education

What's the use of blogging?

Why blog? Why read blogs? What social benefits do blogs provide?

FutureWire comments on K. Daniel Glover's suggestion that "[i]nstead of being part of the Fourth Estate, [bloggers] are part of something new. I call it Estate 4.5 -- a nod both to the profession whose excesses galvanized many bloggers and to the medium they use. Bloggers are like inspectors general, the independent watchdogs of government."

Blogs no doubt can serve this purpose. But they can also be useful for purposes that aren't as overtly political. The purpose of some blogs is to filter the web for links to relevant information, thereby saving their readers the hassle of searching for such information themselves. Others provide original content not found elsewhere, written by authors who might not be publishable through traditional channels.

In this blog, I attempt to provide links to relevant information and add my own commentary, with the hope that I add value by providing insights into automating invention that wouldn't otherwise be apparent from the linked source itself. This blog is also useful as a testing ground for my ideas, and a place to record my thoughts so that I can retrieve them later.

Why do you blog? Why do you read blogs? What social benefits do you think blogs provide?

Posted by Robert at 8:32 AM | Comments (0)
category: Blogging

Shift seen from human coding to machine learning in the face of uncertainty

ComputerWeekly.com reports on the use of machine learning, instead of human-written algorithms, by Microsoft Research Cambridge (MRC) to solve problems in domains such as handwriting recognition. The reason, according to Christopher Bishop, MRC's Assistant Director, is that:

It is impossible to write an algorithm for recognising handwriting; many have tried but all have failed. There are too many differences even in the writing of one person to be able to write a set of rules or algorithm.

. . .

With handwriting recognition, for example, we do not try to program a solution; we give the computer lots of examples of handwriting, along with the text, and the machine learns to match the electronic image to the text.

The preceding quotes are taken from Bishop's British Computer Society Lovelace Lecture. A report of and slides from the lecture may be found here.

Much of the legal scholarship on intellectual property protection for software has focused on protection for "algorithms." In fact, many scholarly treatments have used the terms "software" and "algorithms" interchangeably. Developments such as those described by Bishop make clear that the subject matter of computer science is not limited to algorithms. The legal profession needs to wake up to this difference between the law's conception of computer science and the way that computer science works in the real world.
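The contrast Bishop describes — learning a mapping from labelled examples rather than writing rules by hand — can be sketched with a toy nearest-neighbour classifier. This is my own minimal illustration; real handwriting recognizers use far richer models and features:

```python
# Instead of hand-writing rules for each class, store labelled examples
# and classify a new input by the label of its nearest stored example.
def nearest_neighbor(examples, query):
    """examples: list of (feature_vector, label); query: feature_vector."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: distance(ex[0], query))[1]

# Toy "handwriting" features (e.g., stroke count, aspect ratio).
# The behaviour comes entirely from the training data, not from rules.
training = [
    ((1.0, 0.2), "1"),
    ((0.9, 0.3), "1"),
    ((0.2, 1.0), "0"),
    ((0.3, 0.9), "0"),
]
print(nearest_neighbor(training, (0.95, 0.25)))  # classified as "1"
```

Note that there is no "recognize a 1" rule anywhere in the code — which is exactly why equating "software" with "algorithms" misses systems like this.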

Posted by Robert at 8:00 AM | Comments (0)
category: Design & Engineering | Software Patents

July 18, 2005

Imagination Engines Launches New Web Site

I just noticed that Imagination Engines, founded by Steven Thaler, has launched a new web site. The company describes itself as follows:

Imagination Engines is a small company working with what many have recognized as potentially the biggest idea in history, a technology that can invent everything else. Accordingly, largely due to issues of credibility, the company's road to success has been rocky. There have been many skeptics and critics, but there have been more believers and supporters. Now the company thrives upon a significant contract stream and tangible products that speak louder for the technology than words possibly could.

I believe I first heard about the company when I read an article (such as this one) describing how the company had used its patented Creativity Machine to invent the Oral-B CrossAction toothbrush. Although genetic algorithms seem to be getting most of the attention these days, Imagination Engines' Creativity Machine relies on neural networks.

Posted by Robert at 4:52 PM | Comments (0)
category: Artificial Invention | Technology Industry

A Whole New Mind by Daniel Pink

A Whole New Mind by Daniel Pink is well worth reading if you're interested in the topics covered by this blog. Pink's argument is that holistic thinking, and a variety of skills associated with it, will become increasingly economically valuable in the coming "Conceptual Age." A relatively small portion of the book is dedicated to substantiating this claim. Most of the book focuses on describing the "six senses" -- the set of aptitudes that you will need to succeed in the Conceptual Age -- and on providing practical ways for individuals to sharpen those senses. (The six senses are Design, Story, Symphony, Empathy, Play, and Meaning.)

Pink identifies three drivers of the Conceptual Age: Abundance, Asia, and Automation. He draws a useful analogy between the defeat of the iconic John Henry by an automated steam drill and the defeat of chess grand master Garry Kasparov by the IBM supercomputer Deep Blue. Pink concludes the analogy:

Last century, machines proved they could replace human backs. This century, new technologies are proving they can replace human left brains.

I agree. But Pink's conclusion doesn't go far enough. Artificial creativity is proving increasingly able to replace human right brains. For example, human programmers were still required to program the incarnation of Deep Blue that defeated Kasparov. Kasparov may have his revenge when Deep Blue's programmers are put out of work by a genetic algorithm that evolves winning chess playing strategies. Although we're not there yet, Moshe Sipper and his colleagues have made some great strides.

In the final analysis, my extended analogy is still consistent with Pink's general thesis -- that people will need to develop higher-level conceptual skills in the coming century to remain competitive. Deep Blue's programmers' best bet for keeping their jobs in the long term is to learn how to write genetic algorithms that produce chess-playing code, rather than continuing to fine tune their skills at writing chess-playing code itself.

Posted by Robert at 3:08 PM | Comments (0)
category: Human Creativity | Work

July 15, 2005

Patent infringement by teleportation

Patently-O comments on a lawsuit in which AT&T has accused Microsoft of infringing a patent on speech codecs. Microsoft claimed that it didn't infringe the patent because "Microsoft generates its source code in the U.S. That source code is copied and shipped abroad to Foreign computer manufacturers who . . . generated 2nd generation copies of the software that are then installed and sold."

Although Section 271(f) of the U.S. patent statute attempts to prohibit shipping components of patented inventions overseas to be combined there, Microsoft claimed (among other things) that it wasn't "supplying" components because it was the foreign-made copies, not the software that Microsoft supplied, that were installed and sold overseas. The District Court disagreed, finding that "supplying" software necessarily implies making a copy: "Copying, therefore, is part and parcel of software distribution."

It seems that the same logic would apply to "supplying" physical objects by "virtual teleportation" in the future (see my previous posting). For example, imagine that I have a piston for use in a patented automobile engine. I take a 3D photograph (using my handy-dandy 3D camera) of the piston and send the resulting digital design file over the Internet to an overseas manufacturer. The manufacturer uses a 3D printer to produce a physical piston from that file, and incorporates the piston into an automobile engine that it sells overseas. Am I an infringer under 271(f)? If Microsoft "supplied" the software in the case described above, then there's a strong argument to be made that I have "supplied" the piston even though I didn't transport any atoms overseas.

Stay tuned for developments in this area as 3D printing and desktop manufacturing become more widespread. If anyone knows of any cases specifically interpreting 271(f) in this context, please let me know.

Posted by Robert at 9:04 AM | Comments (0)
category: Software Patents

Does open source development produce innovations?

Anything Under the Sun Made by Man has an interesting posting questioning whether open source software development produces innovations. The author, a patent agent, relates that a software developer client of his "was of the feeling that nothing innovative has come from Open Source Software, nor will it ever. He cited several examples, including Linux, where a viable and useful piece of commercial software had been rewritten by OSS developers and released for free."

It is true that most of the effort in open source development to date has been directed to reproducing the functionality of existing software, such as operating systems, compilers, web servers, and graphical user interfaces. But although such projects may not produce innovations per se, they have other benefits. The original motivation for developing GNU/Linux was not to produce a new operating system, but to produce one that could be used and modified by its users without engaging in copyright infringement. Proponents of open source also claim that open source development produces software with fewer bugs and security holes than software produced using closed development models.

Also worth noting is that most open source projects produce platforms, protocols, and interfaces, rather than applications. These kinds of end products are valuable because they facilitate standardization and the development of specific applications and data formats consistent with the adopted standards. From a commercial perspective, it can be beneficial for such standards to be "open" -- not owned by any private entity -- because they increase the pie for everyone who is in the business of providing products and services consistent with the standards.

There is a connection between all of this and automated inventing. Should John Holland have attempted to patent the basic features of the genetic algorithm? In one sense, genetic algorithms are a platform for inventing and for problem-solving more generally. The arguments above would therefore imply that keeping genetic algorithms generally "open," as Holland did, was the right strategy for maximizing innovation. On the other hand, many patents have issued on specific applications of genetic algorithms. Such applications might have been kept as trade secrets, depriving the public of knowledge about them, had patent protection not been available.

This is all to say that developing legal rules to encourage optimal innovation is tricky business.

Posted by Robert at 8:20 AM | Comments (0)
category: Intellectual Property Law | Technology Industry

July 13, 2005

Microsoft obtains patent on training people to analyze music

News.com reports that Microsoft has obtained a patent on techniques for "training a trainee to analyze media, such as music, in order to recognize and assess the fundamental properties of any piece of media, such as a song or a segment of a song." As News.com summarizes, "that roughly means the company has a recommendation tool for music that is run by real people, and it needs to make sure that people are rating songs the same way."

Rather than comment on the merits of the patent, let me call your attention to the fact that the training process covered by the patent "includes an initial tutorial and a double grooving process."

Double grooving process?

The idea is that music experts are used to rate (classify) songs according to various criteria. The trainee is then asked to rate the same songs, and if he or she is able to match the experts' ratings sufficiently closely, the trainee is deemed "a groover" and is then allowed to rate new songs. Or, as the patent puts it: "When a high enough degree of cross-listening consensus is reached, the new listener becomes a groover and can classify new songs or segments of songs."
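The consensus check might look something like this in code. This is only my reading of the News.com summary, not the patent's actual method, and the agreement threshold is invented:

```python
def is_groover(trainee_ratings, expert_ratings, threshold=0.8):
    """Return True if the trainee's ratings agree with the experts'
    on a sufficient fraction of the songs they both rated."""
    shared = set(trainee_ratings) & set(expert_ratings)
    if not shared:
        return False
    matches = sum(trainee_ratings[s] == expert_ratings[s] for s in shared)
    return matches / len(shared) >= threshold

experts = {"song_a": "upbeat", "song_b": "mellow", "song_c": "upbeat"}
trainee = {"song_a": "upbeat", "song_b": "mellow", "song_c": "mellow"}
print(is_groover(trainee, experts))  # 2 of 3 match -> below 0.8 -> False
```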

I can see the letter that Microsoft will send when it believes someone is infringing the patent: "We hereby demand that you immediately cease all grooving using our patented double grooving process. If you continue to groove, swing, get down, or otherwise funkifize, we shall be required to take legal action against you."

Posted by Robert at 8:07 AM | Comments (0)
category: Software Patents

July 12, 2005

A call for legal polymorphism

"Polymorphism" is a term that is familiar to computer scientists but not to most lawyers. According to its Wikipedia definition, "polymorphism is the idea of allowing the same code to be used with different classes of data." One of the benefits of polymorphism is that it allows code to be written abstractly once, and to be applied to new and different classes of data without rewriting the code. In other words, polymorphic code is flexible enough to adapt to changing circumstances.

Lawyers are familiar with the same concept, although not under the same name. It is often said, for example, that the U.S. Constitution is a "living document" that was written abstractly in an attempt to make it flexible enough to deal with evolving circumstances without the need to rewrite the Constitution itself. Legislators, judges, and lawyers strive (or at least should strive) to write legal documents with the same forward-looking generality. In his book Code, Professor Lawrence Lessig used the term "translation" to refer to "finding a current reading of the original Constitution that preserves its original meaning in the present context." Those who know more about legal theory than I do could certainly give examples of other terms that capture the same concept.

One of the areas in which it is most difficult to achieve legal polymorphism is intellectual property law, because of the need for the law to adapt to rapidly changing, and sometimes revolutionary, technology. For example, the U.S. patent statute (written in 1952) defines the subject matter of patent law as "any new and useful process, machine, manufacture, or composition of matter." Although the drafters of the statute intended for this definition to be extremely general and hence flexible, they did not anticipate that computer programs would not fit easily into any of these four categories and hence cause (seemingly endless) controversy.

Fashioning intellectual property law to strike the right balance between clarity and predictability on one hand and flexibility on the other in light of changing technology is the challenge of legal polymorphism. Computer professionals and legal professionals have much to learn from each other about this enterprise and could promote legal polymorphism better in cooperation than either could hope to do separately.

Posted by Robert at 11:27 AM | Comments (0)
category: Intellectual Property Law

Artificial inventions in science fiction

IlliGAL Blogging links to worldchanging's coverage of a science fiction novel in which the protagonist has "patented using genetic algorithms to patent everything they can permutate from an initial description of a problem domain – not just a better mousetrap, but the set of all possible better mousetraps. Roughly a third of his inventions are legal, a third are illegal, and the remainder are legal but will become illegal as soon as the legislatosaurus wakes up, smells the coffee, and panics." I don't know how prominently these inventions figure in the novel, but it looks like an interesting read, and it captures quite nicely an issue that the law needs to address soon.

Posted by Robert at 9:44 AM | Comments (0)
category: Evolutionary Computation | Software Patents

July 7, 2005

Software Patents Can't Be Banned

News.com reports that the European Parliament has rejected the proposed "software patent directive." The Foundation for a Free Information Infrastructure (FFII), which strongly opposed the Directive, announced that "[t]his is a great victory for those who have campaigned to ensure that European innovation and competitiveness is protected from monopolisation of software functionalities and business methods." For news and views from other blogs on the directive's rejection, see Axel Horns, Information Policy, and A Polytrope's Musings.

The success or failure of the Directive will have little practical effect because software patents cannot be banned. I'm not saying that software patents should not be banned. I'm saying that they cannot be banned, at least not without also banning patents on hardware. And the reason isn't political or economic. It's that there is no justification for granting patents on hardware that does not also justify granting patents on software.

Imagine that the European Parliament adopted a directive tomorrow proclaiming that "software shall be excluded from patentability." The next day, coincidentally, two inventors at opposite ends of Europe independently build two separate machines for clarifying x-rays. As described previously, the two machines are identical in external appearance and they both perform the exact same new and useful process for clarifying x-rays. One box contains circuitry custom-designed by an expert electrical engineer, while the other box contains a computer running software written by a programmer.

The electrical engineer submits a patent application for the circuitry-based x-ray box and obtains a patent on it. The programmer submits a patent application for the software-based x-ray box and is denied a patent, based on the new ban on software patents.

The hypothetical ban on software patents will only drive our hypothetical programmer to write his patent applications differently. Instead of describing his invention as a computer program in the patent application, he will describe the physical implementation of the program in a physical computer, thereby reframing his "software patent" as a "hardware patent" without any dishonesty or inaccuracy. In an extreme case, he could describe (perhaps using a binary listing) the state (on or off) of each and every one of the transistors in the computer's memory when the program is stored in it. Such a description would describe, admittedly inelegantly, a new and useful physical state of a computer that could have been obtained by hardwiring rather than by writing a computer program.
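The equivalence this hypothetical relies on can be shown in miniature: any small function can be "hardwired" by tabulating its outputs, so the identical behavior exists either as a program or as stored state. This is a toy illustration of my own, not a claim about how such applications are actually drafted:

```python
# The "software" form: behavior expressed as a program.
def clarify_pixel(value):
    """Toy x-ray 'clarification': stretch mid-range contrast (0-15 scale)."""
    return min(15, max(0, (value - 4) * 2))

# The "hardware" form: the same behavior expressed as a fixed table of
# states -- the moral equivalent of listing the on/off state of each
# memory element with the program stored in it.
lookup_table = [clarify_pixel(v) for v in range(16)]

def clarify_pixel_hardwired(value):
    return lookup_table[value]

# Behaviorally identical: no input distinguishes the two implementations.
print(all(clarify_pixel(v) == clarify_pixel_hardwired(v) for v in range(16)))
```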

There would be no basis to reject such a patent application under the ban on "software patents." Such a patent application would be just as much a "hardware patent" as the patent application submitted by the electrical engineer.

The effect of the ban on software patents, therefore, would be to drive up the cost of writing and prosecuting patents, and make patents on inventions created using computer programs more difficult to understand. This is exactly the opposite of the effect intended by groups such as FFII.

And such a scenario is scarcely science fiction. Indeed, it is exactly what has happened in the U.S., Europe, and elsewhere beginning at least as early as the 1970s in the face of uncertainty about whether "software patents" would be granted and upheld. The trend towards "hardwarification" of patents on computer programs has declined somewhat in the U.S. as the status of such patents has become clearer, but developments such as those taking place in the E.U. may reverse the trend, once again increasing the costs of obtaining patents and making them more obscure. I have proposed a solution to this problem for those who are interested.

Posted by Robert at 11:29 AM | Comments (0)
category: Software Patents

A software patent puzzle (part 4)

This entry follows parts 1, 2, and 3.

John Koza claims that genetic programming is an automated invention machine. I think this is a reasonable way to think about genetic programming. But it is also accurate and helpful to think of genetic algorithms as tools that assist humans in the process of inventing. If you imagine a continuum of invention automation tools, ranging from a sharp stone on one end to a "genie in a box" on the other end, genetic algorithms are much closer to the genie than the stone. But they still require human input and manipulation to generate inventions.
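To make the "human input" point concrete, here is a minimal sketch of a genetic algorithm in Python. This is an illustrative toy, not Koza's genetic programming system or any particular commercial tool; the function names, parameters, and the trivial fitness goal (a genome of all 1s) are invented for illustration. Note that the human still supplies the fitness function, the genome encoding, and the run parameters; the algorithm only searches within the space the human defines.

```python
import random

def run_ga(fitness, genome_length=20, pop_size=50, generations=100,
           mutation_rate=0.02, seed=0):
    """Evolve bitstring genomes to maximize a human-supplied fitness function."""
    rng = random.Random(seed)
    # Random initial population of bitstrings.
    pop = [[rng.randint(0, 1) for _ in range(genome_length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half as parents (truncation selection).
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_length)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]                  # point mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# The human input: deciding what counts as a good solution.
# Here the toy goal is simply to maximize the number of 1 bits.
best = run_ga(fitness=sum)
```

Even in this toy, the interesting design work (choosing the encoding and the fitness measure) is done by the person, which is why the post describes genetic algorithms as closer to the genie than the stone, yet still short of a genie.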

As a result, genetic algorithms effectively boost the "inventing power" of the people who use them. Many people at last week's GECCO and NASA conferences told me that genetic algorithms are making it possible for people with less and less knowledge and skill in a particular field to produce new inventions in that field. For example, an engineer with a bachelor's degree and a genetic algorithm might be able to match the inventing prowess of an engineer with a Ph.D. without a genetic algorithm. The Ph.D. engineer could, of course, use the same genetic algorithm to boost his own inventive performance. A genetic algorithm on everyone's desktop would, in effect, raise the level of skill of everyone in the field, if "skill" is measured by the ability to produce inventions.

What does this mean for our hypothetical "person having ordinary skill in the art" (PHOSITA)? From the perspective of public policy, I think the right outcome is to increasingly assume that PHOSITA has access to genetic algorithms and the knowledge of how to use them, as such access and knowledge become increasingly common in the real world. PHOSITA's exact degree of access and knowledge will vary from case to case.

The logical consequence of this is that whether a particular invention is nonobvious should be determined by asking whether PHOSITA would have been able to generate the invention by applying ordinary skill to a genetic algorithm, even if the resulting invention would have been surprising or unexpected to PHOSITA. This is consistent with the underlying purpose of the novelty and nonobviousness requirements, namely to issue patents "only for those literally new solutions that are beyond the grasp of the ordinary artisan who had a full understanding of the pertinent prior art." Chisum on Patents, 5.01 (emphasis added). What interest does the public have in granting patents on inventions that could be produced by anyone with ordinary skill and access to a generally-known genetic algorithm? As artificial creativity software continues to effectively increase the level of ordinary skill in various arts, the law should assume an increased level of skill on the part of PHOSITA for purposes of determining whether an invention is nonobvious.

What is interesting is that genetic algorithms and other forms of artificial creativity raise difficult questions about what is meant by phrases such as "beyond the grasp." Does this mean "beyond the mental grasp," in the sense that PHOSITA would not have conceived of the invention based on his knowledge and skill, or "beyond the practical grasp," in the sense that PHOSITA would not have generated the invention by applying his knowledge and skill to the tools (including genetic algorithms) at his disposal?

I can think of precedent in the U.S. that supports both of these interpretations. Therefore I am only arguing here for what I believe is the right outcome from a public policy perspective, not what I think the current state of the law actually is. I plan to research the law on this point in more detail and to publish my conclusions at a later date.

Posted by Robert at 8:58 AM | Comments (0)
category: Software Patents

A software patent puzzle (part 3)

I had an interesting discussion last week at the NASA Evolvable Hardware Conference with John Koza, Martin Keane, Matthew Streeter, Sameer Al-Sakran, and Lee Jones. We talked about the work they have been doing using genetic algorithms to re-create previously-patented inventions and to generate new patentable inventions. Keane, Koza, and Streeter were granted a U.S. Patent earlier this year on improved proportional-integral-derivative (PID) controllers (a kind of control circuitry) that were generated using genetic algorithms.

Martin Keane posed a question that is essentially the next piece to parts 1 and 2 of this software patent puzzle. In those parts, I argued that whether an invention satisfies patent law's novelty or utility requirements should not depend on whether the invention is embodied in hardware or software, or on whether the invention was invented with the assistance of a computer.

Martin Keane's hypothetical addressed the third requirement for patentability: nonobviousness (or "inventive step"). If a human and a genetic algorithm independently generate the same invention at the same time, is it possible for the human-generated invention to be nonobvious and the computer-generated invention to be obvious?

My short answer is the same as for novelty and utility: no. Whether an invention is nonobvious shouldn't depend on how the invention was created. So the human-generated and the computer-generated inventions are either both nonobvious or both obvious.

One reason for this (at least in the U.S.) is that the section of the patent statute that defines nonobviousness explicitly states that "[p]atentability shall not be negatived by the manner in which the invention was made." 35 U.S.C. 103(a).

But even if this sentence weren't in the statute, whether a particular invention (and I'm using the term "invention" here with its colloquial, rather than legal, meaning) is nonobvious at a particular point in time depends on whether the invention "would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains." Id. In other words, what is important is whether the invention would have been obvious to the hypothetical "person having ordinary skill in the art" (PHOSITA), not how the invention was actually invented in the real, non-hypothetical, world.

In summary, whether an invention is nonobvious shouldn't vary depending merely on whether the invention was invented using a genetic algorithm or some other kind of computer automation. The tougher question, in my view, is: should computer-automated inventing have any impact on how the nonobviousness requirement is interpreted for all inventions, regardless of how they are actually invented in the real world? I will propose an answer to this question in the next entry in this series.

Posted by Robert at 7:39 AM | Comments (0)
category: Software Patents