About This Blog

Automating Invention is Robert Plotkin's blog on the impact of computer-automated inventing on the future of invention and patent law.

November 16, 2009

Encouraging Innovation with Cash Prizes

The Computing Community Consortium (CCC) blog recently examined the role of monetary prizes as an incentive for encouraging research in technical fields. In September, Netflix awarded a $1 million prize in a contest for the best algorithm for predicting users' film ratings, a collaborative filtering problem. The Clay Mathematics Institute and Wolfram Research are other organizations that have offered prizes for technical innovation. See the CCC blog for a discussion of the benefits and drawbacks of these awards.
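
For readers unfamiliar with the term, here is a toy sketch of user-based collaborative filtering. The data, names, and the simple cosine-similarity method are my own illustration, not Netflix's actual (far more sophisticated) system:

```python
import math

# Toy data: user -> {film: rating}. All names and numbers are hypothetical.
ratings = {
    "ann": {"Alien": 5, "Brazil": 4, "Clue": 1},
    "bob": {"Alien": 4, "Brazil": 5},
    "cat": {"Alien": 1, "Clue": 5},
}

def similarity(u, v):
    """Cosine similarity over the films both users have rated."""
    shared = ratings[u].keys() & ratings[v].keys()
    if not shared:
        return 0.0
    dot = sum(ratings[u][f] * ratings[v][f] for f in shared)
    norm_u = math.sqrt(sum(ratings[u][f] ** 2 for f in shared))
    norm_v = math.sqrt(sum(ratings[v][f] ** 2 for f in shared))
    return dot / (norm_u * norm_v)

def predict(user, film):
    """Predict a rating as a similarity-weighted average of others' ratings."""
    pairs = [(similarity(user, v), r[film])
             for v, r in ratings.items()
             if v != user and film in r]
    total = sum(s for s, _ in pairs)
    return sum(s * r for s, r in pairs) / total if total else None

# bob hasn't rated "Clue"; blend ann's and cat's ratings, weighted by how
# similar each is to bob.
print(round(predict("bob", "Clue"), 2))   # about 3.02
```

The contest's point was precisely that squeezing additional accuracy out of predictions like these is hard enough to be worth $1 million.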

Posted by BlogAuthor1 at 1:22 AM | Comments (2)
category: Philosophy of Computing

October 24, 2009

Report on the Risks of Future AI

An earlier post mentioned the discussions that have taken place among 25 experts in the fields of artificial intelligence, robotics, ethics, and law. Under the auspices of the Association for the Advancement of Artificial Intelligence, this panel of experts met in Asilomar, California to discuss possible risks of future AI developments. Now New Scientist has reported on the panel's initial findings, which were presented at the International Joint Conference on Artificial Intelligence. The entire panel agreed that creating human-level artificial intelligence is possible in principle, but their estimates of when this might occur varied widely. Read more details on the New Scientist website.

Posted by BlogAuthor1 at 1:01 PM | Comments (0)
category: Philosophy of Computing

August 25, 2009

Debating the Dangers of Intelligent Machines

An article in the New York Times reports on a group of research scientists who recently met at the Asilomar Conference Grounds in Monterey Bay, California, to debate advances in artificial intelligence. Some of these advances seem threatening, such as a robot that can seek out a power source and recharge itself, or computer viruses that are impossible to eradicate. Part of the researchers' concern relates to the social disruptions that further AI advances could bring; the misuse of AI technology by criminals was another topic of discussion.

The researchers who met are leading computer scientists, artificial intelligence researchers, and robotics experts. The conference was organized by the Association for the Advancement of Artificial Intelligence, which will issue a report later this year. The meeting could prove pivotal to the field of AI.

Posted by BlogAuthor1 at 12:54 AM | Comments (0)
category: Philosophy of Computing

August 16, 2009

Blurring the Line Between Hardware and Software

Writing on the Foresight Institute website, J. Storrs Hall discusses how the boundary between hardware and software is becoming "fuzzier" as systems grow more complex and nanotechnology becomes more important. With the future use of nanocontrollers, the complexity of mechanical systems will accelerate to the point that "matter compilers" will be required for their design. Nanotechnology designers will then use the same processes to design nanotechnology that today's software developers use to design and implement software. Dr. Hall predicts that the ability to write reliable software will become more and more important in the coming world of nanotechnology. If he is right, this is further evidence that the problems software has caused for patent law will, for the same reasons, begin to creep into the application of patent law to nanotechnology.

Dr. Hall is a leader in the field of molecular nanotechnology and president of the Foresight Institute. He is also known for coining the term Utility Fog, which is a hypothetical collection of nano-robots that unite to form a solid mass in the shape of any desired object.

Posted by BlogAuthor1 at 10:57 AM | Comments (0)
category: Design & Engineering | Miscellaneous | Philosophy of Computing

August 10, 2009

Turing Tarpits and Nonobviousness

Alan Perlis coined the term "Turing tarpit" in his 1982 article "Epigrams on Programming," in the epigram: "Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy."

Turing tarpits tell us something about nonobviousness in patent law which may seem trivial, but which is often missed in debates about software patents: the mere fact that computers make the creation of a particular piece of software possible does not render that software obvious. Computers may facilitate the creation of software, and thereby raise the bar of nonobviousness for software, but they don't raise the bar infinitely. Yet it continues to be common to hear the argument that computers render all software trivial to create, and therefore obvious and unpatentable.
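
To make the epigram concrete, here is a minimal sketch of a classic Turing tarpit; the interpreter and the example program are my own illustration, not Perlis's. Brainfuck has only eight instructions yet is Turing-complete, so any software is possible in it, but even trivial software is laborious to create:

```python
# A minimal Brainfuck interpreter (I/O instructions "." and "," are omitted
# for brevity). Eight instructions suffice for universal computation, yet
# nothing of interest is easy to write.

def run_bf(program, tape):
    jumps, stack = {}, []
    for i, ch in enumerate(program):        # pre-match the loop brackets
        if ch == "[":
            stack.append(i)
        elif ch == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    ptr = pc = 0
    while pc < len(program):
        ch = program[pc]
        if ch == ">":
            ptr += 1                        # move the data pointer right
        elif ch == "<":
            ptr -= 1                        # move the data pointer left
        elif ch == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == "[" and tape[ptr] == 0:
            pc = jumps[pc]                  # skip the loop body
        elif ch == "]" and tape[ptr] != 0:
            pc = jumps[pc]                  # repeat the loop body
        pc += 1
    return tape

# Compute 2 + 3: drain cell 0 into cell 1, one unit per pass of the loop.
tape = run_bf("[->+<]", [2, 3, 0, 0])
print(tape[1])   # 5
```

Writing anything of interest this way is possible in principle and punishing in practice; the gap between the two is exactly the gap between possibility and obviousness.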

I presume that the "Turing tarpit" idea was inspired both by Turing's conception of the computer as a universal machine, capable of mimicking any other computing machine, and by Turing's response to "Lady Lovelace's Objection" in his paper "Computing Machinery and Intelligence." The lesson for patent law is the same: mere possibility does not imply predictability, and therefore should not be treated as sufficient proof of obviousness.

Posted by Robert at 6:16 PM | Comments (0)
category: Philosophy of Computing | Software Patents

July 29, 2009

Time to Retire the Turing Test?

In 2010, it will be 60 years since computer science pioneer Alan Turing proposed a test to determine whether a machine can demonstrate intelligence. The Turing Test has been a fundamental benchmark for Artificial Intelligence (AI) for many years, but now Dr. Aladdin Ayesh and other AI experts are questioning whether Turing's proposal is a complete test for machine intelligence. The question will be debated at a symposium to be held at the 2010 Artificial Intelligence and Simulation of Behaviour Conference.

Posted by BlogAuthor1 at 1:55 AM | Comments (0)
category: Philosophy of Computing

May 23, 2009

The Pursuit of Thinking Machines

In March 2009, the Second Conference on Artificial General Intelligence (AGI-09) was held in Arlington, Virginia. The conference's main topic was the Holy Grail of the AI field -- the creation of thinking machines with human-level (or higher) intelligence. The 100 attendees included independent researchers and academics, as well as representatives from major tech companies like Google, GE, AT&T, and Autodesk.

Conference Chair Ben Goertzel sees most AI research today as having too narrow and specialized a focus. The dream on which he says AI was founded, of "software displaying intelligence with the same sort of generality that human intelligence seems to have," is not being addressed by current research. The AGI conference is part of a concerted effort to form a cohesive community focused on radical innovation and accepting of diverse approaches.

Posted by BlogAuthor1 at 1:53 AM | Comments (0)
category: Artificial Invention | Philosophy of Computing

March 3, 2009

Can Google Make You Smarter?

An article in Discover Magazine challenges claims that the increased use of search engines like Google can rob us of our ability to think and remember. There have also been claims that increased text messaging encourages illiteracy. Author Carl Zimmer disputes these theories, proposing instead that information technologies are making the world an extension of our minds.

The concept of an extended mind was brought to public attention in a 1998 article by philosophers Andy Clark and David Chalmers. They describe the mind as a system made up of the brain and parts of its environment. They argue that we all have minds that extend into our environment. If we subscribe to this extended mind theory, then today's mind-altering technologies can be seen as opening up a world of possibilities.

Posted by BlogAuthor1 at 4:22 PM | Comments (0)
category: Philosophy of Computing

February 15, 2009

The Petabyte Age

According to an article in Wired magazine, the explosion of statistical information brought on by the use of computers calls for an entirely different model for analysis. We are now in the Petabyte Age. A petabyte is a unit of computer storage equal to one quadrillion bytes.

Google is discussed as an example of how data can be analyzed without regard for context. The Google advertising philosophy led to the automatic placement of ads on web pages based on a purely mathematical analysis of their content. Peter Norvig, Google's research director, posits that the measurement of large amounts of data will someday replace all known models.

The implications reach far beyond the world of advertising and touch on all areas of science. The scientific method itself, which consists of hypothesis, model, and test, may now face a serious competitor.

Posted by BlogAuthor1 at 6:43 PM | Comments (0)
category: Philosophy of Computing

February 7, 2009

Billion-Point Computing

Scientists from the University of California at Davis and Lawrence Livermore Labs have developed a computer algorithm that allows features and patterns to be extracted from extremely large and complex sets of raw data. The algorithm is optimized to run on computers with as little as two gigabytes of memory. It addresses problems with analyzing increasingly large data sets which result from simulations of real-world phenomena and from physical experiments and observations.

According to Attila Gyulassy, who led the five-year team effort, "What we've developed is a workable system of handling any data in any dimension. We expect this algorithm will become an integral part of a scientist's toolbox to answer questions about data." The algorithm works by dividing data sets into parcels of cells, which are analyzed and merged. This process is repeated, with data that is no longer needed discarded at each merge step. The result is a drastic reduction in the amount of memory needed to store the results of the calculations.
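
To illustrate the shape of that divide-analyze-merge-discard pattern, here is a schematic sketch. It is my own illustration, not the UC Davis / Lawrence Livermore code: the per-parcel "analysis" below is just a tiny summary, whereas the real algorithm extracts topological features, but the memory-saving structure is the same.

```python
def summarize(parcel):
    """Analyze one parcel of raw cells, reducing it to a small summary."""
    return {"n": len(parcel), "min": min(parcel),
            "max": max(parcel), "sum": sum(parcel)}

def merge(a, b):
    """Combine two summaries; the raw data behind them is no longer needed."""
    return {"n": a["n"] + b["n"],
            "min": min(a["min"], b["min"]),
            "max": max(a["max"], b["max"]),
            "sum": a["sum"] + b["sum"]}

def analyze(stream, parcel_size=4096):
    """Stream through the data, holding only one parcel and one summary."""
    running, parcel = None, []
    for value in stream:
        parcel.append(value)
        if len(parcel) == parcel_size:
            s = summarize(parcel)
            running = s if running is None else merge(running, s)
            parcel.clear()          # discard raw data after each merge step
    if parcel:                      # handle the final partial parcel
        s = summarize(parcel)
        running = s if running is None else merge(running, s)
    return running

# Ten million points processed with only a parcel's worth of working memory:
result = analyze(iter(range(10_000_000)))
print(result["n"], result["min"], result["max"])   # 10000000 0 9999999
```

Because each merge step frees the data behind it, memory use is bounded by the parcel size rather than by the size of the data set.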

Posted by BlogAuthor1 at 5:41 PM | Comments (0)
category: Philosophy of Computing

January 23, 2009

QED by Computer

New computer tools could revolutionize the field of mathematics by assisting in the development of nearly infallible proofs of mathematical theorems. Until now, traditional proofs have allowed many inferences to be glossed over or omitted, leaving determination of a theorem's correctness to the scrutiny of other mathematicians. A series of articles by leading experts published in the Notices of the American Mathematical Society describes the use of computer proof assistants in the development of "formal proofs," which check every logical inference in the proof of a mathematical theorem.
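
As a small illustration of what "checking every logical inference" means, here is a sketch in the Lean proof assistant; these are my own hypothetical examples, not ones drawn from the Notices articles. The proof checker's kernel verifies each step mechanically and rejects anything glossed over:

```lean
-- A computational fact, verified by the kernel through direct reduction:
theorem two_plus_two : 2 + 2 = 4 := rfl

-- A general theorem, proved by induction; each case is machine-checked,
-- and no inference may be omitted or left "to the reader":
theorem zero_add_left (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih]
```

Scaling this kind of checking from toy lemmas up to major theorems is exactly the project the Notices articles describe.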

If computer proof assistants come into widespread use, formal proofs of the central theorems of mathematics may become possible. Thomas C. Hales likens this possibility to "the sequencing of the mathematical genome."

Posted by BlogAuthor1 at 2:23 AM | Comments (0)
category: Philosophy of Computing

January 17, 2009

The Computer as Collaborator

Ken Birman, a computer science professor at Cornell University, claims that the computer has gone from being a tool that serves science to being a framework for all other sciences. Another Cornell professor, Jon Kleinberg, thinks that computer algorithms will be to science in the 21st century what mathematics was in the 20th. Read their comments, along with examples of new uses of computers in the sciences, in Computerworld.

Posted by BlogAuthor1 at 6:39 PM | Comments (0)
category: Philosophy of Computing

February 26, 2008

Upcoming talk on automating invention at the MIT Technology and Culture Forum

I will be giving a talk on computer-automated inventing and its philosophical and ethical implications at MIT on Thursday, March 6, from 4:30 to 6:00 pm in Room E51-315. In the talk I will preview some of the examples of computer-automated inventing that I describe in more detail in my upcoming book, and explain how the inventive processes behind them are already raising new questions about what it means to be an inventor and about the ethical responsibilities of inventors.

The talk is sponsored by the MIT Technology and Culture Forum.

Posted by Robert at 8:00 AM | Comments (0)
category: Ethics | Philosophy of Computing

October 31, 2005

Defining "software"

I often find people defining computer hardware as the "physical" part of a computer and software as either the "intangible" part of the computer or as "instructions" stored in the hardware. Although there's nothing wrong with these definitions per se, they leave out something important and might actually impede our ability to understand the importance of computer programs in the future.

I think it is worthwhile to think of hardware as the fixed part of a computer and software as the variable part. I like to use the following analogy: hardware is to software as a drill is to a drill bit. A drill is a drill; it doesn't change. To make the drill perform different functions, you attach different bits to it. The drill is fixed and the bits are variable, just like hardware and software, respectively. When you buy a computer, you buy the fixed hardware, which you can make perform different functions by attaching (installing) different software to it.
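
The analogy can even be put in code form. Below is a small sketch (my own illustration, not from the original post) in which the Drill object is the fixed part and the interchangeable bits are the variable part:

```python
class Drill:
    """The fixed 'hardware': all it knows how to do is drive whatever bit
    is currently attached."""
    def __init__(self):
        self.bit = None

    def attach(self, bit):
        self.bit = bit              # like installing software

    def run(self, material):
        return self.bit(material)   # behavior comes from the attached bit

# Interchangeable 'bits' -- the variable part:
def twist_bit(material):
    return f"bored a hole in {material}"

def screwdriver_bit(material):
    return f"drove a screw into {material}"

drill = Drill()                     # buy the fixed hardware once
drill.attach(twist_bit)
print(drill.run("oak"))             # bored a hole in oak
drill.attach(screwdriver_bit)       # same fixed machine, new function
print(drill.run("oak"))             # drove a screw into oak
```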

What I find useful about this analogy is that it makes clear that we are not talking here about the physical form taken by the drill and its bits -- both are quite physical. And the same is true in the case of hardware and software -- both are physical if what we are talking about are components of a physical computer. The web browser that you are using to read this blog is physical; it consists of electrical signals in your computer. Even if you take issue with the physicality of electrical signals, tomorrow's molecular computers will convince you that software is a physical thing.

Therefore, if the law is to treat software differently than hardware or anything else, the difference must stem from something other than the fact that software is not "physical." I've tried elsewhere in this blog to explore what else makes software different, and the implications of those differences for the law.

Posted by Robert at 4:24 PM | Comments (1)
category: Philosophy of Computing

July 19, 2005

Is it harder to think in the abstract than in specifics?

Glenn Reynolds (a.k.a. "Instapundit") criticizes Daniel Pink's A Whole New Mind for encouraging people, perhaps indirectly, to seek out "holistic" and "right-brain" approaches to problems because such approaches seem "easier than those tiresome traditional linear approaches with all their steps, increments, and well, work." Reynolds cautions that:

[G]enius . . . has more to do with perspiration than inspiration. And while our workplaces may be too unfriendly to right brain thinking, they're a lot friendlier than they used to be. . . . In fact, it's arguable that most business management could benefit from a more traditional approach to balance sheets and bottom lines: More thinking inside the Income Statement, and less effort to think "outside the box."

I think part of Reynolds' criticism stems from a problem with Pink's distinction between "logical" and "holistic" modes of thought. I've said before that I think Pink's analysis is insightful and well worth reading, but this distinction has limitations.

Consider instead a different distinction, that between thinking at different levels of abstraction (see previous posting). Imagine an engineer faced with the problem of designing an electronic calculator. She might start with low-level electronic components, such as resistors and capacitors, and attempt to combine them together into a calculator. This would require a detailed understanding of circuit design at a low level of abstraction (i.e., a high level of specificity).

If, however, the engineer had available existing components for adding, multiplying, and performing other arithmetic functions, she could design a calculator by combining those existing components together. She might not need to know anything about the internal guts (e.g., resistors and capacitors) of the components she used. This would require an understanding of circuit design at a higher level of abstraction.

Finally, if the engineer had access to an existing electronic calculator, she would not need to know anything about circuit design. But imagine that she programs the calculator to not only perform arithmetic, but also to solve equations. This would require a yet more abstract understanding of mathematics and programming.
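
The same progression can be sketched in code. In this hypothetical example (mine, not from the original post), addition is first assembled painstakingly at a low level of abstraction from NAND gates, and then simply invoked as an existing component one level up:

```python
def nand(a, b):
    """The low-level building block, like a single circuit component."""
    return 1 - (a & b)

def xor(a, b):
    """XOR composed from four NAND gates."""
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def full_adder(a, b, carry):
    """One bit of addition, designed at the 'resistors and capacitors' level:
    sum = a XOR b XOR carry; carry-out = (a AND b) OR (carry AND (a XOR b))."""
    s = xor(xor(a, b), carry)
    c = nand(nand(a, b), nand(carry, xor(a, b)))
    return s, c

def add_low_level(x, y, width=8):
    """Addition assembled gate by gate -- the engineer's first scenario."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

def add_high_level(x, y):
    """The same problem one level up: the 'adder component' already exists."""
    return x + y

print(add_low_level(2, 2), add_high_level(2, 2))   # 4 4
```

Both functions solve "2+2," but each is the natural unit of work at its own level of abstraction, which is the point of the paragraphs above.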

Is it any "harder" or "easier" to solve problems at any one of these levels of abstraction than at the other? Yes, but only in the sense that it is easier to make an existing calculator add 2+2 than it is to design from scratch a calculator for adding 2+2.

But that is comparing apples and oranges. Once the calculator exists, it poses problems at a higher level of abstraction that are just as complex in their own right as the problems that existed at the lower level of abstraction before the calculator was built. Science and engineering are fractal in this way; there is no loss in resolution as you move among layers of abstraction.

Let me take a stab at using this analysis to harmonize Pink's original argument and Reynolds' criticism of it. We need to use both "logical" ("left-brain") thinking and "holistic" ("right-brain") thinking at every layer of abstraction. As Pink's Abundance, Asia, and Automation make it impossible for people in the U.S. to compete at their current level of abstraction using logical thinking alone, they will either need to use holistic thinking at that level of abstraction, or move up a level, where it will be possible for them to succeed using only logical thinking until the same forces kick in at that level at some point in the future. Then the whole game starts over again, and Pink will be able to write about the Neo-Conceptual Age and its progeny, ad infinitum.

Posted by Robert at 10:22 AM | Comments (0)
category: Human Creativity | Philosophy of Computing

June 28, 2005

Patent Law and Layers of Abstraction

In his talk yesterday (see my previous posting), Drew Endy talked about how some of his work is shifting toward designing biological systems at higher layers of abstraction. The idea of "layers" (or "levels") of abstraction is deeply ingrained in the way engineers, and computer scientists in particular, think about systems. I've argued elsewhere that some of the problems with software patents could be fixed if patent law recognized the distinction between different layers of abstraction and focused its attention on the layer(s) where innovation is actually occurring in a particular field.

If you're not familiar with the concept of "layers of abstraction," imagine that you're designing a toaster. You might design the toaster as follows:

  1. Start with the very high-level (i.e., very abstract) goal of designing a machine to toast bread.
  2. Then you might identify the functions that the toaster needs to be able to perform, such as holding bread, heating bread, detecting when the bread has been toasted to the desired darkness, and ejecting the bread. Identifying these functions is called "functional design."
  3. Then you might design the physical components for performing the functions described above, such as heating elements for heating the bread and springs for ejecting the bread. Designing these components is called "physical design" and is often what we think of as "inventing."

Once you've designed and built your toaster, there is of course only a single physical toaster. But you can still think about and describe the toaster in terms of its "physical layer" (its physical components) and its "functional layer" (the functions it performs). Machines and other systems are often designed and described in terms of many such layers in addition to the physical and functional.
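
The two layers map naturally onto a familiar programming idiom. As a rough sketch (my own mapping, with hypothetical component choices, not the original post's), the functional layer is an abstract interface and the physical layer is one concrete implementation of it:

```python
from abc import ABC, abstractmethod

class Toaster(ABC):
    """Functional layer: the functions identified in step 2 above."""
    @abstractmethod
    def hold(self, bread): ...
    @abstractmethod
    def heat(self): ...
    @abstractmethod
    def is_toasted(self): ...
    @abstractmethod
    def eject(self): ...

class SpringAndCoilToaster(Toaster):
    """Physical layer: one concrete choice of components (step 3 above)."""
    def __init__(self):
        self.slot = None
        self.temperature = 20
    def hold(self, bread):
        self.slot = bread               # a slot with a retaining spring
    def heat(self):
        self.temperature += 60          # resistive heating elements
    def is_toasted(self):
        return self.temperature >= 200  # a thermostat as darkness detector
    def eject(self):
        bread, self.slot = self.slot, None
        return f"toasted {bread}"       # a spring-loaded ejector

toaster = SpringAndCoilToaster()
toaster.hold("rye")
while not toaster.is_toasted():
    toaster.heat()
print(toaster.eject())                  # toasted rye
```

A different set of components -- say, convection heating with a timer -- could implement the same functional layer unchanged, which is the sense in which the two layers are distinct.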

It is possible to innovate in any of these layers. Mechanical engineers traditionally have innovated in the physical layer by designing new physical components or new combinations of existing physical components. Computer programmers traditionally have innovated in the functional layer by writing programs consisting of new functions or new combinations of existing functions.

But physical innovation holds a special place in patent law, as the result of assumptions that are no longer valid. One example of patent law's attachment to the physical layer is section 112, paragraph 6 of the U.S. patent statute, which says that patent claims written using functional language are to be limited in scope to the specific physical structure of the invention as described elsewhere in the patent. This section hard-wires the physical layer as the upper limit on the layer of abstraction at which an invention may be claimed.

This legal distinction between the physical and functional layers made sense when the focus of most innovation was the physical layer, and when there were no computers or other means available to automatically produce physical implementations of innovations described at higher levels of abstraction. But these assumptions are no longer valid, and patent law needs to become more flexible in response by focusing protection on the layer(s) of abstraction at which innovation is actually occurring in the real world. In future postings on this site I will explore ways in which patent law could be reformed to achieve this goal. I will also consider potential problems with such reforms.

Posted by Robert at 9:31 AM | Comments (0)
category: Philosophy of Computing | Software Patents