About This Blog

Automating Invention is Robert Plotkin's blog on the impact of computer-automated inventing on the future of invention and patent law.

February 7, 2009

Billion-Point Computing

Scientists from the University of California at Davis and Lawrence Livermore National Laboratory have developed a computer algorithm that extracts features and patterns from extremely large and complex sets of raw data. The algorithm is optimized to run on computers with as little as two gigabytes of memory, and it addresses the growing problem of analyzing ever-larger data sets produced by simulations of real-world phenomena and by physical experiments and observations.

According to Attila Gyulassy, who led the five-year team effort, "What we've developed is a workable system of handling any data in any dimension. We expect this algorithm will become an integral part of a scientist's toolbox to answer questions about data." The algorithm works by dividing a data set into parcels of cells, analyzing each parcel, and then merging the results. This process is repeated, and data that is no longer needed is discarded at each merge step. The result is a drastic reduction in the amount of memory needed to store the results of the calculations.
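The post describes the method only in outline. The following is a minimal Python sketch of that general divide-analyze-merge pattern, not the researchers' actual algorithm: the parcel summaries here (min, max, sum, count) are made-up stand-ins for whatever features a real analysis would extract, and all function names are hypothetical. The point it illustrates is that memory use stays bounded by the parcel size, because raw parcels are discarded as soon as their summaries have been merged.

    # Illustrative sketch only -- mimics the divide/analyze/merge strategy
    # described above, not the researchers' actual algorithm.
    import numpy as np

    def analyze_parcel(parcel):
        """Extract a small summary ("features") from one parcel of raw data."""
        return {"min": parcel.min(), "max": parcel.max(),
                "sum": parcel.sum(), "count": parcel.size}

    def merge(a, b):
        """Combine two summaries; the raw parcels behind them are no longer needed."""
        return {"min": min(a["min"], b["min"]),
                "max": max(a["max"], b["max"]),
                "sum": a["sum"] + b["sum"],
                "count": a["count"] + b["count"]}

    def analyze_stream(parcels):
        """Process parcels one at a time, keeping only a running summary in memory."""
        result = None
        for parcel in parcels:
            summary = analyze_parcel(parcel)
            result = summary if result is None else merge(result, summary)
            # The parcel goes out of scope here, so its memory can be reclaimed.
        return result

    if __name__ == "__main__":
        # Simulate a very large data set as a stream of small parcels,
        # so the whole set never has to fit in memory at once.
        rng = np.random.default_rng(0)
        parcels = (rng.standard_normal(1_000_000) for _ in range(100))
        print(analyze_stream(parcels))

Because each merge produces another summary of the same small size, the same idea extends to merging summaries hierarchically across many machines or many passes, which is what makes this kind of approach workable for billion-point data sets on modest hardware.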

Posted by BlogAuthor1 at February 7, 2009 5:41 PM
category: Philosophy of Computing
