28 April 2008

Microsoft comes over to play

Monday and Tuesday this week: a new (three months old!) numerical libraries group from Microsoft came over to speak with us linear algebra hackers and parallel performance tuners. Today we did most of the talking, but we learned something from them: they aren't from MS Research, and they aim to do applied research (not "blue-sky research," as the group's manager put it) with a 2-5 year horizon, then transition successful, desired prototypes out to a production group. They've been incubating some scientific libraries for about two years, and they want to push that work out into a core scientific library (sparse and dense linear algebra for now). Target hardware is single-node multicore -- no network stuff -- and they are especially interested in higher-level interface design for languages like C++, C#, and IronPython, built on both native and managed code. ("Managed code" means code running on heavyweight runtimes like the JVM and .NET -- .NET is a big thing for them for improving programmer productivity, and these runtimes have a growing ecosystem of friendly higher-level languages.) Their group is pretty small now, but they are actively hiring (in case any of you are looking for a job in Redmond-world), and they have some bright folks on their team.

One of our biggest questions was, and probably still is, "why?" -- why wouldn't third-party libraries suffice? Then again, one could ask the same of a company like Intel -- why do they need a large in-house group of HPC hackers? There's a difference, though: MS doesn't have the personpower or expertise to contribute as much to, say, ScaLAPACK development as Intel does, nor do they intend to grow their group that much. This team seems to be mainly focused on interface design: how to wrap efficient but hard-to-use scientific codes so that coders in the MS world can exploit them. In that context, my advisor showed one of his favorite slides: three different interfaces to a linear solver (solve Ax = b for the vector x, where b is a known vector and A is a matrix). One is Matlab's: "A \ b". The other two are possible invocations of ScaLAPACK's parallel linear solver. The second of these has nearly twenty obscurely named arguments relating to the data distribution (it's a parallel distributed-memory routine) and to iterative refinement -- clearly not what you want to hand a 22-year-old n00b fresh out of an undergrad CS curriculum who knows barely enough math to balance a checkbook. Ultimately, MS has to design programmer interfaces for these people, as well as for gurus -- which is something the gurus often forget.
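To make the contrast concrete, here is a small sketch using NumPy's solver as a stand-in for the Matlab-style interface. The ScaLAPACK call shape in the comment is a rough paraphrase from memory, not a literal signature -- consult the ScaLAPACK documentation for the real thing.

```python
import numpy as np

# High-level interface: the whole "solve Ax = b" problem is one call,
# analogous to Matlab's "A \ b".  Factorization, pivoting, and storage
# details are all hidden from the caller.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)  # well-conditioned test matrix
b = rng.standard_normal(5)

x = np.linalg.solve(A, b)
print("max residual:", np.max(np.abs(A @ x - b)))

# By contrast, a distributed-memory routine like ScaLAPACK's solver makes
# the caller describe the block-cyclic data layout explicitly (array
# descriptors, local index offsets, pivot workspace, ...), which is where
# the argument count balloons.  Roughly (not a literal signature):
#
#   pdgesv(n, nrhs, a_local, ia, ja, desc_a, ipiv, b_local, ib, jb, desc_b, info)
```

The point of the slide, in code form: the top call is the entire user-facing surface area of the first interface; the comment below it is the surface area of the third.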

Another possible answer to the "why" is that high-performance mathematical calculations are a compelling reason to buy new computing hardware and software. There are interesting performance-bound consumer applications being developed, most of which have some kind of math kernel(s) at their core. MS presumably wants to get in on that action, especially as it is starting to lose out on the shrink-wrapped OS-and-Office software market as well as the Web 2.0 / search market.

It's interesting to watch a large, bureaucratic company like Microsoft struggling to evolve. IBM managed this sort of transition pretty well, from what I understand. They still sell mainframes (and supercomputers!), but they also sell services, and maintain a huge and successful research effort on many fronts. MS Research is also a powerhouse, but somehow we don't see the research transitioning into products, or even driving the brand, as it does in IBM's case (think about the Kasparov-defeating chess computer Deep Blue, for example). Maybe it does drive the products, but somehow the marketing has failed to convey this. I kind of feel for them, just like the "Mac guy" feels for the "PC guy" in Apple's ad series: They have to struggle not only to command a new market, but also to reinvent their image and command a brand.

17 April 2008

Engineering applications of general relativity?

The first engineering application of general relativity: GPS. Global positioning requires extremely accurate time measurements, and once clocks are that accurate, you have to account for relativistic effects: the satellites' orbital speed slows their clocks (special relativity), while sitting higher in Earth's gravity well speeds them up (general relativity).
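A back-of-the-envelope sketch of the two effects, using standard published constants (the orbit radius is approximate):

```python
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8         # speed of light, m/s
R_earth = 6.371e6        # mean Earth radius, m
r_orbit = 2.6560e7       # GPS orbit radius (~20,200 km altitude), m

# General relativity: the satellite clock sits higher in the gravity
# well, so it ticks faster relative to a ground clock.
grav_rate = GM / c**2 * (1.0 / R_earth - 1.0 / r_orbit)

# Special relativity: orbital speed makes the satellite clock tick slower.
v = math.sqrt(GM / r_orbit)   # circular-orbit speed
vel_rate = v**2 / (2.0 * c**2)

seconds_per_day = 86400.0
net_us_per_day = (grav_rate - vel_rate) * seconds_per_day * 1e6
print(f"net satellite clock drift: {net_us_per_day:+.1f} microseconds/day")
```

This reproduces the commonly quoted net drift of roughly +38 microseconds per day -- tiny, but GPS positioning needs nanosecond-level timing, so an uncorrected drift of that size would wreck the position fix within minutes.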

(Originally posted 16 April 2008.)

Preconditioners as eigenvalue clusterers

At the Copper Mountain Iterative Methods conference this year, I heard a really good explanation of how preconditioners work, from a PhD student at the University of Manchester. (Thanks Richard!) He explained to me that one can think of a Krylov subspace method as polynomial approximation of the spectrum of the matrix. Each matrix-vector multiply increases the attainable polynomial degree by one. (This becomes clearer if you've seen the proof that conjugate gradients is optimal in the norm induced by the matrix.)

If the matrix's eigenvalues are spread evenly across a wide interval, approximating the spectrum gets harder and harder: the polynomial must be small over the entire interval while still taking the value one at the origin, which forces its degree up. In contrast, if one eigenvalue is separated from the others, it's easy to approximate that part of the spectrum accurately and "pick off" the separated eigenvalue. This is called "deflation." After this point, the Krylov method converges pretty much as if the separated eigenvalue weren't there. This makes the goal of a good preconditioner clear: it should separate and cluster the eigenvalues (away from the origin), so that the Krylov method can pick off the clusters.
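The effect is easy to see numerically. Here's a sketch: a plain hand-rolled conjugate gradients loop applied to diagonal test matrices with the same condition number but very different eigenvalue distributions. The sizes, tolerance, and cluster widths are arbitrary choices for illustration.

```python
import numpy as np

def cg_iterations(diag, tol=1e-10, maxiter=1000):
    """Run plain CG on the diagonal SPD system diag(d) x = b and
    return the iteration count needed to reach the tolerance."""
    n = len(diag)
    b = np.ones(n)
    x = np.zeros(n)
    r = b - diag * x
    p = r.copy()
    rs = r @ r
    for k in range(1, maxiter + 1):
        Ap = diag * p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            return k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return maxiter

n = 200
rng = np.random.default_rng(1)
# Same condition number (~100) in both cases, different distributions:
spread = np.linspace(1.0, 100.0, n)            # eigenvalues fill [1, 100]
clustered = np.concatenate([                   # two tight clusters
    1.0 + 1e-3 * rng.random(n // 2),
    100.0 + 1e-3 * rng.random(n // 2),
])

it_spread = cg_iterations(spread)
it_clustered = cg_iterations(clustered)
print("spread spectrum:   ", it_spread, "iterations")
print("clustered spectrum:", it_clustered, "iterations")
```

With two tight clusters, a degree-two polynomial with roots near each cluster already makes the residual tiny, so CG finishes in a handful of iterations; the evenly filled spectrum takes far longer despite the identical condition number.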

If I may also engage in some speculation, this perhaps also explains one reason why block preconditioners are popular in multiphysics solvers: they group together eigenmodes that are on similar scales, and separate unrelated eigenmodes. Of course, block preconditioning is the mathematical equivalent of encapsulation in the software engineering world: it lets you solve one problem at a time and then stitch together a combined solution out of the components. Mushing together all the subproblems into a single big problem loses the structure, whereas sparse linear algebra is all about exploiting structure.
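As a toy illustration of the block idea (my own construction, not anything from the conference): build a symmetric two-"physics" system whose blocks live at wildly different scales with weak coupling, and precondition with the block-diagonal inverse. Block Jacobi maps each block's spectrum near one, collapsing the scale mismatch.

```python
import numpy as np

rng = np.random.default_rng(2)

def spd_block(n, scale):
    # Random well-conditioned SPD block at a given scale.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return scale * (Q @ np.diag(1.0 + rng.random(n)) @ Q.T)

n = 20
A11 = spd_block(n, 1.0)     # "physics 1": eigenvalues near 1
A22 = spd_block(n, 1.0e4)   # "physics 2": eigenvalues near 1e4
C = 0.01 * rng.standard_normal((n, n))  # weak coupling between the fields

A = np.block([[A11, C], [C.T, A22]])

# Block-Jacobi preconditioner: invert each single-physics block,
# ignore the coupling.
Z = np.zeros((n, n))
M_inv = np.block([[np.linalg.inv(A11), Z], [Z, np.linalg.inv(A22)]])

cond_A = np.linalg.cond(A)
cond_MA = np.linalg.cond(M_inv @ A)
print("cond(A)      =", cond_A)
print("cond(M^-1 A) =", cond_MA)
```

The preconditioned matrix is close to the identity plus a small coupling perturbation, so its singular values sit in a tight cluster near one -- exactly the shape a Krylov method likes.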

(Originally posted 16 April 2008.)

Alchemy: pride and wonder

The website of NOVA, a PBS science program, has a neat article which decodes a page from Isaac Newton's alchemy notebooks, and discusses Newton's interest in alchemy in its historical context. Alchemy is a fascinating subject because it shows the tight links between science, philosophy, and religion at the very beginning of what we consider "modern" science, and also because it illustrates the tension between rationalization (wanting to understand or control the universe by discovering or imposing universal laws that govern it) and wonder (non-rational awe of nature). Newton himself shows both tendencies. The article mentions how he secretly called himself "Jehovah the Holy One," and William Blake depicts both Newton and God bearing the compass to measure and bound Creation. Yet, Newton saw his discoveries as if he were just a child picking up a pretty shell on the beach and barely intuiting the existence of countless more in the ocean before him.

Alchemy's all-encompassing vision arguably failed to persist into its successor sciences. By 1800 or so, Friedrich von Hardenberg (a.k.a. Novalis, a geologist and poet) was already condemning the "Scheidekunst" (literally, the "art of separating") of the sciences: how they were dissected cadaver-like into subfields, without regard for the living interactions between fields. In some sense, modern physics with quantum mechanics inherited the idea of reaching for a universal theory: "grand unified theory" and "quantum gravity" reveal this desire to explain all interactions at both tiny and grand scales. They also attract their share of superstition and quackery (e.g., invoking "quantum entanglement between doctor and patient" to explain how homeopathic medicine supposedly works), just as alchemy did in its time. Yet Novalis' and Newton's pursuit of universal natural philosophy as a spiritual activity shows the importance of non-rational awe, wonder, and curiosity in the sciences. Even such an avowed atheist as Carl Sagan could wax eloquent at the "billions and billions" of stars and speak tenderly of our planet as a "pale blue dot." (I've never heard an atheist write more reverently than he!)

(Originally posted 31 March 2008.)

(Re)new(ed) blog

Welcome to my (re)new(ed) blog!

I deleted my old blog in order to start fresh. This new version will only have "nerdy" posts -- i.e., about the sciences or related recreational interests. I will spawn off different blog(s) for other topics, in particular, about music, religion, and politics.