06 December 2008

Making LaTeX builds faster

I do most of my home computing on a first-generation Asus Eee -- with the seven-inch screen and all (though it's hooked up to a big monitor and ergonomic keyboard -- I'd go nuts otherwise!). The little Eee is terrifically portable (only 2.2 lbs, and fits easily with fully open screen on an airplane tray) but is a bit underpowered -- mine has an Intel Celeron processor overclocked to 900 MHz, and presumably the memory bandwidth is much less than that of my workstation at work. I notice especially how underpowered the Eee is when I typeset documents with LaTeX. Of course, making the most of underpowered hardware is a great nerd exercise ;-) so I'm posting some tips about how to do that here.


The first trick I tried was to precompile the document preamble. This saved a little bit of time but not much. Plus, the binary format of the precompiled preamble is not portable between different versions of the LaTeX toolchain, even when running under the same OS and brand of CPU. I left it in just to save 0.4 or so seconds per document pass.
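For concreteness, the recipe looks roughly like this (a sketch only: preamble.tex holds just the preamble, mydoc.tex holds just the body starting at \begin{document}, both names are placeholders, and the exact flags can vary between TeX distributions):

$ pdflatex -ini -jobname=preamble "&pdflatex preamble.tex\dump"
$ pdflatex -fmt preamble mydoc.tex

The first command dumps the processed preamble into a format file (preamble.fmt); the second loads that format instead of re-reading the preamble, then typesets the body.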


Precompiling the preamble didn't save much time, so it seemed then that processing the (100-page) document was taking most of the time. This made me think about the TeX build process. Prof. Knuth considered a build an interactive process: TeX runs, gives some warnings about overly long lines (with locations so you can correct them later), and on an error stops with a prompt for interactive editing. This was probably because at the time Knuth developed TeX, building the document was a long computation. One would want to interact with it in midstream to correct errors, rather than breaking off a build, editing, and starting the build over again. This is partly why it's so much trouble to write a good Makefile for TeX documents -- Knuth set the tools to give you a lot of feedback by default, and you're supposed to read the feedback to tell whether you need to let a tool make another pass over the document. On a fast computer, all the informative messages and warnings produced by the TeX toolchain zip by too fast to see. On my Eee, however, they don't -- which made me wonder about the expense of terminal output itself. TeX has an option ("-interaction batchmode") to turn off all the informative output. Using this brought the time for a complete build (pdflatex once, bibtex once, then pdflatex twice) down from 30 seconds to 6 seconds!
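Spelled out, the complete batch build is just the following (with mydoc standing in for the real file name):

$ pdflatex -interaction batchmode mydoc
$ bibtex mydoc
$ pdflatex -interaction batchmode mydoc
$ pdflatex -interaction batchmode mydoc

Nothing is lost, either: all the warnings and errors still end up in mydoc.log, so you can read them after the run instead of watching them scroll by.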


Another optimization is taking advantage of the \include and \includeonly commands before the document is finished. The \include command generates a .aux file for each included file. If you list all the included files in \includeonly once and do a complete build, you'll generate .aux files for all of them, which gives LaTeX cross-references and page locations for the whole document. Then, you can remove files from the \includeonly list if you're not working on them, and only build the parts of the document in which you're interested. This technique also preserves page numbers, equation numbers, and other reference-related things. It brought a single pdflatex pass down from 3.5 seconds to 2.5 seconds.
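A sketch of what I mean (the chapter file names here are invented): the main file's preamble lists every chapter once for the full build,

\includeonly{intro,theory,experiments}  % full build: every chapter listed
\begin{document}
\include{intro}
\include{theory}
\include{experiments}
\end{document}

and then, while working on just one chapter, the first line shrinks to \includeonly{theory}. The untouched chapters keep their existing .aux files, so cross-references and page numbers stay consistent.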


The \include command has a side effect: it inserts a page break (a \clearpage) before and after the included file. (Presumably this lets LaTeX compute page numbers without needing to regenerate the .aux file.) For shorter documents, I prefer to use \input to add files, because it doesn't create page breaks. Since I'm writing a longer document with chapters, and LaTeX starts each chapter on a new page anyway, I can use \include for "chapter files" and, within each chapter file, use \input to pull in section files. One website I found recommends solving the page-break problem by using \include when editing and \input when publishing, but for me, using \include for chapters reflects how I work on the document anyway.
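Schematically (again with invented file names), the main file does

\include{chapter-intro}
\include{chapter-methods}

and each chapter file starts its chapter and pulls in its sections without extra page breaks:

% chapter-methods.tex
\chapter{Methods}
\input{methods-setup}
\input{methods-results}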


LaTeX is missing some features that could make a build go a lot faster. First, building a document doesn't work like building a C library: compilation of individual files isn't independent of the whole build. I have a long document and I've taken the care to break it up into modular files; this helps with revision control, but LaTeX can't exploit this very effectively (other than with the \includeonly trick). Second, LaTeX must pass several times over the same document in order to get references right; in Knuth's time, this probably was meant to save memory, but now it would be nice to use more memory in order to save some passes.

06 October 2008

Copying by hand

A couple weeks ago, I noticed in a BoingBoing post the following quote:

Before literacy, we were mere listeners. We heard stories read to us as a group. After the printing press, we were elevating to individuals, each with our own, acknowledged perspective on what we read.

I made a comment (as "hilbertastronaut"), and I feel like expanding on
it a bit more.


The writer argues that the printing press elevated us (Westerners,
perhaps?) from mere listeners to individuals with opinions. If
anything, the opposite might be true. Back when the only way to
propagate information was to copy it by hand, it was standard practice
to add commentary to the margins of the document. This could include
clarifications or even personal reflections. One sees this in Western
monastic manuscripts as well as Chinese brush paintings (where
standard practice was to indicate receipt of the painting by marking
it with one's personal stamp, and then perhaps write one's thoughts on
the piece in the margin). In contrast, the printing press (and the
Reformation in some sense) gave people the idea of a book as an
official, received text. Written comments have a lesser status than
the printed text. For example, if one buys a used textbook at a
college bookstore, one generally hopes that it hasn't been written on
much. Before the printing press, the commentary was often just as
valuable as the main text itself, or even more so.


Copying other kinds of information by hand, not just text, has the
same effect. For example, when J. S. Bach copied Italian
concerti, he added inner voices and ornamentation, making the concerti
into pieces universally recognized as his own compositions (and
arguably more interesting than the originals!). Bach treated copying
both as an end (to get his hands on other people's music), and as a
means (to improve his compositional skills). Even today, mathematics
teachers consider copying out and working through proofs an important
part of learning mathematics. This is because mathematics, like music
composition, is not about memorization of facts but about developing
one's creative problem-solving ability according to a combination of
rules and aesthetic principles. Copying a proof and understanding
each step is a guided re-creation of the original author's work.
Furthermore, creative mathematical thinkers in this process of
re-creation usually have new things to say, like clarifications or
generalizations, which they "write in the margins" and use in their
research -- just like Bach wrote inner voices and ornamentations "in
the margins" to produce new compositions.


My sophomore high school English teacher handed out red pens on
the first day of class, and exhorted us, even required us to
write in our books with them. At the time, it seemed ridiculous to
ruin books which could be reused for the next class. Pencil
annotations could at least be erased. Now I understand that using a
red pen elevates the status of the handwritten text, to be more
visible and more permanent even than the printed text. It makes the
reader's commentary just as valuable, if not more so, than the
original document. Whether my commentary really was more valuable
than what I read at the time is doubtful, but at least the long-term
lesson of the value of "manual" commentary stuck in my head.

06 August 2008

ImageMagick crops your white space!

ImageMagick is a fabulous command-line tool for image processing. I use it a lot for converting between different image formats, which it does with no fuss whatsoever -- as the following PNG to PDF example illustrates:

$ convert in.png out.pdf

ImageMagick can do much, much more than conversions. Today I learned the following trick from an
e-mail list discussion:

$ convert -trim in.pdf out.pdf

which trims all the white space from around an image. It's a great trick for inserting Matlab plots into papers! I used to do this by hand with an image editing program like the GIMP; it's great to know that I don't have to fire up such a massive tool in order to accomplish this simple task. (The GIMP is a wonderful program, incidentally, but starting it up just to crop some white space is a task beneath its mighty powers.)

05 August 2008

New version of ECL is out

A new version of Embeddable Common Lisp (ECL) is out!!! I use ECL in my projects to support Lisp as an embedded scripting language in large C-based projects. ECL's chief developer (Juan Jose Garcia Ripoll) is amazingly responsive, which makes the library a pleasure to use. Yay ECL!!!

30 June 2008

(Pseudo)random numbers matter


The newly released LAPACK Working Note #206 gives yet another reason why generating good pseudorandom numbers matters:


In May 2007, a large high-performance computer manufacturer ran a twenty-hour-long High Performance Linpack benchmark. The run failed with the following output:

|| A x - b ||_oo / ( eps * ||A||_1 * N ) = 9.22e+94 ...... FAILED


What happened was that the benchmark's matrix generator uses a lame linear congruential pseudorandom number generator, which causes generated matrices to have repeated columns for certain unfortunate choices of matrix dimension. This of course makes one wonder why the generator doesn't just make a matrix which is known to be invertible, say, by generating a sufficiently nonzero diagonal matrix and hitting it on both sides with orthogonal transforms until the zeros are filled in. Regardless, the bug meant 20 hours of very expensive, intensely power-consuming supercomputer time were wasted on computing the wrong answer to a problem which at such sizes very few people need to solve. So, random numbers do matter ;-)
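In symbols, the sort of construction I have in mind (my own sketch, not something taken from the working note) would be

\[
  A \;=\; Q_1 \,\mathrm{diag}(d_1, \dots, d_N)\, Q_2^T,
  \qquad |d_i| \ge \delta > 0,
\]

with Q_1 and Q_2 orthogonal, so that A is nonsingular by construction and its 2-norm condition number, max_i |d_i| / min_i |d_i|, is entirely under the generator's control.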

26 June 2008

Contemplation leads to messes

I like to make espresso using one of these gadgets. When you press out all the water, it leaves a "puck" of compressed grounds, which you can pop out straight into the garbage or compost just by pushing on the handle. I was looking at the puck this morning and saw some lovely patterns made by different layers of grounds. One thin layer in the puck appeared more compressed than the other layers, but this more compact layer wasn't a straightforward horizontal cross section -- it had a ridged topography like one sees in the sedimentary rock hills around here. It made me wonder about the bulk properties of solid particles, about which I had seen an interesting presentation a few months before, on simulating such materials.



As I was holding the puck in my hand and examining this layer from all angles, the puck suddenly broke into its constituent coffee grounds and made a huge mess all over the floor. I realized then that I had fallen into the nerd's usual trap of neglecting practical matters for the sake of contemplating lovely abstractions ;-)

13 May 2008

Verifying identity: Existence and uniqueness

Some friends of mine have been playing an online diplomacy and military simulation game, in which each player represents an independent country. One of the biggest problems in the game is "multis" -- multiple accounts created by one person, who uses the extra nations to magnify his/her power in the game. The rules prohibit this practice, but it can be hard to detect. Restricting each IP address to a single account is annoying (it prevents multiple people from playing from the same computer), but it's a start. The underlying problem is how to establish uniqueness of identity: Are you the only person who claims to be you?

Another identity problem is that of verification: Are you the person whom you claim to be? This was a problem faced by Daniel Plainview, the protagonist of the 2007 movie There Will Be Blood. Was the man claiming to be his brother Henry actually that person? Daniel is understandably mistrustful. Ultimately the question is resolved accidentally, by the exchange of a bit of personal information -- an oblique reference to an inside joke. When verifying identity, one has to take care not to give away information when asking for it: for example, if you give a bank your social security number, the bank could then use it to masquerade as you. Some recent security research has addressed the question of how to verify identity without giving away a secret. Daniel Plainview's homegrown solution is just as effective: inside jokes rely not only on objective information, but on a particular emotional response which would be very hard for either a machine or an unrelated human to mimic. In that sense, they are even better than CAPTCHAs: They give away very little information and are very difficult for unauthorized agents to solve.

Uniqueness is an easy problem to solve in person, aside from comedy sketches involving twins. This is because human "duplicates" (genetically identical multiple births) are difficult to "make." In contrast, it's very hard to solve remotely and electronically. If you can fake a verification test, then you can break a uniqueness test. So verification and uniqueness go hand-in-hand.

Oftentimes in math, there's a fruitful tension between existence and uniqueness. When one wishes to prove that "there exists a unique object," one generally proves existence and uniqueness separately. (In this context, "uniqueness" means that if such a thing does exist, it must be unique. So uniqueness by itself doesn't necessarily imply existence, nor does existence by itself necessarily imply uniqueness.) I'm curious whether this tension could be helpful in the field of verifying identity. Answering the question "Am I Jane Doe?" is relatively easy, but the question "If I am really Jane Doe as I claim to be, then there is only one such person" doesn't even seem to be the right question to answer. Furthermore, it shouldn't be necessary to reveal your true identity in order to play an online game. Can uniqueness be solved without verification?
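For reference, the textbook decomposition of "there exists a unique x with property P" is

\[
  \exists!\,x\; P(x)
  \;\Longleftrightarrow\;
  \underbrace{\exists x\; P(x)}_{\text{existence}}
  \;\wedge\;
  \underbrace{\forall y\,\forall z\;\bigl( P(y) \wedge P(z) \Rightarrow y = z \bigr)}_{\text{uniqueness}} ,
\]

and the second conjunct on its own says nothing about whether such an x actually exists.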

07 May 2008

Two new LAPACK Working Notes have been published!

"LAPACK Working Note 199: Regular Full Packed Format for Cholesky's Algorithm: Factorization, Solution and Inversion."
by Fred G. Gustavson, Jerzy Wasniewski, and Jack J. Dongarra
UT-CS-08-614, April 28, 2008.

"LAPACK Working Note 200: Some Issues in Dense Linear Algebra for Multicore and Special Purpose Architectures."
by Marc Baboulin, Jack Dongarra and Stanimire Tomov
UT-CS-08-615, May 6, 2008.

You can download them at the above link.

28 April 2008

Microsoft comes over to play

Monday and Tuesday this week: a new (three months old!) numerical libraries group from Microsoft came over to speak with us linear algebra hackers and parallel performance tuners. Today we did most of the talking, but we learned something from them: They aren't from MS Research, and they aim to do applied research (not "blue-sky research," as the group's manager put it) with a 2-5 year horizon, and then transition successful, desired prototypes out into a production group. They've been incubating some scientific libraries for about two years, and they want to push that work out into a core scientific library (sparse and dense linear algebra for now). Target hardware is single-node multicore -- no network stuff -- and they are especially interested in higher-level interface design for languages like C++, C#, and IronPython, built on both native and managed code. ("Managed code" means code running on heavyweight runtimes like the JVM and .NET -- .NET is a big thing for them in improving programmer productivity, and these runtimes have a growing ecosystem of friendly higher-level languages.) Their group is pretty small now, but they are actively hiring (in case any of you are looking for a job in Redmond-world), and they have some bright folks on their team.

One of our biggest questions was, and probably still is, "why?" -- why wouldn't third-party applications suffice? Then again, one could ask the same of a company like Intel -- why do they need a large in-house group of HPC hackers? However, there's a difference: MS doesn't have the personpower or expertise to contribute as much to, say, ScaLAPACK development as Intel has, nor do they intend to grow their group that much. This team seems to be mainly focused on interface design: how to wrap efficient but hard-to-use scientific codes so that coders in the MS world can exploit them. In that context, my advisor showed one of his favorite slides: three different interfaces to a linear solver (solve Ax = b for the vector x, where b is a known vector and A is a matrix). One is Matlab's: "A \ b". The other two are two possible invocations of ScaLAPACK's parallel linear solver. The second of these has nearly twenty obscurely named arguments relating to the data distribution (it's a parallel distributed-memory routine) and to iterative refinement -- clearly not what you want to give to a 22-year-old n00b fresh out of an undergrad CS curriculum who knows barely enough math to balance a checkbook. Ultimately, MS has to design programmer interfaces for these people, as well as for gurus -- which is something that the gurus often forget.

Another reason perhaps for the "why" is that high-performance, mathematical calculations are a compelling reason to buy new computing hardware and software. There are interesting performance-bound consumer applications being developed, most of which have some kind of math kernel(s) at their core. MS presumably wants to get in on that action, especially as it is starting to lose out on the shrink-wrapped OS-and-Office software market as well as the Web 2.0 / search market.

It's interesting to watch a large, bureaucratic company like Microsoft struggling to evolve. IBM managed this sort of transition pretty well, from what I understand. They still sell mainframes (and supercomputers!), but they also sell services, and maintain a huge and successful research effort on many fronts. MS Research is also a powerhouse, but somehow we don't see the research transitioning into products, or even driving the brand, as it does in IBM's case (think about the Kasparov-defeating chess computer Deep Blue, for example). Maybe it does drive the products, but somehow the marketing has failed to convey this. I kind of feel for them, just like the "Mac guy" feels for the "PC guy" in Apple's ad series: They have to struggle not only to command a new market, but also to reinvent their image and command a brand.

17 April 2008

Engineering applications of general relativity?

The first engineering application of general relativity: GPS. Apparently, global positioning requires accurate time computations, and once you can measure time to sufficient accuracy, you need to account for time dilation due to gravity.
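A back-of-the-envelope version of the effect (my own rough numbers, not taken from any particular source): the fractional rate difference between a clock in orbit at radius r and one on the Earth's surface is roughly

\[
  \frac{\Delta f}{f}
  \;\approx\;
  \frac{G M_\oplus}{c^2}\left( \frac{1}{R_\oplus} - \frac{1}{r} \right)
  \;-\; \frac{v^2}{2 c^2}
  \;\approx\; 5 \times 10^{-10} \;-\; 1 \times 10^{-10}
\]

for GPS orbits, which accumulates to a clock error of a few tens of microseconds per day if uncorrected -- enormous compared with the nanosecond-level timing that positioning needs, since light covers only about 30 cm per nanosecond.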

(Originally posted 16 April 2008.)

Preconditioners as eigenvalue clusterers

At the Copper Mountain Iterative Methods conference this year, I heard a really good explanation of how preconditioners work, from a PhD student at the University of Manchester. (Thanks Richard!) He explained to me that one can think of a Krylov subspace method as polynomial interpolation of the spectrum of the matrix. Each time you do a matrix-vector multiply, you increase the polynomial's degree by one. (This gets more clear if you've seen the proof that conjugate gradients is optimal in the norm induced by the matrix.)

If the matrix's eigenvalues are evenly distributed, it will get harder and harder to interpolate the spectrum, because of the Gibbs phenomenon. In contrast, if one eigenvalue is separated from the others, it's easy to interpolate that part of the spectrum accurately and "pick off" the separated eigenvalue. This is called "deflation." After this point, the Krylov method converges pretty much as if the separated eigenvalue wasn't there. This makes the goal of a good preconditioner clear: It should separate and cluster the eigenvalues (away from the origin), so that the Krylov method can pick off the clusters.
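To make the polynomial picture a bit more concrete (these are standard textbook statements, not Richard's exact words): after k steps a Krylov method's residual is a degree-k polynomial in the matrix applied to the initial residual,

\[
  x_k \in x_0 + \mathcal{K}_k(A, r_0), \qquad
  r_k = p_k(A)\, r_0 \quad \text{with} \quad p_k(0) = 1,
\]

and for conjugate gradients on a symmetric positive definite matrix the classical bound is

\[
  \| x - x_k \|_A \;\le\; 2 \left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^{k} \| x - x_0 \|_A ,
  \qquad \kappa = \frac{\lambda_{\max}}{\lambda_{\min}} ,
\]

so a preconditioner that clusters the eigenvalues away from the origin is exactly what lets a low-degree polynomial be small on the whole spectrum at once.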

If I may also engage in some speculation, this perhaps also explains one reason why block preconditioners are popular in multiphysics solvers: they group together eigenmodes that are on similar scales, and separate unrelated eigenmodes. Of course, block preconditioning is the mathematical equivalent of encapsulation in the software engineering world: it lets you solve one problem at a time and then stitch together a combined solution out of the components. Mushing together all the subproblems into a single big problem loses the structure, whereas sparse linear algebra is all about exploiting structure.
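Schematically (my own illustration, not anything presented at the conference): for a two-physics problem one drops the coupling blocks from the preconditioner,

\[
  A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix},
  \qquad
  M = \begin{pmatrix} \widehat{A}_{11} & 0 \\ 0 & \widehat{A}_{22} \end{pmatrix},
\]

where each diagonal block corresponds to one physics and \widehat{A}_{ii} is some cheap approximation to A_{ii} (an incomplete factorization, a multigrid cycle, and so on).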

(Originally posted 16 April 2008.)

Alchemy: pride and wonder

The website of NOVA, a PBS science program, has a neat article which decodes a page from Isaac Newton's alchemy notebooks, and discusses Newton's interest in alchemy in its historical context. Alchemy is a fascinating subject because it shows the tight links between science, philosophy, and religion at the very beginning of what we consider "modern" science, and also because it illustrates the tension between rationalization (wanting to understand or control the universe by discovering or imposing universal laws that govern it) and wonder (non-rational awe of nature). Newton himself shows both tendencies. The article mentions how he secretly called himself "Jehovah the Holy One," and William Blake depicts both Newton and God bearing the compass to measure and bound Creation. Yet, Newton saw his discoveries as if he were just a child picking up a pretty shell on the beach and barely intuiting the existence of countless more in the ocean before him.

Alchemy's all-encompassing vision arguably failed to persist into its successor sciences. By 1800 or so, Friedrich von Hardenberg (a.k.a. Novalis, a geologist and poet) was already condemning the "Scheidekunst" (literally, the "art of separation") of the sciences: how they were dissected cadaver-like into subfields, without regard for the living interactions between fields. In some sense, modern physics with quantum mechanics inherited the idea of reaching for a universal theory: "grand unified theory" and "quantum gravity" reveal this desire to explain all interactions at both tiny and grand scales. They also attract their share of superstition and quackery (e.g., the use of "quantum entanglement between doctor and patient" to explain how homeopathic medicine supposedly works), just as alchemy did in its time. Yet Novalis' and Newton's pursuit of universal natural philosophy as a spiritual activity shows the importance of non-rational awe, wonder, and curiosity in the sciences. Even such an avowed atheist as Carl Sagan could wax eloquent at the "billions and billions" of stars and speak tenderly of our planet as a "pale blue dot." (I've never heard an atheist write more reverently than he!)

(Originally posted 31 March 2008.)

(Re)new(ed) blog

Welcome to my (re)new(ed) blog!

I deleted my old blog in order to start fresh. This new version will only have "nerdy" posts -- i.e., about the sciences or related recreational interests. I will spawn off different blog(s) for other topics, in particular, about music, religion, and politics.