Direct Methods for Sparse Linear Systems


Welcome to the web page for Direct Methods for Sparse Linear Systems (informally, "the sparse backslash book"), by Tim Davis.

Published by SIAM in September 2006 as part of its Fundamentals of Algorithms series. You can order the book here. Plans are underway to make the book available as an e-book, via legitimate and legal channels.

Reviews:


Video: A plenary talk at the SIAM 2006 Annual Meeting provides a concise synopsis of much of this book.
Below is a copy of CSparse, a small yet feature-rich sparse matrix package written specifically for the book. The purpose of the package is to demonstrate a wide range of sparse matrix algorithms in as concise a form as possible. CSparse is about 2,200 lines long (excluding its MATLAB interface, demo codes, and test codes), yet it contains algorithms (either asymptotically optimal or fast in practice) for all of the functions described below. A MATLAB interface is included. Note that the LU and Cholesky factorization algorithms are not as fast as UMFPACK or CHOLMOD. Other functions have performance comparable to their MATLAB equivalents (some are faster). Documentation in the code is very terse; the package is fully documented in the book. Some indication of how to call the C functions in CSparse is given by the CSparse/MATLAB/*.m help files.

To install CSparse and UFget in MATLAB, simply cd to the CSparse/MATLAB directory in MATLAB, and type cs_install in the MATLAB command window. CSparse can also be used in stand-alone applications via its C-callable interface.
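As a rough illustration of that C-callable interface, the sketch below assembles a small matrix in triplet form, converts it to compressed-column form, and solves a linear system. The functions used (cs_spalloc, cs_entry, cs_compress, cs_lusol, cs_spfree) are part of CSparse as described in the book, but treat the specific parameter choices here (the ordering option and pivot tolerance passed to cs_lusol) as illustrative assumptions, not a definitive recipe.

    #include <stdio.h>
    #include "cs.h"

    int main (void)
    {
        /* build a 3-by-3 matrix in triplet form, one entry at a time */
        cs *T = cs_spalloc (3, 3, 9, 1, 1) ;    /* allocate a triplet matrix */
        cs_entry (T, 0, 0, 4.0) ;
        cs_entry (T, 1, 1, 5.0) ;
        cs_entry (T, 2, 2, 6.0) ;
        cs_entry (T, 0, 1, 1.0) ;
        cs_entry (T, 1, 0, 1.0) ;

        cs *A = cs_compress (T) ;               /* convert to compressed-column form */
        cs_spfree (T) ;                         /* triplet form no longer needed */

        double b [3] = { 5.0, 6.0, 6.0 } ;      /* right-hand side */
        cs_lusol (1, A, b, 1.0) ;               /* solve Ax=b via sparse LU; x overwrites b */

        printf ("x = [%g %g %g]\n", b [0], b [1], b [2]) ;
        cs_spfree (A) ;
        return (0) ;
    }

Compiling this against libcsparse and running it should print a solution close to [1 1 1] for the small system above.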

The algorithms contained in CSparse have been chosen with five goals in mind:

CSparse is concise and simple to understand, and thus suitable for a textbook, yet it is also industrial-strength software suitable for production use (in fact, the cs_dmperm function now appears as dmperm in MATLAB 7.5). Industrial-strength software does not terminate your program if it runs out of memory. Thus, a CSparse routine that runs out of memory frees any memory it has allocated and returns a null pointer in place of the desired result. The application can then handle the out-of-memory condition, either by terminating or by taking some other corrective action. If given a null pointer as a required input parameter, all CSparse functions safely return a null result, thus protecting against out-of-memory conditions in the caller or in a prior call to CSparse. CSparse will neither seg-fault nor terminate if it or the caller runs out of memory.
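A minimal sketch of what this convention looks like in caller code follows. The helper name product_plus_A is hypothetical; the calls to cs_multiply, cs_add, and cs_spfree are real CSparse functions, used here only to illustrate how a null result propagates safely.

    #include "cs.h"

    /* compute E = A*B + A (both A and B assumed n-by-n), returning NULL with
       no memory leaked if CSparse runs out of memory at any step */
    cs *product_plus_A (const cs *A, const cs *B)
    {
        cs *C = cs_multiply (A, B) ;        /* C is NULL if out of memory */
        cs *E = cs_add (C, A, 1.0, 1.0) ;   /* safe even if C is NULL; E is then NULL too */
        cs_spfree (C) ;                     /* cs_spfree (NULL) is also safe */
        return (E) ;                        /* caller checks the final result for NULL */
    }

The caller only needs a single null check on the final result; intermediate failures neither crash the program nor leak memory.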

CSparse includes an exhaustive test suite, which exercises 100% of the statements in CSparse and also checks all possible ways that CSparse can run out of memory. The test suite passes all valgrind tests. Part of the purpose of CSparse is to illustrate methods used in industrial-strength software.

The one thing lacking in CSparse in this respect is extensive comments in the C code itself, detailing the purpose of each function, how it works, and how it is called (see LDL/ldl.c for a good example of this). This was by design: adding such comments would unduly lengthen the book. The comments are present, just not in the C-callable part of CSparse; they appear in the MATLAB interface, and for the C-callable interface they appear as the chapters and sections of the book. In that sense, CSparse has 226 pages of comments ... assuming you buy the book, which is a reasonable assumption if you'd like to use the code.

For a more thorough discussion of what it takes to write industrial-strength software, I highly recommend a recent tech report, Experiences of sparse direct symmetric solvers by Jennifer Scott at Rutherford Appleton Lab and Yifan Hu at Wolfram Research.


Downloads and related links:

Table of contents, and links to related software:


Click here for beta, current, and older versions of CSparse
Errata and additional exercises

References (please cite these when using this software):