Chapter 5
Results
The results of the MATLAB experiments were as expected for comparisons of execution time and performance among the variations of the Bartels-Golub Method. However, the Revised Simplex Method outperformed all expectations, providing insight into the inner workings of MATLAB and perhaps even suggesting a direction for future research.
The number of flops that the Revised Simplex Method required in each problem was consistently lower than that of any other method, by a factor ranging from two to more than ten. This unexpected result suggested that the inverse of the basis was not nearly as dense as originally thought. A review of the inverse-basis densities confirmed this supposition. Although the Revised Simplex Method produced the worst inverse-basis densities of all the methods, these densities were still low enough to take advantage of sparse matrix structure to reduce time complexity.
The worst inverse-basis density produced by the Revised Simplex Method only occasionally surpassed 40%, and it never reached 55%. These sparse inverse-bases allow MATLAB to take advantage of their structure to reduce the time required by matrix multiplication. Since the matrix-inversion routine involves a significant amount of matrix multiplication, its time complexity is likewise reduced by the sparse nature of the inverse-bases. This result accounts for the significant time-complexity advantage the Revised Simplex Method held over the other methods.
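The thesis implementations were written in MATLAB; the following Python/SciPy sketch (with illustrative sizes and densities, not the thesis data) shows why a matrix at roughly 40% density still benefits from sparse storage: a sparse matrix-vector product touches only the stored nonzeros.

```python
import numpy as np
from scipy import sparse

# Build a matrix at roughly 40% density (illustrative parameters only).
rng = np.random.default_rng(0)
n = 200
dense = rng.random((n, n)) * (rng.random((n, n)) < 0.40)
S = sparse.csr_matrix(dense)

density = S.nnz / (n * n)   # fraction of stored nonzeros, ~0.40

# A sparse mat-vec performs roughly density * n**2 multiplications
# instead of the full n**2 a dense product requires.
x = rng.random(n)
y_sparse = S @ x
y_dense = dense @ x
assert np.allclose(y_sparse, y_dense)
```

The same structural saving applies inside any routine dominated by matrix multiplication, which is the effect the chapter attributes to MATLAB's sparse inverse-bases.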
Despite the sparse nature of the inverse-bases, the Revised Simplex Method still produced much higher densities than its counterpart methods. Unlike the other methods, the Revised Simplex Method maintains the basis-inverse as a matrix. As a result, any multiplication requires that the entire matrix be available in memory, so any implementation will be limited by the physically available memory. Alternatively, secondary storage or virtual memory could be used; however, both increase execution time significantly, indicating that other methods may be more suitable. For smaller problems, the Revised Simplex Method would be the quickest method.
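To make concrete why the whole basis-inverse must be resident in memory, here is a minimal Python/NumPy sketch of the pricing step of a revised simplex iteration (not the thesis code; the indices and data are hypothetical). Every reduced-cost computation is a product with the full B⁻¹.

```python
import numpy as np

# Sketch: the Revised Simplex Method keeps B^{-1} as an explicit matrix.
# Pricing forms the simplex multipliers y = c_B B^{-1}, then the reduced
# costs of the nonbasic columns -- both are products with all of B^{-1}.
def reduced_costs(B_inv, A, basic, nonbasic, c):
    y = c[basic] @ B_inv                 # simplex multipliers
    return c[nonbasic] - y @ A[:, nonbasic]

rng = np.random.default_rng(1)
m, n = 3, 6
A = rng.random((m, n))
c = rng.random(n)
basic, nonbasic = [0, 1, 2], [3, 4, 5]   # hypothetical basis split
B_inv = np.linalg.inv(A[:, basic])
d = reduced_costs(B_inv, A, basic, nonbasic, c)
```

A sanity check on the sketch: priced against themselves, the basic columns must have zero reduced cost, since c_B − (c_B B⁻¹)B = 0.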
The significant use of native MATLAB functions resulted in the Revised Simplex Method taking by far the least execution time of all the methods. Unfortunately, MATLAB has some limitations in solving very large sparse-matrix problems because of its interpretive nature. At present, compilation utilities for MATLAB still lack sparse-matrix support, so this analysis could not be applied to them.
The Bartels-Golub Method generally produced the worst flop count of all the methods. The sparse nature of the bases unfortunately did not translate into as significant a savings during the LU-factorization as it did in the basis-inversion of the Revised Simplex Method. With the extra overhead of the repeated backsolves, the Bartels-Golub Method proved to be extremely costly in terms of floating-point operations.
The upper and lower triangular factors proved to be sparser than the inverse-bases of the Revised Simplex Method by a factor of two to five. This indicates that the Bartels-Golub Method could handle larger problems in the same amount of memory. Similar to the Revised Simplex Method, though, the backsolves performed by the Bartels-Golub Method still require large amounts of data to be present in memory, so computer memory may still be a severely limiting factor.
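The backsolve pattern described above can be sketched in a few lines of Python/SciPy (an illustrative stand-in for the MATLAB implementation): the basis is factored once into triangular factors, and each system B x = b is then answered by a forward solve and a back solve rather than by forming B⁻¹.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Sketch of the Bartels-Golub solve pattern (illustrative data):
# factor the basis once, then reuse the factors for each right-hand side.
rng = np.random.default_rng(2)
n = 6
B = rng.random((n, n)) + n * np.eye(n)   # well-conditioned example basis
b = rng.random(n)

lu, piv = lu_factor(B)       # PB = LU, the one-time factorization cost
x = lu_solve((lu, piv), b)   # forward solve with L, back solve with U
```

The repeated triangular solves are what the chapter identifies as the flop-count overhead of this method relative to multiplying by an explicit inverse.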
The actual running time of the Bartels-Golub Method was the second-fastest of all the methods; in fact, it was slightly faster than the Revised Simplex Method on one particular small problem. Although the Bartels-Golub Method used native MATLAB calls like the Revised Simplex Method, the extra time involved in performing the repeated backsolves increased the running time significantly. As MATLAB continues to evolve, the Bartels-Golub Method may eventually surpass the Revised Simplex Method in execution time.
Finding an appropriate threshold number of eta factors to accumulate before requiring refactorization of the basis proved difficult, as no single value was good for every problem. The most effective range of threshold values varied from to . Using a value in this range, the Sparse Bartels-Golub Method consistently outpaced the original Bartels-Golub Method in all but a few ill-conditioned problems.
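The refactorization policy just described can be sketched as a simple counter (a hypothetical illustration; the class name and threshold value are invented here, and the thesis tuned the actual threshold per problem):

```python
# Hypothetical sketch of the deferred-refactorization policy: accumulate
# eta update factors until a threshold, then rebuild the factorization.
ETA_THRESHOLD = 20  # illustrative value only; no single value fit every problem

class EtaFile:
    def __init__(self):
        self.etas = []

    def push(self, eta):
        """Record one eta update; refactorize when the file grows too long."""
        self.etas.append(eta)
        if len(self.etas) > ETA_THRESHOLD:
            self.refactorize()
            return True
        return False

    def refactorize(self):
        self.etas.clear()  # stand-in for recomputing LU from the current basis
```

A too-low threshold refactorizes constantly (paying the full factorization cost each time); a too-high one lets the eta file grow dense and unstable, which is the trade-off the chapter reports.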
The Sparse Bartels-Golub Method performed only slightly fewer flops than the Bartels-Golub Method in most cases. In a couple of situations, the Sparse Bartels-Golub Method performed slightly more flops. In the remaining instances, however, the Sparse Bartels-Golub Method managed to perform three to four times fewer floating-point operations, showing the advantage of deferring refactorization. Because the sparse algorithm was more detailed and required more interpretation by MATLAB, its running time was often 10 to 20 times greater than that of the Bartels-Golub Method.
Most significantly, the maximum number of nonzero entries that had to be stored by the Sparse Bartels-Golub Method was fewer than the number of nonzero entries that the Bartels-Golub Method had to contend with in each and every problem. In addition, the Sparse Bartels-Golub Method does not require that the entire set of data be in memory at any one time. Since the eta matrices constituting the lower-triangular factor are applied in sequence, these entries can easily be kept in secondary storage and read when needed. As a result, much larger problems are more easily handled by the Sparse Bartels-Golub Method than by either the Revised Simplex Method or the Bartels-Golub Method.
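The sequential application of eta matrices is what permits streaming them from secondary storage. The following Python/NumPy sketch (illustrative, not the thesis code) represents a unit lower-triangular factor as a list of eta columns and applies them one at a time; each column could just as well be read from disk at the moment it is needed.

```python
import numpy as np

# Sketch: forward substitution expressed as one eta column at a time.
# etas[k] holds the subdiagonal of column k of a unit lower-triangular L,
# so only one column need be resident in memory at any moment.
def apply_eta_inverses(etas, b):
    y = b.astype(float).copy()
    for k, col in enumerate(etas):   # apply E_k^{-1} in sequence
        y[k + 1:] -= col * y[k]
    return y                         # y solves L y = b

rng = np.random.default_rng(3)
n = 5
L = np.tril(rng.random((n, n)), -1) + np.eye(n)   # example unit lower factor
etas = [L[k + 1:, k].copy() for k in range(n)]
b = rng.random(n)
y = apply_eta_inverses(etas, b)
```

Because each eta touches only one column's worth of nonzeros, the peak in-memory footprint is one column rather than the whole factor.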
No refactorization threshold value proved to be satisfactory over any wide range for the Forrest-Tomlin Method. However, setting a threshold on the number of nonzero entries in the list of row-eta factors proved most effective, with threshold values ranging from to . Although the Forrest-Tomlin Method showed promise, it usually required more flops than the Sparse Bartels-Golub and Bartels-Golub Methods.
Despite the increase in floating-point operations, the Forrest-Tomlin Method usually succeeded in producing fewer nonzero factors than any of the other methods. It also surpassed the other sparse methods significantly in running time. The use of MATLAB's intrinsic functions to calculate and apply the row-eta factors caused this speedup, often by factors of 2 to 10.
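As a small illustration of why row-eta factors are cheap to apply, here is a Python/NumPy sketch (hypothetical data; not the thesis code): a row eta differs from the identity in a single row, so applying it to a vector costs one dot product and one scalar update, operations that map directly onto fast intrinsic routines.

```python
import numpy as np

# Sketch: a row eta R = I + e_r w^T differs from the identity only in
# row r, so R @ x reduces to one dot product and one entry update.
def apply_row_eta(r, w, x):
    y = x.copy()
    y[r] += w @ x        # only row r of the result changes
    return y

rng = np.random.default_rng(4)
n = 6
w = rng.random(n)
w[2] = 0.0               # hypothetical eta row; diagonal slot stays 1
x = rng.random(n)
R = np.eye(n)
R[2, :] += w             # the full matrix, built only to check the shortcut
y = apply_row_eta(2, w, x)
```

Storing only (r, w) per update, instead of a full matrix, is consistent with the low nonzero counts the chapter reports for this method.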
Because of their similarity, Reid's Method used the same threshold values as the Sparse Bartels-Golub Method. Surprisingly, the use of rotations did not decrease the number of nonzero factors or floating-point operations. In actuality, Reid's Method proved slightly worse in most cases. In fact, the same ill-conditioned matrices that caused the Sparse Bartels-Golub Method to perform poorly also caused Reid's Method to fail, finding a solution at a nearby corner instead.
Reid’s Method often produced the same number of nonzero entries as the Sparse Bartels-Golub Method, most likely because it followed the same set of decisions as the latter algorithm. On those occasions where they differed, however, Reid’s Method more often than not proved the worse of the two, resulting in extra nonzero entries, longer running times, and more floating-point operations.
The problems solved were a variety of small and medium-sized problems available from the Netlib ftp site. These included industrial problems from a variety of situations, but most importantly they were real problems rather than academic ones. Table 1 describes the problems according to their matrix dimensions, , and total number of nonzero coefficients.


Table 1. The Tested Problems
The most scrutinized data were the number of flop counts each problem required for each algorithm. In Table 2, these flop counts are summarized in terms of millions of flops. Although the values range from problem to problem, the Revised Simplex Method consistently outperformed the other methods.
Table 2. Millions of Flops per Execution
Reid’s Method was unable to find the correct solution for the agg2 and agg3 problems. Closer analysis showed that as these problems approached the solution, the basis became unstable and the method took a slightly different path. Reid’s Method obtained a result within the same order of magnitude as the solution, but one too disparate to be considered correct.
The amount of CPU time each algorithm takes, as shown in Table 3, tells us little about the efficiency of the algorithm itself, but it does give insight into how the algorithms interact with MATLAB. The Revised Simplex Method made extremely good use of MATLAB’s internal coding, often running two to three times faster than each of the other algorithms. The Bartels-Golub Method also used MATLAB’s internal LU-factorization code and often ran faster than the sparse-matrix algorithms.
Table 3. Seconds of CPU Time per Execution
The stability of each method is easiest to compare by looking at the number of times the basis required refactorization. In the case of the Revised Simplex Method and the Bartels-Golub Method, refactorization occurs at every iteration, so their counts are far higher than the others’. The full results are available in Table 4. The Forrest-Tomlin Method outperformed the other sparse algorithms; however, the Sparse Bartels-Golub and Reid’s Methods still improved significantly on the dense algorithms.


Table 4. Number of Refactorizations Required
Finally, some measure must be taken of the amount of data storage required. Even with the price of computer memory dropping, the size of industrial problems is increasing, and the actual amount of data that must be held in memory must remain a consideration in evaluating any algorithm. Table 5 presents the maximum number of nonzero data entries that each algorithm had to handle for each problem. The direct matrix inversion employed by the Revised Simplex Method resulted in the large numbers of nonzeros seen here. The Bartels-Golub Method did much better; however, the Forrest-Tomlin Method proved to be the best at keeping the amount of data to a minimum.


Table 5. Maximum Nonzero Data Items During Solution
The data collected provided many insights into the Simplex Method algorithms being tested and into the nature of MATLAB. In general, the Revised Simplex Method ran fastest and with the fewest calculations, but it required a large amount of memory and many basis refactorizations. The Sparse Bartels-Golub Method proved simple and effective; however, the Forrest-Tomlin and Reid’s variations often performed slightly better.