IANIGLA   20881
INSTITUTO ARGENTINO DE NIVOLOGIA, GLACIOLOGIA Y CIENCIAS AMBIENTALES
Unidad Ejecutora - UE
Congresses and scientific meetings
Title:
The solution of large geodetic systems by sparse matrix techniques
Author(s):
RICHARD L. BRANHAM, JR.
Location:
Mendoza
Meeting:
Congress; XXIV Reunión Científica de la Asociación Argentina de Geofísicos y Geodestas; 2009
Abstract:
Geodetic problems frequently lead to large matrices, with tens of thousands of equations of condition and thousands of unknowns. These matrices are generally sparse and possess a characteristic structure: the unknowns, such as the station coordinates, are grouped together in a sparse row of the matrix A of the equations of condition. When the equations of condition are formed into normal equations, A^T A x = A^T b, these equations remain sparse, although denser than the equations of condition. A Cholesky decomposition of the normal equations produces a triangular matrix, the Cholesky factor R, that in general will be dense unless the rows and columns of the normal equations are permuted in such a way as to preserve sparsity. This remains true even should one eschew normal equations and apply orthogonal transformations directly to A to produce a triangular matrix identical, within round-off or truncation error, to R. The sparsity structure of R depends on the structure of A^T A. In 1880 the German geodesist Helmert developed a permutation method, called nested dissection, that permits one to calculate a sparse Cholesky factor. Code for nested dissection is available in Fortran (A. George and J. Liu, Computer Solution of Large Sparse Positive Definite Systems, Prentice-Hall, New Jersey, 1981, Ch. 8) and in the Matlab package CSparse (T. A. Davis, Direct Methods for Sparse Linear Systems, SIAM, Philadelphia, 2006, Sec. 7.6). A test problem with 30,000 equations of condition and 2,000 unknowns using CSparse on a computer with 3 GB of RAM and a 2 GHz processor showed that nested dissection runs three times faster than not using sparse matrix techniques. Should the matrix A be too large to fit into memory, the normal equations A^T A must be formed by sequential accumulation of the rows of A, or by application of orthogonal transformations to the rows to form R. This becomes time consuming and is the principal source of CPU usage. Even here, however, sparse matrix techniques work more efficiently.
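As a rough sketch of the approach outlined above (not the author's code, and in Python with SciPy rather than the Fortran or CSparse routines cited), the example below forms sparse normal equations A^T A x = A^T b and factors them after a fill-reducing permutation. SciPy exposes a COLAMD column ordering through its sparse LU factorization rather than Helmert's nested dissection applied to a Cholesky factor, so it only illustrates the general point that a good ordering keeps the factor sparse; the matrix sizes and sparsity pattern are invented for a quick run.

```python
# Minimal sketch, not the author's test code: sparse normal equations solved
# after a fill-reducing column ordering.  Sizes and sparsity are invented.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

m, n = 3_000, 200                              # equations of condition x unknowns (toy sizes)
rng = np.random.default_rng(0)
A = sp.random(m, n, density=0.02, format="csr", random_state=0)
b = rng.standard_normal(m)

AtA = (A.T @ A + 1e-8 * sp.eye(n)).tocsc()     # normal equations, regularized for this toy data
Atb = A.T @ b

# COLAMD is a fill-reducing permutation (not nested dissection), and splu is an
# LU rather than a Cholesky factorization, but the effect on fill-in is analogous.
lu = spla.splu(AtA, permc_spec="COLAMD")
x = lu.solve(Atb)

print("nonzeros in A^T A:", AtA.nnz)
print("nonzeros in L + U:", lu.L.nnz + lu.U.nnz)
print("normal-equation residual:", np.linalg.norm(AtA @ x - Atb))
```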
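The closing remark about matrices too large to fit into memory can be sketched similarly: the equations of condition are processed a block of rows at a time and each block's contribution is added to A^T A and A^T b. The function and variable names below are hypothetical, and the synthetic blocks stand in for rows that would be read from disk.

```python
# Minimal sketch of sequential accumulation of the normal equations when the
# full matrix A of equations of condition does not fit in memory.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def accumulate_normal_equations(row_blocks, n):
    """Accumulate A^T A and A^T b from (A_k, b_k) row blocks of A and b."""
    AtA = sp.csr_matrix((n, n))
    Atb = np.zeros(n)
    for A_k, b_k in row_blocks:
        AtA = AtA + A_k.T @ A_k            # this block's contribution to A^T A
        Atb += A_k.T @ b_k                 # and to the right-hand side
    return AtA.tocsc(), Atb

# Synthetic blocks (500 equations each) standing in for out-of-core reads.
n = 200
rng = np.random.default_rng(1)
blocks = [(sp.random(500, n, density=0.02, format="csr", random_state=k),
           rng.standard_normal(500)) for k in range(10)]

AtA, Atb = accumulate_normal_equations(blocks, n)
AtA = (AtA + 1e-8 * sp.eye(n)).tocsc()         # regularize the toy data
x = spla.spsolve(AtA, Atb)
print("solution vector of length", x.size)
```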