Linear and Eigen Solvers


Trilinos provides a wide variety of solution methods for linear and eigen systems. The purpose of this page is to give an overview of the capabilities in the areas of iterative and direct solvers, preconditioners, high-level interfaces, and eigen-solvers. Unless otherwise noted, all packages have been publicly released.

Getting Started

You should first decide whether you want to use the Epetra or Tpetra sparse linear algebra library.  This decision will determine which solvers and preconditioners you can use, as not all solvers are compatible with both.  See the compatibility chart in the next section for details.

Epetra uses integer ordinal types and double scalar types.  This means that it currently can run problems with at most 2^31 − 1 (~2.1 billion) degrees of freedom (DOFs).  (An upgrade to Epetra is underway, however, that will remove this index limitation.)  Tpetra is templated on the ordinal type, scalar type, and node type.  Tpetra allows creation of problems with any number of DOFs, problems with scalar types other than double, optimized computations on a variety of many-core architectures (including GPUs) through Kokkos, and mixing of MPI and threading.

Once you have chosen the sparse linear algebra library, you can focus on solver/preconditioner libraries.

Quick Guide to Trilinos Preconditioners/Solvers

The following table gives a one-line description of available Trilinos linear and eigensolver packages and their compatibility with Epetra and Tpetra.

Description                               Package       Compatible with

Krylov methods                            AztecOO       Epetra
                                          Belos         Epetra, Tpetra
Direct solvers                            Amesos        Epetra
                                          Amesos2       Epetra, Tpetra
Incomplete factorizations, SOR methods,   Ifpack        Epetra
additive Schwarz                          Ifpack2       Tpetra
Algebraic multigrid                       ML            Epetra
                                          MueLu         Epetra, Tpetra
Block preconditioning framework           Teko          Epetra
Eigen methods                             Anasazi       Epetra, Tpetra
Hybrid Schur complement methods           ShyLU         Epetra
Equivalent real forms                     Komplex       Epetra
Solver manager                            Stratimikos   Epetra, Tpetra


Linear solver interfaces

Stratimikos: High level linear solver interface

Point-of-contact: Roscoe Bartlett

Stratimikos contains a unified set of wrappers to linear solver and preconditioner capabilities in Trilinos.  Stratimikos essentially consists of the single class DefaultLinearSolverBuilder.  This class takes as input a (nested) parameter list that contains options for the desired solvers and preconditioners.

Stratimikos has adapters for AztecOO, Belos, Amesos, Ifpack, and ML.
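
For illustration, the kind of nested parameter list that DefaultLinearSolverBuilder consumes can be written in the Teuchos XML format along the following lines. This is a sketch only: the exact parameter names, sublists, and defaults should be checked against the Stratimikos documentation for the release you use.

```
<ParameterList name="Stratimikos">
  <Parameter name="Linear Solver Type" type="string" value="AztecOO"/>
  <Parameter name="Preconditioner Type" type="string" value="Ifpack"/>
  <ParameterList name="Linear Solver Types">
    <ParameterList name="AztecOO">
      <ParameterList name="Forward Solve">
        <Parameter name="Max Iterations" type="int" value="400"/>
        <Parameter name="Tolerance" type="double" value="1e-8"/>
        <ParameterList name="AztecOO Settings">
          <Parameter name="Aztec Solver" type="string" value="GMRES"/>
        </ParameterList>
      </ParameterList>
    </ParameterList>
  </ParameterList>
  <ParameterList name="Preconditioner Types">
    <ParameterList name="Ifpack">
      <Parameter name="Prec Type" type="string" value="ILU"/>
    </ParameterList>
  </ParameterList>
</ParameterList>
```

Because the solver and preconditioner choices live entirely in this list, an application can switch between, say, AztecOO/Ifpack and Belos/ML without recompiling.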

Stratimikos is compatible with Epetra and Tpetra.

Iterative linear and eigen-solvers

AztecOO: Preconditioners and Krylov subspace methods

Point-of-contact: Mike Heroux

AztecOO includes a number of Krylov iterative methods such as conjugate gradient (CG), generalized minimum residual (GMRES) and stabilized biconjugate gradient (BiCGSTAB) to solve systems of equations. AztecOO may use a variety of internally implemented preconditioners, such as SOR, polynomial, domain decomposition, and incomplete factorization preconditioning, as well as preconditioners provided by other Trilinos packages. AztecOO also fully contains the C-language Aztec linear solver package, so any application that is using Aztec can use the AztecOO library in place of Aztec. Note that only bug fixes are being applied to AztecOO. Active algorithm development is taking place in Belos.

AztecOO is compatible with Epetra only.

Belos: Classical and block Krylov subspace methods

Points-of-contact: Heidi Thornquist and Mike Parks

Belos provides next-generation iterative linear solvers and a powerful linear solver developer framework. This framework includes the following abstract interfaces and implementations:

  • Abstract interfaces to linear algebra using traits mechanisms. This allows the user to leverage any existing investment in their description of matrices and vectors. The provided concrete linear algebra adapters enable Belos to be used anywhere Epetra and Thyra are employed for linear algebra services.
  • Abstract interfaces to orthogonalization; implementations of iterated classical Gram-Schmidt (ICGS), classical Gram-Schmidt with a DGKS correction step, and iterated modified Gram-Schmidt (IMGS) are included.
  • Abstract interfaces to iteration kernels; implementations of conjugate gradient (CG), block CG, block GMRES, pseudo-block GMRES, block flexible GMRES, and GCRO-DR iterations are included.
  • Powerful solver managers are provided for solving a linear system using CG or block CG, GMRES or block GMRES with restarting, pseudo-block GMRES for performing single-vector GMRES simultaneously on multiple right-hand sides, and a single-vector recycled Krylov method (GCRO-DR).
  • Basic linear problem class is provided for the user to define an unpreconditioned or preconditioned (left, right, two-sided) linear system for Belos to solve.
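
To make the iteration-kernel idea concrete, here is a minimal single-vector CG in NumPy. The matrix is touched only through matrix-vector products, mirroring the abstract operator interfaces Belos uses; this is an illustrative sketch, not Belos code.

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=200):
    """Minimal unpreconditioned conjugate gradient iteration.
    A is used only via matrix-vector products."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Symmetric positive definite test problem: 1D Laplacian
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = cg(A, b)
print(np.linalg.norm(A @ x - b))  # residual near machine precision
```

The block variants listed above apply the same recurrences to a group of vectors at once, which amortizes the cost of each operator application across multiple right-hand sides.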

Belos is compatible with Epetra and Tpetra.

Anasazi: parallel eigen-solvers

Point-of-contact: Heidi Thornquist

Anasazi is an extensible and interoperable framework for large-scale eigenvalue algorithms. The motivation for this framework is to provide a generic interface to a collection of algorithms for solving large-scale eigenvalue problems. Anasazi is interoperable because both the matrix and the vectors (defining the eigenspace) are treated as opaque objects: only knowledge of the matrix and vectors via elementary operations is necessary. Access to these objects is accomplished via adapter interfaces. Interfaces are currently available for Epetra, so any library that understands Epetra matrices and vectors (such as AztecOO) may also be used in conjunction with Anasazi.

One of the goals of Anasazi is to allow the user the flexibility to specify the data representation for the matrix and vectors and so leverage any existing software investment. The algorithms currently available through Anasazi are block Krylov-Schur, block Davidson, and the locally optimal block preconditioned conjugate gradient (LOBPCG) method.
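
The opaque-operator idea can be illustrated with a much simpler method than any of the above. The power iteration below (not one of Anasazi's algorithms, just a sketch of the design principle) needs only the action v → A v, never the matrix entries themselves:

```python
import numpy as np

def power_method(apply_op, v0, tol=1e-12, max_iter=1000):
    """Dominant eigenpair via power iteration.  The operator is
    opaque: only its action on a vector is required."""
    v = v0 / np.linalg.norm(v0)
    lam_old = 0.0
    for _ in range(max_iter):
        w = apply_op(v)
        lam = v @ w                  # Rayleigh quotient estimate
        v = w / np.linalg.norm(w)
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, v

A = np.diag([1.0, 2.0, 5.0])
lam, v = power_method(lambda x: A @ x, np.ones(3))
print(lam)  # converges to the dominant eigenvalue 5.0
```

Because only `apply_op` is needed, the same driver works for a sparse matrix, a matrix-free operator, or a preconditioned operator.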

Anasazi is compatible with Epetra and Tpetra.

Komplex: Complex-valued system solver

Points-of-contact: Mike Heroux and David Day

KOMPLEX is an add-on module to AZTEC that allows users to solve complex-valued linear systems.  KOMPLEX solves a complex-valued linear system Ax=b by solving an equivalent real-valued system of twice the dimension.
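
The standard equivalent real form doubles the dimension by splitting real and imaginary parts. The NumPy sketch below demonstrates the idea (KOMPLEX offers variants of this formulation; this is only the most common one):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Equivalent real system of twice the dimension:
#   [ Re(A)  -Im(A) ] [ Re(x) ]   [ Re(b) ]
#   [ Im(A)   Re(A) ] [ Im(x) ] = [ Im(b) ]
K = np.block([[A.real, -A.imag],
              [A.imag,  A.real]])
rhs = np.concatenate([b.real, b.imag])
y = np.linalg.solve(K, rhs)
x = y[:n] + 1j * y[n:]

print(np.linalg.norm(A @ x - b))  # same solution as the complex system
```

This lets purely real-arithmetic solver stacks (such as Aztec) handle complex-valued problems unchanged.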

Direct linear solvers

Amesos: direct sparse linear solver interface

Point-of-contact: Siva Rajamanickam

Amesos is a set of C++ interfaces to serial and parallel sparse direct solvers. Amesos contains two native sparse solvers: KLU and Paraklete. KLU is serial, while Paraklete (distributed with Trilinos 7.0 or higher) is a parallel solver. Amesos also offers an interface to LAPACK and to several other well-known solvers available on the web.

The main idea of Amesos is to give a high-level view of direct solvers, composed of four main phases:

1) specification of parameters
2) initialization of the solver, using matrix sparsity only
3) computation of the factors
4) solution of the linear system
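
Phases 3 and 4 in particular are worth separating: once the factors exist, many right-hand sides can be solved cheaply. The small dense NumPy sketch below mirrors that split (a toy illustration only; it omits the symbolic phase, which matters for sparse matrices, and Amesos itself dispatches to real direct solver libraries):

```python
import numpy as np

def factor(A):
    """Computation of the factors (phase 3): in-place Doolittle LU
    without pivoting; assumes a matrix for which this is stable,
    e.g. diagonally dominant."""
    LU = A.astype(float).copy()
    n = len(LU)
    for k in range(n - 1):
        LU[k+1:, k] /= LU[k, k]
        LU[k+1:, k+1:] -= np.outer(LU[k+1:, k], LU[k, k+1:])
    return LU

def solve(LU, b):
    """Solution of the linear system (phase 4): forward and back
    substitution reusing the precomputed factors."""
    n = len(b)
    y = b.astype(float).copy()
    for i in range(n):                  # L y = b (unit lower triangular)
        y[i] -= LU[i, :i] @ y[:i]
    for i in range(n - 1, -1, -1):      # U x = y
        y[i] = (y[i] - LU[i, i+1:] @ y[i+1:]) / LU[i, i]
    return y

A = np.array([[4., 1., 0.],
              [1., 4., 1.],
              [0., 1., 4.]])
LU = factor(A)                                     # factor once ...
for b in (np.array([1., 2., 3.]), np.array([0., 1., 0.])):
    print(np.linalg.norm(A @ solve(LU, b) - b))    # ... solve many times
```
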

Amesos insulates the user from all the low-level details typical of direct solvers, such as the matrix format, the data distribution of the matrix, the solution, and the right-hand side, parameter settings, and so on. Amesos is not based on any particular matrix format; instead, a matrix interface (specified by the Epetra_RowMatrix class) is adopted. This facilitates the use of Amesos classes in any project whose matrix can be wrapped as an Epetra_RowMatrix.

Amesos2: direct sparse linear solver interface

Point-of-contact: Siva Rajamanickam

Amesos2 can be considered a templated version of Amesos that supports a wider variety of scalar and index types.  Amesos2 provides two internal serial direct solvers, KLU2 (as of release 11.12) and Basker (as of release 11.14).  Users of prior releases will need a third-party direct solver, such as SuperLU.

Pliris: direct dense linear solver

Point-of-contact: Joe Kotulski

Pliris is an object-oriented interface to an LU solver for dense matrices on parallel platforms. These matrices are double-precision real matrices distributed across a parallel machine.

The matrix is torus-wrap mapped onto the processors (transparently to the user), and partial pivoting is used during the factorization of the matrix.  Each processor contains a portion of the matrix and of the right-hand sides, determined by a distribution function that optimally load-balances computation and communication during the factorization.  The general prescription is that no processor may own more (or fewer) rows or columns of the matrix than any other processor by more than one.  Since the input matrix is not torus-wrapped, a permutation of the results is performed to “unwrap” them; this too is transparent to the user.


ShyLU: Hybrid iterative/direct Schur complement solver

Points-of-contact: Erik Boman and Siva Rajamanickam

ShyLU is designed as a node-level solver and can use both MPI and threads in several ways. ShyLU was designed (a) to solve difficult but medium-size problems, and (b) to be used as a subdomain solver or smoother for very large problems within an iterative scheme. It is a purely algebraic method and so can be used as a black-box solver.

ShyLU uses a hybrid direct/iterative approach based on Schur complements. The goal is to provide robustness similar to sparse direct solvers, but memory usage more similar to preconditioned iterative solvers.
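
The Schur complement approach eliminates one block of unknowns and solves a reduced system on the rest. The dense NumPy sketch below shows the algebra on a small partitioned system (every solve is exact here; ShyLU's hybrid character comes from doing the interior solves directly and the Schur complement solve approximately/iteratively):

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 6, 3                 # "interior" and "interface" unknowns
m = n1 + n2
A = rng.standard_normal((m, m)) + m * np.eye(m)   # well-conditioned test matrix
b = rng.standard_normal(m)

A11, A12 = A[:n1, :n1], A[:n1, n1:]
A21, A22 = A[n1:, :n1], A[n1:, n1:]
b1, b2 = b[:n1], b[n1:]

# Schur complement S = A22 - A21 A11^{-1} A12: eliminate the interior,
# solve the (small) interface problem, then back-substitute.
Z = np.linalg.solve(A11, A12)
S = A22 - A21 @ Z
x2 = np.linalg.solve(S, b2 - A21 @ np.linalg.solve(A11, b1))
x1 = np.linalg.solve(A11, b1) - Z @ x2

x = np.concatenate([x1, x2])
print(np.linalg.norm(A @ x - b))  # matches the direct solve
```
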

Teko: Block preconditioning framework

Point-of-contact: Eric Cyr

Teko is a package for the development and implementation of block preconditioners. This includes support for the manipulation and setup of block operators, as well as tools to support the decomposition of a fully coupled operator. Teko also provides facilities for constructing approximate inverse operators using the full complement of available preconditioners and solvers. Finally, a small number of generic block preconditioners have been implemented in Teko, including block Jacobi and block Gauss-Seidel. For the Navier-Stokes equations, Teko has implementations of SIMPLE, PCD, and LSC.
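
As a sketch of what a generic block preconditioner does, the NumPy example below applies a block lower Gauss-Seidel preconditioner inside a Richardson iteration on a 2x2 block system. The diagonal-block solves are exact here; in Teko they would be replaced by approximate inverses built from other Trilinos packages.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
# 2x2 block operator with dominant diagonal blocks
A00 = 10 * np.eye(n) + rng.standard_normal((n, n))
A11 = 10 * np.eye(n) + rng.standard_normal((n, n))
A01 = rng.standard_normal((n, n))
A10 = rng.standard_normal((n, n))
A = np.block([[A00, A01], [A10, A11]])
b = rng.standard_normal(2 * n)

def block_gauss_seidel(r):
    """Apply M^{-1} r for M = [[A00, 0], [A10, A11]], the block
    lower Gauss-Seidel preconditioner."""
    z0 = np.linalg.solve(A00, r[:n])
    z1 = np.linalg.solve(A11, r[n:] - A10 @ z0)
    return np.concatenate([z0, z1])

# Preconditioned Richardson iteration
x = np.zeros(2 * n)
for _ in range(50):
    x = x + block_gauss_seidel(b - A @ x)
print(np.linalg.norm(A @ x - b))
```

In practice such a preconditioner would accelerate a Krylov method rather than plain Richardson, but the structure of the application is the same.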

Ifpack: Point preconditioning, incomplete factorizations, and classical domain decomposition

Points-of-contact: Mike Heroux and Siva Rajamanickam

Ifpack provides a suite of object-oriented algebraic preconditioners. Ifpack constructors expect an Epetra_RowMatrix object for construction. Ifpack objects interact well with other Trilinos classes; in particular, Ifpack can be used as a preconditioner for AztecOO and as a smoother in ML.

Ifpack contains one-level domain decomposition preconditioners of overlapping type. Each “subdomain” is defined by the set of rows assigned to a given processor. Several options are available for the local solution, ranging from simple relaxation schemes, to incomplete factorizations, to direct solvers (through the Amesos package).
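
The simplest of those local options is point relaxation. The NumPy sketch below runs forward SOR sweeps on a small diagonally dominant system (an illustration of the relaxation scheme itself, not of Ifpack's API):

```python
import numpy as np

def sor_sweep(A, x, b, omega=1.2):
    """One forward SOR sweep, updating x in place:
    x_i <- (1 - omega) x_i + omega (b_i - sum_{j != i} a_ij x_j) / a_ii."""
    n = len(b)
    for i in range(n):
        sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
        x[i] += omega * ((b[i] - sigma) / A[i, i] - x[i])

# Diagonally dominant test system
n = 20
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = np.zeros(n)
for _ in range(100):
    sor_sweep(A, x, b)
print(np.linalg.norm(A @ x - b))  # residual driven down by relaxation
```
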

Ifpack is compatible with Epetra only.

Ifpack2: Point preconditioning, incomplete factorizations

Points-of-contact: Mark Hoemmen, Chris Siefert, and Jonathan Hu

Ifpack2 can be considered a templated version of Ifpack.  It provides SOR type relaxation methods, incomplete factorizations, and additive Schwarz methods.

Ifpack2 is compatible with Tpetra only.

ML: smoothed aggregation algebraic multigrid

Points-of-contact: Ray Tuminaro, Jonathan Hu, and Chris Siefert

ML contains a variety of parallel multigrid schemes for preconditioning or solving large sparse linear systems of equations arising primarily from elliptic PDE discretizations.  The main methods in ML are

  • smoothed aggregation algebraic multigrid
  • FAS nonlinear algebraic multigrid
  • two distinct algebraic multigrid methods for the eddy current approximations to Maxwell’s equations
  • a smoothed-aggregation-like method for convection dominated systems
  • matrix-free algebraic multigrid.

Within each of these methods there are several different algorithms to guide the type of coarsening and the inter-grid transfers (including the ability to drop weak coupling within the operator during inter-grid transfer construction).  Additionally, ML can use Zoltan to rebalance coarse-grid operators for better parallel performance.

ML provides a variety of smoothers:  SOR, polynomial, Ifpack domain decomposition and incomplete factorizations, and Aztec methods.  Coarse-grid solvers include the aforementioned smoothers, as well as any direct method available through Amesos.

ML can also be used as a framework to generate new multigrid methods. Using ML’s internal aggregation routines and Galerkin products, it is possible to focus on new types of inter-grid transfer operators without having to address the cumbersome aspects of generating an entirely new parallel algebraic multigrid code. We have used this flexibility to produce special multilevel methods using coarse grid finite element functions to serve as inter-grid transfers.
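
The aggregation-plus-Galerkin construction can be illustrated with a small NumPy two-grid sketch on a 1D Poisson model problem (a toy, not ML's implementation: piecewise-constant aggregates of size 3 smoothed into a prolongator, a Galerkin coarse operator, and damped Jacobi smoothing):

```python
import numpy as np

# 1D Poisson model problem; zero right-hand side, so x is the error itself
n = 60
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D_inv = np.diag(1.0 / np.diag(A))

# Tentative prolongator from piecewise-constant aggregates of size 3
P_tent = np.zeros((n, n // 3))
for j in range(n // 3):
    P_tent[3*j:3*j+3, j] = 1.0

# Smooth the tentative prolongator (the "smoothed aggregation" step)
# and form the coarse operator by a Galerkin product
P = (np.eye(n) - (2.0 / 3.0) * D_inv @ A) @ P_tent
Ac = P.T @ A @ P

def jacobi(x, omega=2.0 / 3.0):
    return x - omega * D_inv @ (A @ x)           # damped Jacobi, b = 0

def two_grid(x):
    x = jacobi(x)                                    # pre-smoothing
    x = x - P @ np.linalg.solve(Ac, P.T @ (A @ x))   # coarse correction
    return jacobi(x)                                 # post-smoothing

x = np.random.default_rng(0).standard_normal(n)
e0 = np.linalg.norm(x)
for _ in range(10):
    x = two_grid(x)
print(np.linalg.norm(x) / e0)  # substantial error reduction
```

Swapping in a different prolongator is a one-line change here, which is the same flexibility the ML framework offers at scale.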

ML is compatible with Epetra only.

MueLu: multigrid framework

Points-of-contact: Ray Tuminaro, Jonathan Hu, and Andrey Prokopenko

MueLu provides a framework for parallel multigrid preconditioning methods for large sparse linear systems.  MueLu provides algebraic multigrid methods for symmetric and nonsymmetric systems based on smoothed aggregation.  It is designed to be extensible and can in principle support other algebraic multigrid (e.g., Ruge-Stueben) and geometric multigrid methods.  MueLu does not provide any smoothers itself, but instead relies on other Trilinos packages for these capabilities.  MueLu is templated on the ordinal and scalar types, and it can also exploit the hybrid (MPI plus threads) parallelism enabled by Tpetra and Kokkos.

MueLu is compatible with Epetra and Tpetra.

Last updated Wednesday, November 13, 2013