[Trilinos-Users] Memory leak inside Ifpack_IC::Compute()?

Thornquist, Heidi K hkthorn at sandia.gov
Thu Apr 17 14:44:55 MDT 2008


Hi Rony,

Thank you for such a detailed message!  I have verified that there is indeed a memory leak inside Ifpack_IC::Compute() when it is called multiple times.  This will be fixed in the next Trilinos 8.0 tarball.
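In the meantime, you can work around it the way you already observed: destroy and recreate the Ifpack_IC object instead of calling Compute() on it a second time. A minimal sketch, reusing the M, A, ifpackList and solver from your test program:

        // Workaround: rebuild the preconditioner instead of re-calling Compute()
        delete M;
        M = new Ifpack_IC(A);
        M->SetParameters(ifpackList);
        M->Compute();               // a single Compute() on a fresh object does not leak
        solver.SetPrecOperator(M);  // re-register the new operator with AztecOO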

Thanks,
Heidi


-----Original Message-----
From: trilinos-users-bounces at software.sandia.gov [mailto:trilinos-users-bounces at software.sandia.gov] On Behalf Of Rony Speck
Sent: Thursday, April 17, 2008 2:27 AM
To: trilinos-users at software.sandia.gov
Subject: [Trilinos-Users] Memory leak inside Ifpack_IC::Compute()?


Hi there,

I'm working on my master's thesis at ETH Zurich.

I'm using the Ifpack_IC preconditioner in a time-dependent setting. Within a time increment, multiple linear systems of equations are solved which all have the same matrix structure (only the numerical values change). I therefore create an Ifpack_IC object at the beginning of the increment, keep it for the whole increment, and only call Ifpack_IC::Compute() before each solve.
After the increment is done, I delete the object and create a new one for the next increment. In outline, the pattern looks like this (UpdateMatrixValues and SolveWith are placeholders for my application code):
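
        // One Ifpack_IC object per time increment, reused for every solve in it
        for (int inc = 0; inc < numIncrements; ++inc) {
                Ifpack_IC *M = new Ifpack_IC(A);        // matrix structure is fixed for the increment
                M->SetParameters(ifpackList);
                for (int s = 0; s < solvesPerIncrement; ++s) {
                        UpdateMatrixValues(A);          // only the numerical values change
                        M->Compute();                   // refactor with the new values
                        SolveWith(M);                   // CG via AztecOO with M as preconditioner
                }
                delete M;                               // fresh object for the next increment
        }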
I have noticed that this approach uses a lot of memory. When I instead delete the preconditioner after every single solve (not keeping it for a whole increment), memory usage is normal.
So it seems that multiple calls to Ifpack_IC::Compute() lead to a memory leak. The code below shows the problem in a simpler setting: it creates a 2D Laplace problem with the help of the matrix gallery, solves it with CG inside AztecOO together with the Ifpack_IC preconditioner, and calls Compute() twice on the Ifpack_IC object to simulate my setting.

#include "AztecOO_config.h"
#ifdef HAVE_MPI
#include "mpi.h"
#include "Epetra_MpiComm.h"
#else
#include "Epetra_SerialComm.h"
#endif
#include "Epetra_Map.h"
#include "Epetra_Vector.h"
#include "Epetra_CrsMatrix.h"
#include "AztecOO.h"
#include "Trilinos_Util_CrsMatrixGallery.h"
#include "Ifpack_IC.h"

int main(int argc, char *argv[]) {
#ifdef HAVE_MPI
        MPI_Init(&argc, &argv);
        Epetra_MpiComm comm(MPI_COMM_WORLD);
#else
        Epetra_SerialComm comm;
#endif

        // Create matrix and vectors
        int problemSize = 10000;
        Trilinos_Util::CrsMatrixGallery gallery("laplace_2d", comm);
        gallery.Set("problem_size", problemSize);
        gallery.Set("map_type", "linear");
        gallery.Set("exact_solution", "random");

        Epetra_CrsMatrix *A = gallery.GetMatrix();
        Epetra_Vector *b = (Epetra_Vector*) gallery.GetRHS();
        Epetra_Vector *x = (Epetra_Vector*) gallery.GetStartingSolution();

        // Create linear problem
        Epetra_LinearProblem problem(A, x, b);

        // Create preconditioner
        Ifpack_IC *M = new Ifpack_IC(A);
        Teuchos::ParameterList ifpackList;
        int k = 2;
        double alpha = 0.0;
        double rho = 1.0;
        double droptol = 0.0;
        ifpackList.set("fact: level-of-fill", k);
        ifpackList.set("fact: absolute threshold", alpha);
        ifpackList.set("fact: relative threshold", rho);
        ifpackList.set("fact: drop tolerance", droptol);
        M->SetParameters(ifpackList);
        // Compute() is called twice on purpose: the second call simulates
        // recomputing the factorization within a time increment.
        M->Compute();
        M->Compute();

        // Create solver object
        AztecOO solver(problem);
        solver.SetPrecOperator(M);
        Teuchos::ParameterList aztecList;
        aztecList.set("AZ_conv", AZ_rhs);
        aztecList.set("AZ_solver", AZ_cg);
        aztecList.set("AZ_output", AZ_all);
        solver.SetParameters(aztecList, true);

        // Solve (at most 10000 iterations, convergence tolerance 1e-3)
        solver.Iterate(10000, 0.001);

        // Free memory
        delete M;

#ifdef HAVE_MPI
        MPI_Finalize();
#endif

        return 0;
}

When running this code with valgrind --leak-check=full, it reports that 360,004 bytes are definitely lost.
If Compute() is called only once, nothing is lost. According to valgrind, the leak seems to originate in Ifpack_AIJMatrix_alloc inside crout_ict.
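To reproduce, I compile the program and run it under valgrind like this (ifpack_ic_test is just the name of my test binary):

        valgrind --leak-check=full ./ifpack_ic_test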

Best regards
Rony Speck


