[Trilinos-Users] Using OpenMP support in Trilinos

Eric Marttila eric.marttila at thermoanalytics.com
Wed Sep 19 14:17:04 MDT 2012


Hello,

I'm using AztecOO and ML to solve a linear system.  I've been running my 
simulation in serial mode, but now I would like to take advantage of multiple 
cores by using the OpenMP support available in Trilinos.  I realize that the 
packages I'm using are not fully multithreaded with OpenMP, but since they 
have at least some level of OpenMP support, I'm hoping for a performance 
improvement.

I reconfigured and built Trilinos 10.12.2 with 

-D Trilinos_ENABLE_OpenMP:BOOL=ON

...but when I run my simulation it is slower than when Trilinos is 
configured without the above option.  I have set the environment variable 
OMP_NUM_THREADS to the desired number of threads.
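
For reference, here is roughly how I invoke cmake (the paths are 
placeholders, and I've listed only the enables relevant to this question; 
the real configure line enables more packages):

  cmake \
    -D Trilinos_ENABLE_OpenMP:BOOL=ON \
    -D Trilinos_ENABLE_AztecOO:BOOL=ON \
    -D Trilinos_ENABLE_ML:BOOL=ON \
    -D CMAKE_INSTALL_PREFIX:PATH=/path/to/install \
    /path/to/Trilinos

  export OMP_NUM_THREADS=4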

I was also able to reproduce this behavior with one of the Trilinos example 
programs (attached below), so I suspect I am missing something obvious in how 
the OpenMP support is meant to be used.

Does anybody have thoughts on what I might be missing?
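
As a sanity check (independent of Trilinos), a trivial program like the one 
below can confirm whether OpenMP honors OMP_NUM_THREADS at all, compiled 
with GCC-style flags (g++ -fopenmp):

  // check that OpenMP sees the threads requested via OMP_NUM_THREADS
  #include <omp.h>
  #include <iostream>

  int main()
  {
    std::cout << "omp_get_max_threads() = "
              << omp_get_max_threads() << "\n";
  #pragma omp parallel
  #pragma omp critical
    std::cout << "hello from thread " << omp_get_thread_num()
              << " of " << omp_get_num_threads() << "\n";
    return 0;
  }

If that does not report the expected thread count, the problem is in the 
environment rather than in Trilinos.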

Thanks.
--Eric

-- 
Eric A. Marttila
ThermoAnalytics, Inc.
23440 Airpark Blvd.
Calumet, MI 49913

email: Eric.Marttila at ThermoAnalytics.com
phone: 810-636-2443
fax:   906-482-9755
web: http://www.thermoanalytics.com
-------------- next part --------------
// Use of ML as a black-box smoothed aggregation preconditioner

#include "Epetra_ConfigDefs.h"
#ifdef HAVE_MPI
#include "mpi.h"
#include "Epetra_MpiComm.h"
#else
#include "Epetra_SerialComm.h"
#endif
#include "Epetra_Map.h"
#include "Epetra_Vector.h"
#include "Epetra_RowMatrix.h"
#include "Epetra_CrsMatrix.h"
#include "Epetra_LinearProblem.h"
#include "Epetra_Time.h"
#include "AztecOO.h"

// includes required by ML
#include "ml_epetra_preconditioner.h"

#include "Trilinos_Util_CrsMatrixGallery.h"

using namespace Teuchos;
using namespace Trilinos_Util;

#include <iostream>

int main(int argc, char *argv[])
{

#ifdef EPETRA_MPI
  MPI_Init(&argc,&argv);
  Epetra_MpiComm Comm(MPI_COMM_WORLD);
#else
  Epetra_SerialComm Comm;
#endif

  int problemSize = 1000000;
  if (argc > 1) {
    problemSize = atoi(argv[1]);
  }

  cerr << "Using a problem size of " << problemSize << "\n";

  Epetra_Time Time(Comm);

  // initialize a Gallery object
  CrsMatrixGallery Gallery("laplace_3d", Comm);

  Gallery.Set("problem_size", problemSize);

  // retrieve pointers to the matrix and linear problem
  Epetra_RowMatrix * A = Gallery.GetMatrix();

  Epetra_LinearProblem * Problem = Gallery.GetLinearProblem();

  // Construct a solver object for this problem
  AztecOO solver(*Problem);

  // create the preconditioner object and compute hierarchy
  ML_Epetra::MultiLevelPreconditioner * MLPrec = 
    new ML_Epetra::MultiLevelPreconditioner(*A, true);

  // tell AztecOO to use this preconditioner, then solve
  solver.SetPrecOperator(MLPrec);

  solver.SetAztecOption(AZ_solver, AZ_cg);
  solver.SetAztecOption(AZ_output, 1);
  int Niters = 150;

  solver.Iterate(Niters, 1e-10);

  // print out some information about the preconditioner
  if( Comm.MyPID() == 0 ) cout << MLPrec->GetOutputList();

  delete MLPrec;

  // compute the real residual

  double residual, diff;

  Gallery.ComputeResidual(&residual);
  Gallery.ComputeDiffBetweenStartingAndExactSolutions(&diff);

  if( Comm.MyPID()==0 ) {

    cout << "||b-Ax||_2 = " << residual << endl;
    cout << "||x_exact - x||_2 = " << diff << endl;

    cout << "Total Time = " << Time.ElapsedTime() << endl;
  }

  // flag failure if the true residual is too large, but make sure
  // MPI is shut down cleanly before exiting
  int status = (residual > 1e-5) ? EXIT_FAILURE : EXIT_SUCCESS;

#ifdef EPETRA_MPI
  MPI_Finalize();
#endif
  return(status);
}
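
For completeness, this is the general shape of how I build and run the 
example above when comparing timings (paths are placeholders; the link line 
is abbreviated, since the full library list would normally come from the 
Makefile.export files that the Trilinos build installs):

  g++ -O2 -fopenmp example.cpp -o example \
      -I/path/to/trilinos/include -L/path/to/trilinos/lib \
      -lml -laztecoo -ltriutils -lepetra -lteuchos \
      -llapack -lblas

  OMP_NUM_THREADS=1 ./example 1000000
  OMP_NUM_THREADS=4 ./example 1000000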

