[Trilinos-Users] Different convergence behavior of Anasazi with Tpetra and Epetra

Hoemmen, Mark mhoemme at sandia.gov
Wed Jun 25 12:14:20 MDT 2014


On 6/25/14, 12:00 PM, "trilinos-users-request at software.sandia.gov"
<trilinos-users-request at software.sandia.gov> wrote:
>Message: 1
>Date: Wed, 25 Jun 2014 10:34:39 +0900
>From: SungHwan Choi <sunghwanchoi91 at gmail.com>
>To: trilinos-users at software.sandia.gov
>Subject: [Trilinos-Users] Different convergence behavior of Anasazi
>	with Tpetra and Epetra
>
>Dear all,
>Hi, I am Sunghwan Choi.
>I recently moved from Epetra to Tpetra in order to run my code on GPUs.
>Before running it on the GPU, I tested the Tpetra version of the code in
>an MPI-parallel environment and ran into a convergence problem with
>Anasazi that does not occur with Epetra.
>
>I build the same matrix and diagonalize it with the same parameters on
>the same processors, but the Tpetra version requires more iterations.
>That is not what I expected: the speed of the two subpackages might
>differ, but the iteration count should be the same. Interestingly, the
>final results from the two subpackages are identical.
>
>I don't know whether this is expected behavior or not. If you have any
>information about this problem, please let me know.

Not even MPI promises that you will get the same answer (bitwise) if you
perform the same floating-point MPI_Reduce twice, so I would expect
slight differences in iteration counts across the two libraries.  Using
threads introduces further reproducibility issues.  However, if the
Tpetra version really fails to converge, or takes many more iterations
than the Epetra version, that could indicate a genuine problem.
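
To see why reduction order matters at all, here is a small standalone
C++ sketch (plain C++, no MPI, and not Anasazi code; the input values
are contrived purely for illustration).  Reordering a floating-point
sum changes the rounded result, which is the same effect a different
MPI_Reduce tree shape or thread schedule has on the inner products and
norms that drive an iterative eigensolver's convergence test:

#include <cstdio>

int main () {
  // Values chosen so that rounding depends on the order of summation.
  const double x[] = {1.0e16, 1.0, -1.0e16, 1.0};
  const int n = sizeof (x) / sizeof (x[0]);

  // Left-to-right sum: 1e16 + 1 rounds back to 1e16, so the total is 1.
  double forward = 0.0;
  for (int i = 0; i < n; ++i) {
    forward += x[i];
  }

  // Right-to-left sum: 1 - 1e16 rounds to -1e16, so the total is 0.
  double backward = 0.0;
  for (int i = n - 1; i >= 0; --i) {
    backward += x[i];
  }

  // Both orders are "correct", yet the results differ; that is all it
  // takes to perturb an iteration count.
  std::printf ("forward = %g, backward = %g\n", forward, backward);
  return 0;
}

Since Anasazi's convergence tests depend on such reductions, a few
iterations of difference between the Epetra and Tpetra stacks is
nothing to worry about.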

If this is really a problem for you, please send us the matrix and your
Anasazi parameters (preferably embedded in a test program).
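
For reference, a self-contained test program along the following lines
would be ideal.  This is only a sketch of the shape we have in mind: it
uses a 1-D Laplacian as a stand-in for your matrix, assumes the
BlockKrylovSchur solver manager, and the parameter values ("Which",
block size, tolerance, and so on) are placeholders for whatever you
actually pass to Anasazi:

#include <Teuchos_GlobalMPISession.hpp>
#include <Teuchos_ParameterList.hpp>
#include <Teuchos_RCP.hpp>
#include <Teuchos_Tuple.hpp>
#include <Tpetra_DefaultPlatform.hpp>
#include <Tpetra_Map.hpp>
#include <Tpetra_CrsMatrix.hpp>
#include <Tpetra_MultiVector.hpp>
#include <AnasaziBasicEigenproblem.hpp>
#include <AnasaziBlockKrylovSchurSolMgr.hpp>
#include <AnasaziTpetraAdapter.hpp>
#include <iostream>

int main (int argc, char* argv[]) {
  Teuchos::GlobalMPISession mpiSession (&argc, &argv);

  typedef double ST;
  typedef Tpetra::CrsMatrix<ST> crs_matrix_type;
  typedef Tpetra::MultiVector<ST> MV;
  typedef Tpetra::Operator<ST> OP;
  typedef Tpetra::Map<> map_type;
  typedef map_type::local_ordinal_type LO;
  typedef map_type::global_ordinal_type GO;

  Teuchos::RCP<const Teuchos::Comm<int> > comm =
    Tpetra::DefaultPlatform::getDefaultPlatform ().getComm ();

  // Stand-in for your matrix: a 1-D Laplacian distributed over all ranks.
  const Tpetra::global_size_t numGlobalRows = 1000;
  Teuchos::RCP<const map_type> map =
    Teuchos::rcp (new map_type (numGlobalRows, 0, comm));
  Teuchos::RCP<crs_matrix_type> A =
    Teuchos::rcp (new crs_matrix_type (map, 3));
  for (LO i = 0; i < static_cast<LO> (map->getNodeNumElements ()); ++i) {
    const GO row = map->getGlobalElement (i);
    if (row > 0) {
      A->insertGlobalValues (row, Teuchos::tuple<GO> (row - 1),
                             Teuchos::tuple<ST> (-1.0));
    }
    A->insertGlobalValues (row, Teuchos::tuple<GO> (row),
                           Teuchos::tuple<ST> (2.0));
    if (row + 1 < static_cast<GO> (numGlobalRows)) {
      A->insertGlobalValues (row, Teuchos::tuple<GO> (row + 1),
                             Teuchos::tuple<ST> (-1.0));
    }
  }
  A->fillComplete ();

  // Random initial block for the eigensolver.
  const int blockSize = 2;
  Teuchos::RCP<MV> ivec =
    Teuchos::rcp (new MV (A->getDomainMap (), blockSize));
  ivec->randomize ();

  // Eigenproblem A x = lambda x; the Laplacian above is symmetric.
  Teuchos::RCP<Anasazi::BasicEigenproblem<ST, MV, OP> > problem =
    Teuchos::rcp (new Anasazi::BasicEigenproblem<ST, MV, OP> (A, ivec));
  problem->setHermitian (true);
  problem->setNEV (4);
  problem->setProblem ();

  // Placeholder parameters: substitute the ones you actually use.
  Teuchos::ParameterList params;
  params.set ("Which", "LM");
  params.set ("Block Size", blockSize);
  params.set ("Num Blocks", 20);
  params.set ("Convergence Tolerance", 1.0e-8);

  Anasazi::BlockKrylovSchurSolMgr<ST, MV, OP> solver (problem, params);
  const Anasazi::ReturnType ret = solver.solve ();

  if (comm->getRank () == 0) {
    std::cout << "Converged: " << (ret == Anasazi::Converged ? "yes" : "no")
              << ", iterations: " << solver.getNumIters () << std::endl;
  }
  return 0;
}

Swap in your own matrix and ParameterList; the point is just that we can
build and run it as-is and reproduce the iteration counts you are seeing.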

Thanks!
mfh


