[Trilinos-Users] C++ user: Question about block Arnoldi method

Hoemmen, Mark mhoemme at sandia.gov
Mon Apr 28 11:40:51 MDT 2014

On 4/26/14, 12:00 PM, "trilinos-users-request at software.sandia.gov"
<trilinos-users-request at software.sandia.gov> wrote:
>Message: 1
>Date: Fri, 25 Apr 2014 19:20:12 +0000
>From: "Chowdhary, Kenny" <kchowdh at sandia.gov>
>To: "trilinos-users at software.sandia.gov"
>	<trilinos-users at software.sandia.gov>
>Subject: [Trilinos-Users] C++ user: Question about block arnoldi
>	method
>Dear user support,
>I am using the Block Krylov Schur SVD example found in the Anasazi
>library, linked for convenience here:
>I am trying to perform a weak scaling test, which fixes the amount of
>work done by each processor. I was unsure about how the "block size" and
>the "number of blocks" variables affect how much work is done by each
>processor. For example, in the current example, A is a 500x100 matrix. If
>I run this example with 5 processors, each processor will get 100 rows of
>A so I expect each processor to be doing the same amount of work.
>However, since the blockSize = 1 and numBlocks = 10, is the amount of
>work done by each processor really the same?

For block iterative eigensolvers (like Block Krylov-Schur) in general, the
block size does not affect the distribution of the matrix A.  Rather, the
block size controls the number of vectors that the method processes at
once.  Non-block Krylov methods work with a single vector at a time: each
iteration does a sparse matrix-vector multiply with one vector, and some
inner products, AXPYs, and norms with the result.  Block Krylov-Schur does
a sparse matrix-vector multiply with multiple vectors at a time, and then
does a block orthogonalization with the resulting vectors, again all at
the same time.  The distribution of the input matrix and the vectors over
MPI processes remains the same whether or not the method is a block
method.

More information about the Trilinos-Users mailing list