[Trilinos-Users] [EXTERNAL] ML matrix-matrix multiplication

Chris Siefert csiefer at sandia.gov
Tue Mar 26 11:15:31 MDT 2013


Pavel,

I recently checked in a substantial upgrade to the EpetraExt MMM, which 
should eliminate the performance advantage of using ML's MMM in the 
vast majority of cases.

It isn't in a release yet, but it is available in the public git 
repository: http://trilinos.sandia.gov/publicRepo/index.html

-Chris

On 03/25/2013 10:08 AM, Raymond Tuminaro wrote:
> ML's matrix-matrix multiply has different assumptions than Epetra's with respect to column maps and
> empty columns. ML makes sure that its matrices satisfy these assumptions, and we don't generally recommend
> that people coming from outside ML use ML's matrix-matrix multiply.
>
> -Ray
>
>
> On 03/25/13 06:56, Pavel Jiránek wrote:
>> Dear Trilinos people,
>>
>> I'm working on a small AMG code. I normally use MatrixMatrix from EpetraExt to compute the Galerkin matrices Pt * A * P
>> (or R * A * P in general), and everything has worked fine. I once saw, somewhere in the Trilinos source tree, a document
>> comparing the {E/T}petra matrix-matrix multiplications with the ones used in ML (in particular ML_Epetra_RAP), and I
>> thought I would try the ML routine in my code to see whether it improves performance, since at some points the
>> matrix-matrix multiplications take about 50% of the preconditioner setup time.
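>>
>> For reference, this is roughly how I form the triple product with EpetraExt (just a sketch; the helper name, the maps I
>> pass for the intermediate and coarse matrices, and the error handling are only how I happen to set things up):
>>
>>   #include "Epetra_CrsMatrix.h"
>>   #include "EpetraExt_MatrixMatrix.h"
>>
>>   // Form RAP = P^T * A * P in two passes with EpetraExt.
>>   // The row maps chosen for the intermediate and result matrices are my own choice.
>>   int FormGalerkin(const Epetra_CrsMatrix& A, const Epetra_CrsMatrix& P,
>>                    Epetra_CrsMatrix*& RAP)
>>   {
>>     Epetra_CrsMatrix AP(Copy, A.RowMap(), 0);
>>     int err = EpetraExt::MatrixMatrix::Multiply(A, false, P, false, AP);  // AP = A * P
>>     if (err != 0) return err;
>>
>>     RAP = new Epetra_CrsMatrix(Copy, P.DomainMap(), 0);                   // coarse row map
>>     return EpetraExt::MatrixMatrix::Multiply(P, true, AP, false, *RAP);   // RAP = P^T * AP
>>   }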
>>
>> However, there seems to be a small problem with the way I'm handling singletons in the linear problem (at
>> least that is how it looks). I'm using aggregation at the moment, and if there is a singleton in the matrix on some level (even a
>> singleton only with respect to the chosen connection-strength criterion), I just put a zero row in the prolongator at the
>> corresponding position. For example, for a simple 3D Laplacian on a 2x2x2 grid with an artificially large element
>> on the diagonal
>>
>> $ full(A)
>>
>> ans =
>>
>>     50    -1    -1     0    -1     0     0     0
>>     -1     6     0    -1     0    -1     0     0
>>     -1     0     6    -1     0     0    -1     0
>>      0    -1    -1     6     0     0     0    -1
>>     -1     0     0     0     6    -1    -1     0
>>      0    -1     0     0    -1     6     0    -1
>>      0     0    -1     0    -1     0     6    -1
>>      0     0     0    -1     0    -1    -1     6
>>
>> the prolongator looks like this:
>>
>> $ full(P)
>>
>> ans =
>>
>>          0         0
>>     0.5774         0
>>          0    0.5000
>>     0.5774         0
>>          0    0.5000
>>     0.5774         0
>>          0    0.5000
>>          0    0.5000
>>
>> (the restrictor is just the transpose of P). If I apply the strength criterion used, e.g., in smoothed aggregation,
>> the large diagonal element becomes a singleton.
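>>
>> In case it matters, this is essentially how I build that prolongator (a serial-only sketch reproducing the 8x2 P above;
>> the map and constructor details are just my setup, not anything prescribed by ML):
>>
>>   #include <cmath>
>>   #include "Epetra_SerialComm.h"
>>   #include "Epetra_Map.h"
>>   #include "Epetra_CrsMatrix.h"
>>
>>   int main()
>>   {
>>     Epetra_SerialComm comm;
>>     Epetra_Map fineMap(8, 0, comm);       // fine-level rows
>>     Epetra_Map coarseMap(2, 0, comm);     // coarse-level columns
>>
>>     Epetra_CrsMatrix P(Copy, fineMap, 1); // at most one nonzero per row
>>     double v1 = 1.0 / std::sqrt(3.0);     // ~0.5774, aggregate 0 (3 nodes)
>>     double v2 = 0.5;                      // aggregate 1 (4 nodes)
>>     int agg0 = 0, agg1 = 1;
>>     // Row 0 is the singleton and gets no entries at all -> zero row in P.
>>     P.InsertGlobalValues(1, 1, &v1, &agg0);
>>     P.InsertGlobalValues(2, 1, &v2, &agg1);
>>     P.InsertGlobalValues(3, 1, &v1, &agg0);
>>     P.InsertGlobalValues(4, 1, &v2, &agg1);
>>     P.InsertGlobalValues(5, 1, &v1, &agg0);
>>     P.InsertGlobalValues(6, 1, &v2, &agg1);
>>     P.InsertGlobalValues(7, 1, &v2, &agg1);
>>     P.FillComplete(coarseMap, fineMap);   // domain = coarse, range = fine
>>     return 0;
>>   }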
>>
>> Now, when I implement the RAP product using EpetraExt::MatrixMatrix::Multiply, it works just fine and I get the
>> "coarse-grid" matrix
>>
>> $ full(RAP_Epetra)
>>
>> ans =
>>
>>     4.6667   -1.1547
>>    -1.1547    4.5000
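>>
>> (A quick hand check: with p1 = 0.5774 * [0 1 0 1 0 1 0 0]' and p2 = 0.5 * [0 0 1 0 1 0 1 1]', the (1,1) entry is
>> p1' * A * p1 = (1/3) * ((6-1-1) + (-1+6+0) + (-1+0+6)) = 14/3 ~= 4.6667, so the EpetraExt result matches the exact
>> triple product.)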
>>
>> However, ML_Epetra_RAP returns
>>
>> $ full(RAP)
>>
>> ans =
>>
>>    -1.0000    2.3094
>>     4.0415   -0.2500
>>
>>
>> For problems without singletons (and hence no zero rows in P), the ML multiplications work fine. It seems to me that this
>> problem is linked to the occurrence of singletons or, more precisely, to the way I treat them (which, on the other
>> hand, seems quite natural to me). Is there some other "special" way in which singletons should be treated for ML? Or should the
>> prolongator (if it is even allowed to have zero rows in ML) have some other special property so that ML_Epetra_RAP works correctly?
>>
>> Thanks in advance.
>>
>> Best regards,
>>
>> Pavel
>>
>>
>>


