[Trilinos-Users] Problem with Epetra FECrsMatrix during the assembly phase

Williams, Alan B william at sandia.gov
Fri Jul 25 09:59:57 MDT 2008


Hi Cristiano,

Matrix assembly usually performs very well in parallel: for a fixed matrix, assembly time decreases as more processors are used.

When assembling a matrix for a distributed finite-element problem, FECrsMatrix places locally owned contributions directly into the underlying CrsMatrix and buffers the overlapping contributions (those associated with shared nodes, i.e., nodes on the boundary between processors). The overlapping contributions are communicated to the owning processor when the GlobalAssemble function is called.

Can you send a portion of your code? I would like to see the code that creates your matrix and defines the row-map, and also the code that inserts or sums global values into the matrix.

Alan


> -----Original Message-----
> From: trilinos-users-bounces at software.sandia.gov
> [mailto:trilinos-users-bounces at software.sandia.gov] On Behalf Of MLX82
> Sent: Friday, July 25, 2008 9:48 AM
> To: trilinos-users at software.sandia.gov
> Subject: [Trilinos-Users] Problem with Epetra FECrsMatrix
> during the assembly phase
>
> Hi,
>
> I am Cristiano Malossi from MOX (Department of Mathematics,
> Politecnico di Milano, Italy). I am having some problems with the
> assembly phase of a FEM code.
>
> If I run the code on a single CPU, it takes about 50 seconds to
> assemble a 500k-DOF matrix, which is a reasonable time. The same case
> takes about 2200 seconds to create the matrix if I use two or more
> CPUs. That is strange: I always use only one core to assemble the
> matrix (which in this case has a map with rows distributed over two
> or more processes), so I would expect an assembly time of about the
> same magnitude. Note that to assemble the matrix I loop over the
> elements, inserting many small blocks (4x4 or 16x16) with
> InsertGlobalValues(...).
>
> I have tried to solve the problem using graphs, but unfortunately
> nothing seems to have changed in the assembly phase (I am now using
> SumIntoGlobalValues()), although (fortunately) I did get a noticeable
> performance improvement when generating the ILU preconditioner and
> solving the linear system.
>
> Now I am trying to determine whether the problem is related to the
> redistribution of the matrix rows among the processes. To do this, I
> use the same communicator with a different "serial map" (all rows on
> process 0) to generate a "serial matrix". After assembling this new
> matrix, I use these commands to create a new distributed matrix:
>
> Epetra_Export Exp(map_Serial, map_MPI);
> MATRIX_MPI.Export(MATRIX_Serial, Exp, Add);
>
> The result is that on some small cases (both serial and MPI) it works
> fine, while on others (always MPI) it crashes. In any case, the
> assembly time of the serial matrix seems fine (50 seconds), and the
> time taken by the export is very low (1 or 2 seconds).
>
> Can anyone give me any advice on speeding up my assembly process?
>
> Thanks for any help.
>
>    Cristiano Malossi
>
>
>
>
> _______________________________________________
> Trilinos-Users mailing list
> Trilinos-Users at software.sandia.gov
> http://software.sandia.gov/mailman/listinfo/trilinos-users
>
