[Trilinos-Users] Coupling NOX with deal.II for multiple MPI threads
Roland Richter
roland.richter at ntnu.no
Fri Apr 26 05:23:16 EDT 2019
Hi,
I am trying to couple NOX to my existing code, which is written with the
deal.II library. I solve my system with the JFNK method by supplying the
residual. The residual is computed with deal.II functions, so data has to
move from deal.II code into Trilinos variables, back to deal.II code, back
to NOX, and finally back to deal.II. I do it like this:
bool computeF(const Epetra_Vector &x, Epetra_Vector &FVec,
              NOX::Epetra::Interface::Required::FillType)
{
  // deal.II vectors
  f_vec.reinit(locally_relevant_dofs, mpi_communicator);
  residual_vec.reinit(locally_relevant_dofs, mpi_communicator);

  Epetra_Vector f_epetra_vec = Epetra_Vector(View, f_vec.trilinos_vector(), 0);

  residual_vec = 0.;
  f_epetra_vec = x;
  residual_function(f_vec, residual_vec); // function for calculating the residual

  for (auto index : residual_vec.locally_owned_elements())
    FVec[index] = residual_vec[index]; // problem!

  return true;
}
This works quite well on a single MPI process. But when running on two MPI
processes with 81 degrees of freedom, the local length of FVec drops to 63
elements (the number of locally relevant degrees of freedom, ghost values
included, of f_vec on each process), while the numbers of locally owned
degrees of freedom of the two f_vec vectors are 45 and 36, respectively
(81 dofs in total). So I am effectively writing 81 values into a vector of
length 63, which results in a segfault. Is there a way to handle this
correctly?
Thanks,
regards,
Roland