[Trilinos-Users] Question concerning the Epetra BlockMap function : RemoteIDList
Heroux, Michael A
maherou at sandia.gov
Thu Sep 17 18:52:31 EDT 2015
I believe the collective operation is only performed on the first call for
any given map. If this is the case, you should be able to do a pre-call
of the RemoteIDList() method, making sure that all processes are
participating. Then subsequent calls would be safe.
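For illustration, here is a minimal sketch of that pre-call pattern, assuming an Epetra_BlockMap already constructed on all processes. The helper name warmUpDirectory and the dummy-query choice are mine, not part of Epetra; this is an untested sketch of the idea, not a verified fix.

```cpp
// Sketch (assumption, not verified): the first RemoteIDList() call may set up
// the map's distributed directory, which is collective, so we issue one dummy
// query that every MPI rank participates in before any subset-only queries.
#include "Epetra_BlockMap.h"  // Trilinos header, assumed available

void warmUpDirectory(const Epetra_BlockMap& map)
{
  // One GID per rank is enough; the point is only that every process
  // enters RemoteIDList() together. Ranks with no local elements query
  // GID 0 as a placeholder.
  int gid = (map.NumMyElements() > 0) ? map.GID(0) : 0;
  int pid = -1;
  int lid = -1;
  map.RemoteIDList(1, &gid, &pid, &lid);  // collective on the first call
}
```

If Mike's caching assumption holds, calling this once on all ranks before the main loop should let later RemoteIDList() queries from a subset of processes avoid the collective path.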
It is worth a try. If that does not work, please send some output or, if
possible, a small test case.
Thanks.
Mike
On 9/17/15, 3:54 PM, "Trilinos-Users on behalf of John Jomo"
<trilinos-users-bounces at trilinos.org on behalf of john.jomo at tum.de> wrote:
>Hello Mike,
>
>I looked at my algorithm again and found that not all processes were
>participating in the call, hence the stall.
>Is there a way to make this call on a subset of processes?
>
>Thanks for the help.
>
>John.
>
>On 17.09.2015 19:32, Heroux, Mike wrote:
>> John,
>>
>> A few thoughts:
>>
>> - Generally speaking, this is a collective call, so all MPI processes
>> need to participate in the call. Some logic paths through the function
>> don't require communication if the situation is simple enough to
>> compute with local data.
>> - Check error codes to see if there is a non-zero value being returned.
>>
>> Mike
>>
>> On 9/17/15, 11:09 AM, "Trilinos-Users on behalf of John Jomo"
>> <trilinos-users-bounces at trilinos.org on behalf of john.jomo at tum.de>
>>wrote:
>>
>>> Hello everyone,
>>>
>>> here is a question concerning Epetra:
>>>
>>> I have created a distributed Epetra_BlockMap and I'm trying to find
>>> out on which processes a set of IDs resides, using the RemoteIDList
>>> method.
>>>
>>> I run a loop over the function, making repeated queries with different
>>> values of "IdsToQuery":
>>>
>>>     int error = myMap->RemoteIDList( numberOfIds, &IdsToQuery[0],
>>>                                      &processorIds[0], &localIds[0] );
>>>
>>> For some strange reason, some processes find all IDs while others
>>> stall inside the function, causing a deadlock :(
>>>
>>> I thought the problem was caused by multithreading, so I pinned the
>>> MPI processes to cores and made sure that I only used one process
>>> per core. This, however, did not solve the problem.
>>>
>>> Would really appreciate some help on this.
>>>
>>> Thanks in advance.
>>>
>>>
>>> John.
>>>
>>>
>>>
>>> --
>>> John Jomo M.Sc.
>>> Technische Universität München
>>> Computation in Engineering
>>> Simulation in Applied Mechanics - SAM
>>> Arcisstraße 21
>>> 80333 München
>>> Tel.: 0049 / 89 / 289 25064
>>> Fax: 0049 / 89 / 289 25051
>>> E-Mail: john.jomo at tum.de
>>> Internet: www.cie.bgu.tum.de
>>>
>>> _______________________________________________
>>> Trilinos-Users mailing list
>>> Trilinos-Users at trilinos.org
>>> https://trilinos.org/mailman/listinfo/trilinos-users