[Trilinos-Users] Question concerning the Epetra BlockMap function : RemoteIDList

Heroux, Michael A maherou at sandia.gov
Mon Sep 21 08:53:21 EDT 2015


John,

I am glad you are able to move forward.  Thanks for the simple test case.  It will be helpful.

Regarding the collective behavior, there is no way around it in some cases, since discovering global ID ownership involves all processes.

But there may be ways to make the process easier.

Thanks.

Mike

> On Sep 21, 2015, at 1:49 AM, John Jomo <john.jomo at tum.de> wrote:
> 
> Hi Mike,
> 
> I wrote a small test which shows that RemoteIDList() stalls when it
> is not called collectively. The test is attached to this mail.
> 
> I managed to find a workaround in my algorithm. For my application, I first collect the indices to be queried and make a single call to
> RemoteIDList(). But I would still be interested to know whether a non-collective call to RemoteIDList() is possible.
> 
> Short description of the test:
> 16 indices are divided among 4 processes, and a connectivity map defining how the different indices are connected to one another is stored on
> every process. This mimics the sparsity pattern of a finite element mesh. I chose a non-symmetric map for the test and query the location of a set of indices on every process.
> 
> Thanks for all the help. At least now I can continue developing my code. :)
> 
> John.
> 
> 
>> On 18.09.2015 00:52, Heroux, Michael A wrote:
>> I believe the collective operation is only performed on the first call for
>> any given map.  If this is the case, you should be able to do a pre-call
>> of the RemoteIDList() method, making sure that all processes are
>> participating.  Then subsequent calls would be safe.
>> 
>> It is worth a try.  If that does not work, please send some output, or if
>> making a small test case is possible, please do that.
>> 
>> Thanks.
>> 
>> Mike
>> 
>> On 9/17/15, 3:54 PM, "Trilinos-Users on behalf of John Jomo"
>> <trilinos-users-bounces at trilinos.org on behalf of john.jomo at tum.de> wrote:
>> 
>>> Hello Mike,
>>> 
>>> I looked at my algorithm again and found that not all processes were
>>> participating in the call, hence the stall.
>>> Is there a way to make this call on a subset of processes?
>>> 
>>> Thanks for the help.
>>> 
>>> John.
>>> 
>>>> On 17.09.2015 19:32, Heroux, Mike wrote:
>>>> John,
>>>> 
>>>> A few thoughts:
>>>> 
>>>> - Generally speaking, this is a collective call, so all MPI processes
>>>> need
>>>> to participate in the call.  Some logic paths through the function don't
>>>> require communication if the situation is simple enough to compute with
>>>> local data.
>>>> - Check error codes to see if there is a non-zero value being returned.
>>>> 
>>>> Mike
>>>> 
>>>> On 9/17/15, 11:09 AM, "Trilinos-Users on behalf of John Jomo"
>>>> <trilinos-users-bounces at trilinos.org on behalf of john.jomo at tum.de>
>>>> wrote:
>>>> 
>>>>> Hello everyone,
>>>>> 
>>>>> here is a question concerning Epetra:
>>>>> 
>>>>> I have created a distributed Epetra_BlockMap and I'm trying to find out
>>>>> on which processes a set of Ids reside using the RemoteIDList command.
>>>>> 
>>>>> I run a loop over the function, making repeated queries with different
>>>>> values of "IdsToQuery":
>>>>> 
>>>>>     int error = myMap->RemoteIDList( numberOfIds, &IdsToQuery[0],
>>>>>                                      &processorIds[0], &localIds[0] );
>>>>> 
>>>>> For some strange reason, some processes find all IDs while others stall
>>>>> within the function, causing a deadlock. :(
>>>>> 
>>>>> I thought the problem was caused by multithreading, so I pinned the
>>>>> MPI processes to the cores and made sure that I only used one process
>>>>> per core. This, however, did not solve the problem.
>>>>> 
>>>>> I would really appreciate some help on this.
>>>>> 
>>>>> Thanks in advance.
>>>>> 
>>>>> 
>>>>> John.
>>>>> 
>>>>> 
>>>>> 
>>>>> -- 
>>>>> John Jomo M.Sc.
>>>>> Technische Universität München
>>>>> Computation in Engineering
>>>>> Simulation in Applied Mechanics - SAM
>>>>> Arcisstraße 21
>>>>> 80333 München
>>>>> Tel.:     0049 / 89 / 289 25064
>>>>> Fax:      0049 / 89 / 289 25051
>>>>> E-Mail:   john.jomo at tum.de<mailto:john.jomo at tum.de>
>>>>> Internet: www.cie.bgu.tum.de<http://www.cie.bgu.tum.de/>
>>>>> 
>>>>> _______________________________________________
>>>>> Trilinos-Users mailing list
>>>>> Trilinos-Users at trilinos.org
>>>>> https://trilinos.org/mailman/listinfo/trilinos-users
> 
> 
> <remoteIdTest.cpp>

