[Trilinos-Users] [EXTERNAL] How to distribute a Tpetra vector across memory spaces

Siefert, Christopher csiefer at sandia.gov
Mon Aug 10 09:46:00 MST 2020


Ben,

Each MPI rank has a memory space (via the node type) associated with it, so that's how you divvy things up, presuming you can get your MPI process/GPU bindings to do what you want.  In theory, you could create a Map object that mixes and matches node types (pack and unpack for migration should be insensitive to node type) and create your Vector based on that.  I say "in theory" because this is not a capability that is regularly tested (though IIRC, the handful of times it's been tried, it worked).
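
For the common case where each rank owns exactly one memory space, the node type goes into the Map and Vector template parameters.  Here's a minimal, untested sketch; I'm assuming a CUDA-enabled Tpetra build, the Kokkos wrapper node names that ship with Tpetra, and a made-up problem size:

    #include <Tpetra_Core.hpp>
    #include <Tpetra_Map.hpp>
    #include <Tpetra_Vector.hpp>
    #include <Teuchos_RCP.hpp>

    int main (int argc, char* argv[]) {
      Tpetra::ScopeGuard tpetraScope (&argc, &argv);
      {
        using LO = Tpetra::Map<>::local_ordinal_type;
        using GO = Tpetra::Map<>::global_ordinal_type;
        // The node type picks the memory space: KokkosCudaWrapperNode
        // keeps the Vector's data in GPU global memory; a build using
        // KokkosSerialWrapperNode would keep it in host memory instead.
        using node_type = Kokkos::Compat::KokkosCudaWrapperNode;
        using map_type  = Tpetra::Map<LO, GO, node_type>;
        using vec_type  = Tpetra::Vector<double, LO, GO, node_type>;

        auto comm = Tpetra::getDefaultComm ();
        const Tpetra::global_size_t numGlobalEntries = 1000;  // hypothetical size
        auto map = Teuchos::rcp (new map_type (numGlobalEntries, 0, comm));

        vec_type x (map);   // entries live in this rank's memory space
        x.putScalar (1.0);
      }
      return 0;
    }

Which GPU a given rank lands on is decided at Kokkos initialization (through your MPI launcher's GPU binding or CUDA_VISIBLE_DEVICES), not by the Map itself.  The mixed-node Map I mentioned above is the "in theory" part, so I won't try to sketch that here.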

-Chris
________________________________________
From: Trilinos-Users <trilinos-users-bounces at trilinos.org> on behalf of Ben Cowan <benc at txcorp.com>
Sent: Sunday, August 9, 2020 3:24 PM
To: trilinos-users at trilinos.org
Subject: [EXTERNAL] [Trilinos-Users] How to distribute a Tpetra vector across memory spaces

How do I distribute a Tpetra vector across multiple memory spaces on a single node? For instance, some of the elements should reside in global memory on one GPU, some in global memory on another GPU, and others in host memory. The documentation doesn't say explicitly, but it seems like Tpetra::Map requires (at least) one process per memory space. Is this correct?

Thanks,
Ben




