[Trilinos-Users] Write a non-contiguous Tpetra::Vector

Heroux, Mike MHeroux at CSBSJU.EDU
Mon Nov 11 10:14:18 EST 2019


The way you are doing it is probably the best option.  I am glad you found something that works.

From: Trilinos-Users <trilinos-users-bounces at trilinos.org> on behalf of Wen Yan <wenyan4work at gmail.com>
Date: Sunday, November 10, 2019 at 2:01 PM
To: "maherou at sandia.gov" <maherou at sandia.gov>
Cc: "trilinos-users at trilinos.org" <trilinos-users at trilinos.org>
Subject: Re: [Trilinos-Users] Write a non-contiguous Tpetra::Vector

Thank you, Mike! I tried the debug mode, and it does seem that the writer prints entries rank by rank:
    0: Wait on receive-data receive from Process 1
    0: Write entries from Process 1
    0: Wait on receive-data receive from Process 2
    0: Write entries from Process 2
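
(In case it is useful to others: one way to get this kind of output, assuming the
writeDenseFile overload that takes optional err/dbg Teuchos::FancyOStream arguments,
is roughly the following; the exact overload may differ between Tpetra versions.)

    // Sketch only: send the Writer's debugging output to std::cerr.
    auto dbg = Teuchos::getFancyOStream(Teuchos::rcpFromRef(std::cerr));
    Tpetra::MatrixMarket::Writer<TV>::writeDenseFile("vec.mtx", vec,
                                                     Teuchos::null /* err */, dbg);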

Then, if I always want to print the MultiVector entries in the mathematical order, what is the best practice when performance is not a concern? I tried building a new MultiVector on a contiguous map and then doing an Import from the one with the non-contiguous map. This seems to work, but I am curious whether there is a better way.
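
Concretely, what I tried looks roughly like this (a sketch reusing the TV/TMAP typedefs
from my attached test code; the exact calls may need adjusting):

    // Build a uniform, contiguous map with the same global size
    // (needs #include <Tpetra_Import.hpp>).
    auto contigMap = Teuchos::rcp(new TMAP(map->getGlobalNumElements(), 0, commRcp));
    // Import from the non-contiguous map (source) into the contiguous one (target).
    Tpetra::Import<int, int> importer(map, contigMap);
    auto contigVec = Teuchos::rcp(new TV(contigMap, true));
    contigVec->doImport(*vec, importer, Tpetra::INSERT);
    // Writing contigVec now emits the entries in ascending-GID (mathematical) order.
    Tpetra::MatrixMarket::Writer<TV>::writeDenseFile("vec_ordered.mtx", contigVec);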

Thank you again for your help.
Wen Yan


On Sat, Nov 9, 2019 at 10:43 PM Heroux, Michael A <maherou at sandia.gov> wrote:
Sorry, the debug variable is a bool set to false by default.

> On Nov 9, 2019, at 4:49 PM, Heroux, Mike <MHeroux at csbsju.edu> wrote:
>
> Wen Yan,
>
> One suggestion is to turn on debug mode.  There is a final optional argument to the Tpetra Writer implementation, called debug, that is set to false by default.  Setting it to true could give you some insight.
>
> Mike
>
>> On Nov 7, 2019, at 10:44 AM, Wen Yan <wenyan4work at gmail.com> wrote:
>>
>> Hi Trilinos users & developers,
>>
>> Could you please help me understand how Tpetra::MatrixMarket::Writer works for a non-contiguous Vector?
>> I have a vector partitioned like this across two ranks:
>> [0,1,0,0,1,1]
>> Here each entry is the rank that owns it. I then write the vector and its map to files using:
>>    Tpetra::MatrixMarket::Writer::writeMapFile( );
>>    Tpetra::MatrixMarket::Writer::writeDenseFile( );
>>
>> The results I get are:
>> %%MatrixMarket matrix array integer general
>> % Format: Version 2.0
>> %
>> % This file encodes a Tpetra::Map.
>> % It is stored as a dense vector, with twice as many
>> % entries as the global number of GIDs (global indices).
>> % (GID, PID) pairs are stored contiguously, where the PID
>> % is the rank of the process owning that GID.
>> 12 1
>> 0
>> 0
>> 2
>> 0
>> 3
>> 0
>> 1
>> 1
>> 4
>> 1
>> 5
>> 1
>> %%MatrixMarket matrix array real general
>> 6 1
>> 0.00000000000000000e+00
>> 0.00000000000000000e+00
>> 0.00000000000000000e+00
>> 1.00000000000000000e+00
>> 1.00000000000000000e+00
>> 1.00000000000000000e+00
>>
>> The map written to the file is what I expected, but the vector is not. I expected [0,1,0,0,1,1], but it seems that all entries owned by one rank are grouped together and written to the file.
>> Is this how Tpetra::MatrixMarket::Writer::writeDenseFile( ) is designed to work, or did I do something wrong?
>> The short test code is attached to this email.
>>
>> Thank you very much!
>> Wen Yan
>>
>> // Teuchos utility
>> #include <Teuchos_ArrayViewDecl.hpp>
>> #include <Teuchos_DefaultMpiComm.hpp> // for Teuchos::MpiComm
>> #include <Teuchos_GlobalMPISession.hpp>
>> #include <Teuchos_SerialDenseMatrix.hpp>
>> #include <Teuchos_TimeMonitor.hpp>
>> #include <Teuchos_oblackholestream.hpp>
>>
>> // Tpetra container
>> #include <MatrixMarket_Tpetra.hpp>
>> #include <Tpetra_Core.hpp>
>> #include <Tpetra_CrsMatrix.hpp>
>> #include <Tpetra_Map.hpp>
>> #include <Tpetra_MultiVector.hpp>
>> #include <Tpetra_Operator.hpp>
>> #include <Tpetra_RowMatrixTransposer_decl.hpp>
>> #include <Tpetra_Vector.hpp>
>> #include <Tpetra_Version.hpp>
>>
>> void test() {
>>
>>    using TMAP = Tpetra::Map<int, int>;          ///< default Tpetra::Map type
>>    using TV = Tpetra::Vector<double, int, int>; ///< default Tpetra::Vector type
>>
>>    int rank, nprocs;
>>    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
>>    const int localSize1 = 1;
>>    const int localSize2 = 2;
>>    const int globalSize1 = localSize1 * nprocs;
>>    const int globalSize2 = localSize2 * nprocs;
>>
>>    std::vector<int> vecGlobalIndexOnLocal(localSize1 + localSize2);
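>>    // Non-contiguous ownership: rank r owns GIDs [r*localSize1, (r+1)*localSize1) from the
>>    // first block and [globalSize1 + r*localSize2, globalSize1 + (r+1)*localSize2) from the
>>    // second block; on two ranks this gives rank 0 -> {0, 2, 3} and rank 1 -> {1, 4, 5}.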
>>    for (int i = 0; i < localSize1; i++) {
>>        vecGlobalIndexOnLocal[i] = rank * localSize1 + i;
>>    }
>>    for (int i = 0; i < localSize2; i++) {
>>        vecGlobalIndexOnLocal[i + localSize1] = rank * localSize2 + i + globalSize1;
>>    }
>>    auto commRcp = Teuchos::rcp(new Teuchos::MpiComm<int>(MPI_COMM_WORLD));
>>    auto map = Teuchos::rcp(
>>        new TMAP(globalSize1 + globalSize2, vecGlobalIndexOnLocal.data(), localSize1 + localSize2, 0, commRcp));
>>
>>    auto vec = Teuchos::rcp(new TV(map, true));
>>    for (int i = 0; i < localSize1 + localSize2; i++) {
>>        vec->replaceLocalValue(i, rank);
>>    }
>>
>>    Tpetra::MatrixMarket::Writer<TV> writer;
>>    writer.writeMapFile("map.mtx", *map);
>>    writer.writeDenseFile("vec.mtx", vec);
>> }
>>
>> int main(int argc, char **argv) {
>>    MPI_Init(&argc, &argv);
>>
>>    test();
>>
>>    MPI_Finalize();
>>    return 0;
>> }
>> _______________________________________________
>> Trilinos-Users mailing list
>> Trilinos-Users at trilinos.org
>> https://trilinos.org/mailman/listinfo/trilinos-users
>