[Trilinos-Users] [EXTERNAL] IO and general questions

Carlos Breviglieri carbrevi at gmail.com
Fri Aug 16 10:32:52 MDT 2013


Greg, thanks for the quick reply!

It really clears up some of the assumptions I had formed from reading the
source code. The fact alone that at no point does a single proc hold the
entire mesh is reason enough to keep investigating the Trilinos
"ecosystem". Composition is indeed interesting, but I can move past it for
now and wait for it to stabilize.

Just to be sure: once you set up your problem for N procs, you can only
restart on the same number of processors, right? Where/what is the "epu"
executable you mention in item a.1?

Thanks again, I appreciate you taking the time to help me out!

Carlos Breviglieri


On Fri, Aug 16, 2013 at 12:49 PM, Sjaardema, Gregory D
<gdsjaar at sandia.gov> wrote:

>
>
>   From: Carlos Breviglieri <carbrevi at gmail.com>
> Date: Friday, August 16, 2013 9:20 AM
> To: "trilinos-users at software.sandia.gov" <
> trilinos-users at software.sandia.gov>
> Subject: [EXTERNAL] [Trilinos-Users] IO and general questions
>
>       Hello, I am Carlos, a PhD candidate in Brazil. I've recently been
> checking out Trilinos as a basis for a CFD application. My research focus
> is on high-order unstructured methods for aeronautical/space applications.
> I have watched the TUG videos and must say that it's a great way to get
> started. I've done most of the tutorials and read some of the Doxygen
> documentation.
>
> I've come across a couple of points with some packages that I hope someone
> can clarify. These are somewhat entry-point questions for me and, hopefully,
> might be useful to someone else getting started. I am using git sources from
> August 13th. I apologize if this is not the proper mailing list for this,
> and for the lengthy topic...
>
>  *SEACAS::Ioss*
>  I need to do parallel IO. I have experience with the CGNS/HDF5 library,
> and I am aware of the par_exo factory (Iopx) for the exodus format, which I
> got compiled with the appropriate CMake flags. I can switch to exodus, but
> I was not able to confirm the following from the source code:
>
>  a) Can it do parallel read/write plus parallel decomposition? From
> "packages/seacas/libraries/ioss/src/main/io_shell.C" it is possible to
> partition the domain (with zoltan or metis), but I believe that at some
> point one proc will hold the entire mesh, before the partitioner interface
> is called. Afterwards, it can compose the output, which is a feature I am
> interested in. Moreover, the source states that, at this point, no parallel
> RESTART is supported.
>
>      -- At no time is the entire mesh on a single processor.  The code
> initially performs a linear decomposition, assigning (#elements/#processors)
> elements to each processor.  This initial decomposition is then fed to
> zoltan or metis to get a better decomposition.
>
> So, the questions here are:
>  a.1) Is Iopx able to do parallel IO (I assume so since it uses the
> netcdf4 format)?
>     It is, but note that it is a fairly new capability and hasn't been
> fully productionized and optimized.  The automatic decomposition is farther
> along than the composition.  Note that the historical method of doing
> parallel IO with exodus has been to provide a "file-per-processor" at the
> beginning of the analysis, which then writes a file-per-processor.  The
> decomp script is used to decompose a mesh into a file-per-processor and the
> epu executable is used to join the file-per-processor output into a single
> file.
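>
> As a rough sketch, that workflow looks something like this (the option
> names below are illustrative assumptions rather than verified syntax, so
> check the help output of decomp and epu in your SEACAS build):
>
>   # split mesh.g into mesh.g.4.0 ... mesh.g.4.3 for a 4-processor run
>   decomp -p 4 mesh.g
>
>   # run the analysis; each rank reads and writes its own file
>   mpiexec -np 4 ./my_app mesh.g      # my_app is a placeholder name
>
>   # join the per-processor output (e.g. results.e.4.*) into one file
>   epu -auto results.e.4.0            # flag syntax is an assumption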
>
> a.2) Is Zoltan (I did not check Zoltan2) able to do the parallel
> decomposition internally (i.e., without relying on ParMETIS)?
>
>  It would be possible to modify the code to only support zoltan, but
> currently it is set up to require both zoltan and parmetis.
>
>
> a.3) Are the composition methods (joining the output solution files)
> experimental? (I had issues with netcdf library versioning.)
>
>  The composition method is definitely not production ready at this time.
>  There are efficiency issues with large numbers of processors and some
> tuning of hdf5 and mpiio settings is needed (as is probably some internal
> code work in Iopx::DatabaseIO).  There have been multiple bugs fixed in the
> parallel portions of netcdf; I would definitely recommend using
> netcdf-4.3.0 or later if you are planning to use the parallel-netcdf
> (either hdf5-based or pnetcdf-based).
>
>  Note that we hope to have a CGNS read/write capability for structured
> meshes in Ioss sometime in the next fiscal year.
>
> *STK*
>  b) I was able to read and decompose an exodus/netcdf4 mesh (through
> meshData.properties) in Panzer using the exodus stk_interface class, but I
> am not sure whether the decomposition and IO happen in parallel...?
>
>  Not sure about that method, but if you use the property method or the
> environment method to specify the DECOMPOSITION_METHOD (see 'e' below), it
> is done in parallel.
>
> c) I have not found any indication in the STK sources that it supports
> output composition... Can it be done? It uses internal write routines that
> act upon an Ioss database, which does support it. I have also checked
> tExodusReaderFactory.cpp from the Panzer package, which interfaces with
> STK, but such functionality was not present there.
>
>  The composition is selectable via the setting of "properties" that are
> passed down to the Ioss class.  I'm not sure how to do that in panzer.
>  Another method of doing this is by setting an environment variable prior
> to running the application.  For mesh composition, you would do something
> like:  export IOSS_PROPERTIES=COMPOSE_RESULTS=YES:COMPOSE_RESTART=YES
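>
> Concretely, that would be something like the following in the shell used
> to launch the job (the application and input names are just placeholders):
>
>   export IOSS_PROPERTIES=COMPOSE_RESULTS=YES:COMPOSE_RESTART=YES
>   mpiexec -np 16 ./my_app input.xml
>
> so that the ranks write a single composed results/restart file rather than
> one file per processor.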
>
> d) Is STK able to do AMRC (Adaptive Mesh Refinement/Coarsening)? I would
> use such a feature for shock-capturing algorithms.
>
> e) Assuming one is able to read a mesh in parallel (Iopx), each proc ends
> up with an unordered slice of the mesh. Is there functionality in STK to
> decompose it in parallel and send/receive the mesh data to the appropriate
> proc? Would this be related to the rebalance algorithms in STK?
>
>  The mesh decomposition capability can be enabled in the same way via
> "export IOSS_PROPERTIES=DECOMPOSITION_METHOD={method}" where {method} is
> one of linear, rcb, rib, hsfc, block, cyclic, random, kway, geom_kway,
> metis_sfc.
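>
> Assuming the decomposition and composition settings can be combined in the
> same variable the way the two COMPOSE settings were above (colon-separated),
> a full setup might look like this (rcb is just an example method):
>
>   export IOSS_PROPERTIES=DECOMPOSITION_METHOD=rcb:COMPOSE_RESULTS=YES:COMPOSE_RESTART=YES
>
> which asks Ioss to decompose the input mesh in parallel at read time and to
> compose the results/restart output into single files.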
>
>  --Greg Sjaardema
>
>    Regards
>
> Carlos Breviglieri
>
>