[Trilinos-Users] [EXTERNAL] many interacting blobs in Trilinos
a.solernou at leeds.ac.uk
Fri Jan 20 05:30:47 EST 2017
Thanks for your extensive and prompt answer. Trilinos looks promising,
and could be the way to go.
Answering your questions: our code is written in C++, and we continuously
develop new physics in it, so I would say Albany does not sound ideal. The
low-level building blocks we need are sparse linear algebra and iterative
linear solvers (CG), and we would mix all that with another MPI layer.
Browsing your website, I can see that there is support for that, and
that I should probably start reading the Tpetra documentation and examples.
I still have one concern. We want our code to perform, but it also needs to
be readable. We have PhD students who are supposed to work on the
PHYSICS side and code new modules in the program. How much extra work
would learning Trilinos put on their side? The library is huge and the
documentation terribly vast; I'm afraid this could be overwhelming.
On 01/19/2017 11:09 PM, Bill Spotz wrote:
> Dear Albert,
> As the User Experience capability area lead for Trilinos, perhaps I am best suited to answer (or start the discussion about) this question.
> There is a good chance that Trilinos could be appropriate for you. Just about everything in Trilinos is written for massively parallel architectures, via MPI, so this matches your needs. And for the most part, communication is hidden from the user and optimized. So at the very least, Trilinos deserves a hard look.
> Trilinos is a large suite of packages (each package representing one or more libraries), and each package started out as a component of a physics application code, and was pulled out because it was felt that the component could be useful to other physics codes. Trilinos provides an umbrella, a common build system and other infrastructure, and a framework for interoperability among the packages. The vast majority is written in C++, is object-oriented, and the most modern packages utilize advanced meta-programming techniques. These latest packages also stress performance portability, in which a single code base can run efficiently on both threaded architectures and GPUs, as well as a path for supporting other new architectures that may emerge.
> Trilinos started out as a set of three packages, one that supported linear algebra classes (distributed vectors and matrices that can compute distributed sparse matrix vector products, etc.), one that supported iterative linear solvers (based on Krylov methods) and one that supported algebraic preconditioning. Over the years, Trilinos has grown to support nonlinear solvers, eigensolvers, other preconditioning methods, mesh databases, finite element discretization support, load balancing, optimization, automatic differentiation, etc. We try to make as many packages interoperable with as many other packages as makes sense, while minimizing the required dependencies, so that users can use just what they need. As such, Trilinos packages are building blocks for physics application codes.
> Another path you may consider is Albany. This is a general finite element code that allows users to define their own evaluators, and thus address their own physics problems. It has been designed to use as much of Trilinos as possible, and the result is impressive: define your evaluators, and you get robust implicit methods, Jacobian and Hessian matrices for free, as well as embedded UQ, adjoints, inverse problems, and optimization and analysis capabilities. One of the latest features to be added is tetrahedral mesh adaptivity, which may be of interest to you.
> Sorry for the long response, but the short answers to your questions are more questions: what low-level building blocks do you need, or would you be interested in the high-level Albany approach? Is C++ acceptable, or a deal-breaker?
> Bill Spotz
> Sandia National Laboratories
> PO Box 5800 MS 1320
> Albuquerque, NM 87185
> (505) 845-0170
>> On Jan 19, 2017, at 10:30 AM, Albert Solernou <a.solernou at leeds.ac.uk> wrote:
>> Dear All,
>> In my team at the University of Leeds (UK) we have written a self-contained C++ code that simulates interacting blobs in water. More explicitly, it simulates the time evolution of visco-elastic bodies with irregular geometries, represented by unstructured tetrahedral meshes, which interact via a number of pair-pair potentials. Our work therefore combines the N^2 interacting-bodies problem (between external faces) with solving the Cauchy momentum equation to compute the time evolution of the system. At the moment we don't consider hydrodynamic effects, only the friction with the solvent, though we will be modelling hydrodynamics in the future.
>> We now have a strong interest in porting our code to MPI, and are considering using a third-party library.
>> Would Trilinos be the right library? How would its parallel layer cope with many small interacting bodies (meshes), and perhaps one huge long beam being bent? Could you please give some guidance on how best to spend my time learning to use it? Would it be easy for us to write extensions to Trilinos in case we needed them in the future?
>> Sorry for the long email, and for asking so many questions. I really only need some directions, as the library (or set of libraries) is really large.
>> Best regards,
>> Dr Albert Solernou
>> EPSRC Research Fellow,
>> Department of Physics and Astronomy,
>> University of Leeds
>> Tel: +44 (0)1133 431451
Dr Albert Solernou
EPSRC Research Fellow,
Department of Physics and Astronomy,
University of Leeds
Tel: +44 (0)1133 431451