[Trilinos-Users] Memory allocation error in ML

Charles Boivin charles.boivin at mayahtt.com
Tue Dec 4 13:21:30 MST 2012


Hello,

I am using Trilinos 10.6.4, and I've uncovered a memory allocation error in ML. In utils/ml_memory.c, we have a function:

int ML_memory_alloc( void **memptr, unsigned int leng, char *name )

Pretty much all memory allocation in ML goes through this. The obvious problem right away is that the largest block that can be allocated is limited by a 32-bit unsigned integer, i.e. 4GB. The problem, however, is made even worse by the fact that, later in the function, this 'leng' is divided into chunks of sizeof(double). The allocation is then performed as such:

      var_ptr = (char *) ML_allocate(nchunks*ndouble);

Here, both 'nchunks' and 'ndouble' are 32-bit *signed* integers, which I believe means that the resulting product will be a 32-bit signed integer as well. For anything above 2GB, we're actually passing a negative value to ML_allocate() (at least, that's what happens on win64 and on linux). Agreed, cases like this do not come up very often, but with desktop machines now capable of holding at least 64GB of RAM, it *can* happen.

Furthermore, there are many places in ML where the 'leng' parameter passed to ML_memory_alloc() is itself cast to (int), too.

I've made use of the 'ml_size_t' type: I changed the integer variables involved to this type and removed the casts to int in the code, and that seems to have solved the issue for me, as far as I can test it. Is there any interest in fixing this in the code base? I can forward the modifications (based on 10.6.4) if so desired; just let me know in what form they would be needed.

Thanks,

Charles Boivin





More information about the Trilinos-Users mailing list