abort on error - is this being addressed?
vincent at vinc17.net
Sat Aug 28 11:18:50 CEST 2010
On 2010-08-28 10:06:35 +0200, Joerg Arndt wrote:
> * Vincent Lefevre <vincent at vinc17.net> [Aug 28. 2010 09:37]:
> > Linux no longer blindly overcommits, so that if you have insanely
> > large computations, malloc will fail.
> This is (and IIRC always has been) adjustable via sysctl:
Yes, but the default has changed (in the past, malloc() always
succeeded if there was enough address space), and there are now
three strategies instead of two (vm.overcommit_memory = 0, 1 or 2),
the new one limiting overcommit even more strictly.
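For reference, the three modes can be inspected and switched with
sysctl; this is just the standard kernel knob, shown here as an
example (the values written are illustrative):

```shell
# The three overcommit strategies (see the kernel's
# Documentation/vm/overcommit-accounting):
#   0 = heuristic overcommit (the default)
#   1 = always overcommit, never check
#   2 = strict accounting: commit limit = swap + overcommit_ratio% of RAM
sysctl vm.overcommit_memory          # query the current mode
sysctl -w vm.overcommit_memory=2     # strict accounting (needs root)
sysctl -w vm.overcommit_ratio=80     # only consulted in mode 2
```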
> Even if the kernel overcommits (and one cannot change that)
> there is a "pretty safe" strategy:
> malloc all memory you'll need and write to it (to force
> getting actual RAM), then use this memory as a pool.
This is not always possible, e.g. in interactive use, or when the
range of values (or the precision) isn't known in advance. Anyway, I
agree with Torbjörn: either use customized allocation functions,
or trap SIGABRT.
Also, Maple uses GMP for integer operations, and it seems to handle
OOM quite well, though I haven't tested it extensively. I don't know
what method it uses...
Vincent Lefèvre <vincent at vinc17.net> - Web: <http://www.vinc17.net/>
100% accessible validated (X)HTML - Blog: <http://www.vinc17.net/blog/>
Work: CR INRIA - computer arithmetic / Arénaire project (LIP, ENS-Lyon)