Memory usage for large multiplications

bodrato at
Wed Feb 3 09:52:45 CET 2010


> FYI, Linux has the ability to overallocate memory, allocating it
> really only when needed. In such a case, the former strategy is
> better. But I wouldn't recommend it, as in case of lack of memory,
> expect random processes to be killed by the OOM Killer. :(

The OOM killer can kill a process even in the "just-in-time allocation"
scenario, e.g. when two processes concurrently ask for memory.

By the way, the kernel can delay the real allocation, but it cannot detect
when memory is no longer needed and free it.
Maybe the itch/scratch strategy should be limited to a few megabytes (or
to the small-to-middle-size algorithms)? And bigger memory areas should
always be allocated and freed on the fly?
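A minimal sketch of such a threshold-based scratch policy, assuming a
hypothetical `scratch_alloc`/`scratch_free` interface and an illustrative
4 MiB limit (none of these names are GMP's actual API): small requests reuse
a cached buffer, while big ones are allocated and freed on the fly.

```c
#include <stdlib.h>

/* Hypothetical threshold: below it, reuse a cached scratch buffer;
   above it, allocate and free on the fly. Illustrative value only. */
#define SCRATCH_CACHE_LIMIT ((size_t) 4 * 1024 * 1024)  /* 4 MiB */

static void  *cached_buf  = NULL;
static size_t cached_size = 0;

/* Return a scratch area of at least `size` bytes. */
void *
scratch_alloc (size_t size)
{
  if (size > SCRATCH_CACHE_LIMIT)
    return malloc (size);       /* big: allocate just for this call */
  if (size > cached_size)       /* small: grow the cached buffer */
    {
      free (cached_buf);
      cached_buf  = malloc (size);
      cached_size = (cached_buf != NULL) ? size : 0;
    }
  return cached_buf;
}

/* Release a scratch area obtained from scratch_alloc. */
void
scratch_free (void *buf, size_t size)
{
  if (size > SCRATCH_CACHE_LIMIT)
    free (buf);                 /* big areas are freed immediately */
  /* small areas stay cached, so the kernel never sees them freed */
}
```

With this policy, only the (bounded) cache can stay resident between
operations; everything above the limit is returned to the allocator as soon
as the multiplication finishes, addressing the "cannot free delayed
allocations" issue above.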



More information about the gmp-devel mailing list