Arithmetic without limitations?
arndt at jjj.de
Mon Feb 15 10:44:48 CET 2010
* Pierrick Gaudry <pierrick.gaudry at gmail.com> [Feb 15. 2010 09:35]:
> Torbjörn said:
> > You've created a benchmark that could be useful for measuring disk
> > performance. The best short-term GMP speedup would surely be to get
> > yourself an SSD disk. :-)
> Indeed, before starting a big project like allowing out of core
> computation in GMP, one should really take into account the fact that SSD
> disks are now more and more frequent.
> If, as we sometimes hear, the classical hard drives are going to die and
> be replaced by SSD, then Torbjörn's approach of trusting the OS's swap
> system might actually become efficient enough for standard applications,
> since fragmentation is not a big problem for SSD.
Note that SSDs are not much bigger than the total RAM of
a well-equipped system. And they are (and will remain, for
the foreseeable future) quite bleeping expensive.
The price of the disks used in Bellard's computation,
had they been SSDs, would have been ...?
The classical hard disks are not going away anytime soon.
The main advantage of SSDs is their tiny seek time, which
for our purposes is pointless(!). Combine a bunch of classical
disks into some RAID-whatever and you get read and write
speeds of >500 MB/sec. This is fast enough; just compare it
to the time it takes to finish a size-1GB FFT.
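A back-of-envelope comparison makes the point that sequential bandwidth, not seek time, is what matters here. The 500 MB/sec figure is from above; the FFT cost model (a hypothetical c * n * log2(n) operation count at an assumed ops/sec rate) is purely illustrative, not a GMP measurement:

```python
# Back-of-envelope: time to stream a 1 GB operand off a RAID array
# versus a rough arithmetic-cost estimate for an FFT over that data.
# The 500 MB/s throughput is the figure quoted in the post; the FFT
# cost model below (constant c, ops_per_sec rate) is a hypothetical
# illustration only.

import math

GB = 1 << 30  # bytes

def stream_seconds(nbytes, mb_per_sec=500):
    """Time to sequentially read/write nbytes at a sustained rate."""
    return nbytes / (mb_per_sec * 1e6)

def fft_seconds(nbytes, limb_bytes=8, ops_per_sec=1e9, c=5.0):
    """Rough c * n * log2(n) cost model for an FFT over n limbs."""
    n = nbytes // limb_bytes
    return c * n * math.log2(n) / ops_per_sec

io = stream_seconds(GB)   # ~2.1 s to stream 1 GB at 500 MB/s
fft = fft_seconds(GB)     # noticeably longer under this cost model
print(f"stream 1 GB: {io:.1f} s, FFT estimate: {fft:.0f} s")
```

Under these assumed numbers the I/O is a small fraction of the FFT work itself, so sequential disk bandwidth is not the bottleneck, and seek time never enters the picture.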
I'd advise to _not_ trust the O/S's swap mechanism;
it is likely optimized for a very different scenario.
(But yes, using swap with SSDs (whose tiny seek time
then helps a lot) is a neat no-brainer approach to push
things by just throwing in money.)
Take home message: Bellard did the right thing.