GMP test systems
Last modified: 2018-11-16


The GMP project maintains a comprehensive test environment consisting of physical and emulated systems. All test systems use non-routable IP addresses and are firewalled behind the main GMP network.

GMP developers with an account at shell.gmplib.org can log in to any of these systems via the gateway system shell. Only virtualised systems marked as running on servus are directly reachable; the other systems are reached via the system ashell, which acts as a secondary gateway. Log in to ashell from shell using this command:

shell$ ssh ashell
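
Alternatively, a single command from your own machine works with OpenSSH's ProxyJump option (a sketch; it assumes your account name matches on both gateways and that shell can resolve the internal name ashell):

yourhost$ ssh -J shell.gmplib.org ashell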

Most systems below are powered off except when tests are being run. The power-control system is a bit crude; the command for switching on [system] is

ashell$ pdu on [system]

and the test system will switch it off properly when done. Allow about 100 seconds (in a few cases more) before a system and its virtualised guests are up.
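
A complete session might therefore look like this (a sketch; 'sky' and its guest 'skygentoo64' are merely examples taken from the tables below, and the boot wait is approximate):

ashell$ pdu on sky                 # power on the host
ashell$ sleep 120                  # allow the host and its guests to boot
ashell$ ssh skygentoo64            # log in to a guest and run tests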

Please see the status page for system power information.

Table colour coding indicates where a machine is located (separate shades for powered-on and powered-off systems):

  TUG in Stockholm: access via shell.gmplib.org
  Salt: access via TUG's shell.gmplib.org, then as per the instructions above

Real hardware systems

name  arch  cpu type  cpu code name  cores  clk (MHz)  L1 (KiB)  L2 (KiB)  L3 (MiB)  ram (GiB)  virt  OS/kern  pwr stat  comment
servus x86-64 Xeon E5-1650v2 Ivy Bridge-EP 6 3500 6 × 32 6 × 256 12 96 xen gnu/linux gentoo on ssh to port 2202 reaches virtual system 'shell'
servile x86-64 Ryzen 2700X Zen/Pinnacle Ridge 8 3700-4300 8 × 32 8 × 512 16 64 xen gnu/linux gentoo on ssh via 'shell' above through a tunnel to virtual system 'ashell'
k8/panther x86-64 Athlon X2 4800+ K8/Brisbane 2 2500 2 × 64 2 × 512 8 xen gnu/linux gentoo pdu use guest systems, see next table (soon)
king x86-64 Phenom II 1090T K10/Thuban 6 3200-3600 6 × 64 6 × 512  6 32 xen gnu/linux gentoo pdu use guest systems, see next table
tutu x86-64 FX-4100 Bulldozer/Zambezi 4 3600-3800 4 × 16 2 × 2048  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
pile x86-64 FX-8350 Piledriver/Vishera 8 4000-4200 8 × 16 4 × 2048  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
exca x86-64 A12-9800 Excavator/Bristol Ridge 4 3800-4200 4 × 32 2 × 1024 32 xen gnu/linux gentoo pdu use guest systems, see next table
zen x86-64 Ryzen 1500X (mfg 1740) Zen/Summit Ridge 4 3500-3900 4 × 32 4 × 512 16 32 xen gnu/linux gentoo pdu use guest systems, see next table
piri x86-64 Ryzen 2700X Zen/Pinnacle Ridge 8 3700-4300 8 × 32 8 × 512 16 48 xen gnu/linux gentoo pdu use guest systems, see next table
element x86-64 Xeon Nocona 2 3400 2 × 16 1024 8 gnu/linux gentoo timer boots unreliably at power-on; might crash under load
cnr x86-64 Core2 E6400 Conroe 2 2133 2 × 32 2048 8 xen gnu/linux gentoo pdu use guest systems, see next table
pnr x86-64 Xeon E3110 Penryn/Wolfdale 2 3000 2 × 32 6144 8 xen gnu/linux gentoo pdu use guest systems, see next table
nhm x86-64 Xeon X3470 Nehalem/Lynnfield 4 2933-3200 4 × 32 4 × 256  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
sbr x86-64 Xeon E3-1270 Sandybridge 4 3400-3800 4 × 32 4 × 256  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
hwl x86-64 Xeon E3-1271v3 Haswell 4 3600-4000 4 × 32 4 × 256  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
bwl x86-64 Xeon E3-1285Lv4 Broadwell 4 3400-3800 4 × 32 4 × 256 6+128 32 xen gnu/linux gentoo pdu use guest systems, see next table
osky x86-64 Core i5 6600K Skylake 4 3500 4 × 32 4 × 256  6 16 kvm gnu/linux debian pdu misc virtualisation host
sky x86-64 Xeon E3-1270v5 Skylake 4 3600-4000 4 × 32 4 × 256  8 64 xen gnu/linux gentoo pdu use guest systems, see next table
bobcat x86-64 E-350 Zacate 2 1600 2 × 32 2 × 512 8 gnu/linux gentoo pdu
jag x86-64 Athlon 5350 Jaguar/Kabini 4 2050 4 × 32 2048 16 xen gnu/linux gentoo pdu use guest systems, see next table
gege x86-64 Atom 330 Diamondville 2 1600 24 512 4 gnu/linux gentoo pdu
slm x86-64 Atom C2758 Silvermont/Rangeley 8 2400 8 × 24 4096 32 xen gnu/linux gentoo pdu waiting to meet its maker due to Intel C2000 clock bug
glm x86-64 Atom C3758 Goldmont/Denverton 8 2200 8 × 24 16384 32 xen gnu/linux gentoo pdu use guest systems, see next table
tambo x86-32 Athlon K7/Barton 1 2083 64 512 2 gnu/linux gentoo timer
parks x86-32 Pentium4-2 Northwood 1 2600 8 512 0.5 gnu/linux gentoo timer
olympic ia-64 Itanium 2 McKinley 2 900 2 × 16 2 × 256 1.5 4 freebsd 10.3 pdu HP rx2620
g5 ppc64 PPC-970 2 1800 2 × 32 2 × 512 1.2 macos/darwin timer Power Mac G5
pi1 armv6 arm1176 1 700 0.5 gnu/linux on Raspberry Pi 1
odc1 armv7a Cortex-A5 4 1500 1 gnu/linux on Odroid-C1+
pi2 armv7a Cortex-A7 4 900 1 gnu/linux on Raspberry Pi 2
beagle armv7a Cortex-A8 1 1000 0.5 gnu/linux on Beaglebone black
panda armv7a Cortex-A9 2 1000 1 gnu/linux on Pandaboard; VERY CRASH-PRONE
nanot2 armv7a Cortex-A9 4 1400 1 gnu/linux on FriendlyELEC NanoPC-T2
odxu4 armv7a Cortex-A15 + Cortex-A7 4×2000 + 4×1400 2 gnu/linux on Odroid-XU4; Stable at last???
tinker armv7a Cortex-A17 4 1800 2 gnu/linux on ASUS Tinker Board
pi3 armv8 Cortex-A53 (32) 4 1400 1 gnu/linux on Raspberry Pi 3 B+
odc2 armv8 Cortex-A53 (64) 4 1536 2 gnu/linux on Odroid-C2
nanom4 armv8 Cortex-A72 + Cortex-A53 2×1800 + 4×1416 4 gnu/linux on FriendlyELEC NanoPi M4

Pictures of GMP development systems:
GMP main development systems
GMP misc development systems
GMP arm development systems

Type-1 virtualised x86 systems

The host names of the virtualised systems are formed from the physical host name, the abbreviated OS name, the OS flavour (32 or 64), and 'v' followed by the abbreviated version number; for example, skydeb64v9 is the 64-bit Debian 9 guest on the host sky. Some installs lack the version part. Exact names are given in the table below.

The primary system for each host is the one shown in bold. These systems are better maintained, have more memory, and are given several CPU cores.

system name  make   µarch        virtsys
king         AMD    thuban       gnu/linux/xen
tutu         AMD    bulldozer    gnu/linux/xen
pile         AMD    piledriver   gnu/linux/xen
exca         AMD    excavator    gnu/linux/xen
zen          AMD    zen          gnu/linux/xen
piri         AMD    zen+         gnu/linux/xen
cnr          Intel  conroe       gnu/linux/xen
pnr          Intel  penryn       gnu/linux/xen
nhm          Intel  nehalem      gnu/linux/xen
sbr          Intel  sandybridge  gnu/linux/xen
servus       Intel  ivybridge    gnu/linux/xen
hwl          Intel  haswell      gnu/linux/xen
bwl          Intel  broadwell    gnu/linux/xen
sky          Intel  skylake      gnu/linux/xen
slm          Intel  silvermont   gnu/linux/xen
glm          Intel  goldmont     gnu/linux/xen
osky         Intel  skylake      gnu/linux/kvm
jaguar       AMD    jaguar       gnu/linux/xen

The OS rows below list guest names in this host order; hosts lacking a guest for a given OS are simply absent from the row. Most guest-name prefixes match the host names, but note that servus guests use the prefix ivy and jaguar guests use jag.
freebsd 9.3 32 kingfbsd32v93 tutufbsd32v93 pilefbsd32v93 excafbsd32v93 zenfbsd32v93 pirifbsd32v93 cnrfbsd32v93 pnrfbsd32v93 nhmfbsd32v93 sbrfbsd32v93 ivyfbsd32v93 hwlfbsd32v93 bwlfbsd32v93 skyfbsd32v93 slmfbsd32v93 glmfbsd32v93 jagfbsd32v93
freebsd 9.3 64 kingfbsd64v93 tutufbsd64v93 pilefbsd64v93 excafbsd64v93 zenfbsd64v93 pirifbsd64v93 cnrfbsd64v93 pnrfbsd64v93 nhmfbsd64v93 sbrfbsd64v93 ivyfbsd64v93 hwlfbsd64v93 bwlfbsd64v93 skyfbsd64v93 slmfbsd64v93 glmfbsd64v93 jagfbsd64v93
freebsd 10 32 kingfbsd32v10 tutufbsd32v10 pilefbsd32v10 excafbsd32v10 zenfbsd32v10 pirifbsd32v10 cnrfbsd32v10 pnrfbsd32v10 nhmfbsd32v10 sbrfbsd32v10 ivyfbsd32v10 hwlfbsd32v10 bwlfbsd32v10 skyfbsd32v10 slmfbsd32v10 glmfbsd32v10 jagfbsd32v10
freebsd 10 64 kingfbsd64v10 tutufbsd64v10 pilefbsd64v10 excafbsd64v10 zenfbsd64v10 pirifbsd64v10 cnrfbsd64v10 pnrfbsd64v10 nhmfbsd64v10 sbrfbsd64v10 ivyfbsd64v10 hwlfbsd64v10 bwlfbsd64v10 skyfbsd64v10 slmfbsd64v10 glmfbsd64v10 jagfbsd64v10
freebsd 11 32 kingfbsd32v11 tutufbsd32v11 pilefbsd32v11 excafbsd32v11 zenfbsd32v11 pirifbsd32v11 cnrfbsd32v11 pnrfbsd32v11 nhmfbsd32v11 sbrfbsd32v11 ivyfbsd32v11 hwlfbsd32v11 bwlfbsd32v11 skyfbsd32v11 slmfbsd32v11 glmfbsd32v11 jagfbsd32v11
freebsd 11 64 kingfbsd64v11 tutufbsd64v11 pilefbsd64v11 excafbsd64v11 zenfbsd64v11 pirifbsd64v11 cnrfbsd64v11 pnrfbsd64v11 nhmfbsd64v11 sbrfbsd64v11 ivyfbsd64v11 hwlfbsd64v11 bwlfbsd64v11 skyfbsd64v11 slmfbsd64v11 glmfbsd64v11 jagfbsd64v11
netbsd 6.0 32 kingnbsd32v60 tutunbsd32v60 pilenbsd32v60 excanbsd32v60 zennbsd32v60 pirinbsd32v60 cnrnbsd32v60 pnrnbsd32v60 nhmnbsd32v60 sbrnbsd32v60 ivynbsd32v60 hwlnbsd32v60 bwlnbsd32v60 skynbsd32v60 slmnbsd32v60 glmnbsd32v60 jagnbsd32v60
netbsd 6.0 64 kingnbsd64v60 tutunbsd64v60 pilenbsd64v60 excanbsd64v60 zennbsd64v60 pirinbsd64v60 cnrnbsd64v60 pnrnbsd64v60 nhmnbsd64v60 sbrnbsd64v60 ivynbsd64v60 hwlnbsd64v60 bwlnbsd64v60 skynbsd64v60 slmnbsd64v60 glmnbsd64v60 jagnbsd64v60
netbsd 6.1 32 kingnbsd32v61 tutunbsd32v61 pilenbsd32v61 excanbsd32v61 zennbsd32v61 pirinbsd32v61 cnrnbsd32v61 pnrnbsd32v61 nhmnbsd32v61 sbrnbsd32v61 ivynbsd32v61 hwlnbsd32v61 bwlnbsd32v61 skynbsd32v61 slmnbsd32v61 glmnbsd32v61 jagnbsd32v61
netbsd 6.1 64 kingnbsd64v61 tutunbsd64v61 pilenbsd64v61 excanbsd64v61 zennbsd64v61 pirinbsd64v61 cnrnbsd64v61 pnrnbsd64v61 nhmnbsd64v61 sbrnbsd64v61 ivynbsd64v61 hwlnbsd64v61 bwlnbsd64v61 skynbsd64v61 slmnbsd64v61 glmnbsd64v61 jagnbsd64v61
netbsd 7.0 32 kingnbsd32v70 tutunbsd32v70 pilenbsd32v70 excanbsd32v70 zennbsd32v70 pirinbsd32v70 cnrnbsd32v70 pnrnbsd32v70 nhmnbsd32v70 sbrnbsd32v70 ivynbsd32v70 hwlnbsd32v70 bwlnbsd32v70 skynbsd32v70 slmnbsd32v70 glmnbsd32v70 jagnbsd32v70
netbsd 7.0 64 kingnbsd64v70 tutunbsd64v70 pilenbsd64v70 excanbsd64v70 zennbsd64v70 pirinbsd64v70 cnrnbsd64v70 pnrnbsd64v70 nhmnbsd64v70 sbrnbsd64v70 ivynbsd64v70 hwlnbsd64v70 bwlnbsd64v70 skynbsd64v70 slmnbsd64v70 glmnbsd64v70 jagnbsd64v70
netbsd 7.1 32 kingnbsd32v71 tutunbsd32v71 pilenbsd32v71 excanbsd32v71 zennbsd32v71 pirinbsd32v71 cnrnbsd32v71 pnrnbsd32v71 nhmnbsd32v71 sbrnbsd32v71 ivynbsd32v71 hwlnbsd32v71 bwlnbsd32v71 skynbsd32v71 slmnbsd32v71 glmnbsd32v71 jagnbsd32v71
netbsd 7.1 64 kingnbsd64v71 tutunbsd64v71 pilenbsd64v71 excanbsd64v71 zennbsd64v71 pirinbsd64v71 cnrnbsd64v71 pnrnbsd64v71 nhmnbsd64v71 sbrnbsd64v71 ivynbsd64v71 hwlnbsd64v71 bwlnbsd64v71 skynbsd64v71 slmnbsd64v71 glmnbsd64v71 jagnbsd64v71
netbsd 8.0 32 kingnbsd32v80 tutunbsd32v80 pilenbsd32v80 excanbsd32v80 zennbsd32v80 pirinbsd32v80 cnrnbsd32v80 pnrnbsd32v80 nhmnbsd32v80 sbrnbsd32v80 ivynbsd32v80 hwlnbsd32v80 bwlnbsd32v80 skynbsd32v80 slmnbsd32v80 glmnbsd32v80 jagnbsd32v80
netbsd 8.0 64 kingnbsd64v80 tutunbsd64v80 pilenbsd64v80 excanbsd64v80 zennbsd64v80 pirinbsd64v80 cnrnbsd64v80 pnrnbsd64v80 nhmnbsd64v80 sbrnbsd64v80 ivynbsd64v80 hwlnbsd64v80 bwlnbsd64v80 skynbsd64v80 slmnbsd64v80 glmnbsd64v80 jagnbsd64v80
obsd 6.4 32 ivyobsd32v64
obsd 6.4 64 ivyobsd64v64
gentoo 32 kinggentoo32 tutugentoo32 pilegentoo32 excagentoo32 zengentoo32 pirigentoo32 cnrgentoo32 pnrgentoo32 nhmgentoo32 sbrgentoo32 ivygentoo32 hwlgentoo32 bwlgentoo32 skygentoo32 slmgentoo32 glmgentoo32 jaggentoo32
gentoo 64 kinggentoo64 tutugentoo64 pilegentoo64 excagentoo64 zengentoo64 pirigentoo64 cnrgentoo64 pnrgentoo64 nhmgentoo64 sbrgentoo64 ivygentoo64 hwlgentoo64 bwlgentoo64 skygentoo64 slmgentoo64 glmgentoo64 gentoos jaggentoo64
gentoo hardened 64 gentooh
fedora 24 64 fedora
alpine 64 ivyalpine64
debian 7 32 kingdeb32v7 tutudeb32v7 piledeb32v7 excadeb32v7 zendeb32v7 pirideb32v7 cnrdeb32v7 pnrdeb32v7 nhmdeb32v7 sbrdeb32v7 ivydeb32v7 hwldeb32v7 bwldeb32v7 skydeb32v7 slmdeb32v7 glmdeb32v7 jagdeb32v7
debian 7 64 kingdeb64v7 tutudeb64v7 piledeb64v7 excadeb64v7 zendeb64v7 pirideb64v7 cnrdeb64v7 pnrdeb64v7 nhmdeb64v7 sbrdeb64v7 ivydeb64v7 hwldeb64v7 bwldeb64v7 skydeb64v7 slmdeb64v7 glmdeb64v7 jagdeb64v7
debian 8 32 kingdeb32v8 tutudeb32v8 piledeb32v8 excadeb32v8 zendeb32v8 pirideb32v8 cnrdeb32v8 pnrdeb32v8 nhmdeb32v8 sbrdeb32v8 ivydeb32v8 hwldeb32v8 bwldeb32v8 skydeb32v8 slmdeb32v8 glmdeb32v8 jagdeb32v8
debian 8 64 kingdeb64v8 tutudeb64v8 piledeb64v8 excadeb64v8 zendeb64v8 pirideb64v8 cnrdeb64v8 pnrdeb64v8 nhmdeb64v8 sbrdeb64v8 ivydeb64v8 hwldeb64v8 bwldeb64v8 skydeb64v8 slmdeb64v8 glmdeb64v8 jagdeb64v8
debian 9 32 kingdeb32v9 tutudeb32v9 piledeb32v9 excadeb32v9 zendeb32v9 pirideb32v9 cnrdeb32v9 pnrdeb32v9 nhmdeb32v9 sbrdeb32v9 ivydeb32v9 hwldeb32v9 bwldeb32v9 skydeb32v9 slmdeb32v9 glmdeb32v9 jagdeb32v9
debian 9 64 kingdeb64v9 tutudeb64v9 piledeb64v9 excadeb64v9 zendeb64v9 pirideb64v9 cnrdeb64v9 pnrdeb64v9 nhmdeb64v9 sbrdeb64v9 ivydeb64v9 hwldeb64v9 bwldeb64v9 skydeb64v9 slmdeb64v9 glmdeb64v9 jagdeb64v9
debian 10 32 kingdeb32v10 tutudeb32v10 piledeb32v10 excadeb32v10 zendeb32v10 pirideb32v10 cnrdeb32v10 pnrdeb32v10 nhmdeb32v10 sbrdeb32v10 ivydeb32v10 hwldeb32v10 bwldeb32v10 skydeb32v10 slmdeb32v10 glmdeb32v10 jagdeb32v10
debian 10 64 kingdeb64v10 tutudeb64v10 piledeb64v10 excadeb64v10 zendeb64v10 pirideb64v10 cnrdeb64v10 pnrdeb64v10 nhmdeb64v10 sbrdeb64v10 ivydeb64v10 hwldeb64v10 bwldeb64v10 skydeb64v10 slmdeb64v10 glmdeb64v10 jagdeb64v10
macOS sierra 64 cod
solaris 32 ivysol32
solaris 64 ivysol64
dos 7 64 excados64 skydos64

Type-2 virtualised non-x86 systems, "user-mode" emulation

These pseudo-systems run under a Xen guest (currently qemuusr1, which in turn runs under servus), each in a chroot containing a complete GNU/Linux install.

The binaries in each chroot are those of the emulated system, with few exceptions: currently only /bin/sh and /bin/bash are host binaries. Things could be sped up considerably by providing more host binaries, notably cc1.
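
Once logged in, building and testing GMP in one of these pseudo-systems looks exactly like a native session, only slower (a sketch; it assumes the pseudo-system names resolve from shell and that a GMP source tree sits in ~/gmp):

shell$ ssh mipseb-debv9
mipseb-debv9$ cd gmp && ./configure && make && make check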

host  arch  running on  emulator  cores  slowdown¹  slowdown²  notes
armel-{debv6,debv7,debv8,debv9} armv7a servus → qemuusr qemu 2.10.2 6 10
armhf-{debv7,debv8,debv9} armv7a servus → qemuusr qemu 2.10.2 6 8 primarily use system "tinker" via ashell
arm64-{debv8,debv9} armv8-a servus → qemuusr qemu 2.11.0 6 9 5 primarily use system "odc2" via ashell
ppc32-{debv7,debv8,debv9} ppc32 servus → qemuusr qemu 2.11.0 6 11
ppc64eb-{debv7,debv8,debv9} ppc64eb servus → qemuusr qemu 2.12.1 6 10 6
ppc64el-{debv8,debv9} ppc64el servus → qemuusr qemu 2.12.1 6
power7el-debv9 ppc64el servus → qemuusr qemu 2.12.1 6 12 6
power8el-debv9 ppc64el servus → qemuusr qemu 2.12.1 6 12 6
power9el-debv9 ppc64el servus → qemuusr qemu 2.12.1 6 12 6
mipseb-{debv6,debv7,debv8,debv9} mips64eb servus → qemuusr qemu 2.10.2 6 10 6 intermittent problems executing n32 binaries (qemu bugs)
mipsel-{debv6,debv7,debv8,debv9} mips64el servus → qemuusr qemu 2.10.2 6 9 6 intermittent problems executing n32 binaries (qemu bugs)
s390x-{debv7,debv8,debv9} z196 servus → qemuusr qemu 2.11.0 6 9 7
alpha-gentoo ev68 servus → qemuusr qemu 2.10.2 6 6 3 qemu 2.11.0 does not work
hppa-gentoo servus → qemuusr qemu 2.11.0 6 8 9

Type-2 virtualised x86 and non-x86 full system emulation

The "user-mode" systems of the previous section should be preferred, since they have much less overhead and furthermore emulate 6 CPU cores.

These full-system emulation hosts are mainly useful for things that currently do not work in user mode: m68k, ppc64 using the 32-bit ABI with 64-bit instructions, mips64 using the n32 ABI, and sparc. Debugging is also sometimes easier under full system emulation.
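
For example, exercising the n32 ABI on the mips64 full-system hosts amounts to passing GMP's ABI selection to configure (a sketch; ABI=n32 is standard GMP configure usage, and the host name is taken from the table below):

mips64eb-debv9.sys$ ./configure ABI=n32
mips64eb-debv9.sys$ make && make check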

host  arch  running on  emulator  cores  ram (MiB)  slowdown¹  slowdown²  os/kern  notes
armel-debv8.sys armv5tj servus qemu 2.12.1 1 256 30 gnu/linux deb 8
armhf-debv9.sys armv7a+neon servus qemu 2.12.1 4 256 33 gnu/linux deb 9 primarily use system "tinker" via ashell
arm64-fbsdv11.sys armv8 servus qemu 2.12.1 4 512 45 freebsd 11
arm64-debv9.sys armv8 servus qemu 2.12.1 4 512 gnu/linux deb 9 primarily use system "odc2" via ashell
ppc32-debv8.sys ppc32 servus qemu 2.12.1 1 256 gnu/linux deb 8
ppc64eb-fbsdv11.sys power9/be servus qemu 2.12.1 4 512 freebsd 11
ppc64eb-debv8.sys power8/be servus qemu 2.12.1 4 512 (33) gnu/linux deb 8
ppc64el-debv9.sys power9/le servus qemu 2.12.1 4 512 47 gnu/linux deb 9
mips64eb-debv9.sys mips64r2/be servus qemu 2.12.1 1 512 50 gnu/linux deb 9 use mainly for the n32 ABI, else mipseb-debv9 above
mips64el-debv9.sys mips64r2/le servus qemu 2.12.1 1 512 52 gnu/linux deb 9 use mainly for the n32 ABI, else mipsel-debv9 above
m68k.sys mc68040 servus aranym 1 256 38 gnu/linux deb 8
s390-debv9.sys z900 servus qemu 2.12.1 4 512 gnu/linux deb 9
alpha-gentoo.sys ev67 servus qemu 2.12.1 4 512 gnu/linux gentoo
sparcnbsd64 sparcv9b osky qemu 2.10.2 1 512 75 netbsd 7.1.2 only accessible by special means
sparcnbsd32 sparcv8 osky qemu 2.11.1 1 256 75 netbsd 7.1.2 only accessible by special means

Table footnotes:

  1. This slowdown factor is relative to each emulation host for GMP compilation. It includes emulator slowdown and is skewed by OS properties; the gcc versions may also differ between host and guest, and gcc's speed varies from target to target.
  2. This slowdown factor is relative to each emulation host for running GMPbench. The comparison is unfair mainly when emulating a 32-bit system on a 64-bit host, since GMP is much more efficient with native 64-bit arithmetic.
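
As a worked example of footnote 1: with a slowdown factor of 10, a GMP build that takes 4 minutes directly on the emulation host takes on the order of 40 minutes in the emulated system.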