GMP test systems
Last modified: 2018-02-23


The GMP project maintains a comprehensive test environment consisting of physical and emulated systems. All test systems use non-routable IP addresses and are firewalled behind the main GMP network.

GMP developers with an account on shell.gmplib.org can log in to any of these systems via ssh. Only virtualised systems marked as running on servus are directly reachable; other systems are reached via the system ashell, which acts as a secondary gateway. Log in to ashell from shell using this command:

shell$ ssh ashell

Most systems below are powered off except when tests are being run. The power-control system is a bit crude; the command for switching on [system] is

ashell$ pdu on [system]

The test system switches [system] off again when testing is done. Allow about 100 seconds (in a few cases more) before the system and its virtualised daughter systems are up.

Please see the status page for system power information.
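Putting these steps together, a typical session from shell to a Xen guest on the powered-off host pile (guest name piledeb64v9, from the tables on this page) might look like this:

```
shell$ ssh ashell
ashell$ pdu on pile
ashell$ sleep 100          # wait for pile and its virtualised guests to boot
ashell$ ssh piledeb64v9
```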

Table colour coding indicates where a machine is located, with one colour for powered-on and one for powered-off systems:

  TUG in Stockholm, access via shell.gmplib.org
  Salt, access via TUG's shell.gmplib.org and then as per the instructions above

Real hardware systems

host arch cpu-name cpu-code-name cores clk(MHz) L1(KiB) L2(KiB) L3(MiB) ram(GiB) virt os/kern stat comment
servus x86-64 Xeon E5-1650v2 Ivy Bridge-EP 6 3500 6 × 32 6 × 256 12 96 xen gnu/linux gentoo on ssh at port 2202 to virtual host 'shell'
systemet x86-64 Core i7 3555LE Ivy Bridge 2 2500 2 × 32 2 × 256  4 16 xen gnu/linux gentoo on fileserver (fs), login server (ashell), firewall (ratata)
panther x86-64 Athlon X2 4800+ K8/Brisbane 2 2500 2 × 64 2 × 512 4 gnu/linux gentoo pdu
king x86-64 Phenom II 1090T K10/Thuban 6 3200 6 × 64 6 × 512  6 32 xen gnu/linux gentoo pdu use guest systems, see next table
tutu x86-64 FX-4100 Bulldozer/Zambezi 4 3600 4 × 16 2 × 2048  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
pile x86-64 FX-8350 Piledriver/Vishera 8 4000 8 × 16 4 × 2048  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
exca x86-64 A12-9800 Excavator/Bristol Ridge 4 3800 4 × 32 2 × 1024 16 xen gnu/linux gentoo pdu use guest systems, see next table
brazen x86-64 Ryzen 1500X (mfg 1740) Zen/Summit Ridge 4 3500 4 × 32 4 × 512 16 32 xen gnu/linux gentoo pdu use guest systems, see next table
element x86-64 Xeon Nocona 2 3400 2 × 16 1024 8 gnu/linux gentoo pdu boots unreliably at power-on; might crash under load
cnr x86-64 Core2 E6400 Conroe 2 2133 2 × 32 2048 8 xen gnu/linux gentoo pdu use guest systems, see next table
pnr x86-64 Xeon E3110 Penryn/Wolfdale 2 3000 2 × 32 6144 8 xen gnu/linux gentoo pdu use guest systems, see next table
nhm x86-64 Xeon X3470 Nehalem/Lynnfield 4 2933 4 × 32 4 × 256  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
sbr x86-64 Xeon E3-1270 Sandybridge 4 3400 4 × 32 4 × 256  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
hannah x86-64 Core i7 4790K Haswell 4 4000 4 × 32 4 × 256  8 16 xen gnu/linux pdu system is being phased out
hwl x86-64 Xeon E3-1271v3 Haswell 4 3600 4 × 32 4 × 256  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
hwlna x86-64 Pentium G3220 Haswell 2 3000 2 × 32 2 × 256  3 16 xen gnu/linux
bwl x86-64 Xeon E3-1285Lv4 Broadwell 4 3400 4 × 32 4 × 256 6+128 32 xen gnu/linux gentoo pdu use guest systems, see next table
osky x86-64 Core i5 6600K Skylake 4 3500 4 × 32 4 × 256  6 16 kvm gnu/linux pdu misc virtualisation host
sky x86-64 Xeon E3-1270v5 Skylake 4 3600 4 × 32 4 × 256  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
skyna x86-64 Pentium G4400 Skylake 2 3300 2 × 32 2 × 256  3 16 xen gnu/linux
bobcat x86-64 E-350 Zacate 2 1600 2 × 32 2 × 512 8 gnu/linux gentoo pdu
jaguar x86-64 Athlon 5350 Jaguar/Kabini 4 2050 4 × 32 2048 16 xen gnu/linux pdu use guest systems, see next table
gege x86-64 Atom 330 Diamondville 2 1600 24 512 4 gnu/linux gentoo pdu
bay x86-64 Atom C2000 Bay Trail/Rangeley 8 2400 8 × 24 4 × 1024 32 xen gnu/linux gentoo pdu waiting to meet its maker due to Intel C2000 clock bug...
tambo x86-32 Athlon K7/Barton 1 2083 64 512 2 gnu/linux gentoo pdu
hill x86-32 Pentium2 Deschutes 1 400 16 512 0.5 gnu/linux gentoo
labrador x86-32 Pentium3 Coppermine 1 800 16 256 1 gnu/linux gentoo
parks x86-32 Pentium4-2 Northwood 1 2600 8 512 0.5 gnu/linux gentoo pdu
dupont alpha 21164A EV56 1 600 8 96  2 0.5 gnu/linux gentoo
olympic ia-64 Itanium 2 Mckinley 2 900 2 × 16 2 × 256 1.5 4 freebsd 10.3 pdu HP rx2620
g5 ppc64 PPC-970 2 1800 2 × 32 2 × 512 1.2 macos/darwin pdu Power Mac G5
pi1 armv6 arm1176 1 700 0.5 gnu/linux Raspberry Pi 1
odc1 armv7a Cortex-A5 4 1500 1 gnu/linux Odroid-C1+
pi2 armv7a Cortex-A7 4 900 1 gnu/linux Raspberry Pi 2
beagle armv7a Cortex-A8 1 1000 0.5 gnu/linux Beaglebone black
panda armv7a Cortex-A9 2 1000 1 gnu/linux arch Pandaboard; VERY CRASH-PRONE
odxu4 armv7a Cortex-A15 + Cortex-A7 4×2000 + 4×1400 2 gnu/linux Odroid-XU4; VERY CRASH-PRONE
pi3 armv8 Cortex-A53 (32) 4 1200 1 gnu/linux Raspberry Pi 3
odc2 armv8 Cortex-A53 (64) 4 1500 2 gnu/linux Odroid-C2

Type-1 virtualised x86 systems

The host names of the virtualised systems are made from the physical host name, the abbreviated OS name, the OS flavour (32 or 64), and 'v' followed by the abbreviated version number. Some installs lack the version part. Exact names are given in the table below.
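The naming scheme can be sketched as follows (a minimal illustration; the helper name guest_name is not part of any GMP tooling):

```python
# Compose a virtualised guest host name from its components, per the
# naming scheme described above: physical host + abbreviated OS +
# flavour (32/64) + 'v' + abbreviated version.  Some installs (e.g.
# the gentoo guests) lack the version part.
def guest_name(host, os_abbrev, flavour, version=None):
    name = f"{host}{os_abbrev}{flavour}"
    if version is not None:
        name += "v" + version.replace(".", "")  # "7.1" -> "v71"
    return name

print(guest_name("sky", "deb", 64, "9"))      # skydeb64v9
print(guest_name("hwl", "nbsd", 32, "7.1"))   # hwlnbsd32v71
print(guest_name("king", "gentoo", 64))       # kinggentoo64
```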

The primary systems for each host are the ones in bold. These systems are better maintained, have more memory, and are given several CPU cores.




system    make   µarch        virtsys
king      AMD    thuban       gnu/linux/xen
tutu      AMD    bulldozer    gnu/linux/xen
pile      AMD    piledriver   gnu/linux/xen
exca      AMD    excavator    gnu/linux/xen
zen       AMD    ryzen        gnu/linux/xen
cnr       Intel  conroe       gnu/linux/xen
pnr       Intel  penryn       gnu/linux/xen
nhm       Intel  nehalem      gnu/linux/xen
sbr       Intel  sandybridge  gnu/linux/xen
systemet  Intel  ivybridge    gnu/linux/xen
servus    Intel  ivybridge    gnu/linux/xen
hwl       Intel  haswell      gnu/linux/xen
bwl       Intel  broadwell    gnu/linux/xen
sky       Intel  skylake      gnu/linux/xen
bay       Intel  baytrail     gnu/linux/xen
osky      Intel  skylake      gnu/linux/kvm
jaguar    AMD    jaguar       gnu/linux/xen
hannah    Intel  haswell      gnu/linux/xen

os version flavour guest systems
freebsd 8.4 32 hannahfbsd32v84
freebsd 8.4 64 hannahfbsd64v84
freebsd 9.3 32 kingfbsd32v93 tutufbsd32v93 pilefbsd32v93 excafbsd32v93 zenfbsd32v93 nhmfbsd32v93 sbrfbsd32v93 ivyfbsd32v93 hwlfbsd32v93 bwlfbsd32v93 skyfbsd32v93 bayfbsd32v93 jagfbsd32v93 hannahfbsd32v93
freebsd 9.3 64 kingfbsd64v93 tutufbsd64v93 pilefbsd64v93 excafbsd64v93 zenfbsd64v93 nhmfbsd64v93 sbrfbsd64v93 ivyfbsd64v93 hwlfbsd64v93 bwlfbsd64v93 skyfbsd64v93 bayfbsd64v93 jagfbsd64v93 hannahfbsd64v93
freebsd 10 32 kingfbsd32v10 tutufbsd32v10 pilefbsd32v10 excafbsd32v10 zenfbsd32v10 nhmfbsd32v10 sbrfbsd32v10 ivyfbsd32v10 hwlfbsd32v10 bwlfbsd32v10 skyfbsd32v10 bayfbsd32v10 jagfbsd32v10 hannahfbsd32v10
freebsd 10 64 kingfbsd64v10 tutufbsd64v10 pilefbsd64v10 excafbsd64v10 zenfbsd64v10 nhmfbsd64v10 sbrfbsd64v10 ivyfbsd64v10 hwlfbsd64v10 bwlfbsd64v10 skyfbsd64v10 bayfbsd64v10 jagfbsd64v10 hannahfbsd64v10
freebsd 11 32 kingfbsd32v11 tutufbsd32v11 pilefbsd32v11 excafbsd32v11 zenfbsd32v11 cnrfbsd32v11 pnrfbsd32v11 nhmfbsd32v11 sbrfbsd32v11 ivyfbsd32v11 hwlfbsd32v11 bwlfbsd32v11 skyfbsd32v11 bayfbsd32v11 jagfbsd32v11 hannahfbsd32v11
freebsd 11 64 kingfbsd64v11 tutufbsd64v11 pilefbsd64v11 excafbsd64v11 zenfbsd64v11 cnrfbsd64v11 pnrfbsd64v11 nhmfbsd64v11 sbrfbsd64v11 ivyfbsd64v11 hwlfbsd64v11 bwlfbsd64v11 skyfbsd64v11 bayfbsd64v11 jagfbsd64v11 hannahfbsd64v11
netbsd 5.2 32 hannahnbsd32v52
netbsd 5.2 64 hannahnbsd64v52
netbsd 6.0 32 kingnbsd32v60 tutunbsd32v60 pilenbsd32v60 excanbsd32v60 zennbsd32v60 cnrnbsd32v60 pnrnbsd32v60 nhmnbsd32v60 sbrnbsd32v60 ivynbsd32v60 hwlnbsd32v60 bwlnbsd32v60 skynbsd32v60 baynbsd32v60 jagnbsd32v60 hannahnbsd32v60
netbsd 6.0 64 kingnbsd64v60 tutunbsd64v60 pilenbsd64v60 excanbsd64v60 zennbsd64v60 cnrnbsd64v60 pnrnbsd64v60 nhmnbsd64v60 sbrnbsd64v60 ivynbsd64v60 hwlnbsd64v60 bwlnbsd64v60 skynbsd64v60 baynbsd64v60 jagnbsd64v60 hannahnbsd64v60
netbsd 6.1 32 kingnbsd32v61 tutunbsd32v61 pilenbsd32v61 excanbsd32v61 zennbsd32v61 cnrnbsd32v61 pnrnbsd32v61 nhmnbsd32v61 sbrnbsd32v61 ivynbsd32v61 hwlnbsd32v61 bwlnbsd32v61 skynbsd32v61 baynbsd32v61 jagnbsd32v61 hannahnbsd32v61
netbsd 6.1 64 kingnbsd64v61 tutunbsd64v61 pilenbsd64v61 excanbsd64v61 zennbsd64v61 cnrnbsd64v61 pnrnbsd64v61 nhmnbsd64v61 sbrnbsd64v61 ivynbsd64v61 hwlnbsd64v61 bwlnbsd64v61 skynbsd64v61 baynbsd64v61 jagnbsd64v61 hannahnbsd64v61
netbsd 7.0 32 kingnbsd32v70 tutunbsd32v70 pilenbsd32v70 excanbsd32v70 zennbsd32v70 cnrnbsd32v70 pnrnbsd32v70 nhmnbsd32v70 sbrnbsd32v70 ivynbsd32v70 hwlnbsd32v70 bwlnbsd32v70 skynbsd32v70 baynbsd32v70 jagnbsd32v70 hannahnbsd32v70
netbsd 7.0 64 kingnbsd64v70 tutunbsd64v70 pilenbsd64v70 excanbsd64v70 zennbsd64v70 cnrnbsd64v70 pnrnbsd64v70 nhmnbsd64v70 sbrnbsd64v70 ivynbsd64v70 hwlnbsd64v70 bwlnbsd64v70 skynbsd64v70 baynbsd64v70 jagnbsd64v70 hannahnbsd64v70
netbsd 7.1 32 kingnbsd32v71 tutunbsd32v71 pilenbsd32v71 excanbsd32v71 zennbsd32v71 cnrnbsd32v71 pnrnbsd32v71 nhmnbsd32v71 sbrnbsd32v71 ivynbsd32v71 hwlnbsd32v71 bwlnbsd32v71 skynbsd32v71 baynbsd32v71 jagnbsd32v71 hannahnbsd32v71
netbsd 7.1 64 kingnbsd64v71 tutunbsd64v71 pilenbsd64v71 excanbsd64v71 zennbsd64v71 cnrnbsd64v71 pnrnbsd64v71 nhmnbsd64v71 sbrnbsd64v71 ivynbsd64v71 hwlnbsd64v71 bwlnbsd64v71 skynbsd64v71 baynbsd64v71 jagnbsd64v71 hannahnbsd64v71
obsd 6.1 32 (ivyobsd32v61)
obsd 6.1 64 (ivyobsd64v61)
gentoo 32 kinggentoo32 tutugentoo32 pilegentoo32 excagentoo32 zengentoo32 cnrgentoo32 pnrgentoo32 nhmgentoo32 sbrgentoo32 sysgentoo32 ivygentoo32 hwlgentoo32 bwlgentoo32 skygentoo32 baygentoo32 jaggentoo32 hannahgentoo32
gentoo 64 kinggentoo64 tutugentoo64 pilegentoo64 excagentoo64 zengentoo64 cnrgentoo64 pnrgentoo64 nhmgentoo64 sbrgentoo64 sysgentoo64 ivygentoo64 hwlgentoo64 bwlgentoo64 skygentoo64 baygentoo64 gentoos jaggentoo64 hannahgentoo64
gentoo hardened 64 gentooh
fedora 24 64 fedora
slackware 64 ivyslack64
alpine 64 ivyalpine64
debian 6 32 hannahdeb32v6
debian 6 64 hannahdeb64v6
debian 7 32 kingdeb32v7 tutudeb32v7 piledeb32v7 excadeb32v7 zendeb32v7 cnrdeb32v7 pnrdeb32v7 nhmdeb32v7 sbrdeb32v7 ivydeb32v7 hwldeb32v7 bwldeb32v7 skydeb32v7 baydeb32v7 jagdeb32v7 hannahdeb32v7
debian 7 64 kingdeb64v7 tutudeb64v7 piledeb64v7 excadeb64v7 zendeb64v7 cnrdeb64v7 pnrdeb64v7 nhmdeb64v7 sbrdeb64v7 ivydeb64v7 hwldeb64v7 bwldeb64v7 skydeb64v7 baydeb64v7 jagdeb64v7 hannahdeb64v7
debian 8 32 kingdeb32v8 tutudeb32v8 piledeb32v8 excadeb32v8 zendeb32v8 cnrdeb32v8 pnrdeb32v8 nhmdeb32v8 sbrdeb32v8 ivydeb32v8 hwldeb32v8 bwldeb32v8 skydeb32v8 baydeb32v8 jagdeb32v8 hannahdeb32v8
debian 8 64 kingdeb64v8 tutudeb64v8 piledeb64v8 excadeb64v8 zendeb64v8 cnrdeb64v8 pnrdeb64v8 nhmdeb64v8 sbrdeb64v8 ivydeb64v8 hwldeb64v8 bwldeb64v8 skydeb64v8 baydeb64v8 jagdeb64v8 hannahdeb64v8
debian 9 32 kingdeb32v9 tutudeb32v9 piledeb32v9 excadeb32v9 zendeb32v9 cnrdeb32v9 pnrdeb32v9 nhmdeb32v9 sbrdeb32v9 ivydeb32v9 hwldeb32v9 bwldeb32v9 skydeb32v9 baydeb32v9 jagdeb32v9 hannahdeb32v9
debian 9 64 kingdeb64v9 tutudeb64v9 piledeb64v9 excadeb64v9 zendeb64v9 cnrdeb64v9 pnrdeb64v9 nhmdeb64v9 sbrdeb64v9 ivydeb64v9 hwldeb64v9 bwldeb64v9 skydeb64v9 baydeb64v9 jagdeb64v9 hannahdeb64v9
debian 10 32 kingdeb32v10 tutudeb32v10 piledeb32v10 excadeb32v10 zendeb32v10 cnrdeb32v10 pnrdeb32v10 nhmdeb32v10 sbrdeb32v10 ivydeb32v10 hwldeb32v10 bwldeb32v10 skydeb32v10 baydeb32v10 jagdeb32v10 hannahdeb32v10
debian 10 64 kingdeb64v10 tutudeb64v10 piledeb64v10 excadeb64v10 zendeb64v10 cnrdeb64v10 pnrdeb64v10 nhmdeb64v10 sbrdeb64v10 ivydeb64v10 hwldeb64v10 bwldeb64v10 skydeb64v10 baydeb64v10 jagdeb64v10 hannahdeb64v10
macOS sierra 64 cod
solaris 32 ivysol32
solaris 64 ivysol64
dos 7 64 excados64 skydos64

Type-2 virtualised x86 and non-x86 systems

The "user-mode" emulation of the next section should primarily be used since they have much less overhead, and furthermore emulate 6 CPU cores.

These full-system emulation hosts are mainly useful for things which currently don't work in user mode: that is, m68k; ppc64 using the 32-bit ABI and 64-bit instructions; mips64 using the n32 ABI; and sparc.

host arch running-on emulator cores ram(MiB) slowdown1 slowdown2 os/kern notes
armel-debv8.sys armv5tj servus qemu 2.10.0 1 256 gnu/linux deb 8
armhf-debv8.sys armv7a+neon servus qemu 2.10.0 4 256 gnu/linux deb 8 primarily use odxu4 (or pi2 or odc1) via ashell
arm64-fbsdv11.sys armv8 servus qemu 2.10.0 4 512 freebsd 11
arm64-debv8.sys armv8 servus qemu 2.10.0 4 512 gnu/linux deb 8 primarily use odc2 via ashell
ppc64eb-fbsdv11.sys power9/be servus qemu 2.11.0 4 512 freebsd 11
ppc64eb-debv8.sys power8/be servus qemu 2.10.0 4 512 gnu/linux deb 8
ppc64el-debv8.sys power8/le servus qemu 2.10.0 4 512 gnu/linux deb 8
ppc64el-debv9.sys power9/le servus qemu 2.11.0 4 512 gnu/linux deb 9
ppc32-debv8.sys ppc32 servus qemu 2.10.0 1 256 26 45 gnu/linux deb 8
mips64eb-debv8.sys mips64r2/be servus qemu 2.10.0 1 384 80 gnu/linux deb 8
mips64el-debv8.sys mips64r2/le servus qemu 2.10.0 1 384 80 gnu/linux deb 8
m68k.sys mc68040 servus aranym 1 256 21 gnu/linux deb 8
alpha-gentoo.sys ev67 servus qemu 2.10.0 4 512 gnu/linux gentoo use alpha-gentoo below
sparcnbsd64 sparcv9b osky qemu 2.9.0 1 512 75 netbsd 7.1 only accessible by special means
sparcnbsd32 sparcv8 osky qemu 2.7.1 1 256 75 netbsd 7.0 only accessible by special means

Table footnotes:

  1. This slowdown factor is relative to each emulation host for GMP compilation, including emulator slowdown, and skewed by OS properties. The gcc versions might differ between host and guest, and gcc's speed varies from target to target.
  2. This slowdown factor is relative to each emulation host for running GMPbench. This is unfair mainly when emulating a 32-bit system on a 64-bit host, since GMP is much more efficient with native 64-bit arithmetic.
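As a worked example of how to read these factors (the native times here are hypothetical, not measured figures):

```python
# Estimate wall-clock time on an emulated guest from the native time on
# its emulation host, using the slowdown factors tabulated above.
def emulated_minutes(host_minutes, slowdown):
    return host_minutes * slowdown

# ppc32-debv8.sys lists compile slowdown 26 and GMPbench slowdown 45:
# a hypothetical 5-minute native GMP build would take about 130 minutes
# emulated, and a 2-minute native GMPbench run about 90 minutes.
print(emulated_minutes(5, 26))  # 130
print(emulated_minutes(2, 45))  # 90
```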

Type-2 virtualised non-x86 systems, "user-mode" emulation

These pseudo-systems run under a Xen guest (currently qemuusr1, which in turn runs under servus), each in a chroot containing a complete GNU/Linux install.

The binaries thereunder are, with few exceptions, binaries for the respective emulated system; currently only /bin/sh and /bin/bash are host binaries. Things could be sped up considerably by providing more host binaries, notably cc1.
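To illustrate, entering one of these chroots directly on the Xen guest might look as follows (the /srv/chroots path is an assumption for illustration, not the actual layout on qemuusr1):

```
qemuusr$ sudo chroot /srv/chroots/ppc64eb-debv9 /bin/bash  # /bin/bash here is a host binary
chroot#  gcc --version   # the guest's powerpc64 gcc, executed via qemu user-mode emulation
```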

host arch running-on emulator cores slowdown1 slowdown2 notes
arm64-{debv8,debv9} armv8-a servus → qemuusr qemu 2.11.0 6 15 primarily use odc2 via ashell
armel-{debv6,debv7,debv8,debv9} armv7a servus → qemuusr qemu 2.10.2 6 17
armhf-{debv7,debv8,debv9} armv7a servus → qemuusr qemu 2.10.2 6 17 primarily use odxu4 (or pi2 or odc1) via ashell
ppc32-{debv7,debv8,debv9} ppc32 servus → qemuusr qemu 2.11.0 6
ppc64eb-{debv7,debv8,debv9} ppc64eb servus → qemuusr qemu 2.11.0 6 21
ppc64el-{debv8,debv9} ppc64el servus → qemuusr qemu 2.11.0 6 20
power7el-debv9 ppc64el servus → qemuusr qemu 2.11-rc2 6 20
power8el-debv9 ppc64el servus → qemuusr qemu 2.11-rc2 6 20
power9el-debv9 ppc64el servus → qemuusr qemu 2.11-rc2 6 20
mipseb-{debv6,debv7,debv8,debv9} mips64eb servus → qemuusr qemu 2.11.0 6 18 intermittent problems executing n32 binaries (qemu bugs)
mipsel-{debv6,debv7,debv8,debv9} mips64el servus → qemuusr qemu 2.11.0 6 18 intermittent problems executing n32 binaries (qemu bugs)
s390x-{debv7,debv8,debv9} z196 servus → qemuusr qemu 2.11.0 6 16
alpha-gentoo ev68 servus → qemuusr qemu 2.10.2 6 17 qemu 2.11.0 does not work
hppa-gentoo servus → qemuusr qemu 2.11.0 6