GMP test systems
Last modified: 2020-09-20


The GMP project maintains a comprehensive test environment consisting of physical and emulated systems. All test systems use non-routable IP addresses and are firewalled behind the main GMP network.

GMP developers with an account on shell.gmplib.org can log in to any of these systems via shell. Only virtualised systems marked as running on servus are directly reachable; other systems can be reached via the system ashell, which acts as a secondary gateway. Log in to ashell from shell using this command:

shell$ ssh ashell
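
From ashell, reach the final target system with one more hop; for example, the ARM board tinker (listed in the tables below) is reached like this:

ashell$ ssh tinker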

Most systems below are powered off except when tests are being run. The system for power control is a bit crude; the command for switching on [system] is

ashell$ pdu on [system]

and it will then be properly switched off by the test system. The delay before a system and its virtualised guest systems are up can be around 100 seconds (or in a few cases worse).
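
When scripting test runs, a small wait loop covers the boot delay. A minimal sketch (the guest name k10-guest is hypothetical; adjust the timeout to taste):

ashell$ pdu on k10
ashell$ until ssh -o ConnectTimeout=5 k10-guest true 2>/dev/null; do sleep 10; done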

Please see the status page for system power information.

Table colour coding indicates where a machine is located, with one shade for powered-on and another for powered-off systems at each site:

  TUG in Stockholm, access via shell.gmplib.org
  Salt, access via TUG's shell.gmplib.org and then as per the instructions above

Real hardware systems

name  arch  cpu type  cpu code name  cores  clk (MHz)  L1 (KiB)  L2 (KiB)  L3 (MiB)  ram (GiB)  virt  OS/kern  pwr stat  comment
servus x86-64 Xeon E5-1650v2 Ivy Bridge-EP 6 3500 6 × 32 6 × 256 12 96 xen gnu/linux gentoo on ssh to port 2202 to virtual system 'shell'
servile x86-64 Ryzen 3900X Zen2/Matisse 12 3800-4600 12 × 32 12 × 512 64 96 xen gnu/linux gentoo on ssh via 'shell' through tunnel to virtual system 'ashell'
k8 x86-64 Athlon X2 4800+ K8/Brisbane 2 2500 2 × 64 2 × 512 8 xen gnu/linux gentoo pdu use guest systems, see next table
k10 x86-64 Phenom II 1090T K10/Thuban 6 3200-3600 6 × 64 6 × 512  6 32 xen gnu/linux gentoo pdu use guest systems, see next table
bd1 x86-64 FX-4100 Bulldozer/Zambezi 4 3600-3800 4 × 16 2 × 2048  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
bd2 x86-64 FX-8350 Piledriver/Vishera 8 4000-4200 8 × 16 4 × 2048  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
bd4 x86-64 A12-9800 Excavator/Bristol Ridge 4 3800-4200 4 × 32 2 × 1024 32 xen gnu/linux gentoo pdu use guest systems, see next table
suri x86-64 Ryzen 1500X (1740) Zen/Summit Ridge 4 3500-3900 4 × 32 4 × 512 16 32 xen gnu/linux gentoo pdu use guest systems, see next table
piri x86-64 Ryzen 2700X Zen/Pinnacle Ridge 8 3700-4300 8 × 32 8 × 512 16 64 xen gnu/linux gentoo pdu use guest systems, see next table
mac x86-64 Ryzen 2700X Zen/Pinnacle Ridge 8 3700-4300 8 × 32 8 × 512 16 16 kvm gnu/linux gentoo on
mati x86-64 Ryzen 3700X Zen2/Matisse 8 3600-4400 8 × 32 8 × 512 32 128 xen gnu/linux gentoo pdu use guest systems, see next table
element x86-64 Xeon Nocona 2 3400 2 × 16 1024 8 gnu/linux gentoo timer unreliable system
cnr x86-64 Xeon 3085 Conroe 2 3000 2 × 32 6144 8 xen gnu/linux gentoo pdu use guest systems, see next table
pnr x86-64 Xeon E3110 Penryn/Wolfdale 2 3000 2 × 32 6144 8 xen gnu/linux gentoo pdu use guest systems, see next table
nhm x86-64 Xeon X3470 Nehalem/Lynnfield 4 2933-3200 4 × 32 4 × 256  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
wsm x86-64 Xeon E5649 Westmere 6 2533-2933 6 × 32 6 × 256  12 24 xen gnu/linux gentoo pdu use guest systems, see next table
sbr x86-64 Xeon E3-1270 Sandy Bridge 4 3400-3800 4 × 32 4 × 256  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
minivy x86-64 i7-3615QM Ivy Bridge 4 2300-3300 4 × 32 4 × 256  6 16 macos catalina pdu
hwl x86-64 Xeon E3-1271v3 Haswell 4 3600-4000 4 × 32 4 × 256  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
bwl x86-64 Xeon E3-1285Lv4 Broadwell 4 3400-3800 4 × 32 4 × 256 6+128 32 xen gnu/linux gentoo pdu use guest systems, see next table
osky x86-64 Core i5 6600K Skylake 4 3500 4 × 32 4 × 256  6 16 kvm gnu/linux debian pdu misc virtualisation host
sky x86-64 Xeon E3-1270v5 Skylake 4 3600-4000 4 × 32 4 × 256  8 64 xen gnu/linux gentoo pdu use guest systems, see next table
bt1 x86-64 E-350 Zacate 2 1600 2 × 32 2 × 512 8 xen gnu/linux gentoo pdu use guest systems, see next table
bt2 x86-64 Athlon 5350 Jaguar/Kabini 4 2050 4 × 32 2048 16 xen gnu/linux gentoo pdu use guest systems, see next table
gege x86-64 Atom 330 Diamondville 2 1600 24 512 4 gnu/linux gentoo pdu
slm x86-64 Atom C2758 Silvermont/Rangeley 8 2400 8 × 24 4096 32 xen gnu/linux gentoo pdu waiting to die due to Intel C2000 clock bug
glm x86-64 Atom C3758 Goldmont/Denverton 8 2200 8 × 24 16384 32 xen gnu/linux debian pdu
tambo x86-32 Athlon K7/Barton 1 2083 64 512 2 gnu/linux gentoo timer motherboard failure
labrador x86-32 Pentium3 Coppermine 1 800 1 gnu/linux gentoo off will come back under timer control
parks x86-32 Pentium4-2 Northwood 1 2600 8 512 1 gnu/linux gentoo timer
olympic ia-64 Itanium 2 McKinley 2 900 2 × 16 2 × 256 1.5 4 freebsd 10.3 off HP rx2620
g5 ppc64 PPC-970 2 1800 2 × 32 2 × 512 1.2 macos/darwin pdu Power Mac G5
pi1 armv6 arm1176 1 700 0.5 gnu/linux on Raspberry Pi 1
odc1 armv7a Cortex-A5 4 1500 1 gnu/linux on Odroid-C1+
pi2 armv7a Cortex-A7 4 900 1 gnu/linux on Raspberry Pi 2
beagle armv7a Cortex-A8 1 1000 0.5 gnu/linux on Beaglebone black
nanot2 armv7a Cortex-A9 4 1400 1 gnu/linux on FriendlyELEC NanoPC-T2
odxu4 armv7a Cortex-A15/A7 4×2000 + 4×1400 2 gnu/linux on Odroid-XU4
tinker armv7a Cortex-A17 4 1800 2 gnu/linux on ASUS Tinker Board
pi3 armv8a Cortex-A53 (32-bit) 4 1400 1 gnu/linux on Raspberry Pi 3 B+
odc2 armv8a Cortex-A53 4 1536 2 gnu/linux on Odroid-C2
odc4 armv8a Cortex-A55 4 1908 4 gnu/linux on Odroid-C4
nanom4 armv8a Cortex-A72/A53 2×1800 + 4×1416 4 gnu/linux on FriendlyELEC NanoPi M4
odn2 armv8a Cortex-A73/A53 4×1800 + 2×1900 4 gnu/linux on Odroid-N2

Pictures of GMP development systems:
GMP main development systems
GMP misc development systems
GMP arm development systems

Type-1 virtualised x86 systems

The host names of the virtualised systems are built from the physical host name, the abbreviated OS name, the OS flavour (32 or 64), and 'v' followed by the abbreviated version number. Some installs lack the version part. The exact names are given in the table below.
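
As an illustration only (the exact separators vary; consult the table for real names), a hypothetical name for 64-bit Debian 10 on host nhm would look like:

ashell$ ssh nhm-deb64v10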

The primary system for each host is the one shown in bold. These systems are better maintained, have more memory, and are given several CPU cores.





The virtualisation hosts (the columns of the table) are:

host    make   µarch         virtsys
k8      AMD    brisbane      xen
k10     AMD    thuban        xen
bd1     AMD    bulldozer     xen
bd2     AMD    piledriver    xen
bd4     AMD    excavator     xen
suri    AMD    zen           xen
piri    AMD    zen+          xen
mati    AMD    zen2          xen
sys     AMD    zen2          xen
cnr     Intel  conroe        xen
pnr     Intel  penryn        xen
nhm     Intel  nehalem       xen
wsm     Intel  westmere      xen
sbr     Intel  sandy bridge  xen
servus  Intel  ivy bridge    xen
hwl     Intel  haswell       xen
bwl     Intel  broadwell     xen
sky     Intel  skylake       xen
slm     Intel  silvermont    xen
glm     Intel  goldmont      xen
bt1     AMD    bobcat        xen
bt2     AMD    jaguar        xen
osky    Intel  skylake       kvm

The guest installs (the rows of the table), each available on a subset of the hosts above, are:

freebsd 9.3 (32, 64)
freebsd 10 (32, 64)
freebsd 11.{2,3,4} (32, 64)
freebsd 12.{0,1} (32, 64)
netbsd 6.0 (32, 64)
netbsd 6.1 (32, 64)
netbsd 7.0 (32, 64)
netbsd 7.1 (32, 64)
netbsd 7.2 (32, 64)
netbsd 8.0 (32, 64)
netbsd 8.1 (32, 64)
netbsd 8.2 (32, 64)
netbsd 9.0 (32, 64)
gentoo (32, 64)
gentoo hard (32, 64)
debian 7 (32, 64)
debian 8 (32, 64)
debian 9 (32, 64)
debian 10 (32, 64)
debian 11 (32, 64)
dragonfly 5.6 (64)
dragonfly 5.8 (64)
devuan 2 (32, 64)
fedora 29 (64)
fedora 30 (64)
fedora 31 (64)
fedora 32 (64)
ubuntu 1804 (64)
ubuntu 1810 (64)
ubuntu 1904 (64)
ubuntu 1910 (64)
ubuntu 2004 (64)
alpine 3.11 (32, 64)
void linux (32, 64)
clear linux (64)
macos sierra (64)
solaris (32, 64)
dos 7 (64)

Type-2 virtualised non-x86 systems, "user-mode" emulation

These pseudo-systems run under a Xen guest (currently qemuusr1, which in turn runs under servus), each in a chroot containing a complete GNU/Linux install.

With few exceptions, the binaries in each chroot are built for the respective emulated system; currently only /bin/sh and /bin/bash are host binaries. Things could be sped up greatly by providing more host binaries, notably cc1.

2019-02-25: We moved to a newer qemu version for all these systems without taking the time to check for qemu regressions. We will instead revert to known-good qemu versions as GMP testing reveals bugs. [Reverted hppa, mipsel, mipseb, ppc32, and ppc64 to the latest good qemu version.]
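
The host names in the table below can be used directly; for example, to check what the arm64 Debian 10 chroot presents (assuming the usual ssh path via ashell, uname -m should report aarch64, the emulated architecture):

ashell$ ssh arm64-debv10
arm64-debv10$ uname -m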

host  arch  running on  emulator  cores  slowdown¹  slowdown²  notes
armel-deb{v6,v7,v8,v9,v10,v11} armv7a servile qemu 4.1.1 24 11
armhf-deb{v7,v8,v9,v10,v11} armv7a servile qemu 4.1.1 24 11 primarily use system "tinker" via ashell
arm64-deb{v8,v9,v10,v11} armv8a servile qemu 4.1.1 24 10 primarily use system "odn2" via ashell
ppc32-gentoo ppc32 servile qemu 4.2.0 24 14
power{7,8,9}eb-gentoo power{7,8,9}/be servile qemu 4.2.0 24 15
power{8,9}el-gentoo power{8,9}/le servile qemu 4.2.0 24 15
ppc64eb-deb{v7,v8} ppc64/be servile qemu 4.2.0 24 10
ppc64el-deb{v8,v9,v10,v11} ppc64/le servile qemu 4.2.0 24
mips64eb-deb{v6,v7,v8,v9,v10} mips64/be servile qemu 4.1.1 24 10 problems executing some n32 binaries (qemu bugs)
mips64el-deb{v6,v7,v8,v9,v10,v11} mips64/le servile qemu 4.1.1 24 9 problems executing some n32 binaries (qemu bugs)
mips64elr6-debv10 mips64r6/le servile qemu 4.2.0 24 13 only abi=64 supported
s390x-gentoo z196? servile qemu 4.0.0 24 15
s390x-deb{v7,v8,v9,v10,v11} z196? servile qemu 4.1.1 24
alphaev{5,56,6,67}-gentoo ev{5,56,6,67} servile qemu 4.2.0 24 9
hppa-gentoo servile qemu 2.11.2 24 9
m68k-gentoo servile qemu 4.2.0 24
riscv-fed28 servile qemu 3.1.0 24 9

Type-2 virtualised x86 and non-x86 full system emulation

The "user-mode" systems of the previous section should be preferred, since they have much less overhead and emulate many more CPU cores.

These full-system emulation hosts are mainly useful for things which currently don't work in user mode: m68k, ppc64 using the 32-bit ABI and 64-bit instructions, mips64 using the n32 ABI, and sparc. Debugging is also sometimes easier under full-system emulation.
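
For example, to exercise the n32 ABI on one of the mips64 guests, pass GMP's standard ABI option to configure (a sketch; expect build times inflated by the slowdown factors below):

mips64eb-debv10.sys$ ./configure ABI=n32 && make && make check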

host  arch  running on  emulator  cores  ram (MiB)  slowdown¹  slowdown²  os/kern  notes
armel-debv8.sys armv5tj servus qemu 2.12.1 1 256 30 gnu/linux deb 8
armhf-debv9.sys armv7a+neon servus qemu 4.0.0 4 256 33 gnu/linux deb 9 primarily use system "tinker" via ashell
arm64-fbsdv12.sys armv8a servus qemu 4.0.0 4 512 45 freebsd 12
arm64-debv10.sys armv8a servus qemu 4.1.0 4 512 gnu/linux deb 10 primarily use system "odn2" via ashell
ppc32-debv8.sys ppc32 servus qemu 3.0.1 1 256 gnu/linux deb 8
ppc64eb-fbsdv12.sys power8/be servus qemu 2.12.1 4 512 freebsd 12
ppc64eb-debv8.sys power8/be servus qemu 3.0.1 4 512 (33) gnu/linux deb 8
ppc64el-debv9.sys power9/le servus qemu 20181126 4 512 47 gnu/linux deb 9
mips64eb-debv10.sys mips64r2/be servus qemu 4.1.0 1 512 50 gnu/linux deb 10 use mainly for the n32 ABI, else mipseb-debv10 above
mips64el-debv10.sys mips64r2/le servus qemu 4.1.0 1 512 52 gnu/linux deb 10 use mainly for the n32 ABI, else mipsel-debv10 above
m68k.sys mc68040 servus aranym 1 256 38 gnu/linux deb 8
s390-debv9.sys z196 servus qemu 4.0.0 4 512 gnu/linux deb 9
alpha-gentoo.sys ev67 servus qemu 4.0.0 4 512 gnu/linux gentoo
sparcnbsd64 sparcv9b osky qemu 2.10.2 1 512 75 netbsd 7.1.2 only accessible by special means
sparcnbsd32 sparcv8 osky qemu 2.11.1 1 256 75 netbsd 7.1.2 only accessible by special means

Table footnotes:

  1. This slowdown factor is relative to the emulation host and measures GMP compilation; it includes the emulator slowdown and is skewed by OS properties. The gcc versions might differ between host and guest, and gcc's speed varies from target to target. As an example, with a slowdown factor of 11, a build that takes 3 minutes natively on the host takes about 33 minutes in the emulated system.
  2. This slowdown factor is relative to the emulation host and measures a GMPbench run. It is unfair mainly when emulating a 32-bit system on a 64-bit host, since GMP is much more efficient with native 64-bit arithmetic.