GMP test systems
Last modified: 2019-07-16


The GMP project maintains a comprehensive test environment consisting of physical and emulated systems. All test systems use non-routable IP addresses and are firewalled behind the main GMP network.

GMP developers with an account at shell.gmplib.org can log in to any of these systems. Only virtualised systems marked as running on servus are directly reachable; the other systems are reached via the system 'ashell', which acts as a secondary gateway. Log in to ashell from shell with this command:

shell$ ssh ashell

Most systems below are powered off except when tests are being run. The power-control system is a bit crude; the command for switching on [system] is

ashell$ pdu on [system]

and the test system will then switch it off properly when done. The delay before a system and its virtualised guest systems are up can be 100 seconds (or, in a few cases, worse).
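A typical session on ashell might therefore look like this (the names 'sky' and 'skygentoo64' are just examples taken from the tables below):

```shell
pdu on sky         # switch the physical host 'sky' on
sleep 100          # boot delay is about 100 seconds, sometimes worse
ssh skygentoo64    # log in to one of sky's virtualised guest systems
# No 'pdu off' is needed: the test system switches the host off again.
```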

Please see the status page for system power information.

Table colour coding indicates where a machine is located; the 'on' and 'off' columns show the colour used for each power state:

on  off  location
         TUG in Stockholm, access via shell.gmplib.org
         Salt, access via TUG's shell.gmplib.org and then as per the instructions above

Real hardware systems

name  arch  cpu type  cpu code name  cores  clk (MHz)  L1 (KiB)  L2 (KiB)  L3 (MiB)  ram (GiB)  virt  OS/kern  pwr stat  comment
servus x86-64 Xeon E5-1650v2 Ivy Bridge-EP 6 3500 6 × 32 6 × 256 12 96 xen gnu/linux gentoo on ssh to port 2202 to virtual system 'shell'
servile x86-64 Ryzen 2700X Zen/Pinnacle Ridge 8 3700-4300 8 × 32 8 × 512 16 64 xen gnu/linux gentoo on ssh via 'shell' through tunnel to virtual system 'ashell'
k8 x86-64 Athlon X2 4800+ K8/Brisbane 2 2500 2 × 64 2 × 512 8 xen gnu/linux gentoo pdu use guest systems, see next table
k10 x86-64 Phenom II 1090T K10/Thuban 6 3200-3600 6 × 64 6 × 512  6 32 xen gnu/linux gentoo pdu use guest systems, see next table
bull x86-64 FX-4100 Bulldozer/Zambezi 4 3600-3800 4 × 16 2 × 2048  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
pile x86-64 FX-8350 Piledriver/Vishera 8 4000-4200 8 × 16 4 × 2048  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
exca x86-64 A12-9800 Excavator/Bristol Ridge 4 3800-4200 4 × 32 2 × 1024 32 xen gnu/linux gentoo pdu use guest systems, see next table
suri x86-64 Ryzen 1500X (mfg 1740) Zen/Summit Ridge 4 3500-3900 4 × 32 4 × 512 16 32 xen gnu/linux gentoo pdu use guest systems, see next table
piri x86-64 Ryzen 2700X Zen/Pinnacle Ridge 8 3700-4300 8 × 32 8 × 512 16 48 xen gnu/linux gentoo pdu use guest systems, see next table
mati x86-64 Ryzen 3700X Zen2/Matisse 8 3600-4400 8 × 32 8 × 512 32 32 xen gnu/linux gentoo pdu use guest systems, see next table
element x86-64 Xeon Nocona 2 3400 2 × 16 1024 8 gnu/linux gentoo timer unreliable system
cnr x86-64 Xeon 3085 Conroe 2 3000 2 × 32 6144 8 xen gnu/linux gentoo pdu use guest systems, see next table
pnr x86-64 Xeon E3110 Penryn/Wolfdale 2 3000 2 × 32 6144 8 xen gnu/linux gentoo pdu use guest systems, see next table
nhm x86-64 Xeon X3470 Nehalem/Lynnfield 4 2933-3200 4 × 32 4 × 256  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
wsm x86-64 Xeon E5649 Westmere 6 2533-2933 6 × 32 6 × 256  12 24 xen gnu/linux gentoo pdu use guest systems, see next table
sbr x86-64 Xeon E3-1270 Sandybridge 4 3400-3800 4 × 32 4 × 256  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
hwl x86-64 Xeon E3-1271v3 Haswell 4 3600-4000 4 × 32 4 × 256  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
bwl x86-64 Xeon E3-1285Lv4 Broadwell 4 3400-3800 4 × 32 4 × 256 6+128 32 xen gnu/linux gentoo pdu use guest systems, see next table
osky x86-64 Core i5 6600K Skylake 4 3500 4 × 32 4 × 256  6 16 kvm gnu/linux debian pdu misc virtualisation host
sky x86-64 Xeon E3-1270v5 Skylake 4 3600-4000 4 × 32 4 × 256  8 32 xen gnu/linux gentoo pdu use guest systems, see next table
bobcat x86-64 E-350 Zacate 2 1600 2 × 32 2 × 512 8 gnu/linux gentoo pdu
jag x86-64 Athlon 5350 Jaguar/Kabini 4 2050 4 × 32 2048 16 xen gnu/linux gentoo pdu use guest systems, see next table
gege x86-64 Atom 330 Diamondville 2 1600 24 512 4 gnu/linux gentoo pdu
slm x86-64 Atom C2758 Silvermont/Rangeley 8 2400 8 × 24 4096 32 xen gnu/linux gentoo pdu waiting to die due to Intel C2000 clock bug
glm x86-64 Atom C3758 Goldmont/Denverton 8 2200 8 × 24 16384 32 xen gnu/linux gentoo pdu use guest systems, see next table
cough x86-64 Celeron J4105 Goldmont Plus/Gemini Lake 4 1500-2400 4 × 24 4096 8 gnu/linux gentoo pdu
tambo x86-32 Athlon K7/Barton 1 2083 64 512 2 gnu/linux gentoo timer
labrador x86-32 Pentium3 Coppermine 1 800 1 gnu/linux gentoo off will come back under timer control
parks x86-32 Pentium4-2 Northwood 1 2600 8 512 1 gnu/linux gentoo timer
olympic ia-64 Itanium 2 Mckinley 2 900 2 × 16 2 × 256 1.5 4 freebsd 10.3 pdu HP rx2620
g5 ppc64 PPC-970 2 1800 2 × 32 2 × 512 1.2 macos/darwin timer Power Mac G5
pi1 armv6 arm1176 1 700 0.5 gnu/linux on Raspberry Pi 1
odc1 armv7a Cortex-A5 4 1500 1 gnu/linux on Odroid-C1+
pi2 armv7a Cortex-A7 4 900 1 gnu/linux on Raspberry Pi 2
beagle armv7a Cortex-A8 1 1000 0.5 gnu/linux on Beaglebone black
panda armv7a Cortex-A9 2 1000 1 gnu/linux on Pandaboard; VERY CRASH-PRONE
nanot2 armv7a Cortex-A9 4 1400 1 gnu/linux on FriendlyELEC NanoPC-T2
odxu4 armv7a Cortex-A15 + Cortex-A7 4×2000 + 4×1400 2 gnu/linux on Odroid-XU4; Stable at last???
tinker armv7a Cortex-A17 4 1800 2 gnu/linux on ASUS Tinker Board
pi3 armv8 Cortex-A53 (32) 4 1400 1 gnu/linux on Raspberry Pi 3 B+
odc2 armv8 Cortex-A53 (64) 4 1536 2 gnu/linux on Odroid-C2
nanom4 armv8 Cortex-A72 + Cortex-A53 2×1800 + 4×1416 4 gnu/linux on FriendlyELEC NanoPi M4
odn2 armv8 Cortex-A73 + Cortex-A53 4×1800 + 2×1900 4 gnu/linux on Odroid-N2

Pictures of GMP development systems:
GMP main development systems
GMP misc development systems
GMP arm development systems

Type-1 virtualised x86 systems

The host names of the virtualised systems are formed from the physical host name, the abbreviated OS name, the OS flavour (32 or 64), and 'v' followed by the abbreviated version number. Some installs lack the version part. Exact names are given in the table below.
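As an illustrative sketch (not a tool that exists in the GMP infrastructure), the naming rule can be expressed as:

```python
def guest_name(host, os_abbrev, flavour, version=None):
    """Compose a virtualised guest's host name from its parts.

    host      -- physical host name, e.g. 'piri'
    os_abbrev -- abbreviated OS name, e.g. 'fbsd', 'nbsd', 'deb'
    flavour   -- OS flavour: 32 or 64
    version   -- abbreviated version number, or None for installs
                 that lack the version part (e.g. the gentoo ones)
    """
    name = f"{host}{os_abbrev}{flavour}"
    if version is not None:
        name += f"v{version}"
    return name

# Examples matching entries in the table below:
print(guest_name("piri", "fbsd", 64, "12"))  # pirifbsd64v12
print(guest_name("k8", "gentoo", 32))        # k8gentoo32
```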

The primary system for each host is the one shown in bold. These systems are better maintained, have more memory, and are given several CPU cores.




system name  make   µarch        virtsys
k8           AMD    brisbane     gnu/linux/xen
k10          AMD    thuban       gnu/linux/xen
bull         AMD    bulldozer    gnu/linux/xen
pile         AMD    piledriver   gnu/linux/xen
exca         AMD    excavator    gnu/linux/xen
suri         AMD    zen          gnu/linux/xen
piri         AMD    zen+         gnu/linux/xen
mati         AMD    zen2         gnu/linux/xen
cnr          Intel  conroe       gnu/linux/xen
pnr          Intel  penryn       gnu/linux/xen
nhm          Intel  nehalem      gnu/linux/xen
wsm          Intel  westmere     gnu/linux/xen
sbr          Intel  sandybridge  gnu/linux/xen
servus       Intel  ivybridge    gnu/linux/xen
hwl          Intel  haswell      gnu/linux/xen
bwl          Intel  broadwell    gnu/linux/xen
sky          Intel  skylake      gnu/linux/xen
slm          Intel  silvermont   gnu/linux/xen
glm          Intel  goldmont     gnu/linux/xen
jaguar       AMD    jaguar       gnu/linux/xen
osky         Intel  skylake      gnu/linux/kvm
freebsd 9.3 32 k8fbsd32v93 k10fbsd32v93 bullfbsd32v93 pilefbsd32v93 excafbsd32v93 surifbsd32v93 pirifbsd32v93 matifbsd32v93 cnrfbsd32v93 pnrfbsd32v93 nhmfbsd32v93 wsmfbsd32v93 sbrfbsd32v93 ivyfbsd32v93 hwlfbsd32v93 bwlfbsd32v93 skyfbsd32v93 slmfbsd32v93 glmfbsd32v93 jagfbsd32v93
freebsd 9.3 64 k8fbsd64v93 k10fbsd64v93 bullfbsd64v93 pilefbsd64v93 excafbsd64v93 surifbsd64v93 pirifbsd64v93 matifbsd64v93 cnrfbsd64v93 pnrfbsd64v93 nhmfbsd64v93 wsmfbsd64v93 sbrfbsd64v93 ivyfbsd64v93 hwlfbsd64v93 bwlfbsd64v93 skyfbsd64v93 slmfbsd64v93 glmfbsd64v93 jagfbsd64v93
freebsd 10 32 k8fbsd32v10 k10fbsd32v10 bullfbsd32v10 pilefbsd32v10 excafbsd32v10 surifbsd32v10 pirifbsd32v10 matifbsd32v10 cnrfbsd32v10 pnrfbsd32v10 nhmfbsd32v10 wsmfbsd32v10 sbrfbsd32v10 ivyfbsd32v10 hwlfbsd32v10 bwlfbsd32v10 skyfbsd32v10 slmfbsd32v10 glmfbsd32v10 jagfbsd32v10
freebsd 10 64 k8fbsd64v10 k10fbsd64v10 bullfbsd64v10 pilefbsd64v10 excafbsd64v10 surifbsd64v10 pirifbsd64v10 matifbsd64v10 cnrfbsd64v10 pnrfbsd64v10 nhmfbsd64v10 wsmfbsd64v10 sbrfbsd64v10 ivyfbsd64v10 hwlfbsd64v10 bwlfbsd64v10 skyfbsd64v10 slmfbsd64v10 glmfbsd64v10 jagfbsd64v10
freebsd 11 32 k8fbsd32v11 k10fbsd32v11 bullfbsd32v11 pilefbsd32v11 excafbsd32v11 surifbsd32v11 pirifbsd32v11 matifbsd32v11 cnrfbsd32v11 pnrfbsd32v11 nhmfbsd32v11 wsmfbsd32v11 sbrfbsd32v11 ivyfbsd32v11 hwlfbsd32v11 bwlfbsd32v11 skyfbsd32v11 slmfbsd32v11 glmfbsd32v11 jagfbsd32v11
freebsd 11 64 k8fbsd64v11 k10fbsd64v11 bullfbsd64v11 pilefbsd64v11 excafbsd64v11 surifbsd64v11 pirifbsd64v11 matifbsd64v11 cnrfbsd64v11 pnrfbsd64v11 nhmfbsd64v11 wsmfbsd64v11 sbrfbsd64v11 ivyfbsd64v11 hwlfbsd64v11 bwlfbsd64v11 skyfbsd64v11 slmfbsd64v11 glmfbsd64v11 jagfbsd64v11
freebsd 12 32 k8fbsd32v12 k10fbsd32v12 bullfbsd32v12 pilefbsd32v12 excafbsd32v12 surifbsd32v12 pirifbsd32v12 matifbsd32v12 cnrfbsd32v12 pnrfbsd32v12 nhmfbsd32v12 wsmfbsd32v12 sbrfbsd32v12 ivyfbsd32v12 hwlfbsd32v12 bwlfbsd32v12 skyfbsd32v12 slmfbsd32v12 glmfbsd32v12 jagfbsd32v12
freebsd 12 64 k8fbsd64v12 k10fbsd64v12 bullfbsd64v12 pilefbsd64v12 excafbsd64v12 surifbsd64v12 pirifbsd64v12 matifbsd64v12 cnrfbsd64v12 pnrfbsd64v12 nhmfbsd64v12 wsmfbsd64v12 sbrfbsd64v12 ivyfbsd64v12 hwlfbsd64v12 bwlfbsd64v12 skyfbsd64v12 slmfbsd64v12 glmfbsd64v12 jagfbsd64v12
netbsd 6.0 32 excanbsd32v60 pirinbsd32v60 matinbsd32v60 ivynbsd32v60 skynbsd32v60
netbsd 6.0 64 excanbsd64v60 pirinbsd64v60 matinbsd64v60 ivynbsd64v60 skynbsd64v60
netbsd 6.1 32 k10nbsd32v61 bullnbsd32v61 pilenbsd32v61 excanbsd32v61 surinbsd32v61 pirinbsd32v61 matinbsd32v61 nhmnbsd32v61 wsmnbsd32v61 sbrnbsd32v61 ivynbsd32v61 hwlnbsd32v61 bwlnbsd32v61 skynbsd32v61 slmnbsd32v61 glmnbsd32v61 jagnbsd32v61
netbsd 6.1 64 k10nbsd64v61 bullnbsd64v61 pilenbsd64v61 excanbsd64v61 surinbsd64v61 pirinbsd64v61 matinbsd64v61 nhmnbsd64v61 wsmnbsd64v61 sbrnbsd64v61 ivynbsd64v61 hwlnbsd64v61 bwlnbsd64v61 skynbsd64v61 slmnbsd64v61 glmnbsd64v61 jagnbsd64v61
netbsd 7.0 32 k10nbsd32v70 bullnbsd32v70 pilenbsd32v70 excanbsd32v70 surinbsd32v70 pirinbsd32v70 matinbsd32v70 nhmnbsd32v70 wsmnbsd32v70 sbrnbsd32v70 ivynbsd32v70 hwlnbsd32v70 bwlnbsd32v70 skynbsd32v70 slmnbsd32v70 glmnbsd32v70 jagnbsd32v70
netbsd 7.0 64 k10nbsd64v70 bullnbsd64v70 pilenbsd64v70 excanbsd64v70 surinbsd64v70 pirinbsd64v70 matinbsd64v70 nhmnbsd64v70 wsmnbsd64v70 sbrnbsd64v70 ivynbsd64v70 hwlnbsd64v70 bwlnbsd64v70 skynbsd64v70 slmnbsd64v70 glmnbsd64v70 jagnbsd64v70
netbsd 7.1 32 k8nbsd32v71 k10nbsd32v71 bullnbsd32v71 pilenbsd32v71 excanbsd32v71 surinbsd32v71 pirinbsd32v71 matinbsd32v71 cnrnbsd32v71 pnrnbsd32v71 nhmnbsd32v71 wsmnbsd32v71 sbrnbsd32v71 ivynbsd32v71 hwlnbsd32v71 bwlnbsd32v71 skynbsd32v71 slmnbsd32v71 glmnbsd32v71 jagnbsd32v71
netbsd 7.1 64 k8nbsd64v71 k10nbsd64v71 bullnbsd64v71 pilenbsd64v71 excanbsd64v71 surinbsd64v71 pirinbsd64v71 matinbsd64v71 cnrnbsd64v71 pnrnbsd64v71 nhmnbsd64v71 wsmnbsd64v71 sbrnbsd64v71 ivynbsd64v71 hwlnbsd64v71 bwlnbsd64v71 skynbsd64v71 slmnbsd64v71 glmnbsd64v71 jagnbsd64v71
netbsd 7.2 32 k8nbsd32v72 k10nbsd32v72 bullnbsd32v72 pilenbsd32v72 excanbsd32v72 surinbsd32v72 pirinbsd32v72 matinbsd32v72 cnrnbsd32v72 pnrnbsd32v72 nhmnbsd32v72 wsmnbsd32v72 sbrnbsd32v72 ivynbsd32v72 hwlnbsd32v72 bwlnbsd32v72 skynbsd32v72 slmnbsd32v72 glmnbsd32v72 jagnbsd32v72
netbsd 7.2 64 k8nbsd64v72 k10nbsd64v72 bullnbsd64v72 pilenbsd64v72 excanbsd64v72 surinbsd64v72 pirinbsd64v72 matinbsd64v72 cnrnbsd64v72 pnrnbsd64v72 nhmnbsd64v72 wsmnbsd64v72 sbrnbsd64v72 ivynbsd64v72 hwlnbsd64v72 bwlnbsd64v72 skynbsd64v72 slmnbsd64v72 glmnbsd64v72 jagnbsd64v72
netbsd 8.0 32 k8nbsd32v80 k10nbsd32v80 bullnbsd32v80 pilenbsd32v80 excanbsd32v80 surinbsd32v80 pirinbsd32v80 matinbsd32v80 cnrnbsd32v80 pnrnbsd32v80 nhmnbsd32v80 wsmnbsd32v80 sbrnbsd32v80 ivynbsd32v80 hwlnbsd32v80 bwlnbsd32v80 skynbsd32v80 slmnbsd32v80 glmnbsd32v80 jagnbsd32v80
netbsd 8.0 64 k8nbsd64v80 k10nbsd64v80 bullnbsd64v80 pilenbsd64v80 excanbsd64v80 surinbsd64v80 pirinbsd64v80 matinbsd64v80 cnrnbsd64v80 pnrnbsd64v80 nhmnbsd64v80 wsmnbsd64v80 sbrnbsd64v80 ivynbsd64v80 hwlnbsd64v80 bwlnbsd64v80 skynbsd64v80 slmnbsd64v80 glmnbsd64v80 jagnbsd64v80
netbsd 8.1 32 k8nbsd32v81 k10nbsd32v81 bullnbsd32v81 pilenbsd32v81 excanbsd32v81 surinbsd32v81 pirinbsd32v81 matinbsd32v81 cnrnbsd32v81 pnrnbsd32v81 nhmnbsd32v81 wsmnbsd32v81 sbrnbsd32v81 ivynbsd32v81 hwlnbsd32v81 bwlnbsd32v81 skynbsd32v81 slmnbsd32v81 glmnbsd32v81 jagnbsd32v81
netbsd 8.1 64 k8nbsd64v81 k10nbsd64v81 bullnbsd64v81 pilenbsd64v81 excanbsd64v81 surinbsd64v81 pirinbsd64v81 matinbsd64v81 cnrnbsd64v81 pnrnbsd64v81 nhmnbsd64v81 wsmnbsd64v81 sbrnbsd64v81 ivynbsd64v81 hwlnbsd64v81 bwlnbsd64v81 skynbsd64v81 slmnbsd64v81 glmnbsd64v81 jagnbsd64v81
gentoo 32 k8gentoo32 k10gentoo32 bullgentoo32 pilegentoo32 excagentoo32 surigentoo32 pirigentoo32 matigentoo32 cnrgentoo32 pnrgentoo32 nhmgentoo32 wsmgentoo32 sbrgentoo32 ivygentoo32 hwlgentoo32 bwlgentoo32 skygentoo32 slmgentoo32 glmgentoo32 jaggentoo32
gentoo 64 k8gentoo64 k10gentoo64 bullgentoo64 pilegentoo64 excagentoo64 surigentoo64 pirigentoo64 matigentoo64 cnrgentoo64 pnrgentoo64 nhmgentoo64 wsmgentoo64 sbrgentoo64 ivygentoo64 hwlgentoo64 bwlgentoo64 skygentoo64 slmgentoo64 glmgentoo64 jaggentoo64
gentoo hard 32 ivygentoo32h
gentoo hard 64 ivygentoo64h
debian 7 32 k8deb32v7 k10deb32v7 bulldeb32v7 piledeb32v7 excadeb32v7 surideb32v7 pirideb32v7 matideb32v7 cnrdeb32v7 pnrdeb32v7 nhmdeb32v7 wsmdeb32v7 sbrdeb32v7 ivydeb32v7 hwldeb32v7 bwldeb32v7 skydeb32v7 slmdeb32v7 glmdeb32v7 jagdeb32v7
debian 7 64 k8deb64v7 k10deb64v7 bulldeb64v7 piledeb64v7 excadeb64v7 surideb64v7 pirideb64v7 matideb64v7 cnrdeb64v7 pnrdeb64v7 nhmdeb64v7 wsmdeb64v7 sbrdeb64v7 ivydeb64v7 hwldeb64v7 bwldeb64v7 skydeb64v7 slmdeb64v7 glmdeb64v7 jagdeb64v7
debian 8 32 k8deb32v8 k10deb32v8 bulldeb32v8 piledeb32v8 excadeb32v8 surideb32v8 pirideb32v8 matideb32v8 cnrdeb32v8 pnrdeb32v8 nhmdeb32v8 wsmdeb32v8 sbrdeb32v8 ivydeb32v8 hwldeb32v8 bwldeb32v8 skydeb32v8 slmdeb32v8 glmdeb32v8 jagdeb32v8
debian 8 64 k8deb64v8 k10deb64v8 bulldeb64v8 piledeb64v8 excadeb64v8 surideb64v8 pirideb64v8 matideb64v8 cnrdeb64v8 pnrdeb64v8 nhmdeb64v8 wsmdeb64v8 sbrdeb64v8 ivydeb64v8 hwldeb64v8 bwldeb64v8 skydeb64v8 slmdeb64v8 glmdeb64v8 jagdeb64v8
debian 9 32 k8deb32v9 k10deb32v9 bulldeb32v9 piledeb32v9 excadeb32v9 surideb32v9 pirideb32v9 matideb32v9 cnrdeb32v9 pnrdeb32v9 nhmdeb32v9 wsmdeb32v9 sbrdeb32v9 ivydeb32v9 hwldeb32v9 bwldeb32v9 skydeb32v9 slmdeb32v9 glmdeb32v9 jagdeb32v9
debian 9 64 k8deb64v9 k10deb64v9 bulldeb64v9 piledeb64v9 excadeb64v9 surideb64v9 pirideb64v9 matideb64v9 cnrdeb64v9 pnrdeb64v9 nhmdeb64v9 wsmdeb64v9 sbrdeb64v9 ivydeb64v9 hwldeb64v9 bwldeb64v9 skydeb64v9 slmdeb64v9 glmdeb64v9 jagdeb64v9
debian 10 32 k8deb32v10 k10deb32v10 bulldeb32v10 piledeb32v10 excadeb32v10 surideb32v10 pirideb32v10 matideb32v10 cnrdeb32v10 pnrdeb32v10 nhmdeb32v10 wsmdeb32v10 sbrdeb32v10 ivydeb32v10 hwldeb32v10 bwldeb32v10 skydeb32v10 slmdeb32v10 glmdeb32v10 jagdeb32v10
debian 10 64 k8deb64v10 k10deb64v10 bulldeb64v10 piledeb64v10 excadeb64v10 surideb64v10 pirideb64v10 matideb64v10 cnrdeb64v10 pnrdeb64v10 nhmdeb64v10 wsmdeb64v10 sbrdeb64v10 ivydeb64v10 hwldeb64v10 bwldeb64v10 skydeb64v10 slmdeb64v10 glmdeb64v10 jagdeb64v10
obsd 6.4 32 obsd32v64
obsd 6.4 64 obsd64v64
obsd 6.5 64 piriobsd64v65
devuan 2 64 ivydev64v2
devuan 2 32 ivydev32v2
devuan 3 64 ivydev64v3
devuan 3 32 ivydev32v3
fedora 29 64 ivyfed29
ubuntu 1804 64 ivyubu1804
ubuntu 1810 64 ivyubu1810
ubuntu 1904 64 ivyubu1904
arch 64
alpine 64 ivyalpine64
macos sierra 64 cod
solaris 32 ivysol32
solaris 64 ivysol64
dos 7 64 excados64 skydos64

Type-2 virtualised non-x86 systems, "user-mode" emulation

These pseudo-systems run under a Xen guest (currently qemuusr1, which in turn runs under servus), each in a chroot containing a complete GNU/Linux install.

The binaries thereunder are for the respective emulated systems, with a few exceptions: currently only /bin/sh and /bin/bash are host binaries. Things could be sped up considerably by providing more host binaries, notably cc1.

2019-02-25: We moved to a newer qemu version for all these systems without taking the time to check for qemu regressions. We will instead revert to working qemu versions as GMP testing reveals bugs. [Reverted hppa, mipsel, mipseb, ppc32, ppc64 to the latest good qemu version.]

host  arch  running on  emulator  cores  slowdown[1]  slowdown[2]  notes
armel-{debv6,debv7,debv8,debv9} armv7a servus qemu 3.1.0 6 11
armhf-{debv7,debv8,debv9} armv7a servus qemu 3.1.0 6 11 primarily use system "tinker" via ashell
arm64-{debv8,debv9} armv8-a servus qemu 3.1.0 6 10 primarily use system "odc2" via ashell
ppc32-gentoo ppc32 servus qemu 3.0.1 6 14
ppc32-{debv7,debv8} ppc32 servus qemu 3.0.1 6 11
ppc64eb-gentoo power8/be servus qemu 20181126 6 15
ppc64el-gentoo power8/le servus qemu 20181126 6 16
ppc64eb-{debv7,debv8} ppc64/be servus qemu 20181126 6 10
ppc64el-{debv8,debv9} ppc64/le servus qemu 20181126 6
power8el-debv9 power8/le servus qemu 20181126 6 12
power9el-debv9 power9/le servus qemu 20181126 6 12
mips64eb-{debv6,debv7,debv8,debv9} mips64/be servus qemu 2.10.2 6 10 problems executing some n32 binaries (qemu bugs)
mips64el-{debv6,debv7,debv8,debv9} mips64/le servus qemu 2.10.2 6 9 problems executing some n32 binaries (qemu bugs)
mips64elr6-debv10 mips64r6/le servus qemu custom 6 13 only abi=64 supported
s390x-gentoo z196? servus qemu 4.0.0 6 15
s390x-{debv7,debv8,debv9} z196? servus qemu 3.1.0 6
alpha-gentoo ev68 servus qemu 3.1.0 6 9
hppa-gentoo servus qemu 2.11.2 6 9
riscv-fed28 servus qemu 3.1.0 6 9

Type-2 virtualised x86 and non-x86 full system emulation

The "user-mode" systems of the previous section should be preferred, since they have much less overhead and furthermore emulate 6 CPU cores.

These full-system emulation hosts are mainly useful for things which currently don't work in user mode: m68k, ppc64 using the 32-bit ABI with 64-bit instructions, mips64 using the n32 ABI, and sparc. Debugging is also sometimes easier with full system emulation.

host  arch  running on  emulator  cores  ram (MiB)  slowdown[1]  slowdown[2]  os/kern  notes
armel-debv8.sys armv5tj servus qemu 2.12.1 1 256 30 gnu/linux deb 8
armhf-debv9.sys armv7a+neon servus qemu 4.0.0 4 256 33 gnu/linux deb 9 primarily use system "tinker" via ashell
arm64-fbsdv12.sys armv8 servus qemu 4.0.0 4 512 45 freebsd 12
arm64-debv9.sys armv8 servus qemu 4.0.0 4 512 gnu/linux deb 9 primarily use system "odc2" via ashell
ppc32-debv8.sys ppc32 servus qemu 3.0.1 1 256 gnu/linux deb 8
ppc64eb-fbsdv12.sys power8/be servus qemu 2.12.1 4 512 freebsd 12
ppc64eb-debv8.sys power8/be servus qemu 3.0.1 4 512 (33) gnu/linux deb 8
ppc64el-debv9.sys power9/le servus qemu 20181126 4 512 47 gnu/linux deb 9
mips64eb-debv9.sys mips64r2/be servus qemu 4.0.0 1 512 50 gnu/linux deb 9 use mainly for the n32 ABI, else mipseb-debv9 above
mips64el-debv9.sys mips64r2/le servus qemu 4.0.0 1 512 52 gnu/linux deb 9 use mainly for the n32 ABI, else mipsel-debv9 above
m68k.sys mc68040 servus aranym 1 256 38 gnu/linux deb 8
s390-debv9.sys z196 servus qemu 4.0.0 4 512 gnu/linux deb 9
alpha-gentoo.sys ev67 servus qemu 4.0.0 4 512 gnu/linux gentoo
sparcnbsd64 sparcv9b osky qemu 2.10.2 1 512 75 netbsd 7.1.2 only accessible by special means
sparcnbsd32 sparcv8 osky qemu 2.11.1 1 256 75 netbsd 7.1.2 only accessible by special means

Table footnotes:

  1. This slowdown factor is relative to each emulation host for GMP compilation; it includes emulator slowdown and is skewed by OS properties. The gcc versions might differ between host and guest, and gcc's speed varies from target to target.
  2. This slowdown factor is relative to each emulation host for running GMPbench. It is unfair mainly when emulating a 32-bit system on a 64-bit host, since GMP is much more efficient with native 64-bit arithmetic.
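For back-of-the-envelope planning, the factors can be applied directly (a pure illustration; the 10-minute native figure is made up):

```python
# Estimate wall-clock time on an emulated system from a native timing
# and the table's slowdown factor.  Both figures are rough by nature.
def emulated_minutes(native_minutes, slowdown):
    return native_minutes * slowdown

# E.g. a GMP build taking 10 minutes natively on the emulation host
# would take roughly 140 minutes under ppc32-gentoo (slowdown 14):
print(emulated_minutes(10, 14))  # 140
```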