Commit bca9b64e7b (Tino Reichardt): ZTS: Use QEMU for tests on Linux and FreeBSD
This commit adds functional tests for these systems:
- AlmaLinux 8, AlmaLinux 9, ArchLinux
- CentOS Stream 9, Fedora 39, Fedora 40
- Debian 11, Debian 12
- FreeBSD 13, FreeBSD 14, FreeBSD 15
- Ubuntu 20.04, Ubuntu 22.04, Ubuntu 24.04

- enabled by default:
 - AlmaLinux 8, AlmaLinux 9
 - Debian 11, Debian 12
 - Fedora 39, Fedora 40
 - FreeBSD 13, FreeBSD 14

Workflow for each operating system:
- install QEMU on the GitHub runner
- download the current cloud image of the operating system
- start and initialize that image via cloud-init
- install dependencies and power off the system
- start the system, build OpenZFS, then power off again
- clone that VM and start 2 instances of it
- run the functional tests, completing in around 3 hours
- when the tests are done, prepare the log files
- show detailed results for each system
- finally, generate the job summary

Real-world benefits from this PR:

1. The GitHub runner scripts are in the zfs repo itself. That means
   you can just open a PR against zfs, like "Add Fedora 41 tester", and
   see the results directly in the PR. ZFS admins no longer need to
   manually log in to the buildbot server to update the buildbot config
   with new versions of Fedora/AlmaLinux.

2. GitHub runners allow you to run the entire test suite against your
   private branch before submitting a formal PR to openzfs. Just open a
   PR against your private zfs repo, and the exact same
   Fedora/Alma/FreeBSD runners will fire up and run ZTS. This can be
   useful if you want to iterate on a ZTS change before submitting a
   formal PR.

3. buildbot is incredibly cumbersome. Our buildbot config files alone
   are ~1500 lines (not including any build/setup scripts)!
   It's a huge pain to set up.

4. We're running the super ancient buildbot 0.8.12. It's so ancient
   it requires python2. We actually have to build python2 from source
   for AlmaLinux 9 just to get it to run. Upgrading to a more modern
   buildbot is a huge undertaking, and the UI on the newer versions is
   worse.

5. Buildbot uses EC2 instances. EC2 is a pain because:
   * It costs money
   * They throttle IOPS and CPU usage, leading to mysterious,
     hard-to-diagnose failures and timeouts in ZTS.
   * EC2 is high maintenance. We have to set up security groups, SSH
     keys, networking, users, etc., in AWS, and it's a pain. We also
     have to periodically go in and kill zombie EC2 instances that
     buildbot is unable to kill off.

6. Buildbot doesn't always handle failures well. One of the things we
   saw in the past was the FreeBSD builders would often die, and each
   builder death would take up a "slot" in buildbot. So we would
   periodically have to restart buildbot via a cron job to get the slots
   back.

7. This PR divides up the ZTS test list into two parts, launches two
   VMs, and on each VM runs half the test suite. The test results are
   then merged and shown in the summary page. So we're basically
   parallelizing ZTS on the same github runner. This leads to lower
   overall ZTS runtimes (2.5-3 hours vs 4+ hours on buildbot), and one
   unified set of results per runner, which is nice.

8. Since the tests are running on a VM, we have much more control over
   what happens. We can capture the serial console output even if the
   test completely brings down the VM. In the future, we could also
   restart the test on the VM where it left off, so that if a single test
   panics the VM, we can just restart it and run the remaining ZTS tests
   (this functionality is not yet implemented though, just an idea).

9. Using the runners, users can manually kill or restart a test run
   via the GitHub UI. That really isn't possible with buildbot unless
   you're an admin.

10. Anecdotally, the tests seem to be more stable and consistent under
    the QEMU runners.

Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Tino Reichardt <milky-zfs@mcmilk.de>
Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Closes #16537
2024-09-17 12:03:27 -07:00

ZFS Test Suite README

1) Building and installing the ZFS Test Suite

The ZFS Test Suite runs under the test-runner framework. This framework is built alongside the standard ZFS utilities and is included as part of the zfs-test package. The zfs-test package can be built from source as follows:

$ ./configure
$ make pkg-utils

The resulting packages can be installed using the rpm or dpkg command as appropriate for your distribution. Alternatively, if you have installed ZFS from a distribution's repository (not from source), the zfs-test package may be provided for your distribution.

- Installed from source
$ rpm -ivh ./zfs-test*.rpm, or
$ dpkg -i ./zfs-test*.deb

- Installed from package repository
$ yum install zfs-test
$ apt-get install zfs-test

2) Running the ZFS Test Suite

The prerequisites for running the ZFS Test Suite are:

  • Three scratch disks
    • Specify the disks you wish to use in the $DISKS variable, as a space-delimited list like this: DISKS='vdb vdc vdd'. By default the zfs-tests.sh script will construct three loopback devices to be used for testing: DISKS='loop0 loop1 loop2'.
  • A non-root user with a full set of basic privileges and the ability to sudo(8) to root without a password to run the tests.
  • Specify any pools you wish to preserve as a space-delimited list in the $KEEP variable. All pools detected at the start of testing are added automatically.
  • The ZFS Test Suite will add users and groups to the test machine to verify functionality. Therefore it is strongly advised that a dedicated test machine, which can be a VM, be used for testing.
  • On FreeBSD, mountd(8) must use /etc/zfs/exports as one of its export files; by default this can be done by setting zfs_enable=yes in /etc/rc.conf.

Once the prerequisites are satisfied, simply run the zfs-tests.sh script:

$ /usr/share/zfs/zfs-tests.sh
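
To point the suite at specific scratch disks and preserve an existing pool, the DISKS and KEEP variables described above can simply be exported first. A minimal sketch, assuming vdb, vdc and vdd are dedicated test disks and "tank" stands in for a pool you want left untouched:

$ export DISKS='vdb vdc vdd'
$ export KEEP='tank'
$ /usr/share/zfs/zfs-tests.sh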

Alternatively, the zfs-tests.sh script can be run from the source tree to allow developers to rapidly validate their work. In this mode the ZFS utilities and modules from the source tree will be used (rather than those installed on the system). In order to avoid certain types of failures you will need to ensure the ZFS udev rules are installed. This can be done manually or by ensuring some version of ZFS is installed on the system.

$ ./scripts/zfs-tests.sh

The following zfs-tests.sh options are supported:

-v          Verbose zfs-tests.sh output.  When specified, additional
            information describing the test environment will be logged
            prior to invoking test-runner.  This includes the runfile
            being used, the DISKS targeted, pools to keep, etc.

-q          Quiet test-runner output.  When specified it is passed to
            test-runner(1) which causes output to be written to the
            console only for tests that do not pass and the results
            summary.

-x          Remove all testpools, dm, lo, and files (unsafe).  When
            specified the script will attempt to remove any leftover
            configuration from a previous test run.  This includes
            destroying any pools named testpool, unused DM devices,
            and loopback devices backed by file-vdevs.  This operation
            can be DANGEROUS because it is possible that the script
            will mistakenly remove a resource not related to the testing.

-k          Disable cleanup after test failure.  When specified the
            zfs-tests.sh script will not perform any additional cleanup
            when test-runner exits.  This is useful when the results of
            a specific test need to be preserved for further analysis.

-f          Use sparse files directly instead of loopback devices for
            the testing.  When running in this mode, certain tests
            which depend on real block devices will be skipped.

-c          Only create and populate constrained path

-I NUM      Number of iterations

-d DIR      Create sparse files for vdevs in the DIR directory.  By
            default these files are created under /var/tmp/.
            This directory must be world-writable.

-s SIZE     Use vdevs of SIZE (default: 4G)

-r RUNFILES Run tests in RUNFILES (default: common.run,linux.run)

-t PATH     Run single test at PATH relative to test suite

-T TAGS     Comma separated list of tags (default: 'functional')

-u USER     Run single test as USER (default: root)
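
Several of these options are commonly combined while iterating from the source tree. The invocation below is only a sketch; it assumes /mnt/scratch is a world-writable scratch directory on a dedicated test machine:

$ ./scripts/zfs-tests.sh -v -x -s 2G -d /mnt/scratch -I 2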

The ZFS Test Suite allows the user to specify a subset of the tests via a runfile or list of tags.

The format of the runfile is explained in test-runner(1), and the files that zfs-tests.sh uses are available for reference under /usr/share/zfs/runfiles. To specify a custom runfile, use the -r option:

$ /usr/share/zfs/zfs-tests.sh -r my_tests.run

Otherwise, the user can set the needed tags to run only specific tests.
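
For example, to run only the tests tagged for one feature, or a single test case, the -T and -t options described above can be used. A sketch (the exact tag names and test paths come from the stock runfiles and the installed layout, so they may differ on your system):

$ /usr/share/zfs/zfs-tests.sh -T zpool_create
$ /usr/share/zfs/zfs-tests.sh -t tests/functional/cli_root/zpool_create/zpool_create_001_pos.ksh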

3) Test results

While the ZFS Test Suite is running, one informational line is printed at the end of each test, and a results summary is printed at the end of the run. The results summary includes the location of the complete logs, which is of the form /var/tmp/test_results/[ISO 8601 date]. A normal test run launched with the zfs-tests.sh wrapper script will look something like this:

$ /usr/share/zfs/zfs-tests.sh -v -d /tmp/test

--- Configuration ---
Runfile:         /usr/share/zfs/runfiles/linux.run
STF_TOOLS:       /usr/share/zfs/test-runner
STF_SUITE:       /usr/share/zfs/zfs-tests
STF_PATH:        /var/tmp/constrained_path.G0Sf
FILEDIR:         /tmp/test
FILES:           /tmp/test/file-vdev0 /tmp/test/file-vdev1 /tmp/test/file-vdev2
LOOPBACKS:       /dev/loop0 /dev/loop1 /dev/loop2
DISKS:           loop0 loop1 loop2
NUM_DISKS:       3
FILESIZE:        4G
ITERATIONS:      1
TAGS:            functional
Keep pool(s):    rpool


/usr/share/zfs/test-runner/bin/test-runner.py  -c /usr/share/zfs/runfiles/linux.run \
    -T functional -i /usr/share/zfs/zfs-tests -I 1
Test: /usr/share/zfs/zfs-tests/tests/functional/arc/setup (run as root) [00:00] [PASS]
...more than 1100 additional tests...
Test: /usr/share/zfs/zfs-tests/tests/functional/zvol/zvol_swap/cleanup (run as root) [00:00] [PASS]

Results Summary
SKIP	  52
PASS	 1129

Running Time:	02:35:33
Percent passed:	95.6%
Log directory:	/var/tmp/test_results/20180515T054509

4) Example of adding and running test-case (zpool_example)

This broadly boils down to 5 steps:

  1. Create/set password-less sudo for the user running the test case.
  2. Edit configure.ac and Makefile.am appropriately
  3. Create/Modify .run files
  4. Create actual test-scripts
  5. Run Test case

We will look at each of these steps in depth.

  • Set password-less sudo for the 'Test' user, as the test scripts cannot be run as root

  • Edit the file configure.ac and include the following line under the AC_CONFIG_FILES section:

      tests/zfs-tests/tests/functional/cli_root/zpool_example/Makefile
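
    For illustration, this is just one more entry in the existing AC_CONFIG_FILES list; roughly (the neighbouring entries are elided here):

      AC_CONFIG_FILES([
              ...
              tests/zfs-tests/tests/functional/cli_root/zpool_example/Makefile
              ...
      ])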
    
  • Edit the file tests/runfiles/Makefile.am and add the line zpool_example.run, as shown below:

      pkgdatadir = $(datadir)/@PACKAGE@/runfiles
      dist_pkgdata_DATA = \
        zpool_example.run \
        common.run \
        freebsd.run \
        linux.run \
        longevity.run \
        perf-regression.run \
        sanity.run \
        sunos.run
    
  • Create file tests/runfiles/zpool_example.run. This defines the most common properties when run with test-runner.py or zfs-tests.sh.

      [DEFAULT]
      timeout = 600
      outputdir = /var/tmp/test_results
      tags = ['functional']
    
      tests = ['zpool_example_001_pos']
    

    If adding a test case to an already existing test group, the runfile will already be present and only needs to be updated. For example, to add zpool_example_002_pos to the above runfile, only update the "tests =" section of the runfile, as shown below:

      [DEFAULT]
      timeout = 600
      outputdir = /var/tmp/test_results
      tags = ['functional']
    
      tests = ['zpool_example_001_pos', 'zpool_example_002_pos']
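
    Larger runfiles such as common.run group tests into per-directory sections rather than listing them all under [DEFAULT]. A roughly equivalent section-based form (see test-runner(1) for the exact semantics) would be:

      [DEFAULT]
      timeout = 600
      outputdir = /var/tmp/test_results
      tags = ['functional']

      [tests/functional/cli_root/zpool_example]
      tests = ['zpool_example_001_pos', 'zpool_example_002_pos']
      tags = ['functional', 'zpool_example']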
    
  • Edit tests/zfs-tests/tests/functional/cli_root/Makefile.am and add the following line under SUBDIRS:

      zpool_example \ (make sure to escape the line end, as other folder names follow)
    
  • Create a new file tests/zfs-tests/tests/functional/cli_root/zpool_example/Makefile.am; its contents could be as below. This declares that we now have a test case zpool_example_001_pos.ksh.

      pkgdatadir = $(datadir)/@PACKAGE@/zfs-tests/tests/functional/cli_root/zpool_example
      dist_pkgdata_SCRIPTS = \
        zpool_example_001_pos.ksh
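
    If the new directory also needs shared setup and cleanup steps, those scripts are simply listed alongside the test, and the runfile's pre and post settings name them (see test-runner(1)). A sketch, assuming hypothetical setup.ksh and cleanup.ksh scripts:

      dist_pkgdata_SCRIPTS = \
        setup.ksh \
        cleanup.ksh \
        zpool_example_001_pos.ksh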
    
  • We can now create our test-case zpool_example_001_pos.ksh under tests/zfs-tests/tests/functional/cli_root/zpool_example/.

    #!/bin/ksh -p
    #
    # DESCRIPTION:
    #	zpool_example Test
    #
    # STRATEGY:
    #	1. Demo a very basic test case
    #

    # Pull in the common ZTS helpers (log_must, log_assert, destroy_pool, ...)
    . $STF_SUITE/include/libtest.shlib

    DISKS_DEV1="/dev/loop0"
    DISKS_DEV2="/dev/loop1"
    TESTPOOL=EXAMPLE_POOL

    function cleanup
    {
    	# Cleanup
    	destroy_pool $TESTPOOL
    	log_must rm -f $DISKS_DEV1
    	log_must rm -f $DISKS_DEV2
    }

    log_assert "zpool_example"
    # Run function "cleanup" on exit
    log_onexit cleanup

    # Prep backend devices
    log_must dd if=/dev/zero of=$DISKS_DEV1 bs=512 count=140000
    log_must dd if=/dev/zero of=$DISKS_DEV2 bs=512 count=140000

    # Create pool
    log_must zpool create $TESTPOOL $DISKS_DEV1 $DISKS_DEV2

    log_pass "zpool_example"
    
  • Run the test case, which can be done in two ways, as described in detail in section 2 above; example invocations are sketched after the list below.

    • test-runner.py (this takes a runfile as input; see zpool_example.run)
    • zfs-tests.sh (can execute the runfile or individual tests)
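
    For instance, once the zfs-test package has been rebuilt and reinstalled so that zpool_example.run and the test script are in their installed locations, either of the following should exercise the new test:

      $ /usr/share/zfs/test-runner/bin/test-runner.py -c /usr/share/zfs/runfiles/zpool_example.run \
          -T functional -i /usr/share/zfs/zfs-tests -I 1
      $ /usr/share/zfs/zfs-tests.sh -r zpool_example.run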