Continuous Integration
======================

GitLab CI
---------

GitLab provides a convenient framework for running commands in response to git pushes.
We use it to test merge requests (MRs) before merging them (pre-merge testing),
and to run post-merge testing on everything that lands in ``master``
(this is necessary because we still allow commits to be pushed outside of MRs,
and even then the MR CI runs in the forked repository, which might have been
modified and thus is unreliable).
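
For illustration only (this is not Mesa's actual configuration), a GitLab CI
job that runs both in pre-merge (MR) pipelines and after pushes to ``master``
could express that with ``rules`` along these lines; the job and script names
are made up:

.. code-block:: yaml

  example-test-job:
    script:
      - ./run-tests.sh  # placeholder command
    rules:
      - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'  # pre-merge testing
      - if: '$CI_COMMIT_BRANCH == "master"'                 # post-merge testing
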

The CI runs a number of tests, from trivial build-testing to complex GPU rendering:

- Build testing for a number of build systems, configurations and platforms
- Sanity checks (``meson test`` & ``scons check``)
- Some drivers (softpipe, llvmpipe, freedreno and panfrost) are also tested
  using `VK-GL-CTS <https://github.com/KhronosGroup/VK-GL-CTS>`__
- Replay of application traces

A typical run takes between 20 and 30 minutes, although it can take much longer
if the GitLab runners are overwhelmed, which happens sometimes. When that happens,
not much can be done besides waiting it out, or cancelling it.

Due to limited resources, we currently do not run the CI automatically
on every push; instead, we only run it automatically once the MR has
been assigned to ``Marge``, our merge bot.
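
As a hedged sketch of how such gating can be expressed (the actual condition in
Mesa's ``.gitlab-ci.yml`` may differ), jobs can be made automatic only for
pipelines started by the merge bot and left manual for everyone else:

.. code-block:: yaml

  .run-automatically-for-marge-only:
    rules:
      - if: '$GITLAB_USER_LOGIN == "marge-bot"'
        when: on_success
      - when: manual
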

If you're interested in the details, the main configuration file is ``.gitlab-ci.yml``,
which references a number of other files in ``.gitlab-ci/``.
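
One common way such references are expressed is GitLab CI's ``include``
mechanism; a minimal sketch (the file names below are illustrative, the real
ones under ``.gitlab-ci/`` may differ) looks like:

.. code-block:: yaml

  include:
    - local: '.gitlab-ci/lava-gitlab-ci.yml'
    - local: '.gitlab-ci/test-source-dep.yml'
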

If the GitLab CI doesn't seem to be running on your fork (or MRs, as they run
in the context of your fork), you should check the "Settings" of your fork.
Under "CI / CD" → "General pipelines", make sure "Custom CI config path" is
empty (or set to the default ``.gitlab-ci.yml``), and that the
"Public pipelines" box is checked.

If you're having issues with the GitLab CI, your best bet is to ask
about it on ``#freedesktop`` on Freenode and tag `Daniel Stone
<https://gitlab.freedesktop.org/daniels>`__ (``daniels`` on IRC) or
`Eric Anholt <https://gitlab.freedesktop.org/anholt>`__ (``anholt`` on
IRC).

The three GitLab CI systems currently integrated are:

.. toctree::
   :maxdepth: 1

   bare-metal
   LAVA
   docker

Intel CI
--------

The Intel CI is not yet integrated into the GitLab CI.
For now, special access must be manually given (file an issue in
`the Intel CI configuration repo <https://gitlab.freedesktop.org/Mesa_CI/mesa_jenkins>`__
if you think you or Mesa would benefit from you having access to the Intel CI).
Results can be seen on `mesa-ci.01.org <https://mesa-ci.01.org>`__
if you are *not* an Intel employee, but if you are, you can access a better
interface on `mesa-ci-results.jf.intel.com <http://mesa-ci-results.jf.intel.com>`__.

The Intel CI runs a much larger array of tests, on a number of generations
of Intel hardware and on multiple platforms (X11, Wayland, DRM & Android),
with the purpose of detecting regressions.
Tests include
`Crucible <https://gitlab.freedesktop.org/mesa/crucible>`__,
`VK-GL-CTS <https://github.com/KhronosGroup/VK-GL-CTS>`__,
`dEQP <https://android.googlesource.com/platform/external/deqp>`__,
`Piglit <https://gitlab.freedesktop.org/mesa/piglit>`__,
`Skia <https://skia.googlesource.com/skia>`__,
`VkRunner <https://github.com/Igalia/vkrunner>`__,
`WebGL <https://github.com/KhronosGroup/WebGL>`__,
and a few other tools.
A typical run takes between 30 minutes and an hour.

If you're having issues with the Intel CI, your best bet is to ask about
it on ``#dri-devel`` on Freenode and tag `Clayton Craft
<https://gitlab.freedesktop.org/craftyguy>`__ (``craftyguy`` on IRC) or
`Nico Cortes <https://gitlab.freedesktop.org/ngcortes>`__ (``ngcortes``
on IRC).

.. _CI-farm-expectations:

CI farm expectations
--------------------

To make sure that testing of one vendor's drivers doesn't block
unrelated work by other vendors, we require that a given driver's test
farm produces a spurious failure no more than once a week. If every
driver had CI and each farm failed spuriously once a week, someone's code
would be getting blocked on a spurious failure nearly every day, which is
an unacceptable cost to the project.

Additionally, the test farm needs to be able to provide a short enough
turnaround time that we can get our MRs through marge-bot without the
pipeline backing up. As a result, we require that the test farm be
able to handle a whole pipeline's worth of jobs in less than 15 minutes
(for comparison, the build stage takes about 10 minutes).

If a test farm doesn't have enough hardware to provide these guarantees,
consider dropping tests to reduce runtime.
``VK-GL-CTS/scripts/log/bottleneck_report.py`` can help you find which
tests were slow in a ``results.qpa`` file. Or, you can have a job with
no ``parallel`` field set and:

.. code-block:: yaml

  variables:
    CI_NODE_INDEX: 1
    CI_NODE_TOTAL: 10

to just run 1/10th of the test list.
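
For example, a complete job stanza using this could look like the following
(the job name and the ``.test-job`` template are hypothetical; the real
templates live in ``.gitlab-ci/``):

.. code-block:: yaml

  example-hw-test-1-of-10:
    extends: .test-job          # hypothetical shared test template
    # no "parallel:" key, so only this single 1/10th slice of the list runs
    variables:
      CI_NODE_INDEX: 1
      CI_NODE_TOTAL: 10
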

If a hardware CI farm goes offline (network dies and all CI pipelines end up
stalled) or its runners are consistently failing spuriously (disk
full?), and the maintainer is not immediately available to fix the
issue, please push through an MR disabling that farm's jobs by adding
'.' to the front of the job names until the maintainer can bring
things back up. If this happens, the farm maintainer should provide a
report to mesa-dev@lists.freedesktop.org after the fact explaining
what happened and what the mitigation plan is for that kind of failure
in the future.
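
As a sketch with made-up names, disabling a farm's jobs this way just means
renaming them into hidden jobs, which GitLab CI never schedules but which keep
their definitions in the tree for when the farm comes back:

.. code-block:: yaml

  # was "example-farm-gles2" before the farm went down
  .example-farm-gles2:
    extends: .example-farm-test   # hypothetical shared template for that farm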