Continuous Integration
======================

GitLab CI
---------

GitLab provides a convenient framework for running commands in response to git pushes.
We use it to test merge requests (MRs) before merging them (pre-merge testing),
as well as post-merge testing, for everything that hits ``master``
(this is necessary because we still allow commits to be pushed outside of MRs,
and even then the MR CI runs in the forked repository, which might have been
modified and thus is unreliable).

The CI runs a number of tests, from trivial build-testing to complex GPU rendering:

- Build testing for a number of build systems, configurations and platforms
- Sanity checks (``meson test`` & ``scons check``)
- Some drivers (softpipe, llvmpipe, freedreno and panfrost) are also tested
  using `VK-GL-CTS <https://github.com/KhronosGroup/VK-GL-CTS>`__
- Replay of application traces

A typical run takes between 20 and 30 minutes, although it can take much
longer if the GitLab runners are overwhelmed, which happens sometimes.
When it does, not much can be done besides waiting it out, or cancelling it.

Due to limited resources, we currently do not run the CI automatically
on every push; instead, we only run it automatically once the MR has
been assigned to ``Marge``, our merge bot.

If you're interested in the details, the main configuration file is ``.gitlab-ci.yml``,
and it references a number of other files in ``.gitlab-ci/``.
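
One common way a top-level ``.gitlab-ci.yml`` references other files is
GitLab's ``include`` mechanism; here is an illustrative sketch of that
mechanism (the file names below are placeholders, not Mesa's actual
configuration):

.. code-block:: yaml

  include:
    # Each included file defines a group of related jobs.
    - local: '.gitlab-ci/container-jobs.yml'
    - local: '.gitlab-ci/test-jobs.yml'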

If the GitLab CI doesn't seem to be running on your fork (or MRs, as they run
in the context of your fork), you should check the "Settings" of your fork.
Under "CI / CD" → "General pipelines", make sure "Custom CI config path" is
empty (or set to the default ``.gitlab-ci.yml``), and that the
"Public pipelines" box is checked.

If you're having issues with the GitLab CI, your best bet is to ask
about it on ``#freedesktop`` on Freenode and tag `Daniel Stone
<https://gitlab.freedesktop.org/daniels>`__ (``daniels`` on IRC) or
`Eric Anholt <https://gitlab.freedesktop.org/anholt>`__ (``anholt`` on
IRC).

The three GitLab CI systems currently integrated are:

.. toctree::
   :maxdepth: 1

   bare-metal
   LAVA
   docker

Intel CI
--------

The Intel CI is not yet integrated into the GitLab CI.
For now, special access must be manually given (file an issue in
`the Intel CI configuration repo <https://gitlab.freedesktop.org/Mesa_CI>`__
if you think you or Mesa would benefit from you having access to the
Intel CI).
Results can be seen on `mesa-ci.01.org <https://mesa-ci.01.org>`__
if you are *not* an Intel employee, but if you are you
can access a better interface on
`mesa-ci-results.jf.intel.com <http://mesa-ci-results.jf.intel.com>`__.

The Intel CI runs a much larger array of tests, on a number of generations
of Intel hardware and on multiple platforms (x11, wayland, drm & android),
with the purpose of detecting regressions.
Tests include
`Crucible <https://gitlab.freedesktop.org/mesa/crucible>`__,
`VK-GL-CTS <https://github.com/KhronosGroup/VK-GL-CTS>`__,
`dEQP <https://android.googlesource.com/platform/external/deqp>`__,
`Piglit <https://gitlab.freedesktop.org/mesa/piglit>`__,
`Skia <https://skia.googlesource.com/skia>`__,
`VkRunner <https://github.com/Igalia/vkrunner>`__,
`WebGL <https://github.com/KhronosGroup/WebGL>`__,
and a few other tools.
A typical run takes between 30 minutes and an hour.

If you're having issues with the Intel CI, your best bet is to ask about
it on ``#dri-devel`` on Freenode and tag `Clayton Craft
<https://gitlab.freedesktop.org/craftyguy>`__ (``craftyguy`` on IRC) or
`Nico Cortes <https://gitlab.freedesktop.org/ngcortes>`__ (``ngcortes``
on IRC).

.. _CI-farm-expectations:

CI farm expectations
--------------------

To make sure that testing of one vendor's drivers doesn't block
unrelated work by other vendors, we require that a given driver's test
farm produces a spurious failure no more than once a week. If every
driver had CI and failed once a week, we would be seeing someone's
code getting blocked on a spurious failure daily, which is an
unacceptable cost to the project.

Additionally, the test farm needs to be able to provide a short enough
turnaround time that we can get our MRs through marge-bot without the
pipeline backing up. As a result, we require that the test farm be
able to handle a whole pipeline's worth of jobs in less than 15 minutes
(to compare, the build stage is about 10 minutes).

If a test farm is short on the HW needed to provide these guarantees,
consider dropping tests to reduce runtime.
``VK-GL-CTS/scripts/log/`` can help you find what
tests were slow in a ``results.qpa`` file. Or, you can have a job with
no ``parallel`` field set and:

.. code-block:: yaml

  variables:
    DEQP_FRACTION: 10

to just run 1/10th of the test list.

If a HW CI farm goes offline (network dies and all CI pipelines end up
stalled) or its runners are consistently spuriously failing (disk
full?), and the maintainer is not immediately available to fix the
issue, please push through an MR disabling that farm's jobs by adding
'.' to the front of the job names until the maintainer can bring
things back up. If this happens, the farm maintainer should provide a
report to mesa-dev@lists.freedesktop.org after the fact explaining
what happened and what the mitigation plan is for that failure next
time.
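
Disabling a job in GitLab CI only requires renaming it: any job whose
name starts with a ``.`` is treated as a hidden template and is not
scheduled. A hypothetical example (the job name and script are made up
for illustration):

.. code-block:: yaml

  # Renaming "arm64_a630_gles31" to ".arm64_a630_gles31" disables it:
  # GitLab treats names starting with '.' as hidden jobs and skips them.
  .arm64_a630_gles31:
    stage: test
    script:
      - ./run-deqp.sh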

Personal runners
----------------

Mesa's CI is currently run primarily on packet.net's m1xlarge nodes
(2.2 GHz Sandy Bridge), with each job getting 8 cores allocated. You
can speed up your personal CI builds (and marge-bot merges) by using a
faster personal machine as a runner. You can find the gitlab-runner
package in Debian, or use GitLab's own builds.

To do so, follow `GitLab's instructions
<https://docs.gitlab.com/runner/register/>`__ to
register your personal GitLab runner in your Mesa fork. Then, tell
Mesa how many jobs it should serve (``concurrent=``) and how many
cores those jobs should use (``FDO_CI_CONCURRENT=``) by editing these
lines in ``/etc/gitlab-runner/config.toml``, for example::

  concurrent = 2

  [[runners]]
    environment = ["FDO_CI_CONCURRENT=16"]
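
The registration step itself boils down to a single command; a sketch
assuming the Docker executor, where ``<registration-token>`` is a
placeholder for the token shown under your fork's "Settings" →
"CI / CD" → "Runners":

.. code-block:: shell

  # Register this machine as a runner for your fork.
  sudo gitlab-runner register \
    --non-interactive \
    --url https://gitlab.freedesktop.org/ \
    --registration-token <registration-token> \
    --executor docker \
    --docker-image debian:buster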

Docker caching
--------------

The CI system uses docker images extensively to cache
infrequently-updated build content like the CTS. The `freedesktop.org
CI templates
<https://gitlab.freedesktop.org/freedesktop/ci-templates>`_ help us
manage the building of the images to reduce how frequently rebuilds
happen, and trim down the images (stripping out manpages, cleaning the
apt cache, and other such common pitfalls of building docker images).

When running a container job, the templates will look for an existing
build of that image in the container registry under
``FDO_DISTRIBUTION_TAG``. If it's found it will be reused, and if
not, the associated ``.gitlab-ci/containers/<jobname>.sh`` will be run
to build it. So, when developing any change to container build
scripts, you need to update the associated ``FDO_DISTRIBUTION_TAG`` to
a new unique string. We recommend using the current date plus some
string related to your branch (so that if you rebase on someone else's
container update from the same day, you will get a git conflict
instead of silently reusing their container).
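
For example, a tag bump might look like this (illustrative values only,
following the date-plus-branch convention described above):

.. code-block:: yaml

  variables:
    FDO_DISTRIBUTION_TAG: "2021-03-05-bump-llvm-deps"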

When developing a given change to your docker image, you would have to
bump the tag on each ``git commit --amend`` to your development
branch, which can get tedious. Instead, you can navigate to the
`container registry
<https://gitlab.freedesktop.org/mesa/mesa/container_registry>`_ for
your repository and delete the tag to force a rebuild. When your code
is eventually merged to master, a full image rebuild will occur again
(forks inherit images from the main repo, but MRs don't propagate
images from the fork into the main repo's registry).
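
If you prefer a terminal over the web UI, the same tag deletion can be
done through GitLab's container registry API; a minimal sketch, where
``<token>``, ``<project-id>``, ``<repository-id>`` and ``<tag>`` are
placeholders:

.. code-block:: shell

  # Find the registry repository ID for your fork.
  curl -s --header "PRIVATE-TOKEN: <token>" \
    "https://gitlab.freedesktop.org/api/v4/projects/<project-id>/registry/repositories"

  # Delete the tag; the next pipeline will rebuild the image.
  curl --request DELETE --header "PRIVATE-TOKEN: <token>" \
    "https://gitlab.freedesktop.org/api/v4/projects/<project-id>/registry/repositories/<repository-id>/tags/<tag>"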