This repository was archived by the owner on Jul 25, 2024. It is now read-only.
20 changes: 20 additions & 0 deletions docs/Makefile
@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
Binary file added docs/assets/docs1.png
Binary file added docs/assets/docs2.png
Binary file added docs/assets/project_view.png
1 change: 1 addition & 0 deletions docs/assets/squad_sign.svg
76 changes: 76 additions & 0 deletions docs/conf.py
@@ -0,0 +1,76 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
import sphinx_rtd_theme
sys.path.insert(0, os.path.abspath('.'))
from squad.version import __version__
import recommonmark
from recommonmark.transform import AutoStructify

# -- Project information -----------------------------------------------------

project = 'SQUAD'
copyright = '2016-2020, Linaro Limited'
author = 'Linaro'
version = __version__
release = __version__
# -- General configuration ---------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['sphinx.ext.autodoc',
              'sphinx.ext.coverage',
              'sphinx.ext.viewcode',
              'sphinx_rtd_theme',
              'recommonmark']

source_suffix = ['.txt', '.md']

master_doc = 'index'
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

language = None
# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

pygments_style = 'sphinx'


def setup(app):
    app.add_config_value('recommonmark_config', {
        # 'url_resolver': lambda url: github_doc_root + url,
        'auto_toc_tree_section': 'Contents',
        'enable_auto_toc_tree': True,
        'enable_math': False,
        'enable_inline_math': False,
        'enable_eval_rst': True,
    }, True)
    app.add_transform(AutoStructify)
283 changes: 283 additions & 0 deletions docs/guide/model.md
@@ -0,0 +1,283 @@
## Introduction


### "Core" Data Model

```
+----+ *  +-------+ *  +-----+ *  +-------+ *  +----+ *   1 +-----+
|Team|--->|Project|--->|Build|--->|TestRun|--->|Test|------>|Suite|
+----+    +---+---+    +-----+    +-------+    +----+       +-----+
              ^ *         ^           | *                     ^ 1
              |           |           |   * +------+ *        |
          +---+--------+  |           +---->|Metric|----------+
          |Subscription|  |           |     +------+
          +------------+  |           v 1
       +-------------+    |     +-----------+
       |ProjectStatus|----+     |Environment|
       +-------------+          +-----------+
```



Given a Linux kernel validation team, a **Group** representing this team can have
multiple projects, each focused on testing a different kernel tree (e.g. `mainline` and `stable`).
Within each **Project** there will be multiple **Builds**, each comprising multiple test runs.
Each **TestRun** can include multiple **Test** results (which are pass/fail results)
and metrics containing one or more measurement values. **Test** and **Metric** results
can belong to a **Suite**, which is basically used to group and analyze results together.
Every test run must be associated with exactly one **Environment**, which describes the
holistic environment in which the tests were executed: hardware platform, hardware configuration,
OS, build settings (e.g. regular vs. optimized compilers), and so on.
Results are always organized by environment, so we can compare apples to apples.

Projects can have **Subscriptions**, which are either users or manually entered
email addresses that should be notified about important events, such as changes
in test results. **ProjectStatus** records the most recent build of a project, against
which future results are compared in search of important events to
notify subscribers about. SQUAD also supports a metric threshold system, which
sends notifications to project subscribers if a test result metric exceeds
a certain value. The threshold values also appear in the charts. Projects
have a `project_settings` field for any specific configuration they might require
(e.g. enabling plugins).
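The threshold check described above can be sketched as follows. This is an illustrative helper only; the function and parameter names are invented and SQUAD's real notification code differs:

```python
# Illustrative sketch of a metric-threshold check. The function and
# parameter names are invented; SQUAD's real notification code differs.
def exceeded_thresholds(metrics, thresholds):
    """Return the metrics whose measured value exceeds their threshold.

    metrics:    mapping of metric name -> measured value
    thresholds: mapping of metric name -> maximum allowed value
    """
    return {
        name: value
        for name, value in metrics.items()
        if name in thresholds and value > thresholds[name]
    }
```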

Builds can be compared against baselines, exposing regressions and fixes. Visit
`/_/comparebuilds/`, select a project, then two builds from it. The comparison
goes over each build's tests and finds all states that differ between the two.
For instance, if a test failed in the baseline but passes in a more recent build,
this is considered a `"fix"`; if it starts failing compared to the baseline, it is
a `"regression"`. The concept can vary for other transitions.
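The fix/regression logic can be sketched roughly like this. It is a simplified illustration with an invented function name; the real comparison also accounts for suites and environments:

```python
# Simplified sketch of the fix/regression comparison described above.
# Invented helper; the real comparison also considers suites and environments.
def compare_builds(baseline, current):
    """Both arguments map test name -> "pass" or "fail"."""
    regressions = [name for name, result in current.items()
                   if result == "fail" and baseline.get(name) == "pass"]
    fixes = [name for name, result in current.items()
             if result == "pass" and baseline.get(name) == "fail"]
    return regressions, fixes
```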


A typical project in SQUAD would look like the snapshot below, where `job_url` lists the jobs
on various backends, along with other data about the build: commit, git describe, etc.

Each row below "Latest Builds" is a tiny dashboard showing the build version, summaries of
all test runs (complete and incomplete), followed by a summary of the overall test totals, broken down by result.


![Project View In SQUAD](../assets/project_view.png "Project View In SQUAD")


```eval_rst
.. note::
   An ``xfail`` test result indicates a **KnownIssue**, which could be a flaky or unstable test.
```


### Getting Started

#### Submitting results

The API is the following:

**POST** /api/submit/:group/:project/:build/:environment

- `:group` is the group identifier. It must exist previously.
- `:project` is the project identifier. It must exist previously.
- `:build` is the build identifier. It can be a git commit hash, an
  Android manifest hash, or anything really. Extra information on the
  build can be submitted as an attachment. If a build timestamp is not
  provided, the time of submission is assumed.
- `:environment` is the environment identifier. It will be created
  automatically if it does not already exist.

All of the above identifiers (`:group`, `:project`, `:build`, and
`:environment`) must match the regular expression
`[a-zA-Z0-9][a-zA-Z0-9_.-]*`.
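A client can validate its identifiers against that pattern before submitting; a minimal sketch (not part of SQUAD itself):

```python
import re

# Pattern quoted from the API documentation above.
IDENTIFIER_PATTERN = r'[a-zA-Z0-9][a-zA-Z0-9_.-]*'

def is_valid_identifier(name):
    # The whole string must match; re.fullmatch anchors both ends.
    return re.fullmatch(IDENTIFIER_PATTERN, name) is not None
```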

The test data files must be submitted either as file attachments or as
regular `POST` parameters. The following files are supported:

- `tests`: test results data
- `metrics`: metrics data
- `metadata`: metadata about the test run
- `attachment`: arbitrary file attachments. Multiple attachments can
  be submitted by providing this parameter multiple times.

See [`Input file formats`](#input-file-formats) below for details on
the format of the data files.

Example with test data as file uploads:

```shell
$ curl \
    --header "Auth-Token: $SQUAD_TOKEN" \
    --form tests=@/path/to/test-results.json \
    --form metrics=@/path/to/metrics.json \
    --form metadata=@/path/to/metadata.json \
    --form log=@/path/to/log.txt \
    --form attachment=@/path/to/screenshot.png \
    --form attachment=@/path/to/extra-info.txt \
    https://squad.example.com/api/submit/my-group/my-project/x.y.z/my-ci-env
```

Example with test data as regular `POST` parameters:

```shell
$ curl \
    --header "Auth-Token: $SQUAD_TOKEN" \
    --form tests='{"test1": "pass", "test2": "fail"}' \
    --form metrics='{"metric1": 21, "metric2": 4}' \
    --form metadata='{"foo": "bar", "baz": "qux", "job_id": 123}' \
    --form log='log text ...' \
    --form attachment=@/path/to/screenshot.png \
    --form attachment=@/path/to/extra-info.txt \
    https://squad.example.com/api/submit/my-group/my-project/x.y.z/my-ci-env
```

Example with test data using Python's requests library:

```python
import json
import requests
import os

tests = json.dumps({"test1": "pass", "test2": "fail"})
metrics = json.dumps({"metric1": 21, "metric2": 4})
metadata = json.dumps({"foo": "bar", "baz": "qux", "job_id": 123})
log = "log text ..."

headers = {"Auth-Token": os.getenv('SQUAD_TOKEN')}
url = 'https://squad.example.com/api/submit/my-group/my-project/x.y.z/my-ci-env'
data = {"tests": tests, "metrics": metrics, "metadata": metadata, "log": log}

result = requests.post(url, headers=headers, data=data)
if not result.ok:
    print(f"Error submitting to SQUAD: {result.reason}: {result.text}")
```

Since test results should always come from automation systems, the API
is the only way to submit results into the system. Even manual testing
should be automated with a driver program that asks for user input and
then, at the end, prepares all the data in a consistent way and submits
it to the dashboard.

### Input file formats

#### Test results

Test results must be posted as JSON, encoded in UTF-8. The JSON data
must be a hash (an object, strictly speaking). Test names go in the
keys, and values must be either `"pass"` or `"fail"`. Case does not
matter, so `"PASS"`/`"FAIL"` will work just fine. Any value that,
when lowercased, is not `"pass"` or `"fail"` will be mapped to
`None`/`NULL` and displayed in the UI as *skip*.
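That normalization might be sketched like this (illustrative only; SQUAD's own parsing code may differ and the function name is invented):

```python
# Illustrative normalization of a submitted test result value;
# SQUAD's own parsing code may differ.
def normalize_result(value):
    lowered = str(value).lower()
    # Anything other than "pass"/"fail" becomes None, shown as "skip".
    return lowered if lowered in ("pass", "fail") else None
```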

Tests that have `"fail"` as their result and are known to have issues
are displayed as *xfail* (eXpected-fail).

Tests can be grouped in test suites. For that, the test name must be
prefixed with the suite name and a slash (`/`). Therefore, slashes are
reserved characters in this context and cannot be used in test names.
There is one exception to this rule: if a test name contains square
brackets (`[`, `]`), the string inside them is considered a test variant
and can contain slashes. Suite names can have embedded slashes in
them, so "foo/bar" means suite "foo", test "bar", and "foo/bar/baz" means
suite "foo/bar", test "baz".
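A sketch of how such a name could be split into suite and test (a hypothetical helper, not SQUAD's actual code):

```python
import re

# Hypothetical helper, not SQUAD's actual code: split a full test name
# into (suite, test). The last slash separates suite from test, except
# that slashes inside trailing square brackets belong to the variant.
def split_test_name(full_name):
    variant = ""
    match = re.search(r"\[.*\]$", full_name)
    if match:
        variant = match.group(0)
        full_name = full_name[:match.start()]
    if "/" in full_name:
        suite, test = full_name.rsplit("/", 1)
    else:
        suite, test = None, full_name
    return suite, test + variant
```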

Example:

```json
{
  "test1": "pass",
  "test2": "pass",
  "testsuite1/test1": "pass",
  "testsuite1/test2": "fail",
  "testsuite2/subgroup1/testA": "pass",
  "testsuite2/subgroup2/testA": "pass",
  "testsuite2/subgroup2/testA[variant/one]": "pass",
  "testsuite2/subgroup2/testA[variant/two]": "pass"
}
```
There is an alternative format for submitting results. Since SQUAD supports
storing a test log in the Test object, the submitted JSON file can look as follows:

```json
{
  "test1": {"result": "pass", "log": "test 1 log"},
  "test2": {"result": "pass", "log": "test 2 log"},
  "testsuite1/test1": {"result": "pass", "log": "test 1 log"},
  "testsuite1/test2": {"result": "fail", "log": "test 2 log"}
}
```

Both forms are supported. If the log entry is missing, or the simple JSON
format is used, the log for each Test object is left empty. It can be filled
in later by plugins.
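Both formats can be handled uniformly; a minimal sketch (hypothetical helper, not SQUAD's code):

```python
# Hypothetical helper, not SQUAD's code: accept both the simple and the
# extended test-result formats and return a uniform (result, log) pair.
def parse_test_entry(value):
    if isinstance(value, dict):
        return value.get("result"), value.get("log", "")
    return value, ""
```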

#### Metrics


Metrics must be posted as JSON, encoded in UTF-8. The JSON data must be
a hash (an object, strictly speaking). Metric names go in the keys, and
values must be either a single number or an array of numbers. In the
case of an array of numbers, their mean will be used as the metric
result; the whole set of values will be used where applicable, e.g. to
display ranges.
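For example, reducing an array value to its stored result might look like this (a sketch only; the actual storage code differs):

```python
from statistics import mean

# Sketch only; the actual storage code differs. A single number is kept
# as-is, while an array is reduced to its mean for the metric result.
def metric_result(value):
    if isinstance(value, (list, tuple)):
        return mean(value)
    return value
```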

As with test results, metrics can be grouped in suites. For that, the
metric name must be prefixed with the suite name and a slash (`/`).
Therefore, slashes are reserved characters in this context and cannot
be used in metric names. Suite names can have embedded slashes in them, so
"foo/bar" means suite "foo", metric "bar", and "foo/bar/baz" means suite
"foo/bar", metric "baz".

Example:

```json
{
  "v1": 1,
  "v2": 2.5,
  "group1/v1": [1.2, 2.1, 3.03],
  "group1/subgroup/v1": [1, 2, 3, 2, 3, 1]
}
```

#### Metadata

Metadata about the test run must be posted in JSON, encoded in UTF-8.
The JSON data must be a hash (an object). Keys and values must be
strings. The following fields are recognized:

- `build_url`: URL pointing to the origin of the build used in the
  test run
- `datetime`: timestamp of the test run, as an ISO-8601 date
  representation, with seconds. This is the representation that `date
  --iso-8601=seconds` gives you.
- `job_id`: identifier for the test run. Must be unique for the
  project. **This field is mandatory.**
- `job_status`: string identifying the status of the test run. SQUAD
  makes no judgement about its value.
- `job_url`: URL pointing to the original test run.
- `resubmit_url`: URL that can be used to resubmit the test run.
- `suite_versions`: a dictionary with version number strings for suite names
  used in the tests and metrics data. For example, if you have test suites
  called "foo" and "bar", their versions can be expressed with metadata that
  looks like this:

```json
{
  # ...
  "suite_versions": {
    "foo": "1.0",
    "bar": "3.1"
  }
}
```
If a metadata JSON file is not submitted, the above fields can be
submitted as POST parameters. If a metadata JSON file is submitted,
POST parameters will not be considered as metadata.

When sending a proper metadata JSON file, other fields may also be
submitted. They will be stored, but will not be handled in any specific
way.
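Putting the metadata fields together, a client might build the JSON like this (illustrative values only; `job_id` is the one mandatory field, and the URLs here are made up):

```python
import json
from datetime import datetime, timezone

# Illustrative metadata payload; only job_id is mandatory. The URLs and
# version numbers here are made-up example values.
metadata = {
    "job_id": "123",
    "job_url": "https://ci.example.com/jobs/123",
    "build_url": "https://ci.example.com/builds/x.y.z",
    "datetime": datetime.now(timezone.utc).isoformat(timespec="seconds"),
    "suite_versions": {"foo": "1.0", "bar": "3.1"},
}
payload = json.dumps(metadata)
```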


### CI loop integration (optional)

SQUAD can integrate with existing automation systems to participate in a
Continuous Integration (CI) loop through its CI subsystem. For more details
check :ref:`ci_ref_label`.


#### Default auth group 'squad'

SQUAD creates by default an auth group with most of the permissions required
for authenticated/registered users to view, add, change and delete objects
in the projects they have access to. The name of the group is 'squad' by default.
All newly created users therefrom are automatically added to this group to eleviate
the need for manual intervention to add a user each time one is created.