# CI
At first, you’ll want to write your tests locally, and run them against as many local browsers as possible. However, to really exercise your features, you’ll want to:

- run them against as many real browsers on other operating systems as possible
- have easy access to human- and machine-readable test results and build assets
- integrate with development tools like GitHub

Enter Continuous Integration (CI).
## Cloud: Multi-Provider
Historically, Jupyter projects have used a mix of free-as-in-beer-for-open-source hosted services:

- Appveyor for Windows
- Circle-CI for Linux
- Travis-CI for Linux and MacOS
Each brings their own syntax, features, and constraints to building and maintaining robust CI workflows.
JupyterLibrary started on Travis-CI, but as soon as we wanted to support more platforms and browsers…
## Cloud: Azure Pipelines
At the risk of putting all your eggs in one (proprietary) basket, Azure Pipelines provides a single-file approach to automating all of your tests against reasonably modern versions of browsers.
JupyterLibrary is itself built on Azure, and looking at the pipeline and its various jobs and steps can provide the best patterns we have found.
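To make the single-file approach concrete, here is a minimal sketch of what a browser/OS matrix can look like in an `azure-pipelines.yml`. The job name, variable names, and the exact install/test commands are illustrative assumptions, not copied from JupyterLibrary’s actual pipeline:

```yaml
# azure-pipelines.yml (illustrative sketch, not JupyterLibrary's real pipeline)
jobs:
  - job: acceptance_tests
    strategy:
      matrix:
        linux_firefox:
          vmImage: ubuntu-latest
          browser: headlessfirefox
        windows_chrome:
          vmImage: windows-latest
          browser: headlesschrome
    pool:
      vmImage: $(vmImage)
    steps:
      # hypothetical commands: install the package, then run Robot Framework
      # against the acceptance tests with the matrix-selected browser
      - script: python -m pip install -e .
        displayName: install
      - script: python -m robot --variable BROWSER:$(browser) atest
        displayName: run acceptance tests
```

The `strategy.matrix` block is what lets one file fan out across operating systems and browsers, replacing the per-provider configuration files of the multi-provider approach.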
## On-Premises: Jenkins
If you are working on in-house projects, and/or have the ability to support it, Jenkins is the gold standard for self-hosted continuous integration. It has almost limitless configurability, and commercial support is available.
## Approach: It’s Just Scripts
No matter how shiny or magical your continuous integration tools appear, the long-term well-being of your repo depends on techniques that are:

- simple
- cross-platform
- frequently run outside of your CI
Since this is Jupyter, this boils down to putting as much as possible into platform-independent Python (and, when necessary, Node.js) code.
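A minimal sketch of this pattern, assuming a hypothetical `ci.py` task script (the task names and commands are illustrative, not JupyterLibrary’s actual scripts): each task is a plain Python function that shells out via `subprocess`, so the same entry point works on Linux, MacOS, and Windows, in or out of CI.

```python
"""ci.py: a hypothetical cross-platform task runner, invoked as `python ci.py test`."""
import subprocess
import sys


def run(*args):
    """Echo a command, run it, and fail fast (non-zero exit raises)."""
    print(">>>", " ".join(args))
    return subprocess.check_call(list(args))


def lint():
    # illustrative: check formatting with whatever linter the project uses
    run(sys.executable, "-m", "black", "--check", "src")


def test():
    # illustrative: run the test suite with the same interpreter
    run(sys.executable, "-m", "pytest")


TASKS = {"lint": lint, "test": test}

if __name__ == "__main__" and len(sys.argv) > 1:
    # dispatch on the first CLI argument; unknown tasks are ignored here
    TASKS.get(sys.argv[1], lambda: None)()
```

Because every task goes through the same interpreter (`sys.executable`), the script behaves identically whether a developer runs it locally or a CI provider runs it in a job step.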
JupyterLibrary uses a small collection of scripts, not shipped as part of the distribution, which handle the pipeline. In addition, this library uses anaconda-project to manage multiple environment versions, and to combine multiple script invocations with different parameters into small, easy-to-remember (and complete) commands. Unfortunately, some of these approaches don’t quite work in Azure Pipelines, so some duplication of commands and dependencies is present.
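As a rough illustration of the anaconda-project approach, the sketch below shows how named commands can be bound to environment specs in an `anaconda-project.yml`. The command names, environment names, and package pins are hypothetical, not taken from JupyterLibrary’s actual project file:

```yaml
# anaconda-project.yml (illustrative; names and versions are hypothetical)
commands:
  atest:
    # per-platform invocations of the same task script
    unix: python -m scripts.atest
    windows: python -m scripts.atest
    env_spec: qa
env_specs:
  qa:
    channels:
      - conda-forge
    packages:
      - python >=3.8
      - robotframework
```

With this in place, `anaconda-project run atest` resolves the right environment and runs the right script on any platform with one short, memorable command.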