# Staging dashboard changes
**Important**

This chapter deals with what to do when you are staging changes to the dashboard. It incorporates elements from the local dashboard workflow and the operational workflows, so it is very important to be familiar with those resources before reading on.

Staging is only appropriate after you have thoroughly confirmed with unit tests that the changes behave as expected.
## Introduction
Any change to a dashboard component that adds a new feature should be staged before release; that is, you should test the changes on a copy of a dashboard and confirm that the results are what you expect AND that nothing adverse happens for dashboards that opt out of new features. This practice is known as integration testing.
This is not an exact science and you should use your best judgement when moving changes into production. The dashboards do have a lot of moving pieces, but they are not infinite and not insurmountable. This chapter should give you a few scenarios you can use to piece together the process for confidently adding new features or fixing critical bugs in the dashboard workflow.
## Generic steps of workflow tools
An important concept to understand is that the control room workflows are just that: workflows. They implement the same basic steps as the local dashboard workflow and push the individual outputs to branches of the repository. In order to do that, they use the following steps:
1. install the tool that's required
2. download the dashboard configuration file
3. download the additional resources needed
4. run the command to generate output
5. push that output to a branch
When thinking about staging the changes, it is important to remember that there is not a one-size-fits-all process for this. You have to think holistically and look at the changes you want to make from the perspective of a hub administrator who just wants a dashboard for their hub that they don’t have to keep fixing or adjusting much. From that perspective, it does not really matter how elegant the tools are as long as the workflows can run them. What matters is that the users have sensible defaults and options they can add or omit from their configuration files.
**Examples**

For example, in building the forecast data, the steps are (sketched in shell form below):

1. install hub-dashboard-predtimechart
2. download `predtimechart-config.yml` and `site-config.yml` from the dashboard repo
3. download the hub (defined in `site-config.yml`)
4. run `ptc_generate_target_json_files` and `ptc_generate_json_files`
5. save the output in the `ptc/data` branch and push it
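A minimal shell sketch of those steps. The repository names in angle brackets are placeholders, and the `ptc_generate_json_files` call signature matches the worktree example later in this chapter:

```bash
pip install hub-dashboard-predtimechart

git clone https://github.com/<org>/<dashboard>.git dashboard   # contains both config files
git clone https://github.com/<org>/<hub>.git hub               # the hub defined in site-config.yml

mkdir -p out/forecasts
# ptc_generate_target_json_files runs first and is invoked similarly;
# its exact arguments are omitted here.
ptc_generate_json_files \
  hub \
  dashboard/predtimechart-config.yml \
  out/predtimechart-options.json \
  out/forecasts
# the workflow then commits out/ to the ptc/data branch and pushes it
```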
Similarly, for building the site, the steps are (again sketched below):

1. launch an interactive shell in a container based on the hub-dash-site-builder Docker image
2. download `site-config.yml` from the dashboard repo
3. download the dashboard repo contents
4. run `render.sh`
5. save the output in the `gh-pages` branch and push it
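And a sketch of the site build. The `/project` mount point matches the workflow snippets later in this chapter, and the `render.sh` flags follow the v1.0.0 interface shown in the remote staging example:

```bash
git clone https://github.com/<org>/<dashboard>.git dashboard

# launch an interactive shell in the site-builder container,
# with the dashboard contents (including site-config.yml) mounted
docker run --rm -it -v "$PWD/dashboard":/project \
  ghcr.io/hubverse-org/hub-dash-site-builder:latest bash

# inside the container:
render.sh -u <org> -r <dashboard>
# the workflow then commits the rendered site to the gh-pages branch and pushes it
```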
## Data flow
When staging dashboard changes, it is helpful to think about how the data flows from the source to the branches in the control room, which is illustrated by the diagram below[^1].

[^1]: This graph is simplified in the following ways: 1. The local dashboard workflows, which call the control-room workflows, are excluded from this diagram because they act as a messenger, passing data downstream. 2. The artifacts and branches are generated from the workflows in parallel; `generate-site.yaml` generates the site and the `gh-pages` artifact and branch, while `generate-data.yaml` generates the rest.
```mermaid
---
config:
  theme: base
  themeVariables:
    primaryBorderColor: '#3c88be'
    primaryColor: '#dbeefb'
---
flowchart TD
    subgraph dashboard
        contents[/"[site contents]"/]
        site-config.yaml[/site-config.yml/]
        predtimechart-config.yaml[/predtimechart-config.yml/]
        predevals-config.yaml[/predevals-config.yml/]
    end
    generate-site.yaml
    generate-data.yaml
    subgraph artifacts
        site[\site\]
        eval-data[\eval-data\]
        forecast-data[\forecast-data\]
    end
    subgraph dashboard-branches
        gh-pages>gh-pages]
        predevals/data>predevals/data]
        ptc/data>ptc/data]
    end
    subgraph tools
        hub-dash-site-builder
        hub-dashboard-predtimechart
        hubPredEvalsData-docker
    end
    dashboard ~~~ tools
    site-config.yaml ~~~ tools
    predevals-config.yaml ==> generate-data.yaml
    predtimechart-config.yaml ==> generate-data.yaml
    contents -.-> generate-site.yaml
    site-config.yaml ==> generate-site.yaml
    site-config.yaml -.-> hub -.-> generate-data.yaml
    hub-dash-site-builder --> generate-site.yaml
    hub-dashboard-predtimechart --> generate-data.yaml
    hubPredEvalsData-docker --> generate-data.yaml
    generate-site.yaml --> artifacts
    generate-data.yaml --> artifacts
    generate-site.yaml ==> push-things.yaml
    generate-data.yaml ==> push-things.yaml
    artifacts -.-> push-things.yaml ==> dashboard-branches
```
I’ve arranged the diagram starting with the configuration files because in order to know what pieces are affected by modification of a given tool, you should start with the config file and follow the arrows.
| configuration file | tool | workflow | artifact | branch |
|---|---|---|---|---|
| `site-config.yml` | hub-dash-site-builder | `generate-site.yaml` | site | `gh-pages` |
| `predevals-config.yml` | hubPredEvalsData-docker | `generate-data.yaml` | eval-data | `predevals/data` |
| `predtimechart-config.yml` | hub-dashboard-predtimechart | `generate-data.yaml` | forecast-data | `ptc/data` |
## When to stage changes
You will need to stage changes when any of the tools produces results whose structure diverges significantly from the output of the latest released versions of the dashboard tools. What counts as a significant divergence can be subjective, but a pretty clear checklist to consult is:

- Is there a new option available from the configuration file?
- Does the tool generate new files?
- Is a previously required option becoming optional?
- Are the arguments for the reusable workflows changing?
- Is a previously optional option becoming required? (breaking)
- Is the structure of the generated files changing? (breaking)

If the answer to any of the above questions is yes, then you need to stage changes before pushing a release.
**Not all changes need to be staged**

You might be able to think of situations that were not mentioned above. For example, a bug fix that can easily be verified with internal tests does not absolutely need to go through full staging before deployment.

Sometimes a fix is urgent enough that you need to deploy without running through staging. It's okay to do this if the change is small enough, but for large changes, you must stage the changes and verify that they work.
## Where to stage changes
Depending on the type of change, you have broadly two options: stage the changes locally or in a remote build. In general, if nothing changes about how the resource is built or provisioned, then you can do local staging.
### Option 1: local staging
This option is ideal for changes that do not affect how the resource is built or provisioned. Local staging involves the same steps as the local workflow, except that you install the development version of the tool you are testing.
**Local staging example**

For example, if you add a new option to `predtimechart-config.yml`, you would follow this process (the first two steps are sketched below):

1. implement the change in a new branch of hub-dashboard-predtimechart and install it locally
2. create a local copy of a dashboard repository and the hub it points to
3. generate the forecast data with the appropriate command
4. inspect the output (check that no files are missing or added)
5. generate the site and preview it (NOTE: if the JavaScript component also changes, you will need to post-edit the JavaScript components)
6. add the new option to `predtimechart-config.yml` and repeat steps 3–5
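A sketch of the first two steps. The dashboard and hub shown here are the ones used in the worktree example later in this chapter; the branch name is a placeholder:

```bash
# 1. install the development version of the tool from your feature branch
pip install "git+https://github.com/hubverse-org/hub-dashboard-predtimechart@<branch-name>"

# 2. create local copies of a dashboard repository and the hub it points to
git clone https://github.com/reichlab/metrocast-dashboard.git
git clone https://github.com/reichlab/flu-metrocast.git
```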
The following sections will cover local staging strategies based on what component you are modifying. Note that you will often have to mix testing strategies.
#### In JavaScript tools
The JavaScript tools PredTimeChart and PredEvals can both be staged locally. For either tool, the steps to perform local staging are as follows (a shell sketch of steps 3–5 follows the list):
1. create a new branch to implement the change
2. implement the change and get a passing review on your pull request
3. download the `gh-pages` branch of any dashboard repository to your local machine
4. edit the first line of `resources/predtimechart.js` or `resources/predevals_interface.js` so that the app pulls from your branch or commit:

   ```diff
   -import App from 'https://cdn.jsdelivr.net/gh/reichlab/predtimechart@v3/dist/predtimechart.bundle.js';
   +import App from 'https://cdn.jsdelivr.net/gh/reichlab/predtimechart@<branch-name>/dist/predtimechart.bundle.js';
   ```

5. in the root of the folder, run `python -m http.server 8080` and open a browser to http://localhost:8080
6. inspect the page and make sure that it behaves as you expect
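Steps 3–5 condensed into shell form. This is a sketch: the repository is a placeholder, and `sed` is just one way to make the one-line edit (note that BSD/macOS `sed` needs `-i ''`):

```bash
git clone --branch gh-pages --single-branch \
  https://github.com/<org>/<dashboard>.git site
cd site

# point the loader at your branch instead of the @v3 version alias
sed -i 's|predtimechart@v3|predtimechart@<branch-name>|' resources/predtimechart.js

python -m http.server 8080   # then open http://localhost:8080
```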
These JavaScript tools can be staged locally because they are loaded when someone visits the site; the site builder does not know anything about the underlying JavaScript.
```mermaid
---
config:
  theme: base
  themeVariables:
    primaryBorderColor: '#3c88be'
    primaryColor: '#dbeefb'
---
flowchart TD
    subgraph site
        forecast.html
        eval.html
        subgraph resources
            predtimechart.js
            predevals_interface.js
        end
    end
    hub-dash-site-builder -..->|render.sh| site
    forecast.html -->|calls| predtimechart.js -->|loads| predtimechart["reichlab/predtimechart@v3"]
    predtimechart.js -->|fetches| ptc/data[(ptc/data)]
    predtimechart.js -->|updates| forecast.html
    eval.html -->|calls| predevals_interface.js -->|loads| predevals["hubverse-org/predevals@v1"]
    predevals_interface.js -->|fetches| predevals/data[(predevals/data)]
    predevals_interface.js -->|updates| eval.html
```
**Purge the cache on release of JavaScript modules**

The JavaScript components we provide to the webpage are loaded via version-aliased URLs, which, at the time of writing, are signified by the `@v3` for PredTimeChart and `@v1` for PredEvals. This means that the sites will always get the latest version up to that major version number. For example, if we release version 1.1.0 of PredEvals, then the users downstream will have that version delivered via the CDN, but if we turn around and release version 2.0.0, they will still get the 1.1.0 version.
When you release a JavaScript module, the jsDelivr CDN will pick it up within 12 hours, but user machines can keep it cached for up to seven days. If you want your changes to show up near-instantly, you can purge jsDelivr's CDN cache by entering two URLs. The way to do this is to paste the URLs in the browser and replace `cdn` with `purge`. Note that you also need to do this for the un-versioned URL:
```
https://purge.jsdelivr.net/gh/reichlab/predtimechart@v3/dist/predtimechart.bundle.js
https://purge.jsdelivr.net/gh/reichlab/predtimechart/dist/predtimechart.bundle.js
https://purge.jsdelivr.net/gh/hubverse-org/predevals@v1/dist/predevals.bundle.js
https://purge.jsdelivr.net/gh/hubverse-org/predevals/dist/predevals.bundle.js
```
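If you prefer the command line, the same purge can be scripted (a sketch; jsDelivr's purge endpoint accepts a plain GET on the `purge.jsdelivr.net` host):

```bash
for url in \
  "gh/reichlab/predtimechart@v3/dist/predtimechart.bundle.js" \
  "gh/reichlab/predtimechart/dist/predtimechart.bundle.js" \
  "gh/hubverse-org/predevals@v1/dist/predevals.bundle.js" \
  "gh/hubverse-org/predevals/dist/predevals.bundle.js"
do
  curl -fsS "https://purge.jsdelivr.net/${url}"
done
```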
#### Accessing pre-built data
The data generation workflows may take into account data that have already been generated in order to save computational time. When you are staging locally, it is a good idea to mimic the state of the control room as best you can.
Unlike the local workflow, the remote workflow stores the data in separate orphan branches that do not share history with the main branch of the repository. You can keep these branches local by using a git worktree. This is a strategy that we use in the site builder tests.
The pattern to add a git worktree is:

```bash
git worktree add --checkout <directory> <branch>
```

```bash
git clone https://github.com/reichlab/metrocast-dashboard.git
cd metrocast-dashboard
mkdir -p data/
git worktree add --checkout data/ptc ptc/data
git worktree add --checkout data/predevals predevals/data
```
From here, you can generate the data and you can use git to see what changed.
```bash
tmp=$(mktemp -d)
git clone https://github.com/reichlab/flu-metrocast.git "${tmp}"
mkdir -p data/ptc/forecasts
ptc_generate_json_files \
  "${tmp}" \
  predtimechart-config.yml \
  data/ptc/predtimechart-options.json \
  data/ptc/forecasts
```
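Because `data/ptc` is a worktree checked out at the `ptc/data` branch, ordinary git commands show what the run changed (a sketch):

```bash
git -C data/ptc status        # new, modified, and deleted files
git -C data/ptc diff --stat   # summary of changes to tracked files
```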
When you are done, you can remove the worktrees with

```bash
rm -rf data/
git worktree prune
```
### Option 2: remote staging
This option is necessary for changes that modify workflows, affect how the resource is built, or affect how the resource is provisioned. Again, the root of this process is the local workflow, but now you also have to effectively make a copy of the dashboard AND a copy of the control room and connect them to the right places.
**Caution**

This process is the most involved and it requires careful planning. If you want to avoid stepping into this process, the solution is to not make changes to the public interface of your application. If you want to let people opt in to or out of something, make it configurable via the configuration file.
```mermaid
---
config:
  theme: base
  themeVariables:
    primaryBorderColor: '#3c88be'
    primaryColor: '#dbeefb'
---
flowchart TD
    subgraph control-room
        cro>"control-room (@main)"]
        crf>"control-room (@test)"]
    end
    dash["dashboard"]
    site>"dashboard@gh-pages"]
    dashf["user/dashboard"]
    sitef>"user/dashboard@gh-pages"]
    art[\"artifact"\]
    artf[\"artifact"\]
    subgraph tool
        toolv>"tool (@v1.1.1)"]
        toolf>"tool (@main)"]
    end
    dash --> cro
    toolv --> cro
    cro --> art --> site
    toolf --> crf
    dashf --> crf
    crf --> artf --> sitef
```

**Remote staging example**
For example, in late March 2025, we wanted to update the site builder to v1.0.0. This changed the interface so that, instead of positional BASH arguments, we used argument flags:

```diff
-bash /render.sh "$OWNER" "$REPO" "ptc/data" "" "$HAS_FORECASTS"
+render.sh -u "$OWNER" -r "$REPO"
```
In order to test this, we needed to do the following:

1. build the image from the main branch
2. create a branch in the control room that would reference this image and modify the commands
3. create a fork of a dashboard that would point to this new branch in the control room
After merging the updated Dockerfile with the new tests and interface into the main branch of the repository, I created the docker image from the main branch by using the Create, Test, and Publish Docker Image workflow. Note that this docker image has the `main` tag[^2], but it is not published.

[^2]: This was before we had fully ironed out the details of the docker publishing workflow, so the image tag is actually `znk-dispatch-fix-34`. I am writing this as if we had the workflow that we have now.
Once I did that, I opened hubverse-org/hub-dashboard-control-room#59 and then modified the workflow to use the new container and the new interface (see hubverse-org/hub-dashboard-control-room@58628f0f4).
I created a fork of the dashboard repositories and changed their `build-site.yaml` workflows to use the branch I was testing[^3]:

```diff
-uses: hubverse-org/hub-dashboard-control-room/.github/workflows/generate-site.yaml@main
+uses: hubverse-org/hub-dashboard-control-room/.github/workflows/generate-site.yaml@znk/use-release-hsdb/58
```

[^3]: In this process, I did not create any forks. Instead, I modified the workflow for the app, ran it without pushing, and inspected the artifacts. This had the same effect, but meant that I could test several repositories at once.
I then ran the workflows from the dashboard forks to confirm that the site was correctly generated.
**Staging without forks (using the GitHub App)**
Not all dashboards are going to behave the same. They can pick and choose the features that we offer. Some dashboards have hubverse-formatted target data while others have more custom formats. It is often a good idea to test multiple dashboards for these changes, not just one. One tool that can help with staging several dashboards at once is the hubDashboard GitHub App, which you can run directly from the control room.
In short, we control a list of known hubs that have the app installed. The workflows `build.yaml`, `rebuild-data.yaml`, and `rebuild-site.yaml` can all be triggered manually to build the website or data for these hubs. You can use these workflows to run dry runs from a specific branch and inspect the artifacts that are generated. These workflows are similar to the workflows you will find in the dashboard repositories because they use the same reusable workflows. The only difference is that there is a job that fetches the repositories that have the app installed.
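For example, a dry run could be dispatched from a staging branch with the GitHub CLI (a sketch; any workflow inputs are omitted because they depend on the workflow definitions):

```bash
# trigger the manual rebuild from a staging branch of the control room
gh workflow run rebuild-site.yaml \
  -R hubverse-org/hub-dashboard-control-room \
  --ref <branch-name>

# list recent runs, then download the artifacts of one for inspection
gh run list -R hubverse-org/hub-dashboard-control-room --workflow rebuild-site.yaml
gh run download <run-id> -R hubverse-org/hub-dashboard-control-room
```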
This works because the control room stores two credentials: the App ID (`${{ vars.APP_ID }}`) and a private key (`${{ secrets.PRIVATE_KEY }}`, similar to your SSH private key). These two items are passed as secrets to the reusable workflows and allow the control room workflows to authenticate as the app, which can generate a temporary PAT for any repository that has it installed.
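To demystify "authenticate as the app": the workflows delegate this to tooling, but the underlying GitHub API flow looks roughly like the sketch below, where `APP_ID`, `key.pem`, and the installation id stand in for the stored variable, secret, and the app installation:

```bash
# base64url-encode stdin (standard JWT encoding)
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

# build a short-lived JWT signed with the app's private key
now=$(date +%s)
header=$(printf '{"alg":"RS256","typ":"JWT"}' | b64url)
payload=$(printf '{"iat":%d,"exp":%d,"iss":"%s"}' \
  "$((now - 60))" "$((now + 540))" "$APP_ID" | b64url)
sig=$(printf '%s.%s' "$header" "$payload" | openssl dgst -sha256 -sign key.pem | b64url)
jwt="${header}.${payload}.${sig}"

# exchange the app JWT for a short-lived installation token (the "temporary PAT")
curl -s -X POST \
  -H "Authorization: Bearer ${jwt}" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/app/installations/<installation-id>/access_tokens"
```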
```mermaid
---
config:
  theme: base
  themeVariables:
    primaryBorderColor: '#3c88be'
    primaryColor: '#dbeefb'
---
flowchart TD
    subgraph dashboard-workflow
        db["dashboard repository"]
        t["key: GITHUB_TOKEN"]
        id["id: 'none'"]
        c["reusable workflow"]
        site
        db --> t --> c
        id --> c
        c -->|token from GITHUB_TOKEN| site
    end
    subgraph app-workflow
        adb["dashboard repository"]
        app{app}
        pk["key: PRIVATE_KEY"]
        aid["id: APP_ID"]
        ac["reusable workflow"]
        asite["site"]
        adb -->|installed| app
        app --> pk --> ac
        app --> aid --> ac -->|token generated from app| asite
    end
```
The App was originally intended to be a way for dashboards to be built without requiring hub admins to worry about an extra workflow. Since we migrated to fully reusable workflows, we have shut down the external webserver that was being used to receive webhooks from the repositories, but access from the control room remains as long as those secrets exist.
The following sections will cover remote staging strategies based on which component you are modifying. Note that you will often have to mix testing strategies.
#### In the control room
There are three situations in which you will find yourself staging changes:

1. updates affecting the control room `generate-*` workflows
2. updates affecting the control room `push-things.yaml` workflow
3. updates affecting the control room scripts
##### Control room `generate-*` workflows
```mermaid
---
config:
  theme: base
  themeVariables:
    primaryBorderColor: '#3c88be'
    primaryColor: '#dbeefb'
---
flowchart LR
    subgraph dashboard
        build-data.yaml
        build-site.yaml
    end
    subgraph control-room
        generate-data.yaml
        generate-site.yaml
    end
    build-data.yaml --> generate-data.yaml
    build-site.yaml --> generate-site.yaml
```
If you are modifying one of the `generate-data.yaml` or `generate-site.yaml` workflows in the control room, then:

1. create a branch in the control room
2. make the changes you need (e.g. pointing to the correct branch of the tool you are modifying, updating the call syntax, or updating a step)
3. (in a fork of a dashboard repository) change the `@main` tag for the reusable workflow to `@<branch-name>`:

   ```diff
   -uses: hubverse-org/[...]/workflows/generate-site.yaml@main
   +uses: hubverse-org/[...]/workflows/generate-site.yaml@<branch-name>
   ```

4. inspect the resulting page and artifacts (see the sketch below for one way to do this from the command line)
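To inspect the results without deploying anything, you can dispatch the fork's workflow and download what it built with the GitHub CLI (a sketch; `site` is the artifact name from the table in the data flow section):

```bash
gh workflow run build-site.yaml -R <user>/<dashboard-fork>
gh run list -R <user>/<dashboard-fork> --workflow build-site.yaml
gh run download <run-id> -R <user>/<dashboard-fork> --name site
```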
##### Control room `push-things.yaml` workflow
```mermaid
---
config:
  theme: base
  themeVariables:
    primaryBorderColor: '#3c88be'
    primaryColor: '#dbeefb'
---
flowchart LR
    subgraph dashboard
        build-data.yaml
        build-site.yaml
    end
    subgraph control-room
        generate-data.yaml
        generate-site.yaml
        artifacts
        push-things.yaml
    end
    build-data.yaml --> generate-data.yaml --> artifacts
    build-site.yaml --> generate-site.yaml --> artifacts
    artifacts --> push-things.yaml
```
The push-things workflow is a reusable workflow that is called by other workflows; it takes the generated artifacts and pushes them to a specific branch. The staging process builds off of the control room generate workflows with one more step, where you need to make sure the branch of the reusable workflow is correct.
1. create a branch in the control room
2. make the changes you need (e.g. pointing to the correct branch of the tool you are modifying, updating the call syntax, or updating a step)
3. (in the control room) in the `generate-*` workflows, change the `push-things.yaml` reference to use `@<branch-name>`:

   ```diff
   -uses: hubverse-org/[...]/workflows/push-things.yaml@main
   +uses: hubverse-org/[...]/workflows/push-things.yaml@<branch-name>
   ```

4. (in a fork of a dashboard repository) change the `@main` tag for the reusable workflow to `@<branch-name>`:

   ```diff
   -uses: hubverse-org/[...]/workflows/generate-site.yaml@main
   +uses: hubverse-org/[...]/workflows/generate-site.yaml@<branch-name>
   ```

5. inspect the resulting page and artifacts
##### Control room scripts
```mermaid
---
config:
  theme: base
  themeVariables:
    primaryBorderColor: '#3c88be'
    primaryColor: '#dbeefb'
---
flowchart LR
    subgraph dashboard
        build-data.yaml
        build-site.yaml
    end
    subgraph control-room
        generate-data.yaml
        generate-site.yaml
        artifacts
        push-things.yaml
        scripts/
    end
    build-data.yaml --> generate-data.yaml --> artifacts
    build-site.yaml --> generate-site.yaml --> artifacts
    artifacts --> push-things.yaml
    scripts/ --> push-things.yaml
```
If a script changes, the staging process builds off of the control room `push-things.yaml` workflow:
1. create a branch in the control room
2. make the changes you need (e.g. pointing to the correct branch of the tool you are modifying, updating the call syntax, or updating a step)
3. (in the control room) in the `generate-*` workflows, change the `push-things.yaml` reference to use `@<branch-name>`:

   ```diff
   -uses: hubverse-org/[...]/workflows/push-things.yaml@main
   +uses: hubverse-org/[...]/workflows/push-things.yaml@<branch-name>
   ```

4. (in the control room) in the `push-things.yaml` workflow, modify the `ref` key of the `checkout-this-here-repo-scripts` step so that it points to the new branch:

   ```yaml
   steps:
     - id: checkout-this-here-repo-scripts
       uses: actions/checkout@v4
       with:
         repository: hubverse-org/hub-dashboard-control-room
         ref: <branch-name>
         persist-credentials: false
         sparse-checkout: |
           scripts
   ```

5. (in a fork of a dashboard repository) change the `@main` tag for the reusable workflow to `@<branch-name>`:

   ```diff
   -uses: hubverse-org/[...]/workflows/generate-site.yaml@main
   +uses: hubverse-org/[...]/workflows/generate-site.yaml@<branch-name>
   ```

6. inspect the resulting page and artifacts
#### hub-dashboard-predtimechart
To stage changes to hub-dashboard-predtimechart from the control room:

1. implement the change in a new branch of hub-dashboard-predtimechart
2. create a new branch in the control room and modify `generate-data.yaml` so that it points to your branch instead of `$latest`:

   ```diff
   - pip install "git+https://github.com/hubverse-org/hub-dashboard-predtimechart@$latest"
   + pip install "git+https://github.com/hubverse-org/hub-dashboard-predtimechart@<branch-name>"
   ```

3. fork a dashboard repository and point its workflows to the new control room branch
4. generate the data
5. generate the site and preview it (NOTE: if the JavaScript component also changes, you will need to preview the site locally)
6. add the new option and repeat steps 4 and 5
If you do not need to change any options in the control room, then you can delete the control room branch and the dashboard fork. However, if arguments change, then there will be a period of time when the workflows will not work, because you need to release the update AND update the control room right after. To ensure things go smoothly, use the following steps:
1. plan a time for the release and optionally announce it
2. release the new version of hub-dashboard-predtimechart
3. reset the control room's branch reference for hub-dashboard-predtimechart back to `$latest` (yes, the dollar sign is important for the workflow)
4. reset any references to the reusable workflows back to `@main`
5. merge the control room branch to main and it will be live
#### hubPredEvalsData-docker
To stage changes to hubPredEvalsData-docker from the control room:
1. implement the change in hubPredEvalsData-docker and push it to the `main` branch after testing
2. publish a new image from the main branch (note that only tags will create an official release, so this is safe to do)
3. create a new branch in the control room and modify `generate-data.yaml` so that it points to the `main` tag instead of `latest`:

   ```diff
    runs-on: ubuntu-latest
    container:
   -  image: ghcr.io/hubverse-org/hubpredevalsdata-docker:latest
   +  image: ghcr.io/hubverse-org/hubpredevalsdata-docker:main
      ports:
        - 80
      volumes:
        - ${{ github.workspace }}:/project
   ```

4. fork a dashboard repository and point its workflows to the new control room branch
5. generate the data
6. generate the site and preview it (NOTE: if the JavaScript component also changes, you will need to preview the site locally)
7. add the new option and repeat steps 5 and 6
If you do not need to change any options in the control room, then you can delete the control room branch and the dashboard fork. However, if arguments change, then there will be a period of time when the workflows will not work, because you need to release the update AND update the control room right after. To ensure things go smoothly, use the following steps:
1. plan a time for the release and optionally announce it
2. release the new version of hubPredEvalsData-docker
3. reset the control room's image reference for hubPredEvalsData-docker back to `:latest`
4. reset any references to the reusable workflows back to `@main`
5. merge the control room branch to main and it will be live
#### hub-dash-site-builder
To stage changes to hub-dash-site-builder from the control room:

1. implement the change in hub-dash-site-builder and push it to the `main` branch after testing
2. publish a new image from the main branch (note that only tags will create an official release, so this is safe to do)
3. create a new branch in the control room and modify `generate-site.yaml` so that it points to the `main` tag instead of `latest` (a local smoke test of this image is sketched after this list):

   ```diff
    runs-on: ubuntu-latest
    container:
   -  image: ghcr.io/hubverse-org/hub-dash-site-builder:latest
   +  image: ghcr.io/hubverse-org/hub-dash-site-builder:main
      ports:
        - 80
      volumes:
        - ${{ github.workspace }}:/project
   ```

4. fork a dashboard repository and point its workflows to the new control room branch
5. (optional) generate the data
6. generate the site and preview it (NOTE: if the JavaScript component also changes, you will need to preview the site locally)
7. add the new option and repeat step 6
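Before wiring the `main` tag into the control room, you can smoke-test the freshly published image locally (a sketch; the `/project` mount point matches the workflow snippet above, and the `render.sh` flags follow the v1.0.0 interface):

```bash
docker pull ghcr.io/hubverse-org/hub-dash-site-builder:main
docker run --rm -v "$PWD":/project \
  ghcr.io/hubverse-org/hub-dash-site-builder:main \
  render.sh -u <org> -r <dashboard>
```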
If you do not need to change any options in the control room, then you can delete the control room branch and the dashboard fork. However, if arguments change, then there will be a period of time when the workflows will not work, because you need to release the update AND update the control room right after. To ensure things go smoothly, use the following steps:
1. plan a time for the release and optionally announce it
2. release the new version of hub-dash-site-builder
3. reset the control room's image reference for hub-dash-site-builder back to `:latest`
4. reset any references to the reusable workflows back to `@main`
5. merge the control room branch to main and it will be live