
Adding benchmarking and scaling sections and polishing content #24

Merged · 17 commits · Jul 7, 2024
2 changes: 1 addition & 1 deletion docs/_config.yml
@@ -1,7 +1,7 @@
# Book settings
# Learn more at https://jupyterbook.org/customize/config.html

-title: GitHub Actions for Scientific Workflows (SciPy 2024)
+title: GitHub Actions for Scientific Data Workflows (SciPy 2024)
author: Valentina Staneva, George (Quinn) Brencher, Scott Henderson
logo: logo.png

6 changes: 4 additions & 2 deletions docs/_toc.yml
@@ -10,5 +10,7 @@ chapters:
- file: caching
- file: exporting-results
- file: visualizing-results-webpage
-- file: ../glacier_image_correlation/README
-  title: Batch Computing
+- file: batch-computing
+  title: Scaling Workflows
+- file: model_benchmarking
+  title: Collaborative Model Versioning and Benchmarking
43 changes: 43 additions & 0 deletions docs/batch-computing.md
@@ -0,0 +1,43 @@
# Scaling Workflows

We demonstrate how GitHub Actions can be used to scale computationally expensive workflows through a use case that measures glacier surface velocity from satellite
imagery. In particular, we cover:
* how to perform batch computing by running many workflows in parallel
* how to build complex pipelines by calling workflows from another workflow
* how to specify parameters when running a workflow
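The fan-out pattern behind these points can be sketched with a pair of workflow files. This is a minimal, hypothetical example (the file names, inputs, and matrix values are illustrative, not the exact workflows in this repository): a caller workflow runs a matrix of parallel jobs, each of which invokes a reusable workflow via `workflow_call`:

```yaml
# Hypothetical caller workflow (e.g. .github/workflows/batch.yml)
name: batch
on:
  workflow_dispatch:
    inputs:
      cloud_cover:
        description: maximum cloud cover percent
        default: "30"

jobs:
  process:
    strategy:
      matrix:
        # in practice this list is generated by an earlier job
        pair: [pair_1, pair_2, pair_3]
    uses: ./.github/workflows/process_pair.yml
    with:
      pair_id: ${{ matrix.pair }}

# Hypothetical reusable workflow (.github/workflows/process_pair.yml):
#
# name: process_pair
# on:
#   workflow_call:
#     inputs:
#       pair_id:
#         required: true
#         type: string
```

Each matrix entry becomes its own job, so the pairs are processed in parallel, subject to GitHub's concurrency limits.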



# Measuring Glacier Surface Velocity
#### Quinn Brencher, University of Washington

This set of GitHub Actions workflows allows you to measure horizontal glacier surface velocity from Sentinel-2 image pairs using [autoRIFT software](https://github.com/nasa-jpl/autoRIFT). No external accounts or API keys are required. These workflows were created for the GitHub Actions for Scientific Data Workflows workshop at the 2024 SciPy conference.

## Usage
We use three workflows to batch process image pairs for glacier surface velocity. For demonstration purposes the workflows are only set up to work over the [Yazghil Glacier](https://earth.google.com/earth/d/1myewNJrDEM0tW1_xdpWCYaRCGDcOBwiy?usp=drive_link) in Pakistan. To run the workflows, simply fork this repository, visit the "Actions" tab, and choose the `batch_image_correlation` workflow (which runs the other two workflows as well).

![plot](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/glacier_image_correlation/images/workflow_diagram.png)

### 1. `image_correlation_pair`
This workflow calls a Python script (`image_correlation.py`) that runs autoRIFT on a pair of spatially overlapping [Sentinel-2 L2A](https://docs.sentinel-hub.com/api/latest/data/sentinel-2-l2a/) images. It requires the [product names](https://sentiwiki.copernicus.eu/web/s2-products) of the two images. The images are downloaded from AWS using the [Element 84 Earth Search API](https://element84.com/earth-search/). Only the near-infrared band (NIR, B08) is used, which has a spatial resolution of 10 m. autoRIFT is used to perform image correlation. Search distances are scaled with the temporal baseline assuming a maximum surface velocity of 1000 m/yr, so images acquired farther apart in time take longer to process. Surface velocity maps are saved as GeoTIFFs and uploaded as [GitHub Artifacts](https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts).
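The effect of the temporal baseline on processing time can be illustrated with a back-of-the-envelope calculation (a sketch under the stated assumptions; the function name is hypothetical and the actual autoRIFT parameters in `image_correlation.py` may differ):

```python
def search_distance_pixels(days_apart, max_velocity_m_per_yr=1000, pixel_size_m=10):
    """Maximum expected displacement between two acquisitions, in pixels.

    A surface moving at up to max_velocity_m_per_yr can shift by
    max_velocity * (days / 365.25) meters between the two images,
    so the correlation search window must grow with the baseline.
    """
    displacement_m = max_velocity_m_per_yr * days_apart / 365.25
    return displacement_m / pixel_size_m

# Images a year apart need a search window of roughly 100 pixels at 10 m
# resolution, which is why longer baselines take longer to process.
```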

![plot](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/glacier_image_correlation/images/input_images.png)

### 2. `batch_image_correlation`
This workflow can be used to create surface velocity maps from many pairs of Sentinel-2 images. Required inputs include maximum cloud cover percent, start month (recommend >=5 to minimize snow cover), end month (recommend <=10 to minimize snow cover), and number of pairs per image, e.g.:
- 1 pair per image: (img<sub>i</sub>, img<sub>i+1</sub>), (img<sub>i+1</sub>, img<sub>i+2</sub>), (img<sub>i+2</sub>, img<sub>i+3</sub>), ...
- 2 pairs per image: (img<sub>i</sub>, img<sub>i+1</sub>), (img<sub>i</sub>, img<sub>i+2</sub>), (img<sub>i+1</sub>, img<sub>i+2</sub>), ...
- 3 pairs per image: (img<sub>i</sub>, img<sub>i+1</sub>), (img<sub>i</sub>, img<sub>i+2</sub>), (img<sub>i</sub>, img<sub>i+3</sub>), ...

Only the first suitable image is selected for each month. Once image pairs are identified, a matrix job is set up to run `image_correlation_pair` for each pair. Finally, `summary_statistics` is run.
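The pairing scheme above can be sketched in a few lines of Python (a hypothetical helper, not the workflow's actual code):

```python
def make_pairs(image_ids, pairs_per_image):
    """Enumerate pairs (img_i, img_{i+k}) for k = 1..pairs_per_image.

    Mirrors the pairing scheme described above: each image is paired
    with up to pairs_per_image of the images that follow it.
    """
    pairs = []
    for i in range(len(image_ids)):
        for k in range(1, pairs_per_image + 1):
            if i + k < len(image_ids):
                pairs.append((image_ids[i], image_ids[i + k]))
    return pairs

make_pairs(["a", "b", "c"], 1)  # [("a", "b"), ("b", "c")]
make_pairs(["a", "b", "c"], 2)  # [("a", "b"), ("a", "c"), ("b", "c")]
```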

### 3. `summary_statistics`
This workflow downloads all of the velocity maps created during a `batch_image_correlation` run and uses them to calculate and plot the median velocity, standard deviation of velocity, and valid pixel count across all velocity maps. The summary statistics plot is uploaded as a GitHub Artifact.
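The per-pixel reduction this performs can be sketched with NumPy (a sketch of the idea; the real script's details and no-data handling may differ):

```python
import numpy as np

def summarize(velocity_maps):
    """Per-pixel summary across a stack of velocity maps (NaN = no data).

    Stacks the maps along a new axis and reduces across it, ignoring
    NaNs, to get median, standard deviation, and valid pixel count.
    """
    stack = np.stack(velocity_maps)  # shape: (n_maps, rows, cols)
    return {
        "median": np.nanmedian(stack, axis=0),
        "std": np.nanstd(stack, axis=0),
        "valid_count": np.sum(~np.isnan(stack), axis=0),
    }
```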

![plot](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/glacier_image_correlation/images/velocity_summary_statistics.png)


## Acknowledgements
- Scott Henderson developed many of the original ideas and much of the code used for this set of workflows
- [University of Washington eScience Incubator Program 2024](https://escience.washington.edu/incubator-24-glacial-lakes/)

10 changes: 4 additions & 6 deletions docs/exporting-results.md
@@ -32,14 +32,12 @@ gh run download

The workflow run also provides a publicly available link to the download artifact:

-Artifact download URL: [https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/actions/runs/9591972369/artifacts/1619380017](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/actions/runs/9591972369/artifacts/1619380017)
+Artifact download URL: [`https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/actions/runs/9591972369/artifacts/1619380017`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/actions/runs/9591972369/artifacts/1619380017)

-There is a `download-artifact` action to download the artifacts and share between jobs within a workflow run (note this is limited to the inidividual workflow run, for downloading across runs use the other options).
+There is a `download-artifact` action to download the artifacts and share them between jobs within a workflow run (note this is limited to the individual workflow run; for downloading across runs, use the other options).

-[Here](Artifact download URL: https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/actions/runs/9591972369/artifacts/1619380017) is more detailed documentation on GitHub Artifacts.
+[Here](https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts) you can find more detailed documentation on GitHub Artifacts.



@@ -55,7 +55,7 @@ The approach consists of a few steps:
* we will use [AnimMouse/setup-rclone](https://github.com/marketplace/actions/setup-rclone-action)
* configure a Google Drive remote locally
* encode the text in the config file and save it as a secret `RCLONE_CONFIG`
-* MacOX: `openssl base64 -in ~/.config/rclone/rclone_drive.conf`
+* `openssl base64 -in ~/.config/rclone/rclone_drive.conf`
* run the `rclone` command to upload the plots to Google Drive
* `rclone copy ambient_sound_analysis/img/broadband.png mydrive:rclone_uploads/`

14 changes: 10 additions & 4 deletions docs/getting-started.md
@@ -1,19 +1,25 @@
# Setup
-* Fork this repo
-* Enable Github Actions:
+
+* We expect all participants to have a GitHub account (if not, you can make one here: [https://github.com/login](https://github.com/login))
+* Fork [https://github.com/uwescience/SciPy2024-GitHubActionsTutorial](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial)
+* Enable GitHub Actions:
* Settings -> Actions -> Allow actions and reusable workflows
* [Managing Permissions Documentation](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository)


-All workflow configurations are stored in the [`.github/workflows`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/tree/main/.github/workflows) and will go through them in the following order:
+All workflow configurations are stored in the [`.github/workflows`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/tree/main/.github/workflows) folder, and we will go through them in the following order:

1. [`python_env.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/python_env.yml)
2. [`conda_env.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/conda_env.yml)
3. [`noise_processing.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/noise_processing.yml)
4. [`create_website_spectrogram.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/create_website_spectrogram.yml)
5. [`create_website.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/create_website.yml)
-6. ...
+6. [`batch_image_correlation.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/batch_image_correlation.yml)
+7. [`image_correlation_pair.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/image_correlation_pair.yml)
+8. [`summary_statistics.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/summary_statistics.yml)
+9. [`model_benchmarking.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/model_benchmarking.yml)
+10. [`create_website_benchmarks.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/create_website_benchmarks.yml)



Expand Down
101 changes: 100 additions & 1 deletion docs/intro.md
@@ -1,4 +1,103 @@
-# Welcome to GitHub Actions for Scientific Workflows
+# Welcome to GitHub Actions for Scientific Data Workflows


Tutorial presented at [SciPy 2024 Conference](https://www.scipy2024.scipy.org/)

Authors: Valentina Staneva, Quinn Brencher, Scott Henderson

## Abstract

In this tutorial we will introduce GitHub Actions to scientists as a tool for lightweight automation of scientific data workflows. We will
demonstrate that GitHub Actions are not just a tool for software testing, but can be used in various ways to improve the reproducibility
and impact of scientific analysis. Through a sequence of examples, we will demonstrate some of GitHub Actions' applications to scientific
workflows, such as scheduled deployment of algorithms to sensor streams, updating visualizations based on new data, processing large
datasets, model versioning and performance benchmarking. GitHub Actions can particularly empower Python scientific programmers who do not
want to build fully-fledged applications or set up complex computational infrastructure, but would like to increase the impact of their
work. The goal is for participants to leave with their own ideas of how to integrate GitHub Actions into their own work.

## Description

GitHub Actions are quite popular within the software engineering community, but a scientific Python programmer may not have seen their use
beyond a continuous integration framework for unit testing. We would like to increase their visibility through a scientific workflow lens.
We will use examples that are relevant to the community: wrangling a messy realtime hydrophone data stream to display underwater noise from
the Puget Sound (not far from the conference venue!), or processing hundreds of satellite radar images over glacial lakes in High-Mountain
Asia to study flood hazards. We assume no prior knowledge of GitHub Actions and will start slowly with a “Hello World” step, but build
quickly toward complex and exciting workflows. We will also showcase their value for scientific collaborations across institutions as a
means to share reproducible workflows and computing infrastructure.

## Prerequisites
* a GitHub account
* familiarity with git (commits, versioning), GitHub (push, pull requests), and Python (conda, scipy, matplotlib)
* some maturity in manipulating scientific data and exposure to the challenges associated with it
* the ability to read code (our examples may use libraries not familiar to the audience, but the focus will be on the steps these libraries accomplish rather than the details)

## Installation Instructions
Participants can make edits from the GitHub interface, but if they wish to make updates locally, they will need a functioning git installation
([setup instructions](https://swcarpentry.github.io/git-novice/#installing-git)).

## Outline

### Short Version
```{tableofcontents}
```

### Long Version (with approximate schedule)
* Overview of GitHub Actions and Workflows and their popular uses in Python software development (examples of testing, linting,
packaging) (20 min)
* We will explain the main components of GitHub Actions and associated terminology
* We will summarize their typical uses in software development
* We will point to popular GitHub Actions used in Python software development and packaging (the focus of this tutorial will not be
on them but rather on scientific pipelines)

* Setting up your first workflow: a scientific Python environment (20 min)
* participants will update a workflow `.yml` file to create an environment with their favorite Python libraries
* participants will inspect the GitHub interface to see the workflow runs

* Scheduled algorithm deployment to a realtime stream (30 min)
* we will deploy a typical scientific workflow: reading data, converting to a new format, and making a visualization
* participants will update the deployment schedule to trigger a new workflow and will monitor the progress in the GitHub interface

* Break (15 min)

* Exporting results (30 min)
* participants will learn about various ways to store the results:
* caching
* committing to GitHub
* creating GitHub artifacts
* storing to personal storage
* they will modify the code to make a new plot which will be automatically updated
* they will use either matplotlib or an interactive library such as plotly

* Update results on a webpage (30 min)
* we will overview different ways to display scientific results on a webpage
* we will demonstrate the workflow to deploy the webpage
* participants will rerender the webpage based on the updates in GitHub

* Large-scale data processing (45 min)
* we will demonstrate a use-case of processing large data sets with GitHub Actions
* participants will fiddle with problem size to understand the power and limits of the computational infrastructure
* we will discuss connections to cluster/cloud computing

* Break (10 min)

* Model Versioning and Benchmarking (20 min)
* we will introduce how to leverage GitHub’s version control to version different models and track their performance
* participants can contribute a new model and check its performance
* we will discuss how this can be used as a community network to share methods and results

* Recap and Discussion (or buffer time) (20 min)
* we will have a discussion on potential uses of GitHub Actions within the work of the participants


# References
* [*GitHub Actions for Scientific Data Workflows*](https://github.com/valentina-s/GithubActionsTutorial-USRSE23), Valentina Staneva,
[US-RSE 2023 Tutorial](https://us-rse.org/usrse23/program/tutorials/)
* [*Characterizing glacial lake outburst flood hazard at regional scale using fused InSAR-speckle tracking surface displacement time
series*](https://escience.washington.edu/2024-incubator-projects/), Quinn Brencher and Scott Henderson, eScience Institute Data Incubator
Project, 2024, [[repo](https://github.com/relativeorbit/actions-batch-demo)]
* [*GitHub Actions Workflows for Scheduled Algorithm
Deployment*](https://summerofcode.withgoogle.com/archive/2021/projects/5026942771789824), Dmitry Volodin, Jesse Lopez, Scott Veirs, Val
Veirs, Valentina Staneva, Orcasound Google Summer of Code 2021 Project, [[repo](https://github.com/orcasound/orca-action-workflow)]
* [*GitHub Actions Documentation*](https://docs.github.com/en/actions/learn-github-actions)



45 changes: 45 additions & 0 deletions docs/model_benchmarking.md
@@ -0,0 +1,45 @@
# Collaborative Model Versioning and Benchmarking

Here we will describe a scenario in which users submit different models to be applied to common data and compare the results. For this we will leverage GitHub's core features to facilitate code versioning and collaborative development, and will set up a GitHub Actions configuration which triggers the evaluation when a user creates a `pull request` with a new version of the model, and updates a table with the user's results and the corresponding commit number.

We will use a simple approach to approximate the number of ships passing during a time window by counting the number of peaks that appear above a threshold in the broadband plot. The threshold is set in the [`model_benchmarking.py`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/ambient_sound_analysis/model_benchmarking.py) script.
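The idea can be sketched as follows (a hypothetical implementation of thresholded peak counting; the function name and sample data are illustrative, not taken from `model_benchmarking.py`):

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_ship_count(broadband_levels, threshold):
    """Count the peaks above `threshold` in a broadband noise series.

    Each local maximum whose height exceeds the threshold is treated
    as one passing ship.
    """
    peaks, _ = find_peaks(np.asarray(broadband_levels), height=threshold)
    return len(peaks)

# Two excursions above a threshold of 5 count as two "ships"
estimate_ship_count([0, 6, 0, 1, 7, 8, 7, 1, 0], 5)  # 2
```

Raising the threshold makes the count less sensitive to noise but risks missing quieter ships, which is exactly the trade-off the benchmarking exercise explores.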


## Model Versioning Workflow
The workflow which triggers the model evaluation is in [`model_benchmarking.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/model_benchmarking.yml). It consists of the following steps:

1. it gets triggered on `pull_request`
* the `synchronize` type ensures it also gets triggered when somebody updates an existing pull request
2. it runs the `model_benchmarking.py` script, which creates a `.csv` file containing the estimated number of ships
3. it appends submission metadata (username, commit SHA, pull request title) to the row with the number of ships
4. it stores the row in a `score_[SHA].csv` file
5. it commits the 1-row file to the `ambient_sound_analysis/csv` folder
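The steps above can be sketched as a workflow configuration. This is an illustrative outline, not the exact contents of `model_benchmarking.yml`; in particular, the commit step is simplified:

```yaml
name: model_benchmarking
on:
  pull_request:
    types: [opened, synchronize]   # synchronize re-runs the evaluation on PR updates

jobs:
  evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the benchmark script
        run: python ambient_sound_analysis/model_benchmarking.py
      - name: Commit the 1-row score file
        # simplified: the real workflow must check out and push to the PR branch
        run: |
          git config user.name "github-actions"
          git config user.email "github-actions@github.com"
          git add ambient_sound_analysis/csv/score_${GITHUB_SHA}.csv
          git commit -m "Add score for ${GITHUB_SHA}"
          git push
```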


## Model Benchmarking Workflow

The next workflow follows the steps of the `create_website_spectrogram` workflow, which converts a notebook ([`display_benchmarks`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/ambient_sound_analysis/display_benchmarks.ipynb)) to a website. In this case, we have a very simple notebook which reads all `score_[SHA].csv` files and displays a "benchmark table" with the individual entries. This notebook is converted to a webpage ([https://uwescience.github.io/SciPy2024-GitHubActionsTutorial/display_benchmarks.html](https://uwescience.github.io/SciPy2024-GitHubActionsTutorial/display_benchmarks.html)).
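The notebook's aggregation step can be sketched with pandas (a hypothetical helper; the folder path and column names are illustrative, not taken from the notebook):

```python
import glob
import pandas as pd

def build_benchmark_table(csv_folder):
    """Concatenate all score_[SHA].csv files into one benchmark table.

    Each file contributes one row (score plus submission metadata),
    so the concatenated frame is the benchmark table itself.
    """
    files = sorted(glob.glob(f"{csv_folder}/score_*.csv"))
    return pd.concat((pd.read_csv(f) for f in files), ignore_index=True)
```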

### Exercise

Create a branch and update the `model_benchmarking.py` file with a different threshold:

```python
# set threshold
threshold = ??
```

Submit a pull request from this branch to `main` and monitor the execution of the workflows. Check out the generated website at [https://uwescience.github.io/SciPy2024-GitHubActionsTutorial/display_benchmarks.html](https://uwescience.github.io/SciPy2024-GitHubActionsTutorial/display_benchmarks.html).