Here we will describe a scenario in which users submit different models to be applied to common data and compare the results. For this we will leverage GitHub's core features for code versioning and collaborative development, and will set up a GitHub Actions configuration which triggers the evaluation when a user creates a `pull request` with a new version of the model, then updates a table with the user's results and the corresponding commit number.
We will use a simple approach to approximate the number of ships passing during a time window by counting the number of peaks that appear above a threshold in the broadband plot. The threshold is set in the [`model_benchmarking.py`](https://github.com/uwescience/GitHubActionsTutorial-USRSE24/blob/main/ambient_sound_analysis/model_benchmarking.py) script.
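To make the idea concrete, here is a minimal sketch of the peak-counting approach, assuming the broadband sound levels arrive as a 1-D NumPy array. The names `count_ships` and `levels` are hypothetical; the actual logic lives in `model_benchmarking.py`.

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical sketch of the peak-counting idea; the real implementation
# is in ambient_sound_analysis/model_benchmarking.py.
def count_ships(broadband: np.ndarray, threshold: float) -> int:
    """Approximate the ship count as the number of peaks that rise
    above `threshold` in the broadband time series."""
    peaks, _ = find_peaks(broadband, height=threshold)
    return len(peaks)

# Synthetic example: two bumps rise above a threshold of 1.0.
levels = np.array([0.2, 0.5, 1.4, 0.6, 0.3, 1.8, 0.4])
print(count_ships(levels, threshold=1.0))  # -> 2
```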
## Model Versioning Workflow
The workflow which triggers the model evaluation is in [`model_benchmarking.yml`](https://github.com/uwescience/GitHubActionsTutorial-USRSE24/blob/main/.github/workflows/model_benchmarking.yml). It consists of the following steps:
1. It gets triggered on `pull_request`
   * the `synchronize` type ensures it also gets triggered when somebody updates an existing pull request (see the trigger sketch after this list)
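For reference, a trigger of this kind looks roughly like the sketch below. This is not a copy of the repository's workflow file, just the standard GitHub Actions syntax for the events described above.

```yaml
# Sketch of the trigger section only; see .github/workflows/model_benchmarking.yml
# in the repository for the full workflow.
on:
  pull_request:
    types: [opened, synchronize]
```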
## Model Benchmarking Workflow
The next workflow follows the steps of the `create_website_spectrogram` workflow, which converts the [`display_benchmarks`](https://github.com/uwescience/GitHubActionsTutorial-USRSE24/blob/main/ambient_sound_analysis/display_benchmarks.ipynb) notebook to a website. In this case, we have a very simple notebook which reads all `score_[SHA].csv` files and displays a "benchmark table" with the individual entries. This notebook is converted to a webpage ([https://uwescience.github.io/GitHubActionsTutorial-USRSE24/display_benchmarks.html](https://uwescience.github.io/GitHubActionsTutorial-USRSE24/display_benchmarks.html)).
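Such a notebook cell could be as simple as the sketch below, assuming one `score_[SHA].csv` file per submission with the commit SHA encoded in the file name; the column handling here is hypothetical.

```python
import glob
import pandas as pd

# Gather every score_[SHA].csv produced by the benchmarking workflow
# and stack them into one benchmark table.
frames = []
for path in glob.glob("score_*.csv"):
    df = pd.read_csv(path)
    # Recover the commit SHA from a name like score_ab12cd3.csv.
    df["commit"] = path.removeprefix("score_").removesuffix(".csv")
    frames.append(df)

benchmarks = pd.concat(frames, ignore_index=True)
benchmarks  # rendered as a table in the notebook output
```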
### Exercise
Create a branch and update the `model_versioning.py` file with a different threshold:

```
threshold = ??
```
Submit a pull request from this branch to `main` and monitor the execution of the workflows. Check out the generated website at [https://uwescience.github.io/GitHubActionsTutorial-USRSE24/display_benchmarks.html](https://uwescience.github.io/GitHubActionsTutorial-USRSE24/display_benchmarks.html).