# Running Benchmarks

Follow the instructions in the README of each benchmark. Generally, a benchmark can be run with the following steps (a sketch of the full sequence follows this list):
1. Set up Docker and dependencies. There is a shared script (`install_cuda_docker.sh`) to do this. Some benchmarks have additional setup, described in their READMEs.
2. Download the dataset using `./download_dataset.sh`. This should be run outside of Docker, on your host machine, and from the directory the script is in (it may make assumptions about the CWD).
3. Optionally, run `verify_dataset.sh` to ensure the dataset was downloaded successfully.
4. Build and run the Docker image; the command to do this is included with each benchmark.
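As a hedged sketch of that sequence — the directory name `some_benchmark/`, image tag, and run flags below are placeholders, not commands from this repository; each benchmark's README gives the real ones:

```bash
# Hypothetical walkthrough; consult the benchmark's own README for
# the actual image name, mounts, and run command.
cd some_benchmark

# 1. One-time host setup: install Docker with NVIDIA support (shared script).
bash ../install_cuda_docker.sh

# 2. Download the dataset on the host, from the directory the script lives in.
./download_dataset.sh

# 3. Optionally verify the download.
./verify_dataset.sh

# 4. Build and run the image (placeholder tag and flags).
docker build -t mlperf/some_benchmark .
docker run --runtime=nvidia -v "$(pwd)/data:/data" mlperf/some_benchmark
```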
Each benchmark will run until the target quality is reached and then stop, printing timing results.

Some of these benchmarks are rather slow to run on the reference hardware. We expect to see significant performance improvements with more hardware and optimized implementations.
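Because a run stops on its own once the target quality is reached, one way to keep the printed results and also measure overall wall-clock time is to wrap the run command on the host — a sketch only, reusing the same placeholder invocation as above:

```bash
# Capture the benchmark's printed timing results and the total wall time.
time docker run --runtime=nvidia mlperf/some_benchmark 2>&1 | tee run.log
```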
# MLPerf Training v4.0 (Submission Deadline May 10, 2024)

*The framework listed here is the one used by the reference implementation. Submitters are free to use their own frameworks to run the benchmark.
| model | reference implementation | framework | dataset |
| ----- | ------------------------ | --------- | ------- |