
Add --output-iter-metrics flag to cpu userbenchmark scripts #2600


Conversation

murste01 (Contributor)

Adds a new --output-iter-metrics flag that includes per-iteration metrics in the benchmark result JSON files. This allows us to do our own statistical analysis and comparison of latency/throughput.
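
As an illustration of the statistical analysis this enables, here is a minimal sketch (not part of this PR; the metrics-baseline.json / metrics-candidate.json file names and the dlrm-eval key are hypothetical, but the JSON layout matches the test plan shown further down) that compares the per-iteration latencies of two runs:

import json
import statistics

def load_iter_latencies(path, key="dlrm-eval_iter_latencies"):
    # Read the per-iteration latency list from a metrics JSON file
    # produced with --output-iter-metrics (layout as in the test plan below).
    with open(path) as f:
        return json.load(f)["metrics"][key]

# Hypothetical file names: aggregate metrics files from two separate runs.
baseline = load_iter_latencies("metrics-baseline.json")
candidate = load_iter_latencies("metrics-candidate.json")

for name, samples in (("baseline", baseline), ("candidate", candidate)):
    # Units are whatever the benchmark reports for latency.
    print(f"{name}: mean={statistics.mean(samples):.3f}, "
          f"stdev={statistics.stdev(samples):.3f}, n={len(samples)}")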

@facebook-github-bot (Contributor)

Hi @murste01!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

murste01 force-pushed the murste01/add-output-iter-metrics-option branch from f091f71 to 88314e6 on March 25, 2025 at 12:15
@murste01 (Contributor, Author)

CCing @FindHao as I've seen you review similar PRs.

murste01 force-pushed the murste01/add-output-iter-metrics-option branch from 88314e6 to 2cc4f79 on March 25, 2025 at 16:10
@FindHao (Member) left a comment


Can you add the commands to run with and without this config, and the corresponding output?

@@ -143,6 +149,9 @@ def run(args: List[str], extra_args: List[str]):
    parser.add_argument(
        "--metrics", default="latencies", help="Benchmark metrics, split by comma."
    )
    parser.add_argument(
        "--output-iter-metrics", action=argparse.BooleanOptionalAction, help="Enable per-iteration benchmark metrics"
    )
Member


default=False

Contributor Author


Done.
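
With that change, the flag definition presumably ends up roughly as follows (a sketch combining the diff above with the requested default; not necessarily the exact merged code):

parser.add_argument(
    "--output-iter-metrics",
    action=argparse.BooleanOptionalAction,  # also generates --no-output-iter-metrics
    default=False,                          # keep the previous output format by default
    help="Enable per-iteration benchmark metrics",
)

argparse.BooleanOptionalAction provides a matching --no-output-iter-metrics flag automatically, and default=False means the per-iteration metrics are only emitted when the flag is passed explicitly.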

Adds a new `--output-iter-metrics` flag which adds per-iteration metrics
to benchmark result JSON files.
murste01 force-pushed the murste01/add-output-iter-metrics-option branch from 2cc4f79 to 84056bd on March 26, 2025 at 09:55
@FindHao (Member) left a comment


LGTM! Thank you for your contribution!
Nit: can you add the commands to run with and without this config, and the corresponding output, to this PR's description? One example is good enough, like the test plan in PR #2507.

@facebook-github-bot (Contributor)

@FindHao has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@murste01 (Contributor, Author) commented Mar 26, 2025

Of course. See below:

Without --output-iter-metrics:

Example command:

$ python3 run_benchmark.py cpu -t eval -m llama,dlrm --precision fp32 --nwarmup 10 --niter 10 -o results

Running benchmark: /home/murste01/miniforge3/envs/pytorch/bin/python3 /home/murste01/git/benchmark/userbenchmark/cpu/run_config.py -m llama -d cpu -t eval --precision fp32 --metrics latencies --nwarmup 10 --niter 10 -o results
Running TorchBenchModelConfig(name='llama', test='eval', device='cpu', batch_size=None, extra_args=['--precision', 'fp32'], extra_env=None, output_dir=None, skip=False) ... [Done]

Running benchmark: /home/murste01/miniforge3/envs/pytorch/bin/python3 /home/murste01/git/benchmark/userbenchmark/cpu/run_config.py -m dlrm -d cpu -t eval --precision fp32 --metrics latencies --nwarmup 10 --niter 10 -o results
Running TorchBenchModelConfig(name='dlrm', test='eval', device='cpu', batch_size=None, extra_args=['--precision', 'fp32'], extra_env=None, output_dir=None, skip=False) ... [Done]

Output tree:

$ tree
...
metrics-20250326164001.json
results/
|-- dlrm-eval
|   `-- metrics-1489.json
`-- llama-eval
    `-- metrics-1362.json
...

results/dlrm-eval/metrics-1489.json contents:

$ cat results/dlrm-eval/metrics-1489.json
{
    "name": "cpu",
    "environ": {
        "pytorch_git_version": "2236df1770800ffea5697b11b0bb0d910b2e59e1"
    },
    "metrics": {
        "latency": 19.028374499999998
    }
}

results/llama-eval/metrics-1362.json contents:

$ cat results/llama-eval/metrics-1362.json
{
    "name": "cpu",
    "environ": {
        "pytorch_git_version": "2236df1770800ffea5697b11b0bb0d910b2e59e1"
    },
    "metrics": {
        "latency": 115.062351
    }
}

metrics-20250326164001.json contents:

$ cat metrics-20250326164001.json
{
    "name": "cpu",
    "environ": {
        "pytorch_git_version": "2236df1770800ffea5697b11b0bb0d910b2e59e1"
    },
    "metrics": {
        "llama-eval_latency": 115.062351,
        "dlrm-eval_latency": 19.028374499999998
    }
}

With --output-iter-metrics:

Example command:

$ python3 run_benchmark.py cpu -t eval -m llama,dlrm --precision fp32 --nwarmup 10 --niter 10 -o results --output-iter-metrics

Running benchmark: /home/murste01/miniforge3/envs/pytorch/bin/python3 /home/murste01/git/benchmark/userbenchmark/cpu/run_config.py -m llama -d cpu -t eval --precision fp32 --output-iter-metrics --metrics latencies --nwarmup 10 --niter 10 -o results
Running TorchBenchModelConfig(name='llama', test='eval', device='cpu', batch_size=None, extra_args=['--precision', 'fp32'], extra_env=None, output_dir=None, skip=False) ... [Done]

Running benchmark: /home/murste01/miniforge3/envs/pytorch/bin/python3 /home/murste01/git/benchmark/userbenchmark/cpu/run_config.py -m dlrm -d cpu -t eval --precision fp32 --output-iter-metrics --metrics latencies --nwarmup 10 --niter 10 -o results
Running TorchBenchModelConfig(name='dlrm', test='eval', device='cpu', batch_size=None, extra_args=['--precision', 'fp32'], extra_env=None, output_dir=None, skip=False) ... [Done]

Output tree:

$ tree
...
metrics-20250326163144.json
results/
|-- dlrm-eval
|   `-- metrics-1088.json
`-- llama-eval
    `-- metrics-961.json
...

results/dlrm-eval/metrics-1088.json contents:

$ cat results/dlrm-eval/metrics-1088.json
{
    "name": "cpu",
    "environ": {
        "pytorch_git_version": "2236df1770800ffea5697b11b0bb0d910b2e59e1"
    },
    "metrics": {
        "latency": 19.2843725,
        "iter_latencies": [
            18.054879,
            21.023861,
            20.065568,
            20.18488,
            18.503177,
            20.390915,
            18.033985,
            21.313112,
            17.822422,
            18.1846
        ]
    }
}

results/llama-eval/metrics-961.json contents:

$ cat results/llama-eval/metrics-961.json
{
    "name": "cpu",
    "environ": {
        "pytorch_git_version": "2236df1770800ffea5697b11b0bb0d910b2e59e1"
    },
    "metrics": {
        "latency": 116.56627399999999,
        "iter_latencies": [
            124.029208,
            112.892981,
            116.560487,
            115.278424,
            101.89432,
            121.628168,
            116.572061,
            120.830798,
            132.563498,
            114.872396
        ]
    }
}

metrics-20250326163144.json contents:

$ cat metrics-20250326163144.json
{
    "name": "cpu",
    "environ": {
        "pytorch_git_version": "2236df1770800ffea5697b11b0bb0d910b2e59e1"
    },
    "metrics": {
        "llama-eval_latency": 116.56627399999999,
        "llama-eval_iter_latencies": [
            124.029208,
            112.892981,
            116.560487,
            115.278424,
            101.89432,
            121.628168,
            116.572061,
            120.830798,
            132.563498,
            114.872396
        ],
        "dlrm-eval_latency": 19.2843725,
        "dlrm-eval_iter_latencies": [
            18.054879,
            21.023861,
            20.065568,
            20.18488,
            18.503177,
            20.390915,
            18.033985,
            21.313112,
            17.822422,
            18.1846
        ]
    }
}
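
For what it's worth, in both examples above the aggregate latency value equals the median of the corresponding iter_latencies list, which the new field makes easy to verify or to replace with other statistics. A minimal sketch (reusing the per-model file from the example output tree above; this script is not part of the PR):

import json
import statistics

# Path taken from the example output tree above.
with open("results/dlrm-eval/metrics-1088.json") as f:
    metrics = json.load(f)["metrics"]

iters = metrics["iter_latencies"]
print("reported latency:", metrics["latency"])        # 19.2843725
print("median of iters: ", statistics.median(iters))  # 19.2843725 as well
print("mean: ", statistics.mean(iters))
print("stdev:", statistics.stdev(iters))
print("min/max:", min(iters), max(iters))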

@facebook-github-bot (Contributor)

@FindHao merged this pull request in 49c3f18.
