Add variable seqlen and sparsity parameters to jagged_sum benchmark
Summary:
Modify the existing `jagged_sum` operator benchmark to optionally accept any of the following parameters: `B` (dimension 0 of the nested tensor), `M` (dimension 2 of the nested tensor), `seqlen` (maximum sequence length on the ragged dimension), or `sparsity` (average sparsity on the ragged dimension). This diff holds the parameters provided on the command line fixed and sweeps all remaining parameters above, enabling testing of all combinations of multiple parameters in parallel.
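As an illustrative sketch only (not the benchmark's actual code), the sweep could be assembled as follows; the helper name `build_configs` and the default sweep ranges are assumptions for illustration:

```python
import itertools

# Hypothetical default sweep ranges; the real benchmark's ranges may differ.
DEFAULT_SWEEP = {
    "B": [32, 128, 512, 1024],          # dimension 0 of the nested tensor
    "M": [32, 128, 512, 1024],          # dimension 2 of the nested tensor
    "seqlen": [64, 256, 1024],          # max sequence length on the ragged dimension
    "sparsity": [0.0, 0.3, 0.6, 0.9],   # average sparsity on the ragged dimension
}

def build_configs(cli_args: dict):
    """Hold any parameters given on the command line fixed and sweep the rest.

    `cli_args` maps parameter names to the values passed via CLI (or None).
    Returns one dict per (B, M, seqlen, sparsity) combination.
    """
    sweep = {
        name: [cli_args[name]] if cli_args.get(name) is not None else values
        for name, values in DEFAULT_SWEEP.items()
    }
    names = list(sweep.keys())
    return [dict(zip(names, combo)) for combo in itertools.product(*sweep.values())]

# Example: fix B and sparsity on the command line; M and seqlen are swept.
configs = build_configs({"B": 1024, "M": None, "seqlen": None, "sparsity": 0.3})
```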
The following errors persist with sufficiently large inputs:
- `RuntimeError: numel needs to be smaller than int32_t max; otherwise, please use packed_accessor64` (when running command `buck2 run mode/{opt,inplace} //pytorch/benchmark:triton -- --op jagged_sum --B 1024 --M 1024 --sparsity 0.3`)
- `torch.OutOfMemoryError: CUDA out of memory.`
Reviewed By: davidberard98
Differential Revision: D58772201
```diff
 # greater sparsity --> shorter sequence lengths on ragged dimension
 seqlen_avg = math.floor(
-    self.seqlen * (1 - self.sparsity)
+    seqlen * (1 - sparsity)
 )  # average sequence length across all tensors in nested tensor
 seqlen_margin = math.floor(
-    self.seqlen * RANDOM_CHOICE_MARGIN
+    seqlen * RANDOM_CHOICE_MARGIN
 )  # use margin to constrain sequence lengths to range [seqlen_avg - seqlen_margin, seqlen_avg + seqlen_margin] to approximate an average sequence length, which correlates with sparsity
```
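For context, here is a minimal sketch of how `seqlen_avg` and `seqlen_margin` could be used to build a jagged input with the target sparsity. The function name `make_jagged_input`, the `RANDOM_CHOICE_MARGIN` value, and the use of `torch.nested.nested_tensor` with `layout=torch.jagged` are assumptions for illustration, not the benchmark's actual implementation:

```python
import math
import random
import torch

RANDOM_CHOICE_MARGIN = 0.3  # assumed value; the benchmark defines its own constant

def make_jagged_input(B: int, M: int, seqlen: int, sparsity: float) -> torch.Tensor:
    """Build a nested tensor whose ragged dimension approximates the target sparsity.

    Greater sparsity --> shorter sequence lengths on the ragged dimension.
    """
    seqlen_avg = math.floor(seqlen * (1 - sparsity))           # average ragged length
    seqlen_margin = math.floor(seqlen * RANDOM_CHOICE_MARGIN)  # spread around the average

    tensors = []
    for _ in range(B):
        # Constrain each sequence length to [seqlen_avg - seqlen_margin, seqlen_avg + seqlen_margin],
        # clamped to a valid range.
        low = max(seqlen_avg - seqlen_margin, 1)
        high = min(seqlen_avg + seqlen_margin, seqlen)
        length = random.randint(low, max(low, high))
        tensors.append(torch.randn(length, M, device="cuda"))

    return torch.nested.nested_tensor(tensors, layout=torch.jagged)
```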