
Commit 40a841b

Dark Knight authored and facebook-github-bot committed
Revert D71711852
Summary: This diff reverts D71711852, which caused a QPS regression, as per: https://fb.workplace.com/groups/1982103978790468/posts/2575339479466912/?comment_id=2577572732576920

bypass-github-export-checks "these should not have triggered for DK revert"

Reviewed By: izaitsevfb

Differential Revision: D72140190

fbshipit-source-id: 94dfe23dd6b6cee3d95a40fcb28558d4e23c7893
1 parent 7727610 commit 40a841b

File tree

1 file changed: +0 additions, −10 deletions


userbenchmark/dynamo/dynamobench/common.py

Lines changed: 0 additions & 10 deletions
@@ -3545,16 +3545,6 @@ def run(runner, args, original_dir=None):
         if args.devices == ["xpu"]:
             torch.use_deterministic_algorithms(True, warn_only=True)
         os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
-        # TODO(eqy): revisit when cuBLASLt workspace size is bumped
-        # if args.only is not None and args.only in {
-        #     "DebertaForQuestionAnswering",
-        #     "RobertaForQuestionAnswering",
-        #     "nvidia_deeprecommender",
-        #     "volo_d1_224",
-        # }:
-        #     # These seem unhappy with numerics of larger cuBLASLt workspace
-        #     # sizes following #145130 (due to enabling split-k?)
-        #     torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = False
         torch.backends.cudnn.deterministic = True
         torch.backends.cudnn.allow_tf32 = False
         torch.backends.cudnn.benchmark = False
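
For context, the ten deleted lines were a commented-out (never active) workaround that would have turned off reduced-precision FP16 accumulation in CUDA matmuls for four benchmark models whose accuracy checks were reportedly sensitive to the larger cuBLASLt workspace from #145130. The sketch below shows what that workaround would do if it were ever enabled; the model names and the torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction flag come from the deleted comment, while the FP16_REDUCTION_SENSITIVE_MODELS name and the args.only handling are assumptions modeled on the benchmark harness signature visible in the hunk header.

import torch

# Sketch only: mirrors the commented-out block that this commit deletes.
# FP16_REDUCTION_SENSITIVE_MODELS is a hypothetical name; the model list is
# taken verbatim from the deleted comment.
FP16_REDUCTION_SENSITIVE_MODELS = {
    "DebertaForQuestionAnswering",
    "RobertaForQuestionAnswering",
    "nvidia_deeprecommender",
    "volo_d1_224",
}

def maybe_disable_fp16_reduced_precision(args):
    # `args` is assumed to be the parsed benchmark arguments, where
    # `args.only` names a single model selected for the run.
    if args.only is not None and args.only in FP16_REDUCTION_SENSITIVE_MODELS:
        # Force full-precision accumulation for FP16 matmuls so that split-k
        # kernels chosen with the larger cuBLASLt workspace do not perturb
        # the numerics of these models' accuracy checks.
        torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = False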
