Commit afb3bf3

mengluy0125 authored and facebook-github-bot committed

Support activation quantization without scaling (pytorch#2607)

Summary: X-link: pytorch/pytorch#148380 We enable activation quantization in the forward pass, and users can customize the dtype they want to quantize to.

Differential Revision: D70522237

1 parent e53141f commit afb3bf3

File tree

1 file changed: +4 additions, -0 deletions

  • userbenchmark/dynamo/dynamobench/_dynamo/utils.py

userbenchmark/dynamo/dynamobench/_dynamo/utils.py

Lines changed: 4 additions & 0 deletions

@@ -4589,3 +4589,7 @@ def maybe_disable_inference_mode_for_fake_prop() -> Generator[None, None, None]:
         yield
     else:
         yield
+
+
+def is_node_meta_valid(node: Optional[torch.fx.Node]) -> bool:
+    return node is None or "example_value" in node.meta or "val" in node.meta
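The logic of the added `is_node_meta_valid` helper can be illustrated with a minimal sketch. `MockNode` below is a hypothetical stand-in for `torch.fx.Node` (only its `meta` dict matters here), so the example runs without torch installed:

```python
from typing import Optional


class MockNode:
    """Hypothetical stand-in for torch.fx.Node: only `meta` is used."""
    def __init__(self, meta: dict):
        self.meta = meta


def is_node_meta_valid(node: Optional[MockNode]) -> bool:
    # A missing node counts as valid; otherwise the node's metadata must
    # carry either an "example_value" or a "val" entry.
    return node is None or "example_value" in node.meta or "val" in node.meta


print(is_node_meta_valid(None))                  # True: no node to validate
print(is_node_meta_valid(MockNode({"val": 1})))  # True: has "val" metadata
print(is_node_meta_valid(MockNode({})))          # False: metadata is empty
```

In the real utility, nodes produced during fake-tensor propagation carry an `example_value` or `val` entry in `node.meta`; the helper checks that this metadata is present before it is consumed downstream.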

0 commit comments