Trying to understand the Gas calculation in the traces. #5063
Unanswered · Michaelr-spherex asked this question in Help
Replies: 0 comments
Hey there. This is probably me missing something small in the code, but it has been driving me crazy for a week now, so maybe someone can point me in the right direction:
I'm playing with Foundry to make sure I understand how transactions work, and I ran into some unclear results with the gas calculation. I'll show with an example. The tx I'm looking at is
0xe2e638ff6357796193f2e9e08d007a1c74cbb35bf9376535f61fa14054e70b92
on Ethereum mainnet. Looking at it in Etherscan, I can see it used 137,054 gas units. That's great. When I run the transaction in my simulator (the one written with Foundry), I get the same result. So far so good. The problem begins when I look at the parity trace of the tx on Etherscan and compare it to the traces I get from the simulator. Here is the code I use to get the traces in the simulator:
(`CallTraceItem` is my class with simpler "serde" properties for each of the traces.) Now I get gas discrepancies for some of the specific "steps" of the traces. For instance, the first action costs 146,781 gas units according to Etherscan, but only 117,425 in my traces. Some of the actions, by the way, show the same gas usage.
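One thing I considered (just a hunch on my part, not something I've confirmed for this tx): tracers can disagree on whether the gas they report for a call frame is the gas available at the call site or the gas actually forwarded to the callee after EIP-150's "all but one 64th" retention. A minimal sketch of that rule, with made-up numbers:

```rust
/// EIP-150 "all but one 64th" rule: a CALL may forward at most
/// gas_available - gas_available / 64 to the callee. A tracer that
/// reports "gas at the call site" and one that reports "gas forwarded"
/// will therefore show different numbers for the same frame.
fn max_forwardable(gas_available: u64) -> u64 {
    gas_available - gas_available / 64
}

fn main() {
    // Illustrative only; these are not the values from the tx above.
    let available = 100_000u64;
    println!(
        "available: {available}, max forwardable: {}",
        max_forwardable(available)
    );
}
```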
So my question is:
What is the origin of the difference between Etherscan and Foundry in the traces of each action? (The overall tx gas usage seems to be consistent.)
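For reference, another place per-frame numbers commonly diverge between tools (again an assumption, not verified for this tx): whether the top-level frame's reported gas includes the intrinsic transaction cost, i.e. the 21,000 base plus calldata gas (4 per zero byte, 16 per non-zero byte since EIP-2028). A sketch of that cost:

```rust
/// Intrinsic gas of a transaction (post-EIP-2028 calldata pricing):
/// 21000 base + 4 per zero calldata byte + 16 per non-zero byte.
/// Some tracers deduct this before reporting the top-level frame's gas,
/// which makes their per-frame numbers differ from tools that don't.
fn intrinsic_gas(calldata: &[u8]) -> u64 {
    let zeros = calldata.iter().filter(|&&b| b == 0).count() as u64;
    let nonzeros = calldata.len() as u64 - zeros;
    21_000 + 4 * zeros + 16 * nonzeros
}

fn main() {
    // A plain ETH transfer with empty calldata costs exactly the base 21000.
    println!("empty calldata: {}", intrinsic_gas(&[]));
    println!("calldata [0, 0, 1, 255]: {}", intrinsic_gas(&[0, 0, 1, 255]));
}
```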
Some notes:
This is the code setting up the executor:
Also, I've made sure to fork at the previous block and run all transactions that should run before the one in question, so that the state of the blockchain is identical.
Thanks in advance
Michael