docs/a2c-full.md (2 additions, 2 deletions)
@@ -94,7 +94,7 @@ At the moment, the policy is just a neural network with some code around it so t
Additionally, the bot was only trained to play as the home team and would not know how to play on the other side of the field. Let's fix these things so we can watch
our agent play, and even play against it. The code that will be presented can also be used to submit your own neural network-based bot to the Bot Bowl competition.

-The [examples/a2c/a2c_example.py](https://github.com/njustesen/ffai/blob/master/examples/a2c/a2c_agent.py) script implements the ```Agent``` class just like the
+The [examples/a2c/a2c_agent.py](https://github.com/njustesen/ffai/blob/master/examples/a2c/a2c_agent.py) script implements the ```Agent``` class just like the
scripted bots in our previous tutorials. In the constructor of our Agent class, we load in our neural network policy.
```python
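# The diff hunk ends at the opening of this code block, so the constructor
# body itself is not shown here. Below is a minimal, hypothetical sketch of
# how an Agent subclass might load a trained policy in its constructor; the
# class name, import path, and model_path argument are assumptions for
# illustration, not the tutorial's actual code.
import torch

from ffai.core.model import Agent  # import path assumed; may differ by version


class A2CAgent(Agent):
    def __init__(self, name, model_path="models/a2c.pth"):  # path is assumed
        super().__init__(name)
        # Load the trained policy network from disk and switch it to eval
        # mode so no gradients are tracked while the bot is playing.
        self.policy = torch.load(model_path, map_location="cpu")
        self.policy.eval()

    # A complete agent must also implement new_game(), act(), and end_game();
    # those are covered by the rest of the tutorial and omitted here.
```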
@@ -239,4 +239,4 @@ sometimes score from this situation.
- Perform reasonable blocks at the line of scrimmage.
- It's hard to see, but it seems that the agent is fouling a lot on the line of scrimmage as well.

-Can you come up with better ways to guide the RL agent during training? Can you achieve a higher win rate with more parameters and more training?
+Can you come up with better ways to guide the RL agent during training? Can you achieve a higher win rate with more parameters and more training?