
Commit 80864b0

Update a2c-full.md
1 parent 503b07a commit 80864b0

File tree

1 file changed (+2 additions, −2 deletions)


docs/a2c-full.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -94,7 +94,7 @@ At the moment, the policy is just a neural network with some code around it so t
 Additionally, the bot was only trained to play as the home team and would not know how to play on the other side of the field. Let's fix these things so we can watch
 our agent play, and even play against it. The code that will be presented can also be used to submit your own neural network based bot to the Bot Bowl competition.
 
-The [examples/a2c/a2c_example.py](https://github.com/njustesen/ffai/blob/master/examples/a2c/a2c_agent.py) script implements the ```Agent``` class just like the
+The [examples/a2c/a2c_agent.py](https://github.com/njustesen/ffai/blob/master/examples/a2c/a2c_agent.py) script implements the ```Agent``` class just like the
 scripted bots in our previous tutorials. In the constructor of our Agent class, we load in our neural network policy.
 
 ```python
@@ -239,4 +239,4 @@ sometimes score from this situation.
 - Perform reasonable blocks at the line of scrimmage.
 - It's hard to see, but it seems that agent is fouling a lot on the line of scrimmage as well.
 
-Can you come up with better ways to guide the RL agent during training? Can you achieve a higher win rate with more parameters and more training?
+Can you come up with better ways to guide the RL agent during training? Can you achieve a higher win rate with more parameters and more training?
````
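The diff context mentions that the Agent class constructor "loads in our neural network policy." For readers not following the linked script, the shape of that constructor might look roughly like the sketch below. Everything here is an assumption, not copied from `examples/a2c/a2c_agent.py`: the class name `A2CAgent`, the `act` method, and the pickle-based loading (a real PyTorch script would more likely call `torch.load`) are illustrative stand-ins chosen to keep the sketch dependency-free.

```python
import io
import pickle


class A2CAgent:
    """Hypothetical sketch of a neural-network bot like the one in the
    tutorial. The real agent subclasses FFAI's Agent class; names and the
    loading mechanism here are assumptions, not the actual API."""

    def __init__(self, name, policy_file):
        self.name = name
        # The tutorial states that the constructor loads the trained policy.
        # A PyTorch implementation would typically use torch.load() here;
        # this sketch unpickles a callable from a file-like object instead.
        self.policy = pickle.load(policy_file)

    def act(self, observation):
        # A real agent would run the observation through the policy network
        # and map the sampled action index back to a game action object.
        return self.policy(observation)


# Usage with a trivial stand-in "policy" (defined at module level so that
# pickle can serialize it by reference):
def dummy_policy(obs):
    return max(obs)  # pretend the "best action" is the largest value


buf = io.BytesIO(pickle.dumps(dummy_policy))
agent = A2CAgent("my-a2c-bot", buf)
print(agent.act([0.1, 0.9, 0.4]))  # → 0.9
```

The stand-in keeps the example self-contained; swapping `dummy_policy` for a trained network object is the only change needed to make the sketch mirror the tutorial's description.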
