Evaluation functions
An evaluation function is a function used to estimate the value or goodness of a position (usually at a leaf or terminal node) in the game tree.
In this library, the estimation is represented by an integer. It is recommended, but not mandatory, that evaluations fit in a short: this makes them compact enough for storage in transposition tables and, usually, precise enough for good evaluation accuracy.
The default win score returned by the com.fathzer.games.ai.evaluation.Evaluator complies with this advice.
A static evaluator is the simplest kind of evaluator. It evaluates the position represented by a move generator that is passed to its evaluate method. It is usually stateless and thread-safe, so quite easy to develop. On the other hand, it can be far slower than an incremental evaluation function.
The com.fathzer.games.StaticEvaluator interface implements the appropriate behavior for most of the com.fathzer.games.ai.evaluation.Evaluator interface and lets you implement the only relevant method: int evaluate(B board).
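As a minimal sketch, a static material-count evaluator could look like the following. Note that the Board class and its piece counts are hypothetical stand-ins for your own move generator, not part of the library:

```java
// A self-contained sketch of a static evaluation function.
// The Board class below is a hypothetical, simplified position representation.
public class MaterialEvaluator {
    // Hypothetical position: piece counts for each side.
    public static final class Board {
        final int whitePawns, whiteQueens, blackPawns, blackQueens;
        public Board(int wp, int wq, int bp, int bq) {
            this.whitePawns = wp; this.whiteQueens = wq;
            this.blackPawns = bp; this.blackQueens = bq;
        }
    }

    // Stateless and thread-safe: the score depends only on the board passed in.
    // Scores stay small enough to fit in a short, as recommended above.
    public static int evaluate(Board board) {
        int score = 0;
        score += 100 * (board.whitePawns - board.blackPawns);   // a pawn is worth 100
        score += 900 * (board.whiteQueens - board.blackQueens); // a queen is worth 900
        return score;
    }
}
```

Because the whole position is rescanned on every call, the cost grows with the number of evaluation terms, which is what motivates incremental evaluators below.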
Unlike a static evaluator, an incremental evaluator maintains a state of the evaluation by being informed of the moves played during the game. For games where the situation evolves slowly (chess, for example), these evaluators are much faster than static evaluators, at the cost of greater code complexity. The com.fathzer.games.ai.evaluation.Evaluator interface describes the methods an incremental evaluator has to implement.
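The idea can be sketched as follows. The Move record, the method names, and the capture encoding are all illustrative assumptions, not the library's actual API; a real implementation would implement the Evaluator interface against your own move type:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A minimal sketch of incremental evaluation: instead of recomputing the
// material balance from scratch, the evaluator applies only the delta of
// each move, and restores the previous score when a move is taken back.
public class IncrementalMaterialEvaluator {
    // Hypothetical move: the value of the captured piece (0 if no capture),
    // positive when white captures, negative when black captures.
    public record Move(int capturedValue) {}

    private int score; // current evaluation, from white's point of view
    private final Deque<Integer> history = new ArrayDeque<>();

    public void playMove(Move move) {
        history.push(score);            // remember the score for take-back
        score += move.capturedValue();  // apply only the delta
    }

    public void unplayMove() {
        score = history.pop();          // restore the previous evaluation
    }

    public int evaluate() {
        return score;
    }
}
```

Each evaluation is then O(1) regardless of board size, but the evaluator is stateful, so it can no longer be freely shared between threads the way a static evaluator can.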
Many games, such as chess, reversi, etc., are zero-sum games. This means an advantage for one side equals a loss for the other. In other words, the evaluation of a position from a player's point of view is the negation of the evaluation from its opponent's point of view.
The com.fathzer.games.ai.evaluation.ZeroSumEvaluator interface simplifies the evaluator's code by letting the developer code the evaluation only from the white player's point of view; the result is negated when required.
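The negation trick can be shown with a tiny self-contained example; the method names and material parameters here are illustrative, not the library's actual API:

```java
// A self-contained sketch of the zero-sum idea: evaluate only from white's
// point of view, and negate when it is black's turn to move.
public class ZeroSumSketch {
    // Hypothetical evaluation from the white player's point of view.
    public static int evaluateFromWhite(int whiteMaterial, int blackMaterial) {
        return whiteMaterial - blackMaterial;
    }

    // Evaluation from the point of view of the player to move.
    public static int evaluate(int whiteMaterial, int blackMaterial, boolean whiteToMove) {
        final int white = evaluateFromWhite(whiteMaterial, blackMaterial);
        return whiteToMove ? white : -white; // zero-sum: black's score is the negation
    }
}
```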
AI algorithms based on move tree exploration evaluate positions reached after a fixed number of moves. This number of moves, or depth, is like a horizon beyond which the algorithm is blind. Unfortunately, this leads to evaluating positions that are greatly unstable (imagine, in chess, a position where your queen is about to be taken), and so to inaccurate evaluations. This problem is known as the horizon effect.
In order to mitigate this effect, we can implement quiescence search.
The com.fathzer.games.ai.Negamax AI allows the developer to define a com.fathzer.games.ai.QuiesceEvaluator that is free to explore deeper the positions it considers unstable. As this extra exploration is clearly game dependent, this library does not provide any implementation of this interface (by default, the one used by the Negamax implementation considers all positions as stable and directly performs the evaluation).
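A classic way to implement such an extension is to keep searching "noisy" moves (typically captures) until the position is quiet. The sketch below shows that idea with a hypothetical Position interface; the real QuiesceEvaluator contract may differ:

```java
import java.util.List;

// A classic quiescence-search sketch in negamax style: instead of evaluating
// an unstable position directly, keep exploring capture moves until the
// position is quiet. Position and its methods are hypothetical.
public class QuiescenceSketch {
    public interface Position {
        int staticEvaluation();        // score for the player to move
        List<Position> captureMoves(); // positions reached by captures only
    }

    public static int quiesce(Position pos, int alpha, int beta) {
        // "Stand pat": the player to move can usually do at least as well
        // as the static score by declining all captures.
        int standPat = pos.staticEvaluation();
        if (standPat >= beta) {
            return beta;
        }
        alpha = Math.max(alpha, standPat);
        for (Position next : pos.captureMoves()) {
            int score = -quiesce(next, -beta, -alpha); // negamax recursion
            if (score >= beta) {
                return beta; // fail-hard beta cutoff
            }
            alpha = Math.max(alpha, score);
        }
        return alpha;
    }
}
```

When a position has no captures left, the recursion bottoms out at the static evaluation, so the search only pays the extra cost where the position is actually unstable.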