Why are the BLEU scores from train.py different from those in eval.py?
In my experiments, the BLEU scores computed in train.py were around 0.16, whereas those computed in eval.py were around 0.24.
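I wondered whether the gap could come from how BLEU is aggregated in the two scripts, for example averaging per-sentence BLEU during training versus computing a single corpus-level BLEU at evaluation. Just as an illustration (a minimal sketch using NLTK on made-up tokens, not the actual code from train.py or eval.py), the two aggregation schemes generally give different numbers on the same hypotheses:

```python
from nltk.translate.bleu_score import sentence_bleu, corpus_bleu, SmoothingFunction

# Hypothetical example data (token lists), not taken from this repo.
references = [
    [["the", "cat", "sat", "on", "the", "mat"]],
    [["there", "is", "a", "book", "on", "the", "table"]],
]
hypotheses = [
    ["the", "cat", "sat", "on", "mat"],
    ["there", "is", "book", "on", "the", "table"],
]

smooth = SmoothingFunction().method1

# Average of per-sentence BLEU (one way a training loop might log BLEU).
avg_sentence_bleu = sum(
    sentence_bleu(refs, hyp, smoothing_function=smooth)
    for refs, hyp in zip(references, hypotheses)
) / len(hypotheses)

# Corpus-level BLEU, which pools n-gram counts over all sentences first.
corpus_level_bleu = corpus_bleu(references, hypotheses, smoothing_function=smooth)

print(avg_sentence_bleu, corpus_level_bleu)  # the two values usually differ
```

Could a difference like this, or a difference in tokenization or decoding settings between the two scripts, explain the gap?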