Description
The current implementation of BatchProcessor has a faulty "back pressure" mechanism: the worker's next polling iteration is only unlocked once all of the pending tasks have finished, not as soon as any one of them finishes.
The correct implementation should let the poller fetch more tasks as soon as an execution slot frees up in the internal promise queue.
We could also keep an internal "buffer" of messages, so that some are always waiting for an execution slot while the poller refills the buffer - this would make processing even faster, at the risk of exceeding the visibility timeout.
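A minimal sketch of the slot-based approach, using Promise.race so the loop wakes up when any in-flight task settles rather than when the whole batch does. The names pollTask, runTask, and processWithBackPressure are hypothetical stand-ins for the real poller and handler, not the actual BatchProcessor API:

```typescript
type Task = number;

// Hypothetical sketch: keep up to `concurrency` tasks in flight and
// poll again as soon as ANY slot frees, instead of waiting for the batch.
async function processWithBackPressure(
  pollTask: () => Promise<Task | null>, // returns null when the queue is empty
  runTask: (task: Task) => Promise<void>,
  concurrency: number,
): Promise<void> {
  const inFlight = new Set<Promise<void>>();

  for (;;) {
    // Fill every free execution slot before waiting.
    while (inFlight.size < concurrency) {
      const task = await pollTask();
      if (task === null) break; // queue drained for now
      const p = runTask(task).finally(() => inFlight.delete(p));
      inFlight.add(p);
    }
    if (inFlight.size === 0) break; // nothing polled and nothing running
    // Wait for ANY task to settle (not all of them), then loop and refill.
    await Promise.race(inFlight);
  }
}
```

The message "buffer" mentioned above could be layered on top of this by having pollTask read from a prefetched local queue instead of going to the broker on every call.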
This was reported by Renaud on Discord: https://discord.com/channels/1330460493495795804/1369991442226876498
"batching X concurrent tasks then waiting for them all to finish before starting a new batch (instead of picking tasks as soon as one finished). This is not great if some of the tasks are long as it means it will delay the next batch."