They seem to compare the asynchrony of their algorithm to Nginx being asynchronous ("asynchronous web-servers like Nginx use a state-machine parser. They parse the bytes as they arrive, and discard them."), but I don't see how that relates. How a web-server handles requests (multiple threads vs. multiple processes vs. event-driven in a single thread) is completely orthogonal to how it parses headers.
My impression is that their algorithm works in a streaming fashion, without having to allocate memory buffers, and that they call that asynchronous (wrongly, as far as I can tell).
I also don't really see what they mean by their algorithm being more scalable.
I guess since the parser state fits in a single state variable (plus the output counts), it would be trivial to interleave multiple input streams and incrementally count the words in each of them as new data arrives.
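To make that concrete, here is a minimal sketch of what I mean (my own toy code, not the paper's algorithm): the per-stream parser state is a single in-word/between-words flag, so you can feed chunks from several streams in any order and each stream's progress is just its (state, count) pair.

```python
# Toy streaming word counter. Entire parser state per stream is one
# boolean, so chunks from many streams can be interleaved freely.

IN_WORD, BETWEEN = True, False

def feed(state, count, chunk):
    """Consume one chunk of text; return the updated (state, count)."""
    for ch in chunk:
        if ch.isspace():
            state = BETWEEN
        elif state is BETWEEN:      # transition marks the start of a new word
            state = IN_WORD
            count += 1
    return state, count

# Interleave chunks from two independent streams as "data arrives".
streams = {"a": (BETWEEN, 0), "b": (BETWEEN, 0)}
arrivals = [("a", "hello wo"), ("b", "foo"), ("a", "rld"), ("b", " bar baz")]
for name, chunk in arrivals:
    state, count = streams[name]
    streams[name] = feed(state, count, chunk)

print(streams["a"][1])  # 2 words in stream a ("world" split across chunks)
print(streams["b"][1])  # 3 words in stream b
```

Note that "world" arrives split across two chunks but is still counted once, because the in-word flag survives between calls; no buffering of partial input is needed.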