> Some compression algorithms support checkpointing the compression state so that it can be resumed from that point repeatedly ("dictionary-based compression").
Another top-level comment says this is relevant when you stuff an entire corpus into a single compressed stream: when you want to append new data, you don't want to recompress everything from the start.
Is it some kind of memoization?
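Not memoization exactly, more like snapshotting the compressor's internal state. As a minimal sketch (not necessarily what the quoted comment meant), Python's `zlib` compress objects expose `copy()`, which duplicates the compression state so the expensive prefix is compressed once and the stream can be resumed from that checkpoint repeatedly:

```python
import zlib

# Compress a large shared prefix once.
base = b"shared corpus data " * 1000
comp = zlib.compressobj()
prefix = comp.compress(base)

# Checkpoint: copy() snapshots the compressor state after the prefix.
checkpoint = comp.copy()

def finish(extra: bytes) -> bytes:
    # Resume from the checkpoint without recompressing `base`.
    c = checkpoint.copy()
    return prefix + c.compress(extra) + c.flush()

out1 = finish(b"first appended record")
out2 = finish(b"second appended record")

assert zlib.decompress(out1) == base + b"first appended record"
assert zlib.decompress(out2) == base + b"second appended record"
```

Each call to `finish` produces a complete, independently decompressible stream, yet only the appended bytes are compressed per call.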