
> Some compression algorithms support checkpointing the compression state so that it can be resumed from that point repeatedly ("dictionary-based compression").

Is it some kind of memoization?



Another top-level comment claims it is relevant when you stuff the whole corpus into a single stream: when you want to append new data, you don't want to recompress everything from the start.

https://news.ycombinator.com/item?id=27441474
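
Not memoization exactly, but related: the compressor's internal state (its sliding window and tables) is snapshotted so compression can continue from that point without redoing the prefix. A minimal sketch of the idea using Python's `zlib`, whose `Compress.copy()` does exactly this (the sample strings here are just illustrative):

```python
import zlib

# Compress an initial corpus, keeping the output produced so far.
base = zlib.compressobj()
head = base.compress(b"the quick brown fox " * 100)

# Checkpoint: copy() snapshots the full compression state,
# so new data can be appended without recompressing the prefix.
checkpoint = base.copy()

# Resume from the checkpoint; only the suffix is compressed.
tail = checkpoint.compress(b"jumps over the lazy dog")
tail += checkpoint.flush()

stream = head + tail
assert zlib.decompress(stream) == b"the quick brown fox " * 100 + b"jumps over the lazy dog"
```

Because the checkpointed state includes the dictionary built from everything seen so far, the appended suffix still benefits from matches against the old data, which is why this gets called "dictionary-based" checkpointing.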



