Hacker News | borrow's comments

It's self-reinforcing. If commit history is reasonably clean, the barrier to doing code archeology is lower, so you reach for it more often. And if you do code archeology often, you develop a better sense of what's a clean commit history and what makes commit messages useful.

The need for code archeology depends on the project. When you're writing a lot of new code it's probably less important than in a legacy codebase where most changes are convoluted tweaks made for non-obvious reasons.


Sounds interesting. What reading on such modern architectures would you recommend?


Novel among mainstream languages, but not unseen before: https://blog.adamant-lang.org/2019/operator-precedence/


Postgres (which is a wrapper around tokio-postgres, and as a result drags in the whole bloated tokio dependency graph). I would love to have a purely sync alternative to that.


I am so glad someone said this. It also shows nicely how async is wrong: not only is it viral, it's bad design because it forces code duplication.

Middleware/library writers who touch anything that could be async (DB, SPI, network, etc.) now have to write two versions of their API and duplicate most of the code.


SEEKING WORK, EU, part-time remote contracts only

A generalist, programming since ~2000 (professionally since 2008).

Some areas I can help with:

* math
* advanced algorithms
* interfacing with experts in other technical fields
* prototyping
* figuring out minimalist solutions
* performance optimization
* Rust, vanilla TypeScript, Python, C++
* DevEx

vlad.shcherbina@gmail.com


What if that particular unique ID is deleted right before the client requests the next page?


Pagination only makes sense in the context of an ordered collection; if there is no stable sort order then you can’t paginate. So you identify the last record seen with whatever fields you are ordering by, and if the last record has been deleted, then it doesn’t matter because you are only fetching the items greater than those values according to the sort order.

Anyway, there is plenty of documentation out there for cursor-based pagination; Hacker News comments isn’t the right place to explain the implementation details.


1. This kind of pagination can be done for any key as long as its data type admits total ordering.

2. The WHERE condition typically uses `>` or `<`, so it doesn't fail even when that record is deleted before the next client request referring to it.
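To make point 2 concrete, here is a minimal keyset-pagination sketch in Python using an in-memory SQLite table (the `items` schema and `fetch_page` helper are hypothetical, just for illustration). Because the cursor comparison is strict (`id > ?`), the query still returns the right next page even after the cursor row itself has been deleted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO items (id, name) VALUES (?, ?)",
    [(i, f"item{i}") for i in range(1, 8)],
)

def fetch_page(last_id, page_size=3):
    # Strict `id > ?` means we never look up the cursor row itself,
    # so it doesn't matter whether that row still exists.
    return conn.execute(
        "SELECT id, name FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size),
    ).fetchall()

page1 = fetch_page(0)                            # ids 1, 2, 3
conn.execute("DELETE FROM items WHERE id = 3")   # the cursor row vanishes
page2 = fetch_page(page1[-1][0])                 # still works: ids 4, 5, 6
```

The same idea works for any sort key with a total order (point 1); for a composite key the comparison becomes a row-value comparison like `(created_at, id) > (?, ?)`.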


If the IDs are monotonous, it doesn't matter.


(for posterity, this appears to have been a typo for monotonic)


