
The example of factory QA is really an OLTP / near-real-time system, for which queues are definitely a bad idea.

Any system where you "would much rather an exception be raised immediately and crater the caller" is by definition an OLTP-type system, which is a really bad fit for queues.

Lots of bad experiences with queues don't necessarily mean that queues aren't useful tools. They're just tools; they have a time and a place. If you've tried integrating with an email or SMS gateway, or a legacy, rate-limited bank, you might have a different experience. The same goes for slow job execution combined with spiky, high-speed job creation.
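To make that last case concrete, here is a rough, purely illustrative sketch in Python (send_sms and the one-send-per-second limit are made up): callers enqueue instantly, while a single worker drains the queue at whatever pace the gateway tolerates.

    # Illustrative only: spiky producers, one rate-limited consumer.
    import queue
    import threading
    import time

    jobs: "queue.Queue[str]" = queue.Queue()

    def send_sms(message: str) -> None:
        print(f"sent: {message}")  # stand-in for the real gateway call

    def worker() -> None:
        while True:
            message = jobs.get()
            send_sms(message)
            jobs.task_done()
            time.sleep(1.0)  # crude rate limit: one send per second

    threading.Thread(target=worker, daemon=True).start()

    # A spike: 100 messages arrive at once, but enqueueing never blocks.
    for i in range(100):
        jobs.put(f"message {i}")

    jobs.join()  # block until the worker has drained everything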

It depends entirely on the problem, and probably on the team and its constraints as well. I don't think it makes sense to dismiss queues out of hand for all problems, though, just as it doesn't make sense to claim that they should be used for each and every problem.



There is some nuance here. My contention is with queues as a means to communicate between systems. Queues within systems are totally reasonable.

For instance: you have two services, Bank Core Integration and Teller. The Bank Core Integration service contains the queue of items that ultimately need to be pushed to the underlying legacy, rate-limited system. The Teller service talks to the Bank Core Integration service in RPC terms, so callers into the problem area are isolated from the queue's semantics.
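Roughly what I mean, as a sketch in Python (BankCoreIntegration, Teller, and submit_transfer are just illustrative names, not a real API):

    # Illustrative sketch: the queue is an internal detail of the integration
    # service; the teller only ever makes a synchronous call that either
    # succeeds or raises immediately.
    import queue

    class BankCoreIntegration:
        def __init__(self) -> None:
            # Internal work queue feeding the rate-limited legacy core.
            self._pending: "queue.Queue[dict]" = queue.Queue()

        def submit_transfer(self, transfer: dict) -> str:
            # RPC-style entry point: validate now, fail loudly now.
            if transfer.get("amount", 0) <= 0:
                raise ValueError("invalid transfer amount")
            self._pending.put(transfer)
            return "accepted"  # delivery to the core happens inside this service

    class Teller:
        def __init__(self, core: BankCoreIntegration) -> None:
            self._core = core

        def transfer(self, amount: int) -> str:
            # The teller never touches the queue, only a call that succeeds or raises.
            return self._core.submit_transfer({"amount": amount})

    teller = Teller(BankCoreIntegration())
    print(teller.transfer(100))  # "accepted"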

This might sound like a subtle difference, but it is huge in practice. If there is some problem with that queue, it is entirely contained to that one service and can be dealt with in isolation. Logs from that one service should comprehensively document the concern. If there were a message bus between these systems, you would have to go back and review message logs to see whether things got missed between systems.


This eventually becomes a microservices-vs-monolith discussion, though.

Implementing both the message queue and the bank core integration within one service means less communication/network overhead and easier debugging, which is definitely valuable.

Separating them out means you can use off-the-shelf products for the message queue and makes scaling easier, which can also be valuable.


Makes sense, yeah. If a system architecture demands that failures be cascaded downstream to all dependencies, then queues should not be used. There are definitely architectures for which this is a reasonable demand.



