awesome_dude's comments

I've been in teams like this - people who are lower on the chain of power get run in circles as they change to appease one, then change to appease another, then change back to appease the first again.

Then, when you go through their code, they make excuses for it not meeting the same standards they demand.

As the other responder recommends, a style guide is ideal; you can even create an unofficial one and point to it when conflicting style requests are made.


A fairly obvious solution (IMO) would be to have multiple people buying the ingredients, some even buying unused ingredients. That would cover purchasing.

The mixing, again, could be spread out: have factory A mix ingredients x, y, and z; factory B mix ingredients Alpha, Beta, and Gamma; and factory C combine factory A's and B's mixtures.


The fermented food that has always blown my mind has been

<drum roll>

Chocolate

I have no idea WHY that should come as a shock to me, but it does

Honorable mentions also go to Tea and Coffee


Oh whew, when I finally learned how Chocolate is made... mind blown.

The Western 19th- and 20th-century approach to food has been an incredible disservice to culinary and health history and to modernist trends.


My GUESS is that canning really changed Western diets because food could last indefinitely in good condition.

These accusations of someone using ChatGPT are cheap, mindless attacks based on nothing more than the fact that someone has put together a good argument and used good formatting.

If that's all your evidence is, don't you dare go near any scientific papers.


Yeah, because there’s plenty of ChatGPT going on in academia too :P

Heh - point taken.

But it is important to note that a lot of what people decry as "AI Generated" is really the fact that someone is adhering to what have been best practices in publishing arguments for some time.


"Paid" demonstrators has been an accusation used by governments for several decades.

Edit: https://www.yourdictionary.com/rent-a-crowd ("Rent a crowd/mob" is often used to claim that a protest is attended by people paid to be there; the phrase was coined in the mid 20th century, but apparently the accusation itself is as old as demonstrations.)


The usual boogie man.

Did you read that link? It’s hardly damning.

“Through a fund, the foundation issued a $3 million grant to the Indivisible Organization that was good for two years ‘to support the grantee's social welfare activities.’ The grants were not specifically for the No Kings protests, the foundation said.”

If 7 million people protested, that 3 million over 2 years sure went a long way. They work for pennies.

https://en.wikipedia.org/wiki/October_2025_No_Kings_protests


I'm not sure why you are attacking me. I am clearly replying to someone who is claiming that in recent times the retort of "paid demonstrators" is effective, and I have pointed out that the claim of people being paid to demonstrate has been made for decades, if not centuries.

Thank you for articulating the accusation, giving me the opportunity to respond, but try to take your own advice and read what's actually being said.


You appear to have edited your comment after I replied.

When I replied to you, the link in your comment was the below one.

https://abc6onyourside.com/news/nation-world/no-kings-protes...


You replied to the wrong person. Look further down thread for the person who posted that link.

Uhh - my client is showing that my comment was up for a couple of hours before you replied.

That's around the maximum time allowed to edit a comment on Hacker News.

For the level of attack you injected in your previous comment, and now a claim of dishonesty, I would need to see some actual evidence of your claims. (I know that I never posted that link, and am confused why you would make such a bizarre claim.)


They replied to the wrong comment. The one they meant to reply to was from monero-xmr further down.

I think that using Postgres as the message/event broker is valid, and having a DLQ on that Postgres system is also valid and usable.

Having SEPARATE DLQ and Event/Message broker systems is not (IMO) valid - because a new point of failure is being introduced into the architecture.
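Roughly what I have in mind, as a sketch only (the table names, schema, and handle() call are all made up, and it assumes psycopg2 against an existing Postgres): claim a row, process it, and move it to the dead-letter table in the same transaction, so the DLQ lives and dies with the broker.

    import psycopg2

    class PermanentError(Exception):
        """Illustrative: an error that will never succeed on retry."""

    def handle(payload):
        ...  # business logic goes here (hypothetical)

    conn = psycopg2.connect("dbname=app")

    def consume_one():
        with conn:  # one transaction: claim, process, and DLQ atomically
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT id, payload FROM events "
                    "ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED")
                row = cur.fetchone()
                if row is None:
                    return
                event_id, payload = row
                try:
                    handle(payload)
                except PermanentError:
                    # Dead-letter it in the same database, same transaction.
                    cur.execute(
                        "INSERT INTO dead_letter (id, payload) VALUES (%s, %s)",
                        (event_id, payload))
                cur.execute("DELETE FROM events WHERE id = %s", (event_id,))

Everything commits or rolls back together, which is exactly the property you give up once the DLQ sits on a separate system.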


Sorry, but what's stopping the DLQ from being a different topic on that same Kafka cluster? I get that the consumer(s) might be dead, preventing them from moving the message to the DLQ topic, but if that's the case then no messages are being consumed at all.

If the problem is that the consumers themselves cannot write to the DLQ, then that feels like either Kafka is dying (no more writes allowed) or the consumers have been misconfigured.

Edit: In fact there seems to be a self inflicted problem being created here - having the DLQ on a different system, whether it be another instance of Kafka, or Postgres, or what have you, is really just creating another point of failure.
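For what it's worth, this is the shape I mean, as a rough sketch (topic names are made up, it assumes the confluent-kafka client, and the error handling is deliberately crude): the DLQ is just another topic on the same cluster, so there is no second system to keep alive.

    from confluent_kafka import Consumer, Producer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "orders",
        "enable.auto.commit": False,
    })
    producer = Producer({"bootstrap.servers": "localhost:9092"})
    consumer.subscribe(["orders"])

    def process(msg):
        ...  # business logic (hypothetical)

    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        try:
            process(msg)
        except Exception:
            # Same cluster, different topic; "orders.dlq" is an invented name.
            producer.produce("orders.dlq", key=msg.key(), value=msg.value())
            producer.flush()
        # Only commit once the message was either processed or dead-lettered.
        consumer.commit(message=msg)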


> Edit: In fact there seems to be a self inflicted problem being created here - having the DLQ on a different system, whether it be another instance of Kafka, or Postgres, or what have you, is really just creating another point of failure.

There's a balance. Do you want to have your Kafka cluster provisioned for double your normal event intake rate just in case you have the worst-case failure to produce elsewhere that causes 100% of events to get DLQ'd (since now you've doubled your writes to the shared cluster, which could cause failures to produce to the original topic)?

In that sort of system, failing to produce to the original topic is probably what you want to avoid most. If your retention period isn't shorter than your time to recover from an incident like that, then priority 1 is often "make sure the events are recorded so they can be processed later."

IMO a good architecture here cleanly separates transient failures (don't DLQ; retry with backoff, don't advance the consumer group) from "permanently cannot process" (DLQ only these), unlike in the linked article. That greatly reduces the odds of "everything is being DLQ'd!" causing cascading failures from overloading seldom-stressed parts of the system. It also makes it much easier to keep your DLQ in one place, and you can solve some of the visibility problems from the article with a consumer that puts summary info elsewhere or such. There's still a chance for a bug that results in everything being wrongly rejected, but it makes you potentially much more robust against transient downstream deps having a high blast radius. (One nasty case: if different messages have wildly different sets of downstream deps, do you want some blocking all the others? IMO they should then be partitioned so that you can still move forward on the others.)
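A rough sketch of that split, with made-up error classes, backoff numbers, and process(), assuming the same confluent-kafka consumer/producer shape as the earlier sketch:

    import time

    class TransientError(Exception):
        """e.g. a downstream dependency timed out; worth retrying."""

    class PermanentError(Exception):
        """e.g. the message can never be processed; DLQ it."""

    def process(msg):
        ...  # business logic (hypothetical)

    def handle_with_policy(consumer, producer, msg, dlq_topic, max_attempts=5):
        for attempt in range(max_attempts):
            try:
                process(msg)
                consumer.commit(message=msg)
                return
            except TransientError:
                # Don't DLQ and don't advance the consumer group; back off and retry.
                time.sleep(min(2 ** attempt, 30))
            except PermanentError:
                # Only "permanently cannot process" ends up in the DLQ.
                producer.produce(dlq_topic, key=msg.key(), value=msg.value())
                producer.flush()
                consumer.commit(message=msg)
                return
        # Still failing transiently: surface it rather than quietly DLQ'ing,
        # so the caller can pause/seek instead of flooding the DLQ.
        raise TransientError("still failing after %d attempts" % max_attempts)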


I think that you're right to mention that if the DLQ is overused, that potentially cripples the whole event broker, but I don't think that having a second system that could fall over for the same reason AND a host of other reasons is a good plan. FTR I think doubling Kafka's provisioned capacity is a simpler, easier, cheaper, and more reliable approach.

BUT, you are 100% right to point to what I think is the proper solution, and that is to treat the DLQ with some respect, not as a bit bucket where things get dumped because the wind isn't blowing in the right direction.



> its a big game of dont get fired, and collect pay checks as long as possible

That pretty much sums it up for 90% of the world's employment...


Sorry, what am I missing here? This complaint is true for all architectures, because the readers are always going to be out of sync with the state in the database until they do another read.

The nanosecond that the system has the concept of readers and writers being different processes/people/whatever, it has multiple copies: the one held by the database, and the copies held by the readers from when they last read.

It does not matter if there is a single DB lock, or a multi shared distributed lock.
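To make that concrete, here is a toy illustration (sqlite3 only because it ships with Python; the specific database and locking scheme are irrelevant): the moment the reader holds its own copy, that copy can be stale no matter what the writer does.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE counter (value INTEGER)")
    db.execute("INSERT INTO counter VALUES (0)")
    db.commit()

    # The reader takes its copy of the state.
    reader_copy = db.execute("SELECT value FROM counter").fetchone()[0]

    # The writer moves on; no lock scheme changes what the reader already holds.
    db.execute("UPDATE counter SET value = 1")
    db.commit()

    print(reader_copy)  # still 0: the reader's copy is stale
    print(db.execute("SELECT value FROM counter").fetchone()[0])  # 1 after re-reading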

