Hacker News | ams92's comments

I’m wondering the same. I’ve read multiple articles about formal methods and how they’ve been used to find obscure bugs in distributed systems, but usually they just show a proof and talk about formal methods without concrete examples.


The problem with dating apps is they turn to shit when it’s time for the platform to monetize.


People aren’t eating the right foods or exercising enough. The cause is very simple; the solution is not. It’s hard to get millions of people to make lifestyle changes, and that’s assuming they have access to healthier food in the first place.


Most of the complaints here could be solved by having smaller pull requests and then squashing commits when it’s time to merge.


Well, yes. Or by having stacked branches and rebasing your branches, etc.

The point is that GitHub/git's default experience makes this harder to do than a system that bakes it in.


We use stacked commits + rebase only at our company. The commit history is linear and it's very easy to revert changes. I don't see any advantage of merging over rebasing.

I am not sure why we need to squash commits. We encourage the opposite: commit small and often. So if we need to revert any commit, it's less painful to do so.


Without squashing it's hard for me to commit as small and often as I would like.

Some things I want out of the final series of commits:

1) everything builds. If I need to revert something or roll back a commit, the resulting point of the codebase is valid and functional and has all passing tests.

2) features are logically grouped and consistent - kinda similar to the first, but it's not just that I want the build to pass, I don't want, say, module A to be not yet ready for feature flag X but module B to expect the feature flag to work. In the original article, this is to say that I want the three commits listed, but not one halfway through the "migrate API users" step.

But when I'm developing I do want to commit halfway through steps. I might commit 50 lines of changes that I'm confident in and then try the next 50 lines and decide I want to throw them away and try a different way. I might just want to push a central copy of what I've got at the end of the day in case my laptop breaks overnight (it's rare, but happens!). I might want to push something WIP for a coworker to take an initial look at with no intent of it being ready to land.

But I don't want any of those inconsistent/not-buildable/not-runnable states to be in the permanent history. It fucks with things like git bisect and git blame.


I think there's an ambiguity here between squashing every commit in the PR into a single one, and squashing fixup commits made as responses to review into the commits that originated them.

For example, if the original commit series was

    Do a small refactor before I can start adding the test
    Add the test for the feature
    Do a small refactor before I can start adding the feature
    Work in progress
    Complete sub-feature 1
    Work in progress
    lint
    lint
    Complete sub-feature 2
    Respond to reviewer 1 comments
    Respond to reviewer 2 comments
Then you can either squash the entire PR down to

    Implement feature
or you can, using interactive rebase, squash (or more precisely fixup) individual WIP, lint, and response commits into where they belong to obtain

    Do a small refactor before I can start adding the test
    Do a small refactor before I can start adding the feature
    Complete sub-feature 1
    Complete sub-feature 2
where each commit individually builds and passes tests. I far prefer the latter!
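Concretely, that fixup flow can be scripted end to end. Here's a sketch in a throwaway repo (commit messages and file names are illustrative, not from the article): `git commit --fixup` marks a commit as belonging to an earlier one, and `rebase --autosquash` folds it in.

```shell
set -e
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=t GIT_AUTHOR_EMAIL=t@t \
       GIT_COMMITTER_NAME=t GIT_COMMITTER_EMAIL=t@t
git init -qb main
echo base > app; git add app; git commit -qm "base"
git checkout -qb feature
echo one >> app; git commit -qam "Complete sub-feature 1"
target=$(git rev-parse HEAD)
echo two >> app; git commit -qam "Complete sub-feature 2"
# A later lint tweak that really belongs in sub-feature 1:
echo tidy > style; git add style
git commit -q --fixup="$target"
# --autosquash moves the fixup next to its target and squashes it;
# GIT_SEQUENCE_EDITOR=: accepts the generated todo list unedited.
GIT_SEQUENCE_EDITOR=: git rebase -q -i --autosquash main
git log --format=%s main..HEAD   # two clean commits, no "fixup!" left
```

Note that `--autosquash` only acts on commits whose messages start with `fixup!` or `squash!`, which `git commit --fixup` generates for you.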


I think either of these are fine - and the latter is certainly nice but also requires more work - but both require some sort of "squashing."

I don't understand the proposed workflow of "commit early and often" without any sort of squashing of WIP.


When we publish a stack of commits, our CI ensures that every commit is built and tested individually. There is no consistency issue.

Squash and merge actually makes that goal harder. With rebase + small commits, all we need to ensure is that every commit passes all the build signals and tests in CI.
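For anyone curious, you can approximate that "every commit in the stack passes" check locally: `git rebase -x` replays a stack and runs a command after every commit. A sketch in a throwaway repo (`test -s app` stands in for a real build/test command):

```shell
set -e
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=t GIT_AUTHOR_EMAIL=t@t \
       GIT_COMMITTER_NAME=t GIT_COMMITTER_EMAIL=t@t
git init -qb main
echo v0 > app; git add app; git commit -qm "base"
git checkout -qb stack
echo v1 > app; git commit -qam "commit 1"
echo v2 > app; git commit -qam "commit 2"
# main moves on while the stack is in review:
git checkout -q main
echo note > docs; git add docs; git commit -qm "unrelated change"
git checkout -q stack
# Replay the stack on the new main, running the check after every
# commit; the rebase stops at the first commit whose check fails.
git rebase -q main -x "test -s app" && echo "every commit passes"
```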


This only works if you commit in a green state. Sometimes we have to commit when things are still "yellow".

I tend to add all my tests in one go and commit the red: "tests are written". Then as I pass each test, I commit that.

This pattern works really well for me because if I mess up, then rolling back to the last yellow is easy. I can also WIP commit if I have to fix an urgent bug, and then get back to the WIP later.


Not sure what you mean... When we ship a stack of commits, every commit has to pass everything in CI. You are not supposed to ship a commit that doesn't pass the CI bar. There is an escape hatch to bypass it, but it's rarely used.

You can make changes before you ship however you want, as long as they pass CI. If you already shipped the code and want to make changes later, that means making a new commit or reverting a bad commit. It's as simple as that.


What is "publishing a stack of commits"?

Is that putting it up for review? Or are you not doing a PR workflow at all, in which case this doesn't really relate to the article.

Is the expectation that the developer either never commits stuff in a broken state during development or that they go back and rewrite or squash the sequence before pushing it for review?


> What is "publishing a stack of commits"?

Yes publishing for review.

> Is the expectation that the developer either never commits stuff in a broken state

That's exactly right. In a stack, every commit is built by CI and reviewed by the team. There will be no broken commits.


My experience is that systemically squashing PRs enables a "fire and forget" style where you can add a bunch of small commits to your PR to address reviews and CI failures without worrying about making them fit a narrative of "these are the commits my PR is made of".

On a more concrete level, squashing PRs means every single commit is guaranteed to pass CI (assuming you also use merge queues) which is helpful when bisecting.


With stacked commits, every commit is already passing CI though.

To us the mental model is minimal. All you need to do is make sure each commit passes CI. You can ship any number of stacked commits together.

----------------------------------------------------------------------------------------------------

Not sure why I can't reply in a technical discussion. I have to edit to answer your question @danparsonson

> if I'm working on a long series of changes across multiple days, and halfway through it the code doesn't build yet?

That's why you break them down into small commits. The earlier you push to CI, the earlier you know whether each commit builds. For example, push commits 1, 2, and 3 to CI when they are ready; while CI is running, you work on commits 4, 5, and 6.

> The code won't pass CI because I'm not finished, but I want to commit my progress

If commits 1, 2, and 3 are ready, just ship them. That doesn't stop you from having a few commits in review and a few WIP commits. There is no downtime.


Perhaps I misunderstand you, but what if I'm working on a long series of changes across multiple days, and halfway through it the code doesn't build yet? The code won't pass CI because I'm not finished, but I want to commit my progress so I don't lose it if something goes wrong, and I can roll back if I make mistakes.


Then fix up the commit history at the end, for example like this: https://news.ycombinator.com/item?id=41509051


That's like a caveman approach to the problem. Imagine the extra overhead required to submit the "refactor" commit. The result would be that either nobody refactors, or refactors are just bundled into the feature commit so it's never clear what you're actually reviewing.


Can someone explain to me why everything has to be done with PRs? Like you just have three commits for a PR. But the correct way is to split that up into three single-commit PRs? Why?

Not to mention that it doesn’t give you an interdiff, because now you need to diff across three pull requests.

It looks more like you are punting on the problem. Not solving anything.


Not really. The idea is to split work into separate stages which are reviewed separately, but as a whole.

In the example: "small refactor 25LOC -> new API 500LOC -> migrate API users 50LOC"

Making a PR of the small refactor will probably garner comments about "why is this necessary".

Opening two PRs at the same time is clutter as GitHub presents them as separate.

As well, sometimes CI won't pass on one of the stages meaning it can't be a separate PR, but it would still be useful in the code review to see it as a separate stage.


I'd be quite happy with seeing the three jobs in the article as three separate PRs. Fixing a bug and adding a feature are two jobs that, as I think we all agree, need to be tracked individually - so work on them individually.

> As well, sometimes CI won't pass on one of the stages meaning it can't be a separate PR

Could you give an example of this? Not sure what you mean.


Commits aren't always perfect.

Sometimes I'll make the unit test first, which fails CI and the next set of commits implements the behavior.


By doing this, you break commit atomicity and make bisects hell. Please don’t do this. Commits aren’t perfect at first for sure, but they should be by the time you make them reviewable.


It's fine to break commit atomicity on feature branches. You can use git bisect --first-parent on your development/master branch.
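To illustrate, here's a runnable sketch in a throwaway repo (branch names, file names, and the `grep` check are stand-ins for a real test command): a feature branch contains a broken WIP commit that is fine by merge time, and `--first-parent` bisect (git >= 2.29) only tests the merge commits on main, so it never lands on the WIP commit.

```shell
set -e
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=t GIT_AUTHOR_EMAIL=t@t \
       GIT_COMMITTER_NAME=t GIT_COMMITTER_EMAIL=t@t
git init -qb main
echo ok > status; git add status; git commit -qm "good base"
git tag good
git checkout -qb f1
echo broken > status; git commit -qam "WIP (does not pass)"
echo ok > status; git commit -qam "finish feature 1"
git checkout -q main
git merge -q --no-ff --no-edit f1
git checkout -qb f2
echo broken > status; git commit -qam "feature 2 (introduces the bug)"
git checkout -q main
git merge -q --no-ff --no-edit f2
# Bisect along first parents only: the WIP commit inside f1 is never tested.
git bisect start --first-parent HEAD good
git bisect run grep -q ok status
git tag first-bad refs/bisect/bad   # remember the answer before resetting
git bisect reset
git log -1 --format=%s first-bad    # the merge that brought in feature 2
```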


I completely disagree. In doing so you lose all visibility into the components and gradual evolution of the code that atomic commits provide. Same thing with squashing (which is just the worst).


the comments about "why is this necessary" can be handled with a decent PR template, and a comment.

What I tend to do is make the changes locally with different commits and then cherry pick the refactor into a PR branch and wait for that to be accepted. Then I rebase the FULL branch with "master" after the merge and create the PR.


Yeah, I don't see the point. Why not just use merge trains?


Historically, console makers have taken a loss on consoles they sell and they primarily make money on the games themselves. I wonder how much Microsoft actually cares about console sales.


Was this true historically? I thought Nintendo made a profit on every console they sold, and of the three they are the oldest.


Yeah and 42M records is not prohibitively large.


it's minuscule. every resident could own 10, or even 100, vehicles and sqlite would still handle it just fine.


This is nearly identical to the massive issues with retail and car theft that we have in San Francisco.

The uber liberals will say “oh they’re just trying to feed their families” while there are endless low skilled jobs (since the pandemic) available that pay $20+ an hour.

The reality is that current policing and prosecution practices make being a retail thief more convenient than getting a job. Nothing is going to change until the punishment becomes a deterrent again.


For many people, SMS or emails with tokens are already pushing their technological capabilities. Might even go as far as to say most people.


With containerization it’s very quick to spin up test dependencies as part of your CI/CD. Why mock calls to a datastore when it’s super easy to spin up an ephemeral postgresql instance to test on?
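The kind of thing being described is a few lines of shell in a CI pipeline. A sketch, assuming Docker is available and using the official `postgres` image's documented environment variables; `test-db`, `app_test`, and `./run-tests.sh` are placeholder names:

```shell
docker run -d --rm --name test-db \
  -e POSTGRES_PASSWORD=test -e POSTGRES_DB=app_test \
  -p 5432:5432 postgres:16

# Wait until the server accepts connections, then run the suite:
until docker exec test-db pg_isready -U postgres -q; do sleep 1; done
DATABASE_URL=postgres://postgres:test@localhost:5432/app_test ./run-tests.sh

docker stop test-db   # --rm removes the container on stop
```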


> Why mock calls to a datastore when it’s super easy to spin up an ephemeral postgresql instance to test on?

It's actually super hard to get Postgres to fail, which is what you will be most interested in testing. Granted, you would probably use stubbing for that instead.


Because you have 40,000 tests, and an in-memory object means they run in seconds, while the real thing runs in minutes.


Yup. Section "Easier to Test IO" in TFA.


Seattle became a tech city because of Microsoft and Amazon, period. If they didn’t decide to have their HQ there, we probably wouldn’t consider it a tech city.


Ok, but the valley wouldn’t be the valley without Fairchild Semiconductor, HP, Intel, Sun, Netscape, then Google. If they hadn’t decided to have their HQs there, we probably wouldn’t consider it a tech city.

Likewise Seattle wouldn’t be a tech center without Boeing, Cray, Microsoft, Amazon, F5, Expedia, Real, etc.

It’s not like tech is a natural resource that comes from the ground in some cities and not others. It’s built around what was built there, whether it’s Seattle or Mountain View.


Fairchild is an excellent and key example, because it shows clearly what is necessary: people leaving their former jobs to go start or help start different things. https://en.wikipedia.org/wiki/Traitorous_eight

It's just far too terrifying a world for that shit now. Housing prices are absurd, and just as bad, healthcare is deathly terrifying to consider. None of these companies reward ambition or drive very well; progress is incredibly slow, culminating in more glossy, over-produced show-off events of very little. This top-heavy pace would be horrible for engineers, and is, except that it's well-paid awfulness & unproductiveness, and jobs have been relatively abundant.

Usually when there's some kind of shitty local optimum that's obviously bad at producing new value, there is a correction. New things start. But this whole era has been consolidation/acquisitions & belt-tightening difficulty, even somewhat when zero interest rates were in effect. There's such limited ability & will to try, such a dearth of ideas people will fund if they don't promise to become the best new monopoly in the world.


I don’t think this bears out in a place like Seattle for everyone. I’ve been here since 2014, when home prices were much more affordable. I also made a substantial amount on my FAANG+ and adjacent work, such that I own my house, am debt free, and have plenty of savings. I’m not alone; that’s a common story. And, after I wrap what I’m working on now, I’m starting my own thing. But I don’t plan to take VC; I’ve got my own capital now. Especially as FAANG loses its luster, I think you’ll see people who did well and aren’t interested in retirement doing their own self-financed things. A fair amount of my personal orbit have already done that. They’re actual real engineers that reliably build commercial stuff. Their startups tend to do pretty well. But they’re not flashy valley kids, so maybe they don’t register for most people.

Yes, it’s not a story for everyone. But an awful lot of the Seattle scene has had a really good run, every one of the FAANG companies has its second-largest if not largest presence here, Microsoft has been churning out wealthy alumni for decades, and an awful lot of them are tired of the megacorp bullshit.


Microsoft might not have been successful without a local Seattle software company to buy 86-DOS from.

