Hacker News

> But what I've discovered in practice is that, during those early iterations, I don't really need the compiler to help me predict what will break, because it's already in my head.

Reading this, I have the feeling that you're talking mostly of single-person (or at least small team) projects. Am I wrong?

> The more common problem is that static typing results in more breaks than there would be in the code that just uses heterogeneous maps and lists, because I've got to set up special types, constructors, etc. for different states of the data such as "an ID has/has not been assigned yet". So it kind of ends up being the best solution to a problem largely of its own making.

There is definitely truth to this. I feel that this is a tax I'm absolutely willing to pay for most of my projects, but for single-person highly experimental projects, I agree that it sometimes feels unnecessary.
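For concreteness, the "tax" being described might look something like this in Python. This is just an illustrative sketch (the `Record` names and fields are made up, not from the thread): the dict version has one shape for both states, while the typed version forces you to model "ID not yet assigned" explicitly, either with an `Optional` field or with a separate type per state plus conversion code.

```python
from dataclasses import dataclass
from typing import Optional

# Dict style: one shape quietly serves both "no ID yet" and "ID assigned".
record = {"name": "widget"}   # before the DB assigns an ID
record["id"] = 42             # after

# Typed style, option 1: an Optional field, checked at use sites.
@dataclass
class Record:
    name: str
    id: Optional[int] = None

# Typed style, option 2: a separate type per state, plus a conversion.
@dataclass
class UnsavedRecord:
    name: str

@dataclass
class SavedRecord:
    name: str
    id: int

def save(r: UnsavedRecord, new_id: int) -> SavedRecord:
    """Transition from the unsaved state to the saved state."""
    return SavedRecord(name=r.name, id=new_id)
```

Option 2 is what makes illegal states unrepresentable, but it is also exactly the extra ceremony (two types, a constructor-like function) the comment is calling a problem largely of static typing's own making.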

> I'm also working from the assumption here that one will go through and clean up code before putting it into production. That could be as simple as replacing dicts with dataclasses and adding type hints, but might also mean migrating modules to cython or Rust when it makes sense to do so. So you should still have good static type checking of code by the time it goes into production.

Just to be sure that we're talking about the same thing: do we agree that dataclasses and type hints are just the first step towards actually using types correctly? Just as putting things in a `struct` or `enum` in Rust is just the first step.
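As a sketch of that "first step", here is what swapping a dict for a dataclass with type hints might look like (the `Order` example is hypothetical, chosen only to show the mechanical transformation a checker like mypy can then verify):

```python
from dataclasses import dataclass

# Prototype version: heterogeneous dict, shape documented only by usage.
def total_v1(order: dict) -> float:
    return order["quantity"] * order["unit_price"]

# Cleaned-up version: the same data as a dataclass with type hints,
# so a static checker can verify every call site and field access.
@dataclass
class Order:
    quantity: int
    unit_price: float

def total_v2(order: Order) -> float:
    return order.quantity * order.unit_price
```

The behavior is identical; what changes is that misspelled fields or missing keys become type errors at check time instead of `KeyError`s at runtime. Going further, "using types correctly", would mean designing the types so invalid states can't be constructed at all.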



> I have the feeling that you're talking mostly of single-person (or at least small team) projects. Am I wrong?

Small teams. And ones that work collaboratively, not ones that carve the code up into bailiwicks so that they can mostly work in mini-silos.

I frankly don't like to work any other way. Conway's Law all but mandates that large teams produce excess complexity, because they're basically unable to develop efficient internal communication patterns. (Geometric scaling is a heck of a thing.) And then additive bias means that we tend to deal with that problem by pulling even more complexity into our tooling and development methodologies.

I used to believe that was just how it was, but now I'm getting too old for that crap. Better to push the whole "expensive communication" mess up to a macro scale where it belongs, so that the day-to-day work can be easier.


> Small teams. And ones that work collaboratively, not ones that carve the code up into bailiwicks so that they can mostly work in mini-silos.

Well, that may explain some of the difference between our points of view. Most of my experience is with medium (<20 developers) to pretty large (> 500 developers) applications. At some point, no matter how cooperative the team is, the amount of complexity that you can hold in your head is not sufficient to make sure that you're not breaking stuff during a simple-looking refactoring.


Sure, but at that point we're probably not on the first iteration of the code anyway. Even at a big tech company, I find it most effective to build a proof-of-concept first iteration, proven out in a development or staging environment, using the map-of-heterogeneous-types style. Once you get the PMs and designers on board, you iterate until the POC is in an okay state; then you turn it into a final product that goes through a larger architecture review and gets carved up into deliverables for medium and large teams to work on. This latter work is done in a language with a better type system, one that can handle the complexity of coordinating across tens or hundreds of developers and the potential scale of Big Tech.

There's something to be said for the idea that the demand for type systems is driven by organizational bloat, but it's also true that large organizations delivering complex software has been a constant for decades now.


Do you work in an organization that does this? Because most organizations I've seen that don't take the "write it like it's Rust" approach instead have the following workflow.

1. Iterate on early prototype.

2. Show prototype to stakeholders.

3. Stakeholders want more features. At best, one dev has a little time to tighten a few bolts here and there while working on second prototype.

4. Show second prototype to stakeholders.

5. Stakeholders want more features. At best, one dev has a little time to tighten a few bolts here and there while working on third prototype.

6. etc.

Of course, productivity decreases with each iteration because as things progress (and new developers join or historical developers leave), people lose sight of what every line of code means.

In the best case, at some point, a senior enough developer gets hired and has enough clout to warrant some time to tighten things a bit further. But that attempt never finishes, because stakeholders insist that new features are needed, and refactoring a codebase while everybody else is busy hacking through it is a burnout-inducing task.


> Do you work in an organization that does this?

Yup! I'm at a company that used to be a startup and ended up becoming Big Tech (over many years; I'm a dinosaur here). Our initial phase involved building lots of quick-and-dirty services as we were iterating very quickly. These services were bad and unreliable, but they were quick to write and to throw away.

From there we had a "medium" phase where we built a lot of services, in more strictly typed languages, that we intended to live longer. The problem we encountered in this phase was that no matter the type safety or performance, we started hitting issues from the way our services were architected. We put too much load on our DBs, we didn't think through our service semantics properly and ran into consistency issues and high network chatter, our caches developed hotspotting problems, our queues blocked on too much shared state, etc.

We decided to move to a model that's pretty common across Big Tech: senior engineers/architects develop a PoC and use it to shop the service around. For purely internal services with constrained problem domains and infrequent changes, we usually skip this step and move directly to a strictly typed, high-performance language (for us that's Java or Go, because we find they handle our < 15 ms P99 in-region latency guarantees, with 2 ms P50 latencies, just fine).

For services with more fluid requirements, a senior engineer usually creates a throwaway-ish service in something like Node or Python and brings stakeholders together to iterate on it. Iteration usually lasts a couple of weeks to a couple of months (big-tech timelines can be slow), and once requirements are agreed upon, we carve out the work needed to stand up the real service in production. We specifically call out these two phases (pre-prod and GA) in each of our projects. Sometimes the mature-service work happens in parallel with the experimentation, since a lot of the initial setup is just boilerplate of plugging things in.

===

I have friends who work or have worked in places like you describe, but a lot of them tell me that those shops end up in a morass of tech debt over time, eventually find it very difficult to hire because of that debt, and end up mandating huge rewrites anyway.


That's nice! Feels like your company has managed to get Python to work well for your case!

Most of the shops I've seen/heard of don't seem to reach that level of maturity. Although I'm trying very hard to get mine there :)



