narnarpapadaddy's comments | Hacker News

So, depending on someone else’s shared library, rather than my own shared library, is the difference between a microservice and not a microservice?


This right here. WTF do you do when you need to upgrade your underlying runtime such as Python, Ruby, whatever? ¯\_(ツ)_/¯ You gotta go service by service.


If needs be. Or, you upgrade the mission critical ones and leave the rest for when you pick them up again. If your culture is “leave it better than when you found it” this is a non issue.

The best is when you use containers and build against the latest runtimes in your pipelines, so as to catch these issues early and always have the most up-to-date patches. If a service hasn’t been updated or deployed in a long time, you can just run another build and it will pull the latest of whatever it needs.


The opposite situation of needing to upgrade your entire company's codebase all at once is much more painful. With services you can upgrade runtimes on an as-needed basis. In monoliths, runtime upgrades were massive projects that required a ton of coordination between teams and months or years of work.


Fair point.


FWIW (I have one Clojure project at work that I inherited and that my team maintains), I love this direction.


Any given model has less fidelity than reality. An atlas map of the US has less detail than the actual terrain. The Planck constants represent the maximum fidelity possible with our current models of physics. We can’t model shorter timeframes or smaller sizes, so we can’t predict what happens at scales that small. Building equipment that can measure something so small is difficult too… how do you measure something when you don’t know what to look for?
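
For concreteness, the Planck length and time come from combining the fundamental constants ħ, G, and c; nothing shorter or smaller is meaningful in our current theories:

    \ell_P = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-35}\ \mathrm{m}
    t_P    = \sqrt{\hbar G / c^5} \approx 5.4 \times 10^{-44}\ \mathrm{s}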

It may be that one day we come up with a more refined model. But as of today, it’s not clear how that would happen or if it’s even possible.

Imagine going from 4K to 8K to 16K resolution and then beyond. At some point a “pixel” to represent part of an image doesn’t make sense anymore, but what do you use instead? Nobody currently knows.


One addendum / clarification:

It may also be that "space" and "time" are emergent properties, much like an "apple" is "just" a description of a particular conglomeration of molecules. If we get past Planck scales it may turn out that there are no such things as "space" and "time" and the Planck constants are irrelevant. We currently don't know, but there _are_ a few theoretical frameworks that have yet to be empirically verified, like string theory.


The point is that black and white, all or nothing is easier for many to stick to. It’s easier to not be tempted by a cigarette if you never see one or hang out with someone who smokes. With food, you can’t take that approach.


That is fair, but you can't pretend all food is bad when that's not true. That is what I took issue with. You can eat as many greens and lentils as you want. No such thing with cigs.


The fact you can’t pretend _is the point_. A blanket policy of “no and never” that works well for other addictions or compulsions can’t be applied to food. :)

As a counterfactual, imagine if every time you wanted to smoke you had to decide whether one particular type or brand of cigarette was good for you.


We still do a coding assignment, but a significant chunk of the technical interview is dedicated to a walkthrough of the code. Thus far, that’s been able to detect those who relied solely on AI.

…If you used AI and can still explain to me why the code works and what it does, even better. You have learned how to use new tools.

(have not tried the randomized question approach to compare, but I’m curious to try it and see what happens)


We do it similarly, and it's pretty easy to tell if someone knows their stuff, especially as the assignment is just a platform to dig deeper in the face-to-face interview.

However, the coding assignment was a really good filter and allowed us to dismiss the majority of candidates before committing to a labour-intensive face-to-face.

I haven't interviewed anyone since AI took off, but I am assuming that from now on the majority of candidates who would usually send us crap code will send us AI code instead, thereby wasting our time when they finally appear for the face-to-face.

Have you encountered that yet?


Yes, but we had that problem before, when somebody would farm out coding assignments to a friend. I couldn’t say yet how it’s impacted the coding assignment’s effectiveness as a filter. We still do get crap code; it’s just that sometimes it’s obviously AI generated.


Implicitly, IIRC, the optimal ratio is 5-20:1. Your interface must cover 5-20 cases for it to have value. Any fewer, and the additional abstraction is unneeded complexity. Any more, and your abstraction is likely too broad to be useful/understandable. The specific example he gives is the number of subclasses in a hierarchy.

It’s like a secret unlock code for domain modeling. Or for deciding how long functions should be (5-20 lines, with exceptions).

I agree, hugely useful principle.
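
To make the ratio concrete, a toy Python sketch (all names are invented for illustration, not taken from the original example):

    # One abstraction, several concrete cases that all fit behind it.
    from abc import ABC, abstractmethod
    import math

    class Shape(ABC):
        @abstractmethod
        def area(self) -> float: ...

    class Circle(Shape):
        def __init__(self, r: float) -> None:
            self.r = r

        def area(self) -> float:
            return math.pi * self.r ** 2

    class Square(Shape):
        def __init__(self, side: float) -> None:
            self.side = side

        def area(self) -> float:
            return self.side ** 2

    class Triangle(Shape):
        def __init__(self, base: float, height: float) -> None:
            self.base, self.height = base, height

        def area(self) -> float:
            return 0.5 * self.base * self.height

    # With a handful of subclasses (ideally 5-20), Shape earns its keep.
    # If Circle were the only subclass, the base class would just be indirection.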


This is a good rule of thumb, but what would be a good response to the argument that we should have interfaces because, "what if a new scenario comes up in the future"?


The scenario NEVER comes up in the future the way it was originally expected to. You'll end up having to remove and refactor a lot of code. Abstractions are useful only when used sparingly, and when they don't try to handle something that doesn't even exist yet.


When doing the initial design, start in the middle of the complexity-to-abstraction budget. If you have 100 “units of complexity” (lines of code, conditions, states, classes, use cases, whatever), try to find 10 subdivisions of 10 units each. Rarely, you’ll have a one-off. Sometimes, you’ll end up with more than 20 in a group. Mostly, you should have 5-20 groups of 5-20 units.

If you start there, you have room for your abstraction to bend before it becomes too brittle and you need to refactor.

An interface is almost never worth it for 1 implementation, sometimes worth it for 3, often for 5-20, and sometimes for >20.

The trick is recognizing both a “unit of complexity” and how many “units” a given abstraction covers. And, of course, different units might be in tension and you have to make a judgement call. It’s not a silver bullet. Just a useful (for me at least) framing for thinking about how to manage complexity.


Even one use case may be enough. E.g., if one class accepts another, then a protocol (using Python parlance) such as SupportsSomething could be used to decouple the two classes and carve out the exact boundary. The protocol can be used for creating a test double (a fake) too.
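
A minimal sketch of that idea (SupportsSave, S3Storage, ReportGenerator are invented names, just to make the shape concrete):

    from typing import Protocol

    class SupportsSave(Protocol):
        def save(self, name: str, data: bytes) -> None: ...

    class S3Storage:
        """The single real implementation today."""
        def save(self, name: str, data: bytes) -> None:
            ...  # upload to a bucket, elided

    class FakeStorage:
        """Test double: records writes in memory instead of hitting the network."""
        def __init__(self) -> None:
            self.saved: dict[str, bytes] = {}

        def save(self, name: str, data: bytes) -> None:
            self.saved[name] = data

    class ReportGenerator:
        """Depends only on the narrow boundary, not on a concrete storage class."""
        def __init__(self, storage: SupportsSave) -> None:
            self.storage = storage

        def run(self) -> None:
            self.storage.save("report.csv", b"a,b,c\n1,2,3\n")

Production code wires in S3Storage; tests pass FakeStorage and assert on its `saved` dict. Only one real use case exists, but the protocol pins down exactly what ReportGenerator needs.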


If you own the code base, refactor. It's true that, if you're offering a stable interface to users whose code you can't edit, you need to plan carefully for backward compatibility.


"We'll extract interfaces as and when we need them - and when we know what the requirements are we'll be more able to design interfaces that fit them. Extracting them now is premature, unless we really don't have any other feature work to be doing?"


Maybe some examples would clarify your intent, because all the candidate interpretations I can think of are absurd.

The sin() function in the C standard library covers 2⁶⁴ cases, because it takes one argument which is, on most platforms, 64 bits. Are you suggesting that it should be separated into 2⁶⁰ separate functions?

If you're saying you should pass in boolean and enum parameters to tell a subroutine or class which of your 5–20 use cases the caller needs? I couldn't disagree more. Make them separate subroutines or classes.

If you have 5–20 lines of code in a subroutine, but no conditionals or possibly-zero-iteration loops, those lines of code are all the same case. The subroutine doesn't run some of them in some cases and others in other cases.


That function covers 2⁶⁴ inputs, not cases. It handles only one case: converting an angular value to (half of) a Cartesian coordinate.


Sounds like you haven't ever tried to implement it. But if the "case" you're thinking of is the "case" narnarpapadaddy was referring to, that takes us to their clause, "Any fewer [cases], the additional abstraction is unneeded complexity." This is obviously absurd when we're talking about the sin() function. Therefore, that can't possibly have been their intended meaning.


The alternative and more charitable interpretation, of course, is that a single function like sin() is not what said GP meant when using the word "interface". But hey, don't let me interrupt your tilting at straw men, you're doing a great job.


Appreciate the charitable interpretation. Both “complexity” and “abstraction” take many different forms in software, and exceptions to the rule of thumb abound, so it’s easy to come up with counterexamples. Regardless, thinking in terms of complexity ratios has been a useful perspective for me. :)

IMO, a function _can_ be an interface in the broadest sense of that term. You’re just giving a name to some set of code you’d like to reuse or hide.


Think of it more like a “complexity distribution.”

Rarely, a function with a single line or an interface with a single element or a class hierarchy with a single parent and child is useful. Mostly, that abstraction is overhead.

Often, a function with 5-20 lines, or an interface with 5-20 members, or a class hierarchy with 5-20 children is a useful abstraction. That’s the sweet spot between too broad (function “doStuff”) and too narrow (function “callMomOnTheLandLine”).

Sometimes, any of the above with a >20:1 complexity ratio is useful.

It’s not a hard and fast rule. If your complexity ratio falls outside that range, think twice about your abstraction.


And with respect to function behavior, I’d view it through the lens of cyclomatic complexity.

Do I need 5-20 non-trivial test cases to cover the range of inputs this function accepts?

If yes, the function is probably at about the right level of behavioral complexity to add value and not overhead.

If I need only 1 test, it’s probably doing too little; if I need 200 tests, it’s probably doing too much.
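
As a rough, hypothetical illustration of that lens (clamp() and its case list are invented):

    import pytest

    def clamp(value: float, low: float, high: float) -> float:
        """Constrain value to the inclusive range [low, high]."""
        return max(low, min(high, value))

    @pytest.mark.parametrize(
        "value, low, high, expected",
        [
            (5, 0, 10, 5),    # inside the range
            (-1, 0, 10, 0),   # below the range
            (11, 0, 10, 10),  # above the range
            (0, 0, 10, 0),    # at the lower bound
            (10, 0, 10, 10),  # at the upper bound
        ],
    )
    def test_clamp(value, low, high, expected):
        assert clamp(value, low, high) == expected

Five-ish meaningfully different cases and the function is probably pulling its weight; if a single trivial case covered it, or the table needed hundreds of rows, I’d take another look at the boundary.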


That's not what cyclomatic complexity is, and if you think 5–20 test cases is enough for sin(), open(), or Lisp EVAL, you need your head examined.


You’re right, I suggested two different dimensions of complexity there as a lens into how much complexity a function contains. But I think the principle holds for either dimension.

I don’t think you need only 20 test cases for open(). Sometimes, more than 20 is valid because you’re saving across some other dimension of complexity. That happens and I don’t dispute it.

But the fact that you need >20 raises the question: is open() a good API?

I’m not making any particular judgment about open(), but what constitutes a good file API is hotly contested. So, for me, that example is validation of the principle: here’s an API that’s behaviorally complex and disputed. That’s exactly what I’m suggesting would happen.

Does that help clarify?


Yes, open() is a good API. I can't believe you're asking that question! It's close to the Platonic ideal of a good API; not that it couldn't have been designed better, but almost no interface in the software world comes close to providing as much functionality with as little interface complexity, or serving so many different callers or so many different callees. Maybe TCP/IP, HTTP, JSON, and SQL compete along some of these axes, but not much else.

No, 20 test cases is not enough for open(). It's not even close. There are 36 error cases for open() listed in the Linux man page for it.

What constitutes a good file API is not hotly contested. It was hotly contested 50 years ago; for example, the FCB-based record I/O in CP/M and MS-DOS 1.0, TOPS-20's JFN-based interface, and OS/370's various access methods for datasets were all quite different from open() and from each other. Since about 35 years ago, every new system just copies the Unix API with minor variations. Sometimes they don't use bitwise flags, for example, or their open() reports errors via additional return values or exceptions instead of an invalid file descriptor. Sometimes they have opaque file descriptor objects instead of using integers. Sometimes the filename syntax permits drive letters, stream identifiers, or variables. But nothing looks like the I/O API of Guardian, CP/M, Multics, or VAX/VMS RMS, and for good reason.


I think I understand the use case for a smart surround plugin like this; I watched the demo video and saw a lot of picking and pulling of text.

What I don’t understand is the development workflow that includes so much text manipulation. If you’re writing new code, there’s nothing to manipulate. If you’re refactoring existing code, wouldn’t you want the support typical AST-based refactoring tools provide? Where’s the sweet spot where shuffling strings around makes sense?

That’s not sarcasm. I’m genuinely asking.


For me (maintainer of Zed's vim mode) it comes down to a few things:

1. LSPs differ per-language, and so I'm never sure whether I'll get lucky today or not. It's more reliable for small changes to talk about them in terms of the text.

2. LSPs are also quite slow. For example in Zed I can do a quick local rename with `ga` to multi-cursor onto matching words and then `s new_name` to change it. (For larger or cross-file renames I still use the LSP).

3. I err as a human continually, for example in Rust a string is `"a"` and a char is `'a'`. It's easy for my javascript addled brain to use the wrong quotes. I don't know of any LSP operation that does "convert string literal into char literal" (or vice versa), but in vim, it's easy.

We are slowly pulling in support for various vim plugins; but the tail is long and I am not likely to build a vim-compatible Lua (or VimScript :D) API any time soon.

For example, most of vim-surround already works so you could get the most used parts of mini.surround to work with just a few keybindings `"s a":["vim::PushOperator", { "AddSurrounds": {} }]`, but rebuilding every plugin is a labor of love :D.


I think you’re just highlighting the different preferences people have between a text editor and an IDE. Obviously the line between the two is very blurry. I much prefer being able to efficiently edit text myself rather than relying on refactoring tools.


Appreciate the response. I viewed it more as a question about scope rather than preference.

At mega-scale, even IDE-based tools are skipped in favor of automated tools such as Refaster/OpenRewrite that can refactor tens of millions of lines of code at once.

I do find myself occasionally using, say, a regex find/replace to change something project-wide. But most of the time (95%, to put some arbitrary number on it), once I’m beyond the scope of a single function, I use AST-based tools to ensure changes are correctly reflected in other files or parts of the project.

So, I’m trying to understand who lives in my 5% long enough that they need what is essentially a highly specialized regex. Are they doing cross-project changes based on text? Do they have giant functions where that’s not a concern? Are their projects just smaller and they have many of them?

I definitely see the allure of having a smaller, faster editor. How far are people actually able to push that paradigm?
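
For what it’s worth, here’s the kind of thing I mean, as a toy Python sketch (fetch_user and the snippet are invented): an AST-based search finds the real call site and ignores the places a plain text match would false-positive on.

    import ast
    import textwrap

    # Invented snippet: the name appears in a string and in a real call.
    source = textwrap.dedent('''
        log("please fetch_user later")   # mentions the name, not a call
        user = fetch_user(42)            # an actual call site
    ''')

    tree = ast.parse(source)
    calls = [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "fetch_user"
    ]
    print(calls)  # [3] — only the real call site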


Relative to your competition.

Performance is only adequate if it’s at least as good as that of the engineers at the other players in your industry. Otherwise, you’re losing ground. As long as anyone in your market space is actively trying to manage their engineering talent (recruiting your top performers, releasing low performers, being more selective in hiring), so must you, just to keep pace. An “adequate” engineer may make the company money, but the opportunity cost of not hiring someone better who could make even more money can be higher still.


The sorts of decisions and results that make a company the size of Meta succeed or fail happen above the level of the folks who will get cut. Most of the net value produced by individual engineers is determined by which projects they're on, rather than whether they're good at their job. A savvy entrepreneur with a few engineers' worth of OpenAI credits can create more value in a week than a median FAANG middle-management career maxxer with 10-100 engineers in their subtree of the org creates in a month.


Personally, I think it’s both. Yes, the strategy is important but it’s nothing without the ability to execute. And we’ve all worked with the god-tier engineer who creates never-ending boondoggles because they can. And, yes, the larger the org the harder it is to get both strategy and execution aligned at once.


My point was that the scope/impact/value/etc. of the contributions made by individual engineers will be determined more by the projects they're working on than by their inherent ability to contribute. So, if we go through the org and cut the bottom 5% of engineers by how much value they added to the company, most cuts will be determined by the context in which an individual was operating rather than their inherent ability to contribute. I.e., the cuts will mostly just punish people for getting stuck with bad managers or lackluster projects.

Of course, some people are obviously great in any context and some are obviously useless (or worse) in any context, but those folks should already be handled appropriately even without the "cut 5%" mandate.


While the KMP story specifically is still pretty rough, supporting native and JS (or WASM) is becoming table stakes for new languages. Go, Rust, Gleam, even C all support some version of this. Now that there’s a common system API available across all the platforms, all the UI/database/network/whatever application libraries and frameworks can be unified as well. Today’s multiplatform is closer to targeting different architectures (like x86 vs. x64, which has worked since the dawn of time) than to, say, Xamarin of yesteryear, which abstracted over various native UI libraries.


If one is not currently employed as an engineer?

Frankly, seriously consider a career change. The ladder has been pulled up for entry-level positions due to AI, interest rates, etc. This will come back and bite us as an industry, but it’ll be 10 years from now and most people can’t wait that long.

I can’t speak for everyone, but 3000+ applicants for a single opening is typical at my org. The odds of any given individual getting in are essentially zero. Referrals get priority over everyone else, even candidates who are better qualified on paper.

It sucks for everyone involved, especially for job hunters. But from the hiring side, truthfully, there’s no end in sight.


Oooooor work in Europe. Plenty of work here. I still get 1 job offer per 1 application.


My 5 year plan is to move to the EU, but it's a process. You're not going to be doing it as your next job hop from the US if you haven't been planning for it.


The trick is to get a master's or an MBA in the country where you want to live. Germany and the Netherlands are excellent for this. You can find lots of jobs with no local language requirements.


The fun part is that I went the security engineer route instead of SDE/SWE. It has some pros and cons, but it seems to be one of the "high demand" roles that gets more traction, judging by others who have moved abroad.

I also have friends and family in the Netherlands, France, and the UK who help me keep tabs on how things are going in various places, and on which locations might be better targets for an American with a technical background looking to just up and leave the US.

