Hacker News | seer's comments

Also it has interceptors, which allow you to build easily reusable pieces of code - loggers, oauth, retriers, execution time trackers etc.

These are so much better than the interface fetch offers you, unfortunately.


You can do all of that in fetch really easily with the init object.

    fetch('https://api.example.com/data', {
      headers: {
        'Authorization': 'Bearer ' + accessToken
      }
    })

There are pretty much two usage patterns that come up all the time:

1- automatically add bearer tokens to requests rather than manually specifying them every single time

2- automatically dispatch some event or function when a 401 response is returned to clear the stale user session and return them to a login page.

There's no reason to repeat this logic in every single place you make an API call.

Likewise, every response I get is JSON. There's no reason to manually unwrap the response into JSON every time.

Finally, there are some nice mocking utilities for axios for unit testing different responses and error codes.

You're either going to copy/paste code everywhere, or you will write your own helper functions and never touch fetch directly. Axios... just works. No need to reinvent anything, and there's a ton of other handy features the GP mentioned as well you may or may not find yourself needing.
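For reference, here is a rough sketch of what those two patterns look like as axios interceptors. The `TOKEN` constant and the logout/redirect step are hypothetical stand-ins for whatever your app actually uses; the wiring lines at the bottom assume axios is installed, so the logic itself is kept in plain functions:

```javascript
// Sketch of the two common patterns as interceptor functions.
// TOKEN is a hypothetical stand-in for wherever your app stores its token.
const TOKEN = 'abc123';

// Request interceptor: attach the bearer token to every outgoing request.
function withAuth(config) {
  config.headers = { ...config.headers, Authorization: 'Bearer ' + TOKEN };
  return config;
}

// Response error interceptor: on 401, clear the stale session and redirect.
function onAuthError(error) {
  if (error.response && error.response.status === 401) {
    // e.g. clear stored credentials and send the user back to /login
  }
  return Promise.reject(error); // re-throw so call sites still see the failure
}

// Wiring (axios-specific, assumes axios is available):
// axios.interceptors.request.use(withAuth);
// axios.interceptors.response.use(res => res, onAuthError);
```

The point of registering these once is that every call site stays a bare `axios.get(url)` with no auth or session boilerplate.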


Interceptors are just wrappers in disguise.

    const myfetch = async (req, options = {}) => {
        options.headers = options.headers || {};
        options.headers['Authorization'] = 'Bearer ' + token;

        const res = await fetch(new Request(req, options));
        if (res.status === 401) {
            // do your thing
            throw new Error("oh no");
        }
        return res;
    }
Convenience is a thing, but it doesn't require a massive library.

but it does for massive DDoS :p

That fetch requires so many users to rewrite the same code - code that was already handled well by every existing node HTTP client - says something about the standards process.

It could also be trivially written for XMLHttpRequest or any node client if needed. Would be nice if they had always been the same, but oh well - having a server and client version isn't that bad.

Because it is so few lines it is much more sensible to have everyone duplicate that little snippet manually than import a library and write interceptors for that...

(Not only because the integration with the library would likely be more lines of code, but also because a library is a significant liability on several levels that must be justified by significant, not minor, recurring savings.)


> Likewise, every response I get is JSON.

fetch responses have a .json() method. It's literally the first example in MDN: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/U...

It's literally easier than not using JSON, because otherwise I have to think about whether I want `response.text()` or `response.blob()`.


that's such a weak argument. you can write about 20 lines of code to do exactly this without requiring a third party library.

Helper functions seem trivial and not like you’re reimplementing much.

Don't be silly, this is the JS ecosystem. Why use your brain for a minute and come up with a 50 byte helper function, if you can instead import a library with 3912726 dependencies and let the compiler spend 90 seconds on every build to tree shake 3912723 out again and give you a highly optimized bundle that's only 3 megabytes small?

> usage patterns

IMO interceptors are bad. They hide, at the call site, what might get transformed along with the API call.

> Likewise, every response I get is JSON. There's no reason to manually unwrap the response into JSON every time.

This is not true unless you're only interfacing with your own backends. Even then, why not just make a helper that unwraps as JSON by default but can be passed an arg to parse as something else?
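A sketch of that kind of helper - the `request` name and the `as` option are made up for illustration - JSON by default, with an escape hatch for other body types:

```javascript
// Hypothetical helper: parses JSON by default; pass { as: 'text' } for plain text.
async function request(url, { as = 'json', ...init } = {}) {
  const res = await fetch(url, init);
  if (!res.ok) throw new Error('HTTP ' + res.status);
  return as === 'text' ? res.text() : res.json();
}
```

Call sites then stay one-liners: `const users = await request('/api/users');`.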


One more use case for Axios is that it automatically follows redirects, forwarding headers and, more importantly, omitting or rewriting the headers that shouldn't be forwarded for security reasons.

What does an interceptor in the RequestInit look like?

It also supports proxies which is important to some corporate back-end scenarios

fetch supports proxies

Haven’t seen the movie yet, but the book is definitely one of my all time favourites, so I would recommend reading it regardless of the movie.

The way the book is structured there is only one big reveal that would be spoiled by the movie, but I don’t think that was the most interesting thing in the book anyway - it was all about engineering, the scientific method and all that, and I think that will still hold before or after watching.

The one big exception I’ve found to the “read the book first” advice has been The Expanse - there the books and the series were so great that they sort of complemented each other, and the advice there is “definitely do both”. I was reading the books and watching the series in parallel - side by side.

I do hope Hail Mary is like that…


MAD

If they strike desalination plants, Israel/the US can do the same … a real mass-casualty event could follow.

And they might, at some point the Iranian gov might feel desperate enough to be like “fuck it, we have nothing to lose” … Dubai could end up with a lot more graves.

Almost all of their water comes from these plants, and humans can’t survive without water for more than 3 days …

There are reserves/stores sure, but how long will they last, and which part of the population do they cover? In a week you could have thousands of civilians dead on both sides.

So MAD keeps things in check.

I think this is why Iran has invested so much into rockets - they are very ineffective at providing decisive military victory by themselves, but without them Iran would be at Israel’s mercy, and Israel has proven to not possess that in great amounts lately.


Israel already attacked desalination plants. Iran already responded by doing the same to the surrounding countries.

It's been tit-for-tat though.

They address this in the docs - it is meant to make authoring the content easier for LLMs, since markdown is easy for them to write.

It still uses MJML for the actual templates, but it is a translation layer between markdown and the template itself.

If you need to author a lot of emails with LLM this does seem like it would be a great fit.


If you need to author a lot of emails with LLM you should be rethinking your business strategy tbh

If the goal is to write emails purely using AI, then it is trivial to attach the MJML documentation as context to your LLM using context7 MCP or something of the sort. It's not a very complex language and its documentation is not large at all.

That's assuming the crawlers haven't ingested it all already...


Isn’t a “local emulator of cloud services” kind of the perfect project to be vibe coded? Extremely well documented surface to implement, very easy to test automatically and prove it matches the spec, and if you make some things suboptimal performance-wise, that is totally fine, because the project will not be used in a tight loop anyway - it just needs to be faster than the over-the-network hop plus the time it takes for the cloud to actually persist things. It can do everything in RAM and doesn’t need to scale.

So I’m shocked cloud providers haven’t just done this themselves, given how feasible it is with the right harness


AWS _does_ officially provide local-first dev containers for services like DynamoDB but sadly not every AWS service comes with those. Why? I have no idea, like you said it's clearly feasible and they already do it for some services today...

I think what they meant is you “can save 10 hours of planning with one hour of doing”

And I think this has become even more so in the age of AI, because there are even more unknown unknowns, which are harder to discover while planning but easy while “doing” - and that “doing” itself is so much more streamlined.

In my experience no amount of planning will de-risk software engineering effort; what works is making sure that coming back, refactoring, or switching tech is less expensive, which allows you to rapidly change the approach when you inevitably discover some roadblock.

You can read all the docs during planning phases, but you will stumble with some undocumented behaviour / bug / limitation every single time and then you are back to the drawing board. The faster you can turn that around the faster you can adjust and go forward.

I really like the famous quote from Churchill - “Plans are useless, planning is essential”


> I think what they meant is you “can save 10 hours of planning with one hour of doing”

I know what they meant, and I also meant the thing I said instead. I have seen many, many people forge ahead on work that could have been saved by a bit more planning. Not overplanning, but doing a reasonable amount of planning.

Figuring out where the line is between planning and "just start trying some experiments" is a matter of experience.


I'm pretty sure I've literally never seen planning deliver value. But interestingly even the environments where planning was most obviously useless rarely diminished people's willingness to defend it.

> I really like the famous quote from Churchill - “Plans are useless, planning is essential”

I really like Churchill’s second famous quote: “What the fuck is software, lol”.


Planning includes the prototype you build with AI.


I remember reading the letters of Cicero about Gaius Julius (later known as Caesar), complaining about how he and his gang were acting all amoral and wearing ridiculous, scandalous clothes, draping their togas in a provocative feminine fashion.

There are accounts from all over history of how "the times were more thoughtful and moral in the good old days". But here we are, thousands of years later, still complaining about the younger members of our species and how they will bring ruin to us all. Perhaps they will, but it all seems so human to complain about that.

I remember the art of the 90s - when my part of the world got access to marvelous pieces like Thunder in Paradise, Barb Wire, American Ninja, Baywatch ... at the time it was considered the pinnacle of art by teenagers like me, and despised by my parents. But at the same time we had things like The Matrix, The Shawshank Redemption, Leon ... We remember the good stuff and forget the fluff.

There are some real gems being created all the time, maybe not always from Hollywood but human creativity soldiers on.

The Good Place, The Expanse, 3 Body Problem, Horizon Zero Dawn, Expedition 33, Project Hail Mary. There is a constant stream of incredibly thoughtful stuff being produced - books, games, movies, essays, videos, podcasts - the medium might change, but humans always try to find ways to discover, understand and express the world around us in novel ways; one just needs to listen/watch.


> But here we are, thousands of years later,

Not like there was a general lack of tragedy, pain, suffering, war, chaos in the intervening thousands of years.

Seems so superficial to ignore everything and just say that because we're here, we exist, the claim that things will go bad is proven false. The only thing proven false is any claim that humanity would go extinct. But think of all the suffering in all the wars between the Roman empire and now. Is that nothing? Does that not qualify as very bad stuff? Did humanity advance continuously, or was it a chaotic path, with ups and downs? Don't the downs qualify as what the complainers predicted?

To me it seems history teaches us we will survive as a species. But there is definitely a lot of room for very bad stuff to happen. It has happened before.


I got myself a plastic welder - the thing that melts little pieces of metal into plastic to strengthen the joints - and now I can keep old plastic things in shape almost indefinitely. It cost like 10 USD or so and has prolonged the life of all manner of things.

If you still want to make the old headphones work, these welders are a godsend, and with a small amount of DIY work - cleaning, sanding and buffing - you can easily hide these welds.

I personally like to leave them though since they accent that something that was once broken is whole again, and that it has a long history!


When I’m interviewing I never ask a question about something I know super well. I circle around whatever the candidate signals they have great passion for and understanding of, and start deep diving into that.

If I know about it as well, then we can have a really deep discussion; if I don’t - then I can learn something new.

The aim when interviewing is to check how well / deeply the interviewee can think through a problem.

If we pick a topic they don’t have deep knowledge of - they can either stumble and freeze emotionally, or hallucinate something just to appear smart. At this point it is an examination, not an interview. And sure, some people are capable enough to get to an answer, but that’s more of a lottery than a real analysis.

It usually boils down to how often have they interviewed before and been in a similar situation. And “people who have interviewed a lot” is hardly a metric I want to optimise for.

Now picking something they know or have expressed interest or passion in, this means we are sure to have more signal than noise. If the interviewee’s description is more of a canned response - then I delve deeper or broader.

“I’ve managed to solve this issue by introducing caching” - “Great, are there other solutions? How do you handle cache invalidation, what are the limits? What will you do if the load increases 10 fold, would you need to rethink the solution?”


Live coding during an interview is one of the most oppressive things I’ve witnessed in the industry in general.

There is usually a huge disconnect between knowing that “this task should take 20 mins” and doing it cold in a super high-pressure environment.

People sweat, panic, brain-freeze, and are just plain stressed out.

I’ll only OK something like this if we give out a similar but not the same task before the interview so a person can train a bit beforehand.

I’ve heard it all justified as “we want to see how you perform under pressure”, but to me that has always sounded super flimsy - like, if this is representative of how work is done at this organisation, do I want to work there in the first place? And if it isn’t, why the hell are you putting people through the wringer like this? It just sounds inhumane.


Yea, there's really no way to do an "interview assignment" well.

If you give unlimited amount of time, you're giving an advantage to people with no life who can just focus on your assignment and polish it as if it were a full time job.

If you give a limited amount of time, then you're making the interview a pressure cooker with a countdown clock, giving a disadvantage to people who are just not great at working under minute-to-minute time pressure.


Depends on the purpose. If you treat it as a minimum bar to pass, and are up front about that and actually adhere to it, then anyone spending more than the limit on it is presumably just wasting their own time (and to an extent the company's, because the application process continues). It only becomes a problem if, instead of an objective pass/fail metric, you start gauging other details that would benefit from additional time spent.

