Hacker News | jchmbrln's comments

What would be the explanation for an int taking 28 bytes but a list of 1000 ints taking only 7.87KB?


That appears to be the size of the list itself, not including the objects it contains: 8 bytes per entry for the object pointer, and a kilo-to-kibi conversion. All Python values are "boxed", which is probably a more important thing for a Python programmer to know than most of these numbers.

The list of floats is larger, despite also being simply an array of 1000 8-byte pointers. I assume that it's because the int array is constructed from a range(), which has a __len__(), and therefore the list is allocated to exactly the required size; but the float array is constructed from a generator expression and is presumably dynamically grown as the generator runs and has a bit of free space at the end.
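A quick check with sys.getsizeof illustrates both points; the numbers in the comments are from 64-bit CPython and vary slightly by version:

```python
import sys

# A small int is a full heap object (type pointer, refcount, digit
# storage), while a list stores only 8-byte pointers to its boxed
# elements -- sys.getsizeof on a container is shallow.
print(sys.getsizeof(1))  # ~28 bytes on 64-bit CPython

ints = list(range(1000))                      # built with a known length
floats = list(float(i) for i in range(1000))  # grown append-by-append

# The range-built list is allocated exactly; the generator-built list
# is over-allocated as it grows, so it reports the same size or larger.
print(sys.getsizeof(ints))    # ~56 + 1000 * 8 = ~8056 bytes (~7.87 KiB)
print(sys.getsizeof(floats))  # usually a bit more, due to spare slots
```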


It's impressive that you figured out the reason for the size difference between the list of floats and the list of ints. Framed as an interview question, that would have been quite difficult, I think.


It was. I updated the results to include the contained elements. I also updated the float list creation to match the int list creation.



This is insightful. I remember thinking after the first generation of SPA frameworks like Backbone and Ember and—somewhat later—AngularJS that maybe the second generation (React, Vue, etc.) would get it all sorted out and we'd arrive at stability and consensus. But that hasn't happened. The next generation was better in some ways, worse in a few, and still not quite right in many others.

Of course I hear plenty of people complaining that apps on top of hypertext is a fundamental mistake and so we can't expect it to ever really work, but the way you put it really made it click for me. The problem isn't that we haven't solved the puzzle, it's that the pieces don't actually fit together. Thank you.


> [...] we'd arrive at stability and consensus.

Honestly, that's basically happened, but just a generation later.

Most modern frameworks are doing very similar things under the hood. Svelte, SolidJS, and modern Vue (i.e. with Vapor mode) all basically do the same thing: they turn your template into an HTML string with holes in it, and then update those holes with dynamic data using signals to track when that data changes. Preact and Vue without Vapor mode also use signals to track reactivity, but use VDOM to rerender an entire component whenever a signal changes. And Angular is a bit different, but still generally moving over to using signals as the core reactivity mechanism.
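To make that concrete, here's a minimal, framework-agnostic sketch of the signal pattern (in Python for brevity; the names are invented, and real frameworks like Solid or Vue are far more sophisticated):

```python
# Sketch of signal-based reactivity: reading a signal inside an effect
# subscribes that effect, and writing the signal re-runs its subscribers.
_active_effect = None  # the effect currently being tracked, if any

class Signal:
    def __init__(self, value):
        self._value = value
        self._subscribers = set()

    def get(self):
        if _active_effect is not None:
            self._subscribers.add(_active_effect)  # dependency tracking
        return self._value

    def set(self, value):
        self._value = value
        for fn in list(self._subscribers):
            fn()  # re-run everything that read this signal

def effect(fn):
    """Run fn now and re-run it whenever a signal it read changes."""
    global _active_effect
    _active_effect = fn
    try:
        fn()
    finally:
        _active_effect = None

# The "hole in the template" is just a function that re-renders on change.
count = Signal(0)
log = []
effect(lambda: log.append(f"count is {count.get()}"))
count.set(1)
# log is now ["count is 0", "count is 1"]
```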

The largest differences between these frameworks at this point are how they do templating (i.e. JSX vs HTML vs SFCs, and syntax differences) and the size of their ecosystems. Beyond that, they are all increasingly similar, and converging on very similar ideas.

The black sheep of the family here is React, which is doing basically the same thing it always did, although I notice even there that there are increasing numbers of libraries and tools that are basically "use signals with React".

I'd argue the signal-based concept fits really well with the DOM and browsers in general. Signals ensure that the internal application data stays consistent, but you can have DOM elements (or components) with their own state, like text boxes or accordions. And because with many of these frameworks, templates directly return DOM nodes rather than VDOM objects, you're generally working directly with browser APIs when necessary.


To egotistically comment on my own comment (unfortunately I can't edit any more):

This is all specific to frontend stuff, i.e. how do you build an application that mostly lives on the client. Generally, the answer to that seems to be signals* and your choice of templating system — unless you're using React, then the answer is pure render functions and additional mechanisms to attach state to that.

Where there's more exploration right now is how you build an application that spans multiple computers. This is generally hard — I don't think anyone has demonstrated an obvious way of doing this yet. In the old days, we had fully client-side applications, but syncing files was always done manually. Then we had applications that lived on external servers, but this meant that you needed a network round-trip to do anything meaningful. Then we moved the rendering back to the client which meant that the client could make some of its own decisions, and reduced the number of round trips needed, but this bloats the client, and makes it useless if the network goes down.

The challenge right now, then, is trying to figure out how to share code between client and server in such a way that

    (a) the client doesn't have to do more than it needs to (no need to ship the logic for rendering an entire application just to handle some fancy tabs);
    (b) the client can still do lots of useful things (optimistic updates, frontend validation, etc.);
    (c) both client and server can be modelled as a single application rather than two different ones, potentially with different tooling, languages, and teams;
    (d) the volatility of the network is always accounted for and does not break the application.
This is where most of the innovation in frontend frameworks is heading. React has its RSCs (React Server Components, components that are written using React but render only on the server), and other frameworks are developing their own approaches. You also see it more widely in the growth of local-first software, which approaches the problem from a different angle (can we get rid of the server altogether?) but is essentially trying to solve the same core issues.

And importantly here, I don't think anyone has solved these issues yet, even in older models of software. The reason this development is ongoing has nothing to do with the web platform (which is an incredible resource in many ways: a sandboxed mini-OS with incredibly powerful primitives), it's because it's a very old problem and we still need to figure out what to do about it.

* Or some other reactivity primitive or eventing system, but mostly signals.


Frontend web development is effectively distributed systems built on top of markup languages and backwards compatible scripting languages.

We are running code on servers and clients, communicating between the two (crossing the network boundary), while our code often runs on millions of distributed hostile clients that we don't control.

It's inherently complex, and inherently hostile.

From my view, RSCs are the first solution to acknowledge these complexities and redesign the paradigms closer to first principles. That comes with a tougher mental model, because the problem space is inherently complex. Every prior or parallel solution attempts to paper over that complexity with an over-simplified abstraction.

HTMX (and Rails, PHP, etc.) leans too heavily on the server, client-only libraries give you no access to the server, and traditional JS SSR frameworks attempt to treat the server as just another client. Astro works because it drives you towards building largely static sites (leaning on build-time and server-side routing aggressively).

RSCs balance most of these incentives and give you the power to access each environment at build-time and at run-time (at the page level or even the component level!). They make each environment fully powerful (server, client, and both), and they manage to solve streaming (suspense and complex serialization) and diffing (navigating client-side while maintaining state or UI).

But people would rather lean on lazy tropes, as if RSCs only exist to sell server cycles or to further frontend complexity. No! They're just the first solution to accept that complexity and give developers the power to wield it. Long-term, I think people will come to learn their mental model and understand why they exist. As some React core team members have said, this is kind of the way we should have always built websites: once you return to first principles, you end up with something that looks similar to RSCs [0]. I think others will solve these problems with simpler mental models in the future, but it's a damn good start and doesn't deserve the vitriol it gets.

[0] https://www.youtube.com/watch?v=ozI4V_29fj4


Except RSC doesn't solve for apps, it solves for websites, which means its server-first model leads you to slow feeling websites, or lots of glue code to compensate. That alongside the immensely complex restrictions leaves me wondering why it exists or has any traction, other than a sort of technical exercise and new thing for people to play with.

Meanwhile, sync engines seem to actually solve these problems - the distributed data syncing and the client-side needs like optimistic updates, while also letting you avoid the complexity. And, you can keep your server-first rendering.

To me it's a choice between lose-lose (complex, worse UX) and win-win (simpler, better UX) and the only reason I think anyone really likes RSC is because there is so much money behind it, and so little relatively in sync engines. That said, I don't blame people for not even mentioning them as they are newer. I've been working with one for the last year and it's an absolute delight, and probably the first genuine leap forward in frontend dev in the last decade, since React.


> Except RSC doesn't solve for apps, it solves for websites

This isn't true, because RSCs let you slide back into classic react with a simple 'use client' (or lazy for pure client). So anywhere in the tree, you have that choice. If you want to do so at the root of a page (or component) you can, without necessarily forcing all pages to do so.

> which means its server-first model leads you to slow feeling websites, or lots of glue code to compensate

Again, I don't think this is true - what makes you say it's slow feeling? Personally, I feel it's the opposite. My websites (and apps) are faster than before, with less code. Because server component data fetching solves the waterfall problem and co-locating data retrieval closer to your APIs or data stores means faster round-trips. And for slower fetches, you can use suspense and serialize promises over the wire to prefetch. Then unwrapping those promises on the client, showing loading states in the meantime as jsx and data stream from the server.

When you do want to do client-side data fetching, you still can. RSCs are also compatible with "no server", i.e. running your "server" code at build-time.

> To me it's a choice between lose-lose (complex, worse UX) and win-win (simpler, better UX)

You say it's worse UX but that does not ring true to my experience, nor does it really make sense as RSCs are additive, not prescriptive. The DX has some downsides because it requires a more complex model to understand and adds overhead to bundling and development, but it gives you back DX gains as well. It does not lead to worse UX unless you explicitly hold it wrong (true of any web technology).

I like RSCs because they unlock UX and DX (genuinely) not possible before. I have nothing to gain from holding this opinion, I'm busy building my business and various webapps.

It's worth noting that RSCs are an entire architecture, not just server components. They are server components, client components, boundary serialization and typing, server actions, suspense, and more. And these play very nicely with the newer async client features like transitions, useOptimistic, activity, and so on.

> Meanwhile, sync engines seem to actually solve these problems

Sync engines solve a different set of problems and come with their own nits and complexities. To say they avoid complexity is shallow because syncing is inherently complex and anyone who's worked with them has experienced those pains, modern engines or not. The newer react features for async client work help to solve many of the UX problems relating to scheduling rendering and coordinating transitions.

I'm familiar with your work and I really respect what you've built. I notice you use zero (sync engine), but I could go ahead and point to this zero example as something that has some poor UX that could be solved with the new client features like transitions: https://ztunes.rocicorp.dev

These are not RSC-exclusive features, but they show that sync engines don't solve all the UX problems you say they do without coordinating work at the framework level. Happy to connect and walk you through what a better UX for this functionality would look like.


Definitely disagree on most of your points here. I think you don't touch at all on optimistic mutations, and don't put enough weight on the extreme downsides RSC enforces on your code organization, the limits of forcing server trips, or the huge downsides of opting out (yes you can, but now you have two ways of writing everything and two ways of dealing with data, or you can't share data/code at all). It is in effect all or nothing; otherwise you really are duplicating a ton, and then the DX is even worse.

Many of the features like transitions and all the new concepts are workarounds you just don't need when your data is mostly local and optimistically mutated. The ztunes app is a tiny demo, but of course you could easily server-render it and split transitions and all sorts of things to make it a demo more comparable to what I assume you think are RSC's advantages.

I think time will show that RSC was a bad idea, like Redux, which I also predicted would not stand the test of time: interesting in theory but too verbose and cumbersome in practice, while other ways of doing things have too many advantages.

The problems they solve overlap more than enough. Once you have a sync engine giving you optimistic mutations, local caching, and realtime sync for free, you look at what RSC gives you above SSR and there's really no way to justify the immense conceptual burden and the concrete downsides (two worlds / essentially function coloring, forced server trips, lack of routing control). I just bet it won't win, though given the immense investment by two huge companies it may take a while for that to become clear.


Web is simultaneously not really designed for apps but also a sort of golden environment for app development because it is the one thing that ships on nearly every consumer machine that behaves more or less the same with an actual standard.

People bemoan the lack of native development, but the consuming public (and the devs trying to serve them) really just want to be able to do things consistently across phones and laptops and other computing devices and the web is the most consistent thing available, and it is the most battle-tested thing available.


The resolution is in design, not engineering. Rather than tirelessly working around the web as a sort of “broken mobile”, design with its strengths.

The difficulty is finding designers who understand web fundamentals.


I’m completely out of the loop. What’s going on with Spring Boot?


The VMware apocalypse.


One does not need VMware for Spring Boot, though?


Spring’s corporate steward is VMware, and Broadcom bought VMware; ergo Spring is subject to Broadcom’s whims.



Not spring boot, but spring, is owned by VMware. Sure spring is under a free license but if upstream enshittifies, community forks would be required.


And as popular and widely used as Spring is, that would 100% happen. To me at least, I wouldn't count this as a particularly huge risk. But in an enterprise setting, with mandatory auditing and stuff, I can understand why there would be a requirement to at least pre-identify alternative(s).


> Not spring boot, but spring, is owned by VMware

How do I reconcile this statement with VMware holding the copyright, which you will find unambiguously littered across the official Spring Boot repository?

Since you contend the contrary, who does in fact hold the copyright?


Probably a bit of an overreaction, given that Broadcom is now in charge of Spring. At the end of the day it's a wildly popular open source project; it has a path forward if Broadcom pulls shenanigans.

That said, I have noticed that the free support window for any given version is super short these days. I.e. if you’re not on top of constantly upgrading you’re looking at paid support if you want security patches.


> An even more significant improvement would be electrified trains, which can accelerate roughly twice as fast as those with diesel power...

Can someone comment on why this is? My understanding is that the existing diesel trains use diesel generators to power electric motors.

My questions are: 1) Does "electrified" mean pulling power from a third rail? 2) Whatever it means, what makes "electrified" twice as fast as diesel-electric?


> Does "electrified" mean pulling power from a third rail?

Yes, or more precisely either third rail or using overhead lines (catenaries). Overhead lines have many benefits over third rail so they make up the majority of new electrification projects, but third rail still has a lot of use in suburban railways and metro systems.

> Whatever it means, what makes "electrified" twice as fast as diesel-electric?

You're completely right about the engineering, it's just that the diesel generators don't have quite as good peak power output compared to a fully electric system. I think that the article is overplaying this particular benefit of electrification though. The trains that I frequently take are bi-mode, and although you can certainly feel the extra 'kick' of acceleration when you enter the electrified parts of the line, it makes little difference to the total journey time compared to the old diesel-electric trains that used to run on the route.


I can see the value of examples, but in this case I appreciate the post largely for its universality and lack of examples. On reading it, examples from past and present experience spring immediately to mind, and I'm tucking this away as a succinct description of the problem. Maybe I can share it with others when more concrete examples come up in future code review.

A principle takes skill to apply, but it's still worth stating and pondering.


> examples from past and present experience spring immediately to mind

Examples of what?

Picking the wrong abstraction? Regretting your mistakes?

I can certainly think of many examples of that.

How you unwrapped an abstraction and made things better by removing it?

I have dozens of battle stories.

Choosing not to use an abstraction because it was indirection?

Which is what the article says to do?

I’m skeptical.

I suspect you’ll find most examples of that are extremely open to debate.

After all, you didn't use the abstraction so you don’t know if it was good or not, and you can only speculate that the decision you made was actually a good one.

So, sharing that experience with others would be armchair architecture wouldn't it?

That’s why this article is arrogant; because it says to make decisions based on gut feel without actually justifying it.

“Is this truly simplifying the system?”

Well, is it?

It’s an enormously difficult question to answer.

Did it simplify the system after doing it is a much easier one, and again that should be the advice to people;

Not: magically do the right thing somehow.

Rather: here is how to undo a mistake.

…because fixing things is a more important skill and (always) magically doing the right thing from the start is impossible; so it’s meaningless advice.

That’s the problem with universal advice; it’s impossible to apply.


> In other words it’s easy to make a difference as a high performer in a low performance organization.

And yet, the big takeaway for me is that to be a high performer it isn’t enough to A) know what needs to be done, or B) be able to do it well. The key is C) figuring out the incentive landscape.

His story of carving out his own job only to find he had no support from the board is what I’ve tried before. In my low performing organization, I thought I could be a high performer by knowing what needed to be done and doing it well. Everybody I directly worked with loved me and thought I was highly effective, but I never made any lasting change like this author. I didn’t understand the need to skip way up the levels until I was already burnt out.


From the article:

> The WolfsBane Hider rootkit hooks many basic standard C library functions such as open, stat, readdir, and access. While these hooked functions invoke the original ones, they filter out any results related to the WolfsBane malware.

I took this to mean that something as simple as “ls -a” might now leave out those suspicious results.


I wonder how much of that growth is because the bags themselves are so much thicker/heavier now. It would be interesting to compare count vs. weight.

It looks like the new, thick bags are 5x thicker than the old, thin ones (2.5 mils vs 0.5 mils) [0]. 11lb is a lot less than 5x 8lb, so per-person bag trashing must have dropped drastically, unless weight and thickness aren’t correlated.
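Back-of-envelope, assuming weight scales linearly with thickness and taking the 8lb/11lb per-person figures at face value:

```python
# If weight scales linearly with thickness, how did the per-person bag
# *count* change? Figures as discussed above; units are arbitrary.
old_lb, new_lb = 8, 11       # lbs of bags per person, before/after
thickness_ratio = 2.5 / 0.5  # new bags are ~5x thicker

old_count_units = old_lb                    # count is weight / thickness
new_count_units = new_lb / thickness_ratio  # 2.2 in the same units

print(new_count_units / old_count_units)  # ~0.275, i.e. ~72% fewer bags
```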

[0] https://1bagatatime.com/learn/what-is-a-mill/


This was my first thought on seeing the title as well. My entire life I and every family member I can think of has used plastic grocery bags for our little bathroom trash cans. I don’t even know what the options are for “real” bags of that size, having never shopped for them. Time to learn, I guess.

That said, this could still reduce my plastic use. The old, thin bags were sufficient. The new, thick bags have been overkill since day one. Given that the frequency with which I take out the trash didn’t change when the thin bags were banned, I’m sure I use quite a bit more plastic than before the ban. Maybe bags sold for small trash cans are thinner, and I’ll go back to pre-ban levels of use.

