
99.9999%* of apps don't need anything nearly as 'fancy' as this. If resolving breadth-first is critical, they can just make multiple calls, which can have very little overhead depending on how you do it (sketch below).

* I made it up - and by extension, the status quo is 'correct'.
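For illustration, a minimal sketch of the multiple-calls approach (hypothetical endpoints), firing both requests at once so the extra round trip mostly overlaps with the first:

  // Hypothetical endpoints; both requests start immediately, so the
  // comments don't waterfall behind the post.
  const [post, comments] = await Promise.all([
    fetch("/api/post/42").then((r) => r.json()),
    fetch("/api/post/42/comments").then((r) => r.json()),
  ]);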



To be clear, I wouldn't suggest that anyone implement this manually in their app. I'm just describing at a high level how the RSC wire protocol works, but narratively I wrapped it in a "from first principles" invention because it's more fun to read. I'm not necessarily trying to sell you on using RSC either, but I think it's handy to understand how some tools are designed, and sometimes people take ideas from different tools and remix them.


I get that. Originally my comment was a response to another one, but I decided to delete it and repost at the top level. I failed to realize that without that context the tone reads as rather snarky and/or dismissive of the article as a whole, which I didn't intend.


Np, fair enough!


I'm already thinking about whether there are any ideas here I might take for CSTML, which is designed as a streaming format for arbitrary data, but particularly for parse trees.


Multiple calls?! That sounds like N+1 queries. Gross :P

I think the issue with the example JSON is that it's sent in OOP+ORM style (i.e. nested objects), whereas you could just send it as rows of objects, something like this (the numbers are the comment IDs):

  {
    "header": "Welcome to my blog",
    "post_content": "This is my article",
    "post_comments": [21, 29, 88],
    "footer": "Hope you like it",
    "comments": { "21": "first", "29": "second", "88": "third" }
  }
But then you may as well just go with protobufs or something, so your endpoints and payloads are all typed and defined, something like this:

  syntax = "proto3";
  service DirectiveAffectsService {
    rpc Get(GetPageWithPostParams) returns (PageWithPost);
  }
  message GetPageWithPostParams {
    string post_id = 1;
  }
  message PageWithPost {
    string page_header = 1;
    string page_footer = 2;
    string post_content = 3;
    repeated string post_comments = 4;
    repeated CommentInPost comments_for_post = 5;
  }
  message CommentInPost {
    string comment_id = 1;
    string comment_text = 2;
  }
And with this style, you don't necessarily need to embed the comments in one call like this; you could cleanly do it in two, as the parent comment suggests (one call to get the page+post, a second to get the comments). That might be aided by `int32 post_comment_count = 4;` instead of the repeated field, so you can pre-render n placeholder blocks. A sketch of that flow follows.
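Client-side, that two-call flow might look something like this (hypothetical endpoints and field names, just to show the shape):

  // Hypothetical two-call flow: the first response carries a comment
  // count, so n placeholder blocks can render while the comments are
  // still in flight.
  type PageWithPost = {
    pageHeader: string;
    postContent: string;
    postCommentCount: number;
  };
  type Comment = { commentId: string; commentText: string };

  async function loadPage(postId: string) {
    const page: PageWithPost = await fetch(`/api/page?post_id=${postId}`)
      .then((r) => r.json());
    console.log(`render ${page.postCommentCount} comment placeholders`);

    const comments: Comment[] = await fetch(`/api/comments?post_id=${postId}`)
      .then((r) => r.json());
    console.log(`fill in ${comments.length} comments`);
  }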


There's nothing wrong with "accidentally-overengineering" in the sense of having off-the-shelf options that are actually nice.

There is something wrong with adding a "fancy" feature to an off-the-shelf option, if said "fancy" feature is realistically "a complicated engineering question, for which we can offer a leaky abstraction that will ultimately trip up anybody who doesn't have the actual mechanics in mind when using it".


> There's nothing wrong with "accidentally-overengineering" in the sense of having off-the-shelf options that are actually nice.

Your comment focuses on desired outcomes (i.e. "nice" things) but fails to acknowledge the reality of tradeoffs. Overengineering a solution always creates problems: systems become harder to reason about, harder to maintain, harder to troubleshoot. For example, in JSON, arrays are ordered lists. If you onboard an overengineered tool that arbitrarily reorders elements in a JSON array, things can break in non-trivial ways. And they often do.
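A trivial illustration of that failure mode (hypothetical payload):

  // Any consumer that relies on position silently gets the wrong
  // data if a tool re-serializes the array in a different order.
  const payload = '{"comments": ["first", "second", "third"]}';
  const { comments } = JSON.parse(payload) as { comments: string[] };
  const topComment = comments[0]; // correct only if order was preserved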


We technically didn't need more than 640K either.

Having progressive or partial reads would dramatically speed up applications, especially as we move into an era of WASM on the frontend.

A proper binary-encoded format like protobuf, with support for partial reads and well-defined streaming behavior for sub-message payloads, would be incredible.
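You can approximate partial reads today with the Fetch streaming API; here's a rough sketch, assuming a server that emits newline-delimited JSON as a stand-in for real sub-message framing:

  // Consume the response progressively instead of waiting for the
  // full payload; each line is treated as one complete sub-message.
  const res = await fetch("/api/page-stream");
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffered = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });

    let nl;
    while ((nl = buffered.indexOf("\n")) !== -1) {
      const message = JSON.parse(buffered.slice(0, nl));
      buffered = buffered.slice(nl + 1);
      console.log("render partial payload:", message);
    }
  }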

It puts more work on the engineer, but the improvement to UX could be massive.


Sure, if you’re the 0.00001% that needs that. It’s going to be overengineering for most cases. There are so many simpler and easier-to-support things that can be done before trying this sort of thing.

Following the example: why is all the data in one giant request? Is the DB query efficient? Is the DB sized correctly? How about some caching? All boring, but I'd rather support and train someone on boring stuff.


>We technically didn't need more than 640K either.

That old chestnut again - this was true for MS-DOS PCs in 1981, when the quote was said. It was still true 10 years later for whatever version of DOS was current then. People keep bringing it up as though Bill Gates said 'no one will need > 640k for all time to come'.



