Hacker News | panstromek's comments

Steam takes 30% cut, though?

Yes, and that is also excessive.

https://en.wikipedia.org/wiki/Whataboutism


I have to respond to your point, though. Whether a 30% cut is excessive depends on whether devs feel like they're getting a good deal. As far as I can tell, game developers don't seem to complain about Steam's cut very much; it seems like the value you get is worth it.

For example, in this thread https://www.reddit.com/r/Steam/comments/10wvgoo/do_you_think... it seems like the majority is positive about it, even though people debate it. When the Apple tax is brought up, there's almost never even a discussion there; it's pretty universally hated.

Apple seems to have an almost adversarial relationship with its developers. I deploy to the App Store and I feel like I'm getting screwed. Even compared to Google, which takes the same cut but behaves a lot more nicely toward its developers.


I'm not judging that; it just seems to contradict the "But Steam shows us another model..." sentence, so I'm trying to make sense of that.

You're right, I didn't know it was 30%.

Checking an LLM, it sounds like they more or less all charge 30%. That's shit.


> Note: This image has been edited to include a pile of cash.

I giggled


There are so many ways this benchmark can go wrong that there's pretty much no way I can trust this conclusion.

> All the loops call a dummy function DATA.doSomethingWithValue() for each array element to make sure V8 doesn't optimize out something too much.

This is probably the most worrying comment - what is "too much"? How are you sure it doesn't change between different implementations? Are you sure V8 doesn't do anything you don't expect? If you don't look into what's actually happening in the engine, you have no idea at this point. Either you do the real work and measure that, or do the fake work but verify that the engine does what you think it does.


There are a lot of "probably"s in the article. I was also suspicious that the author didn't say they did any pre-measurement runs of the code to ensure that it was warmed up first. Nor did they, e.g., use V8 arguments with Node (like --trace-opt) to check what was actually happening.


You can compile to V8 TurboFan's final bytecode and use AI to analyze and compare the instructions.


> Maybe that's me, but I rarely saw teams which over-document, under-documenting is usually the case.

This is a good point, although this recently changed with LLMs, which often spit out a ton of redundant comments by default.


Claude Code in particular seems to use very few redundant comments. That or it's just better at obeying the standing instruction I give it to not create them, something other assistants seem to blithely ignore.


Sure, but you can't always fix the bug if it's not in your system.


Fork it; you should have ownership of your whole stack.

If you have the spare time, you can try and submit your patches upstream; in the meantime, you just maintain your own version.


No, you can't always do that. We have workarounds for platform bugs that have since been fixed, because we still get users with old devices that can't upgrade. You cannot fork the phone of a random person on the other side of the world. Once a platform bug is out, it can stay out in the wild for a very long time.


Deploy your own platform -- if need be built on top of other (unreliable) platforms.


Our website codebase contains a workaround for a bug in the native Android file picker in Samsung One UI. How are you supposed to solve this by "deploying your own platform"?


By decoupling your application logic from your UI toolkit.


So, when the operating system gives you an invalid file, it magically becomes valid because your UI code is in a different file. Sure, that sounds plausible.


I suggest you read up about encapsulation.

Have a good day and happy new year!


I also find that phrase super misleading. I've been using a different heuristic that seems to work better for me: "comments should add relevant information that is missing." This works against redundant comments but also isn't ambiguous about what "why" means.

There might be a better one that also takes into account whether the code does something weird or unexpected for the reader (like the duplicate clear call from the article).


I like this framing, but might add to it: "comments should add relevant information that is missing and which can't easily be added by refactoring the code".
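
To make that concrete, here's a toy C++ sketch; the function names, the 1024 figure, and the batching rationale are all invented purely to illustrate the heuristic:

    #include <vector>

    // Two versions of the same call, differing only in their comment.

    void redundant(std::vector<int>& samples) {
        // Reserve space for 1024 elements.  <- restates the code, adds nothing
        samples.reserve(1024);
    }

    void informative(std::vector<int>& samples) {
        // Upstream batching never sends more than 1024 samples per request,
        // so reserving up front avoids reallocations while the batch fills.
        // <- information the reader can't recover from the code itself
        samples.reserve(1024);
    }

    int main() {
        std::vector<int> a, b;
        redundant(a);
        informative(b);
    }

The first comment fails the "missing information" test; the second passes it without any debate about what counts as a "why."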


It might look OK from the user's point of view, but a lot of the problems fall on web developers, who have to work around a bunch of these issues to make their pages work in Safari.


Been working with web related tech since the early 00’s. Safari has just never been a problem except for invasive ads, like back in the Flash days.


This is such nonsense, and everyone who's a web developer knows you're not being honest here. But just to make it ever clearer for anyone else, here's a chart showing the number of bugs that only occur in a single browser.

https://wpt.fyi/results/?label=master&label=experimental&ali...

It’s undeniable that Apple makes a dogshit browser.


> This is such nonsense and everyone who’s a web developer knows you’re not being honest

And in your opinion "being honest" is speaking for every web dev out there?

I've been a web dev for 25 years (god I'm old) and Safari has not been a major pain for me.

You keep bandying wpt.fyi results around without even understanding what they mean. E.g. Safari only passes 8 out of 150 accelerometer tests. So? Does it affect every web dev? Lol no. But it does pass 57 out of 57 accessibility tests, which is significantly more important.

Edit: don't forget that there's also Interop 2025 which paints a very different picture: https://wpt.fyi/interop-2025?stable


I was doing web dev or related from 2000 to 2016. IE6 was far worse than anything Safari has done.


Late on a lot of standards, quirky in many ways, and just a lot of bugs, especially around images and videos. Also positioning issues. They recently even broke position: fixed, which broke a ton of web pages on iOS, including apple.com.


I like this, especially because it focuses on the actual problem these contributions cause, not the AI tools themselves.

I especially like the term "extractive contribution." That captures the issue very well and covers even non-AI instances of the problem which were already present before LLMs.

Making reviewer-friendly contributions is a skill on its own and makes a big difference.


I have bumped into this myself, too. It's really annoying. The biggest footgun isn't even discussed explicitly, and it might be how the error got introduced: when the struct goes from POD to non-POD or vice versa, the rules change, so a completely innocent change, like adding a string field, can suddenly create undefined behaviour in unrelated code that was correct previously.


Wow, can you elaborate on how adding a string field can break some assumptions?


Not the OP, but note that adding a std::string to a POD type makes it non-POD. If you were doing something like using malloc() to make the struct (not recommended in C++!), then suddenly your std::string is uninitialized, and touching that object will be instant UB. Uninitialized primitives are benign unless read, but uninitialized objects are extremely dangerous.
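
A minimal sketch of that failure mode, with made-up struct names (this is not the article's code, just the general pattern):

    #include <cstdlib>
    #include <string>

    struct ConfigOld {      // trivial, "POD-style" struct
        int id;
        double scale;
    };

    struct ConfigNew {      // one innocent-looking field added...
        int id;
        double scale;
        std::string name;   // ...and the type is no longer trivial
    };

    int main() {
        // C-style allocation of the old struct: the members are just
        // uninitialized primitives, and in practice this "works" as long
        // as you write before you read (formally it only became
        // well-defined with C++20's implicit object creation rules).
        auto* a = static_cast<ConfigOld*>(std::malloc(sizeof(ConfigOld)));
        a->id = 1;
        std::free(a);

        // Same pattern after the string was added: no std::string was ever
        // constructed in that memory, so touching `name` at all is UB.
        auto* b = static_cast<ConfigNew*>(std::malloc(sizeof(ConfigNew)));
        // b->name = "oops";  // UB: assigns to a string that doesn't exist
        std::free(b);         // and the destructor never runs

        // The fix: let C++ construct (and destroy) the object properly.
        auto* c = new ConfigNew{};
        c->name = "ok";
        delete c;
    }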


That's not what was happening in this example though. It would be UB even if it was a POD.

