
In Croatia more than 55% of households have AC installed, and only 10 years ago it was less than 25% of households. It got more popular as our summers are getting increasingly hot and humid. Average salaries in France are probably double those in Croatia. It definitely can't be classified as a luxury if more than half of the country can afford it, in one of the poorer EU member states. I assume in the next 10 years it's probably gonna be 70+% of all households.

Regarding technical sophistication, AC is more or less using the same technology as a fridge, just scaled and adapted for room cooling instead of food storage.


It could lead to decentralization of both consuming and publishing. There could be a native timeline UI integrated into iOS/Android. Mobile network operators could host fediverse instances for text/audio/video and bundle them into phone plans which everyone already pays for. The possibilities are endless.

In such a scenario search engines would also play a bigger role, because if people are hosting videos on 50 different services instead of just YouTube, you need a separate search engine which crawls all instances.

I think the current fediverse servers like Pixelfed and Mastodon are emulating centralized services because we are used to how they function and anything else would be too confusing. If it catches on, then those services can slowly morph and adopt the benefits of the new paradigm.


If ActivityPub gets really popular, there won't be a single service that's winning. For example, if each Youtube profile is suddenly a fediverse account which you can follow, the owner of that Youtube profile doesn't need a separate Twitter or Wordpress account - simply the act of uploading the video to Youtube will appear in user timelines of twitter, mastodon and all other services. Same would happen with blogs, instagram accounts, etc.

And if we get to such future, it's possible that we won't even need a Mastodon/Twitter account to have a timeline. iOS and Android could build native support for following accounts and displaying timelines in the operating system.


It's great they are adopting the standard but those aliases are ugly.

Instead of openprotocolfanblog.wordpress.com@openprotocolfanblog.wordpress.com it should be openprotocolfanblog@wordpress.com


We decided to do it that way because it is an easy and nice way to have a unique ID that works with or without a custom domain.

For example: `openprotocolfanblog@wordpress.com` only makes sense if you use the wordpress.com subdomain. If you have your own domain, you want to have something like `username@domain`, not `username@wordpress.com`.

Besides that, you will be able to activate user accounts (next to the blog account) on higher plans. That means we had to choose something that is consistent but causes no collisions with usernames.

And finally, Mastodon and others only show the part before the @, which makes the ID very similar to what Bluesky is doing. https://mastodon.social/@pfefferle/111220452911718192
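For the curious, the way other servers resolve an ID like this is WebFinger (RFC 7033): they query the host part of the handle for a discovery document under `/.well-known/webfinger`. A minimal sketch of building that lookup URL in Python (an illustrative helper, not WordPress.com's actual code):

```python
from urllib.parse import quote

def webfinger_url(handle: str) -> str:
    """Build the WebFinger lookup URL (RFC 7033) for a fediverse handle.

    A handle like 'user@example.com' splits into a username and a host;
    the host serves the discovery document that maps the handle to an
    ActivityPub actor.
    """
    username, _, host = handle.lstrip("@").rpartition("@")
    # The 'resource' query parameter is an acct: URI for the handle.
    resource = quote(f"acct:{username}@{host}", safe=":@")
    return f"https://{host}/.well-known/webfinger?resource={resource}"

print(webfinger_url("openprotocolfanblog.wordpress.com@openprotocolfanblog.wordpress.com"))
```

A GET on the resulting URL returns a JSON document whose `links` point at the actor, which is why the part after the @ has to be a host the blog actually controls.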


I respect you and Automattic being transparent about your thinking here on HN.


I assume that would create confusing overlaps with @wordpress.com email addresses which may or may not exist? Maybe just "blog@NAME.wordpress.com" would suffice.


We also thought about using something generic like `blog@NAME.wordpress.com` or `feed@NAME.wordpress.com`, but this would have made autocomplete useless. Mastodon users would see a list of thousands of `blog` or `feed` users when searching for a WordPress.com user.


The trouble with that is moderation. Presumably there exists some Nazi who has a Wordpress blog. If they're a particularly noisy Nazi, that could get Wordpress.com banned on a lot of instances, because the Fediverse's primary and easiest moderation route is nuking badly-behaved instances, and many instance admins are _particularly_ sceptical of corporate ones (some instances have _preemptively_ banned threads).

This way, every Wordpress blog is for practical purposes its own instance.

That said, the username component does seem unnecessarily unwieldy.


Resistance from Mastodon enthusiasts doesn't matter. Governments and companies can still decide to run their own ActivityPub infrastructure, running on Mastodon or something else. Threads and other services will federate with them, users will be able to follow accounts from those government and company instances.

Small Mastodon communities can choose to not federate, but they would be missing out on a lot, assuming ActivityPub takes off.


I agree otherwise, except on the part about missing out on a lot.

I haven't defederated Threads on my instance, as it has so far not become a problem. They could easily become one if, for example, they start pushing ads into the stream.

Most Mastodon users have what they want, they aren't really missing out on anything. Except ads and spying mainly.


I agree about ads - I don't think it would be acceptable to push ads to other servers. I'm not sure it's even possible in the standard, because they'd have to have some sort of an ad account, and if the user on your instance is not following that account they'd never see the ad.

As for missing out, I guess it's a matter of perspective. I would assume that right now most Mastodon users have at least one more account on a mainstream corporate social media site - Twitter, Instagram, whatever. What they are missing out from Mastodon, they get on other places.

But if we imagine a scenario where ActivityPub becomes mainstream and all providers start supporting it, there are two totally different experiences.

An experience of a person on a server that federates with everyone could mean having just one account and following everything from one place. For example, Twitter could support ActivityPub so you can follow Twitter users without an account there. You could follow a YouTube channel and have it in your feed. The owner of the channel wouldn't need to create a separate account on Twitter, Instagram, Mastodon, or anywhere else - their channel is their followable account on all platforms.

The experience of a person on a small Mastodon server that defederates from big corporate servers would be exactly as it is today. To follow users of Instagram, Twitter, Threads, YouTube, etc. you'd have to maintain another account. But in a future where ActivityPub is mainstream, that other account is also ActivityPub-compatible. So why have two when one is less "powerful" than the other?

Letting the imagination run wild, in such future scenario ActivityPub feeds could be integrated deeply into your iOS/Android phone UI, without needing a separate app. Perhaps also on TV. Most people will want an account that doesn't limit them.


I'm sure that ads thing is where they put 99% of their resources. They can easily augment their users' posts by randomly adding ads to them when relayed to ActivityPub. They can also comment on posts automatically or semi-automatically from ad accounts.

They can also post as their users. Like YouTube: when you look at a video, they put ad videos at the start and in between, so they could do something like that by posting ads on influencer accounts.

They can also reward the people themselves for posting ads.

Anyhow, I already got one spam message on Mastodon from that browser-embedded instance, Vivaldi, because they didn't do a great job of vetting their users. I was ready to block them if I received another, but I never did.

They will find a way to push garbage onto ActivityPub, I have no doubt.

Regarding defederation, there are different levels. Typically you would silence the instance but allow people to follow single accounts from there.

If it's more powerful to be fully open, then an email client without a spam filter would be superior to one with a spam filter.

Twitter is moving towards closing up their network from users not signed in. The EU is making it mandatory for social networks to be interoperable though, so that pulls in the other direction. In any case, it's not a great future for the big social network corporations, because people can choose to have either what they offer or roughly the same without ads or tracking.


> I haven't defederated Threads on my instance as it has so far not become a problem.

Wait, Threads already supports ActivityPub? I thought this was coming later?


The government shouldn't provide that service to ordinary citizens, same as they don't provide email hosting on government email servers. I think it helps if you think about Mastodon as email. Government runs their accounts on government server, other users can interact with their content from other servers.

As for people caring about a server for functional programming or some other niche, I believe this is just a temporary state of Mastodon and ActivityPub because it's such an early and experimental technology.

I believe that in the near future there will be multiple large corporate hosted social networks supporting ActivityPub (running on Mastodon or something else, doesn't matter) and then the choice becomes clear for an average user - who do they want handling their data. There could be a Google activitypub service, something from Facebook (they said Threads will support it), different media companies could run their services. Phone networks and ISPs could offer ad-free instances bundled with internet and phone plans. And, of course, there will still be a ton of small niche communities owned by enthusiasts for a variety of topics.

Celebrities and companies will most likely host their own instances, because an account on an official well-known domain acts as verification.

The fediverse itself will be a neutral ground.


Current federation looks like that: a bunch of small independent communities focused on a single topic like tech or photography or whatever. But it doesn't have to be like that. If federation catches on, then most likely the pick will be between 5-10 huge general instances run by corporations.

In such a world, Twitter and Facebook could add support for ActivityPub, Google and Microsoft could build their own, different ISPs and media companies could run their instances, and so on. Then the benefit of choice becomes clearer: who do you trust the most to provide your account? Basically, it would be like the situation with email, where most people use the 3-4 biggest services but the ecosystem is open to new competitors.

The current state of federated social networks is very experimental. If it catches on with big corporations 90% of people will use that, and these small communities can still survive but would be more or less irrelevant.


The web went in the wrong direction when we abandoned the initial concept of user agents, which was that the browser has the ultimate choice of what to render and how. That concept, transferred to today's world of apps, would simply mean that any client like Apollo is essentially a browser locked onto Reddit's website, parsing HTML (which has the role of an API) and rendering the content in a native interface. As long as the user can access the HTML for free, they should be able to use any application (a browser or a special app) and render the content however they wish.

Unfortunately with today's SPA apps we don't even get the HTML directly, but with the recent resurgence of server-side rendering we may soon be able to get rendered HTML with one HTTP request. And then the only hurdles will be legal.


> Unfortunately with today's SPA apps we don't even get the HTML directly

It works the other way: with today's SPAs the API (that powers the frontend) is exposed for us to use directly, without going through the HTML - just use your browser's devtools to inspect the network/fetch/XHR requests and build your own client.
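As a concrete example, Reddit has long exposed a JSON twin of most pages: append `.json` to the path and you get the same listing the frontend consumes. A sketch, with the response trimmed to an assumed minimal shape for illustration (real responses carry many more fields):

```python
import json

def json_endpoint(page_url: str) -> str:
    """Turn a Reddit page URL into its JSON counterpart by appending '.json'."""
    return page_url.rstrip("/") + ".json"

# Trimmed-down sample of the listing shape the endpoint returns.
sample = json.loads("""
{"data": {"children": [
  {"data": {"title": "First post", "score": 42}},
  {"data": {"title": "Second post", "score": 7}}
]}}
""")

titles = [child["data"]["title"] for child in sample["data"]["children"]]
print(json_endpoint("https://www.reddit.com/r/programming/"))
print(titles)
```

A real client would fetch that endpoint over HTTP and paginate with the listing's `after` cursor; the point is that the data powering the SPA is already structured and reachable.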

-----

On a related-but-unrelated note: I don't know why so many website companies aren't allowing users to pay to use their own client. It's win-win-win: the service operator gets new revenue to make up for the lack of ads in third-party clients; it doesn't cost the operator anything (because their web services and APIs are already going to be well documented, right?); and it makes the user/consumer base happy because they can use a specialized client.

Where would Twitter be today if we could continue to use Tweetbot and other clients with our own single-user API-key or so?


> inspect the network/fetch/XHR requests and build your own client

The purpose of an API is the agreement, more than the access. You can always reverse engineer something, but your users won't be too happy when things randomly stop working, whenever reddit chooses.


Total non-issue. If it breaks, people will fix it. There are people out there maintaining immense ad filter lists and executable countermeasures against ad blocker detection. Someone somewhere will care enough to fix it.


> There's people out there maintaining immense ad filter lists and executable countermeasures against ad blocker detection.

This is not a useful comparison. A failure of an ad blocker means you see an ad while using the service. Big deal. A failure of a reverse-engineered glorified web scraper means the app stops working, completely, for all users of the client, at once, until someone fixes it.

Yes, it could be democratized, but most users wouldn't understand any of this, and say "ugh, this app never works". It would be a user experience that reddit could make as terrible as they wanted.


It absolutely is a useful comparison. It's obvious that this software depends on unstable interfaces that will eventually break. I wasn't talking about that, I was talking about the sheer effort it takes to create such things. Such efforts are absolutely in the realm of existence today. Projects like nitter and teddit exist. Teddit is on the frontpage of HN right now no doubt in reaction to this thread. There's probably one for HN too, I just haven't found HN to be hostile enough to search for it.

Honestly I don't really care about "most users". To me they're only relevant as entries in the anonymity set. As long as we have access to such powerful software, I'm happy. I'm not out to save everyone.


> I was talking about the sheer effort it takes to create such things

I understand what you're saying, but I think this is the key to my point:

> It would be a user experience that reddit could make as terrible as they wanted.

It's an unfair cat-and-mouse game. Yes, effort could be made to fix it each time, but if reddit chose, they could force everyone into the "most users" group: if they decided to randomize page elements, the only app would work for 5 minutes a day and people would get bored.


There are only so many programmers willing to fix the client per user. That fraction, when inverted, gives a rough threshold for how big the client's audience has to be for fixes to keep coming.


And yet these people somehow maintain immense amounts of ad blocking filters and code, including active counter measures which require reverse engineering web site javascripts. I gotta wonder what would happen if they started making custom clients for each website instead.


Ad blockers' audience is huge, much bigger than any single site's audience, and their maintainers probably wouldn't care about most individual sites (to care, you have to be in the audience, and most sites have small audiences).


Someone cared enough to defeat annoying blocker blockers of sites. If they care just a little bit more, they could replace the web developer's code with their own minimal version. Chances are the site doesn't actually need most of the code it includes anyway.

What I'm talking about already exists by the way. Stuff like nitter, teddit, youtube downloaders. I once wrote one for my school's shitty website.


CORS ruined this pipe dream. Ideally you’d be able to tell your browser that website X loading content from site Y was a-okay and exactly what you want to happen because site Y is user-hostile and site X addresses all those issues, but alas.

Now the only way to access site Y is by a) routing all your data through some third party server, or b) installing a native application which has way more access to your machine than the web app would.

Some days you gotta wonder if anyone on the web committees has any interest in end-users.


> Now the only way to access site Y is by a) routing all your data through some third party server, or b) installing a native application which has way more access to your machine than the web app would.

Or installing a browser extension that allows rewriting CORS headers.

> Some days you gotta wonder if anyone on the web committees has any interest in end-users.

Oh, they do. The defaults are much safer for end-users than they used to be. Who they mostly leave out is a narrow slice of power users with use cases where bypassing make sense, and the extension facilities available address some of that.


From what I can tell there's no such extension on iOS. I think it should be part of the standard, not a hole left for extensions to fill.

The slice is only narrow because it's practically impossible. If end users were presented with an option like "let X.com read data from Y.com?", there would be a rich ecosystem of alternative UIs for any website you could think of.

These alt-UI’s would be likely to have better security practices than the original, or at the very least introduce competition to drive privacy/security/accessibility standards up for everyone. Whereas currently if the Origin has the data, they have full ability to impose whatever draconian practices they want on people who desire to access that data.


I understand what you're saying, but plenty of websites resolve this by having an in-browser OAuth flow and then working off an API. It's not like APIs require CORS stuff in general; just cookie auth to the third-party server requires CORS.

If a third-party web app wanted to access Reddit, an auth flow that gets API tokens from it and then stores those for usage gets this working (in the universe in which Reddit wants this to happen, of course). You still get CORS protection from the general drive-by issues, and you'll need an explicit auth step on a third-party site (but that's why OAuth sends you to the data provider's website to then be redirected).
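A sketch of the first leg of that flow: building the authorization URL that sends the user to the provider's site, which later redirects back with a code to exchange for tokens. The client ID, redirect URI, and state values below are made up for illustration:

```python
from urllib.parse import urlencode

def build_authorize_url(base, client_id, redirect_uri, scope, state):
    """First leg of the OAuth 2.0 authorization-code flow: send the user to
    the provider's own site to approve access; the provider then redirects
    back to redirect_uri with a one-time code."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        # Random value the client checks on return, to prevent CSRF.
        "state": state,
    }
    return f"{base}?{urlencode(params)}"

url = build_authorize_url(
    "https://www.reddit.com/api/v1/authorize",   # Reddit's OAuth2 authorize endpoint
    client_id="my-client-id",                    # hypothetical registered app
    redirect_uri="https://thirdparty.example/callback",
    scope="read",
    state="opaque-random-string",
)
print(url)
```

The key property for this discussion: the approval happens on the data provider's origin, so no CORS exception is needed for the consent step itself.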


I don’t think you do get what I’m saying. If an Origin wants to be accessed by other Origins there are plenty of ways to do that, that much should be obvious.

I’m talking about the case when the User wants origin A to render data origin B has, but origin B doesn’t want that. You’d expect the User Agent to act on the User’s behalf and hand B’s data to A after confirming with the User that is their intention.

But instead the User Agent totally disregards the User and exclusively listens to origin B. This prevents the User from rendering the data in the more accessible/secure/privacy-preserving/intuitive way that origin A would have provided.

Strange to see all the comments arguing that in fact the browser ought to be an Origin Agent.


> Strange to see all the comments arguing that in fact the browser ought to be an Origin Agent

Funny

One universe I could see is the browser allowing a user to grant cross origin cookies when wanted. Though even then a site B that really doesn’t want this can stick CSRF tokens in the right spots and that just falls apart immediately

I imagine you understand the security questions at play here right? Since a user going to origin A might not know what other origins that origin A wants to reach out to.

CSRF mitigations mean that origins could still block things off even without CORS, but it’s an interesting thought experiment


Can they stick CSRF tokens in the right spot under this model? The typical CSRF mitigations require other origins to not be able to access the HTML of the page (as they just inject a hidden form field or similar). If the cross-origin has full access to the page’s resources they ought to be able to emulate the environment of the page as viewed in-origin quite accurately.
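A sketch of that emulation: the typical mitigation is a hidden form field, and anything with read access to the page's HTML can recover the token before submitting. The sample HTML and field name here are made up for illustration:

```python
import re

# Typical CSRF mitigation: the server embeds a per-session token in a
# hidden form field and rejects POSTs that don't echo it back. A client
# that can read the page (same-origin, or user-approved cross-origin in
# the model discussed above) just extracts it first.
page = """
<form action="/comment" method="post">
  <input type="hidden" name="csrf_token" value="a1b2c3d4">
  <textarea name="body"></textarea>
</form>
"""

match = re.search(r'name="csrf_token"\s+value="([^"]+)"', page)
token = match.group(1) if match else None
print(token)
```

So the mitigation only works against origins that can trigger requests but cannot read responses, which is exactly the boundary the proposed user-approval model would relax.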

Worth noting this model would introduce no new holes - everything I ask for is already possible when running a native application.


I get what you're saying w/r/t CSRF. While every app could be different, in practice most websites do real bog-standard CSRF tokens, and I could see a user agent be able to get things working with like 95% of websites. Though I could think of many schemes to obfuscate things dynamically if you are motivated enough! But I like the idea of a user agent that is built around making it easier for you to just get "your" data in these ways.

> introduces no new holes - everything I ask for is already possible when running a native application.

A native application involves downloading a binary and installing it on your machine. Those involve a higher degree of trust than, say, clicking on a random URL. "I will read this person's blog" vs "I will download a binary to read this person's blog" are acts with different trust requirements. At least for most people.

I suppose in a somewhat ironic way the iOS sandbox makes me feel more comfortable downloading random native apps, but it probably really shouldn't! The OS is good about isolating cookie access for exactly the sort of things you're talking about (the prompt is like "this app wants to access your data for website.com"), but I should definitely be careful.


Technically you can still do that by launching chrome with some special flags or with a chrome extension.

But I do agree that CORS is being hijacked/abused for this purpose. But at the same time it's an important security feature. It prevents the scenario where you visit some website and some malicious javascript starts making calls to some-internal-site/api/... and exfiltrating data.
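For illustration, the server-side half of that security feature boils down to a small decision about the `Access-Control-Allow-Origin` response header (the allowlist and origins below are made up; the header name is the real one):

```python
# The browser enforces CORS by comparing the requesting page's origin
# against the Access-Control-Allow-Origin header on the response. The
# server's side of the contract is just a decision function:

ALLOWED_ORIGINS = {"https://trusted-client.example"}

def cors_headers(request_origin: str) -> dict:
    """Return the CORS response headers for a cross-origin request."""
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin}
    # No header at all: the browser fetches the response but refuses to
    # expose it to the requesting page's scripts.
    return {}

print(cors_headers("https://trusted-client.example"))
print(cors_headers("https://evil.example"))
```

This is why the internal-site exfiltration scenario fails: the internal API never emits the header for an attacker's origin, so the malicious page's JavaScript can't read the responses.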


The chrome flag disables CORS entirely, which presents a major security risk as you point out. What I’m asking for is an option to let specific origins read from specific other origins. Extensions might be able to do this but they aren’t available in all contexts (iOS, for instance)


There are two reasons why they don't want third-party clients as a pro feature:

- It's a very niche thing to charge for, and merely charging for something means having to support it, so you can be underwater on support costs alone

- Users on third-party clients are resistant to enshittification

The business model of any Internet platform is to reintermediate: find a transaction that is being done direct-to-consumer, create a platform for that transaction, and get everyone on both ends of the transaction to use your platform. You get people hooked on your platform by shifting your surpluses around, until everyone's hooked and you can skim 30% for yourself. But you can't really do this if a good chunk of your users are on third-party clients.

This is usually phrased as "third-party clients don't show ads", but it extends way broader than that. If it was just ads, you could just charge $x.99/mo and make it profitable. But there's plenty of other ways to make money off users that isn't ads. For example, you might want to open a new vertical on your site to attract new creators. Think like Facebook's "pivot to video", how every social network added Stories, or YouTube Shorts. Those sorts of strategic moves are very unlikely to be properly supported by third-party clients, because nobody actually wants Twitter to become Snapchat. So your most valuable power users would be paying you money in order to... become less valuable users!

If social media businesses worked how they said they worked, then yes, this would actually be a good idea. But it isn't. Platform capitalism is entirely a game of butting yourself in to every transaction and extracting a few pennies off the top of everything.


> Where would Twitter be today if we could continue to use Tweetbot and other clients with our own single-user API-key or so?

So like OAuth? IIRC Twitter used that with all the 3rd-party clients. I think the problem is that 3rd-party clients filter out ad posts one way or another. Your other point still stands though: just charge the user for API access.


> I don't know why so many website companies aren't allowing users to pay to use their own client...

If you do that, I'm going to make a client that uses a rotating set of accounts and masquerades as a different client. I am then going to make content available through my client for free, and I'm going to put ads on it so that I can make money. With some small number of accounts, I will serve perhaps 1,000x as many users and you can't do anything about it.

In time, perhaps I will lock the users into my platform. They will talk about how the community on Reddit doesn't understand Reneit and how all the memes come from Reneit. If I win, I'll be Reddit over Digg. If I lose I'll be Imgur.

So go ahead. You'll be Invision to Tapatalk and you will die.


They sort of are allowing users to pay to use their own client by charging for API access. It will be interesting to see how Apollo adapts to this new reality.


> allowing users to pay to use their own client

On the user side you need to:

- pay the service a recurring fee

- pay the client probably a recurring fee (x2 or x3 if you use multiple clients on different platform)

- mix and match the above and manage when it falls out of sync

It's totally possible, but how many users are willing to go that route? Weather apps, with their pluggable data sources, could be an example of that, but to me that's a crazy small niche.


The reason there will always be ads: average consumers are never willing to pay as much to keep their eyes clean as others are willing to pay to dirty them.


> the browser has the ultimate choice of what to render and how

Fundamentally you're advocating for a web that doesn't rely on ad money. I'm totally with you, but the discussion should probably expand beyond the web to why our society generates so much ad money in the first place.

What should we do to free our societies from ad money?


There was a 15 year period where many websites were only compatible with Internet Explorer. The dream of clients in control is worth fighting for, but it’s never been reality.


App Store. It’s the App Store and iPhone that killed the web.


There's free API access with a client of your own. You just can't distribute a single client that intermediates the site, thereby being not so much a user agent as its own site. If you use your own client_id and OAuth2, you get 100 req/min, which is enough to browse.


> parsing HTML (which has the role of an API) and rendering the content in a native interface

That's a nice dream but the reality is that HTML would be a really bad API, even worse than SOAP.


Why can't these apps just use the api that reddit.com uses? How could the servers differentiate between reddit.com and apollo app pretending to be reddit.com to the server?


Seems like you could still build a meta UI that drives the underlying SPA in a hidden browser, but it would be a pain. Maybe a framework for that will be built one day.


Seems like we're always missing a fusion of:

1. SPA that you can run on your phone or desktop

2. Centralized User Management, need some way to block known bad actors

3. Signing posts / comments

4. Distribution of posts and comments over DHT?

5. Hosting images, videos and lengthy text posts on torrents

6. A whack ton of content moderation software to somehow make decentralized moderation work.

7. Image recognition for gore / CP that inevitably will get spammed

This would enable people to help host the subreddits they are subscribed to, but murder battery life on mobile unfortunately.


> As long as the user can access the HTML for free, they should be able to use any application (a browser or a special app) and render the content however they wish.

You can see how the end game of this is HTML no longer being free, right?


The worst-case vision I have of the future internet is one in which content and advertising are hosted by the advertising companies and rendered via a WebAssembly system.

Content and advertising cannot be separated by IP, and the site content is basically an application that is difficult to parse.


Another great success that didn't exist 15 years ago: no more roaming charges.


Reminds me of how AI image generators draw stuff slightly wrong

