
One possible option that never gets discussed is to nuke the amplification methods. If we stop recommending content automatically this ceases to be a problem.


They are nuking them right now (temporarily). It's not just being discussed, it's being done.

But most of the time people like getting content recommended. It's what consumers want.


As someone who's worked at a big social media company — no, that's not at all what consumers want. They want chronological feeds with zero garbage mixed in. It's okay to have a separate recommendations feed, some (not all!) people want to discover new things, but it's totally not okay to meddle with the main one, and it's nothing but mockery to give users no control over it. People also want their preferences respected, they certainly don't want them reset every now and then.

The only reason people keep using services like Twitter is because their network keeps them there.


Well, I guess it depends on making a distinction between what consumers think they want and what they actually do.

Yes, people say they don't want recommendations, because 95% of them are irrelevant.

But then the 5% (or 2% or 0.5%) turn out to be super-relevant, and they find new people to follow that they love, and learn about things they love, and the experience in the end turns out to be a net positive.

Their actions show that it's valuable in the end. Otherwise the feature wouldn't exist at all. Recommendations aren't advertisements; sites don't make money off them directly -- sites use them because people genuinely find things that lead them to use the sites more.


I'm not denying the undeniable fact that some people sometimes want to discover new things. I'm just saying that it's absolutely possible to do it in a respectful manner. No one, ever, under any circumstances, likes or wants to be manipulated, be it overtly or by having their subconscious played with — period. Adding non-configurable extra anything into people's newsfeeds, be it recommended posts, ads, or "people you may know" blocks, is a crime against user-friendliness. Those who do want to discover new things will simply open the "discover"/"explore" tab that contains a dedicated recommended content feed on their own. There is no need to nudge anyone toward anything.

People aren't stupid if you don't build your UI/UX around the assumption that they are. They also like transparent, understandable algorithms. A chronological feed of (only) the people one follows is as transparent as it gets. A chronological feed with some recommendations mixed in is more opaque and confusing. A fully algorithmic feed is the epitome of opaqueness. Opaqueness naturally drives users away because it doesn't exactly instill confidence that their posts will reach their followers.
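The transparency argument can be made concrete with a minimal sketch (all names here are illustrative, not any real platform's API): a chronological feed is a pure sort by timestamp, so a user can fully predict the ordering, while interleaving recommendations makes the ordering depend on a second, opaque input.

```python
from datetime import datetime

# Hypothetical post data from followed accounts only.
posts = [
    {"author": "alice", "ts": datetime(2021, 1, 2, 9, 0), "text": "hi"},
    {"author": "bob",   "ts": datetime(2021, 1, 2, 8, 0), "text": "yo"},
    {"author": "carol", "ts": datetime(2021, 1, 1, 12, 0), "text": "hey"},
]

def chronological_feed(posts):
    # Fully transparent: ordering depends only on timestamps,
    # newest first. Nothing else influences what a user sees.
    return sorted(posts, key=lambda p: p["ts"], reverse=True)

def interleaved_feed(posts, recommendations, every=2):
    # Opaque variant: after every N followed posts, a recommended
    # post from some unspecified ranking system is injected. The
    # user can no longer predict the ordering from timestamps alone.
    out = []
    rec = iter(recommendations)
    for i, post in enumerate(chronological_feed(posts), start=1):
        out.append(post)
        if i % every == 0:
            r = next(rec, None)
            if r is not None:
                out.append(r)
    return out
```

The point of the sketch is just that `chronological_feed` is a function of the user's own follow list, while `interleaved_feed` is not, which is where the loss of predictability comes from.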

Another example: do you understand what the "see less often" button in Twitter does? No one does. No one likes cryptic algorithmic bullshit forced on them with no way to disable it.

Choice is very important.

> versus what they actually do.

Do manipulations work? Of course they do. Are people happy when they are manipulated? Of course they are not.

> Recommendations aren't advertisements, sites don't make money off them

They absolutely do. Recommendations aren't there because Twitter wants to be helpful — they'd be more user-respecting as I said above if that was the case. They're there because they drive engagement metrics up, and those in turn translate into someone's KPI.


> It's what consumers want.

Do consumers want it, or is it merely taking advantage of subconscious human behavior patterns? And if the latter, is this something that is bad for humankind?


Oh mate - definitely something that the dopamine dealers need to confront at some point. ‘Want’ vs actual material benefit to life


Tobacco 2.0


Consumers want a lot of things with negative externalities - goods that cost less because they're produced with slave labor, transportation that emits greenhouse gases, etc. Their preference shouldn't trump the obligation not to harm third parties.

Automated recommendations of a human-curated set of content - e.g. Netflix recommendations for its suite of programming - are much less objectionable, because they can't amplify anything the organization has not intentionally decided to present. It's the combination of UGC and ML recommendations that presents problems.


> But most of the time people like getting content recommended. It's what consumers want.

Do they, or does it just boost some KPI that serves as a poor proxy for actual utility?

Anecdotally even in non-tech circles most of my friends complain about how bad recommended content has gotten, or roll their eyes at whatever "personalized" ad for garbage they've been recommended.


I disagree, this isn't the nuclear option, the nuclear option is forcing these platforms to have a more editorial role in the content they're serving and that comes with a whole bunch of good and a boatload of bad.

Gigantic unmoderated platforms like this, which promote random snippets of speech to drive user engagement and ad revenue, shouldn't exist. The problem we still haven't solved is how to specifically kill off platforms of this type without killing forums and discussion boards in general. I think there is a distinction there, but I'm not certain precisely what defines it -- if anyone figures it out, please let us all know!


We've had forums and discussion boards for decades now that do not have recommendation features. I don't see why we can't put that genie back in the bottle.

IMO the moment you start highlighting things that people didn't explicitly ask for, it's an endorsement.


I think it's like gerrymandering -- yeah, we can all tell when it's gotten to stupid levels, but the Supreme Court wasn't wrong to want a definition of where the line between "okay" and "bonkers" is. I personally think the decision could've been a bit more aggressive against gerrymandering, but we do need some clear line that says "if you're beyond this, you're doing an illegal thing" -- and while we could close in on that line over time through a slow accumulation of precedent, it'd be a lot cleaner to have a decent measure.


Consumers also want cocaine, but that doesn't mean you get to sell it to them with impunity.


We cross a threshold if we treat information like cocaine.

A threshold that calls the First Amendment in the US into question.


Is lying and deceiving people fine, according to the 1st amendment?

For example, foreign states that pay armies of internet trolls to, in effect, choose the president of the US -- is that what the 1st Amendment is meant to allow?

I think "information" can kill more people than cocaine, is more dangerous


Don't mistake me for a First Amendment absolutist.

Indeed, I mean exactly what I said: the questions you raise call the First Amendment into question.


Hmm not sure I understand.

You're saying that maybe the 1st amendment could be edited a bit so that it was more clear that such things aren't okay?

I'm thinking people interpret "free speech" in sometimes quite different ways


>Consumers also want cocaine, but that doesn't mean you get to sell it to them with impunity.

The appropriate regime for those who want cocaine is something similar to the rules around purchasing, possessing, and using alcohol.

Legalization wins from an economic standpoint (increased tax revenue, reduced spending on "enforcement" and incarceration, increased economic output because fewer people are in prison, etc.) and from a societal standpoint (more resources available to the 2-5% of folks who end up with dependency problems, reduced property crime, and not harming communities by pulling significant numbers of residents out and incarcerating them, etc.).

As such, there's no good reason for any mind altering substances to be illegal. Rather, they should be regulated and taxed appropriately.


I can buy tobacco and alcohol at the corner store, but I shouldn't be allowed to buy cocaine?



