Hacker News | IanCal's comments

A remarkable number of humans given really quite basic feedback will perform actions they know will very directly hurt or kill people.

There are a lot of critiques about quite how to interpret the results but in this context it’s pretty clear lots of humans can be at least coerced into doing something extremely unethical.

Start removing the harm one, two, three degrees, add personal incentives, and is it that surprising if people violate ethical rules for KPIs?

https://en.wikipedia.org/wiki/Milgram_experiment


> In 2012, Australian psychologist Gina Perry investigated Milgram's data and writings and concluded that Milgram had manipulated the results, and that there was a "troubling mismatch between (published) descriptions of the experiment and evidence of what actually transpired." She wrote that "only half of the people who undertook the experiment fully believed it was real and of those, 66% disobeyed the experimenter".[29][30] She described her findings as "an unexpected outcome" that

It's unlikely Milgram played an unbiased role in the results, if he wasn't their direct cause.


Milgram was flawed, sure. However, you can look at videos of ICE agents being surprised that their community think they're evil and doing evil, when they think they're just law enforcement. There was not even a need for coercion there, only story-telling.

Incorrect. ICE is built off the background of 30-50 years of propaganda against "immigrants", most of it completely untrue.

The same is done for "benefits scroungers", despite the evidence being that welfare fraud only accounts for approximately 1-5% of the cost of administering state welfare, and state welfare would be about 50%+ cheaper to administer if it was a UBI rather than being means-tested. In fact, many of the measures that are implemented with the excuse of "we need to stop benefits scroungers", such as testing whether someone is disabled enough to work or not, are simultaneously ineffective and make up most of the cost.

Nevertheless, "benefits scroungers" has entered the zeitgeist in the UK (and the US) because of this propaganda.

The same is true for propaganda against people who have migrated to the UK/US. Many have done so as asylum seekers under horrifying circumstances, and many die in the journey. However, instead of empathy, the media greets them with distaste and horror — dehumanising them in a fundamentally racist way, specifically so that a movement that grants them rights as a workforce never takes off, so that companies can employ them for zero-hour contracts to do work in conditions that are subhuman, and pay them substantially less than minimum wage (It's incredibly beneficial for the economy, unfortunately).


Rightwing propaganda in the USA is part of a concerted effort by the Heritage Foundation, the Powell Memo, Fox News, and supporting players. These things are well understood by researchers and journalists who have produced copious documentation in the form of articles, books, podcast series, etc.

One excellent example is available here[0] in a series by the Lever called Master Plan. According to their website, a book has been written broadening the discussion.

They have played us for fools and evidence of their success is all over the news and our broken society. It's outrageous because none of this was by accident or chance. Forces didn't magically come together in a soup that turned out this way.

0. https://the.levernews.com/master-plan/


What you have quoted says a third of people who thought it was real didn’t disobey the experimenter when they thought they were delivering dangerous and lethal electric shocks to a human. Is that correct?

Maybe there was an edit but it's the opposite, 66% disobeyed.

Right, so a third didn’t disobey.

A third of a half who were believers.

So of the entire pool of Milgram participants, 16.5% believed and obeyed.

That's a much, much smaller claim than the popular belief of what Milgram presented.

However, it's still possible that you only need ~16.5% to believe & obey authority for things like the Nazi death camps to occur.
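The back-of-envelope arithmetic in this thread can be written out explicitly. Note the inputs are the thread's own figures (via Perry's account), not numbers from Milgram's published paper:

```python
# Sketch of the percentages discussed above: half of participants
# believed the setup was real, and 66% of those disobeyed.
believers = 0.5             # "only half ... fully believed it was real"
disobeyed = 0.66            # of the believers, 66% disobeyed
believed_and_obeyed = believers * (1 - disobeyed)
print(f"{believed_and_obeyed:.1%}")  # 17.0%
```

Rounding "66% disobeyed" to "two-thirds" gives the ~16.5% quoted above; either way it is roughly one person in six.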


We immediately only need to consider the half that believed the situation was real, if we are concerned with what people do in believably real situations.

Even if we take the 16% though, that's one in six people willing to deliver very obvious direct harm and/or kill another human from exceptionally mild coercion with zero personal benefit attached other than the benefit of not having to say "no". That is a lot.


No, no you don't; the authority includes that of the scientist.

I’m not sure what you’re trying to say here; I’ve said nothing about authority.

Normalization of deviance also contributes towards unethical outcomes, where people would not have selected that outcome originally.

https://en.wikipedia.org/wiki/Normalization_of_deviance


I am moderately certain that this only happens in laissez-faire cultures.

If you deviate from the sub-cultural norms of Wall Street, Jahmunkey, you're fucked.

It's fraud or nothing, baby, be sure to respect the warning finger(s) of God when you get intrusive thoughts about exposing some scheme--aka whistleblowing.


> lots of humans can be at least coerced into doing something extremely unethical.

Experience shows coercion is not necessary most of the time; the siren call of money is all it takes.


Still > 0

I think you’re misunderstanding: the person pulling in front is not giving you seconds to make space - they are deliberately trying to crash into you.

I’m always wary of anything that presents such a clear example of a case that LLMs “don’t do”, yet which is trivially achieved by saying “review”.

The Slack message result, for example, only tells it they’re the lead designer and nothing else (the feedback is clearer and better if I add that they’re notoriously overloaded; this is without that):

This is a very polite and respectful start, which is great since you are new. However, from a professional standpoint, it is a little too passive and vague. In a busy work environment, saying "no rush at all" or "whenever" often leads to your request being buried at the bottom of a to-do list. Additionally, you haven't told Priya exactly what she is looking at or where to find it. Here is a breakdown of how to strengthen this message to show you are organized and respectful of her time without being pushy.

Key Improvements Needed

* Include the Link: Never make someone ask "Where are the files?" Always drop the Figma/Drive link immediately.
* Be Specific: "Any feedback" is mentally taxing. Direct her attention to what you are actually unsure about (e.g., "the color palette," "the user flow," "the spacing").
* Soft Deadline: Instead of "whenever," give a target. This helps her prioritize her week.

Option 1: The Balanced Approach (Recommended)

This version strikes the perfect balance between politeness and productivity.

> "Hi Priya! I’ve put together the initial mockups for [Project Name].
> [Link to file]
> When you have a moment, I’d love your feedback on [specific element, e.g., the layout of the landing page]. I’m hoping to wrap this stage up by [Day/Time], but let me know what works for your schedule. Thanks!"

Option 2: The "Respectful of Time" Approach

Use this if you know she is swamped, but you still want to make progress.

> "Hi Priya, hope you're having a good week. I've drafted the mockups for [Project Name] here: [Link].
> No major rush, but I’d appreciate your eyes on the [specific section] before I move on to high-fidelity. If it’s easier, feel free to leave comments directly in the file rather than messaging back. Thanks!"

Option 3: The "Quick Check" Approach

Use this if you just need a sanity check before spending more time on it.

> "Hi Priya, could I get a quick gut check on these mockups? [Link].
> I want to make sure I’m aligning with the new brand guidelines before I build out the rest of the screens. Does this direction look right to you?"

A Quick Checklist Before You Send

* [ ] Did you change the permissions? Make sure the link is accessible (view/comment access) so she doesn't have to request access.
* [ ] Is the file clean? Delete your "scratchpad" artboards or clearly label the one you want her to review so she doesn't look at the wrong version.

Would you like me to help you draft the specific sentence regarding the "specific element" you want her to critique?

> Humans can model the LLM. The LLM can’t model being modeled

Can’t they? Why not?


I see claims like this so often, which amount to the idea that LLMs lack metacognition (thinking about their thinking / self-reflection). Of course the obvious solution is: ask them to do that -- they're shockingly good at it!

You really must be able to understand the difference between liking a thing and being addicted to a thing?

If not it’s probably worth just starting with basic definitions of addiction.


The DSM-5 acknowledges only gambling as a diagnosable behavioral addiction. Being addicted to a thing means substance addiction, not TikTok or games or internet.

The DSM-5 is not the only way we use words, but even there it clearly distinguishes between liking gambling and being addicted, right?

Sure, I just think we if we're going to pathologize activities with intent to pass laws we ought to at least stick to a science-based approach. Right now there is no basis to conclude that even a single person is addicted to TikTok, since no such diagnosis exists.

The word 'addicted' is used informally in all kinds of contexts where it's a wild exaggeration. Just like people say they have OCD or autism when they sort something, or say they are hypochondriacs when they wash their hands more often than average. Of course people who actually have these conditions might do the same, but a lot of the time it's just perfectly neurotypical people using hyperbole and/or a flawed understanding of psychiatry.

Let's wait until psychiatrists agree on the existence of TikTok addiction and come up with a set of diagnostic criteria. Until such time we should take the existence of such addictions with a grain of salt, and refrain from moral panic.


> since no such diagnosis exists.

The very specifically American diagnostic criteria are not particularly relevant, but more to the point

> we ought to at least stick to a science-based approach. Right now there is no basis to conclude that even a single person is addicted to TikTok,

I'd recommend reading the overview

https://ec.europa.eu/commission/presscorner/detail/en/ip_26_...

> The Commission's preliminary views are based on an in-depth investigation that included an analysis of TikTok's risk assessments reports, internal data and documents and TikTok's responses to multiple requests for information, a review of the extensive scientific research on this topic, and interviews with experts in multiple fields, including behavioural addiction.


The HF page suggests yes, with vllm.

> We've worked hand-in-hand with the vLLM team to have production-grade support for Voxtral Mini 4B Realtime 2602 with vLLM. Special thanks goes out to Joshua Deng, Yu Luo, Chen Zhang, Nick Hill, Nicolò Lucchesi, Roger Wang, and Cyrus Leung for the amazing work and help on building a production-ready audio streaming and realtime system in vLLM.

https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-26...

https://docs.vllm.ai/en/latest/serving/openai_compatible_ser...


If you use something like yt-dlp you can download the audio from the meetings, and you could try things out in Mistral's AI studio.

You could use their api (they have this snippet):

```
curl -X POST "https://api.mistral.ai/v1/audio/transcriptions" \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -F model="voxtral-mini-latest" \
  -F file=@"your-file.m4a" \
  -F diarize=true \
  -F timestamp_granularities="segment"
```

In the api it took 18s to do a 20m audio file I had lying around where someone is reviewing a product.

There will, I'm sure, be ways of running this locally up and available soon (if they aren't in huggingface right now) but the API is $0.003/min. If it's something like 120 meetings (10 years of monthly ones) then it's roughly $20 if the meetings are 1hr each. Depending on whether they're 1 or 10 hours (or if they're weekly or monthly but 10 parallel sessions or something) then this might be a price you're willing to pay if you get the results back in an afternoon.
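The cost arithmetic above can be sketched quickly. The $0.003/min rate is the one quoted in this comment; the meeting counts and lengths are illustrative assumptions:

```python
# Rough batch-transcription cost at a per-minute API rate.
def transcription_cost(n_meetings: int, minutes_each: float,
                       rate_per_min: float = 0.003) -> float:
    return n_meetings * minutes_each * rate_per_min

# 120 monthly meetings (10 years), 1 hour each:
print(f"${transcription_cost(120, 60):.2f}")  # $21.60
```

Scale `minutes_each` up tenfold for 10-hour sessions and it's ~$216, which is the "depends on meeting length" trade-off described above.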

edit - their realtime model can be run with vLLM; the batch model is not open


> You can’t instruct individual agents to contribute to global knowledge

Of course you can. What makes you think you can’t? They can use task trackers like anyone.

Also you can have review on prs, stages where you ask for review and then have them create tickets for improvements (we’re using two http crates, refactor to just use this one).


If you smoke you might not realise just how absolutely awful it smells and how it sticks to material. Sounds of children don’t.

I don't know man. I once heard the sound of children laughing and having fun, and now I've got kids of my own.

So does poor hygiene, smelly dog, a strong perfume you like but maybe not everyone, a newborn poop smell. And for all of those we have washing machines.

If your neighbours have so many kids that the newborn poop smell is filling your home if you open the window for a while and making your curtains and clothes stink of it because it’s that strong you are well within your rights to complain.

Do you smoke? These comparisons are quite odd unless you don’t realise just how strong the smell is.


These aren’t tests against autonomous cars, though; these are tests of what would happen if you used, say, GPT-4o to figure out what to do.

Overkill, I’m sure, for many things, but I’m curious whether there’s a TLA+-style solution for this sort of thing. It feels like there could be, although it depends how well modelled things are (I’m also aware this is a 30-second thought and lots of better-qualified people work on this full time).

