Hacker News | alex_sf's comments

That's not worth 45k. It's barely worth anything for a typical website, tbh.

Tbf, there is no one with a ‘serious DevSecOps background’. It’s an incredibly strong hint that the person is largely a goof.


Maybe, but the group of people they are/were working with are Extremely Serious, and Not Goofs.

This person was in communications for the 160th Special Operations Aviation Regiment, the group that just flew helicopters into Venezuela. ... And there looks to be a very unusual connection to Delta Force.


> Ralph loops are also stupid because they don't make use of kv cache properly.

This is a cost/resources thing. If it's more effective and the resources are available, it's completely fine.
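To put rough numbers on the cost side, here's a minimal back-of-envelope sketch. All figures are hypothetical; it just assumes cached input tokens are billed at roughly a tenth of the uncached rate, which varies by provider:

    # Hypothetical numbers for illustration only.
    ITERATIONS = 20
    PROMPT_TOKENS = 8_000          # fixed spec/prompt re-sent every iteration
    NEW_TOKENS_PER_ITER = 1_000    # output appended after each iteration
    PRICE_PER_1K_UNCACHED = 0.003  # assumed $/1k input tokens, no cache hit
    PRICE_PER_1K_CACHED = 0.0003   # assumed $/1k input tokens on a cache hit

    def cost_without_cache() -> float:
        """Every iteration reprocesses the full prompt plus all prior output."""
        total = 0.0
        for i in range(ITERATIONS):
            context = PROMPT_TOKENS + i * NEW_TOKENS_PER_ITER
            total += context / 1000 * PRICE_PER_1K_UNCACHED
        return total

    def cost_with_prefix_cache() -> float:
        """Previously seen context hits the cache; only new tokens pay full price."""
        total = 0.0
        for i in range(ITERATIONS):
            cached = 0 if i == 0 else PROMPT_TOKENS + (i - 1) * NEW_TOKENS_PER_ITER
            fresh = PROMPT_TOKENS if i == 0 else NEW_TOKENS_PER_ITER
            total += cached / 1000 * PRICE_PER_1K_CACHED
            total += fresh / 1000 * PRICE_PER_1K_UNCACHED
        return total

    print(f"no cache reuse:    ${cost_without_cache():.2f}")    # ~$1.05
    print(f"prefix cache hits: ${cost_with_prefix_cache():.2f}")  # ~$0.18

Whether that gap matters depends entirely on how much more effective the loop is and what resources you have available.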


If the goal is to reduce the number of fatal mistakes, why is that argument garbage?


Because it's unacceptable to replace a perfectly good driver in control of their vehicle with a vehicle that might just randomly kill them.

Traffic accidents don't happen randomly at all. If you are not too tired, drunk or using any substances, and not speeding, your chances of causing a serious traffic accident are minuscule.

These are all things you can control (one way or another). You can also adjust your driving to how you are feeling (eg take extra looks around you when you are a bit tired).


This feels like the trolley problem applied at scale. Would you deploy a self-driving system that is perfect and stops all fatal accidents, but kills one randomly selected person every day?


Nope: there is no moral justification to potentially kill a person not participating in the risky activity of driving just so we could have other people be driven around.

Would you sign up for such a system if you can volunteer to participate in it, with now those random killings being restricted to those who've signed up for it, including you?

In all traffic accidents, some irresponsibility led to the event, except for natural disasters that couldn't be predicted. A human or ten is always to blame.

Not to mention that the problems are hardly equivalent. For instance, a perfect system designed to stop all accidents would likely just crawl to a stop: stationary vehicles have pretty low chances of accidents. I can't think of anyone who would vote to increase their chances of dying without any say in it, and especially not as some computer-generated lottery.


> Would you sign up for such a system if you can volunteer to participate in it, with now those random killings being restricted to those who've signed up for it, including you?

I mean, we already have. You volunteer to participate in a system where ~40k people die in the US every year by engaging in travel on public roadways. If self-driving reduces that to 10k, that's a win. You're not really making any sense.


But none of that is random.

Eg. NYC (population estimate 8.3M) had 273 fatalities in 2021 (easy to find full year numbers for): https://www.triallaw1.com/data-shows-2021-was-the-deadliest-...

USA (population estimate 335M) had 42,915 (estimated) according to https://www.nhtsa.gov/press-releases/early-estimate-2021-tra...

USA-wide rate is 1 in 7,800 people dying in traffic accidents yearly, whereas NYC has a rate of 1 in 30,000. I am sure it's even lower for subway riders vs drivers. Even among drivers, somebody doing 4k miles a year has different chances than somebody doing 40k. People usually adapt their driving style after having kids, which also reduces the chances of them being in a collision.
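A quick back-of-envelope check of those rates, using the figures cited above:

    nyc_population = 8_300_000      # estimate used above
    nyc_fatalities = 273            # 2021, per the first link
    us_population = 335_000_000     # estimate used above
    us_fatalities = 42_915          # 2021 NHTSA early estimate

    print(f"NYC: about 1 in {nyc_population / nyc_fatalities:,.0f} residents per year")
    print(f"USA: about 1 in {us_population / us_fatalities:,.0f} residents per year")
    # Roughly 1 in 30,400 for NYC vs. 1 in 7,800 nationwide.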

Basically, your life choices and circumstances influence your chances of dying in a traffic accident.

At the extreme, you can go live on a mountaintop, produce your own food and not have to get in contact with a vehicle at all (and some cultures even do).

FWIW, I responded to a rhetorical question about killings being random: they are not random today, even if there is a random element to them!

If you want to sign up for a completely random yet expected chance of death that you can't influence at all, good luck! I don't.


In traffic incidents, human drivers are rarely held accountable. It is notoriously difficult to get a conviction for vehicular manslaughter. It is almost always ruled an accident, and insurance pays rather than the human at fault.

Traffic fatalities often kill others, not just the car occupants. Thus, if a self-driving system causes half as many fatalities as a human, shouldn't the moral imperative be to increase self-driving and eventually ban human driving?


> If you are not too tired, drunk or using any substances, and not speeding, your chances of causing a serious traffic accident are minuscule.

You realize that like.. other people exist, right?


You realize that I said "causing"?

For people to die in a traffic accident, there needs to be a traffic accident. Accidents are usually caused by impaired drivers, which means impaired drivers are very often involved in them (almost all accidents have at least one impaired party), whereas non-impaired people are involved far less often.

This is a discussion of chances and probabilities: not being impaired significantly reduces your chance of being in a traffic accident since being impaired significantly increases it. I am not sure what's unclear about that?


Taking RLHF into account: it's not actually generating the most plausible completion, it's generating one that's worse.


> A fairly reliable determinant for how the Court will rule is found using a materialist analysis. That is, the Court will generally side with corporations and capital owners when given the choice.

This is a big claim. Do you have any evidence to support it?

In the wake of someone trying to prove the same for Congress, it was conclusively shown that the opposite was true:

https://www.vox.com/2016/5/9/11502464/gilens-page-oligarchy-...

I see several opinion pieces making the same claim, but no actual studies of their decisions.

More importantly: the concern can't and shouldn't be the income of the parties involved in a suit, but who is right and who isn't.


Rocky and CentOS are both based on Red Hat Enterprise Linux (RHEL).

CentOS used to be a free and open source downstream version of RHEL. Keeping the history short: Red Hat effectively acquired CentOS and discontinued it as a downstream version of RHEL. They turned it into 'CentOS Stream', which is, more or less, a continuously delivered upstream version of RHEL. This isn't acceptable to a large part of the CentOS user base.

One of the original founders of CentOS, Gregory Kurtzer, started Rocky as an alternative. It's basically what CentOS used to be: a free and open source downstream version of RHEL.


Huh, that's interesting. Isn't Fedora the upstream version of Red Hat already? Or is the main distinction that CentOS Stream is rolling release?

I'm pretty much on the complete opposite end of the Linux ecosystem, working primarily on embedded systems.


Fedora is more of a playground / cutting-edge technology demonstrator for Red Hat developers. Anything showing up in Fedora won't be included in RHEL for several years, assuming everything goes well.

CentOS Stream slots in between Fedora and RHEL, keeping a bit ahead of the RHEL stable release.


Lambda availability is awful.


A really good token predictor is still a token predictor.


No, we're past that point. It's no longer the most useful way to describe these things; we need to understand that they already have some sort of "understanding" which is very similar, if not equal, to what we understand by understanding.

Don't take my word for it, listen to Geoffrey Hinton explain it instead: https://youtu.be/qpoRO378qRY?t=1988


Instruction tuning is distinct from RLHF. Instruction tuning teaches the model to understand and respond (in a sensible way) to instructions, versus 'just' completing text.

RLHF trains a model to adjust its output based on a reward model. The reward model is trained from human feedback.

You can have an instruction tuned model with no RLHF, RLHF with no instruction tuning, or instruction tuning and RLHF. Totally orthogonal.
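To make the "totally orthogonal" point concrete, here's a schematic sketch (placeholder functions only, not any particular library's API) of the two procedures as independent steps:

    def instruction_tune(model, pairs):
        """Supervised fine-tuning on (instruction, response) examples."""
        for instruction, response in pairs:
            pass  # placeholder for a supervised gradient step
        return model

    def train_reward_model(comparisons):
        """Fit a scorer from human preferences (chosen vs. rejected outputs)."""
        return lambda text: 0.0  # placeholder reward function

    def rlhf(model, reward_fn, prompts):
        """Optimize the model against the learned reward (e.g. with PPO)."""
        for prompt in prompts:
            pass  # placeholder for an RL update scored by reward_fn
        return model

    base = "pretrained completion-only model"  # stand-in for real weights

    # Instruction tuning without RLHF (typical of many local instruct models):
    instruct_only = instruction_tune(base, [("Summarize X.", "X is ...")])

    # RLHF without instruction tuning:
    reward_fn = train_reward_model([("preferred answer", "rejected answer")])
    rlhf_only = rlhf(base, reward_fn, ["Summarize X."])

    # Both: supervised instruction tuning first, then RLHF on top.
    both = rlhf(instruction_tune(base, [("Summarize X.", "X is ...")]),
                reward_fn, ["Summarize X."])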


In this case Open AI used RLHF to instruct-tune gpt3. Your pedantism here is unnecessary.


Not to be pedantic, but it’s “pedantry”.


It's not being pedantic. RLHF and instruction tuning are completely different things. Painting with watercolors does not make water paint.

Nearly all popular local models are instruction tuned, but are not RLHF'd. The OAI GPT series are not the only LLMs in the world.


Man it really doesn't need to be said that RLHF is not the only way to instruct tune. The point of my comment was to say that was how GPT3.5 was instruct tuned, via RLHF through a question answer dataset.

At least we have this needless nerd snipe so others won't be potentially misled by my careless quip.


But that's still false. RLHF is not instruction fine-tuning. It is alignment. GPT 3.5 was first fine-tuned (supervised, not RL) on an instruction dataset, and then aligned to human expectations using RLHF.


You're right, thanks for the correction


It sounds like we both know that's the case, but there's a ton of incorrect info being shared in this thread re: RLHF and instruction tuning.

Sorry if it came off as more than looking to clarify it for folks coming across it.


Yes, all that misinfo was what led me to post a quick link. I could have been more clear anyways. Cheers.

