Maybe, but the group of people they are/were working with are Extremely Serious, and Not Goofs.
This person was in communications for the 160th Special Operations Aviation Regiment, the group that just flew helicopters into Venezuela. ... And there also appears to be a very unusual connection to Delta Force.
Because it's unacceptable to replace a perfectly good driver in control of their vehicle with a vehicle that might just randomly kill them.
Traffic accidents don't happen randomly at all. If you are not too tired, drunk or using any substances, and not speeding, your chances of causing a serious traffic accident are minuscule.
These are all things you can control (one way or another). You can also adjust your driving to how you are feeling (e.g. take extra looks around you when you are a bit tired).
This feels like the trolley problem applied at scale. Would you deploy a self-driving system that is perfect and stops all fatal accidents, but kills one randomly selected person every day?
Nope: there is no moral justification for potentially killing a person who isn't participating in the risky activity of driving just so other people can be driven around.
Would you sign up for such a system if you could volunteer to participate in it, with those random killings now restricted to those who've signed up, including you?
In all traffic accidents, apart from natural disasters that couldn't be predicted, there is some irresponsibility that led to the event. A human or ten is always to blame.
Not to mention that the problems are hardly equivalent. For instance, a perfect system designed to stop all accidents would likely just crawl to a stop: stationary vehicles have pretty low chances of accidents. I can't think of anyone who would vote to increase their chances of dying without any say in it, and especially not as some computer-generated lottery.
> Would you sign up for such a system if you could volunteer to participate in it, with those random killings now restricted to those who've signed up, including you?
I mean, we already have. You volunteer to participate in a system where ~40k people die in the US every year by engaging in travel on public roadways. If self-driving reduces that to 10k, that's a win. You're not really making any sense.
The USA-wide rate is 1 in 7,800 people dying in traffic accidents yearly, whereas NYC has a rate of 1 in 30,000. I am sure it's even lower for subway riders vs drivers. Even among drivers, somebody doing 4k miles a year has different chances than somebody doing 40k. People usually adapt their driving style after having kids, which also reduces their chances of being in a collision.
Basically, your life choices and circumstances influence your chances of dying in a traffic accident.
At the extreme, you can go live on a mountaintop, produce your own food and not have to get in contact with a vehicle at all (and some cultures even do).
FWIW, I responded to a rhetorical question about killings being random: they are not random today, even if there is a random element to them!
If you want to sign up for a completely random yet expected chance of death that you can't influence at all, good luck! I don't.
In traffic incidents, human drivers are rarely held accountable. It is notoriously difficult to get a conviction for vehicular manslaughter. It is almost always ruled an accident, and insurance pays rather than the human at fault.
Traffic fatalities often kill others, not just the car occupants. Thus, if a self-driving system causes half as many fatalities as a human, shouldn't the moral imperative be to increase self-driving and eventually ban human driving?
For people to die in a traffic accident, there needs to be a traffic accident. Those are usually caused by impaired humans, which means impaired drivers are very often involved in traffic accidents (basically, almost every accident has at least one party of the sort), whereas non-impaired people are involved far less often.
This is a discussion of chances and probabilities: not being impaired significantly reduces your chance of being in a traffic accident since being impaired significantly increases it. I am not sure what's unclear about that?
> A fairly reliable determinant for how the Court will rule is found using a materialist analysis. That is, the Court will generally side with corporations and capital owners when given the choice.
This is a big claim. Do you have any evidence to support it?
When someone tried to prove the same about Congress, it was conclusively shown that the opposite was true:
Rocky and CentOS are both based on Red Hat Enterprise Linux (RHEL).
CentOS used to be a free and open source downstream version of RHEL. Keeping the history short: Red Hat effectively acquired CentOS and discontinued it as a downstream version of RHEL. They turned it into 'CentOS Stream', which is, more or less, a continuously delivered upstream version of RHEL. This wasn't acceptable to a large part of the CentOS user base.
One of the original founders of CentOS, Gregory Kurtzer, started Rocky as an alternative. It's basically what CentOS used to be: a free and open source downstream version of RHEL.
Fedora is more of a playground / cutting-edge technology demonstrator for Red Hat developers. Anything showing up in Fedora won't be included in RHEL for several years, assuming everything goes well.
CentOS Stream slots in between Fedora and RHEL, staying a bit ahead of the RHEL stable release.
No, we're past that point. It's no longer the most useful way to describe these things; we need to understand that they already have some sort of "understanding" which is very similar, if not equal, to what we understand by understanding.
Instruction tuning is distinct from RLHF. Instruction tuning teaches the model to understand and respond (in a sensible way) to instructions, versus 'just' completing text.
RLHF trains a model to adjust its output based on a reward model. The reward model is trained from human feedback.
You can have an instruction-tuned model with no RLHF, RLHF with no instruction tuning, or both instruction tuning and RLHF. They're totally orthogonal.
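To make the "orthogonal" point concrete, here's a toy numpy sketch. It's purely my own illustration with made-up linear "models", function names and numbers, nothing resembling real LLM training: instruction tuning is an ordinary supervised pass over (instruction, response) data, while RLHF first fits a reward model from human preference pairs and then optimizes the policy against it, and you can run either stage without the other.

    # Toy sketch only: tiny linear "models" stand in for an LLM to show that
    # instruction tuning (supervised) and RLHF (reward model + policy update)
    # are separate procedures. All names and numbers here are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 8  # the toy "model" is just a weight vector

    def supervised_instruction_tune(w, pairs, lr=0.1):
        # Plain supervised fine-tuning on (input, target) pairs: no reward
        # model, no RL, just gradient steps on a squared-error loss.
        for x, y in pairs:
            pred = w @ x
            w = w - lr * (pred - y) * x
        return w

    def train_reward_model(prefs, lr=0.1):
        # The "HF" part: fit a reward model from (chosen, rejected) human
        # preference pairs, Bradley-Terry style.
        r = np.zeros(DIM)
        for chosen, rejected in prefs:
            margin = r @ chosen - r @ rejected
            p = 1.0 / (1.0 + np.exp(-margin))
            r = r + lr * (1.0 - p) * (chosen - rejected)
        return r

    def rlhf_policy_step(w, reward_model, lr=0.05, n_samples=32):
        # The "RL" part: crude score-function update that nudges the policy
        # toward outputs the reward model scores highly.
        for _ in range(n_samples):
            noise = rng.normal(size=DIM)
            candidate = w + 0.1 * noise
            reward = reward_model @ candidate
            w = w + lr * reward * noise
        return w

    # Either stage, both, or neither: they never reference each other.
    w0 = rng.normal(size=DIM)
    pairs = [(rng.normal(size=DIM), 1.0) for _ in range(20)]
    prefs = [(rng.normal(size=DIM), rng.normal(size=DIM)) for _ in range(20)]

    sft_only = supervised_instruction_tune(w0.copy(), pairs)
    rlhf_only = rlhf_policy_step(w0.copy(), train_reward_model(prefs))
    sft_then_rlhf = rlhf_policy_step(supervised_instruction_tune(w0.copy(), pairs),
                                     train_reward_model(prefs))
    print("SFT only, RLHF only, and SFT-then-RLHF all produce valid models.")

The point is just that supervised_instruction_tune and the reward-model/policy-update pair are independent procedures, which is why you can combine them in any order or skip either one.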
Man, it really doesn't need to be said that RLHF is not the only way to instruct-tune. The point of my comment was that that's how GPT-3.5 was instruct-tuned: via RLHF on a question-answer dataset.
At least we have this needless nerd snipe so others won't be potentially misled by my careless quip.
But that's still false. RLHF is not instruction fine-tuning. It is alignment.
GPT-3.5 was first fine-tuned (supervised, not RL) on an instruction dataset, and then aligned to human expectations using RLHF.