Two things that we use for my son's CI are a cable wrap, which gives the cable a bit of reinforcement, and what we call the "eagle claw" - a little plastic hook that goes on the bottom and hooks around the ear lobe to give it a bit more stability.
I don't know how one could classify USS Callister and Black Museum as having happy endings! The fates of the bad guys in those episodes are so horrible they still haunt me. Doesn't make it any better that they are bad guys!
That's the direction Direct File was aiming in. This year it was able to import W-2s and limited other data, but the vision was always to bring in as much info as possible.
The Wikipedia page calls it "mobile-first," the IRS calls it a tool, and I heard it was released as open-source software, so I'd presume it's a platform independent of their regular web page, not just an added capability of the existing online accounts already available to every taxpayer.
Yes, I would really like to see research targeting Connexin 26, since it's the most common cause of hearing loss, but it seems it's much more difficult to "cure."
One mistake you're making is thinking that rationalists care more about people far away than people in their community. The reality is that they set the value of life the same for all.
If children around you are dying of an easily preventable disease, then yes, help them first! If they just need more arts programs, then you help the children dying in another country first.
That's not a mistake I'm making. Assuming you're talking about bog-standard effective altruists: by (claiming to) value the suffering of people far away the same as that of those nearby, they're discounting the people around them heavily compared to other people. Compare to anyone else, who values their friends and family and community far more than those far away. Perhaps they're not discounting them to less-than-parity---just less than most people do.
But anyway this whole model follows from a basic set of beliefs about quantifying suffering and about what one's ethical responsibilities are, and it answers those in ways most people would find very bizarre by turning them into a math problem that assigns no special responsibility to the people around you. I think that is much more contentious and gross to most people than EA thinks it is. It can be hard to say exactly why in words, but that doesn't make it less true.
To me, the non-local focus of EA/rationalism is, at least partially, a consequence of their historically unusual epistemology.
In college, I became a scale-dependent realist, which is to say that I'm most confident of theories / knowledge at the 1-meter, 1-day, 1 m/s scales, and increasingly skeptical of our understanding of things that are bigger/smaller, have longer/shorter timeframes, or move at faster velocities. Maybe there is a technical name for my position? But it is mostly a skepticism about nearly unlimited extrapolation using brains that evolved under selection for reproduction at a certain scale. My position is not that we can't compute at different scales, but that we can't understand at other scales.
In practice, the rationalists appear to invert their confidence, with more confidence in quarks and light-years than daily experience.
> no special responsibility to the people around you
Musing on the different failure-directions: Pretty much any terrible present thing done to people can be rationalized by arguing that one gazillion distant/future people are more important. That includes religious versions, where the stakes of the holy war may be presented as all of future humanity being doomed to infinite torment. There are even some cults that pitch it retroactively: make offerings to the priesthood to save all your ancestors who are in hell because of original sin.
The opposite would be to prioritize the near and immediate, culminating in a despotic god-king. This is somewhat more familiar; we may have more cultural experience and moral tools for detecting and preventing it.
A check on either process would be that the denigrated real/nearby humans revolt. :p
> they're discounting the people around them heavily compared to other people
This statement of yours makes no sense.
EAs by definition are attempting to remove the innate bias that discounts people far away by instead saying all lives are of equal worth.
>turning them into a math problem that assigns no special responsibility to the people around you
All lives are equal isn't a math problem. "Fuck it blow up the foreigners to keep oil prices low" is a math problem, it is a calculus that the US government has spent decades performing. (One that assigns zero value to lives outside the US.)
If $100 can save 1 life 10 blocks away from me or 5 lives in the next town over, what kind of asshole chooses to let 5 people die vs 1?
And since air travel is a thing, what the hell does "close to us" mean?
For that matter, from a purely selfish POV, helping lift other nations up to become fully advanced economies is hugely beneficial to me, and everyone on earth, in the long run. I'm damn thankful for all the aid my country gave to South Korea; the scientific advances that have come out of SK have repaid whatever tax dollars my grandparents contributed many times over.
> It can be hard to say exactly why in words, but that doesn't make it less true.
This is the part where I shout racism.
Because history has shown it isn't about people being far or close in distance, but rather in how those people look.
Americans have shot down multiple social benefit programs because, in the words of people who voted against those programs, "white people don't want black people getting the same help white people get."
Whites in America have voted, repeatedly, to keep themselves poor rather than lift themselves and black families out of poverty at the same time.
Of course Americans think helping people in Africa is "weird".
> If $100 can save 1 life 10 blocks away from me or 5 lives in the next town over, what kind of asshole chooses to let 5 people die vs 1?
The thing about strict-utilitarian-morality is that it can't comprehend any other kind of morality, because it evaluates the morality of... moralities... on its own utilitarian basis. And then of course it wins over the others: it's evaluating them using itself!
There are entirely different ethical systems that are not utilitarian which (it seems) most people hold and innately use (the "personal morality" I'm talking about in my earlier post). They are hard to comprehend rationally, but that doesn't make them less real. Strict-utilitarianism seems "correct" in a way that personal morality does not because you are working from a premise "only things that I can understand like math problems can be true". But what I observe in the world is that people's fear of the rationalist/EA mindset comes from the fact that they empirically find this way of thinking to be insidious. Their morality specifically disagrees with that way of thinking: it is not the case that truth comes from scrutable math problems; that is not the point of moral action to them.
The EA philosophy may be put as "well sure but you could change to the math-problem version, it's better". But what I observe is that people largely don't want to. There is a purpose to their choice of moral framework; it's not that they're looking at them all in a vacuum and picking the most mathematically sound one. They have an intrinsic need to keep the people around them safe and they're picking the one that does that best. EA on the other hand is great if everyone around you is safe and you have lots of extra spending money and what you're maximizing for is the feeling of being a good person. But it is not the only way to conceive of moral action, and if you think it is, you're too inside of it to see out.
I'll reiterate I am trying to describe what I see happening when people resist and protest rationalism (and why their complaints "miss" slightly---because IMO they don't have the language to talk about this stuff but they are still afraid of it). I'm sympathetic to EA largely, but I think it misses important things that are crippling it, of the variety above: an inability to recognize other people's moralities and needs and fears doesn't make them go away; it just makes them hate you.
> The thing about strict-utilitarian-morality is that it can't comprehend any other kind of morality, because it evaluates the morality of... moralities... on its own utilitarian basis.
I can comprehend them just fine, but I have a deep seated objection to any system of morality that leaves behind giant piles of dead bodies. We should be trying to minimize the size of the pile of dead bodies (and ideally eliminate the pile altogether!)
Any system of morality that boils down to "I don't care about that pile of dead bodies being huge because those people look different" is in fact not a system of morality at all.
Well, you won't find anyone who disagrees with you here. No such morality is being discussed.
The job of a system of morality is to synthesize all the things we want to happen / want to prevent happening into a way of making decisions. One such thing is piles of dead bodies. Another is one's natural moral instincts, like their need to take care of their family, or the feeling of responsibility to invest time and energy into improving their future or their community or repairing justice or helping people who need help, or to attend to their needs for art and meaning and fun and love and respect. A coherent moral system synthesizes these all and figures out how much priority to allocate to each thing in a way that is reasonable and productive.
Any system of morality that takes one of these criteria and discards the rest of them is not a system of morality at all, in the very literal sense that nobody will follow it. Most people won't sell out one of their moral impulses for the others, and EA/rationalism feels like it asks them to, since it asks them to place zero value on a lot of things that they inherently place moral value in, and so they find it creepy and weird. (It doesn't ask that explicitly; it asks it by omission. By never considering any other morality and being incapable of considering them, because they are not easily quantifiable/made logical, it asks you to accept a framework that sets you up to ignore most of your needs.)
My angle here is that I'm trying to describe what I believe is already happening. I'm not advocating it; it's already there, like a law of physics.
Perhaps part of it is that local action can often be an order of magnitude more impactful than the “equivalent” action at a distance. If you volunteer in your local community, you not only have fine-grained control over the benefit you bestow, you also know for a fact that you’re doing good. Giving to a charity that addresses an issue on the other side of the world doesn’t afford this level of control, nor this level of certainty. For all you know most of the donation is being embezzled.
I think another part of it is a sort of healthy nativism or in-group preference or whatever you want to call it. It rubs people the wrong way when you say that you care about someone in a different country as much as you care about your neighbors. That's just…antisocial. Taken to its logical conclusion, a "rationalist" should not only donate all of their disposable income to global charities, they should also find a way to steal as much as possible from their neighbors and donate that, too. After all, those children in Africa need the money much more than their pampered western neighbors.
Irrelevant. Those in power don't actually care about saving money. They care about doing what their deep-pocketed donors want them to do, as well as fulfilling their ideology, however misguided and backward it may be.
There are a handful of exceptions (of which SLC is one), but broadly the airport is legally limited to destinations within a 1250 mile perimeter to keep long haul traffic at IAD/BWI.
I don't know why you're getting downvoted, but you are correct. It's essential that as many people as possible know about and use Direct File this year, to provide evidence that it ought to be continued.
I don't think that is average for US tech workers. I don't believe the vast majority of US tech firms have such a thing.
I wouldn't call that "far left" myself – although "far left" as used by American conservatives is a pejorative colloquialism whose meaning has shifted from its traditional definition (Trotskyists, Stalinists, Maoists, etc). Not slang I'd use myself but I can understand it.
I have worked at a place with a bot like that. It's one of over a dozen bots in that repository and seems to have been created and maintained by 3 people over the years. I believe 18F peaked at 250 employees.
Some of the terms are genuinely offensive or unprofessional. I'm not sure about some of them, but I'd expect a government agency to show a higher level of sensitivity and professionalism about their language than a private start-up.
I also note that the bot "lectures" people as a private message.
This is like tiny stuff. What fundamentally matters is the main projects they're working on and if they're doing a good job with that or not.
Maybe a new administration wants to change the culture at GSA/18F. Fine, they can do that.
Nuking an entire department and chucking out a significant chunk of work they've done because of a Slack chatbot and a few minor documents/policies is just mental. It's vindictive score-settling and an ideological purge.
> and chucking out a significant chunk of work they've done
It is unclear how much of the actual work they've done is being "chucked out".
18F did work for various federal agencies, and whatever code 18F wrote for its client agencies would still be in possession of those agencies.
What happens to that code going forward – whether it continues to be maintained by other resources, or whether it just gets archived – is going to be an agency-level decision. Probably some will be kept, others will be thrown out – keeping or abolishing 18F is a separate decision from keeping or abolishing the agency-level projects/initiatives 18F was working on. (And even if 18F had survived, probably some of that code would eventually have been thrown out anyway, since government IT projects frequently end up failing and being cancelled, and 18F involvement is no guarantee against that outcome.)
Obviously, if those projects are going to be kept, removing 18F resources is going to cause a delay to the project – but maybe other resources will be found. It also depends on what percentage of the project resources were from 18F. If a project was 18F-heavy, it may take a big hit, if 18F's contribution was smaller, the negative impact might be smaller.
18F was funded out of the Acquisition Services Fund (ASF), managed by the Federal Acquisition Service (FAS) within GSA. FAS is legally obligated to spend ASF funds on federal technology modernization projects. Without 18F, FAS will have to find some other mechanism to spend those ASF funds on technology modernization. So, while of course there will be a delay, agencies which were relying on 18F may still end up getting help from GSA TTS for their modernization projects. I wouldn't be surprised if ASF funds were redirected towards DOGE, and DOGE was then tasked with working on those projects.
Building a bot to harangue people about pronoun usage seems like a giant waste of time and resources to me. Those sorts of cultural preferences are a feature of only a very very small portion of the US political culture. Maybe nuking the whole department was a bit strong, I don't know, but if I worked at a place that had tools like that I'd quit, and I think a lot of other people would too. Which suggests that the overall culture of 18F was far from the mainstream of America. It should reflect the middle, no?
If a government agency has a culture which appears to lean strongly in one political direction, it is unsurprising that when the opposite political persuasion gets into power, the agency becomes a target.
Traditionally, civil servants handle this by being aware of the political sensitivities of both sides and trying to avoid language which overly triggers either. But people seem to be forgetting that tradition, or even intentionally discarding it.
Not strictly speaking a government agency, but what about the Judicial Procedures Reform Bill of 1937?
I think there was another possible reason for getting rid of 18F, separate from any concerns about its political culture – there was a lot of overlap in the mission statements of 18F and USDS, and it wasn't clear why both existed. Yes, I do understand that they differed somewhat in their working methods and area of focus, but I don't think anyone can deny that they were both ultimately trying to do the same thing. In fact, at one point 18F was even going to be called USDS, until GSA was forced to pick a different name when they discovered OMB was already using it. Abolishing 18F can be seen as a way of rationalizing federal technology modernization efforts.
And 18F wasn't formally speaking a government agency – it was just a team within GSA. It hadn't been established by law, just by an internal executive branch policy decision. Hence, abolishing it is just an internal restructure within GSA, it isn't a genuine case of "abolishing a government agency".
I believe laid-off 18F workers are still allowed to apply for open positions in the US government, including DOGE positions. So if they are still keen on contributing to 18F's mission, they may have the opportunity.
I don't think a failed attempt at court reform (ideologically motivated or no) from almost a century ago is very convincing evidence that this is typical practice on both sides of the aisle.
> there was a lot of overlap in the mission statements of 18F and USDS, and it wasn't clear why both existed
And now neither of them exist (the vast majority of what once was USDS is gone, and what remains has been converted into "DOGE").
I don't know what the quotes around "abolishing a government agency" indicate -- those words weren't used previously in this thread.
> And now neither of them exist (the vast majority of what once was USDS is gone, and what remains has been converted into "DOGE").
Do any of the old USDS staff survive? I don't know. USDS acting administrator, Amy Gleason, used to work for USDS under the Trump and Biden admins, so it sounds like there is still room for "old USDS" staff in "new USDS" – if they are happy to be there, and if the new administration is happy to have them.
And I don't think DOGE's remit is completely distinct from that of USDS. Of course, DOGE is a lot broader in scope than USDS, but according to Executive Order 14158 which established it, a big part of its mission is software modernization–same as old USDS was–and DOGE staff appear to include a number of software engineers, which also aligns with that mission.
> I don't know what the quotes around "abolishing a government agency" indicate -- those words weren't used previously in this thread.
You asked the question "Which government agencies have been targeted by Democrats for being too conservative?" – which seems to put 18F in the category of "government agencies" - if it isn't one in some sense, then the question isn't asking for a relevant comparator. And the title of this thread is "GSA Eliminates 18F", and "eliminates" is a synonym of "abolition". So, the premise of your question implies "abolishing a government agency". Which in a sense abolishing 18F is, since it was sort-of-kind-of a government agency – but strictly speaking it isn't, since strictly it wasn't – hence the quotes.
40 were laid off and 21 resigned (and Musk claimed that they would have been fired for being Democrats, regardless). That's approximately 60% of the total.
As for "government agency", I was using your language:
> If a government agency has a culture [etc]... the agency becomes a target
Again, your claim here is that it's typical and predictable that new administrations conduct ideological purges on the civil service. So far, you haven't actually been able to name a single example of a Democratic administration doing that, and instead you're saying that maybe 18F was bad anyway, etc. Would you consider just admitting that your claim is false rather than resorting to this "by definition, strictly speaking, the premise of your question implies" ink cloud?
> Again, your claim here is that it's typical and predictable that new administrations conduct ideological purges on the civil service.
No, I'm not denying this is reaching a level which hasn't been seen before.
But, perceptions of political impartiality of civil servants have been greatly eroded.
Imagine if the situation were reversed, if Democrats had a widespread perception that the federal bureaucracy had a pro-GOP/anti-Democrat bias – can you be so sure they wouldn't do similar things?
Direct File is following a phased roll-out approach to avoid the "big launch" problem that tends to plague government tech projects. The goal is to serve all citizens, but taxes are very complex, and it will take time to address all scenarios.
Also, as the below commenter mentioned, states need to agree to be part of Direct File.
https://www.etsy.com/listing/870982894/cochlear-implant-cabl...