The unfortunate answer is that the US seems to be very bad at fighting regulatory corruption which allows small parts of the market to buy laws which give them a moat. Rinse and repeat over the last half century and you get to the situation we're in.
In no way, shape, or form is the medical industry in the US a free market; it's one of the most heavily regulated sectors in the economy. Remember when the government wanted to make purchasing health insurance mandatory? Forcing employers to pay for their employees' health insurance greatly distorts the market. And many other things...
By the way, as much as people complain about the profit-seeking motives of insurers, many of them have been performing abysmally in the last six months. As it turns out, our current system is bad for just about everyone.
In Romania the employer takes a cut from the employee's salary and gives it to a government agency for health insurance (same thing with income tax, social security (pension), etc.). I think this happens in other European countries as well.
Some employers also offer as a bonus a sort of subscription at a private clinic, so you can see a private doctor or have an operation for a lower price or even for free.
In the USA the government health programs for people on low incomes, children, and pensioners cost about as much as a typical European single-payer health system. Then taxpayers get to pay to be gouged by health insurance companies to get any coverage for themselves.
In this market, neither the producer nor the consumer are responding to price signals and often neither knows what anything costs. The Payer (literal healthcare industry terminology) does but isn't producing nor consuming the service.
This is why this isn't a free market. It's not about regulation, it's about the system being divorced from responding to market dynamics.
There are degrees of freedom, but within the American framework, medical care is on the less-free end of the spectrum.
Aside from all the insurance stuff, you cannot open an MRI imaging lab or similar without a certificate of need from the local government. The supply side is quite literally gated by existing players in the market (via campaign bribes and similar).
Just to tack on, dentistry is an example of a somewhat freer market than 'healthcare', and veterinary care is an example of an even freer (though somewhat different) medical service.
Interestingly, it seems from these statistics that the median wage for individuals with a Master's is lower than for those with a Bachelor's. I wonder if that's because of immigrants who pursue higher education for visa reasons skewing the data.
Anecdotally, many people get a bachelor's degree to check a box for job applications, whereas many people get a master's degree because they love the field and/or are afraid to leave school.
My friends and I who have a bachelor's degree in CS make more money than my friends who have or are working towards master's degrees in CS, because the former are working in the private sector and the latter are in academia making peanuts.
Another possible reason could be that many or most Master's degrees don't confer additional pricing power, and the Bachelor's degrees those same people hold also confer lower pricing power.
Edit: Another possible reason is that Master's degrees were less common in the past, so the Bachelor's pay statistics skew toward people with more work experience in their higher-earning years, whereas the Master's pay statistics skew toward younger people with less work experience.
A Master's seems to be a common theme in a few lower-paying, expanding fields like social work and education. I don't think that someone with a Master's is typically making less in the same field, all else equal.
Yes, people have likened pre-LLM Internet content to low-background steel.
If in the hypothetical future the continual learning problem gets solved, the AI could just learn from the real world instead of publications and retain that data.
I think that recording the dialog with the agent (the prompt, the agent's plan, and the agent's report after implementation) will become increasingly important in the future.
You will also add a markdown file to the changelog directory, named with the current date and time (`date -u +"%Y-%m-%dT%H-%M-%SZ"`). In it, record the prompt and a brief summary of what changes you made; this should be the same summary you gave the developer in the chat.
From that I get the prompt and the summary for each change. It's not perfect but it at least adds some context around the commit.
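A minimal sketch of what that instruction produces; the directory name and the prompt/summary contents here are illustrative, not the commenter's actual setup:

```shell
# Create a timestamped changelog entry (filename uses the same date
# format as the prompt above). Contents are placeholder examples.
ts=$(date -u +"%Y-%m-%dT%H-%M-%SZ")
mkdir -p changelog
cat > "changelog/${ts}.md" <<'EOF'
## Prompt
Add retry logic to the upload client.

## Summary
Wrapped the upload call in a retry loop with exponential backoff.
EOF
```

Each agent session then leaves one self-describing file behind, next to the commit it corresponds to.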
Isn’t the commit message a better place to record the what and why? You might need to feed the agent some info it doesn’t have access to (“we are developing feature X; this change will such and such to blah blah”). The agent will write a pretty good commit message most of the time. Why do you need a markdown file? Are you releasing new versions of the software for third parties?
Cheaper and faster retrieval: the file can be added to the context directly and is discoverable by the agent.
You need more git commands to find the right commit containing the context you want (whether it's you the human, or the LLM burning too many tokens and too much time) than to just include the right MD file or grep it with the proper keywords.
Moreover, you might need multiple commits to get the full context, whereas if you ask the LLM to keep the MD file up to date, you have everything in one place.
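To illustrate the contrast (file names and keywords here are hypothetical): a maintained notes file with keywords can be searched in one command, whereas git often requires chaining log, pickaxe, and show commands per matching commit.

```shell
# Toy notes file standing in for a maintained changelog with keywords.
mkdir -p docs
printf '%s\n' \
  "2024-06-01: Added retry with backoff to upload client (keywords: retry, backoff)" \
  "2024-06-03: Fixed rate-limit handling in API wrapper (keywords: rate limit)" \
  > docs/CHANGELOG.md

# One command surfaces the full context line.
grep -n "rate limit" docs/CHANGELOG.md
```

The git route would typically be `git log -S "rate limit" -- <path>` followed by a `git show` per candidate commit, which is the multi-step cost the comment describes.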
The problem isn't giving MORE context to an agent, it's giving the right context
These things are built for pattern matching, and if you keep their context focused on one pattern, they'll perform much better
You want to avoid dumping in a bunch of data (like a year's worth of git logs) and telling it to sort out what's relevant itself
Better to have pre-processing steps, that find (and maybe summarize) what's relevant, then only bring that into context
You can do that by running your git history through a cheap model, and asking it to extract the relevant bits for the current change. But, that can be overkill and error prone, compared to just maintaining markdown files as you make changes
"You want to avoid dumping in a bunch of data (like a year's worth of git logs) and telling it to sort out what's relevant itself"
So instead you give it a year's worth of changelog.md?
"Better to have pre-processing steps, that find (and maybe summarize) what's relevant, then only bring that into context"
So, not a list of commits that touched the relevant files or are associated with relevant issues? That kind of "preprocessing" doesn't count?
"You can do that by running your git history through a cheap model, and asking it to extract the relevant bits for the current change. But, that can be overkill and error prone, compared to just maintaining markdown files as you make changes"
And somehow extracting the same data out of a [relatively] unstructured and context-free markdown file (the changelog only has dates and descriptions, which will need to be correlated to the actual changes with git anyway...) is magically less error-prone?
Hey you can try it if you like. That's one of the beauties of the current moment, nobody REALLY knows what works best, just a whole lot of people trying stuff
And no, I wouldn't ever give it a year of changelog.md. I give it a short description of the current functionality, and a well-trimmed list of 'lessons-learned' (specific pitfalls/traps from previous work, so the AI doesn't have to repeat them)
If you think git logs are a good way to give context, try it and see how it works! My instinct's that it won't work as well as a short readme, but I could be wrong. It's so easy to prototype these days, no reason not to give it a shot
"a short description of the current functionality, and a well-trimmed list of 'lessons-learned'"
Where does that come from?
"And no, I wouldn't ever give it a year of changelog.md."
No, instead you'll "[run] your git history through a cheap model". Except it's "overkill and error prone". So you're writing it up yourself? You didn't do the work, how do you know what the pitfalls and traps are?
How often, in your experience, do people read those auto-generated markdown files? Do you have any empirical data on how useful people find reading other people's agents' auto-generated files?
Why doesn't this apply to human collaborators as well? If you need all this extra metadata to comprehend the changes, isn't that kind of going backwards? You spend time (setting up the agents, building extensive prompts that explain soooo much of how to do things, adding to whatever markdown file you think controls the parrot) and money (so many token$), to get code that you don't comprehend, and just decide to fill your repo with all of the above to... what exactly does all this accomplish? So you can later ask another parrot to "fix" something?
Agree, but current agents don't help with that. I use Copilot, and you can't even dump it while preserving complete context, including images, tool call results, and subagent outputs. And even if you could, you'd immediately blow up the context trying to ingest that. This needs some supporting tooling, like in today's submission where the agent accesses terabytes of CI logs via ClickHouse.
I've had some luck creating tiny skills that produce summaries. E.g. a current TASK.md is generated from a milestone in PLAN.md, and when work is checked in STATUS.md and README.md are regenerated as needed. AGENTS.md is minimal and shrinking as I spread instructions out to the tools.
Part of my CI process when creating skills involves setting token caps and comparing usage rates with and without the skill.
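A hypothetical sketch of such a CI check: fail the build if a transcript produced with the skill enabled blows past a token cap. Real token counting would use the model's tokenizer; `wc -w` is only a crude stand-in here, and the file name and cap are made up:

```shell
# Fail if the with-skill transcript exceeds the budget.
cap=500
printf 'short transcript produced with the skill enabled\n' > with_skill.log
tokens=$(wc -w < with_skill.log)
if [ "$tokens" -gt "$cap" ]; then
  echo "FAIL: $tokens tokens exceeds cap $cap"
  exit 1
fi
echo "OK: $tokens tokens within cap $cap"
```

Running the same check against a without-skill transcript gives the comparison of usage rates the comment mentions.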
Sure, but OpenAI doesn't have cash. It does have stock.
Even if Nvidia has capped production for now, increased demand still allows them to sell chips at a greater margin. Or, to put another way, presumably Nvidia is charging OpenAI a premium for the privilege of paying with stock.
Anyways, there are many studies showing that rent control is bad in the long term for housing affordability.