This is true - all the global multinationals that essentially make up the US stock market earn a good portion of their revenue in foreign currency, so their revenue and profits, measured in USD, will increase.
In addition, the companies are all cheaper when priced in a devalued USD, so their stocks will go up regardless.
This only counts the short-term effect of currency devaluation. Long term, there are also effects on the trade balance and jobs.
Also, increasing billionaire wealth and the burgeoning (but somewhat circular) market capitalizations of companies will make the economy look good while real income and wealth for the bottom half of Americans keep falling. The mainstream business media is a gaslight factory, completely ignoring the ever-widening K-shaped economic reality: a very good economy for the highest-income people and a rapidly declining, terrible economy for everyone else.
Totally get that — the “open loops stick around, then randomly resurface months later” feeling is very ADHD-coded.
A lightweight trick that doesn’t require a whole new system:
When you act on something, add a single closure line at the top of the note:
Done: <what happened> — <date> — <where to find it>
It turns the note from “still open?” into “closed loop” in 5 seconds, and future-you stops re-processing it.
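If you wanted to automate the habit, even a tiny script can prepend that closure line for you. A minimal sketch in Python (the `close_loop` helper and the note-file layout are hypothetical, just to illustrate the format):

```python
from datetime import date
from pathlib import Path

def close_loop(note_path, what_happened, location):
    """Prepend a closure line to a plain-text note.

    Mirrors the template above:
    Done: <what happened> — <date> — <where to find it>
    """
    note = Path(note_path)
    body = note.read_text() if note.exists() else ""
    closure = f"Done: {what_happened} — {date.today().isoformat()} — {location}\n"
    note.write_text(closure + body)

# Example: mark a note's open loop as finished
close_loop("renew-passport.txt", "submitted renewal form", "receipt in Inbox/Gov")
```

The point isn't the tooling, just that the closure line is cheap enough to add that future-you never has to re-open the loop to check its status.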
If you had to pick one, which is more ADHD-painful for you: too many open loops, or losing track of where the finished thing ended up?
On one end, a farmer or agronomist who just uses a pen, paper, and some education and experience can manage a farm without any computer tooling at all - or even just forecasts the weather and chooses planting times based on the aches in their bones and a finger in the dirt. One who uses a spreadsheet or dedicated farming ERP as a tool can be a little more effective. With a lot of automation, that software tooling can let them manage many acres of farmland more easily and potentially more accurately. But if you keep going, on the other end, there's just a human who knows nothing about the technicalities but owns enough stock in the enterprise to sit on the board and read quarterly earnings reports. They can do little more than say "Yes, let us keep going in this direction" or "I want to vote someone else onto the executive team". Right now, all such corporations have those operational decisions made by humans, or at least outsourced to humans, but it looks increasingly like an LLM agent could do much of that. It might hallucinate something totally nonsensical and leave the owner with a pile of debt, but it's hard to say that Seth as just a stockholder is, in any real sense, a farmer, even if his AI-based enterprise grows a lot of corn.
I think it would be unlikely but interesting if the AI decided that in furtherance of whatever its prompt and developing goals are to grow corn, it would branch out into something like real estate or manufacturing of agricultural equipment. Perhaps it would buy a business to manufacture high-tensile wire fence, with a side business of heavy-duty paperclips... and we all know where that would lead!
We don't yet have the legal frameworks to build an AI that owns itself (see also "the tree that owns itself" [1]), so for now there will be a human in the loop. Perhaps that human is intimately involved and micromanaging, merely a hands-off supervisor, or relegated to an ownership position with no real capacity to direct any actions. But I don't think that you can say that an owner who has not directed any actions beyond the initial prompt is really "doing the work".
If that were the case, Claude would have come up with the idea to grow corn, reached out to Seth, and been giving Seth prompts. That's clearly not what happened, though, so it's pretty obvious who is leveraging which tool here.
It also doesn't help that Claude is incapable of coming up with an idea, incapable of wanting corn, and has no actual understanding of what corn is.
Generally agree. But lack of "understanding" seems impossible to define in objective terms. Claude could certainly write you a nice essay about the history of corn and its use in culture and industry.
I could get the same thing out of "curl https://en.wikipedia.org/wiki/Corn", but curl doesn't understand what corn is any more than Claude does. Claude doesn't understand corn any more than Wikipedia does, either. Just like with Wikipedia, everything Claude outputs about corn came from the knowledge of humans, which was fed into it by other humans and then requested by still other humans. It's human understanding behind all of it. Claude is just a different way to combine and output examples of human thoughts and human-gathered data.
You know it when you see it, but it seems to lack an objective definition that stands up to adversarial scrutiny. Where is the boundary between knowing and repeating? It can be a useful idea to talk about, but if I ever find myself debating whether "knowledge" or "understanding" is happening, there will probably not be any fruitful result. It's only useful if everyone already agrees.
I guess that's basically the idea of the Chinese Room thought experiment.
Copilot Agent and Claude Code use their own sandboxes, which require less setup but are also quite limited. With your own cloud setup, agents can perform better end-to-end testing, including database dependencies and specific tool calls.