#1 is just Siri/Google Assistant with extra steps (and expense).
#2 could be a scheduled task (cron job or something higher level) that calls a plain old AI provider API. IIRC most providers can even do those natively now.
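For what it's worth, a minimal sketch of the cron-plus-API approach, assuming an OpenAI-style chat completions endpoint and an `OPENAI_API_KEY` environment variable; the script name, prompt, model choice, and schedule are all placeholders for illustration:

```python
#!/usr/bin/env python3
# daily_digest.py -- toy scheduled-AI-task sketch.
# Example crontab entry (runs at 8:00 every morning):
#   0 8 * * * /usr/bin/python3 /home/you/daily_digest.py
import os
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def main():
    payload = {
        "model": "gpt-4o-mini",  # any chat-capable model
        "messages": [
            {"role": "user",
             "content": "Give me a short digest of yesterday's developments in ML research."},
        ],
    }
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Print the digest; cron can mail stdout to you or you can pipe it anywhere.
    print(body["choices"][0]["message"]["content"])

if __name__ == "__main__":
    main()
```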
That would be valid if setting up a scheduled AI task actually took any technical knowledge, but it doesn’t. ChatGPT lets you schedule tasks natively in the UI, and I assume others do as well.
#2: Gemini also has scheduled actions. You can just ask it for daily digests about developments in some area, or, while chatting about a current event, tell it to notify you when there's new data, etc.
Okay, a couple of things here... The first is that not all contracts are equally legally binding; terms of service are among the least. The second is that a contract cannot override the law. You can't break the law just because a contract says you can.
It's most of his name. Long before his full name became common knowledge, you could already Google "Scott Alexander psychiatrist" and find him almost instantly.
That part of things is what really made this entire argument fall apart for me.
There are ~50k psychiatrists in the US. Roughly, 1 in 10k people in the US is named Scott. Mathematically, that means knowing "Scott is a psychiatrist" narrows you down to ~5 people (50,000 × 1/10,000 = 5). Even if we assume there's some outlier clustering of people named Scott who are psychiatrists, we're still talking about some small number.
Surely adding the middle name essentially makes him uniquely identifiable without any other corroborating information.
Take a moment and apply some common sense to your math. Do you really think there are 5 psychiatrists in the country named Scott? That's off by multiple orders of magnitude.
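A quick back-of-envelope check of both base rates. The 1-in-10k figure is the parent comment's; the ~800k count of living Americans named Scott is my own rough assumption (the name peaked in popularity mid-century), so treat the second number as illustrative rather than sourced:

```python
# Fermi check: expected number of US psychiatrists named Scott
# under two different base rates for the first name.
PSYCHIATRISTS = 50_000        # parent comment's figure
US_POPULATION = 330_000_000

# Parent's assumption: 1 in 10,000 Americans is named Scott.
rate_low = 1 / 10_000
print(PSYCHIATRISTS * rate_low)          # => 5.0

# Rough alternative: ~800k living Scotts (assumption, not a sourced figure).
rate_high = 800_000 / US_POPULATION
print(round(PSYCHIATRISTS * rate_high))  # => 121
```

The estimate is extremely sensitive to the base rate: a more plausible rate for a once-popular name puts the count well over a hundred, not five.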
They’re able to solve complex, unstructured problems independently. They can express themselves in every major human language fluently. Sure, they don’t actually have a brain like we do, but they emulate it pretty well. What’s your definition of thinking?
When OP wrote about LLMs "thinking," he implied that they have an internal, self-reflecting conceptual state. They don't; they *are* merely next-token-predicting statistical machines.
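To make "next-token-predicting statistical machine" concrete, here's a toy bigram sampler. A real LLM uses learned transformer weights over a vast vocabulary rather than raw counts over a ten-word corpus, but the generation loop (condition on context, produce a distribution over the next token, sample, append, repeat) has the same shape. Everything below is illustrative, not any particular model's code:

```python
import random
from collections import Counter, defaultdict

# Toy corpus; a real LLM is trained on trillions of tokens.
corpus = "the cat sat on the mat and the cat sat on the rug".split()

# Estimate P(next token | current token) from bigram counts.
counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def sample_next(cur):
    """Sample from the empirical next-token distribution, or None at a dead end."""
    dist = counts[cur]
    if not dist:
        return None
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

# Autoregressive loop: predict one token, append it, repeat.
out = ["the"]
for _ in range(10):
    nxt = sample_next(out[-1])
    if nxt is None:
        break
    out.append(nxt)
print(" ".join(out))
```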