Hacker News | rafram's comments

That’s basically the US model as well now.

#1 is just Siri/Google Assistant with extra steps (and expense).

#2 could be a scheduled task (cron job or something higher level) that calls a plain old AI provider API. IIRC most providers can even do those natively now.
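For what it's worth, a minimal sketch of the DIY version, assuming the OpenAI Python SDK (the model name, script path, and email address below are placeholders):

  # digest.py - ask the model for a short daily digest
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment
  resp = client.chat.completions.create(
      model="gpt-4o-mini",  # placeholder; use whatever model you prefer
      messages=[{
          "role": "user",
          "content": "Give me a short digest of today's developments in <topic>.",
      }],
  )
  print(resp.choices[0].message.content)

  # crontab entry: run every morning at 7 and mail yourself the output
  # 0 7 * * * python3 /path/to/digest.py | mail -s "Daily digest" you@example.com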


#2 also provides an opinion on why a certain development is significant and how it relates to something else I am working on for that client.

The process is definitely more pleasant to me than setting up cron jobs and scripting things. I have a business to run.


Your comment reminds me of the FTP/Dropbox one.

That would be valid if setting up a scheduled AI task actually took any technical knowledge, but it doesn’t. ChatGPT lets you schedule tasks natively in the UI, and I assume others do as well.

(And OpenClaw is hardly nontechnical!)


I mentioned the same thing and it's getting downvoted lol

#2: Gemini also has scheduled actions. You can just ask it to give you daily digests about developments in some area, or you could even be in the middle of chatting about some current events, and tell it to notify you when there is new data, etc.

> We're building OpenPolicy not necessarily to reduce the risk companies have of litigation

Privacy policy is one thing, but that’s what terms of service are for!


Terms of service don't override laws, so only a fool thinks that they have any effect on litigation.

If a set of terms not overriding the law makes it useless, what do you think contracts are for?

Okay, a couple of things here... The first is that not all contracts are equally legally binding. Terms of service would be among the least. The second is that a contract also cannot override the law... You can't break the law just because it's in a contract...

Parenthesized, comma-separated lists with no “and” are an even stronger tell. Claude loves those.

I also use those extensively; they just flow better, especially if you have an "and" in the surrounding sentence.

It's most of his name. Long before his full name became common knowledge, you could already Google "Scott Alexander psychiatrist" and find him almost instantly.

Yes, but a patient who googled his real name would not find his blog. That was the point.

That part of things is what really made this entire argument fall apart for me.

There are ~50k psychiatrists in the US. Roughly, 1 in 10k people in the US is named Scott. Mathematically, that means knowing "Scott is a psychiatrist" brings you down to ~5 people. Even if we assume there's some outlier clustering of people named Scott who are psychiatrists, we're still talking about some small number.

Surely adding in the middle name essentially makes him uniquely identifiable without any other corroborating information.


> Roughly, 1 in 10k people in the US is named Scott.

Seems to be more like one in 425 per SSA.
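Running the parent's arithmetic with that figure instead:

  50,000 psychiatrists x (1 / 425) ≈ 118

Still a small pool, but nowhere near 5.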


Take a moment and apply some common sense to your math. Do you really think there are 5 psychiatrists in the country named Scott? That's off by multiple orders of magnitude.

No, but I doubt there are more than 100.

The magnitude is so small that anonymity is essentially broken.


That is a very specific set of requirements. I doubt it.

I wouldn’t imagine that fingerprinting them based on request patterns is very difficult.

until your account gets banned.

you can figure out the fingerprinting today, but if they change it tomorrow and wait 5 months to force-update everyone, they will catch you and ban you


Why?

It sounds great to me. AI-generated music is pretty popular with Warhammer 40k lore as well.

Also I tend to listen to songs for a few days, during which time I feel they're the best thing ever, which also helps with momentum during work.

After a few days I have to find other songs. Since AI music started getting more traction it's been way easier to find great songs.

I understand the criticisms of AI music, but that doesn't take away from the fact that for me and a growing number of people it sounds good.


They’re able to solve complex, unstructured problems independently. They can express themselves in every major human language fluently. Sure, they don’t actually have a brain like we do, but they emulate it pretty well. What’s your definition of thinking?

When OP wrote about LLMs "thinking", he implied that they have an internal conceptual self-reflecting state. Which they don't; they *are* merely next-token-predicting statistical machines.

This was true in 2023.

And it still is today.

Any idea how SpeciesNet and iNat’s model compare to BioCLIP 2?

https://huggingface.co/imageomics/bioclip-2

