user_7832's comments

> Also, you certainly got some brass, linking an entirely AI generated article in a forum where extreme distaste is registered for entirely AI generated posts.

> How is your comment not downvoted to oblivion?

I'm sure there's a polite way to say things.

I heavily dislike LLM content, but if you read the content, it's actually got information of value.


I think you're broadly correct and that's definitely a reason, and I have another example to support it.

Mumbai too has a very similar structure (the core city is basically a peninsula that goes north-south). Our railway lines run N-S as well, with (till the recent Metros) feeder roads connecting them.

Mumbai is also one of the most densely populated cities in the world (#2 by some metrics).

Our local railways have an annual ridership of 2.26 billion [1]. Pretty much everyone agrees they're vital to the city.

1 - https://en.wikipedia.org/wiki/Mumbai_Suburban_Railway


> Good, it shouldn't be two clicks for elderly people to install trojans on their phone that then drain their bank account.

And what makes you think that most scams involve fancy zero days/CVEs/hijacking the OS, and not simple social engineering?

You do not require a malicious apk to receive 2FA codes, or for the gullible user to read them aloud to the scammer. All phones come with an SMS and phone app.

You do not require a malicious apk to send transactions in banking apps (eg tricking people selling their product to send the money.)

You do not require a malicious apk to engage in a pig butchering scam, or to buy gift cards.

> There should be some explicit confirmation that the user knows what they are doing and they are not being scammed. It is long overdue.

I agree. Governments should raise awareness of social-engineering countermeasures. But blocking 3rd party apps for this is like using a cannon to shoot a mosquito. I'm not sure it makes the slightest bit of sense.


We can and should address more than one problem at a time.

Malicious APKs are a real problem that exists. I work tangentially in this space.

> But blocking 3rd party apps for this is like using a cannon to shoot a mosquito.

I’d agree, if that was what was going to happen. But it isn’t. Google is not going to block 3rd party apps.


> We can and should address more than one problem at a time.

Very much agree. Here in India, one of the big telcos has now rolled out a system where if you're on a call with an unknown number, OTPs are not sent to the phone till the call ends. IMO systems like this (or ironically - using OEM-installed on-device AI as a MITM to stop a call when an OTP is heard) are very good ideas.

> Malicious APKs are a real problem that exists. I work tangentially in this space.

Not doubting it for a moment. I've myself installed an app (which, in my defense, I pretty much suspected to be malware) that was indeed malware. Just a few weeks ago I helped someone remove a hidden app that was draining their battery like anything (idk what it was doing - crypto mining or something, I guess?). Ofc this app had accessibility permissions and would close Settings if you tried to uninstall it.

On the flip side, I've also been stopped by my own phone from giving accessibility permissions... to TapTap (a FOSS app by legendary developer Kieron Quinn) [1].

I should probably add - here in India, UPI scams use(d?) to be very common, let alone "giving someone your OTP" scams. I personally know someone very close who's lost a good bit of money, purely via someone social engineering them to hand over OTPs.

Even today, scamsters call and threaten a "digital arrest" (whatever the fuck that is) to unsuspecting victims. Presumably many hand over their money.

I have absolutely nothing against technical solutions. But IMO social education to never install apps from outside the play store, combined with "Digital Arrest does not exist" ads that the Indian govt is already running, are significantly stronger and resistant to much more things (like I mentioned - pig butchering or gift card scams).

I would be very curious if you had stats for how much is lost to scams via social engineering, vs malware. I asked Gemini (I can share the chat link via some private method of communication if you're interested), and apparently per IC3, it's 13.7B USD for social engineering, vs 1.57B USD for malware. If you have better data, I'd be happy to know more.

> I’d agree, if that was what was going to happen. But it isn’t. Google is not going to block 3rd party apps.

Perhaps I'm a cynical guy (which is true!), but I see zero reason to give Google the benefit of the doubt when it comes to control. I understand you're perhaps a Googler (or you work on the same side) - nothing against it at all. Hardening is 100% helpful.

But companies famously like to increase revenue and do not care about users. Every app on the Play Store (and btw there are a ton of scammy ones - I know because I get their ads on YouTube :)) nets Google some money. What's stopping Google from going "Actually, we've decided to stop all apk installs since people get scammed by them" tomorrow?

There is no fundamental reason to believe them beyond trusting them at their word. And there are many reasons to not believe them, unfortunately.

IMO, the old adage holds true - beating tech is hard, beating humans (with a wrench ;) is easy. Aka, XKCD 538.

1. https://github.com/KieronQuinn/TapTap 2. https://xkcd.com/538/


I am not a Googler and I am not fond of Google, but I don't have any reason to think that the changes they have proposed are some elaborate fabrication.

A decent amount of this fraud is not solely malware or solely social engineering -- there are often elements of both -- where they fool the person into installing the malware, which helps to further facilitate the scheme. And in these cases, urgency is often used as part of the SE vector. So I think a 24-hour waiting period and a warning about scams are particularly good ideas to mitigate these issues.


I guess we'll see in 5 years how well these comments will age. I can easily see a future where 3rd party apps are not allowed anymore.

The harder it is to install 3rd party apps, the fewer people will do it and therefore care about it. When few enough people care, it will be easy for Google to justify turning it off. e.g. "Only scammers/hackers use APK installs"


> The information value of providing (or receiving) a demo has dropped to roughly zero with vibe coding.

Only if you're a software-only startup. If you have hardware, the entire article is still valid.


> Source - I'm a videographer that also works as a cinematographer / director on smaller budget projects.

Tangential - any helpful advice you could give to budding videographers? I'd love to make those nice B-roll images you see in YouTube videos (Engineering Explained comes to mind).

Most advice is either for folks videoing people, or generally for photography. Funny thing is I'd say I'm already a very solid photographer... but my videos (admittedly shot on my phone) never look as good.


Sure. It's a very broad question but...

Learn to shoot static first. The biggest mistake I see people make when they move from photo to video is moving the camera without intention. Master the basic shot sizes - wide, mid, close-up - with a variety of stills lenses on a tripod (or in hand with good in-camera stabilisation).

Then learn the basic moves - ped, pan, track etc. If you're moving, think about how you're stabilising your camera - gimbal, shoulder rig etc. Most DSLRs do not have good enough stabilisation to allow movement without artifacts.

Make sure you understand your camera. For photos you have much more leeway in post. For video I'd recommend always shooting at the camera's native ISO, at 24/25/30 fps, and keeping the shutter speed at double the frame rate (a 180-degree shutter angle).
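For reference, the 180-degree rule just works out to a shutter time of 1/(2 × frame rate). A tiny sketch (the helper name is made up for illustration, not from any camera API):

```python
def shutter_speed_180(fps: float) -> float:
    """Shutter time (seconds) for a 180-degree shutter angle:
    the shutter is open for half of each frame's duration."""
    return 1.0 / (2.0 * fps)

# At 24 fps the classic "film look" shutter is 1/48 s
# (most cameras round this to the nearest available 1/50 s).
print(shutter_speed_180(24))   # → 0.020833... (i.e. 1/48 s)
print(shutter_speed_180(30))   # 1/60 s
```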

Don't change settings during a shot (other than focus). Set everything to manual, get your ISO, white balance, shutter speed or angle right, and leave it at that for the duration of the shot. If the lighting changes in the shot, your settings should cover the whole extent of the lighting for that shot.

Think about each shot as an image. i.e.: Don't try to catch everything, but focus on a detail, or framing, just as you would with a photo. If you're filming people, how they sit in the frame in relation to the background and other people (how large they are in frame, how they're blocked, whether they're enclosed by foreground detail etc) determines how we see them.

Just focus on all the basic photography stuff - rule of thirds, colour theory, bokeh etc. People just get overwhelmed when they switch to video, but the same rules apply. It's really just moving photographs after all.

Movement happens in time: think about a nice frame of a railway line in a landscape - then a train enters and passes through it. Movement is everywhere - water, reflections, shadows, animals. Find a strong frame in nature or the built environment that has movement, or will have movement passing through it, and shoot that.

Then start thinking about how shots connect together. Even B-roll tells a story and has a rhythm. Wide to close-up, big object to small object, matching motion between shots, directing the viewer's eye as it moves across the frame. You're always telling a story, so when you get 'coverage' try to have the story you'll tell in the edit in mind. If you're capturing a place, what's a wide or ultra-wide that gives us an emotional impression of the place? What are some details that colour it in? What's a change that's occurring that adds movement, life and purpose?

Basically it's about intentionality and choice. What's the feeling you're trying to convey, and which shots convey it best? A good exercise is trying to shoot a happy event in a threatening or disturbing style, or vice versa. Here's an example where I shot and edited a St Patrick's day parade in a nightmarish style - https://www.youtube.com/watch?v=lpj-fK8obPI

Think in terms of the final video or film rather than individual shots. That's the equivalent of the finished photo.


> Man, paying Google/Apple $5/mo is surely a much better solution for her. And are you really doing 3-2-1 on that?

Just some days back someone on reddit posted how their 14yo son (via a family/linked Google account) used Gemini Live to, err, enjoy himself with the camera on.

All his accounts are now permanently locked for CSAM.

So, yes, not being beholden to a megacorp absolutely has its uses.


That Reddit post was thoroughly debunked as untrue. It had some obvious plot holes and inconsistencies.

Google even came out and said that’s not how account suspensions work: They don’t sequentially ban other accounts that have been associated with a device that was associated with an account, as many pointed out.

I’m surprised how many people fell for that obvious piece of Reddit creative fiction. I think we’ll be hearing about it as an urban legend for years.

Reddit has become a place for posting fiction on advice subs. It started on the relationship advice subs but has spread to all of the advice subs now, like the legal advice post you saw. You have to read Reddit with a lot of skepticism.


Thanks, it's good to know this thing wasn't true. I wasn't aware of it at all.

Unfortunately I have seen other horror stories (dad takes a picture to send to the doctor, it uploads to iCloud/Google photos, account gets banned) to be wary of trusting any such large corp.

Partly tangential, but just yesterday there was a post of someone with a Czech password who got locked out of their iPhone. Now of course an iCloud backup might have actually helped them here, but the reliance on "It's Apple, it'll work" is a very common thing (understandably!), yet unfortunately not always justified.


Oh, by the way - this was the account he used for his business (I don't remember if it was a custom domain). He's pretty much lost his only way of communicating with customers. This isn't just a "whoops, let me make a new email" situation.

(You can go to the legal advice UK subreddit if you want to see the post.)


> (You can go to the legal advice UK subreddit if you want to see the post.)

It was removed quickly because it was obviously untrue. The details of the story weren’t even consistent across the posters comments.


> However on android the sampling rate of the acceleration sensor is limited to 50/s. At least if you install through the official app store.

My understanding is that it’s the same even on iOS (or at least on my iPhone SE 2020). More specifically, the output only goes up to 50 Hz (but the sensor's sampling rate is actually 100 Hz - Nyquist: you need a sampling frequency of double the measured frequency, yada yada.)
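To illustrate the Nyquist point: anything above half the sampling rate doesn't just disappear, it aliases down into the measurable band. A quick numpy sketch (toy signal, not real phone data) where a 70 Hz vibration sampled at 100 Hz shows up as 30 Hz:

```python
import numpy as np

fs = 100.0                    # sampling rate (Hz), e.g. the sensor's raw rate
t = np.arange(0, 1, 1 / fs)   # one second of samples

f_true = 70.0                 # a vibration above the 50 Hz Nyquist limit
x = np.sin(2 * np.pi * f_true * t)

# In the spectrum, the 70 Hz tone aliases down to fs - f_true = 30 Hz
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
print(freqs[np.argmax(spectrum)])   # → 30.0
```

This is also why (as noted downthread) the sensors need analog lowpass filters: once aliased, the 30 Hz ghost is indistinguishable from a real 30 Hz vibration.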


I get 100/s on an iPhone SE2. 50/s on a Samsung Galaxy A16, which was released in 2024 or 2025, but that is due to an API restriction. You can export from phyphox (.xlsx or .csv). You get timestamps in the first column. Phyphox refers to the raw data rate, not the Nyquist freq.

The sensors have analog lowpass filters that can be adjusted in order to avoid aliasing.

In general, with more bandwidth you can do more intrusive things. But if you want to tell whether two people ride in the same car, 50 Hz should be sufficient anyway.

Phyphox has a smartphone sensor database:

https://phyphox.org/sensordb/


By the way, it’s important to note that measuring vibrating things can permanently damage the camera's OIS voice coils. (See: Apple’s warning against motorcycle mounts.) My iPhone already had a broken OIS so I didn’t mind as much.

…I'm a bit afraid to ask, but are folks from Greenpeace supposed to be rich or something? (I'm not from the US so idk if it's a cultural thing I'm missing.)

Unless you come from a privileged background, you don't exactly have the free time to go and protest against the destruction of the habitat of toads. And even if you do have the time, you probably don't care.

That is a valid point.

How about the privacy darling Flock?


> I love that we're still learning the emergent properties of LLMs!

TBH, this is (very much my opinion btw) the least surprising thing. LLMs (and especially their emergent properties) are still black boxes. Humans have been studying the human brain for millennia, and we are barely better at predicting how humans work (or, e.g., to what extent free will is a thing). Hell, the emergent properties of traffic were not understood or given proper attention, even though a researcher, as a driver, knows what a driver does. Right now, on the front page, is this post:

> 14. Claude Code Found a Linux Vulnerability Hidden for 23 Years (mtlynch.io)

So it's pretty cool we're learning new things about LLMs, sure, but it's barely surprising that we're still learning it.

(Sorry, mini grumpy man rant over. I just wish we knew more of the world but I know that's not realistic.)


I'm a psychiatry resident who finds LLM research fascinating because of how strongly it reminds me of our efforts to understand the human brain/mind.

I dare say that in some ways, we understand LLMs better than humans, or at least the interpretability tools are now superior. Awkward place to be, but an interesting one.


LLMs are orders of magnitude simpler than brains, and we literally designed them from scratch. Also, we have full control over their operation and we can trace every signal.

Are you surprised we understand them better than brains?


We've been studying brains a lot longer. LLMs are grown, not built. The part that is designed are the low-level architecture - but what it builds from that is incomprehensible and unplanned.


It's not that much longer, really.

LLMs draw origins from both n-gram language models (ca. 1990s) and neural networks and deep learning (ca. 2000). So we've only had really good ones for maybe 6-8 years or so, but the roots of the study go back 30 years at least.

Psychiatry, psychology, and neurology on the other hand, are really only roughly 150 years old. Before that, there wasn't enough information about the human body to be able to study it, let alone the resources or biochemical knowledge necessary to be able to understand it or do much of anything with it.

So, sure, we've studied it longer. But only 5 times longer. And, I mean, we've studied language, geometry, and reasoning for literally thousands of years. Markov chains are like 120 years old, so older than computer science, and you need those to make an LLM.
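The n-gram/Markov-chain lineage mentioned above is simple enough to sketch in a few lines: a bigram "language model" is just a table of which word follows which, sampled repeatedly (toy corpus invented for illustration):

```python
import random
from collections import defaultdict

# Toy corpus; the "model" is nothing but bigram co-occurrence counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)   # duplicates encode frequency

def generate(start: str, length: int = 5, seed: int = 0) -> str:
    random.seed(seed)
    words = [start]
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:         # dead end: no observed continuation
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))
```

An LLM replaces this lookup table with a neural network conditioned on a long context, but the "predict the next token, then sample" loop is the same shape.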

And if you think we went down some dead-end directions with language models in the last 30 years, boy, have I got some bad news for you about how badly we botched psychiatry, psychology, and neurology!


Embedding "meaning" in vector spaces goes back to 1950s structuralist linguistics and early information-retrieval research; there is a nice overview in the draft of the 3rd edition of Speech and Language Processing: https://web.stanford.edu/~jurafsky/slp3/5.pdf
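The core idea is that "meaning" becomes geometry: similar words get nearby vectors, and similarity is measured with cosine distance. A toy sketch (vectors hand-made for illustration, not trained embeddings):

```python
import math

# Hypothetical 3-d "embeddings"; dimensions might loosely encode
# something like [animal-ness, royalty, size].
vec = {
    "cat":   [0.9, 0.0, 0.2],
    "dog":   [0.8, 0.0, 0.3],
    "queen": [0.1, 0.9, 0.4],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# "cat" sits closer to "dog" than to "queen" in this space
print(cosine(vec["cat"], vec["dog"]))    # close to 1
print(cosine(vec["cat"], vec["queen"]))  # much lower
```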


You are still talking about low level infrastructure. This is like studying neurons only from a cellular biology perspective and then trying to understand language acquisition in children. It is very clear from recent literature that the emergent structure and behavior of LLMs is absolutely a new research field.

"Designed" is a bit strong. We "literally" couldn't design programs to do the interesting things LLMs can do. So we gave a giant for loop a bunch of data and a bunch of parameterized math functions and just kept updating the parameters until we got something we liked.... even on the architecture (ie, what math functions) people are just trying stuff and seeing if it works.



> We "literally" couldn't design programs to do the interesting things LLMs can do.

That's a bit of an overstatement.

The entire field of ML is aimed at problems where deterministic code would work just fine, but the amount of cases it would need to cover is too large to be practical (note, this has nothing to do with the impossibility of its design) AND there's a sufficient corpus of data that allows plausible enough models to be trained. So we accept the occasionally questionable precision of ML models over the huge time and money costs of engineering these kinds of systems the traditional way. LLMs are no different.


Saying ML is a field where deterministic code would work just fine conveniently leaves out the difficult part - writing the actual code... which we haven't been able to do for most of the tasks at hand.

What you are saying is fantasy nonsense.


They did not leave it out.

> but the amount of cases it would need to cover is too large to be practical (note, this has nothing to do with the impossibility of its design)


It's not only too large - we can't even enumerate all the edge cases, let alone handle them. It's too difficult.

And all you have to do is write an infinite amount of code to cover all possible permutations of reality! No big deal, really.


Using your logic, we don’t need quantum computers to break encryption, we could just use pen and paper.

> would work just fine, but the amount of cases it would need to cover is too large to be practical

So it doesn't work.


It is impossible to design even in a theoretical sense if functional requirements consider matters such as performance and energy consumption. If you have to write petabytes of code you also have to store and execute it.

[flagged]


I'm a psychiatry resident who has been into ML since... at least 2017. I even contemplated leaving medicine for it in 2022 and studied for that, before realizing that I'd never become employable (because I could already tell the models were improving faster than I was).

You would be sorely mistaken to think I'm utterly uninformed about LLM-research, even if I would never dare to claim to be a domain expert.


> Also, we have full control over their operation and we can trace every signal. Are you surprised we understand them better than brains?

Very, monsieur Laplace.


To be fair to your field, that advancement seems expected, no? We can do things to LLMs that we can't ethically or practically do to humans.


I'm still impressed by the progress in interpretability. I remember being quite pessimistic that we'd achieve even what we have today (and I recall that being the consensus among ML researchers at the time). In other words, while capabilities have advanced at about the pace I expected from the GPT-2/3 days, mechanistic interpretability has advanced even faster than I'd hoped for (even if, in some ways, we are still very far from completely understanding how LLMs work).


Learning about the emergent properties of these black boxes is not surprising, but it's also not daily. I think every new insight is worth celebrating.


Oh I very much agree that it's great to see more research and findings and improvements in this field. I'm just a little puzzled by GP's tone (which suggested that it isn't completely expected to find new things about LLMs, a few years in).


I'm the GP! lol… Not sure how you got that from my tone, but I find these discoveries expected but not routine, and also interesting.


Sorry lol, to me it felt like you were (pleasantly) surprised by this research. IMO I'd hardly be surprised to see breakthroughs in LLM understanding years or even decades from now. I guess I misunderstood your tone.

Indeed. For me, it's also a good reminder that AI is here to stay as technology, that the hype and investment bubble don't actually matter (well, except to those that care about AI as investment vehicle, of which I'm not one). Even if all funding dried out today, even if all AI companies shut down tomorrow, and there are no more models being trained - we've barely begun exploring how to properly use the ones we have.

We have tons of low-hanging fruits across all fields of science and engineering to be picked, in form of different ways to apply and chain the models we have, different ways to interact with them, etc. - enough to fuel a good decade of continued progress in everything.


AI has been here to stay for decades


Maybe, but you couldn't tell that these days, casually scrolling this or any other tech-oriented discussion board.


I mean... You could? AI comes in all kinds of forms. It's been around practically since Eliza. What is (not) here to stay are the techbros who think every problem can be solved with LLMs. I imagine that once the bubble bursts and the LLM hype is gone, AI will go back to exactly what it was before ChatGPT came along. After all, IMO it's quite true that the AIs nobody talks about are the AIs that are actually doing good or interesting things. All of those AIs have been pushed to the backseat because LLMs have taken the driver and passenger seats, but the AIs working on cures for cancer (assuming we don't already have said cure and it just isn't profitable enough to talk about/market) for example are still being advanced.


Saying that LLMs will disappear once the financial hype deflates is like saying that LLMs are the answer to everything.


Personally I read the GP post with more emphasis on this bit:

> What is (not) here to stay are the techbros who think every problem can be solved with LLMs.

LLMs are in all likelihood here to stay, but the scumbags doing business around them right now are hopefully going away eventually.


I agree on that part as well, but saying that AI will go back to what it was before ChatGPT came along is false. LLMs will still be a standalone product and will be taken for granted. People will (maybe? hopefully?) eventually learn to use them properly and not generate tons of slop for the sake of using AI. Many "AI companies" will disappear from the face of the Earth. But our reality has changed.


LLMs will not be just a standalone product. The models will continue to get embedded deep into software stacks, as they're already being today. For example, if you're using a relatively modern smartphone, you have a bunch of transformer models powering local inference for things like image recognition and classification, segmentation, autocomplete, typing suggestions, search suggestions, etc. If you're using Firefox and opted into it, you have local models used to e.g. summarize contents of a page when you long-click on a link. Etc.

LLMs are "little people on a chip", a new kind of component, capable of general problem-solving. They can be tuned and trimmed to specialize in specific classes of problems, at great reduction of size and compute requirements. The big models will be around as part of user interface, but small models are going to be increasingly showing up everywhere in computational paths, as we test out and try new use cases. There's so many low-hanging fruits to pick, we're still going to be seeing massive transformations in our computing experience, even if new model R&D stalled today.


To say we've been studying the brain for millennia is an extreme exaggeration. Modern neuroscience is only about 50 years old.


I hate to "umm, akshually" but apparently we have been studying the brain for thousands of years. I wasn't talking about purely modern neuroscience (which, ironically for our topic of emergence, often treated - and in most places still treats - the brain as the sum of its parts, be they neurons or neurotransmitters).

> The earliest reference to the brain occurs in the Edwin Smith Surgical Papyrus, written in the 17th century BC.

I was actually thinking of ancient greeks when writing my comment, but I suppose Egyptians have even older records than them.

From https://en.wikipedia.org/wiki/History_of_neuroscience


None of that counts as studying the brain. It's like saying rubbing sticks together to make fire counts as studying atomic energy. Those early "researchers" were hopelessly far away from even the most tangential understanding of the workings of the brain.


But fundamentally speaking, they were trying to understand the brain, right? IMO that counts as science/study in my books. They understood parts/basics of intracranial pressure that long ago.

And if we say it's not science if it's not correct, well, (modern) physics isn't a science then, right? ;) As we haven't unified relativity with quantum mechanics?


I came here to say this :)


Studies of LLMs belong in their own field of science, just like psychology is not being studied in the physics department.


Interestingly enough, for a while physics used to be studied by philosophers (and used to be put in the natural philosophy basket, together with biology and most other hard sciences).


That field is called Machine Learning.


No that's still like putting cellular biology and psychology in the same bin.


The intersection of physics isn't psychology, it is philosophy, and the same is true (at present) of LLMs.

Much as Diogenes mocked Plato's definition of man with a plucked chicken, LLMs revealed what "real" AI would require: continuous learning. That isn't to diminish the power of LLMs (they are useful), but that limitation is a fairly hard one to overcome if true AGI is your goal.


Is it because we haven't invented something better than backpropagation yet?

From what I understand, a living neural network learns several orders of magnitude more efficiently than an artificial one.

I'm not sure where that difference comes from. But my brain probably isn't doing back propagation, it's probably doing something very different.
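For context on what backpropagation actually is: the chain rule applied to a loss, then a small parameter step downhill. A minimal sketch with one weight and one data point (values invented for illustration; real frameworks automate exactly this gradient computation):

```python
# Fit y = w * x to a single observation (x=2, y=6) by gradient
# descent on squared error. The gradient is derived by hand here.
x, y = 2.0, 6.0
w = 0.0      # initial weight
lr = 0.1     # learning rate

for _ in range(50):
    y_hat = w * x                 # forward pass
    loss = (y_hat - y) ** 2       # squared error
    grad = 2 * (y_hat - y) * x    # backward pass: d(loss)/dw via chain rule
    w -= lr * grad                # update step

print(round(w, 3))   # → 3.0, since 3 * 2 = 6
```

Whatever the brain does, it almost certainly isn't this literal global gradient propagation, which is part of why the efficiency gap is such an open question.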


Your brain is doing several different things, because there are different parts of your brain.

(eg different kinds of learning for long-term memory, short-term memory, languages, faces and reflexes.)


What is "continuous" learning, and why is it a hard requirement of AGI?


What do you mean by the intersection of physics?

The intersection of what with physics?


The intersection of disciplines.

Sir Roger Penrose, on quantum consciousness (and there is some regret on his part here) -- OR -- Jacob Barandes for a much more current thinking on this sort of intersectional exploratory thinking.


That is a very interesting thought!


I thought it was determined (slight pun) that free will is not a thing. I'm referring to Sapolsky's book "Determined: A Science of Life Without Free Will" as an example.

