galaxyLogic's comments | Hacker News

I think what's lacking in LLMs creating code is that they can't "simulate" what a human user would experience while using the system. So they can't really evaluate alternative solutions to the human-app interaction.

We humans can imagine it in our mind because we have used the PC a lot. But it is still hard for us to anticipate how the actual system will feel for the end-users. Therefore we build a prototype, and once we use the prototype we learn: hey, this cannot possibly work productively, so we must try something else. The LLM does not try to use a virtual prototype and then learn it is hard to use. Unlike Bill Clinton, it doesn't feel our pain.


I don't think there is any mystery to what we call "consciousness". Our senses and brain have evolved so we can "sense" the external world, so we can live in it and react to it. So why couldn't we also sense what is happening inside our brains?

Our brain needs to sense our "inner talk" so we can let it guide our decision-making and actions. If we couldn't remember sentences, we couldn't remember "facts", and we would be much worse off for it. And talking with our "inner voice" and hearing it, isn't that what most people would call consciousness?


This is not nearly as profound as you make it out to be: a computer program also doesn't sense the hardware it runs on; from its point of view the hardware is invisible until it is made explicit, as with peripherals.

You also don’t consciously use your senses until you actively think about them. Same as “you are now aware of your breathing”. Sudden changes in a sensation may trigger them to be conscious without “you” taking action, but that’s not so different. You’re still directing your attention to something that’s always been there.

I agree with the poster (and Daniel Dennett and others) that there isn't anything that needs explaining. It's just a question-framing problem, much like the measurement problem in quantum mechanics.


Another one who thinks they solved the hard problem of consciousness by addressing the easy problem. How on earth does a feedback system cause matter to "wake up"? We are making lots of progress on the easy problem, though.

This is not as good a refutation as you think it is. To me (and, I imagine, the parent poster) there is no extra logical step needed. The problem IS solved in this sense.

If it’s completely impossible to even imagine what the answer to a question is, as is the case here, it’s probably the wrong question to pose. Is there any answer you’d be satisfied by?

To me the hard problem is more or less akin to looking for the true boundaries of a cloud: a seemingly valid quest, but one that can’t really be answered in a satisfactory sense, because it’s not the right one to pose to make sense of clouds.


> If it’s completely impossible to even imagine what the answer to a question is, as is the case here, it’s probably the wrong question to pose. Is there any answer you’d be satisfied by?

I would be very satisfied to have an answer, or even just convincing heuristic arguments, for the following:

(1) What systems experience consciousness? For example, is a computer as conscious as a rock, as conscious as a human, or somewhere in between?

(2) What are the fundamental symmetries and invariants of consciousness? Does it impact consciousness whether a system is flipped in spacetime, skewed in spacetime, isomorphically recast in different physical media, etc.?

(3) What aspects of a system's organization give rise to different qualia? What does the possible parameter space (or set of possible dynamical traces, or what have you) of qualia look like?

(4) Is a consciousness a distinct entity, like some phase transition with a sharp boundary, or is there no fundamentally rigorous sense in which we can distinguish each and every consciousness in the universe?

(5) What explains the nature of phenomena like blindsight or split-brain patients, where seemingly high-level recognition, coordination, and/or intent occurs in the absence of any conscious awareness? Generally, what behavior-affecting processes in our brains do and do not affect our conscious experience?

And so on. I imagine you'll take issue with all of these questions, perhaps saying that "consciousness" isn't well defined, or that an "explanation" can only refer to functional descriptions of physical matter, but I figured I would at least answer your question honestly.


I think most of them are valid questions!

(1) is perhaps more of a question requiring a strict definition of consciousness in the first place, making it mostly circular. (2) and especially (3) are the most interesting, but they seem to be part of the easy problem instead. And I'd say we already have indications that the latter option of (4) is true, given your examples from (5) and things like sleep (the most common reason for humans to be unconscious) coming in distinct phases with different wake-up speeds (pun partially intended). And if you assume animals to be conscious, then some sleep with only one hemisphere at a time. Are they equally as conscious during that?

In my imaginary timeline of the future, scientific advancements lead to us noticing what's different between a person's brain in its conscious and unconscious states, then somehow generalizing that into a more abstract model of cognition decoupled from our biological implementation, and then eventually tackling all your questions from there. But I suspect the person I originally replied to would dismiss all that as part of the easy problem instead, i.e. completely useless for tackling the hard problem! As far as I'm concerned, it's the hard problem that I take issue with, and the one that I claim isn't real.


I much agree, especially on the importance of defining what we mean by the word "consciousness" before we say we cannot explain it. Is a rock conscious? Sure, according to some definition of the word. Probably everybody would agree that there are different levels of consciousness, and maybe we'd need different names for them.

Animals are clearly conscious in that they observe the world and react to it and even try to proactively manipulate it.

The next level of consciousness, and what most people probably mean when they use the word, is the human ability to "think in language". That opens up a whole new level of consciousness, because now we can be conscious of our inner voice. We are conscious of ourselves, apart from the world. Our inner voice can say things about the thing which seems to be the thing uttering the words in our mind: me.

Is there anything more to consciousness than us being aware that we are conscious? It is truly a wondrous experience, which may seem like a hard problem to explain; hence the "Hard Problem of Consciousness", right? But it's not so mysterious if we think of it in terms of being able to use and hear and understand language. Without language our consciousness would be on the level of most animals, I assume. Of course it seems that many animals use some kind of language. But do they hear their "inner voice"? Hard to say. I would guess not.

And so again, in simple terms, what is the question?


This is precisely the matter, I wholeheartedly agree. The metacognition that we have, that only humans are likely to have, is the root of the millennia-long discussions on consciousness. And the hard problem stems from whatever was left of traditional philosophers getting hit by the wall of modern scientific progress, not wanting to let go of the mind as some metaphysical entity beyond reality, with qualia and however many ineffable private properties.

The average person may not know the word qualia, but “is your red the same as my red” is a popular question among kids and adults. Seems to be a topic we are all intrinsically curious about. But from a physical point of view, the qualia of red is necessarily some collection of neurons firing in some pattern, highly dependent on the network topology. Knowing this, then the question (as it was originally posed) is immediately meaningless. Mutatis mutandis, same exact argument for consciousness itself.


Talking of "qualia" I think feeling pain is a good example. We all feel pain from time to time. It is a very conscious experience. But surely animals feel pain as well, and it is that feeling that makes them avoid things that cause them pain.

Evolution just had to give us some way to "feel", to be conscious, about some things causing us pain while other things cause us pleasure. We are conscious of them, and I don't think there's any "hard question" about why we feel them :-)


How about AI-generated widgets? I just tell AI what I want to see in a widget and it creates it?

Maybe simply "Show news about this topic"?


I think that's what Google Disco is:

https://www.theverge.com/tech/842000/google-disco-browser-ai...

Maybe? I really struggled to understand this product from the description and screenshots alone.


But placebo works, right? Then again, it only works if you don't know that it's a placebo you are getting.

Placebos often work, even when a placebo is known to be a placebo.

I use rain-sounds or white noise plus noise-cancelling headphones to drown out my neighbor's TV. It bugs me that I have to hear advertisements coming over the wall when I wake up. If I'm really pissed off I turn on some reggae music with good bass. It always calms me down.

This makes me think about how AI turns SW development upside down. In traditional development we write code, which is the answer to our problems. With AI we write questions and get the answers. Neither is easy; finding the correct questions can be a lot of work, whereas if you have some existing code you already have the answers, but you may not have the questions (= "specs") written down anywhere, at least not very well, typically.

At least in my experience the AI agents work best when you give them a description of a concrete code change like "Write a function which does this here" rather than vague product ideas like "The user wants this problem solved". But coming up with the prompts for an exact code change is often harder than writing the code.
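
To make that concrete, here is a made-up illustration (not from any real session; the function name and behavior are hypothetical): a prompt like "Write a function that normalizes US phone numbers to E.164 form, raising ValueError on bad input" pins down the inputs, outputs, and failure behavior, so the result is easy to verify.

    # Hypothetical result of the concrete prompt above.
    import re

    def to_e164(raw: str) -> str:
        """Normalize a US phone number to E.164 form, e.g. '+15551234567'."""
        digits = re.sub(r"\D", "", raw)  # keep only the digits
        if len(digits) == 10:
            return "+1" + digits
        if len(digits) == 11 and digits.startswith("1"):
            return "+" + digits
        raise ValueError(f"not a US phone number: {raw!r}")

A vague prompt like "handle the user's phone numbers properly" leaves every one of those decisions for the agent to guess.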

> both the actor model (and its relative, CSP) in non-distributed systems solely in order to achieve concurrency has been a massive boondoggle and a huge dead end.

Why is that so?


Well, lots of people have tried it and spent a lot of money on it and don't seem to have derived any benefit from doing so.

Actors can be made to do structured concurrency as long as you allow actors to wait for responses from other actors, and implement hierarchy so that if an actor dies, its children do as well. And that's how I use them! So I have to say the OP is just ignorant of how actors are used in practice.
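
For concreteness, here is a toy sketch of that style in Python with asyncio (purely illustrative; real systems would use something like Akka or OTP, and every name here is made up): each actor drains a mailbox, ask() sends a message and waits for the reply, and a parent that dies cancels its children.

    import asyncio

    class Actor:
        # Toy actor: a task draining a mailbox; children die with the parent.
        def __init__(self):
            self.mailbox = asyncio.Queue()
            self.children = []
            self.task = asyncio.create_task(self._run())

        async def _run(self):
            try:
                while True:
                    msg, reply = await self.mailbox.get()
                    reply.set_result(await self.handle(msg))
            finally:
                for child in self.children:  # hierarchy: parent death kills children
                    child.task.cancel()

        async def handle(self, msg):  # override in subclasses
            return msg

        def spawn(self, actor_cls):
            child = actor_cls()
            self.children.append(child)
            return child

        async def ask(self, msg):
            # Send a message and wait for the response (request/reply).
            reply = asyncio.get_running_loop().create_future()
            await self.mailbox.put((msg, reply))
            return await reply

    class Doubler(Actor):
        async def handle(self, msg):
            return msg * 2

    async def main():
        root = Actor()
        child = root.spawn(Doubler)
        print(await child.ask(21))  # 42: waiting on a reply, as described
        root.task.cancel()          # killing the parent cancels the child too
        await asyncio.sleep(0)      # let the cancellation propagate

    asyncio.run(main())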

> Actors can be made to do structured concurrency as long as you allow actors to wait for responses from other actors

At which point they're very much not actors any more. You've lost the deadlock avoidance, you can't do the `become`-based stuff that looks so great in small demos. At that point what are you gaining from using actors at all?


If you don't think actors are useful just because you need to wait for responses, I guess you've never used actors. It's just so implausible that someone would say that if they had, you know, actually done it.

To adapt the analogy from the link in the root comment, this is akin to saying "`goto` can be made to do structured programming as long as you strictly ensure that the control flow graph is reducible". Which is to say, it is a true statement that manages to miss the point: the power of both structured programming and structured concurrency comes from defining new primitives that fundamentally do the right thing and don't even give you the option to do the wrong thing, thus producing a more reliable system. There's no "as long as you...", it just works.
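
To spell out what such a primitive looks like, here is a minimal sketch using Python 3.11's asyncio.TaskGroup (just one stand-in for "nursery"-style constructs; the task names are made up): tasks cannot outlive the block that opened them, and one failure cancels the siblings, so there is no discipline to forget.

    import asyncio

    async def fetch(name, delay):
        await asyncio.sleep(delay)
        return name + ": done"

    async def main():
        # Tasks cannot escape this block: the group awaits them all,
        # and if one raises, the others are cancelled automatically.
        async with asyncio.TaskGroup() as tg:
            t1 = tg.create_task(fetch("a", 0.1))
            t2 = tg.create_task(fetch("b", 0.2))
        print(t1.result(), t2.result())

    asyncio.run(main())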

Isn't this a bit theoretical since most real world systems are distributed these days, using a browser as GUI?

Except for Akka in Java, and the entirety of Erlang and its children Elixir and Gleam. You can obviously scale those to multiple systems, but they provide a lot of benefit in local single-process scenarios too, imo.

Things like data pipelines, games, etc.


If I'm not mistaken, ROOM (ObjecTime, Rational Rose RealTime) was also heavily based on it. I worked at a company that developed real-time software for printing machines with it and liked it a lot.

I've worked on a number of systems that used Akka in a non-distributed way and it was always an overengineered approach that made the system more complex for no benefit.

Fair, I worked a lot on data pipelines and found the actor model worked well in that context. I particularly enjoyed it in the Elixir ecosystem where I was building on top of Broadway[0]

Probably has to do with not fighting the semantics of the language.

[0] https://elixir-broadway.org/


Really depends on the ergonomics of the language. In erlang/elixir/beam langs etc., it's incredibly ergonomic to write code that runs on distributed systems.

You have to try really hard to do the inverse. Java's ergonomics, even with Akka, lend themselves to certain design patterns that don't suit writing code for distributed systems.


It is political. Designing everything around cars benefits the class of people called "Car Owners". Not so much people who don't have the money or desire to buy a car.

Although, congestion pricing is a good counter-example. On the surface it looks like it is designed to benefit users of public transportation. But it turns out it also benefits car owners, because it reduces traffic jams and lets you get to your destination with your own car faster.


>Designing everything around cars benefits the class of people called "Car Owners".

Designing everything around cars hurts everyone including car owners. Having no option but to drive everywhere just sucks.


But the ad for my Cadillac says I'm an incredible person for driving it; that can't be wrong.

No, it benefits car manufacturers and sellers, and mechanics and gas stations.

Network/snowball effects are not all good. If local businesses close because everybody drives to WalMart to save a buck, now other people around those local businesses also have to buy a car.

I remember a couple of decades ago when some bus companies in the UK were privatized, and they cut out the "unprofitable" feeder routes.

Guess what? More people in cars, and those people didn't just park and take the bus when they got to the main route, either.


>No, it benefits car manufacturers and sellers, and mechanics and gas stations.

Everybody thinks they're customers when they buy a car, but they're really the product. These industries, and others, are the real customers


> Everybody thinks they're customers

So much so that my comment attracted downvotes.

C'est la vie.


But having a car is kind of bad. Maybe you remember when everyone smoked, and there was stuff for smokers everywhere. Sure that made it easier for smokers, but ultimately that wasn't good for them (nor anyone around them).

From the article: "Claude Code can rip out one service and replace it with another in minutes. ..."

Doesn't that assume there are many interchangeable services available on the web which essentially do the same thing?

I can see this would be the case if there were many online services for say compiling C++ code. But for more human-centric services, are there many "replaceable services" out there? An API is not only its syntax, but also its semantics.


I recently had to fill out a PDF form to send to the Social Security Administration. They didn't have the option of submitting it online, so I had to print it out and take it to them.

I filled out the PDF using Firefox's PDF editor, at which point it occurred to me: this is not so different from using an application which has a form for me to enter data into.

Maybe in a few years the government will have a portal where I can submit any of their forms as PDF documents, and they would probably use AI to store the contents of the form into a database.

A PDF form is kind of a Universal API, especially when AI can extract and validate the data from it. Of all the API formats I've seen, I think PDF forms are the most human-friendly. Each "API" is defined by the form identifier in the PDF form. It is easy for humans to use, and pretty easy for office clerks to create such forms, especially with the help of AI. I wonder whether this, or something similar, will catch on.
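
As a sketch of the machine side of that workflow (assuming the third-party pypdf library; the file name and field names are made up): pulling the filled-in values out of a PDF form is only a few lines, and from there plain validation, or an AI pass for scanned forms, can take over before anything goes into a database.

    # Requires the third-party pypdf library; file and field names are made up.
    from pypdf import PdfReader

    reader = PdfReader("ssa_form.pdf")
    fields = reader.get_fields() or {}  # None when the PDF has no form

    # Each form field maps a name to whatever the person typed in.
    record = {name: field.value for name, field in fields.items()}
    print(record)  # e.g. {"full_name": "...", "benefit_type": "..."}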


A PDF can be anything and everything. It's just a wrapper around text, images, HTML; you can even embed JavaScript. There are already PDF forms that are user-editable (without a PDF editor). Not all features are available in all PDF viewers, though.

If we're at the point where they use AI to make form PDFs, we might as well cut out the middleman and ask the AI to generate a form on a website.


The thing, I think, is that PDFs can be understood by both humans and AI. And they work even if the power goes down, as long as we have enough paper printouts and pens to fill out the forms. They can be shared by physical mail, no web needed. But they can of course be "uploaded" as well.
