It is different from what you do. If I tell you that this is already a thing, you might go back to the drawing board and do something from scratch. Maybe do some abstract drawing with numbers for brainstorming. A language model is not able to do this; the starting point for a language model is always the training data. That is why there are so many instances where you see some wrong (or correct) response from ChatGPT, and when the other person corrects it, the model just agrees with whatever the user says. That is the right thing to do according to language etiquette, but it has nothing to do with what is true and right. (It invokes the image of a sociopath manager trying to sell you a product---they will find a way to agree with you to close the deal.)
I don't know what "introspective" means, but I know it when I see it. People around me genuinely come up with new concepts---some of what they came up with decades ago is now ubiquitous---and the source is often not language. It comes from observing the world with your eyes, from physical or natural mechanisms. If you want to put it into the language of models: we just have so much more data to draw on. And we have a good feedback mechanism. If you invent a toy, you can build it and test it. Language models only get second-hand feedback from users. They cannot prototype stuff if the data isn't out there already.
>It is different from what you do. If I tell you that this is already a thing, you might go back to the drawing board and do something from scratch.
Wouldn't your "something from scratch" idea be based on your "training set" (knowledge you've learned in your life) and on ways of re-arranging it inside your brain, using neuron structures created, shaped, and reinforced in certain ways by exposure to said training set and various kinds of reinforcement?
A human brain's training data has orders of magnitude more complexity than text. Language models are amazing, but they can only do text, based on previously available text. We have higher-dimensional models, and we can relate to them from entirely different contexts. The same thing, to me, severely limits 'computer vision': we get 3D interactive models to train our brains with, while machine learning models are restricted to grids of pixels.
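To make the "grids of pixels" contrast concrete, here is a minimal sketch (plain NumPy, made-up shapes, and a hypothetical act_and_observe helper; not any particular library's API) of what a vision model is handed versus what an embodied observer gets to do:

```python
import numpy as np

# What a typical vision model is handed: a static grid of pixel intensities.
# Shape (height, width, RGB channels) -- no depth, no physics, no interaction.
image = np.zeros((224, 224, 3), dtype=np.uint8)

# A crude stand-in for what an embodied observer works with: geometry that
# exists in 3D, plus a viewpoint the observer is free to change.
scene = {
    "mesh_vertices": np.zeros((1000, 3)),     # 3D positions of a toy mesh
    "viewpoint": np.array([0.0, 1.5, -2.0]),  # where the observer stands
}

def act_and_observe(scene, new_viewpoint):
    """Hypothetical: move, then look again. Acting changes what you see,
    which is the feedback loop a fixed dataset of pixel grids cannot offer."""
    scene["viewpoint"] = np.asarray(new_viewpoint)
    # In a real setup, rendering or physical feedback would happen here.
    return scene

scene = act_and_observe(scene, [1.0, 1.5, -2.0])
```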
There is never any 'magic'. Magic is just a word for things we don't understand. But this is beside the point. Just as you'll never reach orbit with a cannon, it is useful to know the limits of the tools. There will never be an isolated language model, trained only on bodies of text, that is capable of reasoning, and people shouldn't expect the outputs of language models to be more than accidentally cogent word salads.
One implication, though, is that LLMs can currently come up with novel mixes of existing ideas. An LLM might be a good blender, integrating different pieces into a new whole.
Yes, but the language model does not have the feedback mechanism we have. We can test ideas against reality. Language models can make up all kinds of crap until there is data somewhere mentioning that it's not going to work. You could come up with an idea and workshop it (e.g., checking whether it's physically feasible to make something) before sharing it with others; language models cannot.