>It is different to what you do. If I tell you that this is already a thing, you might go back to the drawing board, and do something from scratch.
Wouldn't your "something from scratch" idea be based on your "training set" (knowledge you've learned in your life), and on ways of rearranging it inside your brain, using neuron structures created, shaped, and reinforced by exposure to that training set and various kinds of reinforcement?
A human brain's training data has orders of magnitude more complexity than text. Language models are amazing, but they can only do text, based on previously available text. We have higher-dimensional models and we can relate to them from entirely different contexts. The same thing, to me, severely limits 'computer vision': we get 3D interactive models to train our brains with, while machine learning models are restricted to grids of pixels.
There is never any 'magic'. Magic is just a word for things we don't understand. But that's beside the point. Just as you'll never reach orbit with a cannon, it is useful to know the limits of your tools. There will never be an isolated language model, trained only on bodies of text, capable of reasoning, and people shouldn't expect the outputs of language models to be more than accidentally cogent word salads.