I'd suggest reading the recently leaked Google memo for some context on why open source LLMs matter (and why they're disruptive from the perspective of a large company). It gives good insight into why closed source models like GPT-4 might be overtaken by open source even if open models can't directly compete at the moment.
Typical reasons include highly specialised models that are cheap and fast to train, lack of censorship, lack of API and usage restrictions, lightweight variants, and so on. The reason there's so much excitement right now is indeed how fast the space is moving.
https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...