Will progress in artificial intelligence continue to accelerate, or have we already hit a plateau? Computer scientist Jennifer Golbeck interrogates some of the most high-profile claims about the promises and pitfalls of AI, cutting through the hype to clarify what’s worth getting excited about — and what isn’t.
YES
The transformers these LLMs are built on are more novel than they are efficient. Without major efficiency gains there is little hope for improvement: there isn't enough energy in the world to reach AGI with a transformer model (see the cost sketch after the links below). We're also running out of LLM-free datasets to train on.
https://arxiv.org/html/2211.04325v2
https://arxiv.org/pdf/2302.06706v1
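A back-of-the-envelope sketch of why transformer scaling is so energy-hungry: self-attention compares every token against every other token, so compute grows quadratically with context length. The estimate below uses the common rough approximation of about 4·n²·d FLOPs per attention layer (for the QKᵀ and attention-times-V matmuls); that formula and the model width are illustrative assumptions, not figures from the linked papers.

```python
# Toy illustration (assumed approximation, not from the linked papers):
# self-attention cost grows quadratically with context length n for a
# model of width d. FLOPs per layer ~= 4 * n^2 * d
# (2*n^2*d for Q @ K^T, 2*n^2*d for attention_weights @ V).

def attention_flops(n_tokens: int, d_model: int) -> int:
    """Rough FLOP count for one self-attention layer."""
    return 4 * n_tokens**2 * d_model

d = 4096  # hypothetical model width
for n in (1_000, 10_000, 100_000):
    print(f"context {n:>7,} tokens -> {attention_flops(n, d):.2e} FLOPs/layer")
# Each 10x increase in context length costs ~100x more attention compute.
```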
I really love that training LLMs on LLM output has been shown to make them unravel into nonsense. Rather than thinking about that before releasing, all these megacorps had to chase short-term profit first, and now the internet is polluted with LLM output everywhere. I don't know that they will ever be able to assemble a clean training set newer than 2021.
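The unraveling this comment describes is usually called model collapse (Shumailov et al., 2023). Here is a minimal sketch of the mechanism, assuming a toy discrete "vocabulary" as a stand-in for an actual LLM: each generation is trained only on text sampled from the previous generation's model, and once a rare token fails to appear in a sample, its estimated probability is zero forever, so diversity can only shrink.

```python
# Toy sketch of model collapse (hypothetical stand-in for LLM training):
# each generation estimates token frequencies from text produced by the
# previous generation, then generates from its own estimate. Rare tokens
# that miss one sample can never come back.
import random
from collections import Counter

random.seed(0)

VOCAB = list(range(100))
# Generation 0: a long-tailed "human" distribution over tokens.
weights = [1.0 / (rank + 1) for rank in VOCAB]

for gen in range(1, 11):
    # "Train" on 2,000 tokens sampled from the previous generation's model.
    corpus = random.choices(VOCAB, weights=weights, k=2000)
    counts = Counter(corpus)
    # New model: plain maximum-likelihood frequencies (unseen tokens get 0).
    weights = [counts.get(tok, 0) for tok in VOCAB]
    alive = sum(1 for w in weights if w > 0)
    print(f"gen {gen:2d}: {alive:3d}/{len(VOCAB)} tokens still generable")
```

Running this shows the count of generable tokens falling generation after generation; the long tail of the original distribution is the first thing to disappear.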
Reminds me of low-background steel: steel smelted before the 1945 atomic tests, prized because it isn't contaminated by fallout. Pre-LLM text may end up being valued for the same reason.