Lots of ChatGPT users have been complaining that the powerful AI model has been getting “lazier” and “dumber.” When I put this to ChatGPT, it reminded me that as an AI language model, it doesn’t have feelings or consciousness. Only humans can get dumber.
The more technical question, then, is whether it is getting less accurate. An executive at OpenAI initially said the users complaining about degraded performance were wrong.
Then research out of Stanford University and UC Berkeley found that the performance of both the GPT-3.5 and GPT-4 versions of ChatGPT varied greatly over time, with the newer releases of each model underperforming their earlier versions on some tasks. For example, ChatGPT turned out “more formatting mistakes in code generation” last month than it did earlier in the year, according to the research.
This has important implications. The promise of machine learning is that, well, it learns. But as the research paper noted, it is now an “interesting question whether an LLM service like GPT4 is consistently getting 'better' over time.”
What happens if AI models don’t automatically improve?