Recently, OpenAI acknowledged that ChatGPT 4, its large language model, had grown “lazy.”
Users have been reporting that the chatbot suggests they finish the requested tasks by themselves. For instance, according to Semafor, startup founder Matthew Wensing asked the chatbot to produce roughly 50 lines of code. It responded with only a few examples and told Wensing he could instead use a template to complete the work without the assistance of AI. Responses like this, in which the model declines or fails to complete requested tasks, are why users see the chatbot as getting lazier.
OpenAI responded to the issue in a post on X, stating that the model had not been updated since November 11, roughly a month earlier. The team said the model’s behaviour was not intentional and that it can be unpredictable, and that they were looking into a fix.
One user on the platform replied to the post, asking, “How can it get lazier when a model is just a file? Using a file over and over doesn’t change the file.” The team responded that, while the model file itself is unchanged, responses to some prompts may have degraded.
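The exchange hinges on a subtle point: even when the weights file is fixed, text generation is a sampling process, so outputs can vary from run to run. The toy Python sketch below is purely illustrative and is not OpenAI’s serving code (which is not public). It samples “responses” from a fixed probability distribution standing in for the unchanged model file, showing that repeated use of the same “file” still yields different outputs at a non-zero temperature.

    import math
    import random

    # A fixed "model": next-token scores that never change,
    # standing in for the unchanged weights file.
    logits = {
        "here is the full code": 1.2,
        "use a template instead": 1.0,
        "here is a partial stub": 0.8,
    }

    def sample(temperature: float) -> str:
        """Draw one 'response' from the fixed distribution at the given temperature."""
        scaled = {tok: math.exp(v / temperature) for tok, v in logits.items()}
        total = sum(scaled.values())
        r = random.uniform(0, total)
        cumulative = 0.0
        for tok, weight in scaled.items():
            cumulative += weight
            if r <= cumulative:
                return tok
        return tok  # fallback for floating-point edge cases

    # Same "file", repeated calls: outputs vary because decoding is stochastic.
    for _ in range(5):
        print(sample(temperature=1.0))

Nothing in the “file” changes between calls, yet the printed responses differ, which is one reason the same unchanged model can feel different from one week to the next.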
According to India Today, some experts speculate that this behaviour may be linked to the model’s internal safety mechanisms, which are designed to prevent the generation of harmful or offensive content. These safeguards may be causing the model to avoid certain requested tasks or to return incomplete responses.
But rather than viewing this as a dead end, we can treat it as an opportunity to learn. By pinning down the cause of ChatGPT 4’s issues, scientists can better understand how these AI systems operate, and that understanding can help them build smarter, more responsive AIs in the future.
As promising as AI may seem in the near future, ChatGPT 4’s issues serve as an important reminder of how challenging the path ahead will be. By acknowledging and addressing these obstacles with greater understanding and attention, we can move closer to true artificial intelligence.