“Write a Gizmodo article in which you explain large language models. Make sure to give specific examples. Keep the tone light and casual.” That was the prompt Gizmodo gave OpenAI's ChatGPT, which has been taking the internet by storm since it launched on Nov. 30. “We figured it would spin up a replica of our blogging style with no problem at all,” reports Gizmodo. “However, that didn't happen. Instead, what we thought would be a swift and efficient process turned out to be a lengthy, laborious, and frustrating one.” An anonymous reader shares an excerpt from the report: Suffice it to say, multiple attempts brought less than satisfactory results. For one thing, ChatGPT kept including incorrect information in its explainer — sometimes mixing up basic facts about the history of its own technology (factual inaccuracy has been an ongoing problem for the program). In other instances, it skimped on the details in critical areas, writing text that read more like OpenAI ad copy than a thoughtful explainer. In at least one instance, it forgot its own name, repeatedly telling me that it didn't know anything about a “ChatGPT” and thus couldn't help with my request. In general, it struggled to find a good balance of factual information, story structure, and accessible, humorous language when putting its stories together.
In short: ChatGPT was not very good at writing a Gizmodo article — at least not one about the machine learning tech that underpins it. After probably a dozen tries or more, the following is the best ChatGPT managed to come up with: “Large language models, also known as neural network-based language models, have gained a lot of attention in recent years for their ability to generate human-like text. These models are trained on massive amounts of text data, and use advanced machine learning algorithms to learn the patterns and structures of natural language. One of the most well-known examples of a large language model is GPT-3, developed by OpenAI. This model was trained on more than 45 terabytes of text data, and has 175 billion parameters, which are the variables that the model uses to make predictions. In machine learning, a parameter is a variable that the model uses to make predictions. These parameters are adjusted during training, allowing the model to improve its performance on the task it was trained for. In the case of large language models, such as GPT-3, the model has a large number of parameters, often in the billions, which allows it to learn the intricate patterns and structures of natural language and generate highly coherent and fluent text.” ChatGPT's writing may be competently constructed and able to break down the concepts it's tackling, but it wasn't able to produce a “particularly bold or entertaining piece of writing,” says Gizmodo. “In short: this article wasn't the easy lift that we thought it would be.”
“After asking the chatbot to write about itself a dozen different ways, the program consistently seemed to leave something critical out of its final draft — be that exciting prose or accurate facts.”
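The excerpt's claim that parameters are "adjusted during training" can be made concrete with a toy sketch (this is illustrative code, not anything from OpenAI): a model with a single parameter, nudged by gradient descent until it fits the rule y = 3x. GPT-3 does conceptually the same thing, just with 175 billion parameters and far more sophisticated machinery.

```python
# Toy illustration of "parameters adjusted during training":
# one parameter, w, fit to the rule y = 3x by gradient descent.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (input, target) pairs for y = 3x

w = 0.0              # the model's single parameter, starting from a bad guess
learning_rate = 0.05

for epoch in range(200):
    for x, y in data:
        prediction = w * x
        error = prediction - y
        # The gradient of the squared error (error ** 2) with respect to w
        # is 2 * error * x; step w a little in the opposite direction.
        w -= learning_rate * 2 * error * x

print(round(w, 3))  # w has converged to roughly 3.0
```

That loop is the whole idea in miniature: the training data implicitly defines what a "good" prediction looks like, and each update moves the parameter slightly closer to producing it. Scaling that to billions of parameters and 45 terabytes of text is what lets a model like GPT-3 absorb the patterns of natural language.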
That said, ChatGPT did manage to write an amusing poem about Slashdot. It also had a number of things to say about itself.
Read more of this story at Slashdot.