OpenAI, the artificial intelligence research company based in California, USA, has recently introduced its latest language model, GPT-3, with the aim of creating an intelligence as flexible, dynamic and deep as the human mind. Though similar in spirit to an autocomplete tool such as Google's, the program has qualities that could redefine technology in the coming decade.
GPT-3 can be seen as one of OpenAI's first steps towards an intelligence that possesses the intricacies of the human mind. GPT-3, or Generative Pre-trained Transformer 3, is the third release in the company's series of text-generation models. Developed over several years, it has sparked multiple innovations in AI-based text generation. To function, GPT-3 relies on patterns in data: a huge bank of text is fed into the system, which scans it for statistical regularities. These regularities are then stored as weighted connections between the nodes of GPT-3's neural network. Notably, the regularities are not hand-crafted by humans; there is no human intervention in the process. The program hunts for patterns in the data and then uses those patterns to complete a text prompt.
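GPT-3's weighted connections are vastly more sophisticated, but the core idea of learning statistical regularities from text and reusing them to complete a prompt can be sketched with a toy bigram model (an illustrative simplification invented here, not OpenAI's actual architecture):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which -- crude 'statistical regularities'."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def complete(counts, prompt, length=3):
    """Extend the prompt by repeatedly picking the most frequent next word."""
    words = prompt.split()
    for _ in range(length):
        followers = counts.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

corpus = "the cat sat on the mat and the cat sat down"
model = train_bigrams(corpus)
print(complete(model, "the cat", length=1))  # -> "the cat sat"
```

A real language model replaces these raw counts with billions of learned weights, but the completion loop — predict the next token, append it, repeat — is the same in spirit.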
Operating with 175 billion parameters, GPT-3 works at a scale large enough to handle an enormous range of autocomplete tasks. This is vastly more than GPT-1, which used only 117 million parameters. GPT-3's training data comprises pages from Wikipedia, multiple websites and a huge repository of digital books. It thus includes recipes, articles, news, manuals, guides, religious discourse and nearly every other source of knowledge and information one can think of. The biggest advantage of the tool lies in its flexibility and its ability to absorb a vast amount of information in text form.
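To put the jump in scale into perspective, using the two parameter counts quoted above:

```python
gpt1_params = 117_000_000        # GPT-1 parameter count
gpt3_params = 175_000_000_000    # GPT-3 parameter count

scale_factor = gpt3_params / gpt1_params
print(f"GPT-3 has roughly {scale_factor:,.0f}x the parameters of GPT-1")
# -> GPT-3 has roughly 1,496x the parameters of GPT-1
```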
Multiple experiments have been conducted using GPT-3's commercial API to identify different use cases. People have built samples such as a chatbot that lets historical figures converse, tools that solve syntax puzzles, compose guitar tunes, write fiction, transform text from one style to another, autocomplete images and many other interesting things. For all of these, GPT-3 needs only a few examples of the desired output: once the user enters a suitable prompt, the program generates the required result.
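The pattern described above — giving the model a handful of worked examples followed by a new input and letting it complete the last line — can be sketched as plain prompt construction (the task and examples here are invented for illustration; the actual API call is omitted):

```python
def build_few_shot_prompt(examples, query):
    """Format example input/output pairs plus a new query; the model is
    expected to infer the task and fill in the final 'Output:' line."""
    lines = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Hypothetical style-transfer task: formal -> casual rewriting
examples = [
    ("I would be delighted to attend.", "Sure, I'll be there!"),
    ("Please refrain from smoking.", "No smoking, please."),
]
prompt = build_few_shot_prompt(
    examples, "Kindly respond at your earliest convenience."
)
print(prompt)
```

The string returned here would be sent as the prompt; the model's continuation after the final "Output:" is its answer.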
Though impressive, GPT-3 has multiple flaws that lead it to make silly mistakes, which is a cause for concern. Another problem is that the system works well as a whole but less well at a detailed level: its output often contains errors that humans almost never make. Detailed testing is therefore crucial. Mistakes have also been observed in its answers to mathematical and trivia questions, and its output has been found to be biased.
The question to ask now is whether these errors can be fixed by fine-tuning the input, referred to as the prompt. Doing so requires a good understanding of how the program responds to language, and a certain degree of trial and error: if one prompt doesn't work, one has to work around it with another.
In the end, what probably needs to be assessed is the program's ability to answer correctly and to perform various tasks without supervision. Whatever the verdict, the future of GPT-3 looks vast and promising. The invention is certainly useful; one just needs to find the right work-around to evoke the desired response.