Generative Pre-trained Transformer 3 (GPT-3) uses the same model architecture as GPT-2, including pre-normalization, modified initialization, and reversible tokenization, and it exhibits strong performance on many Natural Language Processing (NLP) tasks.
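To make "pre-normalization" concrete, here is a minimal PyTorch sketch of a pre-norm transformer block, where layer normalization is applied before the attention and feed-forward sublayers rather than after them. This is an illustration, not OpenAI's code; the class name, model dimension, and head count are assumptions chosen for readability.

```python
import torch.nn as nn

class PreNormBlock(nn.Module):
    """Transformer block with pre-normalization, as in GPT-2/GPT-3:
    LayerNorm runs *before* the attention and MLP sublayers."""

    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x, attn_mask=None):
        # Normalize first, then attend; add back via a residual connection.
        h = self.ln1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
        x = x + attn_out
        # Same pattern for the feed-forward sublayer.
        x = x + self.mlp(self.ln2(x))
        return x
```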

GPT-3 is an autoregressive language model developed by OpenAI, an AI research laboratory based in San Francisco, California.

It is a massive artificial neural network that uses deep learning to generate human-like text and is trained on huge text datasets containing hundreds of billions of words. It is the third-generation language prediction model in the GPT-n series and the successor to GPT-2.


In simple terms, OpenAI GPT-3 was trained on text written by many millions of people and learned to pick up on the patterns in that writing. Given a few example inputs, the model generates text that follows the pattern and structure of what was submitted, as the sketch below shows. It is also one of the largest AI language models, and by OpenAI's account it produces billions of words a day.
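As an illustration of this pattern-following behavior, the sketch below builds a few-shot prompt and asks GPT-3 to continue it. It assumes the legacy openai Python client (pre-1.0) with an API key already configured; the engine name, prompt, and expected completion are illustrative rather than prescriptive.

```python
import openai  # legacy pre-1.0 client; set openai.api_key before calling

# A few-shot prompt: two worked examples establish the pattern,
# then the model is asked to continue it for a new input.
prompt = (
    "English: Hello\nFrench: Bonjour\n\n"
    "English: Thank you\nFrench: Merci\n\n"
    "English: Good night\nFrench:"
)

response = openai.Completion.create(
    engine="davinci",    # base GPT-3 engine name at launch
    prompt=prompt,
    max_tokens=8,
    temperature=0.0,     # near-deterministic continuation
    stop=["\n"],         # stop at the end of the French line
)
print(response.choices[0].text.strip())  # e.g. "Bonne nuit"
```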

How GPT-3 works

At its core, this algorithm computes, for a given position in a text, the probability of each possible next word (or even character) given the words around it. This is called the conditional probability of words. As a generative neural network, it can output a numeric score or a yes/no answer, and it can also generate long sequences of original text. A toy version of this idea is sketched below.
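The sketch below shows the same idea at a toy scale: it estimates conditional next-word probabilities from bigram counts and then generates text greedily, one word at a time. GPT-3 of course uses a large transformer over tokens rather than bigram counts; this is only meant to make "conditional probability of words" concrete, and the corpus and function names are made up for the example.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each context word.
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def conditional_prob(context, word):
    """P(word | context) estimated from bigram counts."""
    counts = follow[context]
    total = sum(counts.values())
    return counts[word] / total if total else 0.0

def generate(context, n=5):
    """Autoregressive generation: repeatedly pick the most likely next word."""
    out = [context]
    for _ in range(n):
        counts = follow[out[-1]]
        if not counts:
            break
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(conditional_prob("the", "cat"))  # 0.5: 'cat' follows 'the' in 2 of 4 cases
print(generate("the"))                 # e.g. "the cat sat on the mat"
```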

GPT-3 holds 175 billion weights (parameters) in memory and uses all of them to process every query.
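Back-of-the-envelope arithmetic, sketched below, gives a sense of that scale: storing the 175 billion weights alone, before any activations or optimizer state, already takes hundreds of gigabytes.

```python
params = 175_000_000_000  # 175 billion weights

# Rough memory needed just to store the weights themselves.
for name, bytes_per_param in [("float32", 4), ("float16", 2)]:
    gb = params * bytes_per_param / 1e9
    print(f"{name}: {gb:.0f} GB")
# float32: 700 GB
# float16: 350 GB
```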

#gpt-3 #openai #artificial-intelligence #algorithms #ai #nlp #ml
