Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text.
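"Autoregressive" means the model generates text one token at a time, conditioning each prediction on everything it has produced so far. The toy sketch below illustrates that loop with a made-up random bigram table standing in for the model; none of it is GPT-3's actual code, just the general decoding pattern.

```python
import numpy as np

# Toy autoregressive generation: at each step we take the distribution
# over the next token conditioned on the latest token, sample one token,
# append it, and repeat. The "model" here is a hypothetical stand-in
# (a random bigram table), not GPT-3 itself.

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "."]
V = len(vocab)

# Hypothetical next-token probabilities conditioned on the previous token.
bigram = rng.random((V, V))
bigram /= bigram.sum(axis=1, keepdims=True)

def generate(start: int, steps: int = 8) -> list[str]:
    tokens = [start]
    for _ in range(steps):
        probs = bigram[tokens[-1]]             # condition on the context
        tokens.append(rng.choice(V, p=probs))  # sample the next token
    return [vocab[t] for t in tokens]

print(" ".join(generate(start=0)))
```

A real transformer conditions on the full context window rather than just the previous token, but the generate-sample-append loop is the same.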

It is the third-generation language prediction model in the GPT-n series created by OpenAI, a for-profit San Francisco-based artificial intelligence research laboratory.

GPT-3’s full version has a capacity of 175 billion machine learning parameters, over two orders of magnitude more than the 1.5 billion parameters of its predecessor, GPT-2.

GPT-3 was introduced in May 2020 and has been in beta testing as of July 2020. It is part of a broader trend in natural language processing (NLP) toward pre-trained language representations.
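The pre-trained-representation workflow means you download (or call) a model that has already been trained on a large corpus and use it directly, rather than training from scratch. Here is a minimal sketch using the Hugging Face `transformers` library with the publicly available GPT-2 weights as a stand-in, since GPT-3 itself is served only through OpenAI's API and its weights are not downloadable:

```python
from transformers import pipeline

# Load a pre-trained language model for text generation.
# GPT-2 is used here as a stand-in for GPT-3, whose weights
# are not publicly released.
generator = pipeline("text-generation", model="gpt2")

out = generator("Deep learning is", max_length=30, num_return_sequences=1)
print(out[0]["generated_text"])
```

Fine-tuning on a task-specific dataset follows the same pattern: start from the pre-trained weights and continue training, instead of learning the language from zero.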
