BERT

Bidirectional Encoder Representations from Transformers (BERT) is a Transformer-based machine learning technique for natural language processing (NLP) pre-training developed by Google.

BERT was created and published in 2018 by Jacob Devlin and his colleagues from Google.

Since 2019, Google has leveraged BERT to better understand users' search queries.

The original English-language BERT comes in two model sizes: (1) BERT-Base, with 12 encoder layers and 12 bidirectional self-attention heads (about 110M parameters), and (2) BERT-Large, with 24 encoder layers and 16 bidirectional self-attention heads (about 340M parameters). Both models are pre-trained on unlabeled data extracted from the BooksCorpus (800M words) and English Wikipedia (2,500M words).
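To make the architecture concrete, here is a minimal sketch of loading the pre-trained BERT-Base model and running a sentence through it, using the Hugging Face transformers library (the library and model checkpoint name are not part of the original post; they are assumed here purely for illustration):

```python
# A minimal sketch: load pre-trained BERT-Base and produce contextual
# token embeddings. Assumes `transformers` and `torch` are installed.
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# Tokenize a sentence; the tokenizer adds the [CLS] and [SEP] markers.
inputs = tokenizer("BERT reads text bidirectionally.", return_tensors="pt")

# Forward pass: one contextual vector per token.
# BERT-Base uses a hidden size of 768, so the output shape is
# (batch_size, sequence_length, 768).
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 8, 768])
```

Because the self-attention heads are bidirectional, each token's output vector is conditioned on both its left and right context, which is the key difference from earlier left-to-right language models.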
