**Error: The term ng is not recognized as the name of a cmdlet, function, script file, or operable program.**

Resolved by adding "C:\Users\mau\AppData\Roaming\npm" to the PATH environment variable.


I updated my career portfolio using Angular 13: https://mruanova.com

Deleting content from Google is extremely difficult, and they do this on purpose.

**Bidirectional Encoder Representations from Transformers** (**BERT**) is a Transformer-based machine learning technique for natural language processing (NLP) pre-training developed by Google.

BERT was created and published in 2018 by Jacob Devlin and his colleagues from Google.

As of 2019, Google has been leveraging BERT to better understand user searches.

The original English-language BERT comes in two sizes: (1) BERT-Base, with 12 encoder layers and 12 bidirectional self-attention heads, and (2) BERT-Large, with 24 encoder layers and 16 bidirectional self-attention heads. Both models were pre-trained on unlabeled data extracted from BooksCorpus (800M words) and English Wikipedia (2,500M words).
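For quick reference, the two published configurations can be summarized in a small sketch (hyperparameter values are the ones reported for the original models; the dict itself is just an illustration, not an API):

```python
# Published hyperparameters of the two original English BERT models:
# layers = Transformer encoder layers, hidden = hidden size,
# heads = self-attention heads per layer, params = total parameters.
BERT_CONFIGS = {
    "BERT-Base":  {"layers": 12, "hidden": 768,  "heads": 12, "params": "110M"},
    "BERT-Large": {"layers": 24, "hidden": 1024, "heads": 16, "params": "340M"},
}

print(BERT_CONFIGS["BERT-Large"]["heads"])  # 16
```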

The dot product is a scalar. The dot product of two vectors gives you the value of the magnitude of one vector multiplied by the magnitude of the projection of the other vector on the first vector.

The cross product is a vector. The magnitude of the cross product of two vectors is the magnitude of one vector multiplied by the magnitude of the projection of the other vector in the direction orthogonal to the first vector.
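Both definitions above can be checked numerically. A minimal sketch in plain Python with 3-D vectors (the helper names `dot` and `cross` are my own, not from any library):

```python
import math

def dot(u, v):
    """Dot product: a scalar, |u| times the magnitude of v's projection onto u."""
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    """Cross product: a vector orthogonal to both u and v."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v = (3.0, 0.0, 0.0), (1.0, 2.0, 0.0)

# |u| = 3 and the projection of v onto u has magnitude 1, so dot = 3 * 1 = 3.
print(dot(u, v))

# The component of v orthogonal to u has magnitude 2, so |cross| = 3 * 2 = 6.
c = cross(u, v)
print(math.sqrt(dot(c, c)))
```

Note that the cross product here comes out as (0, 0, 6): orthogonal to both inputs, with magnitude 6 as the projection argument predicts.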

**Q-learning** is a model-free reinforcement learning algorithm that learns the quality of actions, telling an agent what action to take under what circumstances. It does not require a model of the environment (hence "model-free"), and it can handle problems with stochastic transitions and rewards without requiring adaptations.

For any finite Markov decision process (FMDP), *Q*-learning finds an optimal policy in the sense of maximizing the expected value of the total reward over all successive steps, starting from the current state.

*Q*-learning can identify an optimal action-selection policy for any given FMDP, given infinite exploration time and a partly-random policy.

"Q" names the function the algorithm computes: the expected total reward for taking a given action in a given state.

Welcome to my Data Science blog. Please visit my career portfolio at https://mruanova.com 🚀🌎