1. Install the AWS CLI
Verify the installation by checking the version:
aws --version
aws-cli/2.1.21 Python/3.7.4 Darwin/20.2.0 exe/x86_64 prompt/off
2. Add an admin user in Identity and Access Management (IAM)
AWS CLI commands run under a specific user with the desired permissions. If you already have an admin user, you can skip this step.
In the AWS console, go to the IAM service.
In the left menu, go to Access management => Users.
In the main window on the right, click the blue “Add user” button.
Type in a user name, check the “Programmatic access” checkbox, and click the blue “Next” button.
Click “Attach existing policies”, check “AdministratorAccess”, and click the blue “Next” button.
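At the end of the “Add user” flow, the console shows the new user’s access key ID and secret access key. A minimal sketch of wiring those into the CLI via environment variables (the key values and region below are placeholders, not real credentials; the interactive `aws configure` command is the usual alternative):

```shell
# Placeholder credentials for the admin user created above -- substitute
# the access key ID and secret access key shown when the user was created.
# The AWS CLI reads these environment variables automatically.
export AWS_ACCESS_KEY_ID="EXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="EXAMPLESECRETKEY"
export AWS_DEFAULT_REGION="us-east-1"
```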
In game theory and economic theory, a zero-sum game is a mathematical representation of a situation in which each participant’s gain or loss of utility is exactly balanced by the losses or gains of the utility of the other participants. If the total gains of the participants are added up and the total losses are subtracted, they will sum to zero. …
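A quick illustration of the definition above: in matching pennies, a standard example of a zero-sum game, every outcome’s payoffs cancel exactly:

```python
# Matching pennies: each cell is (row player's payoff, column player's
# payoff). The row player wins when the coins match, loses otherwise;
# in every outcome the two payoffs sum to zero.
payoffs = {
    ("heads", "heads"): (1, -1),
    ("heads", "tails"): (-1, 1),
    ("tails", "heads"): (-1, 1),
    ("tails", "tails"): (1, -1),
}

assert all(a + b == 0 for a, b in payoffs.values())
print("every outcome sums to zero")
```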
BERT was created and published in 2018 by Jacob Devlin and his colleagues from Google.
As of 2019, Google has been leveraging BERT to better understand user searches.
The original English-language BERT has two model sizes: (1) BERT-Base, with 12 encoders and 12 bidirectional self-attention heads, and (2) BERT-Large, with 24 encoders and 16 bidirectional self-attention heads. Both models are pre-trained on unlabeled data extracted from the BooksCorpus (800M words) and English Wikipedia (2,500M words).
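The two model sizes and the pre-training corpora can be summarized as plain configuration data (a sketch restating the figures from the BERT paper; the variable names are just illustrative):

```python
# Encoder-layer and attention-head counts for the two original BERT
# models, per the BERT paper (BERT-Large uses 16 attention heads).
bert_base = {"encoders": 12, "attention_heads": 12}
bert_large = {"encoders": 24, "attention_heads": 16}

# Word counts of the two unlabeled pre-training corpora.
corpora_words = {
    "BooksCorpus": 800_000_000,
    "English Wikipedia": 2_500_000_000,
}
total_words = sum(corpora_words.values())
print(total_words)  # 3300000000 -- 3.3B words of pre-training text
```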
It is easy to create a dataset, but earning a gold medal is harder. These recommendations are based on my experience with [U.S. Gasoline and Diesel Retail Prices 1995–2021](https://www.kaggle.com/mruanova/us-gasoline-and-diesel-retail-prices-19952021):
1) Make sure your usability score is 10.0 by filling in all the metadata.
Add a subtitle: “Weekly Retail Gasoline and Diesel Prices”
Add tags: “energy, oil and gas”
Add a description covering content, context, acknowledgements, and inspiration.
Click “edit” where it says “Add a description…” and remember to click “save”, because it doesn’t save automatically.
Upload a 1900x400 image or banner that makes the dataset eye-catching!
2) Click “edit”…
The dot product is a scalar. The dot product of two vectors gives you the value of the magnitude of one vector multiplied by the magnitude of the projection of the other vector on the first vector.
The cross product is a vector. The magnitude of the cross product of two vectors is the magnitude of one vector multiplied by the magnitude of the projection of the other vector in the direction orthogonal to the first vector.
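The two products above can be checked numerically; a small sketch for 3-D vectors:

```python
# Dot product: a scalar -- |a| times the magnitude of b's projection on a.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Cross product: a vector -- its magnitude is |a| times the magnitude of
# b's component orthogonal to a.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

a, b = (1, 0, 0), (0, 1, 0)
print(dot(a, b))    # 0: b has no projection onto a
print(cross(a, b))  # (0, 0, 1): magnitude |a||b| = 1, since b is fully orthogonal to a
```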
Q-learning is a model-free reinforcement learning algorithm to learn quality of actions telling an agent what action to take under what circumstances. It does not require a model (hence the connotation “model-free”) of the environment, and it can handle problems with stochastic transitions and rewards, without requiring adaptations.
For any finite Markov decision process (FMDP), Q-learning finds an optimal policy in the sense of maximizing the expected value of the total reward over any and all successive steps, starting from the current state.
Q-learning can identify an optimal action-selection policy for any given FMDP, given infinite exploration time and a partly-random policy.
“Q” refers to the function that the algorithm computes: the expected reward for an action taken in a given state.
In statistics, a sampling distribution or finite-sample distribution is the probability distribution of a given random-sample-based statistic. If an arbitrarily large number of samples, each involving multiple observations (data points), were separately used in order to compute one value of a statistic (such as, for example, the sample mean or sample variance) for each sample, then the sampling distribution is the probability distribution of the values that the statistic takes on. In many contexts, only one sample is observed, but the sampling distribution can be found theoretically.
Sampling distributions are important in statistics because they provide a major simplification en route to statistical inference. More specifically, they allow analytical considerations to be based on the probability distribution of a statistic, rather than on the joint probability distribution of all the individual sample values.
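The idea is easy to see empirically: draw many samples, compute the statistic for each, and look at the distribution of those values. A sketch for the sample mean (the population parameters here are invented for illustration):

```python
import random
import statistics

random.seed(0)

# Toy population: normal with mean 10 and standard deviation 2.
population = [random.gauss(10, 2) for _ in range(100_000)]

# Compute the mean of many separate samples of 30 observations each;
# the collection of these means approximates the sampling distribution
# of the sample mean.
sample_means = [
    statistics.mean(random.sample(population, 30)) for _ in range(2_000)
]

# It centers on the population mean, with spread roughly
# sigma / sqrt(n) (the standard error), here 2 / sqrt(30) ~ 0.37.
print(round(statistics.mean(sample_means), 2))
print(round(statistics.stdev(sample_means), 2))
```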
Hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm.
A hyperparameter is a parameter whose value is used to control the learning process.
By contrast, the values of other parameters (typically node weights) are learned.
The same kind of machine learning model can require different constraints, weights or learning rates to generalize different data patterns.
These measures are called hyperparameters, and have to be tuned so that the model can optimally solve the machine learning problem.
Hyperparameter optimization finds a tuple of hyperparameters that yields an optimal model which minimizes a predefined loss function on given independent data.
The objective function takes a tuple of hyperparameters and returns the associated loss.
Cross-validation is often used to estimate this generalization performance.
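Putting the pieces together, a minimal grid search scores each hyperparameter value by k-fold cross-validation and keeps the one with the lowest average validation loss. A sketch with an invented one-parameter “model” (a mean estimate shrunk toward zero by a penalty `lam`) and toy data:

```python
import statistics

# Average validation loss of hyperparameter `lam` under k-fold CV:
# train on k-1 folds, measure squared error on the held-out fold.
def cv_loss(lam, data, k=5):
    folds = [data[i::k] for i in range(k)]
    losses = []
    for i in range(k):
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        val = folds[i]
        # Toy "model": the training mean, shrunk toward 0 by the penalty.
        estimate = statistics.mean(train) / (1 + lam)
        losses.append(statistics.mean((x - estimate) ** 2 for x in val))
    return statistics.mean(losses)

data = [2.1, 1.9, 2.4, 2.0, 1.8, 2.2, 2.3, 1.7, 2.0, 2.1]
grid = [0.0, 0.01, 0.1, 1.0]

# The objective function maps a hyperparameter to its CV-estimated loss;
# grid search simply evaluates it at every point on the grid.
best = min(grid, key=lambda lam: cv_loss(lam, data))
print(best)
```

Grid search is the simplest strategy; random search and Bayesian optimization evaluate the same kind of objective function more efficiently when there are many hyperparameters.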