# Chapter 8: Model Training and Evaluation

Welcome to Chapter 8! After setting up our training loop in the previous chapter, we're now ready to put our model to the test. In this chapter, we'll execute the training loop, observe how our model evolves during training, and evaluate the final model. We'll also walk through the Python code for model training and evaluation, explaining each line in detail.

Executing the training loop is like setting our model off on a learning journey. We've given it the tools it needs: the model function, the loss function, and the training loop. Now it's time for it to learn from the data.

In Python, we execute the training loop by calling the `train` function we defined in the previous chapter. We pass in our input data `X`, the actual values `Y`, the initial weights, the learning rate, and the number of epochs. Here's how we do it:

```python
weights = train(X, Y, weights, lr=0.1, epochs=50)
```

This line of code starts the training process. The model will go through 50 epochs, making predictions, calculating the loss, adjusting the weights, and learning from the data.
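As a reminder, here's a minimal sketch of what such a `train` function might look like for our simple linear model. The exact implementation lives in the previous chapter; the toy dataset and the initial weight of 0.0 below are illustrative assumptions, not values from the book's actual code:

```python
import numpy as np

def train(X, Y, weights, lr=0.1, epochs=50):
    """Plain gradient descent on a simple linear model: prediction = weights * x."""
    for epoch in range(epochs):
        predictions = weights * X                        # forward pass
        loss = np.mean((predictions - Y) ** 2)           # mean squared error
        gradients = np.mean(2 * (predictions - Y) * X)   # dLoss/dWeights
        weights = weights - lr * gradients               # gradient descent step
    return weights

# Toy dataset generated with a true coefficient of 10, as in the text
X = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
Y = 10 * X
weights = train(X, Y, weights=0.0, lr=0.1, epochs=50)
```

With inputs this small and only 50 epochs, the weight moves from 0.0 toward 10 but doesn't fully converge; more epochs or a larger learning rate would close the remaining gap.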

As our model goes through the training process, it's important to monitor its predictions and losses. This can give us insight into how well the model is learning.

In each epoch, our model makes predictions for the input data based on its current weights. It then calculates the loss, which measures how far off these predictions are from the actual values. The goal is to minimize this loss.

Let's consider an example. Suppose in the first epoch, our model predicts an output of 0.5 for an input of 0.5. But the actual output is 1.0. The difference between the actual and predicted output is 0.5, which is quite large. This means the model's prediction is far off, and the loss will be high.

But as the model goes through more epochs, it adjusts its weights to reduce the loss. So in a later epoch, it might predict an output of 0.8 for the same input of 0.5. The difference between the actual and predicted output is now smaller, which means the model's prediction is more accurate, and the loss is lower.
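In code, the loss comparison from this example is a couple of lines of arithmetic (using squared error as the loss, with the numbers from the text):

```python
actual = 1.0

# Early epoch: prediction far from the target
early_prediction = 0.5
early_loss = (actual - early_prediction) ** 2   # 0.25

# Later epoch: prediction closer to the target
later_prediction = 0.8
later_loss = (actual - later_prediction) ** 2   # roughly 0.04

print(early_loss, later_loss)
```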

Another important aspect to monitor during training is the change in weights. The weights are what the model adjusts to learn from the data and reduce the loss.

In each epoch, the model calculates the gradients of the loss with respect to the weights. It then adjusts the weights in the opposite direction of the gradients. This is known as gradient descent.

For example, suppose in the first epoch the weights are 1.0 and the gradients are -0.2. The model adjusts the weights by subtracting the product of the learning rate (let's say 0.1) and the gradients. So the new weights will be 1.0 - 0.1 * (-0.2) = 1.02.
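The update from this example is one line of arithmetic:

```python
weights = 1.0
gradients = -0.2
lr = 0.1

# Gradient descent step: subtract lr * gradients,
# which moves the weights opposite to the gradient
weights = weights - lr * gradients
print(weights)  # 1.02 (up to floating-point rounding)
```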

As the model goes through more epochs, it continues to adjust the weights based on the gradients. Over time, the weights should converge to a value that minimizes the loss.

After the model has gone through all the epochs, it's time to evaluate the final model. This involves looking at the final weights and the final loss.

The final weights tell us what the model has learned. For our simple linear model, the weights are the coefficient of the input. So if the final weights are close to the coefficient we used to generate our dataset (which was 10), that means our model has learned well.

The final loss tells us how accurate our model's predictions are. The lower the loss, the more accurate the predictions.
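Putting these two checks together, a minimal evaluation might compare the learned weight against the true coefficient and report the final loss. The dataset and the learned weight of 9.8 below are illustrative stand-ins, not results from the actual training run:

```python
import numpy as np

X = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
Y = 10 * X                   # dataset generated with true coefficient 10

weights = 9.8                # stand-in for the value returned by train()
predictions = weights * X
final_loss = np.mean((predictions - Y) ** 2)

print(f"Final weights: {weights}")
print(f"Final loss: {final_loss:.4f}")
```

A learned weight close to 10 and a loss close to zero are the signs that training went well.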

Now let's walk through the Python code for model training and evaluation.

```python
weights = train(X, Y, weights, lr=0.1, epochs=50)
```

This line of code calls the `train` function, passing in the input data `X`, the actual values `Y`, the initial weights, the learning rate of 0.1, and the number of epochs as 50. The function returns the final weights after training, which we store in the `weights` variable.

```python
print(f"Final weights: {weights}")
```

This line of code prints the final weights. These are the weights that the model has learned after going through the training process.

That's it for this chapter! You now understand how to execute the training loop, observe the model's learning process, and evaluate the final model. In the next chapter, we'll test our model on unseen data to see how well it generalizes. Stay tuned!