Appendix

Welcome to the appendix! Here we provide the full Python code for the simple neural network we built, a glossary of key terms used throughout the book, and additional resources for further learning. Let's get started!

A.1: Full Python Code for the Neural Network

Here is the full Python code for the simple neural network we built. We'll break it down and explain each part in detail.

# NumPy for numerical computation, Matplotlib for plotting
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)  # fix the random seed for reproducibility

# Create 100 training examples: inputs in [0, 1) and targets that
# follow Y = 10X plus a little Gaussian noise
X = np.round(np.random.rand(100, 1), 3)
Y = np.round(10 * X + 0.2 * np.random.randn(100, 1), 3)

weights = np.array([[1.0]])  # initialize the single weight as a 1x1 array

def model(X, weights):
    # Linear model: predictions are the dot product of inputs and weights
    return np.dot(X, weights)

def loss(Y_true, Y_pred):
    # Mean squared error between the true and predicted values
    return np.mean((Y_true - Y_pred) ** 2)

def train(X, Y, weights, lr, epochs):
    for epoch in range(epochs):
        Y_pred = model(X, weights)
        current_loss = loss(Y, Y_pred)
        # Gradient of the mean squared error with respect to the weights
        gradients = -2 * np.dot(X.T, (Y - Y_pred)) / len(X)
        # Gradient descent step: move the weights against the gradient
        weights -= lr * gradients
        if epoch % 10 == 0:
            print(f"Epoch {epoch}, loss: {current_loss:.4f}")
    return weights

weights = train(X, Y, weights, lr=0.1, epochs=50)

# Create a fresh test set from the same distribution as the training data
X_test = np.round(np.random.rand(100, 1), 3)
Y_test = np.round(10 * X_test + 0.2 * np.random.randn(100, 1), 3)

def test(X, Y, weights):
    # Evaluate the trained model on held-out data
    Y_pred = model(X, weights)
    test_loss = loss(Y, Y_pred)
    print(f"Test loss: {test_loss:.4f}")
    return Y_pred

Y_pred = test(X_test, Y_test, weights)

# Plot true vs. predicted values on the test set
plt.figure(figsize=(8, 6))
plt.scatter(X_test, Y_test, color='blue', label='True values')
plt.scatter(X_test, Y_pred, color='red', label='Predicted values')
plt.legend()
plt.xlabel('Input')
plt.ylabel('Output')
plt.title('True vs Predicted Values')
plt.show()

def predict(X, weights):
    # Same computation as model(); a convenience for new inputs
    return np.dot(X, weights)

# Note: -100 lies far outside the training range [0, 1), so this is extrapolation
new_input = np.array([[-100]])
prediction = predict(new_input, weights)
print(prediction)

This code starts by importing the necessary libraries, NumPy and Matplotlib, and setting a random seed for reproducibility. We then create our dataset, initialize our weight, and define our model and loss function. The training loop comes next: in each epoch we make predictions, calculate the mean squared error, compute the gradient of the loss with respect to the weight, and take a gradient descent step. After training, we evaluate the model on a fresh test set and plot the true versus predicted values. Finally, we define a predict function for making predictions on new inputs.
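
If you want to convince yourself that the gradient formula in the training loop is correct, you can compare it against a numerical estimate. Below is a minimal sketch using central finite differences; the test weight and the step size eps are arbitrary values chosen for illustration, not part of the network above.

# Minimal gradient check (illustrative): compare the analytic gradient
# used in train() against a central finite-difference estimate
eps = 1e-6                 # arbitrary small step for the numerical estimate
w = np.array([[2.0]])      # arbitrary test weight

analytic = -2 * np.dot(X.T, (Y - model(X, w))) / len(X)
numeric = (loss(Y, model(X, w + eps)) - loss(Y, model(X, w - eps))) / (2 * eps)

print(analytic.item(), numeric)  # the two numbers should agree closely

If the two numbers agree to several decimal places, the analytic gradient is almost certainly implemented correctly; this kind of check is a standard sanity test when writing training code by hand.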

A.2: Glossary of Key Terms

Here are definitions for some key terms used throughout the book:

  1. Neural Network: A computational model inspired by the human brain's neural networks. It consists of interconnected layers of nodes or "neurons" that can learn to make predictions or decisions without being explicitly programmed to perform the task.

  2. Weights: Parameters in the neural network that transform input data within the network's layers. As the network learns from the data, it adjusts these weights to improve its predictions.

  3. Model: In machine learning, a model is the output of a machine learning algorithm trained on data. The model represents what was learned by a machine learning algorithm.

  4. Loss Function: A function that measures how well a model's predictions match the actual data. When predictions deviate far from the true values, the loss is large; during training, an optimization algorithm adjusts the model's parameters to reduce it. Our network uses the mean squared error.

  5. Gradient Descent: An optimization algorithm used when training a machine learning model. It iteratively adjusts the model's parameters in the direction opposite the gradient of the loss function, moving the loss toward a (possibly local) minimum; a worked update step appears after this list.

  6. Learning Rate: A tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function.

  7. Epoch: One complete pass through the entire training dataset while training a machine learning model.

  8. NumPy: A library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions that operate on these arrays.

  9. Matplotlib: A plotting library for the Python programming language and its numerical mathematics extension NumPy.

  10. Overfitting: A concept in machine learning where a model becomes too complex and learns the noise in the data, which negatively impacts the model's ability to generalize.

  11. Regularization: A technique used to prevent overfitting by adding a penalty term to the loss function (see the sketch after this list).
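
To make the gradient descent, learning rate, and regularization entries concrete, here is a minimal sketch of a single gradient descent step on the mean squared error loss from A.1, plus an L2-regularized variant. The penalty strength lam is an arbitrary illustrative value; the network in this book does not use regularization.

import numpy as np

np.random.seed(0)
X = np.random.rand(100, 1)                  # inputs
Y = 10 * X + 0.2 * np.random.randn(100, 1)  # noisy targets
w = np.array([[1.0]])                       # current weight
lr = 0.1                                    # learning rate: step size of each update

# Gradient of the mean squared error loss with respect to w
grad = -2 * np.dot(X.T, (Y - np.dot(X, w))) / len(X)

# One gradient descent step: move w against the gradient, scaled by lr
w = w - lr * grad

# L2 regularization (illustrative): penalizing lam * sum(w**2) in the loss
# adds 2 * lam * w to the gradient, nudging weights toward zero
lam = 0.01
reg_grad = grad + 2 * lam * w

One full pass of such updates over the training data is one epoch; a smaller learning rate takes smaller, safer steps but needs more epochs to converge.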

A.3: Additional Resources for Learning

Here are some additional resources for learning more about neural networks and machine learning:

  1. Books: "Python Machine Learning" by Sebastian Raschka, "Deep Learning with Python" by François Chollet.

  2. Online Courses: "Machine Learning" by Stanford University on Coursera, "Deep Learning for Coders" by fast.ai.

  3. Websites/Blogs: Towards Data Science, Medium, Analytics Vidhya.

  4. Python Libraries Documentation: Scikit-Learn, TensorFlow, PyTorch, Keras.

Remember, the best way to learn is by doing. Don't be afraid to experiment with different models, datasets, and techniques. Happy learning!

Contributors

Andrew Gao, Sarah Gao, William Gao
