- Precompute is just a way to cache the activations of the frozen layers so training runs faster
- Study logarithms and exponentiation math
- Softmax is e^x divided by the sum of e^x over all the outputs
- The outputs of softmax sum to 1
- Each output is > 0 (the exponential is always positive)
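A minimal NumPy sketch of the softmax definition above (subtracting the max is just a standard numerical-stability trick; it doesn't change the result):

```python
import numpy as np

def softmax(x):
    # Exponentiate (after subtracting the max for numerical stability),
    # then divide by the sum so the outputs form a probability distribution
    e = np.exp(x - np.max(x))
    return e / e.sum()

probs = softmax(np.array([1.0, 2.0, 3.0]))
print(probs.sum())        # sums to 1
print((probs > 0).all())  # every output is positive
```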
- Since images are matrices, you can multiply them by some number to make them less washed out when displaying
- In fastai you can resize all the images with `data.resize()` to speed up training
- If you're using a deeper pretrained model, it helps when your dataset is similar to the dataset the model was trained on (more about this later in the course)
- `bn` stands for batch normalization
- You can use `predict` on a single image by passing in your image
- If you index an array with `[None]` it adds a leading batch dimension, since `predict` expects a batch rather than a single image
- You have to transform your image using `tfms_from_model` first
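A quick NumPy illustration of what `[None]` indexing does (the image shape here is hypothetical; the point is the extra unit axis prepended at the front):

```python
import numpy as np

img = np.zeros((3, 224, 224))   # a single (hypothetical) transformed image
batch = img[None]               # [None] prepends a unit batch dimension

print(img.shape)    # (3, 224, 224)
print(batch.shape)  # (1, 3, 224, 224)
```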
- A kernel (a small N x N matrix) is multiplied element-wise with each N x N section of the image, and the results are summed to produce the next layer (Excel example)
- This is done again and again, layer after layer, until the last layer
- Some kernels may detect left edges, lower edges, eyes, etc.
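The kernel-sliding step can be sketched in plain NumPy (a hypothetical `conv2d` with no padding or stride, just to show the mechanics from the Excel example, including a simple edge-detecting kernel):

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over every k x k section of the image,
    # multiply element-wise, and sum to produce one output value
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (image[i:i+kh, j:j+kw] * kernel).sum()
    return out

# A simple edge detector: responds where brightness jumps left-to-right
edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])
image = np.zeros((4, 4))
image[:, 2:] = 1.0              # right half of the image is bright
result = conv2d(image, edge_kernel)
print(result)  # strongest response along the vertical edge
```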
- Instead of softmax, use the sigmoid function when more than one label can be true (multi-label classification), since each output is then independent
- `learn.summary()`
- Shows each layer of the model and its output shape