"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"plt.plot(x,x*(x > 0),clip_on=False,linewidth=4);"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Networks\n",
"\n",
"\n",
"\n",
"Terminology alert: networks of neurons are sometimes called *multilayer perceptrons*, despite not using the step function."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"%%html\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Networks\n",
"\n",
" The number of input neurons corresponds to the number of features.\n",
"\n",
"The number of output neurons corresponds to the number of label classes. For binary classification, it is common to have two output nodes.\n",
"\n",
"Layers are typically *fully connected*."
]
},
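{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"A minimal sketch of such a network in PyTorch. The feature count, hidden width, and activation below are illustrative assumptions, not a prescription:\n",
"\n",
"```python\n",
"import torch.nn as nn\n",
"\n",
"# 4 input features -> one hidden layer -> 2 output nodes (one per class)\n",
"# every node in a layer connects to every node in the next layer (fully connected)\n",
"model = nn.Sequential(\n",
"    nn.Linear(4, 8),  # input -> hidden, fully connected\n",
"    nn.Sigmoid(),     # activation applied at each hidden node\n",
"    nn.Linear(8, 2),  # hidden -> output, fully connected\n",
")\n",
"print(model)\n",
"```"
]
},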
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Neural Networks\n",
"\n",
"The universal approximation theorem says that, if some reasonable assumptions are made, a feedforward neural network with a finite number of nodes can approximate any continuous function to within a given error $\\epsilon$ over a bounded input domain.\n",
"\n",
"The theorem says nothing about the design (number of nodes/layers) of such a network.\n",
"\n",
"The theorem says nothing about the *learnability* of the weights of such a network.\n",
"\n",
"These are open theoretical questions.\n",
"\n",
"Given a network design, how are we going to learn weights for the neurons?"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Stochastic Gradient Descent\n",
"\n",
"\n",
"Randomly select $m$ training examples $X_j$ and compute the gradient of the loss function ($L$). Update weights and biases with a given _learning rate_ $\\eta$.\n",
"$$ w_k' = w_k-\\frac{\\eta}{m}\\sum_j^m \n",
"\\frac{\\partial L_{X_j}}{\\partial w_k}$$\n",
"$$b_l' = b_l-\\frac{\\eta}{m}\n",
" \\sum_j^m \\frac{\\partial L_{X_j}}{\\partial b_l}\n",
"$$\n",
"\n",
"Common loss functions: logistic, hinge, cross entropy, euclidean"
]
},
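{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"A minimal sketch of one such update in PyTorch, assuming a toy linear model, squared-error loss, and random data; autograd supplies the partial derivatives and the manual loop mirrors the update rule above:\n",
"\n",
"```python\n",
"import torch\n",
"import torch.nn as nn\n",
"\n",
"torch.manual_seed(0)\n",
"model = nn.Linear(3, 1)          # parameters: weights w_k and a bias b\n",
"loss_fn = nn.MSELoss()           # mean loss over the minibatch\n",
"eta, m = 0.1, 8                  # learning rate and minibatch size\n",
"\n",
"X = torch.randn(m, 3)            # randomly selected training examples X_j\n",
"y = torch.randn(m, 1)\n",
"\n",
"loss = loss_fn(model(X), y)\n",
"loss.backward()                  # p.grad now holds (1/m) * sum_j dL_Xj/dp\n",
"\n",
"with torch.no_grad():            # w_k' = w_k - (eta/m) * sum_j dL_Xj/dw_k\n",
"    for p in model.parameters():\n",
"        p -= eta * p.grad\n",
"        p.grad.zero_()\n",
"```"
]
},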
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Loss Functions\n",
"\n",
"
\n",
"\n",
"x = 1 is a correct prediction, x = -1 a wrong prediction"
]
},
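{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"A sketch that reproduces a plot of this kind, showing the margin-based losses as functions of x (the axis range and set of curves are arbitrary choices):\n",
"\n",
"```python\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"x = np.linspace(-2, 2, 200)                     # prediction margin\n",
"plt.plot(x, (x < 0).astype(float), label='0-1')\n",
"plt.plot(x, np.maximum(0, 1 - x), label='hinge')\n",
"plt.plot(x, np.log2(1 + np.exp(-x)), label='logistic')\n",
"plt.xlabel('margin x')\n",
"plt.ylabel('loss')\n",
"plt.legend();\n",
"```"
]
},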
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Backpropagation\n",
"\n",
"\n",
"Backpropagation is an efficient algorithm for computing the partial derivatives needed by the gradient descent update rule. For a training example $x$ and loss function $L$ in a network with $N$ layers:\n",
"\n",
"1. **Feedforward**. For each layer $l$ compute\n",
" $$a^{l} = \\sigma(z^{l})$$\n",
" where $z$ is the weighted input and $a$ is the activation induced by $x$ (these are vectors representing all nodes of layer $l$).\n",
" \n",
"2. **Compute output error**\n",
"$$\\delta^{N} = \\nabla_a L \\odot \\sigma'(z^N)$$\n",
"where $ \\nabla_a L_j = \\partial L / \\partial a^N_j$, the gradient of the loss with respect to the output activations. $\\odot$ is the elementwise product.\n",
"\n",
"3. **Backpropagate the error**\n",
"$$\\delta^{l} = ((w^{l+1})^T \\delta^{l+1}) \\odot\n",
" \\sigma'(z^{l})$$\n",
" \n",
"4. **Calculate gradients**\n",
"$$\\frac{\\partial L}{\\partial w^l_{jk}} = a^{l-1}_k \\delta^l_j \\text{ and } \\frac{\\partial L}{\\partial b^l_j} = \\delta^l_j$$\n",
" "
]
},
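{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"A NumPy sketch of these four steps for a tiny fully connected sigmoid network with squared-error loss; the layer sizes and random data are assumptions for illustration:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def sigmoid(z):\n",
"    return 1.0 / (1.0 + np.exp(-z))\n",
"\n",
"def sigmoid_prime(z):\n",
"    s = sigmoid(z)\n",
"    return s * (1 - s)\n",
"\n",
"rng = np.random.default_rng(0)\n",
"sizes = [3, 4, 2]                                   # input, hidden, output widths\n",
"W = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]\n",
"b = [rng.standard_normal((m, 1)) for m in sizes[1:]]\n",
"\n",
"x = rng.standard_normal((3, 1))                     # one training example\n",
"y = np.array([[1.0], [0.0]])                        # its target\n",
"\n",
"# 1. Feedforward: z^l = w^l a^(l-1) + b^l,  a^l = sigma(z^l)\n",
"a, zs, activations = x, [], [x]\n",
"for Wl, bl in zip(W, b):\n",
"    z = Wl @ a + bl\n",
"    zs.append(z)\n",
"    a = sigmoid(z)\n",
"    activations.append(a)\n",
"\n",
"# 2. Output error: delta^N = grad_a L * sigma'(z^N)   (elementwise product);\n",
"#    for L = 0.5*||a - y||^2 the gradient grad_a L is simply a - y\n",
"delta = (activations[-1] - y) * sigmoid_prime(zs[-1])\n",
"\n",
"# 3./4. Backpropagate the error and collect gradients, layer by layer\n",
"grads_W, grads_b = [None] * len(W), [None] * len(b)\n",
"for l in range(len(W) - 1, -1, -1):\n",
"    grads_W[l] = delta @ activations[l].T           # dL/dw^l_jk = a^(l-1)_k * delta^l_j\n",
"    grads_b[l] = delta                              # dL/db^l_j  = delta^l_j\n",
"    if l > 0:\n",
"        delta = (W[l].T @ delta) * sigmoid_prime(zs[l - 1])\n",
"```"
]
},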
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Backpropagation as the Chain Rule\n",
"\n",
" \n",
"\n",
"$$\\frac{\\partial L}{\\partial a^l} \\cdot \\frac{\\partial a^l}{\\partial z^l} \\cdot \\frac{\\partial z^l}{\\partial a^{l-1}} \\cdot \\frac{\\partial a^{l-1}}{\\partial z^{l-1}} \\cdot \\frac{\\partial z^{l-1}}{\\partial a^{l-2}} \\cdots \\frac{\\partial a^{1}}{\\partial z^{l}} \\cdot \\frac{\\partial z^{l}}{\\partial x} $$"
]
},
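{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"Automatic differentiation composes exactly these factors. A tiny PyTorch check (the sizes and loss are arbitrary assumptions):\n",
"\n",
"```python\n",
"import torch\n",
"\n",
"torch.manual_seed(0)\n",
"x = torch.randn(3, requires_grad=True)   # input\n",
"W1, b1 = torch.randn(4, 3), torch.randn(4)\n",
"W2, b2 = torch.randn(2, 4), torch.randn(2)\n",
"\n",
"a1 = torch.sigmoid(W1 @ x + b1)          # a^1 = sigma(z^1)\n",
"a2 = torch.sigmoid(W2 @ a1 + b2)         # a^2 = sigma(z^2)\n",
"L = (a2 ** 2).sum()                      # any scalar loss\n",
"\n",
"L.backward()                             # autograd multiplies the chain-rule factors\n",
"print(x.grad)                            # dL/dx\n",
"```"
]
},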
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Deep Learning\n",
"\n",
"
\n",
"\n",
"A deep network is not more powerful (recall can approximate any function with a single layer), but may be more concise - can approximate some functions with many fewer nodes."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Convolutional Neural Nets\n",
"\n",
"
\n",
"\n",
"Image recognition challenge results. Purple are deep learning methods."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Convolution Filters\n",
"\n",
"A filter applies a *convolution kernel* to an image. \n",
"\n",
"The kernel is represented by an $n$x$n$ matrix where the target pixel is in the center. \n",
"\n",
"The output of the filter is the sum of the products of the matrix elements with the corresponding pixels.\n",
"\n",
"Examples from [Wikipedia](https://en.wikipedia.org/wiki/Kernel_(image_processing)):\n",
"\n",
"
\n",
"
\n",
"
\n",
"\n",
"
\n",
"
\n",
"\n",
"
\n",
"
\n",
"
Identity
Blur
Edge Detection
\n",
"
"
]
},
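{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"A NumPy sketch of this definition: slide a 3x3 kernel over the image and, at each target pixel, sum the products of kernel entries and pixels. The kernels are the standard examples from the Wikipedia page; the random test image is an assumption:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def apply_kernel(image, kernel):\n",
"    # sum of products of the kernel with the pixels centered on each target pixel\n",
"    k = kernel.shape[0] // 2\n",
"    padded = np.pad(image, k)                   # zero-pad the border\n",
"    out = np.zeros_like(image, dtype=float)\n",
"    for i in range(image.shape[0]):\n",
"        for j in range(image.shape[1]):\n",
"            patch = padded[i:i + kernel.shape[0], j:j + kernel.shape[1]]\n",
"            out[i, j] = np.sum(patch * kernel)\n",
"    return out\n",
"\n",
"identity = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])\n",
"blur = np.ones((3, 3)) / 9.0                                # box blur\n",
"edge = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]])  # edge detection\n",
"\n",
"image = np.random.rand(8, 8)\n",
"print(np.allclose(apply_kernel(image, identity), image))    # True: identity leaves the image unchanged\n",
"```"
]
},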
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Feature Maps\n",
"\n",
"We can think of a kernel as identifying a *feature* in an image and the resulting image as a feature map that has high values (white) where the feature is present and low values (black) elsewhere.\n",
"\n",
"*Feature maps retain the **spatial relationship** between features present in the original image.*\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Convolutional Layers\n",
"\n",
" A single kernel is applied across the input. For each output feature map there is a single set of weights."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Convolutional Layers\n",
"\n",
"For images, each pixel is an input feature. Each hidden layer is a set of feature maps.\n",
"\n",
"
"
]
},
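{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"A quick PyTorch illustration of the weight sharing: each of the 10 output feature maps has one 3x3 kernel (plus a bias), no matter how large the image is. The sizes here are assumptions:\n",
"\n",
"```python\n",
"import torch\n",
"import torch.nn as nn\n",
"\n",
"conv = nn.Conv2d(in_channels=1, out_channels=10, kernel_size=3, padding=1)\n",
"\n",
"img = torch.randn(1, 1, 28, 28)    # a batch with one 28x28 grayscale image\n",
"maps = conv(img)\n",
"\n",
"print(maps.shape)                  # torch.Size([1, 10, 28, 28]) -> 10 feature maps\n",
"print(conv.weight.shape)           # torch.Size([10, 1, 3, 3])   -> one 3x3 kernel per map\n",
"```"
]
},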
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Pooling\n",
"\n",
"Pooling layers apply a fixed convolution (usually the non-linear MAX kernel). The kernel is usually applied with a *stride* to reduce the size of the layer.\n",
" * faster to train\n",
" * fewer parameters to fit\n",
" * less sensitive to small changes (MAX)\n",
"
"
]
},
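{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"A sketch of max pooling in PyTorch with a 2x2 window and stride 2 (the sizes are assumptions); the pooling layer has no weights and halves each spatial dimension:\n",
"\n",
"```python\n",
"import torch\n",
"import torch.nn as nn\n",
"\n",
"pool = nn.MaxPool2d(kernel_size=2, stride=2)   # fixed MAX kernel, nothing to learn\n",
"\n",
"maps = torch.randn(1, 10, 28, 28)              # 10 feature maps from a conv layer\n",
"pooled = pool(maps)\n",
"\n",
"print(pooled.shape)                            # torch.Size([1, 10, 14, 14])\n",
"```"
]
},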
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"Consider an input image with 100 pixels. In a classic neural network, we hook these pixels up to a hidden layer with 10 nodes. In a CNN, we hook these pixels up to a convolutional layer with a 3x3 kernel and 10 output feature maps."
]
},
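{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"A back-of-the-envelope parameter count under those assumptions (dense layer: one weight per pixel per hidden node plus a bias per node; conv layer: one shared 3x3 kernel plus a bias per feature map):\n",
"\n",
"```python\n",
"pixels, hidden_nodes = 100, 10\n",
"dense_params = pixels * hidden_nodes + hidden_nodes        # 1010 weights and biases\n",
"\n",
"kernel_size, feature_maps = 3 * 3, 10\n",
"conv_params = kernel_size * feature_maps + feature_maps    # 100 weights and biases\n",
"\n",
"print(dense_params, conv_params)                           # 1010 100\n",
"```"
]
},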
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"%%html\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"
"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"plt.plot(losses)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is the **batch loss**."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Testing MNIST"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {
"slideshow": {
"slide_type": "-"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy 0.9684\n"
]
}
],
"source": [
"correct = 0\n",
"with torch.no_grad(): #no need for gradients - won't be calling backward to clear them\n",
" for img, label in test_loader:\n",
" img, label = img.to('cuda'), label.to('cuda')\n",
" output = F.softmax(model(img),dim=1)\n",
" pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability\n",
" correct += pred.eq(label.view_as(pred)).sum().item()\n",
" \n",
"print(\"Accuracy\",correct/len(test_loader.dataset))"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Some Failures\n",
"\n",
"*Not from this particular network\n",
"\n",
"
\n",
"Top label is correct. Bottom is prediction from a CNN."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Generative vs. Discriminative\n",
"\n",
"A *generative* model produces as output the input of a discriminative model: $P(X|Y=y)$ *or* $P(X,Y)$\n",
"\n",
"