yellow = Linear(-1, 0, 0.25)
ycolor = Color("#fde699")
draw_with_hard_points(yellow, ycolor, Color("white"))
graph(
    minitorch.operators.relu,
    [yellow.forward(*pt) for pt in s2_hard],
    [yellow.forward(*pt) for pt in s1_hard],
    3,
    0.25,
    c=ycolor,
)
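The `Linear` helper used in these cells is not shown in this excerpt; a minimal sketch consistent with the calls `Linear(w1, w2, b)` and `forward(x1, x2)` above (the class body here is an assumption, not the original definition):

```python
class Linear:
    """Hypothetical sketch of the Linear helper assumed by these cells."""

    def __init__(self, w1, w2, b):
        # Store the two weights and the bias of lin(x; w, b).
        self.w1, self.w2, self.b = w1, w2, b

    def forward(self, x1, x2):
        # lin(x; w, b) = x1 * w1 + x2 * w2 + b
        return x1 * self.w1 + x2 * self.w2 + self.b


yellow = Linear(-1, 0, 0.25)
```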
$$ \begin{eqnarray*} h_1 &=& \text{ReLU}(\text{lin}(x; w^0, b^0)) \end{eqnarray*} $$
green = Linear(1, 0, -0.8)
gcolor = Color("#d1e9c3")
draw_with_hard_points(green, gcolor, Color("white"))
$$ \begin{eqnarray*} h_2 &=& \text{ReLU}(\text{lin}(x; w^1, b^1)) \end{eqnarray*} $$
graph(
    minitorch.operators.relu,
    [green.forward(*pt) for pt in s2_hard],
    [green.forward(*pt) for pt in s1_hard],
    3,
    0.25,
    c=gcolor,
)
draw_nn_graph(green, yellow)
$$ \begin{eqnarray*} \text{lin}(x; w, b) &=& x_1 \times w_1 + x_2 \times w_2 + b \\ h_1 &=& \text{ReLU}(\text{lin}(x; w^0, b^0)) \\ h_2 &=& \text{ReLU}(\text{lin}(x; w^1, b^1)) \\ m(x_1, x_2) &=& \text{lin}(h; w, b) \end{eqnarray*} $$
Model
class Network(minitorch.Module):
    def __init__(self):
        super().__init__()
        self.unit1 = LinearModule()
        self.unit2 = LinearModule()
        self.classify = LinearModule()

    def forward(self, x):
        # yellow
        h1 = self.unit1.forward(x).relu()
        # green
        h2 = self.unit2.forward(x).relu()
        return self.classify.forward((h1, h2))
$$ \begin{eqnarray*} \text{lin}(x; w, b) &=& x_1 \times w_1 + x_2 \times w_2 + b \\ h_1 &=& \text{ReLU}(\text{lin}(x; w^0, b^0)) \\ h_2 &=& \text{ReLU}(\text{lin}(x; w^1, b^1)) \\ m(x_1, x_2) &=& \text{lin}(h; w, b) \end{eqnarray*} $$
Parameters: $w_1, w_2, w^0_1, w^0_2, w^1_1, w^1_2, b, b^0, b^1$
$$ \begin{eqnarray*} \mathbf{h} &=& \text{ReLU}(\mathbf{W}^{(0)} \mathbf{x} + \mathbf{b}^{(0)}) \\ m(\mathbf{x}) &=& \mathbf{W} \mathbf{h} + \mathbf{b} \end{eqnarray*} $$
Parameters: $\mathbf{W}, \mathbf{b}, \mathbf{W}^{(0)}, \mathbf{b}^{(0)}$
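The vectorized form above can be sketched directly with NumPy. The weight values below are illustrative placeholders taken from the two hand-built units (yellow and green), not trained parameters:

```python
import numpy as np


def relu(z):
    return np.maximum(z, 0.0)


# Hidden layer: W0 is (2, 2) for two hidden units over two inputs.
W0 = np.array([[-1.0, 0.0],   # yellow unit: w^0 = (-1, 0)
               [1.0, 0.0]])   # green unit:  w^1 = (1, 0)
b0 = np.array([0.25, -0.8])   # b^0 = 0.25, b^1 = -0.8

# Output layer: W is (1, 2), b is (1,). Values here are arbitrary.
W = np.array([[1.0, 1.0]])
b = np.array([0.0])


def m(x):
    h = relu(W0 @ x + b0)   # h = ReLU(W^(0) x + b^(0))
    return W @ h + b        # m(x) = W h + b


out = m(np.array([0.5, 0.5]))
```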
0-Dimensional: Scalar (from module-0)
matrix(5, 1)
matrix(3, 5)
tensor(0.75, 2, 3, 5)
Dims (x.dims), Shape (x.shape), Size (x.size)
matrix(3, 5)
matrix(4, 3)
x[0, 1, 2]
tensor(0.75, 2, 3, 5,
       colormap=lambda i, j, k: drawing.aqua if (i, j, k) == (0, 1, 2) else drawing.white)
Unary
new_tensor = x.log()
Binary (for now, only same shape)
new_tensor = x + x
Reductions
new_tensor = x.sum()
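The three op families above can be sketched as higher-order functions over a flat storage list (a simplification for intuition; the real low-level ops must also walk shapes and strides):

```python
import math


def tensor_map(fn, storage):
    # Unary: apply fn elementwise, e.g. x.log()
    return [fn(x) for x in storage]


def tensor_zip(fn, a, b):
    # Binary (same shape): combine elementwise, e.g. x + x
    return [fn(x, y) for x, y in zip(a, b)]


def tensor_reduce(fn, storage, start):
    # Reduction: fold every element down to one value, e.g. x.sum()
    out = start
    for x in storage:
        out = fn(out, x)
    return out


storage = [1.0, 2.0, 3.0]
logged = tensor_map(math.log, storage)
doubled = tensor_zip(lambda x, y: x + y, storage, storage)
total = tensor_reduce(lambda x, y: x + y, storage, 0.0)
```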
set_svg_height(200)
draw_boxes(["$x$", "$f(x)$"], [1])
Storage: 1-D array of numbers of length size
Strides: tuple that provides the mapping from user indexing to the position in the 1-D storage.
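Under these definitions, looking up an element is a dot product of the index with the strides. A sketch of the idea:

```python
def index_to_position(index, strides):
    # Position in 1-D storage: sum of index[i] * strides[i].
    return sum(i * s for i, s in zip(index, strides))


# Row-major ("contiguous") (2, 5) matrix has strides (5, 1):
# stepping a row skips 5 storage slots, stepping a column skips 1.
row_major = index_to_position((1, 3), (5, 1))

# A column-major layout of the same shape uses strides (1, 2).
col_major = index_to_position((1, 3), (1, 2))
```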
set_svg_height(200)
d = (
    matrix(5, 2, "n", colormap=color(5, 2))
    / vstrut(1)
    / matrix(1, 10, "s", colormap=lambda i, j: color(5, 2)(j % 5, j // 5))
)
d.connect(("n", 3, 0), ("s", 0, 3)).connect(("n", 3, 1), ("s", 0, 8))
d = (
    matrix(2, 5, "n", colormap=lambda i, j: color(5, 2)(j, i))
    / vstrut(1)
    / matrix(1, 10, "s", colormap=color(1, 10))
)
d.connect(("n", 0, 3), ("s", 0, 6)).connect(("n", 1, 3), ("s", 0, 7))
d = (
    tensor(0.5, 2, 2, 3, "n", colormap=lambda i, j, k: color(4, 3)(i * 2 + j, k))
    / vstrut(1)
    / matrix(1, 12, "s", colormap=color(1, 12))
)
d.connect(("n", 0, 1, 1), ("s", 0, 4)).connect_perim(
    ("n", 1, 0, 2), ("s", 0, 2 + 6), unit_x - unit_y, -unit_y
)
Can transpose without copying.
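Transposing can be sketched as swapping shape and strides while leaving storage untouched (an illustrative sketch of the idea, not the library's code):

```python
def index_to_position(index, strides):
    return sum(i * s for i, s in zip(index, strides))


storage = list(range(10))           # 10 numbers; never copied
shape, strides = (2, 5), (5, 1)     # row-major 2x5 view

# Transpose: reverse the shape and the strides; storage is shared.
t_shape, t_strides = (5, 2), (1, 5)

# The same storage element is reachable through both views:
a = storage[index_to_position((1, 3), strides)]      # original[1][3]
b = storage[index_to_position((3, 1), t_strides)]    # transposed[3][1]
```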
matrix(2, 5, colormap=color(2, 5)) | chalk.hstrut(1) | matrix(
    5, 2, colormap=lambda i, j: color(2, 5)(j, i)
)
How do I move to the next element in a row? In a column?
How do I find the index for data?
tensor(0.75, 2, 2, 2)
Index for position 0? Position 1? Position 2?
$[0, 0, 0], [0, 0, 1], [0, 1, 0]$
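Going the other way, an ordinal storage position can be converted back to a multidimensional index by repeated div/mod over the shape. A sketch of that conversion:

```python
def to_index(ordinal, shape):
    # Walk the shape from the last dimension, peeling off
    # remainders to recover each coordinate.
    index = [0] * len(shape)
    for dim in range(len(shape) - 1, -1, -1):
        index[dim] = ordinal % shape[dim]
        ordinal //= shape[dim]
    return index


# For shape (2, 2, 2), positions 0, 1, 2 give the indices above:
i0 = to_index(0, (2, 2, 2))   # [0, 0, 0]
i1 = to_index(1, (2, 2, 2))   # [0, 0, 1]
i2 = to_index(2, (2, 2, 2))   # [0, 1, 0]
```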
(
    tensor(0.5, 2, 2, 2, "n", colormap=lambda i, j, k: color(4, 2)(i * 2 + j, k))
    / vstrut(1)
    / matrix(1, 8, "s", colormap=color(1, 8))
)
tensor.py - Tensor Variable
tensor_functions.py - Tensor Functions
tensor_data.py - Storage and Indexing
tensor_ops.py - Low-level tensor operations