From 9ad280030bde38b07e743578c80b17df94f2cbf7 Mon Sep 17 00:00:00 2001
From: David Rotermund <54365609+davrot@users.noreply.github.com>
Date: Fri, 5 Jan 2024 16:20:11 +0100
Subject: [PATCH] Update README.md

Signed-off-by: David Rotermund <54365609+davrot@users.noreply.github.com>
---
 pytorch/replace_autograd/README.md | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/pytorch/replace_autograd/README.md b/pytorch/replace_autograd/README.md
index c030f5b..cd6bed2 100644
--- a/pytorch/replace_autograd/README.md
+++ b/pytorch/replace_autograd/README.md
@@ -64,6 +64,21 @@ class FunctionalLinear(torch.autograd.Function):
         return grad_input, grad_weight, grad_bias
 ```
 
+We can now add it to our own class. First we have to register it in the class via
+
+```python
+self.functional_linear = FunctionalLinear.apply
+```
+
+in the \_\_init\_\_() function. Then we have to call it in the forward function:
+
+```python
+return self.functional_linear(input, self.weight, self.bias)
+```
+
+Here we also combine it with normal autograd operations. **Not everything needs to be in our own autograd function. In fact, try to put as little as possible into your own autograd function and let torch handle the rest. Less is more.**
+
+
 ```python
 class MyOwnLayer(torch.nn.Module):
     def __init__(
@@ -114,3 +129,4 @@ class MyOwnLayer(torch.nn.Module):
 
         return f"in_features={self.in_features}, out_features={self.out_features}, bias={self.bias is not None}"
 ```
+![Figure_1.png](Figure_1.png)
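
For context (not part of the patch): a minimal, self-contained sketch of how the added README text fits together, i.e. binding `FunctionalLinear.apply` in \_\_init\_\_() and calling it in forward() next to ordinary autograd operations. The class name `LinearWithCustomGrad`, the weight shapes, the `torch.relu` step, and the body of the stand-in `FunctionalLinear` below are assumptions for illustration only; the real `FunctionalLinear` and `MyOwnLayer` are the ones defined in the README being patched.

```python
import torch


class FunctionalLinear(torch.autograd.Function):
    """Stand-in for the FunctionalLinear defined in the README, included only
    so this sketch runs on its own. The real forward/backward live in the file."""

    @staticmethod
    def forward(ctx, input, weight, bias):
        # Save tensors needed for the backward pass.
        ctx.save_for_backward(input, weight, bias)
        return input @ weight.t() + bias

    @staticmethod
    def backward(ctx, grad_output):
        input, weight, bias = ctx.saved_tensors
        grad_input = grad_output @ weight
        grad_weight = grad_output.t() @ input
        grad_bias = grad_output.sum(dim=0)
        # One gradient per forward argument, as in the README's snippet.
        return grad_input, grad_weight, grad_bias


class LinearWithCustomGrad(torch.nn.Module):
    """Hypothetical layer illustrating the patch's advice: only the linear step
    goes through the custom autograd function; the rest is left to torch."""

    def __init__(self, in_features: int, out_features: int) -> None:
        super().__init__()
        # Weight layout follows the torch.nn.Linear convention (assumption).
        self.weight = torch.nn.Parameter(torch.empty(out_features, in_features))
        self.bias = torch.nn.Parameter(torch.zeros(out_features))
        torch.nn.init.kaiming_uniform_(self.weight)
        # Bind the custom function once, as the README suggests;
        # .apply is how a torch.autograd.Function is invoked.
        self.functional_linear = FunctionalLinear.apply

    def forward(self, input: torch.Tensor) -> torch.Tensor:
        # Custom autograd only for the linear part ...
        output = self.functional_linear(input, self.weight, self.bias)
        # ... everything else is handled by torch's normal autograd.
        return torch.relu(output)


if __name__ == "__main__":
    layer = LinearWithCustomGrad(5, 3)
    x = torch.randn(8, 5, requires_grad=True)
    layer(x).sum().backward()
    print(layer.weight.grad.shape)  # torch.Size([3, 5])
```

The point of the sketch is the split inside forward(): the custom Function covers only the piece whose gradient you actually want to replace, while surrounding operations (here the ReLU) stay with torch's built-in autograd.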