Because PyTorch automatically discards the intermediate results of graph computation, hook functions are needed to retrieve these values.
Hooks come in two flavors, Variable hooks and nn.Module hooks, and their usage is similar.
```python
import torch
from torch.autograd import Variable

grad_list = []   # gradients of y, collected by the hook
grad_listx = []  # gradients of x, collected by the hook

def print_grad(grad):
    grad_list.append(grad)

def print_gradx(grad):
    grad_listx.append(grad)

x = Variable(torch.randn(2, 1), requires_grad=True)
y = x * x + 2
z = torch.mean(torch.pow(y, 2))
lr = 1e-3

# Register the hooks before calling backward(); each fires when the
# gradient of its Variable is computed.
y.register_hook(print_grad)
x.register_hook(print_gradx)

z.backward()
x.data -= lr * x.grad.data

print("x.grad.data-------------")
print(x.grad.data)
print("y-------------")
print(grad_list)
print("x-------------")
print(grad_listx)
```
- Output: `grad_list` records the gradient of y, `grad_listx` records the gradient of x, and `x.data` is then updated using x's gradient.
```
/opt/conda/bin/python2.7 /root/rjw/pytorch_test/pytorch_exe03.py
x.grad.data-------------

 32.3585
 14.8162
[torch.FloatTensor of size 2x1]

y-------------
[Variable containing:
  7.1379
  4.5970
 [torch.FloatTensor of size 2x1]
]
x-------------
[Variable containing:
 32.3585
 14.8162
 [torch.FloatTensor of size 2x1]
]

Process finished with exit code 0
```
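As a sanity check on these numbers (derived from the run above, nothing new): with z = mean(y²) over two elements, the chain rule gives ∂z/∂yᵢ = yᵢ, so the y hook records y itself; and with y = x² + 2, ∂z/∂xᵢ = yᵢ · 2xᵢ. For the first element, 2 · √(7.1379 − 2) · 7.1379 ≈ 32.36, which matches the recorded x gradient of 32.3585.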
register_forward_hook & register_backward_hook

Like `register_hook` on a Variable, these register hooks on a module, fired during its forward or backward pass. The forward hook is executed each time a forward pass finishes and has the form `hook(module, input, output) -> None`, while the backward hook has the form `hook(module, grad_input, grad_output) -> Tensor or None`.
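A minimal sketch of both module hooks, assuming a throwaway `nn.Linear` model (the model and hook names are illustrative, not from the original post); note that newer PyTorch releases deprecate `register_backward_hook` in favor of `register_full_backward_hook`:

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 1)  # illustrative module, not from the original post

def forward_hook(module, input, output):
    # Runs after every forward pass of `model`.
    print("forward output:", output)

def backward_hook(module, grad_input, grad_output):
    # Runs once gradients w.r.t. the module's inputs/outputs are computed.
    print("backward grad_output:", grad_output)

fh = model.register_forward_hook(forward_hook)
bh = model.register_backward_hook(backward_hook)

x = torch.randn(2, 3)
model(x).sum().backward()  # triggers both hooks

# Both register calls return handles; remove the hooks when done.
fh.remove()
bh.remove()
```

A backward hook may also return a replacement for `grad_input` instead of `None`, which is why its signature allows a Tensor return value.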