Visualizing Gradient Data
Some important functions needed for visualization.
register_backward_hook
The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature:
hook(module, grad_input, grad_output) -> Tensor or None
The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations.
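A minimal sketch of capturing gradients with a module backward hook. Note that recent PyTorch versions deprecate register_backward_hook in favor of register_full_backward_hook, which has the same signature; the example below uses the newer name.

```python
import torch
import torch.nn as nn

# Store gradients flowing out of a layer so they can be visualized later.
captured = {}

def grad_hook(module, grad_input, grad_output):
    # Save a detached copy; the hook must not modify its arguments.
    captured["grad_output"] = grad_output[0].detach().clone()
    return None  # returning None keeps the original gradients unchanged

layer = nn.Linear(4, 2)
handle = layer.register_full_backward_hook(grad_hook)

x = torch.randn(3, 4)
layer(x).sum().backward()
handle.remove()  # remove the hook once it is no longer needed
print(captured["grad_output"].shape)  # torch.Size([3, 2])
```

The hook fires once per backward pass; keeping a handle and calling remove() avoids accumulating stale hooks across experiments.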
register_forward_hook
The hook will be called every time after forward() has computed an output. It should have the following signature:
hook(module, input, output) -> None or modified output
The hook can modify the output. It can modify the input in-place, but that will have no effect on the forward pass, since the hook is called after forward() has run.
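A common use of forward hooks is saving intermediate activations for visualization. A minimal sketch:

```python
import torch
import torch.nn as nn

# Dictionary of captured activations, keyed by a name we choose.
activations = {}

def save_activation(name):
    def hook(module, input, output):
        # Detach so the stored tensor does not keep the autograd graph alive.
        activations[name] = output.detach()
    return hook

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
handle = model[1].register_forward_hook(save_activation("relu"))

model(torch.randn(5, 4))
handle.remove()
print(activations["relu"].shape)  # torch.Size([5, 8])
```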
register_forward_pre_hook
The hook will be called every time before forward() is invoked. It should have the following signature:
hook(module, input) -> None or modified input
The hook can modify the input. The user can return either a tuple or a single modified value from the hook.
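A pre-hook can rewrite the input before forward() sees it. A small sketch that doubles the input to a linear layer and verifies the effect:

```python
import torch
import torch.nn as nn

def double_input(module, input):
    # The input arrives as a tuple; return a tuple to replace it.
    return (input[0] * 2,)

layer = nn.Linear(3, 3, bias=False)
handle = layer.register_forward_pre_hook(double_input)

x = torch.ones(1, 3)
out_hooked = layer(x)   # forward() sees 2 * x
handle.remove()
out_plain = layer(x)    # forward() sees x
assert torch.allclose(out_hooked, out_plain * 2)
```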
state_dict
Returns a dictionary containing a whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names.
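The key naming can be seen on a small model; note how BatchNorm contributes persistent buffers (running statistics) alongside its parameters:

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 2), nn.BatchNorm1d(2))
sd = model.state_dict()
# Keys are "<submodule name>.<parameter or buffer name>".
print(sorted(sd.keys()))
# ['0.bias', '0.weight', '1.bias', '1.num_batches_tracked',
#  '1.running_mean', '1.running_var', '1.weight']
```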
parameters
Returns an iterator over module parameters.
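parameters() yields the tensors only; the related named_parameters() also yields their names, which is often more convenient for inspection:

```python
import torch.nn as nn

model = nn.Linear(4, 2)
for name, p in model.named_parameters():
    print(name, tuple(p.shape))
# weight (2, 4)
# bias (2,)

# Total number of trainable parameters: 4*2 weights + 2 biases = 10.
total = sum(p.numel() for p in model.parameters())
print(total)  # 10
```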
Examples
Example 1
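A hypothetical sketch for this example, tying the functions above together: register backward hooks on every linear layer and collect per-layer gradient norms, which can then be plotted to visualize gradient flow. The layer names and model here are placeholders, not from the original.

```python
import torch
import torch.nn as nn

grad_stats = {}

def make_hook(name):
    def hook(module, grad_input, grad_output):
        # Record one scalar per layer: the norm of the outgoing gradient.
        grad_stats[name] = grad_output[0].norm().item()
    return hook

model = nn.Sequential(nn.Linear(10, 10), nn.Tanh(), nn.Linear(10, 1))
for name, m in model.named_modules():
    if isinstance(m, nn.Linear):
        m.register_full_backward_hook(make_hook(name))

loss = model(torch.randn(8, 10)).sum()
loss.backward()
# grad_stats now maps layer names ("0", "2") to gradient norms, ready for plotting.
print(grad_stats)
```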
Example 2
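A hypothetical sketch for this example: capture feature maps from a small convolutional network with forward hooks and normalize them to [0, 1] for display. The network and names are illustrative assumptions, not from the original.

```python
import torch
import torch.nn as nn

features = {}

def save(name):
    def hook(module, input, output):
        features[name] = output.detach()
    return hook

net = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1),
                    nn.ReLU(),
                    nn.Conv2d(4, 8, 3, padding=1))
net[0].register_forward_hook(save("conv1"))
net[2].register_forward_hook(save("conv2"))

net(torch.randn(1, 1, 16, 16))
for name, f in features.items():
    # Scale each feature map to [0, 1] so it can be shown with e.g. imshow.
    img = (f - f.min()) / (f.max() - f.min() + 1e-8)
    print(name, tuple(img.shape))
```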