
Commit f6585e8

donglixp authored and soumith committed
if RNN's hx is None, requires_grad=False (pytorch#2274)
When the initial hidden states of an RNN are `None`, we don't need to compute their gradients.
1 parent 0b00095 commit f6585e8

File tree

1 file changed: +1 -1

torch/nn/modules/rnn.py

+1 -1
@@ -135,7 +135,7 @@ def forward(self, input, hx=None):
             hx = torch.autograd.Variable(input.data.new(self.num_layers *
                                                         num_directions,
                                                         max_batch_size,
-                                                        self.hidden_size).zero_())
+                                                        self.hidden_size).zero_(), requires_grad=False)
         if self.mode == 'LSTM':
             hx = (hx, hx)
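The effect of the change can be sketched with modern PyTorch (where `Variable` is merged into `Tensor` and `requires_grad=False` is the default for plain tensors): a zero-initialized hidden state that does not require grad never accumulates a gradient during backward. The sizes below are illustrative, not taken from the commit.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration only (not from the commit).
num_layers, num_directions, hidden_size = 2, 1, 5
max_batch_size, input_size, seq_len = 3, 4, 7

# Default hidden state, analogous to the patched code path:
# all zeros, explicitly not requiring gradients.
hx = torch.zeros(num_layers * num_directions, max_batch_size, hidden_size,
                 requires_grad=False)

rnn = nn.RNN(input_size=input_size, hidden_size=hidden_size,
             num_layers=num_layers)
inp = torch.randn(seq_len, max_batch_size, input_size)

out, hn = rnn(inp, hx)
out.sum().backward()

# Autograd never allocated a gradient buffer for hx.
print(hx.grad is None)  # True
```

Skipping gradient tracking for the default hidden state saves a needless gradient buffer and a small amount of backward computation, since a caller who passed `hx=None` has no use for `hx.grad`.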
141141

0 commit comments
