
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: PyTorch error

Asked by: singa1994 · Asked: 11/21/2020 · Last edited by: Anshika Singh · Updated: 11/21/2020 · Views: 995

Q:

I am trying to run some code in PyTorch, but at this point I am stuck:

On the first iteration, the backward passes for both the discriminator and the generator run fine:

....

self.G_loss.backward(retain_graph=True)

self.D_loss.backward()

...

On the second iteration, when `self.G_loss.backward(retain_graph=True)` executes, I get the following error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [8192, 512]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
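For context, the version-counter mismatch in this message can be reproduced with a minimal, self-contained example unrelated to the asker's model: autograd saves tensors needed for the backward pass, and modifying one of them in place bumps its version counter, so `backward()` detects the mismatch.

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x.sigmoid()   # sigmoid's backward uses its saved output y (version 0)
y.add_(1)         # in-place op bumps y's version counter to 1

try:
    y.sum().backward()
except RuntimeError as e:
    # "one of the variables needed for gradient computation has been
    #  modified by an inplace operation"
    print(type(e).__name__)
```

Replacing `y.add_(1)` with the out-of-place `y + 1` avoids the error, because the saved tensor is left untouched.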

According to `torch.autograd.set_detect_anomaly`, the last of the following lines in the discriminator network is responsible:

    bottleneck = bottleneck[:-1]
    self.embedding = x.view(x.size(0), -1)
    self.logit = self.layers[-1](self.embedding)
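As an aside, anomaly detection can be enabled globally or scoped with a context manager; this is a generic usage sketch, not the asker's code. Anomaly mode slows training, so it is usually turned on only while debugging.

```python
import torch

# Global switch: subsequent backward calls report the forward-pass
# operation that produced the failing gradient.
torch.autograd.set_detect_anomaly(True)

# Scoped variant: anomaly checking only inside the context.
x = torch.ones(2, requires_grad=True)
with torch.autograd.detect_anomaly():
    (x * 2).sum().backward()

torch.autograd.set_detect_anomaly(False)  # turn it off again after debugging
```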

Strangely, I have used this network architecture in other code, where it worked fine. Any suggestions?

The full error:

    site-packages\torch\autograd\__init__.py", line 127, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [8192, 512]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
python · neural-network · pytorch

A:

0 votes · singa1994 · 11/21/2020 · #1

Solved by removing the code containing the line `loss += loss_val`.
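A sketch of the pattern this fix implies, assuming `loss_val` was a graph-carrying tensor accumulated across iterations (the names here are illustrative, not the asker's): accumulating the raw tensor links each iteration's graph to the previous one, and the optimizer's in-place parameter updates then invalidate saved tensors for the next `backward()`. Converting to a plain number with `.item()` (or `.detach()`) keeps the running total for logging without retaining any graph.

```python
import torch

w = torch.nn.Parameter(torch.ones(2))
opt = torch.optim.SGD([w], lr=0.1)

running_loss = 0.0  # plain float for logging, NOT a graph-carrying tensor
for step in range(2):
    loss = (w * w).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()                   # in-place update bumps w's version counter
    running_loss += loss.item()  # .item() detaches; no stale graph is kept
```

Had the loop used `running_loss = running_loss + loss` with a tensor `running_loss`, a later `backward()` through the accumulated total would revisit graphs whose saved tensors were already modified in place, producing the same "is at version 2; expected version 1" error.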