MLP neural network

Asked by: Milad · Asked: 11/17/2023 · Last edited by: Milad · Updated: 11/20/2023 · Views: 27

Q:

I want to use the L-BFGS optimizer instead of SGD during the training phase of my network. The training loop looks like this, but I often get nan for the training loss. What is the problem? Also, if anyone has good code for this, I would appreciate it if you could share it. Thanks.

# Imports needed by this snippet
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from torch.optim import LBFGS

# Hyperparameters
learning_rate = 0.1
num_epochs = 500
# Loss function and optimizer
criterion = nn.MSELoss()
optimizer = LBFGS(model.parameters(), lr=learning_rate, max_iter=10, history_size=40)
# Training loop
train_losses = []
val_losses = []

for epoch in range(num_epochs):
    model.train()
    epoch_loss = 0.0

    for batch_idx, (features, labels) in enumerate(train_dataloader):
        def closure():
            if torch.is_grad_enabled():
                optimizer.zero_grad()
            outputs = model(features.float())
            loss = criterion(outputs, labels)
            if loss.requires_grad:
                loss.backward()
            return loss.item()  # Return the loss value, not the loss variable

        # L-BFGS step
        optimizer.step(closure)

        # Calculate the loss for monitoring (do not backpropagate)
        epoch_loss += closure()

        # if batch_idx % 200 == 0:
        #     print(f'Epoch [{epoch + 1}/{num_epochs}], Batch [{batch_idx + 1}/{len(train_dataloader)}], Loss: {epoch_loss / (batch_idx + 1):.4f}')

    # Calculate average epoch loss
    epoch_loss /= len(train_dataloader)
    train_losses.append(epoch_loss)
    print(f'Epoch [{epoch + 1}/{num_epochs}], Training Loss: {epoch_loss}')

# Plotting
plt.figure()
plt.title("Training loss per epoch", color='white', backgroundcolor='teal')
plt.plot(np.arange(0, len(train_losses)), train_losses, color='green', linewidth=2, markersize=12)
plt.xlabel('epochs')
plt.ylabel('Error')
plt.show()
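
For completeness, here is a minimal, self-contained sketch of the same L-BFGS training idea on placeholder data. The toy model and synthetic dataset below are only assumptions added so the snippet runs on its own; they are not my actual setup:

import torch
import torch.nn as nn
from torch.optim import LBFGS

torch.manual_seed(0)

# Placeholder 1-D regression data (assumption, not my real dataset)
X = torch.linspace(-1, 1, 200).unsqueeze(1)
y = X.pow(2) + 0.05 * torch.randn_like(X)

# Placeholder MLP (assumption, not my real model)
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

criterion = nn.MSELoss()
# A smaller lr together with the strong-Wolfe line search is often more stable
optimizer = LBFGS(model.parameters(), lr=0.1, max_iter=10,
                  history_size=40, line_search_fn='strong_wolfe')

for epoch in range(50):
    def closure():
        optimizer.zero_grad()
        loss = criterion(model(X), y)
        loss.backward()
        return loss  # L-BFGS re-evaluates this closure during its line search

    optimizer.step(closure)
    with torch.no_grad():
        print(epoch, criterion(model(X), y).item())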

Also, all of my code is available at the following link: https://colab.research.google.com/drive/16Kg-4jO36Ql6GoGoo5wEdqKpF0fTPeWT#scrollTo=8mdCkWTNTIRf

I have tried many things, but I could not get it to work.

machine-learning deep-learning neural-network mlp

Comments


A: No answers yet