Asked by: Adilet Kamchibekov · Asked: 11/13/2023 · Last edited by: Adilet Kamchibekov · Updated: 11/13/2023 · Views: 24
How to backpropagate in a multi-layer perceptron (MLP)?
Q:
Here is the code for my backpropagation function:
back(target) {
  // Learning rate
  const learningRate = 0.1;
  let delta = null;
  for (let layer = this.layers.length - 1; layer >= 0; layer--) {
    const neurons = this.layers[layer];
    // Calculating the gradient for output neurons is different from hidden layers
    if (layer == this.layers.length - 1) {
      delta = neurons.map((n, i) => (n - target[i]) * this.sigmoid_deriv(n));
      // Update biases of the output neurons
      this.biases[layer - 1] = this.biases[layer - 1].map(
        (b, i) => b - learningRate * delta[i]
      );
      continue;
    }
    const weights = this.weights[layer];
    const w_gradient = neurons.map((n) => delta.map((d) => n * d));
    this.weights[layer] = w_gradient.map((gradients, n) =>
      gradients.map((gradient, w) => weights[n][w] - learningRate * gradient)
    );
    // Update delta for the next iteration
    delta = multiply(
      dot(delta, transpose(weights)),
      neurons.map((n) => this.sigmoid_deriv(n))
    );
    if (layer > 0) {
      this.biases[layer - 1] = this.biases[layer - 1].map(
        (b, i) => b - learningRate * delta[i]
      );
    }
  }
}
I am initializing random weights and biases in the range 0–1. Everything looks fine, but the network is not learning at all. I am not sure whether I am computing the gradients correctly here. Any help or pointers would be appreciated.
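For comparison, here is a minimal standalone sketch of MLP backpropagation in plain JavaScript (not the asker's class; all names such as `makeNet`, `forward`, and `backprop` are hypothetical). It assumes activations are stored post-sigmoid, so the derivative can be written as `a * (1 - a)`, and it initializes weights in [-1, 1] rather than [0, 1], since all-positive initialization is one common reason a small network fails to learn. Note that the new `delta` is computed from the weights *before* they are overwritten.

```javascript
// Sigmoid and its derivative expressed in terms of the activation a = sigmoid(x)
const sigmoid = (x) => 1 / (1 + Math.exp(-x));
const sigmoidDeriv = (a) => a * (1 - a);

// Build a fully connected net; weights[l][i][j] links neuron i of layer l
// to neuron j of layer l + 1, biases[l][j] belongs to layer l + 1.
function makeNet(sizes) {
  const rnd = () => Math.random() * 2 - 1; // init in [-1, 1], not [0, 1]
  const weights = [], biases = [];
  for (let l = 0; l < sizes.length - 1; l++) {
    weights.push(Array.from({ length: sizes[l] }, () =>
      Array.from({ length: sizes[l + 1] }, rnd)));
    biases.push(Array.from({ length: sizes[l + 1] }, rnd));
  }
  return { sizes, weights, biases };
}

// Forward pass; returns the activations of every layer, input included.
function forward(net, input) {
  const acts = [input];
  for (let l = 0; l < net.weights.length; l++) {
    const prev = acts[l];
    acts.push(net.biases[l].map((b, j) =>
      sigmoid(prev.reduce((s, a, i) => s + a * net.weights[l][i][j], b))));
  }
  return acts;
}

// One step of stochastic gradient descent on a single (input, target) pair.
function backprop(net, input, target, lr = 0.5) {
  const acts = forward(net, input);
  const L = net.weights.length;
  // Output-layer delta: dE/dz = (a - t) * a * (1 - a) for squared error
  let delta = acts[L].map((a, j) => (a - target[j]) * sigmoidDeriv(a));
  for (let l = L - 1; l >= 0; l--) {
    const prev = acts[l];
    // Propagate delta back through the OLD weights before updating them
    const nextDelta = prev.map((a, i) =>
      net.weights[l][i].reduce((s, w, j) => s + w * delta[j], 0) * sigmoidDeriv(a));
    // Weight gradient for (i, j) is prev[i] * delta[j]
    for (let i = 0; i < prev.length; i++)
      for (let j = 0; j < delta.length; j++)
        net.weights[l][i][j] -= lr * prev[i] * delta[j];
    for (let j = 0; j < delta.length; j++)
      net.biases[l][j] -= lr * delta[j];
    delta = nextDelta;
  }
}
```

A quick sanity check is to train repeatedly on one sample and confirm that the squared error shrinks; if it does not, the gradient sign or the derivative argument (pre- vs post-sigmoid) is usually the culprit.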
A: No answers yet