CNN Accuracy Stuck on Both Adam and SGD

Asked by: SpaceFox0210, 9/17/2023 · Last edited by: Ro.oT · Updated: 9/21/2023 · Views: 27

Q:

I'm building an agent that plays Gomoku using supervised learning and reinforcement learning. The problem occurs in the supervised-learning stage: the accuracy stops increasing and the loss stops decreasing. [Loss plot] [Accuracy plot] — pink is Adam, yellow is SGD.

My model and source code:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Dropout, Flatten
from tensorflow.keras.layers import BatchNormalization as BatchNormalizationV2  # V2 name is an alias in TF2
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import ModelCheckpoint

def InYeongGoModel(input_shape, is_policy_net=True):
    model = Sequential()
    model.add(Conv2D(64, (7, 7), input_shape=input_shape, padding='same', activation='relu', data_format='channels_first'))
    model.add(BatchNormalizationV2())
    model.add(Conv2D(64, (7, 7), input_shape=input_shape, padding='same', activation='relu', data_format='channels_first'))
    model.add(BatchNormalizationV2())
    model.add(Dropout(0.2))
    model.add(Conv2D(128, (5, 5), input_shape=input_shape, padding='same', activation='relu', data_format='channels_first'))
    model.add(BatchNormalizationV2())
    model.add(Conv2D(128, (5, 5), input_shape=input_shape, padding='same', activation='relu', data_format='channels_first'))
    model.add(BatchNormalizationV2())
    model.add(Dropout(0.2))   
    model.add(Conv2D(64, (3, 3), input_shape=input_shape, padding='same', activation='relu', data_format='channels_first'))
    model.add(BatchNormalizationV2())
    model.add(Conv2D(64, (3, 3), input_shape=input_shape, padding='same', activation='relu', data_format='channels_first'))
    model.add(BatchNormalizationV2())
    model.add(Dropout(0.2))

    if is_policy_net:
        model.add(Conv2D(filters=1, kernel_size=1, padding='same', data_format='channels_first', activation='softmax'))
        model.add(Flatten())
        return model

    # The methods below are from the agent class that runs training:
    def data_generator(self, states, actions, rewards, batch_size):
        n = states.shape[0]
        num_moves = len(actions)
        indices = np.arange(n)
        
        while True:
            np.random.shuffle(indices)
            
            for start_idx in range(0, n, batch_size):
                end_idx = min(start_idx + batch_size, n)
                batch_indices = indices[start_idx:end_idx]
                
                batch_states = states[batch_indices]
                batch_actions = actions[batch_indices]
                batch_rewards = rewards[batch_indices]
                
                y = np.zeros((batch_indices.shape[0], num_moves))
                
                for i, action in enumerate(batch_actions):
                    reward = batch_rewards[i]
                    y[i][action] = reward
                
                yield batch_states, y

    def train(self, experience, lr=0.00001, clipnorm=1.0, batch_size:int=256, epochs:int = 1):
        opt = SGD(learning_rate=lr, clipnorm=clipnorm) 
        #opt = Adam()
        self.model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])

        n = experience.states.shape[0]

        #current_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        #tensorboard_callback = TensorBoard(log_dir=f"./logs_{current_time}")
        
        checkpoint_callback = ModelCheckpoint(
            filepath=f"{self.encoder.name()}_alphago_sl_checkpoint_{{epoch:02d}}",
            save_best_only=False,
            period=10  # Save model every 10 epochs ('period' is deprecated in newer Keras; use save_freq)
        )
        
        generator = self.data_generator(experience.states, experience.actions, experience.rewards, batch_size)
        
        self.model.fit(
            generator,
            steps_per_epoch=n // batch_size,
            epochs=epochs,
            #callbacks=[tensorboard_callback, checkpoint_callback]
        )
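For reference, the labelling logic inside `data_generator` can be exercised on its own with tiny NumPy arrays (a minimal sketch with made-up shapes and values, not from the original post — shuffling is skipped to keep it deterministic):

```python
import numpy as np

# Toy stand-ins for the experience buffer: 4 board states on a 3x3 board,
# encoded channels-first as (N, planes, H, W); 9 possible moves.
states = np.zeros((4, 1, 3, 3), dtype=np.float32)
actions = np.array([0, 4, 8, 4])           # chosen move index per state
rewards = np.array([1.0, 1.0, -1.0, 1.0])  # +1 win / -1 loss per game

num_moves = 9
batch_size = 2
indices = np.arange(states.shape[0])       # no np.random.shuffle here

# Reproduce one generator step: build the target matrix y for the first batch
batch = indices[:batch_size]
y = np.zeros((batch.shape[0], num_moves))
for i, action in enumerate(actions[batch]):
    y[i][action] = rewards[batch][i]

print(y[0])  # a one-hot-like row: the reward at the chosen move, zeros elsewhere
```

Note that for losing games the target row contains `-1` at the chosen move, so the rows are not valid probability distributions, which is worth keeping in mind when pairing this with `categorical_crossentropy`.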

I tried Adam and SGD with learning rates of 0.01, 0.001, and 1e-4, but no learning rate solved the problem. The dataset has 1,280,000 samples and the batch size ranged from 256 to 512. Total epochs: 100, with a training time of about 4 hours (~2.4 min/epoch). Is there a solution, or a faster way to train?
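One thing that may be worth verifying (an observation about the posted model, not something stated in the question): Keras applies the `softmax` activation over the last axis by default, and with `data_format='channels_first'` the last axis of the policy head's output `(batch, 1, H, W)` is a board dimension, not the move axis — so each board row is normalized separately. A NumPy sketch of the effect on a hypothetical 3x3 board:

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical policy-head output: (batch, channels, H, W) = (1, 1, 3, 3)
logits = np.random.randn(1, 1, 3, 3)

# Default behaviour: softmax over the last axis, so each board *row* sums to 1
row_wise = softmax(logits, axis=-1)
print(row_wise.reshape(-1).sum())  # sums to H (= 3), not 1, after flattening

# Normalizing over all H*W move positions together instead
flat = logits.reshape(1, -1)
board_wise = softmax(flat, axis=-1)
print(board_wise.sum())            # 1.0
```

If this applies, moving the `Flatten()` before a standalone softmax (e.g. an `Activation('softmax')` layer after flattening) would make the output a single distribution over all moves, which is what `categorical_crossentropy` expects.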

tensorflow deep-learning conv-neural-network agent gomoku

Comments


A: No answers yet