Translation (61) - PyTorch: changing an optimizer's learning rate directly (no LR schedule)

Stack Overflow popular questions index

If you notice any problems with the translation, feel free to point them out in the comments. Thanks.

To be honest, this question is a bit of an oddity: the question and both answers, three posts in all, come from the same person, although he notes in a comment that the approach was suggested by @某某某.


PyTorch: How to change the learning rate of an optimizer at any given moment (no LR schedule)

  • patapouf_ai asked:

    • Is it possible in PyTorch to change the learning rate of the optimizer in the middle of training dynamically (I don't want to define a learning rate schedule beforehand)?

    • So let's say I have an optimizer:

    • optim = torch.optim.SGD(model.parameters(), lr=0.01)
    • Now due to some tests which I perform during training, I realize my learning rate is too high so I want to change it to, say, 0.001. There doesn't seem to be a method optim.set_lr(0.001), but is there some way to do this?

  • Answers:

    • patapouf_ai – vote: 130

    • So the learning rate is stored in optim.param_groups[i]['lr']. optim.param_groups is a list of the different weight groups, which can have different learning rates. Thus, simply doing:

    • for g in optim.param_groups:
        g['lr'] = 0.001
    • will do the trick.

    • Alternatively,

    • as mentioned in the comments, if your learning rate only depends on the epoch number, you can use a learning rate scheduler.

    • For example (a modified example from the docs):

    • import torch
      from torch.optim.lr_scheduler import LambdaLR

      optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
      # Assuming optimizer has two parameter groups.
      lambda_group1 = lambda epoch: epoch // 30
      lambda_group2 = lambda epoch: 0.95 ** epoch
      scheduler = LambdaLR(optimizer, lr_lambda=[lambda_group1, lambda_group2])
      for epoch in range(100):
          train(...)
          validate(...)
          scheduler.step()
    • Also, there is a prebuilt learning rate scheduler, ReduceLROnPlateau, that reduces the learning rate when a monitored metric stops improving; a sketch of how it might be used follows below.
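
    • For reference, a minimal sketch of ReduceLROnPlateau in the same style as the example above. It assumes the same placeholder model, train(...) and validate(...) as before, that validate(...) returns the validation loss, and the factor/patience values are only illustrative:

    • import torch
      from torch.optim.lr_scheduler import ReduceLROnPlateau

      optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
      # Cut the learning rate by 10x once the validation loss has stopped
      # improving for 5 consecutive epochs.
      scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=5)
      for epoch in range(100):
          train(...)
          val_loss = validate(...)
          # Unlike most schedulers, step() takes the monitored metric.
          scheduler.step(val_loss)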

    • patapouf_ai – vote: 10

    • Instead of a loop as in patapouf_ai's answer, you can do it directly via (a combined, self-contained sketch follows below):

    • optim.param_groups[0]['lr'] = 0.001
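
    • Putting the two answers together, here is a small self-contained sketch. The toy nn.Linear model and the specific numbers are purely illustrative and not part of the original answers; the point is that optim.param_groups holds one dict per parameter group, and its 'lr' entries can be rewritten at any time:

    • import torch
      import torch.nn as nn

      model = nn.Linear(10, 2)  # toy model, purely for illustration
      # Two parameter groups; the second overrides the default lr below.
      optim = torch.optim.SGD(
          [
              {'params': [model.weight]},
              {'params': [model.bias], 'lr': 0.1},
          ],
          lr=0.01,
      )
      print([g['lr'] for g in optim.param_groups])  # [0.01, 0.1]
      # Change the learning rate of every group...
      for g in optim.param_groups:
          g['lr'] = 0.001
      # ...or of a single group, by indexing into param_groups.
      optim.param_groups[0]['lr'] = 0.0005
      print([g['lr'] for g in optim.param_groups])  # [0.0005, 0.001]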
