Merge branch 'main' of github.com:Elvin-Ma/pytorch_guide into main
Elvin-Ma committed Aug 19, 2023
2 parents 8c17bc3 + c19f46a commit ab651c6
Showing 2 changed files with 51 additions and 0 deletions.
2 changes: 2 additions & 0 deletions 0-install_guide/README.md
@@ -14,6 +14,8 @@
cuda: needed to run models on Nvidia GPUs;
cpu: run models on the CPU;
rocm: the compute platform for AMD GPUs;
7. Matching CUDA and driver versions (see the quick check below):
[CUDA/driver compatibility table](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html)
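
To sanity-check the pairing from Python, a small sketch using PyTorch's built-in version attributes (assumes PyTorch is already installed):
```python
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA version this PyTorch build was compiled against
print(torch.cuda.is_available())  # False can indicate a driver/CUDA version mismatch
```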

# conda environment management
1. (base) C:\Users\86183> : base : we are currently in the base environment;
49 changes: 49 additions & 0 deletions 0-torch_base/optimizer.md
@@ -0,0 +1,49 @@
# Optimizer overview


## 1 adam
[Reference link](https://pytorch.org/docs/master/generated/torch.optim.Adam.html?highlight=adam#torch.optim.Adam)

```python
CLASS torch.optim.Adam(
    params,               # an iterable of parameters, or a dict defining parameter groups
    lr=0.001,             # learning rate
    betas=(0.9, 0.999),   # coefficients for the running averages of the gradient and its square
    eps=1e-08,            # term added to the denominator to prevent division by zero
    weight_decay=0,       # weight decay coefficient (L2 penalty), default 0
    amsgrad=False,        # whether to use the AMSGrad variant of the algorithm
    *,
    foreach=None,         # whether to use the foreach (multi-tensor) implementation
    maximize=False,       # maximize the objective instead of minimizing it (gradient ascent)
    capturable=False,     # whether it is safe to capture this instance in a CUDA graph
    differentiable=False, # whether autograd should run through the optimizer step
    fused=None            # whether to use the fused implementation (CUDA only)
)

add_param_group(param_group)
'''
Adds a new parameter group to the optimizer;
a parameter group is a set of model parameters that share the same
hyperparameters (learning rate, weight decay, etc.);
by defining different parameter groups, different hyperparameters can be set
for different parts or layers of the model, which is useful when fine-tuning
a pre-trained network.
'''

```
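
A minimal end-to-end usage sketch for `torch.optim.Adam`; the toy model and data here are purely illustrative:
```python
import torch
import torch.optim as optim

model = torch.nn.Linear(10, 2)
optimizer = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))

x = torch.randn(4, 10)      # toy input batch
target = torch.randn(4, 2)  # toy regression target
loss = torch.nn.functional.mse_loss(model(x), target)

optimizer.zero_grad()  # clear gradients from the previous step
loss.backward()        # compute fresh gradients
optimizer.step()       # apply the Adam update
```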

* Adding a new parameter group
```python
import torch
import torch.optim as optim

# Create a model and an optimizer over its parameters
model = torch.nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1)

# A new module whose parameters are not yet tracked by the optimizer
# (re-adding parameters already in the optimizer raises a ValueError)
extra_layer = torch.nn.Linear(2, 2)

# add_param_group expects a dict; add the new parameters as their own group
optimizer.add_param_group({'params': extra_layer.parameters(), 'lr': 0.01})
```
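
Parameter groups can also be defined directly at construction time; a sketch (splitting a single `Linear` layer's weight and bias is just for illustration):
```python
import torch
import torch.optim as optim

model = torch.nn.Linear(10, 2)

# Two groups: the bias group overrides the default learning rate
optimizer = optim.SGD([
    {'params': [model.weight]},            # uses the default lr=0.1
    {'params': [model.bias], 'lr': 0.01},  # group-specific lr
], lr=0.1, momentum=0.9)
```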

## 2 sgd
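[Reference link](https://pytorch.org/docs/master/generated/torch.optim.SGD.html)

A minimal construction sketch for `torch.optim.SGD`; the hyperparameter values are illustrative, not recommendations:
```python
import torch
import torch.optim as optim

model = torch.nn.Linear(10, 2)

# momentum, weight_decay, and nesterov are optional keyword arguments
optimizer = optim.SGD(model.parameters(), lr=0.1,
                      momentum=0.9, weight_decay=1e-4, nesterov=True)
```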
