1. Model Saving and Loading

Q: How does PyTorch handle serialization and deserialization?

  • torch.save(obj, f, pickle_module=pickle, pickle_protocol=2, _use_new_zipfile_serialization=True)
  • obj: the object to serialize; f: the output path (file or file-like object)
  • torch.load(f, map_location=None, pickle_module=pickle, **pickle_load_args)
  • f: the file path; map_location: specifies where to load the saved storages, e.g. CPU or GPU (see the sketch below)
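A minimal save/load sketch, assuming a small nn.Linear module and an arbitrary file name checkpoint.pth; map_location lets a checkpoint written on a GPU machine be loaded on a CPU-only machine:

```python
import torch
import torch.nn as nn

net = nn.Linear(4, 2)

# Serialize a picklable object (here, the parameter dict) to disk.
torch.save(net.state_dict(), "checkpoint.pth")

# Deserialize; map_location="cpu" remaps any GPU-saved storages onto the CPU,
# so the file can also be loaded on a machine without CUDA.
state_dict = torch.load("checkpoint.pth", map_location="cpu")
net.load_state_dict(state_dict)
```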

Q: What are the two ways to save a model?

  • 1. Save the entire Module: torch.save(net, path)
  • 2. Save only the model parameters (state_dict): torch.save(net.state_dict(), path) (both load patterns are sketched below)
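A minimal sketch of both approaches and how each is loaded back, assuming a hypothetical toy model class Net and arbitrary file names:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

net = Net()

# 1. Save the entire Module: pickles the class together with its parameters.
torch.save(net, "net_whole.pth")
net_loaded = torch.load("net_whole.pth")   # the Net class must still be importable

# 2. Save only the parameters (state_dict): smaller and more portable.
torch.save(net.state_dict(), "net_params.pth")
net2 = Net()                               # rebuild the architecture first
net2.load_state_dict(torch.load("net_params.pth"))
```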

2. Model Finetuning
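A minimal finetuning sketch, assuming torchvision's ResNet-18 pretrained on ImageNet and a hypothetical 2-class target task: freeze the pretrained backbone, replace the classification head, and optimize only the new layer.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Load ImageNet-pretrained weights (downloaded on first use).
model = models.resnet18(pretrained=True)

# Freeze the pretrained backbone so only the new head will be updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for the 2-class task;
# the new layer's parameters require gradients by default.
model.fc = nn.Linear(model.fc.in_features, 2)

# Pass only the trainable parameters (the new head) to the optimizer.
optimizer = optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()),
    lr=0.01, momentum=0.9,
)
```

A common alternative to freezing is to keep the whole network trainable but give the pretrained layers a much smaller learning rate via optimizer parameter groups.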


3. Using the GPU

Q: How does the .to() method convert data types and devices?

```python
x = torch.ones((3, 3))
x = x.to(torch.float64)

x = torch.ones((3, 3))
x = x.to("cuda")

linear = nn.Linear(2, 2)
linear.to(torch.double)

gpu1 = torch.device("cuda")
linear.to(gpu1)
```

  • Note: for tensors, .to() is not in-place (it returns a new tensor that must be reassigned); for models (nn.Module), .to() is in-place (the module itself is converted). A device-agnostic usage sketch follows below.
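A device-agnostic usage sketch (the layer and tensor sizes are arbitrary) that picks the device once and moves both the model and the data to it:

```python
import torch
import torch.nn as nn

# Fall back to the CPU when CUDA is not available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(3, 3).to(device)    # in-place for modules
x = torch.ones((2, 3)).to(device)     # returns a new tensor, so it must be reassigned

output = model(x)
print(output.device)
```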