
PyTorch distributed training error: RuntimeError: Socket Timeout

Published: 2024-04-11 16:15:36 | Source: 网络cs | Author: 亙句


Background: Because of the particular nature of my task, I train on multiple GPUs but test on a single GPU. The dataset is large and the evaluation metrics are expensive to compute, so the test phase takes a long time.
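The setup described above (train on all ranks, evaluate only on rank 0) can be sketched roughly as follows. This is a hypothetical reconstruction, not the author's code: `evaluate()` is a placeholder for the long test loop, and `world_size=1` plus the address/port are only there so the sketch runs as a single standalone process (in a real run the launcher supplies rank and world size):

```python
import os
import torch.distributed as dist

def evaluate():
    # Placeholder for the long-running single-GPU test loop.
    return 0.0

def main():
    # env:// rendezvous; address and port are illustrative.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29501")
    # world_size=1 so this sketch runs as one process; normally these
    # values come from the launcher (torchrun / torch.distributed.launch).
    dist.init_process_group(backend="gloo", rank=0, world_size=1)

    # ... training on every rank ...

    metric = None
    if dist.get_rank() == 0:
        metric = evaluate()  # long single-GPU evaluation
    # Non-zero ranks block here while rank 0 evaluates; if evaluation
    # outlasts the collective timeout (30 minutes by default), the
    # waiting ranks fail with a timeout error.
    dist.barrier()
    dist.destroy_process_group()
    return metric

metric = main()
```

The barrier is what couples the slow single-GPU evaluation to every other rank's timeout budget.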

Error message:

  File "/home/anys/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 940, in __init__
    self._reset(loader, first_iter=True)
  File "/home/anys/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 971, in _reset
    self._try_put_index()
  File "/home/anys/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1205, in _try_put_index
    index = self._next_index()
  File "/home/anys/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 508, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
  File "/home/anys/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 227, in __iter__
    for idx in self.sampler:
  File "/home/anys/GRALF/AlignTransReID/TransReID/datasets/sampler_ddp.py", line 148, in __iter__
    seed = shared_random_seed()
  File "/home/anys/GRALF/AlignTransReID/TransReID/datasets/sampler_ddp.py", line 108, in shared_random_seed
    all_ints = all_gather(ints)
  File "/home/anys/GRALF/AlignTransReID/TransReID/datasets/sampler_ddp.py", line 77, in all_gather
    group = _get_global_gloo_group()
  File "/home/anys/GRALF/AlignTransReID/TransReID/datasets/sampler_ddp.py", line 18, in _get_global_gloo_group
    return dist.new_group(backend="gloo")
  File "/home/anys/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2503, in new_group
    pg = _new_process_group_helper(group_world_size,
  File "/home/anys/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 588, in _new_process_group_helper
    pg = ProcessGroupGloo(
RuntimeError: Socket Timeout

The traceback shows that the timeout is raised during data loading, when the DDP sampler calls dist.new_group(backend="gloo") to create a fresh process group. The fix is to give that group a much longer timeout than the default:

import datetime
import torch.distributed as dist

dist.new_group(backend="gloo", timeout=datetime.timedelta(days=1))

More generally, when this timeout error appears, check every place that creates a new process group (or spawns worker processes) and increase its timeout.
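Putting that advice together, a minimal self-contained sketch of the fix looks like this. It assumes a gloo backend; `world_size=1` and the address/port are only there so the sketch runs as a single process, and one day is an arbitrary generous bound:

```python
import datetime
import os
import torch.distributed as dist

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29502")

# One day comfortably covers any realistic evaluation run; tune as needed.
long_timeout = datetime.timedelta(days=1)

# Enlarge the timeout of the default process group...
dist.init_process_group(backend="gloo", rank=0, world_size=1,
                        timeout=long_timeout)
# ...and of the extra gloo group the sampler creates (the dist.new_group
# call that raised "Socket Timeout" in the traceback above).
gloo_group = dist.new_group(backend="gloo", timeout=long_timeout)

dist.destroy_process_group(gloo_group)
dist.destroy_process_group()
```

Passing the timeout at both creation sites matters: a group created with dist.new_group does not inherit a timeout you only set on the default group.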

Original article: https://www.kjpai.cn/news/2024-04-11/157159.html (source: 网络cs, author: 亙句). Copyright belongs to the author; please credit the source and author when reposting.
