I'll keep updating this post as I work through the reproduction.
Reproducing U-Mamba
I. Environment Setup
Steps that went wrong
1. Step: pip install causal-conv1d==1.1.1
Error: ERROR: Could not build wheels for causal-conv1d, which is required to install pyproject.toml-based projects
Solution: download the wheel manually and install it directly. If you want the latest version, the third step can be skipped.
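A minimal sketch of the manual install, assuming you grab a prebuilt wheel from the causal-conv1d GitHub releases page; the filename below is illustrative, and the CUDA / PyTorch / Python tags must match your environment:

```bash
# Illustrative wheel name - pick the one matching your CUDA, PyTorch and Python
# versions from https://github.com/Dao-AILab/causal-conv1d/releases
wget https://github.com/Dao-AILab/causal-conv1d/releases/download/v1.1.1/causal_conv1d-1.1.1+cu118torch2.1cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
pip install ./causal_conv1d-1.1.1+cu118torch2.1cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
```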
2. Step: pip install mamba-ssm
This version gave me errors at training time! I uninstalled it and installed a specific version instead: mamba-ssm==1.1.1.
Error:
```
Guessing wheel URL: https://github.com/state-spaces/mamba/releases/download/v1.1.1/mamba_ssm-1.1.1+cu118torch2.1cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
error:
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for mamba-ssm
Running setup.py clean for mamba-ssm
Failed to build mamba-ssm
ERROR: Could not build wheels for mamba-ssm, which is required to install pyproject.toml-based projects
```
Solution: open the URL shown in the error output (the "Guessing wheel URL" line), download the .whl file, and install it with pip install <path-to-whl>.
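For example, a sketch based on the exact wheel URL printed in the error above (make sure the CUDA / PyTorch / Python tags match your environment before reusing it):

```bash
# Download the prebuilt wheel that pip tried to fetch, then install it from disk.
wget https://github.com/state-spaces/mamba/releases/download/v1.1.1/mamba_ssm-1.1.1+cu118torch2.1cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
pip install ./mamba_ssm-1.1.1+cu118torch2.1cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
```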
3. A while ago I switched to a different server, where none of the problems above showed up, so I suspect the old server simply had a bad network connection. But installation on the new server hit its own problem.
Problem: FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/cuda/bin/nvcc'
Solution: I don't know servers very well. The new server has no /usr/local/cuda/ directory at all (the old one did). A senior labmate explained that here everyone installs CUDA inside their own environment and only the driver is installed system-wide, so I installed the CUDA toolkit myself.
Reference: installing CUDA on Linux without root access
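One common way to get nvcc without root is to install the CUDA compiler into your own conda environment. A minimal sketch, assuming conda and CUDA 11.8 (the channel label and version here are assumptions; pick the CUDA version your PyTorch build expects):

```bash
# Install nvcc into the active conda environment (no root required).
conda install -c "nvidia/label/cuda-11.8.0" cuda-nvcc

# Some build scripts look for CUDA_HOME; point it at the environment prefix.
export CUDA_HOME="$CONDA_PREFIX"
```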
II. Running
1. Data preprocessing
If you don't know how to use nnUNetv2 yet, you can refer to this:
nnunetv2
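For reference, the standard nnUNetv2 planning/preprocessing call looks roughly like this (a sketch; DATASET_ID is a placeholder, and the nnUNet_raw / nnUNet_preprocessed / nnUNet_results variables must already be exported, as in the train.sh below):

```bash
# Verify the dataset layout, then run experiment planning and preprocessing.
nnUNetv2_plan_and_preprocess -d DATASET_ID --verify_dataset_integrity
```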
2. Training
My data was already preprocessed (3D), and I used this command:
```
nnUNetv2_train DATASET_ID 3d_fullres all -tr nnUNetTrainerUMambaEnc
```
Because the nnU-Net environment variables have to be set first, I put together this train.sh file:
```bash
export nnUNet_results="data/nnUNet_results"    # change these to your own paths
export nnUNet_preprocessed="data/nnUNet_raw"
export nnUNet_raw="data/nnUNet_raw"
export CUDA_VISIBLE_DEVICES=0
# 206 is the dataset ID; "0" trains only fold 0, i.e. a single predefined
# train/val split instead of the full 5-fold cross-validation.
nnUNetv2_train 206 3d_fullres 0 -tr nnUNetTrainerUMambaBot
```
Run it with:
```bash
source U-Mamba/train.sh
```
3. Errors
```
TypeError: causal_conv1d_fwd(): incompatible function arguments. The following argument types are supported:
    1. (arg0: torch.Tensor, arg1: torch.Tensor, arg2: Optional[torch.Tensor], arg3: Optional[torch.Tensor], arg4: bool) -> torch.Tensor
```
Solution: see the project's issue tracker, https://github.com/bowang-lab/U-Mamba/issues
Swap mamba-ssm for the version used in the authors' environment.
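One way to do that, using the 1.1.1 versions mentioned earlier in this post (treat the exact numbers as environment-dependent and check the authors' environment file if in doubt):

```bash
# Remove the mismatched packages, then reinstall pinned versions.
pip uninstall -y mamba-ssm causal-conv1d
pip install causal-conv1d==1.1.1
pip install mamba-ssm==1.1.1
```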
III. Testing
Create a test.sh file:
```bash
nnUNetv2_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -d DATASET_ID -c CONFIGURATION -tr nnUNetTrainerUMambaBot --disable_tta
```
**Error:** FileNotFoundError: [Errno 2] No such file or directory: 'U-Mamba/data/nnUNet_results/Dataset208_tubu/nnUNetTrainerUMambaBot__nnUNetPlans__3d_fullres/fold_1/checkpoint_final.pth'
Checking that directory confirms there is no checkpoint_final.pth.
Go to line 759 of umamba/nnunetv2/inference/predict_from_raw_data.py:
```python
parser.add_argument('-chk', type=str, required=False, default='checkpoint_final.pth',
                    help='Name of the checkpoint you want to use. Default: checkpoint_final.pth')
```
and change the default to checkpoint_best.pth or checkpoint_latest.pth as needed.
**Note:** also, my results are under fold_0, so I added -f 0 to the test.sh command.
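Since -chk is an ordinary command-line option (see the argparse line above), an alternative to editing the source is to pass the checkpoint name directly. A sketch of the adjusted command (folders and dataset ID are placeholders, as before):

```bash
nnUNetv2_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -d DATASET_ID -c 3d_fullres \
    -tr nnUNetTrainerUMambaBot -f 0 -chk checkpoint_best.pth --disable_tta
```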
**Error:** lots of strange errors appeared next; after some fiddling they went away, though I still don't fully understand why.
```
Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library.
        Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.
Traceback (most recent call last):
  File "umamba/bin/nnUNetv2_predict", line 33, in <module>
    sys.exit(load_entry_point('nnunetv2', 'console_scripts', 'nnUNetv2_predict')())
  File "U-Mamba/umamba/nnunetv2/inference/predict_from_raw_data.py", line 833, in predict_entry_point
    predictor.predict_from_files(args.i, args.o, save_probabilities=args.save_probabilities,
  File "U-Mamba/umamba/nnunetv2/inference/predict_from_raw_data.py", line 250, in predict_from_files
    return self.predict_from_data_iterator(data_iterator, save_probabilities, num_processes_segmentation_export)
  File "U-Mamba/umamba/nnunetv2/inference/predict_from_raw_data.py", line 343, in predict_from_data_iterator
    for preprocessed in data_iterator:
  File "U-Mamba/umamba/nnunetv2/inference/data_iterators.py", line 109, in preprocessing_iterator_fromfiles
    raise RuntimeError('Background workers died. Look for the error message further up! If there is '
RuntimeError: Background workers died. Look for the error message further up! If there is none then your RAM was full and the worker was killed by the OS. Use fewer workers or get more RAM in that case!
```
I'm not sure which one was the real problem: the mkl-service error (MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1) or the RuntimeError about background workers dying.
For the mkl-service error, I followed a reference and added:
```bash
export MKL_SERVICE_FORCE_INTEL=1
export MKL_THREADING_LAYER=GNU
```
For the RuntimeError about background workers dying, I followed an issue in the nnU-Net repository and added these flags to the predict command:
```
-npp 1 -nps 1
```
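Putting everything together, a sketch of what the final test.sh roughly looks like (paths, dataset ID and folders are placeholders; adjust them to your own setup):

```bash
export nnUNet_results="data/nnUNet_results"    # change these to your own paths
export nnUNet_preprocessed="data/nnUNet_raw"
export nnUNet_raw="data/nnUNet_raw"
export CUDA_VISIBLE_DEVICES=0
export MKL_SERVICE_FORCE_INTEL=1
export MKL_THREADING_LAYER=GNU

# -f 0: use fold_0; -chk: use the best checkpoint instead of the missing final one;
# -npp 1 -nps 1: single preprocessing/export worker to avoid "Background workers died".
nnUNetv2_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -d DATASET_ID -c 3d_fullres \
    -tr nnUNetTrainerUMambaBot -f 0 -chk checkpoint_best.pth --disable_tta -npp 1 -nps 1
```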
With that it ran successfully, although the mkl-service message about MKL_THREADING_LAYER still gets printed.
So it's fixed, but I don't really know why. *shrug*