[6D Pose Estimation] FoundationPose: Running the Demos, with Training Notes


Preface

This post documents how to run the FoundationPose demo that takes a CAD model as input, producing pose estimates and visualized results.

It then covers training the NeRF object-reconstruction stage, and the demo that takes RGBD images as input.

1. Setting Up the Environment

Option 1: Docker image (recommended)

First, get the open-source code: https://github.com/NVlabs/FoundationPose

Then run the following commands to pull the image and build the environment:

cd docker/
docker pull wenbowen123/foundationpose && docker tag wenbowen123/foundationpose foundationpose
bash docker/run_container.sh
bash build_all.sh

Once the build finishes, you can enter the running container with docker exec.
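
For example: docker exec -it foundationpose bash (the container name here is an assumption; it depends on what run_container.sh assigns, so check docker ps if it differs).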

Option 2: Conda (more involved)

First, install Eigen3:

cd $HOME && wget -q https://gitlab.com/libeigen/eigen/-/archive/3.4.0/eigen-3.4.0.tar.gz && \
tar -xzf eigen-3.4.0.tar.gz && \
cd eigen-3.4.0 && mkdir build && cd build
cmake .. -Wno-dev -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS=-std=c++14
sudo make install
cd $HOME && rm -rf eigen-3.4.0 eigen-3.4.0.tar.gz

Then create the conda environment with the commands below:

# create conda environment
conda create -n foundationpose python=3.9

# activate conda environment
conda activate foundationpose

# install dependencies
python -m pip install -r requirements.txt

# Install NVDiffRast
python -m pip install --quiet --no-cache-dir git+https://github.com/NVlabs/nvdiffrast.git

# Kaolin (Optional, needed if running model-free setup)
python -m pip install --quiet --no-cache-dir kaolin==0.15.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-2.0.0_cu118.html

# PyTorch3D
python -m pip install --quiet --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py39_cu118_pyt200/download.html

# Build extensions
CMAKE_PREFIX_PATH=$CONDA_PREFIX/lib/python3.9/site-packages/pybind11/share/cmake/pybind11 bash build_all_conda.sh
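
After the build, it is worth sanity-checking that the key packages import and see the GPU. A minimal sketch, assuming the versions installed above:

import torch
import nvdiffrast.torch as dr

# Verify CUDA is visible and the rasterizer extension loads
print(torch.__version__, torch.cuda.is_available())
glctx = dr.RasterizeCudaContext()  # raises if nvdiffrast was not built correctly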

2. Demo with a CAD Model as Input

First, download the model weights; the download contains two folders. Click to download (model weights).

Create a weights/ directory in the project root and put the two folders inside it.

Next, download the test data; it contains two archives. Click to download (demo data).

Create a demo_data/ directory in the project root, extract the archives, and put the two extracted folders inside it.
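
For reference, the mustard0 sequence should roughly follow the layout the demo reader expects (inferred from YcbineoatReader in datareader.py; treat the exact folder set as an assumption):

demo_data/mustard0/
  cam_K.txt   # 3x3 camera intrinsics
  rgb/        # color frames
  depth/      # depth frames
  masks/      # object mask(s)
  mesh/       # textured CAD model used by run_demo.py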

Run run_demo.py to execute the demo with the CAD model as input:

python run_demo.py --debug 2

If you are running on a headless server with no display, you need to comment out the two display lines (shown commented below):

    if debug>=1:
      center_pose = pose@np.linalg.inv(to_origin)
      vis = draw_posed_3d_box(reader.K, img=color, ob_in_cam=center_pose, bbox=bbox)
      vis = draw_xyz_axis(color, ob_in_cam=center_pose, scale=0.1, K=reader.K, thickness=3, transparency=0, is_input_rgb=True)

      # cv2.imshow('1', vis[...,::-1])
      # cv2.waitKey(1)

Afterwards, look in demo_data/mustard0/: two folders, ob_in_cam and track_vis, have been generated.

ob_in_cam contains the pose estimation results, stored as txt files; example file:

6.073544621467590332e-01 -2.560715079307556152e-01 7.520291209220886230e-01 -4.481770694255828857e-01
-7.755840420722961426e-01 -3.960975110530853271e-01 4.915038347244262695e-01 1.187708452343940735e-01
1.720167100429534912e-01 -8.817789554595947266e-01 -4.391765296459197998e-01 8.016449213027954102e-01
0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 1.000000000000000000e+00
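
Each file stores a 4x4 homogeneous object-to-camera transform. A minimal sketch for parsing one with numpy (the file name below is a placeholder for a real frame id):

import numpy as np

# Each file in ob_in_cam/ is a 4x4 homogeneous transform (object -> camera)
pose = np.loadtxt('demo_data/mustard0/ob_in_cam/0000001.txt')
R = pose[:3, :3]  # 3x3 rotation
t = pose[:3, 3]   # translation in meters
print('rotation:\n', R)
print('translation (m):', t)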

track_vis contains the visualization results; you can see a series of rendered frames:

3. NeRF Object Reconstruction Training

Download the training data; examples from the two public datasets Linemod and YCB-V are provided:

Click to download (RGBD reference data)

Example 1: Training on the Linemod dataset

Edit bundlesdf/run_nerf.py and set use_refined_mask=False on line 98:

mesh = run_one_ob(base_dir=base_dir, cfg=cfg, use_refined_mask=False)

Then run:

python bundlesdf/run_nerf.py --ref_view_dir /DATASET/lm_ref_views --dataset linemod

If running on a headless server with no display, install xvfb:

sudo apt-get update
sudo apt-get install -y xvfb

Then run:

xvfb-run -s "-screen 0 1024x768x24" python bundlesdf/run_nerf.py --ref_view_dir model_free_ref_views/lm_ref_views --dataset linemod

NeRF training needs to render, and xvfb-run provides a virtual display for it (here, screen 0 at 1024x768 with 24-bit color).

You should see output like this:

bundlesdf/run_nerf.py:61: DeprecationWarning: Starting with ImageIO v3 the behavior of this function will switch to that of iio.v3.imread. To keep the current behavior (and make this warning disappear) use `import imageio.v2 as imageio` or call `imageio.v2.imread` directly.
  rgb = imageio.imread(color_file)
[compute_scene_bounds()] compute_scene_bounds_worker start
[compute_scene_bounds()] compute_scene_bounds_worker done
[compute_scene_bounds()] merge pcd
[compute_scene_bounds()] compute_translation_scales done
translation_cvcam=[0.00024226 0.00356217 0.00056694], sc_factor=19.274929219577043
[build_octree()] Octree voxel dilate_radius:1
[__init__()] level:0, vox_pts:torch.Size([1, 3]), corner_pts:torch.Size([8, 3])
[__init__()] level:1, vox_pts:torch.Size([8, 3]), corner_pts:torch.Size([27, 3])
[__init__()] level:2, vox_pts:torch.Size([64, 3]), corner_pts:torch.Size([125, 3])
[draw()] level:2
[draw()] level:2
level 0, resolution: 32
level 1, resolution: 37
level 2, resolution: 43
level 3, resolution: 49
level 4, resolution: 56
level 5, resolution: 64
level 6, resolution: 74
level 7, resolution: 85
level 8, resolution: 98
level 9, resolution: 112
level 10, resolution: 128
level 11, resolution: 148
level 12, resolution: 169
level 13, resolution: 195
level 14, resolution: 223
level 15, resolution: 256
GridEncoder: input_dim=3 n_levels=16 level_dim=2 resolution=32 -> 256 per_level_scale=1.1487 params=(26463840, 2) gridtype=hash align_corners=False
sc_factor 19.274929219577043
translation [0.00024226 0.00356217 0.00056694]
[__init__()] denoise cloud
[__init__()] Denoising rays based on octree cloud
[__init__()] bad_mask#=3
rays torch.Size([128387, 12])
[train()] train progress 0/1001
[train_loop()] Iter: 0, valid_samples: 524161/524288, valid_rays: 2048/2048, loss: 309.0942383, rgb_loss: 0.0216732, rgb0_loss: 0.0000000, fs_rgb_loss: 0.0000000, depth_loss: 0.0000000, depth_loss0: 0.0000000, fs_loss: 301.6735840, point_cloud_loss: 0.0000000, point_cloud_normal_loss: 0.0000000, sdf_loss: 7.2143111, eikonal_loss: 0.0000000, variation_loss: 0.0000000, truncation(meter): 0.0100000, pose_reg: 0.0000000, reg_features: 0.1152707,
[train()] train progress 100/1001
[train()] train progress 200/1001
[train()] train progress 300/1001
[train()] train progress 400/1001
[train()] train progress 500/1001
Saved checkpoints at model_free_ref_views/lm_ref_views/ob_0000001/nerf/model_latest.pth
[train_loop()] Iter: 500, valid_samples: 518554/524288, valid_rays: 2026/2048, loss: 1.0530750, rgb_loss: 0.0009063, rgb0_loss: 0.0000000, fs_rgb_loss: 0.0000000, depth_loss: 0.0000000, depth_loss0: 0.0000000, fs_loss: 0.2142579, point_cloud_loss: 0.0000000, point_cloud_normal_loss: 0.0000000, sdf_loss: 0.8360301, eikonal_loss: 0.0000000, variation_loss: 0.0000000, truncation(meter): 0.0100000, pose_reg: 0.0000000, reg_features: 0.0008409,
[extract_mesh()] query_pts:torch.Size([42875, 3]), valid:42875
[extract_mesh()] Running Marching Cubes
[extract_mesh()] done V:(4536, 3), F:(8986, 3)
[train()] train progress 600/1001
[train()] train progress 700/1001
[train()] train progress 800/1001
[train()] train progress 900/1001
[train()] train progress 1000/1001
Saved checkpoints at model_free_ref_views/lm_ref_views/ob_0000001/nerf/model_latest.pth
[train_loop()] Iter: 1000, valid_samples: 519351/524288, valid_rays: 2029/2048, loss: 0.4827633, rgb_loss: 0.0006563, rgb0_loss: 0.0000000, fs_rgb_loss: 0.0000000, depth_loss: 0.0000000, depth_loss0: 0.0000000, fs_loss: 0.0935674, point_cloud_loss: 0.0000000, point_cloud_normal_loss: 0.0000000, sdf_loss: 0.3876466, eikonal_loss: 0.0000000, variation_loss: 0.0000000, truncation(meter): 0.0100000, pose_reg: 0.0000000, reg_features: 0.0001022,
[extract_mesh()] query_pts:torch.Size([42875, 3]), valid:42875
[extract_mesh()] Running Marching Cubes
[extract_mesh()] done V:(5265, 3), F:(10328, 3)
[extract_mesh()] query_pts:torch.Size([42875, 3]), valid:42875
[extract_mesh()] Running Marching Cubes
[extract_mesh()] done V:(5265, 3), F:(10328, 3)
[()] OpenGL_accelerate module loaded
[()] Using accelerated ArrayDatatype
[mesh_texture_from_train_images()] Texture: Texture map computation
project train_images 0/16
project train_images 1/16
project train_images 2/16
project train_images 3/16
project train_images 4/16
project train_images 5/16
project train_images 6/16
project train_images 7/16
project train_images 8/16
project train_images 9/16
project train_images 10/16
project train_images 11/16
project train_images 12/16
project train_images 13/16
project train_images 14/16
project train_images 15/16

Pay particular attention to how the loss evolves:

[train()] train progress 0/1001
[train_loop()] Iter: 0, valid_samples: 524161/524288, valid_rays: 2048/2048, loss: 309.0942383, rgb_loss: 0.0216732, rgb0_loss: 0.0000000, fs_rgb_loss: 0.0000000, depth_loss: 0.0000000, depth_loss0: 0.0000000, fs_loss: 301.6735840, point_cloud_loss: 0.0000000, point_cloud_normal_loss: 0.0000000, sdf_loss: 7.2143111, eikonal_loss: 0.0000000, variation_loss: 0.0000000, truncation(meter): 0.0100000, pose_reg: 0.0000000, reg_features: 0.1152707,

[train()] train progress 100/1001
[train()] train progress 200/1001
[train()] train progress 300/1001
[train()] train progress 400/1001
[train()] train progress 500/1001
Saved checkpoints at model_free_ref_views/lm_ref_views/ob_0000001/nerf/model_latest.pth
[train_loop()] Iter: 500, valid_samples: 518554/524288, valid_rays: 2026/2048, loss: 1.0530750, rgb_loss: 0.0009063, rgb0_loss: 0.0000000, fs_rgb_loss: 0.0000000, depth_loss: 0.0000000, depth_loss0: 0.0000000, fs_loss: 0.2142579, point_cloud_loss: 0.0000000, point_cloud_normal_loss: 0.0000000, sdf_loss: 0.8360301, eikonal_loss: 0.0000000, variation_loss: 0.0000000, truncation(meter): 0.0100000, pose_reg: 0.0000000, reg_features: 0.0008409,

Training runs for 1000 iterations by default and finishes quickly; the total loss drops from about 309 at iteration 0 to about 1.05 at iteration 500 and 0.48 at iteration 1000.

A nerf folder is generated under lm_ref_views/ob_0000001/, containing the following files:

In lm_ref_views/ob_0000001/model, a model.obj is generated; subsequent inference and demos use it directly.
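
A quick way to sanity-check the reconstructed mesh, as a sketch using trimesh (which the project already depends on):

import trimesh

# Load the object mesh produced by the NeRF reconstruction step
mesh = trimesh.load('model_free_ref_views/lm_ref_views/ob_0000001/model/model.obj')
print('vertices:', mesh.vertices.shape, 'faces:', mesh.faces.shape)
print('extents (m):', mesh.extents)  # object bounding-box size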

Example 2: Training on the YCB-V dataset

python bundlesdf/run_nerf.py --ref_view_dir /DATASET/ycbv/ref_views_16 --dataset ycbv

If running on a headless server with no display, install xvfb:

sudo apt-get update
sudo apt-get install -y xvfb

Then run:

xvfb-run -s "-screen 0 1024x768x24" python bundlesdf/run_nerf.py --ref_view_dir /DATASET/ycbv/ref_views_16 --dataset ycbv

As before, NeRF training needs to render, so xvfb provides the virtual display.

4. Demo with RGBD Input

Using the Linemod dataset as the example, first download the test dataset:

Click to download (test data)

Then extract the archive to: FoundationPose-main/model_free_ref_views/lm_test_all

The official code has issues here, so two files need to be replaced: datareader.py and run_linemod.py. Full replacements follow.

run_linemod.py

# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

from Utils import *
import json,uuid,joblib,os,sys
import scipy.spatial as spatial
from multiprocessing import Pool
import multiprocessing
from functools import partial
from itertools import repeat
import itertools
from datareader import *
from estimater import *
code_dir = os.path.dirname(os.path.realpath(__file__))
sys.path.append(f'{code_dir}/mycpp/build')
import yaml
import re


def get_mask(reader, i_frame, ob_id, detect_type):
  if detect_type=='box':
    mask = reader.get_mask(i_frame, ob_id)
    H,W = mask.shape[:2]
    vs,us = np.where(mask>0)
    umin = us.min()
    umax = us.max()
    vmin = vs.min()
    vmax = vs.max()
    valid = np.zeros((H,W), dtype=bool)
    valid[vmin:vmax,umin:umax] = 1
  elif detect_type=='mask':
    mask = reader.get_mask(i_frame, ob_id)
    if mask is None:
      return None
    valid = mask>0
  elif detect_type=='detected':
    mask = cv2.imread(reader.color_files[i_frame].replace('rgb','mask_cosypose'), -1)
    valid = mask==ob_id
  else:
    raise RuntimeError
  return valid


def run_pose_estimation_worker(reader, i_frames, est:FoundationPose=None, debug=0, ob_id=None, device='cuda:0'):
  torch.cuda.set_device(device)
  est.to_device(device)
  est.glctx = dr.RasterizeCudaContext(device=device)
  result = NestDict()
  for i, i_frame in enumerate(i_frames):
    logging.info(f"{i}/{len(i_frames)}, i_frame:{i_frame}, ob_id:{ob_id}")
    print("\n### ", f"{i}/{len(i_frames)}, i_frame:{i_frame}, ob_id:{ob_id}")
    video_id = reader.get_video_id()
    color = reader.get_color(i_frame)
    depth = reader.get_depth(i_frame)
    id_str = reader.id_strs[i_frame]
    H,W = color.shape[:2]
    debug_dir = est.debug_dir
    ob_mask = get_mask(reader, i_frame, ob_id, detect_type=detect_type)
    if ob_mask is None:
      logging.info("ob_mask not found, skip")
      result[video_id][id_str][ob_id] = np.eye(4)
      return result
    est.gt_pose = reader.get_gt_pose(i_frame, ob_id)
    pose = est.register(K=reader.K, rgb=color, depth=depth, ob_mask=ob_mask, ob_id=ob_id)
    logging.info(f"pose:\n{pose}")
    if debug>=3:
      m = est.mesh_ori.copy()
      tmp = m.copy()
      tmp.apply_transform(pose)
      tmp.export(f'{debug_dir}/model_tf.obj')
    result[video_id][id_str][ob_id] = pose
  return result, pose


def run_pose_estimation():
  wp.force_load(device='cuda')
  reader_tmp = LinemodReader(opt.linemod_dir, split=None)
  print("## opt.linemod_dir:", opt.linemod_dir)
  debug = opt.debug
  use_reconstructed_mesh = opt.use_reconstructed_mesh
  debug_dir = opt.debug_dir
  res = NestDict()
  glctx = dr.RasterizeCudaContext()
  mesh_tmp = trimesh.primitives.Box(extents=np.ones((3)), transform=np.eye(4)).to_mesh()
  est = FoundationPose(model_pts=mesh_tmp.vertices.copy(), model_normals=mesh_tmp.vertex_normals.copy(), symmetry_tfs=None, mesh=mesh_tmp, scorer=None, refiner=None, glctx=glctx, debug_dir=debug_dir, debug=debug)

  # ob_id
  match = re.search(r'\d+$', opt.linemod_dir)
  if match:
    last_number = match.group()
    ob_id = int(last_number)
  else:
    print("No digits found at the end of the string")

  # for ob_id in reader_tmp.ob_ids:
  if ob_id:
    if use_reconstructed_mesh:
      print("## ob_id:", ob_id)
      print("## opt.linemod_dir:", opt.linemod_dir)
      print("## opt.ref_view_dir:", opt.ref_view_dir)
      mesh = reader_tmp.get_reconstructed_mesh(ref_view_dir=opt.ref_view_dir)
    else:
      mesh = reader_tmp.get_gt_mesh(ob_id)
    # symmetry_tfs = reader_tmp.symmetry_tfs[ob_id] # !!!!!!!!!!!!!!!!
    args = []
    reader = LinemodReader(opt.linemod_dir, split=None)
    video_id = reader.get_video_id()
    # est.reset_object(model_pts=mesh.vertices.copy(), model_normals=mesh.vertex_normals.copy(), symmetry_tfs=symmetry_tfs, mesh=mesh) # raw
    est.reset_object(model_pts=mesh.vertices.copy(), model_normals=mesh.vertex_normals.copy(), mesh=mesh) # !!!!!!!!!!!!!!!!
    print("### len(reader.color_files):", len(reader.color_files))
    for i in range(len(reader.color_files)):
      args.append((reader, [i], est, debug, ob_id, "cuda:0"))

    # vis Data
    to_origin, extents = trimesh.bounds.oriented_bounds(mesh)
    bbox = np.stack([-extents/2, extents/2], axis=0).reshape(2,3)
    os.makedirs(f'{opt.linemod_dir}/track_vis', exist_ok=True)
    outs = []
    i = 0
    for arg in args[:200]:
      print("### num:", i)
      out, pose = run_pose_estimation_worker(*arg)
      outs.append(out)
      center_pose = pose@np.linalg.inv(to_origin)
      img_color = reader.get_color(i)
      vis = draw_posed_3d_box(reader.K, img=img_color, ob_in_cam=center_pose, bbox=bbox)
      vis = draw_xyz_axis(img_color, ob_in_cam=center_pose, scale=0.1, K=reader.K, thickness=3, transparency=0, is_input_rgb=True)
      imageio.imwrite(f'{opt.linemod_dir}/track_vis/{reader.id_strs[i]}.png', vis)
      i = i + 1

  for out in outs:
    for video_id in out:
      for id_str in out[video_id]:
        for ob_id in out[video_id][id_str]:
          res[video_id][id_str][ob_id] = out[video_id][id_str][ob_id]

  with open(f'{opt.debug_dir}/linemod_res.yml','w') as ff:
    yaml.safe_dump(make_yaml_dumpable(res), ff)
  print("Save linemod_res.yml OK !!!")


if __name__=='__main__':
  parser = argparse.ArgumentParser()
  code_dir = os.path.dirname(os.path.realpath(__file__))
  parser.add_argument('--linemod_dir', type=str, default="/guopu/FoundationPose-main/model_free_ref_views/lm_test_all/000015", help="linemod root dir") # lm_test_all lm_test
  parser.add_argument('--use_reconstructed_mesh', type=int, default=1)
  parser.add_argument('--ref_view_dir', type=str, default="/guopu/FoundationPose-main/model_free_ref_views/lm_ref_views/ob_0000015")
  parser.add_argument('--debug', type=int, default=0)
  parser.add_argument('--debug_dir', type=str, default=f'/guopu/FoundationPose-main/model_free_ref_views/lm_test_all/debug') # lm_test_all lm_test
  opt = parser.parse_args()
  set_seed(0)
  detect_type = 'mask' # mask / box / detected
  run_pose_estimation()
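
Relative to the official script, the main changes (marked with !!!!!!!!!!!!!!!! above) are: ob_id is parsed from the trailing digits of --linemod_dir, the symmetry_tfs handling is skipped and est.reset_object() is called without it, and each frame's pose is drawn and written to track_vis.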

datareader.py

# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

from Utils import *
import json,os,sys

BOP_LIST = ['lmo','tless','ycbv','hb','tudl','icbin','itodd']
BOP_DIR = os.getenv('BOP_DIR')


def get_bop_reader(video_dir, zfar=np.inf):
  if 'ycbv' in video_dir or 'YCB' in video_dir:
    return YcbVideoReader(video_dir, zfar=zfar)
  if 'lmo' in video_dir or 'LINEMOD-O' in video_dir:
    return LinemodOcclusionReader(video_dir, zfar=zfar)
  if 'tless' in video_dir or 'TLESS' in video_dir:
    return TlessReader(video_dir, zfar=zfar)
  if 'hb' in video_dir:
    return HomebrewedReader(video_dir, zfar=zfar)
  if 'tudl' in video_dir:
    return TudlReader(video_dir, zfar=zfar)
  if 'icbin' in video_dir:
    return IcbinReader(video_dir, zfar=zfar)
  if 'itodd' in video_dir:
    return ItoddReader(video_dir, zfar=zfar)
  else:
    raise RuntimeError


def get_bop_video_dirs(dataset):
  if dataset=='ycbv':
    video_dirs = sorted(glob.glob(f'{BOP_DIR}/ycbv/test/*'))
  elif dataset=='lmo':
    video_dirs = sorted(glob.glob(f'{BOP_DIR}/lmo/lmo_test_bop19/test/*'))
  elif dataset=='tless':
    video_dirs = sorted(glob.glob(f'{BOP_DIR}/tless/tless_test_primesense_bop19/test_primesense/*'))
  elif dataset=='hb':
    video_dirs = sorted(glob.glob(f'{BOP_DIR}/hb/hb_test_primesense_bop19/test_primesense/*'))
  elif dataset=='tudl':
    video_dirs = sorted(glob.glob(f'{BOP_DIR}/tudl/tudl_test_bop19/test/*'))
  elif dataset=='icbin':
    video_dirs = sorted(glob.glob(f'{BOP_DIR}/icbin/icbin_test_bop19/test/*'))
  elif dataset=='itodd':
    video_dirs = sorted(glob.glob(f'{BOP_DIR}/itodd/itodd_test_bop19/test/*'))
  else:
    raise RuntimeError
  return video_dirs


class YcbineoatReader:
  def __init__(self,video_dir, downscale=1, shorter_side=None, zfar=np.inf):
    self.video_dir = video_dir
    self.downscale = downscale
    self.zfar = zfar
    self.color_files = sorted(glob.glob(f"{self.video_dir}/rgb/*.png"))
    self.K = np.loadtxt(f'{video_dir}/cam_K.txt').reshape(3,3)
    self.id_strs = []
    for color_file in self.color_files:
      id_str = os.path.basename(color_file).replace('.png','')
      self.id_strs.append(id_str)
    self.H,self.W = cv2.imread(self.color_files[0]).shape[:2]

    if shorter_side is not None:
      self.downscale = shorter_side/min(self.H, self.W)

    self.H = int(self.H*self.downscale)
    self.W = int(self.W*self.downscale)
    self.K[:2] *= self.downscale

    self.gt_pose_files = sorted(glob.glob(f'{self.video_dir}/annotated_poses/*'))

    self.videoname_to_object = {
      'bleach0': "021_bleach_cleanser",
      'bleach_hard_00_03_chaitanya': "021_bleach_cleanser",
      'cracker_box_reorient': '003_cracker_box',
      'cracker_box_yalehand0': '003_cracker_box',
      'mustard0': '006_mustard_bottle',
      'mustard_easy_00_02': '006_mustard_bottle',
      'sugar_box1': '004_sugar_box',
      'sugar_box_yalehand0': '004_sugar_box',
      'tomato_soup_can_yalehand0': '005_tomato_soup_can',
    }

  def get_video_name(self):
    return self.video_dir.split('/')[-1]

  def __len__(self):
    return len(self.color_files)

  def get_gt_pose(self,i):
    try:
      pose = np.loadtxt(self.gt_pose_files[i]).reshape(4,4)
      return pose
    except:
      logging.info("GT pose not found, return None")
      return None

  def get_color(self,i):
    color = imageio.imread(self.color_files[i])[...,:3]
    color = cv2.resize(color, (self.W,self.H), interpolation=cv2.INTER_NEAREST)
    return color

  def get_mask(self,i):
    mask = cv2.imread(self.color_files[i].replace('rgb','masks'),-1)
    if len(mask.shape)==3:
      for c in range(3):
        if mask[...,c].sum()>0:
          mask = mask[...,c]
          break
    mask = cv2.resize(mask, (self.W,self.H), interpolation=cv2.INTER_NEAREST).astype(bool).astype(np.uint8)
    return mask

  def get_depth(self,i):
    depth = cv2.imread(self.color_files[i].replace('rgb','depth'),-1)/1e3
    depth = cv2.resize(depth, (self.W,self.H), interpolation=cv2.INTER_NEAREST)
    depth[(depth<0.1) | (depth>=self.zfar)] = 0
    return depth

  def get_xyz_map(self,i):
    depth = self.get_depth(i)
    xyz_map = depth2xyzmap(depth, self.K)
    return xyz_map

  def get_occ_mask(self,i):
    hand_mask_file = self.color_files[i].replace('rgb','masks_hand')
    occ_mask = np.zeros((self.H,self.W), dtype=bool)
    if os.path.exists(hand_mask_file):
      occ_mask = occ_mask | (cv2.imread(hand_mask_file,-1)>0)

    right_hand_mask_file = self.color_files[i].replace('rgb','masks_hand_right')
    if os.path.exists(right_hand_mask_file):
      occ_mask = occ_mask | (cv2.imread(right_hand_mask_file,-1)>0)

    occ_mask = cv2.resize(occ_mask, (self.W,self.H), interpolation=cv2.INTER_NEAREST)
    return occ_mask.astype(np.uint8)

  def get_gt_mesh(self):
    ob_name = self.videoname_to_object[self.get_video_name()]
    YCB_VIDEO_DIR = os.getenv('YCB_VIDEO_DIR')
    mesh = trimesh.load(f'{YCB_VIDEO_DIR}/models/{ob_name}/textured_simple.obj')
    return mesh


class BopBaseReader:
  def __init__(self, base_dir, zfar=np.inf, resize=1):
    self.base_dir = base_dir
    self.resize = resize
    self.dataset_name = None
    self.color_files = sorted(glob.glob(f"{self.base_dir}/rgb/*"))
    if len(self.color_files)==0:
      self.color_files = sorted(glob.glob(f"{self.base_dir}/gray/*"))
    self.zfar = zfar

    self.K_table = {}
    with open(f'{self.base_dir}/scene_camera.json','r') as ff:
      info = json.load(ff)
      for k in info:
        self.K_table[f'{int(k):06d}'] = np.array(info[k]['cam_K']).reshape(3,3)
        self.bop_depth_scale = info[k]['depth_scale']

    if os.path.exists(f'{self.base_dir}/scene_gt.json'):
      with open(f'{self.base_dir}/scene_gt.json','r') as ff:
        self.scene_gt = json.load(ff)
      self.scene_gt = copy.deepcopy(self.scene_gt)  # Release file handle to be pickle-able by joblib
      assert len(self.scene_gt)==len(self.color_files)
    else:
      self.scene_gt = None

    self.make_id_strs()

  def make_scene_ob_ids_dict(self):
    with open(f'{BOP_DIR}/{self.dataset_name}/test_targets_bop19.json','r') as ff:
      self.scene_ob_ids_dict = {}
      data = json.load(ff)
      for d in data:
        if d['scene_id']==self.get_video_id():
          id_str = f"{d['im_id']:06d}"
          if id_str not in self.scene_ob_ids_dict:
            self.scene_ob_ids_dict[id_str] = []
          self.scene_ob_ids_dict[id_str] += [d['obj_id']]*d['inst_count']

  def get_K(self, i_frame):
    K = self.K_table[self.id_strs[i_frame]]
    if self.resize!=1:
      K[:2,:2] *= self.resize
    return K

  def get_video_dir(self):
    video_id = int(self.base_dir.rstrip('/').split('/')[-1])
    return video_id

  def make_id_strs(self):
    self.id_strs = []
    for i in range(len(self.color_files)):
      name = os.path.basename(self.color_files[i]).split('.')[0]
      self.id_strs.append(name)

  def get_instance_ids_in_image(self, i_frame:int):
    ob_ids = []
    if self.scene_gt is not None:
      name = int(os.path.basename(self.color_files[i_frame]).split('.')[0])
      for k in self.scene_gt[str(name)]:
        ob_ids.append(k['obj_id'])
    elif self.scene_ob_ids_dict is not None:
      return np.array(self.scene_ob_ids_dict[self.id_strs[i_frame]])
    else:
      mask_dir = os.path.dirname(self.color_files[0]).replace('rgb','mask_visib')
      id_str = self.id_strs[i_frame]
      mask_files = sorted(glob.glob(f'{mask_dir}/{id_str}_*.png'))
      ob_ids = []
      for mask_file in mask_files:
        ob_id = int(os.path.basename(mask_file).split('.')[0].split('_')[1])
        ob_ids.append(ob_id)
    ob_ids = np.asarray(ob_ids)
    return ob_ids

  def get_gt_mesh_file(self, ob_id):
    raise RuntimeError("You should override this")

  def get_color(self,i):
    color = imageio.imread(self.color_files[i])
    if len(color.shape)==2:
      color = np.tile(color[...,None], (1,1,3))  # Gray to RGB
    if self.resize!=1:
      color = cv2.resize(color, fx=self.resize, fy=self.resize, dsize=None)
    return color

  def get_depth(self,i, filled=False):
    if filled:
      depth_file = self.color_files[i].replace('rgb','depth_filled')
      depth_file = f'{os.path.dirname(depth_file)}/0{os.path.basename(depth_file)}'
      depth = cv2.imread(depth_file,-1)/1e3
    else:
      depth_file = self.color_files[i].replace('rgb','depth').replace('gray','depth')
      depth = cv2.imread(depth_file,-1)*1e-3*self.bop_depth_scale
    if self.resize!=1:
      depth = cv2.resize(depth, fx=self.resize, fy=self.resize, dsize=None, interpolation=cv2.INTER_NEAREST)
    depth[depth<0.1] = 0
    depth[depth>self.zfar] = 0
    return depth

  def get_xyz_map(self,i):
    depth = self.get_depth(i)
    xyz_map = depth2xyzmap(depth, self.get_K(i))
    return xyz_map

  def get_mask(self, i_frame:int, ob_id:int, type='mask_visib'):
    '''
    @type: mask_visib (only visible part) / mask (projected mask from whole model)
    '''
    pos = 0
    name = int(os.path.basename(self.color_files[i_frame]).split('.')[0])
    if self.scene_gt is not None:
      for k in self.scene_gt[str(name)]:
        if k['obj_id']==ob_id:
          break
        pos += 1
      mask_file = f'{self.base_dir}/{type}/{name:06d}_{pos:06d}.png'
      if not os.path.exists(mask_file):
        logging.info(f'{mask_file} not found')
        return None
    else:
      # mask_dir = os.path.dirname(self.color_files[0]).replace('rgb',type)
      # mask_file = f'{mask_dir}/{self.id_strs[i_frame]}_{ob_id:06d}.png'
      raise RuntimeError
    mask = cv2.imread(mask_file, -1)
    if self.resize!=1:
      mask = cv2.resize(mask, fx=self.resize, fy=self.resize, dsize=None, interpolation=cv2.INTER_NEAREST)
    return mask>0

  def get_gt_mesh(self, ob_id:int):
    mesh_file = self.get_gt_mesh_file(ob_id)
    mesh = trimesh.load(mesh_file)
    mesh.vertices *= 1e-3
    return mesh

  def get_model_diameter(self, ob_id):
    dir = os.path.dirname(self.get_gt_mesh_file(self.ob_ids[0]))
    info_file = f'{dir}/models_info.json'
    with open(info_file,'r') as ff:
      info = json.load(ff)
    return info[str(ob_id)]['diameter']/1e3

  def get_gt_poses(self, i_frame, ob_id):
    gt_poses = []
    name = int(self.id_strs[i_frame])
    for i_k, k in enumerate(self.scene_gt[str(name)]):
      if k['obj_id']==ob_id:
        cur = np.eye(4)
        cur[:3,:3] = np.array(k['cam_R_m2c']).reshape(3,3)
        cur[:3,3] = np.array(k['cam_t_m2c'])/1e3
        gt_poses.append(cur)
    return np.asarray(gt_poses).reshape(-1,4,4)

  def get_gt_pose(self, i_frame:int, ob_id, mask=None, use_my_correction=False):
    ob_in_cam = np.eye(4)
    best_iou = -np.inf
    best_gt_mask = None
    name = int(self.id_strs[i_frame])
    for i_k, k in enumerate(self.scene_gt[str(name)]):
      if k['obj_id']==ob_id:
        cur = np.eye(4)
        cur[:3,:3] = np.array(k['cam_R_m2c']).reshape(3,3)
        cur[:3,3] = np.array(k['cam_t_m2c'])/1e3
        if mask is not None:  # When multi-instance exists, use mask to determine which one
          gt_mask = cv2.imread(f'{self.base_dir}/mask_visib/{self.id_strs[i_frame]}_{i_k:06d}.png', -1).astype(bool)
          intersect = (gt_mask*mask).astype(bool)
          union = (gt_mask+mask).astype(bool)
          iou = float(intersect.sum())/union.sum()
          if iou>best_iou:
            best_iou = iou
            best_gt_mask = gt_mask
            ob_in_cam = cur
        else:
          ob_in_cam = cur
          break

    if use_my_correction:
      if 'ycb' in self.base_dir.lower() and 'train_real' in self.color_files[i_frame]:
        video_id = self.get_video_id()
        if ob_id==1:
          if video_id in [12,13,14,17,24]:
            ob_in_cam = ob_in_cam@self.symmetry_tfs[ob_id][1]
    return ob_in_cam

  def load_symmetry_tfs(self):
    dir = os.path.dirname(self.get_gt_mesh_file(self.ob_ids[0]))
    info_file = f'{dir}/models_info.json'
    with open(info_file,'r') as ff:
      info = json.load(ff)
    self.symmetry_tfs = {}
    self.symmetry_info_table = {}
    for ob_id in self.ob_ids:
      self.symmetry_info_table[ob_id] = info[str(ob_id)]
      self.symmetry_tfs[ob_id] = symmetry_tfs_from_info(info[str(ob_id)], rot_angle_discrete=5)
    self.geometry_symmetry_info_table = copy.deepcopy(self.symmetry_info_table)

  def get_video_id(self):
    return int(self.base_dir.split('/')[-1])


class LinemodOcclusionReader(BopBaseReader):
  def __init__(self,base_dir='/mnt/9a72c439-d0a7-45e8-8d20-d7a235d02763/DATASET/LINEMOD-O/lmo_test_all/test/000002', zfar=np.inf):
    super().__init__(base_dir, zfar=zfar)
    self.dataset_name = 'lmo'
    self.K = list(self.K_table.values())[0]
    self.ob_ids = [1,5,6,8,9,10,11,12]
    self.ob_id_to_names = {
      1: 'ape',
      2: 'benchvise',
      3: 'bowl',
      4: 'camera',
      5: 'water_pour',
      6: 'cat',
      7: 'cup',
      8: 'driller',
      9: 'duck',
      10: 'eggbox',
      11: 'glue',
      12: 'holepuncher',
      13: 'iron',
      14: 'lamp',
      15: 'phone',
    }
    # self.load_symmetry_tfs()

  def get_gt_mesh_file(self, ob_id):
    mesh_dir = f'{BOP_DIR}/{self.dataset_name}/models/obj_{ob_id:06d}.ply'
    return mesh_dir


class LinemodReader(LinemodOcclusionReader):
  def __init__(self, base_dir='/mnt/9a72c439-d0a7-45e8-8d20-d7a235d02763/DATASET/LINEMOD/lm_test_all/test/000001', zfar=np.inf, split=None):
    super().__init__(base_dir, zfar=zfar)
    self.dataset_name = 'lm'
    if split is not None:  # train/test
      print("## split is not None")
      with open(f'/mnt/9a72c439-d0a7-45e8-8d20-d7a235d02763/DATASET/LINEMOD/Linemod_preprocessed/data/{self.get_video_id():02d}/{split}.txt','r') as ff:
        lines = ff.read().splitlines()
      self.color_files = []
      for line in lines:
        id = int(line)
        self.color_files.append(f'{self.base_dir}/rgb/{id:06d}.png')
      self.make_id_strs()

    self.ob_ids = np.setdiff1d(np.arange(1,16), np.array([7,3])).tolist()  # Exclude bowl and mug
    # self.load_symmetry_tfs()

  def get_gt_mesh_file(self, ob_id):
    root = self.base_dir
    print(f'{root}/../')
    print(f'{root}/lm_models')
    print(f'{root}/lm_models/models/obj_{ob_id:06d}.ply')
    while 1:
      if os.path.exists(f'{root}/lm_models'):
        mesh_dir = f'{root}/lm_models/models/obj_{ob_id:06d}.ply'
        break
      else:
        root = os.path.abspath(f'{root}/../')
        mesh_dir = f'{root}/lm_models/models/obj_{ob_id:06d}.ply'
        break
    return mesh_dir

  def get_reconstructed_mesh(self, ref_view_dir):
    mesh = trimesh.load(os.path.abspath(f'{ref_view_dir}/model/model.obj'))
    return mesh


class YcbVideoReader(BopBaseReader):
  def __init__(self, base_dir, zfar=np.inf):
    super().__init__(base_dir, zfar=zfar)
    self.dataset_name = 'ycbv'
    self.K = list(self.K_table.values())[0]

    self.make_id_strs()
    self.ob_ids = np.arange(1,22).astype(int).tolist()
    YCB_VIDEO_DIR = os.getenv('YCB_VIDEO_DIR')
    self.ob_id_to_names = {}
    self.name_to_ob_id = {}
    # names = sorted(os.listdir(f'{YCB_VIDEO_DIR}/models/'))
    if os.path.exists(f'{YCB_VIDEO_DIR}/models/'):
      names = sorted(os.listdir(f'{YCB_VIDEO_DIR}/models/'))
      for i,ob_id in enumerate(self.ob_ids):
        self.ob_id_to_names[ob_id] = names[i]
        self.name_to_ob_id[names[i]] = ob_id
    else:
      names = []

    if 0:
    # if 'BOP' not in self.base_dir:
      with open(f'{self.base_dir}/../../keyframe.txt','r') as ff:
        self.keyframe_lines = ff.read().splitlines()

    # self.load_symmetry_tfs()
    '''for ob_id in self.ob_ids:
      if ob_id in [1,4,6,18]:  # Cylinder
        self.geometry_symmetry_info_table[ob_id] = {
          'symmetries_continuous': [
            {'axis':[0,0,1], 'offset':[0,0,0]},
          ],
          'symmetries_discrete': euler_matrix(0, np.pi, 0).reshape(1,4,4).tolist(),
        }
      elif ob_id in [13]:
        self.geometry_symmetry_info_table[ob_id] = {
          'symmetries_continuous': [
            {'axis':[0,0,1], 'offset':[0,0,0]},
          ],
        }
      elif ob_id in [2,3,9,21]:  # Rectangle box
        tfs = []
        for rz in [0, np.pi]:
          for rx in [0,np.pi]:
            for ry in [0,np.pi]:
              tfs.append(euler_matrix(rx, ry, rz))
        self.geometry_symmetry_info_table[ob_id] = {
          'symmetries_discrete': np.asarray(tfs).reshape(-1,4,4).tolist(),
        }
      else:
        pass'''

  def get_gt_mesh_file(self, ob_id):
    if 'BOP' in self.base_dir:
      mesh_file = os.path.abspath(f'{self.base_dir}/../../ycbv_models/models/obj_{ob_id:06d}.ply')
    else:
      mesh_file = f'{self.base_dir}/../../ycbv_models/models/obj_{ob_id:06d}.ply'
    return mesh_file

  def get_gt_mesh(self, ob_id:int, get_posecnn_version=False):
    if get_posecnn_version:
      YCB_VIDEO_DIR = os.getenv('YCB_VIDEO_DIR')
      mesh = trimesh.load(f'{YCB_VIDEO_DIR}/models/{self.ob_id_to_names[ob_id]}/textured_simple.obj')
      return mesh
    mesh_file = self.get_gt_mesh_file(ob_id)
    mesh = trimesh.load(mesh_file, process=False)
    mesh.vertices *= 1e-3
    tex_file = mesh_file.replace('.ply','.png')
    if os.path.exists(tex_file):
      from PIL import Image
      im = Image.open(tex_file)
      uv = mesh.visual.uv
      material = trimesh.visual.texture.SimpleMaterial(image=im)
      color_visuals = trimesh.visual.TextureVisuals(uv=uv, image=im, material=material)
      mesh.visual = color_visuals
    return mesh

  def get_reconstructed_mesh(self, ob_id, ref_view_dir):
    mesh = trimesh.load(os.path.abspath(f'{ref_view_dir}/ob_{ob_id:07d}/model/model.obj'))
    return mesh

  def get_transform_reconstructed_to_gt_model(self, ob_id):
    out = np.eye(4)
    return out

  def get_visible_cloud(self, ob_id):
    file = os.path.abspath(f'{self.base_dir}/../../models/{self.ob_id_to_names[ob_id]}/visible_cloud.ply')
    pcd = o3d.io.read_point_cloud(file)
    return pcd

  def is_keyframe(self, i):
    color_file = self.color_files[i]
    video_id = self.get_video_id()
    frame_id = int(os.path.basename(color_file).split('.')[0])
    key = f'{video_id:04d}/{frame_id:06d}'
    return (key in self.keyframe_lines)


class TlessReader(BopBaseReader):
  def __init__(self, base_dir, zfar=np.inf):
    super().__init__(base_dir, zfar=zfar)
    self.dataset_name = 'tless'
    self.ob_ids = np.arange(1,31).astype(int).tolist()
    self.load_symmetry_tfs()

  def get_gt_mesh_file(self, ob_id):
    mesh_file = f'{self.base_dir}/../../../models_cad/obj_{ob_id:06d}.ply'
    return mesh_file

  def get_gt_mesh(self, ob_id):
    mesh = trimesh.load(self.get_gt_mesh_file(ob_id))
    mesh.vertices *= 1e-3
    mesh = trimesh_add_pure_colored_texture(mesh, color=np.ones((3))*200)
    return mesh


class HomebrewedReader(BopBaseReader):
  def __init__(self, base_dir, zfar=np.inf):
    super().__init__(base_dir, zfar=zfar)
    self.dataset_name = 'hb'
    self.ob_ids = np.arange(1,34).astype(int).tolist()
    self.load_symmetry_tfs()
    self.make_scene_ob_ids_dict()

  def get_gt_mesh_file(self, ob_id):
    mesh_file = f'{self.base_dir}/../../../hb_models/models/obj_{ob_id:06d}.ply'
    return mesh_file

  def get_gt_pose(self, i_frame:int, ob_id, use_my_correction=False):
    logging.info("WARN HomeBrewed doesn't have GT pose")
    return np.eye(4)


class ItoddReader(BopBaseReader):
  def __init__(self, base_dir, zfar=np.inf):
    super().__init__(base_dir, zfar=zfar)
    self.dataset_name = 'itodd'
    self.make_id_strs()
    self.ob_ids = np.arange(1,29).astype(int).tolist()
    self.load_symmetry_tfs()
    self.make_scene_ob_ids_dict()

  def get_gt_mesh_file(self, ob_id):
    mesh_file = f'{self.base_dir}/../../../itodd_models/models/obj_{ob_id:06d}.ply'
    return mesh_file


class IcbinReader(BopBaseReader):
  def __init__(self, base_dir, zfar=np.inf):
    super().__init__(base_dir, zfar=zfar)
    self.dataset_name = 'icbin'
    self.ob_ids = np.arange(1,3).astype(int).tolist()
    self.load_symmetry_tfs()

  def get_gt_mesh_file(self, ob_id):
    mesh_file = f'{self.base_dir}/../../../icbin_models/models/obj_{ob_id:06d}.ply'
    return mesh_file


class TudlReader(BopBaseReader):
  def __init__(self, base_dir, zfar=np.inf):
    super().__init__(base_dir, zfar=zfar)
    self.dataset_name = 'tudl'
    self.ob_ids = np.arange(1,4).astype(int).tolist()
    self.load_symmetry_tfs()

  def get_gt_mesh_file(self, ob_id):
    mesh_file = f'{self.base_dir}/../../../tudl_models/models/obj_{ob_id:06d}.ply'
    return mesh_file
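
Relative to the official datareader.py, the load_symmetry_tfs() calls in LinemodOcclusionReader and LinemodReader are commented out (the Linemod demo here runs without symmetry annotations), and LinemodReader.get_gt_mesh_file() searches upward from base_dir for an lm_models directory, while get_reconstructed_mesh() loads the model.obj produced by the NeRF step.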

Run run_linemod.py:

python run_linemod.py

You should then see the folder model_free_ref_views/lm_test_all/000015/track_vis/,

which stores the visualization results:
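
Besides the images, the per-frame poses are written to linemod_res.yml in the debug directory (see run_pose_estimation above). A minimal sketch for loading the results back:

import numpy as np
import yaml

# linemod_res.yml maps video_id -> frame id -> object id -> 4x4 pose
with open('model_free_ref_views/lm_test_all/debug/linemod_res.yml', 'r') as ff:
  res = yaml.safe_load(ff)

for video_id in res:
  for id_str in res[video_id]:
    for ob_id in res[video_id][id_str]:
      pose = np.asarray(res[video_id][id_str][ob_id])
      print(video_id, id_str, ob_id, pose.shape)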

That wraps up this walkthrough.

That's all for this post; future posts will cover other datasets, algorithms, code, and concrete applications of 6D pose estimation.

Note: this article is reposted from the blog.csdn.net article by 一颗小树x: https://blog.csdn.net/qq_41204464/article/details/138619210. Copyright remains with the original author.