fastreid re-identification (ReID) algorithm notes
2023-12-25 19:10:15
1. Installation
Follow the installation steps in INSTALL.md:
conda create -n fastreid python=3.7
conda activate fastreid
conda install pytorch==1.6.0 torchvision tensorboard -c pytorch
pip install -r docs/requirements.txt
pip install Cython
Speeding up the query
In addition, to speed up ranking, go into the fast-reid/fastreid/evaluation/rank_cylib/ directory and run make all to compile the Cython extension, which accelerates the query step.
Compile with cython to accelerate evaluation
cd fastreid/evaluation/rank_cylib; make all
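To check that the compiled extension is actually picked up at runtime, a quick import test can help. This is only a minimal sketch; the module name rank_cy is inferred from the .pyx file inside rank_cylib and may differ between versions:
try:
    # compiled Cython ranking kernel produced by `make all`
    from fastreid.evaluation.rank_cylib import rank_cy  # noqa: F401
    print("Cython ranking extension available")
except ImportError:
    print("Cython extension not built; falling back to the slower pure-Python ranking")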
2. Commands
train
python3 tools/train_net.py --config-file ./configs/Market1501/bagtricks_R50.yml MODEL.DEVICE "cuda:0"
Evaluation
python3 tools/train_net.py --config-file ./configs/Market1501/bagtricks_R50.yml --eval-only MODEL.WEIGHTS ./logs/market1501/bagtricks_R50/model_best.pth MODEL.DEVICE "cuda:0"
demo
This script loads the model (via predictor.py), extracts features from the query image(s), and saves them as .npy files under the demo_output folder, one .npy file per image. These feature-vector .npy files can then be used for downstream vector retrieval (see the sketch after the command below).
python demo/demo.py --config-file ./configs/Market1501/bagtricks_R50.yml --input 1233_c1s5_042441_01.jpg --output datasets --opts MODEL.WEIGHTS logs/market1501/bagtricks_R50/model_best.pth
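The saved .npy features can be compared directly, for example with cosine similarity. A minimal retrieval sketch follows; the file names and the saved feature shape are assumptions (demo.py writes one .npy per input image):
import numpy as np

# hypothetical output files written by demo.py
query_feat = np.load("demo_output/1233_c1s5_042441_01.npy").reshape(-1)
gallery_feats = np.stack([np.load(p).reshape(-1) for p in ["g1.npy", "g2.npy"]])

# L2-normalize so cosine similarity becomes a plain dot product
query_feat = query_feat / np.linalg.norm(query_feat)
gallery_feats = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)

scores = gallery_feats @ query_feat          # higher score = more similar
ranking = np.argsort(-scores)                # gallery indices sorted best-first
print(ranking, scores[ranking])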
visualize_result
python demo/visualize_result.py --vis-label --config-file ./configs/Market1501/bagtricks_R50.yml --dataset-name 'Market1501' --output outputs_2/ --opts MODEL.WEIGHTS ./logs/market1501/bagtricks_R50/model_best.pth
export
python tools/deploy/onnx_export.py --config-file ./configs/Market1501/bagtricks_R50.yml --name baseline_R50 --output outputs/onnx_model --opts MODEL.WEIGHTS ./logs/market1501/bagtricks_R50_r50/model_best.pth
The network structure needs to be modified in base-bagtricks.yml.
onnx inference
python tools/deploy/onnx_inference.py --model-path outputs/onnx_model/baseline_R50.onnx --input tools/deploy/test_data/*.jpg --output onnx_output
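For a quick check of the exported model outside the fastreid script, onnxruntime can be used directly. A minimal sketch; the 256x128 input size and the preprocessing here are assumptions, and the exact steps should be checked against tools/deploy/onnx_inference.py:
import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("outputs/onnx_model/baseline_R50.onnx")
input_name = sess.get_inputs()[0].name

img = cv2.imread("1233_c1s5_042441_01.jpg")              # BGR image
img = cv2.resize(img, (128, 256))                        # (width, height) = (128, 256), assumed
blob = img.astype(np.float32).transpose(2, 0, 1)[None]   # HWC -> NCHW, add batch dimension

feat = sess.run(None, {input_name: blob})[0]             # (1, feat_dim) embedding
print(feat.shape)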
3. Troubleshooting
from fastreid.evaluation import evaluate_rank
ImportError: cannot import name 'evaluate_rank' from 'fastreid.evaluation' (./fastreid/evaluation/__init__.py)
At line 18 of demo/visualize_result.py, change
from fastreid.evaluation import evaluate_rank → from fastreid.evaluation.rank import evaluate_rank
Reading the code
1. Loading the dataset
Taking Market1501 as an example, the dataset-loading code is in fastreid/data/datasets/market1501.py. Each entry in the returned data is a tuple of (image path, person ID, camera ID).
...
train = lambda: self.process_dir(self.train_dir)
query = lambda: self.process_dir(self.query_dir, is_train=False)
gallery = lambda: self.process_dir(self.gallery_dir, is_train=False) + \
    (self.process_dir(self.extra_gallery_dir, is_train=False) if self.market1501_500k else [])

super(Market1501, self).__init__(train, query, gallery, **kwargs)
def process_dir(self, dir_path, is_train=True):
    img_paths = glob.glob(osp.join(dir_path, '*.jpg'))
    pattern = re.compile(r'([-\d]+)_c(\d)')

    data = []
    for img_path in img_paths:
        # parse the person ID and the camera ID from the file name
        pid, camid = map(int, pattern.search(img_path).groups())
        if pid == -1:
            continue  # junk images are just ignored
        # assertions: if the conditions below hold, execution continues;
        # otherwise an AssertionError is raised immediately
        assert 0 <= pid <= 1501  # pid == 0 means background
        assert 1 <= camid <= 6
        camid -= 1  # index starts from 0
        if is_train:
            pid = self.dataset_name + "_" + str(pid)
            camid = self.dataset_name + "_" + str(camid)
        # print(pid, camid)
        # each entry in data is (image path, person ID, camera ID)
        data.append((img_path, pid, camid))
    # print(len(data), "ddd")
    # exit()
    return data
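For reference, running the same regular expression on a Market1501-style file name (the example image used in the demo command above) shows what pid and camid look like before the camid -= 1 shift:
import re

pattern = re.compile(r'([-\d]+)_c(\d)')
pid, camid = map(int, pattern.search("1233_c1s5_042441_01.jpg").groups())
print(pid, camid)   # -> 1233 1  (person ID 1233, captured by camera 1)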
2. Inference
The inference code is in fastreid/modeling/meta_arch/baseline.py.
2.1 Image preprocessing
As the code below shows, preprocessing subtracts the pixel mean and divides by the standard deviation. (The test code does not do this mean/std normalization itself; why? Likely because the normalization already happens here inside the model's forward pass.)
def preprocess_image(self, batched_inputs):
    """
    Normalize and batch the input images.
    """
    if isinstance(batched_inputs, dict):
        images = batched_inputs['images']
    elif isinstance(batched_inputs, torch.Tensor):
        images = batched_inputs
    else:
        raise TypeError("batched_inputs must be dict or torch.Tensor, but get {}".format(type(batched_inputs)))

    # print(self.pixel_mean, self.pixel_std)
    # print(images.shape)
    # subtract the mean (self.pixel_mean) and divide by the standard deviation (self.pixel_std), in place
    images.sub_(self.pixel_mean).div_(self.pixel_std)
    return images
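The same normalization can be reproduced outside the model. A minimal sketch, assuming the ImageNet-style defaults for MODEL.PIXEL_MEAN / MODEL.PIXEL_STD on the 0-255 scale (the values below are assumptions; check the yml actually used):
import torch

pixel_mean = torch.tensor([123.675, 116.28, 103.53]).view(1, -1, 1, 1)  # assumed config values
pixel_std = torch.tensor([58.395, 57.120, 57.375]).view(1, -1, 1, 1)    # assumed config values

images = torch.rand(4, 3, 256, 128) * 255          # dummy batch in the 0-255 range
images = (images - pixel_mean) / pixel_std         # what preprocess_image does in place
print(images.mean().item(), images.std().item())   # inspect the normalized batch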
2.2 Model forward pass
def forward(self, batched_inputs):
    # preprocess the input images (normalization above)
    images = self.preprocess_image(batched_inputs)
    # backbone feature extraction
    features = self.backbone(images)

    if self.training:
        assert "targets" in batched_inputs, "Person ID annotation are missing in training!"
        targets = batched_inputs["targets"]

        # PreciseBN flag, When do preciseBN on different dataset, the number of classes in new dataset
        # may be larger than that in the original dataset, so the circle/arcface will
        # throw an error. We just set all the targets to 0 to avoid this problem.
        if targets.sum() < 0: targets.zero_()

        outputs = self.heads(features, targets)
        losses = self.losses(outputs, targets)
        return losses
    else:
        outputs = self.heads(features)
        return outputs
self.heads maps the features extracted by the backbone into a new feature space (an embedding space) in which samples of the same identity sit closer together and samples of different identities sit farther apart, which gives better discrimination.
The code is in fastreid/modeling/heads/embedding_head.py.
def forward(self, features, targets=None):
    """
    See :class:`ReIDHeads.forward`.
    """
    pool_feat = self.pool_layer(features)  # features.shape: torch.Size([64, 512, 24, 8]) resnet34
    neck_feat = self.bottleneck(pool_feat)
    neck_feat = neck_feat[..., 0, 0]

    # Evaluation
    # fmt: off
    if not self.training: return neck_feat
    # fmt: on

    # Training
    if self.cls_layer.__class__.__name__ == 'Linear':
        logits = F.linear(neck_feat, self.weight)
    else:
        logits = F.linear(F.normalize(neck_feat), F.normalize(self.weight))

    # Pass logits.clone() into cls_layer, because there is in-place operations
    cls_outputs = self.cls_layer(logits.clone(), targets)

    # fmt: off
    if self.neck_feat == 'before':  feat = pool_feat[..., 0, 0]
    elif self.neck_feat == 'after': feat = neck_feat
    else: raise KeyError(f"{self.neck_feat} is invalid for MODEL.HEADS.NECK_FEAT")
    # fmt: on
    return {
        "cls_outputs": cls_outputs,  # torch.Size([64, 751]) resnet34, Market1501 has 751 training IDs
        # multiply every element of logits by the scale factor self.cls_layer.s;
        # scaling the logits this way adjusts how sharply the loss weights the class predictions
        "pred_class_logits": logits.mul(self.cls_layer.s),  # torch.Size([64, 751]) resnet34
        "features": feat,  # torch.Size([64, 512]) resnet34 (the embedding, not class logits)
    }
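At evaluation time, the returned neck_feat embeddings are what get compared between query and gallery. A minimal sketch of that comparison (not the actual fastreid evaluator in fastreid/evaluation, just the idea: L2-normalize and rank by cosine similarity; shapes are dummy values):
import torch
import torch.nn.functional as F

query_feat = torch.randn(8, 512)       # dummy query embeddings (512-d, as with resnet34)
gallery_feat = torch.randn(100, 512)   # dummy gallery embeddings

q = F.normalize(query_feat, dim=1)
g = F.normalize(gallery_feat, dim=1)
sim = q @ g.t()                                  # (8, 100) cosine-similarity matrix
indices = sim.argsort(dim=1, descending=True)    # gallery ranking for each query
print(indices[:, :5])                            # top-5 gallery indices per query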
Source: https://blog.csdn.net/qq_42178122/article/details/135203416