Focus on the visible regions: semantic-guided alignment model for occluded person re-identification
Sensors, 2020
Recommended citation: Q. Yang, P. Wang, Z. Fang and Q. Lu. "Focus on the visible regions: semantic-guided alignment model for occluded person re-identification." Sensors, 2020, 20(16): 4431. (https://www.mdpi.com/1424-8220/20/16/4431/htm)
- When a person is occluded by obstacles, the noise introduced by the occluded regions severely degrades pedestrian retrieval results.
- We propose a semantic-guided alignment model that uses image semantic information to separate useful information from occlusion noise.
- We fuse the semantic probability maps with the global features of the image and design an automatic cropping method that guides the model to focus on the commonly visible human regions and their local features.
- We propose a measurement strategy that computes distances only over the regions visible in both images, thereby suppressing spatial misalignment (a minimal sketch of this shared-region matching follows the list).
- Experimental results confirm that our method achieves top performance on holistic person re-identification benchmarks.
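
The sketch below illustrates the shared-region idea from the bullets above: part features are pooled from a feature map using semantic probability maps, and the query-gallery distance is averaged only over regions visible in both images. It is an illustration under assumptions, not the paper's exact formulation: the function names, array shapes, weighted average pooling, visibility threshold, and Euclidean part distance are all assumed here for concreteness.

```python
import numpy as np


def part_features(feat_map, prob_maps, vis_thresh=0.1):
    """Pool a convolutional feature map into per-part features, weighted by
    semantic probability maps (one map per body region).

    feat_map  : (D, H, W) feature map of the image.
    prob_maps : (P, H, W) probability map for each of P body regions.
    Returns (P, D) part features and a (P,) binary visibility flag per part.
    """
    # Normalize each region's probability map so it acts as pooling weights.
    weights = prob_maps / (prob_maps.sum(axis=(1, 2), keepdims=True) + 1e-6)
    feats = np.einsum('phw,dhw->pd', weights, feat_map)     # weighted average pooling
    # A region counts as visible if its average probability exceeds a threshold.
    visible = (prob_maps.mean(axis=(1, 2)) > vis_thresh).astype(float)
    return feats, visible


def shared_region_distance(feats_q, feats_g, vis_q, vis_g, eps=1e-6):
    """Distance computed only over body regions visible in BOTH images."""
    shared = vis_q * vis_g                                  # regions visible in both
    part_d = np.linalg.norm(feats_q - feats_g, axis=1)      # per-region L2 distance
    # Average over the shared visible regions; eps guards against the case
    # where the two images have no visible region in common.
    return float((part_d * shared).sum() / (shared.sum() + eps))


# Toy usage: 4 body regions, 8-channel feature maps of size 16x16,
# with region 3 fully occluded in the query image.
rng = np.random.default_rng(0)
fmap_q, fmap_g = rng.random((8, 16, 16)), rng.random((8, 16, 16))
probs_q, probs_g = rng.random((4, 16, 16)), rng.random((4, 16, 16))
probs_q[3] = 0.0
fq, vq = part_features(fmap_q, probs_q)
fg, vg = part_features(fmap_g, probs_g)
print(shared_region_distance(fq, fg, vq, vg))
```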