
Introduction

We are proud to announce CDEHP (Color, Depth, and Event-based Human Pose), a new large-scale multimodal event-based human pose/action dataset with Color (RGB), Depth, and Event modalities. CDEHP uses RGB, Depth, and Event cameras to capture actions from a variety of subjects simultaneously, providing a new multimodal benchmark for research on event-based human pose estimation and action recognition. You can also use this dataset for research in the RGB-D or event-camera fields. If you are interested in building a multimodal event-based dataset, please refer to How to Make.
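Unlike the RGB and Depth modalities, an event camera outputs an asynchronous stream of events, each typically a tuple of pixel coordinates, timestamp, and polarity. A common preprocessing step is to accumulate events into a frame-like image. The sketch below is illustrative only: the function name, tuple layout, and resolution are assumptions, not the CDEHP file format.

```python
# Minimal sketch: accumulate an event stream into a 2-D image.
# The (x, y, t, polarity) layout and the default resolution are
# assumptions for illustration, not the CDEHP data format.

def events_to_frame(events, width=346, height=260):
    """Accumulate signed event polarities into a 2-D count image."""
    frame = [[0] * width for _ in range(height)]
    for x, y, t, polarity in events:
        # polarity 1 = brightness increase, 0 = brightness decrease
        frame[y][x] += 1 if polarity else -1
    return frame

# Toy stream: two positive events at (3, 2), one negative at (0, 0)
stream = [(3, 2, 0.001, 1), (3, 2, 0.002, 1), (0, 0, 0.003, 0)]
img = events_to_frame(stream, width=8, height=4)
print(img[2][3], img[0][0])  # 2 -1
```

Real pipelines usually accumulate over fixed time windows (or fixed event counts) so that each accumulated frame can be aligned with the corresponding RGB and Depth frames.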

Dataset Statistics

| Scenes  | Modality          | # Sub | # Act | # Frames (Train) | # Frames (Test) | # Total Frames |
|---------|-------------------|-------|-------|------------------|-----------------|----------------|
| Outdoor | RGB, Depth, Event | 10    | 13    | 82.785K          | 18.486K         | 101.271K       |
| Indoor  | RGB, Depth, Event | 20    | 25    | 30.232K          | 14.643K         | 44.875K        |
| Both    | RGB, Depth, Event | 30    | 25*   | 113.017K         | 33.129K         | 146.146K       |

*The indoor set contains all 25 action categories in the dataset; 13 action classes overlap between the indoor and outdoor sets.
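The per-scene frame counts in the table sum to the totals in the "Both" row, which can be verified with a quick check (values transcribed from the table; "K" denotes thousands):

```python
# Sanity check of the frame counts in the statistics table above.
train = {"outdoor": 82_785, "indoor": 30_232}
test = {"outdoor": 18_486, "indoor": 14_643}

total_train = sum(train.values())  # 113,017 -> 113.017K
total_test = sum(test.values())    # 33,129  -> 33.129K
grand_total = total_train + total_test

print(total_train, total_test, grand_total)  # 113017 33129 146146
```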

Action Class

Outdoor Actions (13)

| ID  | Action Name  | ID  | Action Name      | ID  | Action Name         | ID  | Action Name             |
|-----|--------------|-----|------------------|-----|---------------------|-----|-------------------------|
| A1  | walking      | A2  | squat jump       | A3  | boxing              | A4  | picking up              |
| A5  | jumping jack | A6  | crotch high five | A7  | sweeping            | A8  | alternate jumping lunge |
| A9  | big jump     | A10 | sit-up jump      | A11 | shuttlecock kicking | A12 | throwing                |
| A13 | spinning     |     |                  |     |                     |     |                         |

Indoor Actions (25)

| ID  | Action Name               | ID  | Action Name      | ID  | Action Name         | ID  | Action Name             |
|-----|---------------------------|-----|------------------|-----|---------------------|-----|-------------------------|
| A1  | walking                   | A2  | running          | A3  | squat jump          | A4  | frog jump               |
| A5  | jump fwd/bwd/left/right   | A6  | boxing           | A7  | picking up          | A8  | cartwheel               |
| A9  | jumping jack              | A10 | crotch high five | A11 | crawling            | A12 | rope skipping           |
| A13 | sweeping                  | A14 | mopping          | A15 | cycling             | A16 | alternate jumping lunge |
| A17 | big jump                  | A18 | sit-up jump      | A19 | kicking             | A20 | jump shot               |
| A21 | shuttlecock kicking       | A22 | spinning         | A23 | throwing            | A24 | long jump               |
| A25 | burpee                    |     |                  |     |                     |     |                         |

Visualization of Action Samples

[Figure] Indoor samples of all action classes
[Figure] Outdoor samples of all action classes

Citation

To cite our dataset, please use:

@article{shao2024temporal,
   title      = {A temporal densely connected recurrent network for event-based human pose estimation},
   author     = {Zhanpeng Shao and Xueping Wang and Wen Zhou and Wuzhen Wang and Jianyu Yang and Youfu Li},
   journal    = {Pattern Recognition},
   volume     = {147},
   pages      = {110048},
   year       = {2024},
   issn       = {0031-3203},
   doi        = {10.1016/j.patcog.2023.110048},
   url        = {https://www.sciencedirect.com/science/article/pii/S0031320323007458},
   keywords   = {Event signal, Human pose estimation, Dense connections, Recurrent network, Dataset},
}

Use Dataset

Projects

Team

Zhanpeng Shao

Associate Professor at the College of Information Science and Engineering, Hunan Normal University. Particularly interested in activity understanding in videos.

Wen Zhou

Obtained a Master's degree from Zhejiang University of Technology.

Wuzhen Wang

Obtained a Master's degree from Zhejiang University of Technology; currently works at ByteDance.

Contact