Installing the AlphaPose human pose estimation framework on Ubuntu 16 + CUDA 10.1

1. Install Anaconda. Installation tutorials are easy to find online, so the steps are not repeated here.

Official download page: https://www.anaconda.com/distribution/#download-section
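After installing, a quick check that `conda` is actually on the PATH saves trouble in the next steps. A minimal sketch (the hint message is just illustrative):

```shell
# Verify the Anaconda install before creating the virtual environment
if command -v conda >/dev/null 2>&1; then
    conda --version
else
    echo "conda not on PATH; re-open the shell or run 'source ~/.bashrc'"
fi
```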

 

2. Install the NVIDIA graphics driver and CUDA 10.1 + cuDNN. For details, see my earlier blog post:

https://mp.csdn.net/console/editor/html/105434809
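Before moving on, it helps to confirm that the driver and the CUDA toolkit are visible from the shell. A minimal check that prints a hint instead of failing when something is missing:

```shell
# Sanity-check the driver and CUDA toolkit before building AlphaPose
if command -v nvidia-smi >/dev/null 2>&1; then
    echo "driver OK"
else
    echo "driver check: nvidia-smi not found"
fi
if command -v nvcc >/dev/null 2>&1; then
    nvcc --version | tail -n 1
else
    echo "toolkit check: nvcc not found (is /usr/local/cuda/bin on PATH?)"
fi
```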

 

3. Install AlphaPose from source:

# 1.1 Create a conda virtual environment.
conda create -n alphapose python=3.6 -y
conda activate alphapose

# 1.2 Install PyTorch 
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch

# 1.3 Get AlphaPose
git clone https://github.com/MVIG-SJTU/AlphaPose.git
cd AlphaPose

# 1.4 install
export PATH=/usr/local/cuda/bin/:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:$LD_LIBRARY_PATH
python -m pip install cython
sudo apt-get install libyaml-dev
python setup.py build develop
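Note that the two `export` lines above only affect the current shell session. A small sketch that re-applies them and verifies they took effect (the OK/missing messages are just illustrative):

```shell
# Prepend the CUDA paths and confirm they landed on PATH / LD_LIBRARY_PATH
export PATH=/usr/local/cuda/bin/:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:$LD_LIBRARY_PATH
case ":$PATH:" in
    *":/usr/local/cuda/bin/:"*) echo "PATH OK" ;;
    *) echo "PATH missing /usr/local/cuda/bin/" ;;
esac
case ":$LD_LIBRARY_PATH:" in
    *":/usr/local/cuda/lib64/:"*) echo "LD_LIBRARY_PATH OK" ;;
    *) echo "LD_LIBRARY_PATH missing /usr/local/cuda/lib64/" ;;
esac
```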


Alternatively, install with pip:

# 1. Install PyTorch
pip install torch torchvision

# 2. Get AlphaPose
git clone https://github.com/MVIG-SJTU/AlphaPose.git
cd AlphaPose

# 3. install
export PATH=/usr/local/cuda/bin/:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:$LD_LIBRARY_PATH
pip install cython
sudo apt-get install libyaml-dev
python setup.py build develop --user

4. Download the models

4.1 Download the object detection model manually: yolov3-spp.weights (Google Drive | Baidu pan). Place it into detector/yolo/data.

4.2 For pose tracking, download the object tracking model manually: JDE-1088x608-uncertainty (Google Drive | Baidu pan). Place it into detector/tracker/data.

4.3 Download the pose models and place them into pretrained_models. All models and details are available in the AlphaPose Model Zoo.

If you only want to test recognition (no pose tracking), the tracking model from step 4.2 is not needed.
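Before running the demo, it is worth checking that the downloaded files ended up in the expected paths. A minimal sketch using the paths from steps 4.1 and 4.3 (`fast_res50_256x192.pth` is the pose model referenced in the demo commands below):

```shell
# Check the two files needed for plain recognition (run from the AlphaPose root)
for f in detector/yolo/data/yolov3-spp.weights \
         pretrained_models/fast_res50_256x192.pth; do
    if [ -f "$f" ]; then
        echo "OK: $f"
    else
        echo "MISSING: $f"
    fi
done
```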

5. Run

Video inference:

python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --video AlphaPose_video.avi  --outdir examples/res  --detector yolo  --save_img --save_video

Image inference:

python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/
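The demo writes its results into the `--outdir` directory (`examples/res` in the video command above): rendered images and/or a video if `--save_img`/`--save_video` were passed, plus a keypoint JSON file (named `alphapose-results.json` at the time of writing). A quick way to inspect what was produced:

```shell
# List the demo output directory, or print a hint if it does not exist yet
OUTDIR=examples/res
if [ -d "$OUTDIR" ]; then
    ls "$OUTDIR"
else
    echo "no output yet: run the demo first to create $OUTDIR"
fi
```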

 
