1. Flashing the image
Follow the official guide at https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit; it is not repeated here. Note that a Class 10 (or faster) microSD card is recommended.
2. Post-flash tips (network setup + apt mirror + pip mirror)
Network setup
Wired network:
For the HUST campus wired network, see my earlier blog post: https://blog.csdn.net/vslyu/article/details/83790487; it is not repeated here.
Wireless network:
NVIDIA's recommended USB Wi-Fi adapter is the EDIMAX-7811; judging by reports online, some other adapters (e.g. Xiaomi's) also seem to work out of the box.
apt mirror
Switch the apt source. (For ARM hardware, USTC and Tsinghua appear to be the only education-network mirrors that carry ubuntu-ports; USTC's is recommended for its stability and prompt syncing with the upstream repositories.)
Back up the sources.list file:
sudo mv /etc/apt/sources.list /etc/apt/sources.list.bak
Then replace the contents of /etc/apt/sources.list with:
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-updates main restricted universe multiverse
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-security main restricted universe multiverse
deb-src http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-security main restricted universe multiverse
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-backports main restricted universe multiverse
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic main universe restricted
deb-src http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic main universe restricted
After editing, just update:
sudo apt update
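The backup-and-replace steps above can be combined into one script. This is a minimal sketch, demoed against a scratch file so it can be tried safely; on the Nano itself, run it with TARGET=/etc/apt/sources.list under sudo:

```shell
# TARGET defaults to a scratch file for a safe dry run; on the device,
# set TARGET=/etc/apt/sources.list and run with root privileges.
TARGET="${TARGET:-$(mktemp)}"
[ -f "$TARGET" ] && cp "$TARGET" "$TARGET.bak"   # back up the old list
cat > "$TARGET" <<'EOF'
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-updates main restricted universe multiverse
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-security main restricted universe multiverse
deb-src http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-security main restricted universe multiverse
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-backports main restricted universe multiverse
deb http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic main universe restricted
deb-src http://mirrors.ustc.edu.cn/ubuntu-ports/ bionic main universe restricted
EOF
grep -c '^deb' "$TARGET"   # prints 8: all entries written
```

After pointing TARGET at the real file, sudo apt update picks up the new mirror.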
pip mirror
Switch the pip index (Tsinghua's is recommended).
See https://mirrors.tuna.tsinghua.edu.cn/help/pypi/
pip install pip -U
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
Confirm the change took effect:
vslyu@vslyu-nano-tx:~$ more /home/vslyu/.config/pip/pip.conf
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
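The pip config set command above simply writes this file; it can also be written by hand. A sketch using a scratch directory for safety (PIP_CONF_DIR is a demo variable introduced here; on the device the file lives at ~/.config/pip/pip.conf):

```shell
# Writing pip.conf directly is equivalent to `pip config set global.index-url ...`.
# PIP_CONF_DIR defaults to a scratch directory so the sketch is safe to try.
PIP_CONF_DIR="${PIP_CONF_DIR:-$(mktemp -d)}"
mkdir -p "$PIP_CONF_DIR"
cat > "$PIP_CONF_DIR/pip.conf" <<'EOF'
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
EOF
cat "$PIP_CONF_DIR/pip.conf"
```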
Environment variables for NVIDIA's nvcc compiler:
export PATH=/usr/local/cuda-10.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
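These exports only last for the current shell; to persist them across logins, append them to the shell startup file. A sketch assuming the CUDA 10.0 layout with libraries under /usr/local/cuda-10.0/lib64; RC defaults to a scratch file here, so set RC="$HOME/.bashrc" on the device:

```shell
# RC defaults to a scratch file for a safe dry run; use RC="$HOME/.bashrc"
# on the Nano to make the variables permanent.
RC="${RC:-$(mktemp)}"
cat >> "$RC" <<'EOF'
export PATH=/usr/local/cuda-10.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
EOF
grep -c 'cuda-10.0' "$RC"   # prints 2: both lines appended
```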
3. Training: configuring TensorFlow (GPU build)
TensorFlow can be set up in two ways:
The first is compiling from source with Bazel. This route is the most flexible: the C/C++/Java/Go/JS API bindings and Intel/NVIDIA/ARM hardware targets are all selectable at configure time. See the official guide at https://www.tensorflow.org/, and for the Jetson series this JetsonHacks tutorial: http://www.jetsonhacks.com/2017/09/14/build-tensorflow-on-nvidia-jetson-tx2-development-kit/. The drawback is the long build time (on a Xeon [email protected] machine it took me nearly 12 hours to build the Python .whl and another hour for the C++ API).
The second is installing a prebuilt, customized TensorFlow package with a plain pip install. This is simple and fast (actual speed depends on your network), but less flexible, since you are limited to the packages on offer. For TensorFlow on the Jetson Nano, NVIDIA's customized build is the one to use (TensorRT support is enabled by default). Installation is documented at https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html.
Using NVIDIA's customized build for the embedded Jetson platform (Python 3):
Install the system dependencies:
sudo apt-get install libhdf5-serial-dev hdf5-tools zlib1g-dev zip libjpeg8-dev libhdf5-dev python3-pip
Upgrade pip3:
sudo pip3 install -U pip
Install the required Python packages:
pip3 install -U numpy
pip3 install -U h5py  # takes roughly half an hour
pip3 install -U grpcio absl-py py-cpuinfo psutil portpicker six mock requests gast astor termcolor
Install TensorFlow (takes about half an hour):
sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu
What a successful setup looks like:
vslyu@vslyu-nano-tx:~$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
>>>
4. Inference: configuring TensorRT
References for setting up pycuda: https://www.jianshu.com/p/775394de61cf
https://devtalk.nvidia.com/default/topic/1056369/b/t/post/5356083/
https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#installing-pycuda
The flashed Jetson Nano image already ships with the TensorRT libraries:
export CPATH=$CPATH:/usr/local/cuda-10.0/targets/aarch64-linux/include
export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda-10.0/targets/aarch64-linux/lib/
pip3 install 'pycuda>=2017.1.1'
5. Editor setup: Jupyter Notebook
vslyu@vslyu-nano-tx:~/training$ sudo pip3 install jupyter
Takes roughly half an hour.
Appendix
The "ImportError: cannot import name main" error after upgrading with pip install pip -U:
See https://blog.csdn.net/zong596568821xp/article/details/80410416
Edit /usr/bin/pip, changing the original:
from pip import main
if __name__ == '__main__':
sys.exit(main())
to:
from pip import __main__
if __name__ == '__main__':
sys.exit(__main__.main())
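An alternative that avoids editing /usr/bin/pip at all is to invoke pip as a module through the interpreter, which bypasses the stale wrapper script entirely:

```shell
# Running pip via `python3 -m pip` skips the outdated /usr/bin/pip wrapper,
# so the "cannot import name main" error never triggers.
python3 -m pip --version
```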
pip install pycuda fails with "src/cpp/cuda.hpp:14:10: fatal error: cuda.h: No such file or directory":
Fix:
export CPATH=$CPATH:/usr/local/cuda/targets/aarch64-linux/include
export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda-10.0/targets/aarch64-linux/lib/