5. Companion Computer
We use the following hardware:
- Orin Nano 8GB Dev Kit with SSD
- Ubuntu 20.04 Laptop (22.04 should be ok too)
- Shorting Pin
- Bluetooth Keyboard + Mouse
- DVI to Mini-Display Cable
- USB-A to USB-C Cable
We set up the Orin via the SDK Manager route with JetPack 6.0. The SD card route fails because dev kits do not ship with the necessary updates to the QSPI bootloader, so we have to use the SDK Manager to update said bootloader over USB.
- Download the SDK Manager from https://developer.nvidia.com/sdk-manager onto the Ubuntu 20.04 laptop (the version I downloaded was: sdkmanager_2.0.0-11405_amd64.deb). Install the SDK Manager.
- Short the FC REC and GND pins on the edge of the Orin (labelled horizontal pins right below the fan unit).
- With the USB cable, plug the USB-C side into the Orin and the USB-A side into the Ubuntu 20.04 laptop.
- With the display cable, plug the DVI end into the Orin and the Mini-Display end into a monitor. Plug the external keyboard and mouse into the Orin.
- Power up the Orin and launch the SDK Manager right after. The SDK Manager should detect the Orin and ask you to select between a regular Orin Nano and the 8GB dev kit. Select the latter.
- Go through the installation steps (I skipped the DeepStream SDK). Under OEM Configuration, choose Runtime (not Pre-Config), and under Storage Device, choose NVMe.
Some useful links (though NVIDIA may change these after the time of writing):
- https://docs.nvidia.com/sdk-manager/install-with-sdkm-jetson/index.html
- https://developer.nvidia.com/embedded/learn/jetson-orin-nano-devkit-user-guide/software_setup.html
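Once flashing completes, you can sanity-check the result from a terminal on the Orin. A minimal check, assuming JetPack's apt packages were installed by the SDK Manager:
# Report the L4T (Jetson Linux) release the board is running
cat /etc/nv_tegra_release
# Report the installed JetPack metapackage version
apt-cache show nvidia-jetpack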
With setup verified, it's time to configure the Orin with our stuff. For the sake of completeness, here's what I installed.
Some quality-of-life installations that you can skip: I use VSCode for coding, Firefox for browsing, and Terminator for easier terminal management. If you want these, download the .deb for the arm64 version of VSCode at https://code.visualstudio.com/download and then:
# Install VSCode
cd Downloads
sudo apt install ./<file>.deb
# Install Firefox and Terminator
sudo apt-get install terminator firefox
# Install screen
sudo apt install screen
# Clean up a little
sudo apt autoremove
The VSCode install throws a "Download is performed unsandboxed..." error but the installation still works.
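If you want to double-check that these tools landed correctly, a quick version query of each is enough (a minimal sanity check, nothing more):
# Each command should print a version string if the install succeeded
code --version
firefox --version
screen -v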
These installations are most likely necessary. We start with ROS2, specifically ROS2 Humble:
# First ensure that the Ubuntu Universe repository is enabled.
sudo apt install software-properties-common
sudo add-apt-repository universe
# Now add the ROS 2 GPG key with apt.
sudo apt update && sudo apt install curl -y
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg
# Then add the repository to your sources list.
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null
# Install ROS2 packages
sudo apt update
sudo apt upgrade
sudo apt install ros-humble-desktop
sudo apt install ros-dev-tools
This follows the official ROS2 Humble installation tutorial.
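Before moving on, it's worth verifying the ROS2 install with the standard talker/listener demo that ships with ros-humble-desktop:
# Terminal 1: source ROS2 and start a publisher
source /opt/ros/humble/setup.bash
ros2 run demo_nodes_cpp talker
# Terminal 2: source ROS2 and start a subscriber; you should see matching Hello World messages
source /opt/ros/humble/setup.bash
ros2 run demo_nodes_py listener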
The other important package to install is conda:
# Note the ARM (aarch64) flavour
wget https://repo.anaconda.com/archive/Anaconda3-2024.02-1-Linux-aarch64.sh
bash Anaconda3-2024.02-1-Linux-aarch64.sh
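After the installer finishes, open a new terminal (or source ~/.bashrc) and confirm conda is available. Disabling base auto-activation is optional, but it keeps conda's Python from shadowing the system one in every shell:
# Confirm the install
conda --version
# Optional: don't auto-activate the base environment in every shell
conda config --set auto_activate_base false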
Next we set up the micro-xrce-dds-agent for communicating between the Pixracer and the Orin over the serial port:
# In a terminal on the Orin do:
sudo snap install micro-xrce-dds-agent --edge
# Some permissions need to be configured. Install the tools that let us edit the SELinux configuration
sudo apt install selinux-utils
sudo apt install selinux-policy-default
# Disable SELinux by editing its config file
sudo gedit /etc/selinux/config
# In the file, set SELINUX=disabled (the default is permissive)
# Test out the serial communication
sudo micro-xrce-dds-agent serial --dev /dev/ttyTHS0 --baudrate 921600
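If the agent still can't open the serial port with SELinux disabled, it's often just file permissions on /dev/ttyTHS0. A common fix (assuming the device is group-owned by dialout, as on stock Ubuntu) is to add yourself to that group so you can drop the sudo:
# Check who owns the UART device
ls -l /dev/ttyTHS0
# Add the current user to the dialout group, then log out and back in
sudo usermod -aG dialout $USER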
And now it's time for the StanfordMSL workspace. This is where TrajBridge and SFTI-Program will reside. These steps are almost identical to what we did with the SITL:
# Create and navigate to our workspace folder (feel free to change the name)
mkdir StanfordMSL
cd StanfordMSL
Then we install TrajBridge (for ROS2-PX4 link):
# Clone TrajBridge
git clone https://github.com/StanfordMSL/TrajBridge.git
# Initialize the submodules
cd TrajBridge
git submodule update --init --recursive
# Build (note: this is the nested TrajBridge workspace folder)
cd TrajBridge
# Don't forget to source your ROS2 setup script first, e.g.: source /opt/ros/humble/setup.bash
colcon build
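Once colcon finishes, a quick way to confirm the workspace built and overlays correctly is to source it and look for the px4_comm package (the one we launch later):
# Source the freshly built workspace and confirm px4_comm is visible
source install/setup.bash
ros2 pkg list | grep px4_comm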
Then, we install the SFTI package (for some neural network based controllers). The package follows the same idea as SimpleController back when we did the SITL simulation: a stand-alone package that is linked to TrajBridge through only the latter's install/setup.bash.
# Clone and checkout appropriate branch
git clone https://github.com/StanfordMSL/SFTI-Program.git
# Check out the appropriate branch
cd SFTI-Program
git checkout <branch>
# Initialize the submodules
git submodule update --init --recursive
# Build acados from source within its submodule
cd acados
mkdir -p build
cd build
cmake -DACADOS_WITH_QPOASES=ON ..
# add more optional arguments e.g. -DACADOS_WITH_OSQP=OFF/ON -DACADOS_INSTALL_DIR=<path_to_acados_installation_folder> above
make install -j4
# The default downloaded version of the tera renderer currently doesn't support the Orin Nano, so I have compiled both an x86
# flavor and an arm flavor and saved them in a forked repository in MSL. The user need only rename the file that is used
# (t_renderer) to match the correct architecture. Navigate to SFTI-Program/acados/bin and rename the file accordingly:
# If x86: t_renderer_x86 -> t_renderer
# If ARM: t_renderer_arm -> t_renderer
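# For example, on the Orin (ARM), assuming you're still in SFTI-Program/acados/build after make install:
cd ../bin
mv t_renderer_arm t_renderer
chmod +x t_renderer
cd ../..   # back to the SFTI-Program root for the next step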
# Install the SFTI requirements
# Edit the environment.yml so the local pip installs and the prefix config point to the correct folders
# UNTESTED: you might have to switch to Python 3.8 in environment.yml if you're on Ubuntu 20.04
conda env create -f environment.yml
conda activate sfti-env
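A quick sanity check of the fresh environment; this sketch assumes environment.yml pulls in torch and the acados_template Python interface (both used by the SFTI scripts):
# Both imports should succeed inside sfti-env
python -c "import torch; print(torch.__version__)"
python -c "import acados_template; print('acados ok')"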
We are now ready to fly! Here's an example run using vehicle rates MPC in SITL:
# In one terminal launch the micro-xrce
micro-xrce-dds-agent udp4 -p 8888
# In another terminal, launch the simulation
cd PX4-Autopilot
make px4_sitl gazebo-classic
# In the third terminal, launch the trajbridge state machine
source /opt/ros/humble/setup.bash  # might be prudent to put this in .bashrc
cd StanfordMSL/TrajBridge/TrajBridge
source install/setup.bash
ros2 launch px4_comm trajbridge.launch.py
# In the fourth terminal, the SFTI node
conda activate sfti-env
source /opt/ros/humble/setup.bash  # if you haven't already done so
cd StanfordMSL/TrajBridge/TrajBridge
source install/setup.bash          # if you haven't already done so
cd ../../SFTI-Program/scripts/
python sfti2command_node.py --cohort 101 --course orbit --pilot vr_M_Iceman --frame iris
And an actual run on hardware:
# On the computer that you want to relay the mocap from:
ros2 launch px4_comm mocap.launch.py
# On the computer that serves as your gcs:
ssh <user>@<orin-ip>
screen
# In one terminal launch the micro-xrce
micro-xrce-dds-agent serial --dev /dev/ttyTHS0 -b 921600
# In another screen window (Ctrl-a, c), launch the trajbridge node
source /opt/ros/humble/setup.bash  # might be prudent to put this in .bashrc
cd StanfordMSL/TrajBridge/TrajBridge
source install/setup.bash
ros2 launch px4_comm trajbridge.launch.py
# In another screen window (Ctrl-a, c), the SFTI node
conda activate sfti-env
source /opt/ros/humble/setup.bash  # if you haven't already done so
cd StanfordMSL/TrajBridge/TrajBridge
source install/setup.bash          # if you haven't already done so
cd ../../SFTI-Program/scripts/
python sfti2command_node.py --cohort 101 --course orbit --pilot vr_M_Iceman --frame carl
Some troubleshooting notes:
- Sometimes, the gcc version that conda calls is not compatible with ROS2. To fix this, update gcc inside the environment:
conda install -c conda-forge gcc=12.1.0
- To install with Windows Subsystem for Linux (WSL), after creating the sfti-env:
a. Change the torch and torchvision versions:
pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
b. Modify your .bashrc to point to CUDA 11.8:
export CUDA_HOME=/usr/local/cuda-11.8
export PATH=$CUDA_HOME/bin:$PATH
c. Point to the correct acados directory:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:"/home/lowjunen/StanfordMSL/SFTI-Program/acados/lib"
export ACADOS_SOURCE_DIR="/home/lowjunen/StanfordMSL/SFTI-Program/acados"
- Sometimes the micro-xrce agent needs a different serial port, e.g. /dev/ttyTHS0 or /dev/ttyTHS1. Adjust the --dev argument accordingly (see the check below).
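To see which THS UARTs your board actually exposes before pointing the agent at one, a quick check like this helps:
# List the Tegra high-speed UART devices present
ls -l /dev/ttyTHS*
# Check the kernel log for serial port enumeration
sudo dmesg | grep -i ttyTHS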