diff --git a/notebooks/road_following/data_collection.ipynb b/notebooks/road_following/data_collection.ipynb
index 5cffd00f..6da986b9 100644
--- a/notebooks/road_following/data_collection.ipynb
+++ b/notebooks/road_following/data_collection.ipynb
@@ -83,6 +83,7 @@
 "outputs": [],
 "source": [
 "# IPython Libraries for display and widgets\n",
+ "import ipywidgets\n",
 "import traitlets\n",
 "import ipywidgets.widgets as widgets\n",
 "from IPython.display import display\n",
@@ -105,114 +106,26 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "### Display Live Camera Feed"
+ "### Data Collection"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "First, let's initialize and display our camera like we did in the teleoperation notebook. \n",
+ "Let's display our camera like we did in the teleoperation notebook, but this time using a special ipywidget called `jupyter_clickable_image_widget` that lets you click on the image and capture the coordinates for data annotation.\n",
+ "This eliminates the need to use the gamepad for data annotation.\n",
 "\n",
- "We use Camera Class from JetBot to enable CSI MIPI camera. Our neural network takes a 224x224 pixel image as input. We'll set our camera to that size to minimize the filesize of our dataset (we've tested that it works for this task). In some scenarios it may be better to collect data in a larger image size and downscale to the desired size later."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "camera = Camera()\n",
 "\n",
 "image_widget = widgets.Image(format='jpeg', width=224, height=224)\n",
 "target_widget = widgets.Image(format='jpeg', width=224, height=224)\n",
 "\n",
 "x_slider = widgets.FloatSlider(min=-1.0, max=1.0, step=0.001, description='x')\n",
 "y_slider = widgets.FloatSlider(min=-1.0, max=1.0, step=0.001, description='y')\n",
 "\n",
 "def display_xy(camera_image):\n",
 " image = np.copy(camera_image)\n",
 " x = x_slider.value\n",
 " y = y_slider.value\n",
 " x = int(x * 224 / 2 + 112)\n",
 " y = int(y * 224 / 2 + 112)\n",
 " image = cv2.circle(image, (x, y), 8, (0, 255, 0), 3)\n",
 " image = cv2.circle(image, (112, 224), 8, (0, 0,255), 3)\n",
 " image = cv2.line(image, (x,y), (112,224), (255,0,0), 3)\n",
 " jpeg_image = bgr8_to_jpeg(image)\n",
 " return jpeg_image\n",
 "\n",
 "time.sleep(1)\n",
 "traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg)\n",
 "traitlets.dlink((camera, 'value'), (target_widget, 'value'), transform=display_xy)\n",
 "\n",
 "display(widgets.HBox([image_widget, target_widget]), x_slider, y_slider)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "### Create Gamepad Controller\n",
 "\n",
 "This step is similar to \"Teleoperation\" task. In this task, we will use gamepad controller to label images.\n",
 "\n",
 "The first thing we want to do is create an instance of the Controller widget, which we'll use to label images with \"x\" and \"y\" values as mentioned in introduction. The Controller widget takes a index parameter, which specifies the number of the controller. This is useful in case you have multiple controllers attached, or some gamepads appear as multiple controllers. 
To determine the index of the controller you're using,\n",
 "\n",
 "Visit http://html5gamepad.com.\n",
 "Press buttons on the gamepad you're using\n",
 "Remember the index of the gamepad that is responding to the button presses\n",
 "Next, we'll create and display our controller using that index."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "controller = widgets.Controller(index=0)\n",
 "\n",
 "display(controller)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "### Connect Gamepad Controller to Label Images\n",
 "\n",
 "Now, even though we've connected our gamepad, we haven't yet attached the controller to label images! We'll connect that to the left and right vertical axes using the dlink function. The dlink function, unlike the link function, allows us to attach a transform between the source and target. "
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "widgets.jsdlink((controller.axes[2], 'value'), (x_slider, 'value'))\n",
 "widgets.jsdlink((controller.axes[3], 'value'), (y_slider, 'value'))"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "### Collect data\n",
+ "We use Camera Class from JetBot to enable CSI MIPI camera. Our neural network takes a 224x224 pixel image as input. We'll set our camera to that size to minimize the filesize of our dataset (we've tested that it works for this task). In some scenarios it may be better to collect data in a larger image size and downscale to the desired size later.\n",
 "\n",
- "The following block of code will display the live image feed, as well as the number of images we've saved. We store\n",
- "the target X, Y values by\n",
+ "The following block of code will display the live image feed for you to click on for annotation on the left, as well as a snapshot of the last annotated image (with a green circle showing where you clicked) on the right.\n",
+ "Below that, it shows the number of images we've saved. \n",
 "\n",
- "1. Place the green dot on the target\n",
- "2. Press 'down' on the DPAD to save\n",
- "\n",
- "This will store a file in the ``dataset_xy`` folder with files named\n",
 "\n",
 "``xy___.jpg``\n",
 "\n",
- "When we train, we load the images and parse the x, y values from the filename"
+ "When you click on the live image on the left, it stores a file in the ``dataset_xy`` folder with files named\n",
 "\n",
 "``xy___.jpg``\n",
 "\n",
+ "When we train, we load the images and parse the x, y values from the filename"
 ]
 },
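Note on the filenames above: the x and y stored in ``xy_<x>_<y>_<uuid>.jpg`` are now the raw pixel offsets of the click (the gamepad-based version below encodes them as ``x * 50 + 50`` instead). The snippet below is a minimal sketch, not part of the notebook; the [-1, 1] normalization is an assumption based on the default 224x224 camera size and may differ from what the training notebook actually does.

```python
import glob
import os

def parse_xy(path, width=224, height=224):
    # Filenames look like xy_<x>_<y>_<uuid>.jpg, with zero-padded pixel offsets of the click.
    name = os.path.basename(path)
    parts = name.split('_')
    x, y = float(parts[1]), float(parts[2])
    # Optionally map pixel offsets to [-1, 1] (assumed convention, not shown in this diff).
    return (x - width / 2) / (width / 2), (y - height / 2) / (height / 2)

for path in sorted(glob.glob('dataset_xy/*.jpg'))[:5]:
    print(path, parse_xy(path))
```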
 {
@@ -221,6 +134,8 @@
 "metadata": {},
 "outputs": [],
 "source": [
+ "from jupyter_clickable_image_widget import ClickableImageWidget\n",
+ "\n",
 "DATASET_DIR = 'dataset_xy'\n",
 "\n",
 "# we have this \"try/except\" statement because these next functions can throw an error if the directories exist already\n",
@@ -229,28 +144,45 @@
 "except FileExistsError:\n",
 " print('Directories not created becasue they already exist')\n",
 "\n",
- "for b in controller.buttons:\n",
- " b.unobserve_all()\n",
- "\n",
- "count_widget = widgets.IntText(description='count', value=len(glob.glob(os.path.join(DATASET_DIR, '*.jpg'))))\n",
- "\n",
- "def xy_uuid(x, y):\n",
- " return 'xy_%03d_%03d_%s' % (x * 50 + 50, y * 50 + 50, uuid1())\n",
+ "camera = Camera()\n",
 "\n",
- "def save_snapshot(change):\n",
- " if change['new']:\n",
- " uuid = xy_uuid(x_slider.value, y_slider.value)\n",
+ "# create image preview\n",
+ "camera_widget = ClickableImageWidget(width=camera.width, height=camera.height)\n",
+ "snapshot_widget = ipywidgets.Image(width=camera.width, height=camera.height)\n",
+ "traitlets.dlink((camera, 'value'), (camera_widget, 'value'), transform=bgr8_to_jpeg)\n",
+ "\n",
+ "# create widgets\n",
+ "count_widget = ipywidgets.IntText(description='count')\n",
+ "# manually update counts at initialization\n",
+ "count_widget.value = len(glob.glob(os.path.join(DATASET_DIR, '*.jpg')))\n",
+ "\n",
+ "def save_snapshot(_, content, msg):\n",
+ " if content['event'] == 'click':\n",
+ " data = content['eventData']\n",
+ " x = data['offsetX']\n",
+ " y = data['offsetY']\n",
+ " \n",
+ " # save to disk\n",
+ " #dataset.save_entry(category_widget.value, camera.value, x, y)\n",
+ " uuid = 'xy_%03d_%03d_%s' % (x, y, uuid1())\n",
 " image_path = os.path.join(DATASET_DIR, uuid + '.jpg')\n",
 " with open(image_path, 'wb') as f:\n",
- " f.write(image_widget.value)\n",
+ " f.write(camera_widget.value)\n",
+ " \n",
+ " # display saved snapshot\n",
+ " snapshot = camera.value.copy()\n",
+ " snapshot = cv2.circle(snapshot, (x, y), 8, (0, 255, 0), 3)\n",
+ " snapshot_widget.value = bgr8_to_jpeg(snapshot)\n",
 " count_widget.value = len(glob.glob(os.path.join(DATASET_DIR, '*.jpg')))\n",
+ " \n",
+ "camera_widget.on_msg(save_snapshot)\n",
 "\n",
- "controller.buttons[13].observe(save_snapshot, names='value')\n",
- "\n",
- "display(widgets.VBox([\n",
- " target_widget,\n",
+ "data_collection_widget = ipywidgets.VBox([\n",
+ " ipywidgets.HBox([camera_widget, snapshot_widget]),\n",
 " count_widget\n",
- "]))"
+ "])\n",
+ "\n",
+ "display(data_collection_widget)"
 ]
 },
 {
@@ -314,9 +246,9 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
- "version": "3.6.8"
+ "version": "3.6.9"
 }
 },
 "nbformat": 4,
- "nbformat_minor": 2
+ "nbformat_minor": 4
 }
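While collecting, the clicked coordinates are only visible in the snapshot preview, so it can help to re-draw them over the saved images before zipping the dataset. This is a standalone sketch, not part of either notebook; it assumes the ``dataset_xy`` layout created above, parses the pixel coordinates from the filenames, and reuses the same green-circle marker drawn into the snapshot widget. The ``dataset_xy_review`` folder name is only an illustrative choice.

```python
import glob
import os
import cv2

REVIEW_DIR = 'dataset_xy_review'  # hypothetical output folder for annotated copies
os.makedirs(REVIEW_DIR, exist_ok=True)

for path in sorted(glob.glob('dataset_xy/*.jpg')):
    name = os.path.basename(path)
    # xy_<x>_<y>_<uuid>.jpg -> pixel coordinates of the recorded click
    x, y = int(name.split('_')[1]), int(name.split('_')[2])
    image = cv2.imread(path)
    # same marker the notebook draws into snapshot_widget
    image = cv2.circle(image, (x, y), 8, (0, 255, 0), 3)
    cv2.imwrite(os.path.join(REVIEW_DIR, name), image)
```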
diff --git a/notebooks/road_following/data_collection_gamepad.ipynb b/notebooks/road_following/data_collection_gamepad.ipynb
new file mode 100644
index 00000000..bfe227e2
--- /dev/null
+++ b/notebooks/road_following/data_collection_gamepad.ipynb
@@ -0,0 +1,322 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Road Following - Data Collection (using Gamepad)\n",
+ "\n",
+ "If you've run through the collision avoidance sample, you should be familiar with the following three steps\n",
+ "\n",
+ "1. Data collection\n",
+ "2. Training\n",
+ "3. Deployment\n",
+ "\n",
+ "In this notebook, we'll do the same exact thing! Except, instead of classification, you'll learn a different fundamental technique, **regression**, that we'll use to\n",
+ "enable JetBot to follow a road (or really, any path or target point). \n",
+ "\n",
+ "1. Place the JetBot in different positions on a path (offset from center, different angles, etc)\n",
+ "\n",
+ "> Remember from collision avoidance, data variation is key!\n",
+ "\n",
+ "2. Display the live camera feed from the robot\n",
+ "3. Using a gamepad controller, place a 'green dot', which corresponds to the target direction we want the robot to travel, on the image.\n",
+ "4. Store the X, Y values of this green dot along with the image from the robot's camera\n",
+ "\n",
+ "Then, in the training notebook, we'll train a neural network to predict the X, Y values of our label. In the live demo, we'll use\n",
+ "the predicted X, Y values to compute an approximate steering value (it's not 'exactly' an angle, as\n",
+ "that would require image calibration, but it's roughly proportional to the angle so our controller will work fine).\n",
+ "\n",
+ "So how do you decide exactly where to place the target for this example? Here is a guide we think may help\n",
+ "\n",
+ "1. Look at the live video feed from the camera\n",
+ "2. Imagine the path that the robot should follow (try to approximate the distance it needs to avoid running off road etc.)\n",
+ "3. Place the target as far along this path as it can go so that the robot could head straight to the target without 'running off' the road.\n",
+ "\n",
+ "> For example, if we're on a very straight road, we could place it at the horizon. If we're on a sharp turn, it may need to be placed closer to the robot so it doesn't run out of boundaries.\n",
+ "\n",
+ "Assuming our deep learning model works as intended, these labeling guidelines should ensure the following:\n",
+ "\n",
+ "1. The robot can safely travel directly towards the target (without going out of bounds etc.)\n",
+ "2. The target will continuously progress along our imagined path\n",
+ "\n",
+ "What we get is a 'carrot on a stick' that moves along our desired trajectory. Deep learning decides where to place the carrot, and JetBot just follows it :)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Labeling example video\n",
+ "\n",
+ "Execute the block of code to see an example of how we labeled the images. This model worked after only 123 images :)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from IPython.display import HTML\n",
+ "HTML('')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Import Libraries"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "So let's get started by importing all the required libraries for \"data collection\" purposes. We will mainly use OpenCV to visualize and save images with labels. Libraries such as uuid and datetime are used for image naming. 
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# IPython Libraries for display and widgets\n", + "import traitlets\n", + "import ipywidgets.widgets as widgets\n", + "from IPython.display import display\n", + "\n", + "# Camera and Motor Interface for JetBot\n", + "from jetbot import Robot, Camera, bgr8_to_jpeg\n", + "\n", + "# Python basic pakcages for image annotation\n", + "from uuid import uuid1\n", + "import os\n", + "import json\n", + "import glob\n", + "import datetime\n", + "import numpy as np\n", + "import cv2\n", + "import time" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Display Live Camera Feed" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "First, let's initialize and display our camera like we did in the teleoperation notebook. \n", + "\n", + "We use Camera Class from JetBot to enable CSI MIPI camera. Our neural network takes a 224x224 pixel image as input. We'll set our camera to that size to minimize the filesize of our dataset (we've tested that it works for this task). In some scenarios it may be better to collect data in a larger image size and downscale to the desired size later." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "camera = Camera()\n", + "\n", + "image_widget = widgets.Image(format='jpeg', width=224, height=224)\n", + "target_widget = widgets.Image(format='jpeg', width=224, height=224)\n", + "\n", + "x_slider = widgets.FloatSlider(min=-1.0, max=1.0, step=0.001, description='x')\n", + "y_slider = widgets.FloatSlider(min=-1.0, max=1.0, step=0.001, description='y')\n", + "\n", + "def display_xy(camera_image):\n", + " image = np.copy(camera_image)\n", + " x = x_slider.value\n", + " y = y_slider.value\n", + " x = int(x * 224 / 2 + 112)\n", + " y = int(y * 224 / 2 + 112)\n", + " image = cv2.circle(image, (x, y), 8, (0, 255, 0), 3)\n", + " image = cv2.circle(image, (112, 224), 8, (0, 0,255), 3)\n", + " image = cv2.line(image, (x,y), (112,224), (255,0,0), 3)\n", + " jpeg_image = bgr8_to_jpeg(image)\n", + " return jpeg_image\n", + "\n", + "time.sleep(1)\n", + "traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg)\n", + "traitlets.dlink((camera, 'value'), (target_widget, 'value'), transform=display_xy)\n", + "\n", + "display(widgets.HBox([image_widget, target_widget]), x_slider, y_slider)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create Gamepad Controller\n", + "\n", + "This step is similar to \"Teleoperation\" task. In this task, we will use gamepad controller to label images.\n", + "\n", + "The first thing we want to do is create an instance of the Controller widget, which we'll use to label images with \"x\" and \"y\" values as mentioned in introduction. The Controller widget takes a index parameter, which specifies the number of the controller. This is useful in case you have multiple controllers attached, or some gamepads appear as multiple controllers. To determine the index of the controller you're using,\n", + "\n", + "Visit http://html5gamepad.com.\n", + "Press buttons on the gamepad you're using\n", + "Remember the index of the gamepad that is responding to the button presses\n", + "Next, we'll create and display our controller using that index." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "controller = widgets.Controller(index=0)\n", + "\n", + "display(controller)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Connect Gamepad Controller to Label Images\n", + "\n", + "Now, even though we've connected our gamepad, we haven't yet attached the controller to label images! We'll connect that to the left and right vertical axes using the dlink function. The dlink function, unlike the link function, allows us to attach a transform between the source and target. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "widgets.jsdlink((controller.axes[2], 'value'), (x_slider, 'value'))\n", + "widgets.jsdlink((controller.axes[3], 'value'), (y_slider, 'value'))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Collect data\n", + "\n", + "The following block of code will display the live image feed, as well as the number of images we've saved. We store\n", + "the target X, Y values by\n", + "\n", + "1. Place the green dot on the target\n", + "2. Press 'down' on the DPAD to save\n", + "\n", + "This will store a file in the ``dataset_xy`` folder with files named\n", + "\n", + "``xy___.jpg``\n", + "\n", + "When we train, we load the images and parse the x, y values from the filename" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "DATASET_DIR = 'dataset_xy'\n", + "\n", + "# we have this \"try/except\" statement because these next functions can throw an error if the directories exist already\n", + "try:\n", + " os.makedirs(DATASET_DIR)\n", + "except FileExistsError:\n", + " print('Directories not created becasue they already exist')\n", + "\n", + "for b in controller.buttons:\n", + " b.unobserve_all()\n", + "\n", + "count_widget = widgets.IntText(description='count', value=len(glob.glob(os.path.join(DATASET_DIR, '*.jpg'))))\n", + "\n", + "def xy_uuid(x, y):\n", + " return 'xy_%03d_%03d_%s' % (x * 50 + 50, y * 50 + 50, uuid1())\n", + "\n", + "def save_snapshot(change):\n", + " if change['new']:\n", + " uuid = xy_uuid(x_slider.value, y_slider.value)\n", + " image_path = os.path.join(DATASET_DIR, uuid + '.jpg')\n", + " with open(image_path, 'wb') as f:\n", + " f.write(image_widget.value)\n", + " count_widget.value = len(glob.glob(os.path.join(DATASET_DIR, '*.jpg')))\n", + "\n", + "controller.buttons[13].observe(save_snapshot, names='value')\n", + "\n", + "display(widgets.VBox([\n", + " target_widget,\n", + " count_widget\n", + "]))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Next" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Once you've collected enough data, we'll need to copy that data to our GPU desktop or cloud machine for training. First, we can call the following terminal command to compress our dataset folder into a single zip file. \n", + "\n", + "> If you're training on the JetBot itself, you can skip this step!" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The ! 
prefix indicates that we want to run the cell as a shell (or terminal) command.\n", + "\n", + "The -r flag in the zip command below indicates recursive so that we include all nested files, the -q flag indicates quiet so that the zip command doesn't print any output" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def timestr():\n", + " return str(datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))\n", + "\n", + "!zip -r -q road_following_{DATASET_DIR}_{timestr()}.zip {DATASET_DIR}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You should see a file named road_following_.zip in the Jupyter Lab file browser. You should download the zip file using the Jupyter Lab file browser by right clicking and selecting Download." + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.8" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} \ No newline at end of file diff --git a/scripts/create-sdcard-image-from-scratch.sh b/scripts/create-sdcard-image-from-scratch.sh index 0299233c..da01e2e2 100755 --- a/scripts/create-sdcard-image-from-scratch.sh +++ b/scripts/create-sdcard-image-from-scratch.sh @@ -1,12 +1,20 @@ #!/bin/bash +set -e + password='jetbot' +# Record the time this script starts +date +# Get the full dir name of this script +DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" + # Keep updating the existing sudo time stamp sudo -v while true; do sudo -n true; sleep 120; kill -0 "$$" || exit; done 2>/dev/null & # Enable i2c permissions +echo -e "\e[100m Enable i2c permissions \e[0m" sudo usermod -aG i2c $USER # Make swapfile @@ -18,27 +26,41 @@ sudo swapon /var/swapfile sudo bash -c 'echo "/var/swapfile swap swap defaults 0 0" >> /etc/fstab' # Install pip and some python dependencies +echo -e "\e[104m Install pip and some python dependencies \e[0m" sudo apt-get update sudo apt install -y python3-pip python3-pil -sudo pip3 install --upgrade numpy +sudo -H pip3 install Cython +sudo -H pip3 install --upgrade numpy + +# Install jtop +echo -e "\e[100m Install jtop \e[0m" +sudo -H pip3 install jetson-stats + # Install the pre-built TensorFlow pip wheel +echo -e "\e[48;5;202m Install the pre-built TensorFlow pip wheel \e[0m" sudo apt-get update -sudo apt-get install -y libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev +sudo apt-get install -y libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran sudo apt-get install -y python3-pip -sudo pip3 install -U pip -sudo pip3 install -U numpy==1.16.1 future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.6 enum34 futures testresources setuptools protobuf -sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.14.0+nv19.10 +sudo pip3 install -U pip testresources setuptools numpy==1.16.1 future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11 +# TF-1.15 +sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2' # Install the pre-built 
PyTorch pip wheel
-wget https://nvidia.box.com/shared/static/phqe92v26cbhqjohwtvxorrwnmrnfx1o.whl -O torch-1.3.0-cp36-cp36m-linux_aarch64.whl
-sudo pip3 install numpy torch-1.3.0-cp36-cp36m-linux_aarch64.whl
+echo -e "\e[45m Install the pre-built PyTorch pip wheel \e[0m"
+cd
+wget -N https://nvidia.box.com/shared/static/yr6sjswn25z7oankw8zy1roow9cy5ur1.whl -O torch-1.6.0rc2-cp36-cp36m-linux_aarch64.whl
+sudo apt-get install -y python3-pip libopenblas-base libopenmpi-dev
+sudo -H pip3 install Cython
+sudo -H pip3 install numpy torch-1.6.0rc2-cp36-cp36m-linux_aarch64.whl
 
 # Install torchvision package
+echo -e "\e[45m Install torchvision package \e[0m"
+cd
 git clone https://github.com/pytorch/vision
 cd vision
-git checkout v0.4.0
-sudo python3 setup.py install
+#git checkout v0.4.0
+sudo -H python3 setup.py install
 
 # Install torch2trt
 cd $HOME
@@ -47,21 +69,40 @@
 cd torch2trt
 sudo python3 setup.py install
 
 # Install traitlets (master, to support the unlink() method)
-sudo python3 -m pip install git+https://github.com/ipython/traitlets@master
+echo -e "\e[48;5;172m Install traitlets \e[0m"
+#sudo python3 -m pip install git+https://github.com/ipython/traitlets@master
+sudo python3 -m pip install git+https://github.com/ipython/traitlets@dead2b8cdde5913572254cf6dc70b5a6065b86f8
 
 # Install jupyter lab
-sudo apt install -y nodejs npm
-sudo pip3 install jupyter jupyterlab
-sudo jupyter labextension install @jupyter-widgets/jupyterlab-manager
+echo -e "\e[48;5;172m Install Jupyter Lab \e[0m"
+sudo apt install -y curl
+curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
+sudo apt install -y nodejs libffi-dev
+sudo -H pip3 install jupyter jupyterlab
+sudo -H jupyter labextension install @jupyter-widgets/jupyterlab-manager
 jupyter lab --generate-config
-#jupyter notebook password
 python3 -c "from notebook.auth.security import set_password; set_password('$password', '$HOME/.jupyter/jupyter_notebook_config.json')"
+# fix for permission error
+sudo chown -R jetbot:jetbot ~/.local/share/
+
+# Install jupyter_clickable_image_widget
+echo -e "\e[42m Install jupyter_clickable_image_widget \e[0m"
+cd
+sudo apt-get install -y libssl1.0-dev
+git clone https://github.com/jaybdub/jupyter_clickable_image_widget
+cd jupyter_clickable_image_widget
+git checkout tags/v0.1
+sudo -H pip3 install -e .
+sudo jupyter labextension install js
+sudo jupyter lab build
+
 # Install bokeh
 sudo pip3 install bokeh
 sudo jupyter labextension install @bokeh/jupyter_bokeh
+
 # install jetbot python module
 cd
 sudo apt install -y python3-smbus
@@ -88,3 +129,7 @@
 sudo systemctl disable nvzramconfig.service
 
 # Copy JetBot notebooks to home directory
 cp -r ~/jetbot/notebooks ~/Notebooks
+echo -e "\e[42m All done! \e[0m"
+
+# Record the time this script ends
+date
diff --git a/scripts/run-script-and-save-log.sh b/scripts/run-script-and-save-log.sh
index fb129e4b..3a5ce941 100755
--- a/scripts/run-script-and-save-log.sh
+++ b/scripts/run-script-and-save-log.sh
@@ -1,3 +1,4 @@
 #!/bin/bash
-./create-sdcard-image-from-scratch.sh 2>&1 | tee ~/jetbot-create-sdcard-image.log
+DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+${DIR}/create-sdcard-image-from-scratch.sh 2>&1 | tee ~/jetbot-create-sdcard-image.log
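After the script finishes (the full log ends up in ``~/jetbot-create-sdcard-image.log``), a quick import check of the freshly built wheels can save a reflash later. This is an optional snippet to run manually on the Jetson, not part of the scripts; the versions printed should roughly correspond to the wheels pinned above, but exact version strings are not guaranteed.

```python
# Quick post-install sanity check for the packages the script builds or downloads.
import torch
import torchvision
import tensorflow as tf

print('torch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('torchvision:', torchvision.__version__)
print('tensorflow:', tf.__version__)
```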