From d7ee6809728516e71e9e6b901c6611c27e758dea Mon Sep 17 00:00:00 2001 From: Tom Howard Date: Wed, 17 Jan 2024 16:14:22 +0000 Subject: [PATCH] fixed all relevant (for now) rel links --- docs/com2009/assignment1/part1/subscriber.md | 2 +- docs/com2009/assignment1/part2/move_square.md | 4 ++-- docs/com2009/assignment1/part2/twist-tips.md | 2 +- docs/com2009/assignment1/part4/move_client.md | 2 +- docs/com2009/assignment1/part4/move_server.md | 2 +- docs/com2009/assignment1/part5/action_client.md | 4 ++-- .../part5/preemptive_action_client.md | 2 +- .../assignment1/part6/object_detection.md | 4 ++-- .../part6/object_detection_complete.md | 2 +- docs/com2009/assignment2/README.md | 8 ++++---- docs/others/amr31001/README.md | 4 ++-- docs/others/amr31001/lab1.md | 2 +- docs/others/amr31001/lab2.md | 4 ++-- docs/waffles/exercises.md | 16 +++++++--------- docs/waffles/fact-finding.md | 12 ++++++------ docs/waffles/launching-ros.md | 6 +++--- docs/waffles/shutdown.md | 2 +- docs/waffles/tips.md | 2 +- 18 files changed, 39 insertions(+), 41 deletions(-) diff --git a/docs/com2009/assignment1/part1/subscriber.md b/docs/com2009/assignment1/part1/subscriber.md index f03a6084..2e8fb3e0 100644 --- a/docs/com2009/assignment1/part1/subscriber.md +++ b/docs/com2009/assignment1/part1/subscriber.md @@ -29,7 +29,7 @@ Copy **all** the code below into your `subscriber.py` file and (again) *make sur 6. Finally, the code is executed by again performing a `__name__` check, creating an instance of the `Subscriber()` class and calling the `main()` method from that class. !!! warning "Fill in the Blank!" - Replace the `{BLANK}` in the code above with the name of the topic that our [`publisher.py` node](../publisher) was set up to publish to! + Replace the `{BLANK}` in the code above with the name of the topic that our [`publisher.py` node](./publisher.md) was set up to publish to! ## Don't Forget the Shebang! {#dfts} diff --git a/docs/com2009/assignment1/part2/move_square.md b/docs/com2009/assignment1/part2/move_square.md index 70a606ca..5cdc8921 100644 --- a/docs/com2009/assignment1/part2/move_square.md +++ b/docs/com2009/assignment1/part2/move_square.md @@ -26,7 +26,7 @@ Moving in a square can be achieved by switching between two different movement s 1. Import the `Twist` message for publishing velocity commands to `/cmd_vel`. 2. Import the `Odometry` message, for use when subscribing to the `/odom` topic. -3. Import the `euler_from_quaternion` function to convert orientation from quaternions (as provided in the `Odometry` message) to Euler angles (about [the principal axes](../../part2/#principal-axes)). +3. Import the `euler_from_quaternion` function to convert orientation from quaternions (as provided in the `Odometry` message) to Euler angles (about [the principal axes](../part2.md#principal-axes)). 4. Finally, import some useful mathematical operations (and `pi`), which you may find useful: | Mathematical Operation | Python Implementation | @@ -53,7 +53,7 @@ Moving in a square can be achieved by switching between two different movement s 6. Here we obtain the robot's **current** position coordinates. -7. And here we obtain the robot's current orientation (in quaternions) and convert it to Euler angles (in radians) about [the principal axes](../../part2/#principal-axes), where: +7. 
And here we obtain the robot's current orientation (in quaternions) and convert it to Euler angles (in radians) about [the principal axes](../part2.md#principal-axes), where: * "roll" = θx * "pitch" = θy * "yaw" = θz diff --git a/docs/com2009/assignment1/part2/twist-tips.md b/docs/com2009/assignment1/part2/twist-tips.md index f55edf46..61596a3e 100644 --- a/docs/com2009/assignment1/part2/twist-tips.md +++ b/docs/com2009/assignment1/part2/twist-tips.md @@ -4,7 +4,7 @@ title: Working with Twist Messages in Python # Working with Twist Messages in Python -From [the Part 1 publisher exercise](../../part1/#ex5), we know how to publish a `String` type message to a topic in Python, but how do we apply the same principles to a `Twist` message (on the `/cmd_vel` topic)? Let's have a look at this... +From [the Part 1 publisher exercise](../part1.md#ex5), we know how to publish a `String` type message to a topic in Python, but how do we apply the same principles to a `Twist` message (on the `/cmd_vel` topic)? Let's have a look at this... First, you need to import the `rospy` library, as well as the `Twist` message type from the `geometry_msgs` library: diff --git a/docs/com2009/assignment1/part4/move_client.md b/docs/com2009/assignment1/part4/move_client.md index e56c6649..dd81f963 100644 --- a/docs/com2009/assignment1/part4/move_client.md +++ b/docs/com2009/assignment1/part4/move_client.md @@ -48,7 +48,7 @@ Copy **all** the code below into your `move_client.py` file and review the annot 10. To finish off, we print the response to the terminal to give the user some feedback. Job done! !!! warning "Fill in the Blank!" - Consider the `import` statement for [the service *Server* that we created earlier](../move_server)... Which part of the `SetBool` Service message was imported here? Now consider that you need to build a client to call this service... which part of the `SetBool` Service message is needed in order to *call* a service? + Consider the `import` statement for [the service *Server* that we created earlier](./move_server.md)... Which part of the `SetBool` Service message was imported here? Now consider that you need to build a client to call this service... which part of the `SetBool` Service message is needed in order to *call* a service? **Note:** the same `{BLANK}` appears in two places in the code above - *the answer is the same in both places!* diff --git a/docs/com2009/assignment1/part4/move_server.md b/docs/com2009/assignment1/part4/move_server.md index 2ed6149d..a88c3f16 100644 --- a/docs/com2009/assignment1/part4/move_server.md +++ b/docs/com2009/assignment1/part4/move_server.md @@ -83,7 +83,7 @@ Copy **all** the code below into your `move_server.py` file and review the annot 17. The `rospy.spin()` function keeps our node running indefinitely (so that the callback function can continue to execute, whenever the service is called). !!! warning "Fill in the Blank!" - Which message package does [the `Twist` message](../../part2/#twist-py) belong to? + Which message package does [the `Twist` message](../part2/twist-tips.md) belong to?
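While those blanks are left for you to fill in, it may help to see the general shape of a Python service *client*. The sketch below is illustrative only: it deliberately uses the argument-free `std_srvs/Trigger` service and a made-up service name (`/my_service`), so it doesn't give away any of the `SetBool` fill-in-the-blanks above:

```py
#!/usr/bin/env python3
# An illustrative rospy service client (not one of the exercise files).
# "/my_service" is a made-up name: point it at a real service to try it out.

import rospy
from std_srvs.srv import Trigger, TriggerRequest

rospy.init_node("trigger_client")
rospy.wait_for_service("/my_service")        # block until the server is advertised
service = rospy.ServiceProxy("/my_service", Trigger)
response = service(TriggerRequest())         # call the service and grab the response
print(f"success: {response.success}, message: '{response.message}'")
```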

← Back to Part 4 - Exercise 1 diff --git a/docs/com2009/assignment1/part5/action_client.md b/docs/com2009/assignment1/part5/action_client.md index e9bfe3a1..f7e0d446 100644 --- a/docs/com2009/assignment1/part5/action_client.md +++ b/docs/com2009/assignment1/part5/action_client.md @@ -6,7 +6,7 @@ title: Part 5 Camera Sweep Action Client Copy **all** the code below into your `move_client.py` file. Then, review the code annotations to understand how it all works. -(Oh, and [DFTS](../../part1/subscriber/#dfts)!) +(Oh, and [DFTS](../part1/subscriber.md#dfts)!) ```py title="action_client.py" --8<-- "snippets/action_client.py" @@ -55,7 +55,7 @@ Copy **all** the code below into your `move_client.py` file. Then, review the c !!! warning "Fill in the Blank!" Which attribute of the `feedback_data` object tells us how many images have been captured over the course of the *Camera Sweep* Action? There are a number of ways we can work this out: - 1. You could use the same approach as we used [earlier](../#camera_sweep_msg_params). + 1. You could use [the same approach as we used earlier](../part5.md#camera_sweep_msg_params). 1. You could run `rosmsg info tuos_msgs/CameraSweepFeedback` in a terminal. 1. You could use the autocomplete/variable suggestions provided in VS Code! diff --git a/docs/com2009/assignment1/part5/preemptive_action_client.md b/docs/com2009/assignment1/part5/preemptive_action_client.md index a1594db0..9ae63eb5 100644 --- a/docs/com2009/assignment1/part5/preemptive_action_client.md +++ b/docs/com2009/assignment1/part5/preemptive_action_client.md @@ -45,7 +45,7 @@ Copy **all** the code below into your `preemptive_action_client.py` file and the 11. **Fill in the Blank!** !!! warning "Fill in the Blank!" - We have contained all our code inside a nice Python Class now, but how do we actually instantiate it and invoke the Action Call? (We've been doing this from [the very beginning](../../part1/publisher), and the process is very much the same here!) + We have contained all our code inside a nice Python Class now, but how do we actually instantiate it and invoke the Action Call? (We've been doing this from [the very beginning](../part1/publisher.md), and the process is very much the same here!)
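If you want to sanity-check your answer, the sketch below shows the same instantiate-and-run pattern with a *stand-in* class (`SomeNode` is invented here, so the actual blank is still yours to fill in):

```py
#!/usr/bin/env python3
# Illustrative only: the generic "instantiate the class, then run it" pattern
# that the rospy nodes in this course have used since the Part 1 publisher.

import rospy

class SomeNode():
    def __init__(self):
        rospy.init_node("some_node")

    def main(self):
        rospy.loginfo("This is where the node's work (e.g. an action call) would go.")

if __name__ == '__main__':
    node = SomeNode()   # create an instance of the class...
    node.main()         # ...then invoke its main() method
```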

← Back to Part 5 - Exercise 3 diff --git a/docs/com2009/assignment1/part6/object_detection.md b/docs/com2009/assignment1/part6/object_detection.md index 4a1763c4..33a938ab 100644 --- a/docs/com2009/assignment1/part6/object_detection.md +++ b/docs/com2009/assignment1/part6/object_detection.md @@ -6,7 +6,7 @@ title: "Part 6 Object Detection Node" Copy **all** the code below into your `object_detection.py` file, and **make sure you read the annotations**! -.. oh, and I'm sure I don't need to say it by now, but... [DFTS](../../part1/subscriber/#dfts)! +.. oh, and I'm sure I don't need to say it by now, but... [DFTS](../part1/subscriber.md#dfts)! ```py title="object_detection.py" --8<-- "snippets/object_detection.py" @@ -14,7 +14,7 @@ Copy **all** the code below into your `object_detection.py` file, and **make sur 1. Of course, we always need to import `rospy` so that Python can work with ROS. What we're also importing here is the Python `Path` class from [the `pathlib` module](https://docs.python.org/3/library/pathlib.html), which will be used to do a few file operations. -2. Then, we're importing the OpenCV library for Python (remember the Python API [that we talked about earlier](../part6/#opencv)), which is called `cv2`, and *also* that ROS-to-OpenCV bridge interface that we talked about earlier too: `cv_bridge`. +2. Then, we're importing the OpenCV library for Python (remember the Python API [that we talked about earlier](../part6.md#opencv)), which is called `cv2`, and *also* that ROS-to-OpenCV bridge interface that we talked about earlier too: `cv_bridge`. From `cv_bridge` we're importing the `CvBridge` and `CvBridgeError` classes from the `cv_bridge` library specifically. diff --git a/docs/com2009/assignment1/part6/object_detection_complete.md b/docs/com2009/assignment1/part6/object_detection_complete.md index 42fab5f2..f739bd1d 100644 --- a/docs/com2009/assignment1/part6/object_detection_complete.md +++ b/docs/com2009/assignment1/part6/object_detection_complete.md @@ -4,7 +4,7 @@ title: "Part 6 Object Detection Node (Complete)" # Part 6 Object Detection Node (Complete) -Here's a full example of the `object_detection.py` node that you should have developed during [Part 6 Exercise 2](../#ex2). Also included here is an illustration of how to use the `cv2.circle()` method to create a marker on an image illustrating the centroid of the detected feature, as discussed [here](../part6/#image-moments). +Here's a full example of the `object_detection.py` node that you should have developed during [Part 6 Exercise 2](../part6.md#ex2). Also included here is an illustration of how to use the `cv2.circle()` method to create a marker on an image illustrating the centroid of the detected feature, as discussed [here](../part6.md#image-moments). ```py title="object_detection_complete.py" --8<-- "snippets/object_detection_complete.py" diff --git a/docs/com2009/assignment2/README.md b/docs/com2009/assignment2/README.md index dd032daf..52fd8aa7 100644 --- a/docs/com2009/assignment2/README.md +++ b/docs/com2009/assignment2/README.md @@ -16,10 +16,10 @@ There are **four tasks** in total that you must complete for this Assignment. 
Ea

| Task | Details | Marks |
| :---: | :--- | :---: |
-| 1 | [Velocity Control](../task1) | 20/100 |
-| 2 | [Avoiding Obstacles](../task2) | 20/100 |
-| 3 | [Navigation](../task3) | 25/100 |
-| 4 | [Search & Explore](../task4) | 35/100 |
+| 1 | [Velocity Control](./task1.md) | 20/100 |
+| 2 | [Avoiding Obstacles](./task2.md) | 20/100 |
+| 3 | [Navigation](./task3.md) | 25/100 |
+| 4 | [Search & Explore](./task4.md) | 35/100 |

diff --git a/docs/others/amr31001/README.md b/docs/others/amr31001/README.md
index 9ac8b6ad..96c6aa1d 100644
--- a/docs/others/amr31001/README.md
+++ b/docs/others/amr31001/README.md
@@ -6,5 +6,5 @@ title: AMR31001 Industry 4.0

As part of this module you will take part in two lab sessions in the Diamond, where you will learn about how ROS can be used to program and control robots. You'll do some Python programming and look at how sensor data can be used to control a robot's actions.

-* [Lab 1: Mobile Robotics](./lab1)
-* [Lab 2: Feedback Control](./lab2)
+* [Lab 1: Mobile Robotics](./lab1.md)
+* [Lab 2: Feedback Control](./lab2.md)

diff --git a/docs/others/amr31001/lab1.md b/docs/others/amr31001/lab1.md
index cbe8a37d..2d8dc6df 100644
--- a/docs/others/amr31001/lab1.md
+++ b/docs/others/amr31001/lab1.md
@@ -13,7 +13,7 @@ ROS is an open-source, industry-standard robot programming framework, used in a

ROS allows us to programme robots using a range of different programming languages (including C++, Java, MATLAB etc.), but we'll be using Python for these labs. In addition to this, ROS runs on top of a Linux operating system called *'Ubuntu'*, and so we'll also learn a bit about how to use this too.

-We'll be working with robots called *'TurtleBot3 Waffles'*, which you can [find out a bit more about here](../../../about/robots).
+We'll be working with robots called *'TurtleBot3 Waffles'*, which you can [find out a bit more about here](../../about/robots.md).

!!! warning "Pre-Lab Work"
    You **must** have completed the Pre-Lab Test before you can make a start on this lab. This is available on the AMR31001 Blackboard Course Page.

diff --git a/docs/others/amr31001/lab2.md b/docs/others/amr31001/lab2.md
index 71634c01..dd549094 100644
--- a/docs/others/amr31001/lab2.md
+++ b/docs/others/amr31001/lab2.md
@@ -214,7 +214,7 @@ In the previous lab we used some ROS commands to identify and interrogate active

```

!!! info "Post-lab Quiz"
-    What does all this mean? We discussed this [last time (in relation to the `/cmd_vel` topic)](../lab1/#rostopic_info_explained), and you may want to have a look back at this to refresh your memory!
+    What does all this mean? We discussed this [last time (in relation to the `/cmd_vel` topic)](./lab1.md#rostopic_info_explained), and you may want to have a look back at this to refresh your memory!

One of the key things that this does tell us is that the `/odom` topic transmits data using a `nav_msgs/Odometry` message. All topics use standard message types to pass information around the ROS network. This is so that any node on the ROS network knows how to deal with the data, if it needs to. `nav_msgs/Odometry` is one of these standard message types.

@@ -295,7 +295,7 @@ You should have noticed that (as the robot moved around) the `x` and `y` terms c

#### :material-pen: Exercise 2: Odometry-based Navigation {#ex2}

-Now that we know about the odometry system and what it tells us, let's see how this could be used as a feedback signal to inform robot navigation. 
You may recall that [last time](../lab1/#ex6) you created a ROS Node to make your robot to follow a square motion path on the floor. This was time-based though: given the speed of motion (tuning or moving forwards) it was possible to determine the time it would take for the robot to move by a required distance. Having determined this, we then added timers to our node, to control the switch between moving forwards and turning on the spot, in order to generate the square motion path.
+Now that we know about the odometry system and what it tells us, let's see how this could be used as a feedback signal to inform robot navigation. You may recall that [last time](./lab1.md#ex6) you created a ROS Node to make your robot follow a square motion path on the floor. This was time-based though: given the speed of motion (turning or moving forwards), it was possible to determine the time it would take for the robot to move by a required distance. Having determined this, we then added timers to our node, to control the switch between moving forwards and turning on the spot, in order to generate the square motion path.

In theory though, we can do all this with odometry instead, so let's have a go at that now...

diff --git a/docs/waffles/exercises.md b/docs/waffles/exercises.md
index 707ea509..1a11de70 100644
--- a/docs/waffles/exercises.md
+++ b/docs/waffles/exercises.md
@@ -4,9 +4,9 @@ title: "ROS & Waffle Basics"

# ROS & Waffle Basics

-Having completed the steps on [the previous page](../launching-ros), your robot and laptop should now be paired, and ROS should be up and running. The next thing to do is bring the robot to life!
+Having completed the steps on [the previous page](./launching-ros.md), your robot and laptop should now be paired, and ROS should be up and running. The next thing to do is bring the robot to life!

-On this page you'll work through a series of exercises with the TurtleBot3 (aka, the Waffle) **in your teams**, exploring how the robot works whilst also getting an initial insight into how ROS works too. A number of the exercises here are similar to those that you'll do (or perhaps have *already* done) individually in simulation for [Assignment #1](../../com2009/assignment1/). As you'll soon see, whether you're working with a real robot or a simulation, a lot of the principles are the same for both.
+On this page you'll work through a series of exercises with the TurtleBot3 (aka, the Waffle) **in your teams**, exploring how the robot works whilst also getting an initial insight into how ROS works too. A number of the exercises here are similar to those that you'll do (or perhaps have *already* done) individually in simulation for [Assignment #1](../com2009/assignment1/README.md). As you'll soon see, whether you're working with a real robot or a simulation, a lot of the principles are the same for both.

### Quick Links

@@ -25,7 +25,7 @@ On this page you'll work through a series of exercises with the TurtleBot3 (aka,

Throughout Lab Assignment #1 you will use a ready-made ROS application called `turtlebot3_teleop_keyboard` to drive a Waffle around a range of simulated environments. This works in exactly the same way with a real robot in a real world too:

-1. Open up a new terminal instance on the laptop either by using the ++ctrl+alt+t++ keyboard shortcut, or by clicking the Terminal App icon, we'll refer to this as **TERMINAL 1**. 
In this terminal enter the following `rosrun` command to launch `turtlebot3_teleop_keyboard` (note that it's exactly [the same command as you use in simulation](../../com2009/assignment1/part1/#teleop) too): +1. Open up a new terminal instance on the laptop either by using the ++ctrl+alt+t++ keyboard shortcut, or by clicking the Terminal App icon, we'll refer to this as **TERMINAL 1**. In this terminal enter the following `rosrun` command to launch `turtlebot3_teleop_keyboard` (note that it's exactly [the same command as you use in simulation](../com2009/assignment1/part1.md#teleop) too): *** **TERMINAL 1:** @@ -55,7 +55,7 @@ ROS applications are organised into *packages*. Packages are basically folders c *Scripts* tell the robot what to do and how to act. In ROS, these scripts are called *nodes*. *ROS Nodes* are executable programs that perform specific robot tasks and operations. These are typically written in C++ or Python, but it's possible to write ROS Nodes using other programming languages too. -In the initial setup of the robot on the previous page ([Step 3](../launching-ros/#step-3-launching-ros)) you simultaneously established a ROS Network ("the ROS Master") *and* launched a range of different nodes on the robot with a `roslaunch` command. Then, in [Exercise 1 above](#exMove) you launched the `turtlebot3_teleop_key` node on the laptop: +In the initial setup of the robot on the previous page ([Step 3](./launching-ros.md#step-3-launching-ros)) you simultaneously established a ROS Network ("the ROS Master") *and* launched a range of different nodes on the robot with a `roslaunch` command. Then, in [Exercise 1 above](#exMove) you launched the `turtlebot3_teleop_key` node on the laptop:
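To make the idea of a node more concrete, here is a minimal, entirely hypothetical example of one; the node name, topic name and message content below are all invented for illustration:

```py
#!/usr/bin/env python3
# A bare-bones rospy node: it registers itself with the ROS Master,
# then publishes a String message once per second until shut down.

import rospy
from std_msgs.msg import String

rospy.init_node("hello_waffle")                          # register with the ROS Master
pub = rospy.Publisher("chatter", String, queue_size=10)
rate = rospy.Rate(1)                                     # loop at 1 Hz

while not rospy.is_shutdown():
    pub.publish(String(data="Hello from a ROS node!"))
    rate.sleep()
```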

@@ -93,7 +93,7 @@ The key difference between `roslaunch` and `rosrun` then is that with `roslaunch #### :material-pen: Exercise 2: Cloning Your Team's ROS Package to the Robot Laptop {#exClone} -In the Assignment #2 "Getting Started" tasks that you should have completed earlier you should have [created your team's Assignment #2 ROS package](../../com2009/assignment2/getting-started/#create-pkg) and [pushed it to GitHub](../../com2009/assignment2/getting-started/#github). In this exercise you will now clone it on to the Robotics Laptop and create your first Python ROS node within it. +In the Assignment #2 "Getting Started" tasks that you should have completed earlier you should have [created your team's Assignment #2 ROS package](../com2009/assignment2/getting-started.md#create-pkg) and [pushed it to GitHub](../com2009/assignment2/getting-started.md#github). In this exercise you will now clone it on to the Robotics Laptop and create your first Python ROS node within it. !!! warning "WiFi" Remember, the Robotics Laptop needs to be connected to the "DIA-LAB" WiFi network in order for the robot and laptop to communicate with one another, but DIA-LAB is an internal network, and you won't be able to access the internet! @@ -396,7 +396,6 @@ Much like the `rosnode list` command, we can use `rostopic list` to list all the The motion of any mobile robot can be defined in terms of its three *principal axes*: `X`, `Y` and `Z`. In the context of our TurtleBot3 Waffle, these axes (and the motion about them) are defined as follows:
- ![](../images/waffle/principal_axes.svg){width=600}
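The Euler angles discussed back in the `move_square` exercise are rotations about exactly these principal axes. As a purely illustrative sketch (assuming the standard `/odom` topic is available, as on the TurtleBot3), a few lines like these would print a robot's current orientation about each axis:

```py
#!/usr/bin/env python3
# Illustrative sketch: read one Odometry message and report the robot's
# orientation about the principal axes (roll = x, pitch = y, yaw = z).

import rospy
from nav_msgs.msg import Odometry
from tf.transformations import euler_from_quaternion

rospy.init_node("orientation_check")
odom = rospy.wait_for_message("/odom", Odometry)    # grab a single message
q = odom.pose.pose.orientation                      # orientation, as a quaternion
roll, pitch, yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])
print(f"roll (x): {roll:.3f}, pitch (y): {pitch:.3f}, yaw (z): {yaw:.3f} [radians]")
```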
@@ -416,7 +415,6 @@ geometry_msgs/Vector3 angular Our TurtleBot3 robot only has two motors, so it doesn't actually have six DOFs! The two motors can be controlled independently, which gives it what is called a *"differential drive"* configuration, but this still only allows it to move with **two degrees of freedom** in total, as illustrated below.
- ![](../images/waffle/velocities.svg){width=600}
@@ -426,7 +424,7 @@ It can therefore only move **linearly** in the **x-axis** (*Forwards/Backwards*)

Making a robot move with ROS is simply a case of publishing the right ROS Message (`Twist`) to the right ROS Topic (`/cmd_vel`). In some of the previous exercises above you used the Keyboard Teleop node to drive the robot around, a bit like a remote control car. In the background here all that was really happening was that the Teleop node was converting our keyboard button presses into velocity commands and publishing these to the `/cmd_vel` topic.

-In reality, robots need to be able to navigate complex environments autonomously, which is quite a difficult task, and requires us to build bespoke applications. We can build these applications using Python, and we'll look at the core concepts behind this now by building a simple node that will allow us to make our robot a bit more "autonomous". What we will do here forms the basis of the more complex applications that you will learn about in [Assignment #1](../../com2009/assignment1/) and implement in [Assignment #2](../../com2009/assignment2/) to bring a real robot to life!
+In reality, robots need to be able to navigate complex environments autonomously, which is quite a difficult task, and requires us to build bespoke applications. We can build these applications using Python, and we'll look at the core concepts behind this now by building a simple node that will allow us to make our robot a bit more "autonomous". What we will do here forms the basis of the more complex applications that you will learn about in [Assignment #1](../com2009/assignment1/README.md) and implement in [Assignment #2](../com2009/assignment2/README.md) to bring a real robot to life!

1. You will create your first ROS node inside your team's `com2009_team999` ROS package, which you should have cloned to the laptop earlier on. This package should now correctly reside within the Catkin Workspace on the laptop's filesystem. Navigate to this from **TERMINAL 1** using the `roscd` command:

@@ -535,7 +533,7 @@ In reality, robots need to be able to navigate complex environments autonomously

Simultaneous Localisation and Mapping (SLAM) is a sophisticated tool that is built into ROS. Using data from the robot's LiDAR sensor, plus knowledge of how far the robot has moved[^odom], the robot is able to create a map of its environment *and* keep track of its location within that environment at the same time. In the exercise that follows you'll see how easy it is to implement SLAM on the real robot.

-[^odom]: You'll learn much more about "Robot Odometry" in [Assignment #1 Part 2](../../com2009/assignment1/part2), and in the COM2009 Lectures.
+[^odom]: You'll learn much more about "Robot Odometry" in [Assignment #1 Part 2](../com2009/assignment1/part2.md), and in the COM2009 Lectures.
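Before moving on, the "right message to the right topic" idea above is worth pinning down with a sketch. The node below is illustrative rather than prescriptive: the velocity values are placeholders, and anything you run on a real Waffle should stay well within the robot's velocity limits:

```py
#!/usr/bin/env python3
# Illustrative sketch: the simplest possible "mover" node, publishing a
# fixed Twist message to /cmd_vel. Velocity values are placeholders only.

import rospy
from geometry_msgs.msg import Twist

rospy.init_node("simple_mover")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
rate = rospy.Rate(10)       # publish at 10 Hz

vel = Twist()
vel.linear.x = 0.1          # m/s: the robot's one usable linear axis (x)
vel.angular.z = 0.2         # rad/s: its one usable angular axis (z, yaw)

while not rospy.is_shutdown():
    pub.publish(vel)
    rate.sleep()
```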
#### :material-pen: Exercise 7: Using SLAM to create a map of the environment {#exSlam} diff --git a/docs/waffles/fact-finding.md b/docs/waffles/fact-finding.md index fc22e3e0..7638dc26 100644 --- a/docs/waffles/fact-finding.md +++ b/docs/waffles/fact-finding.md @@ -10,7 +10,7 @@ Each mission is linked to a particular part of the Assignment #1 course and, ide ### Mission 1: Publishing Velocity Commands -:material-link-variant: **Connection to Assignment #1**: [Part 2 Exercise 3](../../com2009/assignment1/part2/#ex3) +:material-link-variant: **Connection to Assignment #1**: [Part 2 Exercise 3](../com2009/assignment1/part2.md#ex3) In the above Assignment #1 exercise you learnt how to publish velocity commands from the command-line to make the robot move in simulation. Repeat this in simulation if you need a reminder on how it all worked. @@ -27,7 +27,7 @@ angular: z: 0.0" ``` -(replacing some `0.0`s above with [applicable values](../../about/robots/#max_vels)) +(replacing some `0.0`s above with [applicable values](../about/robots.md#max_vels)) Next, look at the usage information for the `rostopic pub` command by entering: @@ -44,7 +44,7 @@ When you stop the `rostopic pub` command what happens to the robot, and how does ### Mission 2: The Camera Image Topic {#mission2} -:material-link-variant: **Connection to Assignment #1**: The whole of [Part 6](../../com2009/assignment1/part6/) +:material-link-variant: **Connection to Assignment #1**: The whole of [Part 6](../com2009/assignment1/part6.md) In Part 6 you worked extensively with the robot's camera and its images, which were published to the `/camera/rgb/image_raw` topic. @@ -55,7 +55,7 @@ On the laptop, use ROS command-line tools such as `rostopic list` and `rostopic ### Mission 3: Camera Image Resolution -:material-link-variant: **Connection to Assignment #1**: The questions before [Part 6 Exercise 1](../../com2009/assignment1/part6/#cam_img_questions) +:material-link-variant: **Connection to Assignment #1**: The questions before [Part 6 Exercise 1](../com2009/assignment1/part6.md#cam_img_questions) At the start of Part 6 we explore the messages published to the robot's camera image topic. Here you need to work out which part of these messages indicate the *resolution* of the camera images (i.e.: the `height` and `width` of the images, in pixels). You may recall what this was, but if not, go back and interrogate this again to find out what resolution the *simulated* robot's camera images are transmitted at (you'll need to use `rostopic echo`). @@ -69,7 +69,7 @@ On the Robotics Laptop use `rostopic echo` again, to interrogate the real robot ### Mission 4: Out of Range LiDAR Data -:material-link-variant: **Connection to Assignment #1**: ["Interpreting `/LaserScan` Data" (Part 3)](../../com2009/assignment1/part3/#interpreting-laserscan-data) +:material-link-variant: **Connection to Assignment #1**: ["Interpreting `/LaserScan` Data" (Part 3)](../com2009/assignment1/part3.md#interpreting-laserscan-data) The robot's LiDAR sensor can only obtain measurements from objects within a certain distance range. In Part 3 we look at how to work out what this range is, using the `rostopic echo` command. Apply the same techniques to the real robot now to discover the **maximum** and **minimum** distances that the real robot's LiDAR sensor can measure. 
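If you'd rather do this from Python than with `rostopic echo`, a sketch like the one below would print the relevant limits from a single `LaserScan` message (the `/scan` topic name is assumed here; confirm it with `rostopic list`). It won't answer the mission for you, since the values still have to come from your own robot:

```py
#!/usr/bin/env python3
# Illustrative sketch: grab one LaserScan message and print its range limits.

import rospy
from sensor_msgs.msg import LaserScan

rospy.init_node("lidar_range_check")
scan = rospy.wait_for_message("/scan", LaserScan)   # wait for a single scan
print(f"range_min: {scan.range_min} m | range_max: {scan.range_max} m")
```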
@@ -82,7 +82,7 @@ Use the `rostopic echo` command to interrogate the ROS network running between y ### Mission 5: Object Detection -:material-link-variant: **Connection to Assignment #1**: [Part 6 Exercise 3](../../com2009/assignment1/part6/#ex3) +:material-link-variant: **Connection to Assignment #1**: [Part 6 Exercise 3](../com2009/assignment1/part6.md#ex3) In general, image detection gets a little more challenging in the real-world, where the same object might appear (to a robot's camera) to have slightly different colour tones under different light conditions, from different angles, in different levels of shade, etc. In simulation, you may build an extremely effective `colour_search.py` node to detect each of the four coloured pillars in the `tuos_simulations/coloured_pillars` world. See how well this now works in the real world now by running the same code on your real Waffle. diff --git a/docs/waffles/launching-ros.md b/docs/waffles/launching-ros.md index 36d92fe7..ab9ec8c9 100644 --- a/docs/waffles/launching-ros.md +++ b/docs/waffles/launching-ros.md @@ -9,7 +9,7 @@ The first step is to launch ROS on the Waffle. !!! info "Important" Launching ROS on the Waffle enables the *ROS Master*[^rosmaster]. The ROS Master **always** runs on the Waffle. It's therefore important to complete the steps on this page **in full** before you do anything else, otherwise the ROS Master will not be running, the robot's core functionality won't be active, and you won't be able to do anything with it! -[^rosmaster]: What *is* the ROS Master!? We'll talk about that on the next page ([here](../exercises/#ros_master)) +[^rosmaster]: What *is* the ROS Master!? We'll talk about that on the next page ([here](./exercises.md#ros_master)) ## Step 1: Identify your Waffle @@ -21,7 +21,7 @@ Robots are named as follows: ## Step 2: Pairing your Waffle to a Laptop -[As discussed earlier](../intro/#laptops), you'll be provided with one of our Robotics Laptops to work with in the lab, and the robot needs to be paired with this in order for the two to work together. +[As discussed earlier](./intro.md#laptops), you'll be provided with one of our Robotics Laptops to work with in the lab, and the robot needs to be paired with this in order for the two to work together. 1. Open up a terminal instance on the laptop, either by using the ++ctrl+alt+t++ keyboard shortcut, or by clicking the Terminal App icon in the favourites bar on the left-hand side of the desktop: @@ -111,4 +111,4 @@ Voltage: 12.40V [100%] ## At the End of Each Lab Session -When you've finished working with a robot it's really important to **shut it down properly** before turning off the power switch. Please refer to the [safe shutdown procedures](../shutdown) for more info. \ No newline at end of file +When you've finished working with a robot it's really important to **shut it down properly** before turning off the power switch. Please refer to the [safe shutdown procedures](./shutdown.md) for more info. \ No newline at end of file diff --git a/docs/waffles/shutdown.md b/docs/waffles/shutdown.md index ff76c174..05bf905c 100644 --- a/docs/waffles/shutdown.md +++ b/docs/waffles/shutdown.md @@ -24,7 +24,7 @@ rm -rf ~/catkin_ws/src/com2009_team999 ## Shutdown Procedures -As you should know, the Waffles are powered by [a Single Board Computer (SBC)](../../about/robots/#tb3), which runs a full-blown operating system (Ubuntu 20.04). 
As with any operating system, it's important to **shut it down properly**, rather than simply disconnecting the power, to avoid any data loss or other issues. +As you should know, the Waffles are powered by [a Single Board Computer (SBC)](../about/robots.md#tb3), which runs a full-blown operating system (Ubuntu 20.04). As with any operating system, it's important to **shut it down properly**, rather than simply disconnecting the power, to avoid any data loss or other issues. Therefore, once you've finished working with a robot during a lab session, follow the steps below to shut it down. diff --git a/docs/waffles/tips.md b/docs/waffles/tips.md index d3c800e1..83677863 100644 --- a/docs/waffles/tips.md +++ b/docs/waffles/tips.md @@ -38,4 +38,4 @@ Switching into 'Simulation Mode' (run 'robot_mode robot' to work with a real rob ``` !!! note - When you're ready to switch back to a real robot, the `waffle` CLI tool will switch you back into *"Real Robot Mode"* automatically! Just follow [the steps here](../launching-ros). \ No newline at end of file + When you're ready to switch back to a real robot, the `waffle` CLI tool will switch you back into *"Real Robot Mode"* automatically! Just follow [the steps here](./launching-ros.md). \ No newline at end of file