diff --git a/20150422/48156511_README.md b/20150422/48156511_README.md
new file mode 100644
index 0000000..31d4c19
--- /dev/null
+++ b/20150422/48156511_README.md
@@ -0,0 +1,9 @@
+# homework20150422_48156511
+
+ - Open a new terminal and execute the command below:
+ ```bash
+roslaunch homework20150422 48156511_face_detect.launch
+```
+
+ - You can see 2 image_view windows (one shows the raw image, and the other shows the edges detected in it).
+ - You can tune the edge image by changing `thresh1` and `thresh2` in the Python script.
diff --git a/20150422/README.md b/20150422/README.md
index 30f3a37..31d4c19 100644
--- a/20150422/README.md
+++ b/20150422/README.md
@@ -1,138 +1,9 @@
-# homework20150422
+# homework20150422_48156511
-1. execute `talker/listener` sample
-
- **Goal:** Comprehend nodes, topics, publishers/subscribers, and the command-line tools for ROS
-
- - Open a new terminal and execute the command below:
- ```bash
-# Terminal 1
-roscore
-```
- - Open a new terminal and execute the command below:
- ```bash
- # Terminal 2
-rosrun homework20150422 talker.l
-```
- - Open a new terminal and execute the command below:
- ```bash
- # Terminal 3
-rosrun homework20150422 listener.l
-```
-
- - Open a new terminal and execute the commands below:
- ```bash
- # Terminal 4
-rostopic list # list all advertised topics
-rostopic info /chatter # show detailed information about the /chatter topic
-rostopic echo /chatter # print the messages published on the /chatter topic
-```
-
-2. execute `turtlesim` sample
-
- **Goal:** Comprehend the ROS distributed system with multiple languages, and its launching system called `roslaunch`
-
- - Open a new terminal and execute the commands below:
- ```bash
- # Terminal 1
-roslaunch homework20150422 turtlesim.launch
+ - Open a new terminal and execute the command below:
+ ```bash
+roslaunch homework20150422 48156511_face_detect.launch
```
- - Focus the terminal and press the `up`, `down`, `right`, `left` keys.
-
- - See what nodes are running with `rosnode list`:
- ```bash
- # Terminal 2
-rosnode list # you can see what nodes are on ROS
-```
-
- - Now let's take a look at `turtlesim.launch`
- ```xml
-<launch>
-  <node pkg="turtlesim" type="turtlesim_node" name="turtlesim" />
-  <node pkg="homework20150422" type="turtlesim-teleop.l" name="teleop" output="screen">
-    <remap from="cmd_vel" to="/turtle1/cmd_vel" />
-  </node>
-</launch>
-```
-
- - You can see 2 `<node>` tags in the launch file. This means these 2 nodes are launched:
- ```xml
-<node pkg="turtlesim" type="turtlesim_node" name="turtlesim" />
-```
- - `name` attribute: name of the node
-
- **NOTE** (each node already has a default name in its own code, but once you set a `name` different from that default, it will be overwritten (we call it `remapped`))
- - `pkg` attribute: package of the node
- - `type` attribute: program name of the node (we can run each node independently with `rosrun <pkg> <type>`)
-
- - Now let's look at next node example:
- ```xml
-<node pkg="homework20150422" type="turtlesim-teleop.l" name="teleop" args="..." output="screen">
-  <remap from="cmd_vel" to="/turtle1/cmd_vel" />
-</node>
-```
- - `args` attribute (optional): arguments passed when launching node
- - `output` attribute (optional): when this is set to `screen`, the node's log output is printed to the terminal. (This is important when launching nodes on another machine and you need the log output transmitted to your terminal.)
- - ``: remapping topic name
- Detail: `turtlesim-teleop.l` has a publisher on `cmd_vel`, so if you launch this node with `rosrun`, it publishes the `/cmd_vel` topic. But when `cmd_vel` is remapped to `/turtle1/cmd_vel` as described above, the node publishes `/turtle1/cmd_vel` instead.
- You can change the topics a node subscribes to or publishes without changing any code.
-
-
-3. execute `face_detector` sample:
-
- **Goal:** Comprehend nested `launch` files and topic remapping
-
- - Open a new terminal and execute the command below:
- ```bash
-roslaunch homework20150422 camera.launch
-```
-
- - You can see the image captured from the camera of your PC
-
- - Press `Ctrl-C` to stop this launch
-
- - Open a new terminal and execute the command below:
- ```bash
-roslaunch homework20150422 face_detect.launch
-```
-
- - You can see 2 image_view windows (one shows the raw image, and the other shows the image with a rectangle drawn over the detected face using OpenCV)
-
- ```xml
-<arg name="show_debug_image" default="true" />
- ...
-<node pkg="image_view" type="image_view" name="debug_image_view" if="$(arg show_debug_image)" />
- ...
-```
-
- - You can use arguments in a launch file
- - You can also set arguments when launching the launch file
- e.g.: `roslaunch homework20150422 face_detect.launch show_debug_image:=false`
-
- ```xml
-<include file="$(find homework20150422)/launch/camera.launch" />
-```
- - You can include other `launch` files.
-
-4. Make your own image processing node
-
- - the previous `face_detector` example has a node doing 3 things:
-  - Subscribe to `sensor_msgs/Image` messages
-  - Process the subscribed image (detect faces, and draw a rectangle on each image)
-  - Publish the processed image
- - Let's make your own node to process images, and visualize the processed image
- - Please send a pull request containing:
-  - your image-processing source code, which should:
-   - Subscribe to `sensor_msgs/Image` messages
-   - Process the subscribed image (any processing is OK)
-   - Publish the processed image
-  - a `launch` file that shows the following:
-   - the original camera image
-   - the processed image
-  - a `README` which describes your node
+ - You can see 2 image_view windows (one shows the raw image, and the other shows the edges detected in it).
+ - You can tune the edge image by changing `thresh1` and `thresh2` in the Python script.
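The publish/subscribe and remapping mechanics that this README relies on can be sketched as a toy model in plain Python. This is an in-process sketch for intuition only; `ToyMaster` and `resolve` are hypothetical helpers, not part of the rospy API (in real ROS, roscore matches publishers and subscribers across processes):

```python
# Toy in-process model of ROS publish/subscribe plus roslaunch remapping.
# ToyMaster and resolve() are illustrative stand-ins, NOT the rospy API.

class ToyMaster:
    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        # deliver the message to every subscriber of this topic
        for callback in self.subscribers.get(topic, []):
            callback(msg)


def resolve(topic, remaps):
    # a <remap from="..." to="..."/> tag boils down to a name lookup
    return remaps.get(topic, topic)


master = ToyMaster()
received = []

# the "listener" subscribes to /turtle1/cmd_vel; the "talker" still
# publishes on its hard-coded name cmd_vel, which remapping redirects
master.subscribe("/turtle1/cmd_vel", received.append)
remaps = {"cmd_vel": "/turtle1/cmd_vel"}
master.publish(resolve("cmd_vel", remaps), "forward")
print(received)  # -> ['forward']
```

Note how the talker never mentions `/turtle1/cmd_vel`: the remapping table redirects its topic name from the outside, which is exactly why `<remap>` lets you rewire nodes without changing their code.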
diff --git a/20150422/image/edge_detect.png b/20150422/image/edge_detect.png
new file mode 100644
index 0000000..c12c6d5
Binary files /dev/null and b/20150422/image/edge_detect.png differ
diff --git a/20150422/launch/48156511_face_detect.launch b/20150422/launch/48156511_face_detect.launch
new file mode 100644
index 0000000..c57fc39
--- /dev/null
+++ b/20150422/launch/48156511_face_detect.launch
@@ -0,0 +1,16 @@
+<launch>
+  <!-- topic names below are assumed; adjust them to match camera.launch -->
+  <include file="$(find homework20150422)/launch/camera.launch" />
+
+  <node pkg="homework20150422" type="48156511_face_detect_mono.py"
+        name="face_detect_mono" output="screen">
+    <remap from="image" to="/image_raw" />
+  </node>
+
+  <node pkg="image_view" type="image_view" name="raw_image_view">
+    <remap from="image" to="/image_raw" />
+  </node>
+
+  <node pkg="image_view" type="image_view" name="edge_image_view">
+    <remap from="image" to="/debug_image" />
+  </node>
+</launch>
diff --git a/20150422/scripts/48156511_face_detect_mono.py b/20150422/scripts/48156511_face_detect_mono.py
new file mode 100755
index 0000000..540c3b7
--- /dev/null
+++ b/20150422/scripts/48156511_face_detect_mono.py
@@ -0,0 +1,115 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# Author: Yuki Furuta
+
+# 1. import rospy to enable ROS feature of python
+import rospy
+
+# 2. import message files for python
+# cf. you can find what a message contains by executing `rosmsg show <message_type>`
+# e.g. rosmsg show sensor_msgs/Image
+from sensor_msgs.msg import Image # This imports sensor_msgs/Image
+from geometry_msgs.msg import Twist # This imports geometry_msgs/Twist
+
+# Tips: This is special utility for converting between ROS Image message and OpenCV Image format.
+from cv_bridge import CvBridge
+
+import cv2 # this imports opencv python interface
+
+# Since Indigo, OpenCV is not released from the ROS infrastructure, so we use the system cascade file.
+cascade_path = "/usr/share/opencv/haarcascades/haarcascade_frontalface_alt2.xml"
+
+class FaceDetectorMonoNode(object):
+ """
+This class has feature to subscribe "image" and detect people face,
+then publishes those positions as geometry_msgs/Twist message.
+"""
+
+ def __init__(self):
+ # make instance of object for image processing
+ try:
+ self.bridge = CvBridge()
+ self.cascade = cv2.CascadeClassifier(cascade_path)
+ except Exception as e:
+ rospy.logerr("Error: %s" % e)
+
+ # 4. subscribe "image" topic whose message type is sensor_msgs/Image
+ self.image_subscriber = rospy.Subscriber("image", Image, self.image_callback)
+
+ # declare publishers
+        self.face_pose_publisher = rospy.Publisher("twist", Twist, queue_size=1)
+        self.debug_image_publisher = rospy.Publisher("debug_image", Image, queue_size=1)
+
+
+
+ # 5. define callback function for image topic
+ def image_callback(self, msg):
+ img_size = (msg.width, msg.height)
+
+ # 6. convert ROS sensor_msgs/Image to OpenCV format
+        img_mat = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
+
+        # 7. convert image to greyscale
+        img_grey = cv2.cvtColor(img_mat, cv2.COLOR_BGR2GRAY)
+
+        # 8. detect edges (Canny)
+        # you can tune the edge image by changing thresh1/thresh2 (0-255)
+        thresh1 = 50
+        thresh2 = 100
+        edge = cv2.Canny(img_grey, thresh1, thresh2)
+
+ # 9. publish debug image
+ pub_debug_img_msg = self.bridge.cv2_to_imgmsg(edge, encoding="mono8")
+ self.debug_image_publisher.publish(pub_debug_img_msg)
+
+
+ # # 7. detect face in an image
+ # face_rects = self.cascade.detectMultiScale(img_grey,
+ # scaleFactor=1.1,
+ # minNeighbors=1,
+ # minSize=(1,1))
+
+ # rospy.loginfo("%s" % msg)
+
+ # if len(face_rects) > 0:
+ # rect = face_rects[0] # use first rect
+ # face_rect_origin = (rect[0], rect[1]) # (x,y)
+    #         face_rect_size = (rect[2], rect[3]) # (width, height); detectMultiScale returns (x, y, w, h)
+ # face_rect_center = (face_rect_origin[0] + face_rect_size[0] * 0.5,
+ # face_rect_origin[1] + face_rect_size[1] * 0.5)
+
+ # # 8. logging face position
+ # rospy.loginfo("face detected at (x, y) = (%d, %d)" % face_rect_center)
+
+ # # 9. compute center of face relative to center of camera image
+ # # Note that face_relative_center has value (-img_size/2 ~ +img_size/2)
+ # img_center = (img_size[0] * 0.5, img_size[1] * 0.5)
+ # face_relative_center = (face_rect_center[0] - img_center[0],
+ # face_rect_center[1] - img_center[1])
+
+ # # 10. make Twist message and set values
+ # pub_msg = Twist()
+ # pub_msg.linear.x = face_relative_center[1] / img_center[1]
+ # pub_msg.angular.z = face_relative_center[0] / img_center[0]
+
+ # # 12. publish debug image
+ # color = (0, 0, 255)
+ # cv2.circle(img_mat, face_rect_origin,
+ # 100,
+ # color, thickness=2)
+ # pub_debug_img_msg = self.bridge.cv2_to_imgmsg(img_mat, encoding="bgr8")
+ # self.debug_image_publisher.publish(pub_debug_img_msg)
+
+ # # 11. publish message
+ # self.face_pose_publisher.publish(pub_msg)
+ # else:
+ # rospy.loginfo("no face detected.")
+
+if __name__ == '__main__':
+    # 3. First, we must set the node name and register with the master.
+    # In this case, the node name is face_detector_mono
+ rospy.init_node("face_detector_mono")
+ face_detector = FaceDetectorMonoNode()
+
+    # wait for incoming messages
+ rospy.spin()
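The `thresh1`/`thresh2` parameters above feed Canny's double-threshold (hysteresis) step. As a rough intuition for how they interact, here is a pure-Python sketch on a 1-D gradient-magnitude profile; `double_threshold` is an illustrative helper, not an OpenCV function, and the real `cv2.Canny` works in 2-D after smoothing, gradient computation, and non-maximum suppression:

```python
def double_threshold(grad, low, high):
    """Classify 1-D gradient magnitudes like Canny's hysteresis step.

    Values >= high are strong edges; values in [low, high) are weak edges
    kept only if connected to a strong edge; everything else is dropped.
    """
    out = [g >= high for g in grad]            # strong edges
    weak = [low <= g < high for g in grad]     # candidate (weak) edges
    changed = True
    while changed:                             # grow strong edges into weak ones
        changed = False
        for i in range(len(grad)):
            if weak[i] and not out[i]:
                left = i > 0 and out[i - 1]
                right = i + 1 < len(grad) and out[i + 1]
                if left or right:
                    out[i] = True
                    changed = True
    return [255 if o else 0 for o in out]

# Raising `low` removes isolated weak responses; raising `high`
# demands stronger evidence before an edge chain can start at all.
profile = [0, 60, 120, 60, 0, 60, 0]
print(double_threshold(profile, 50, 100))   # -> [0, 255, 255, 255, 0, 0, 0]
print(double_threshold(profile, 70, 100))   # -> [0, 0, 255, 0, 0, 0, 0]
```

In the node above this means: lowering `thresh1` keeps more faint edges that touch strong ones, while lowering `thresh2` lets weaker gradients start edge chains on their own.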
diff --git a/20150624/gazebo-48156511.png b/20150624/gazebo-48156511.png
new file mode 100644
index 0000000..09d0ead
Binary files /dev/null and b/20150624/gazebo-48156511.png differ
diff --git a/20150624/rviz-48156511.png b/20150624/rviz-48156511.png
new file mode 100644
index 0000000..b5c1f25
Binary files /dev/null and b/20150624/rviz-48156511.png differ
diff --git a/20150708homework/48156511reaching.png b/20150708homework/48156511reaching.png
new file mode 100644
index 0000000..02b8c60
Binary files /dev/null and b/20150708homework/48156511reaching.png differ