IntelRealsenseZR300 - TF Coordinate Transformation (Series Part 5)

  Continuing from the previous posts: after nearly a month of trial and error, the grasping task is finally complete. That work covered both the tf coordinate-frame transformations and the final position tuning.


This post explains how to use the transform between camera_link and base_link to convert first a single point, and then a stream of points, between frames. Building on the earlier hand-eye calibration, the camera and the robot arm are now connected in one tf tree:

#rosrun rqt_tf_tree rqt_tf_tree

Note that the subtrees camera_rgb_optical_frame and camera_rgb_frame under camera_link are two different coordinate frames. Seen in rviz their positions roughly coincide, but their orientations differ, and the coke-can pose detected by the camera is expressed in camera_rgb_optical_frame. So when transforming a point with tf, be very clear about what each function argument means and which frame each name refers to; otherwise you will hit many errors later during position calibration.
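To see this difference for yourself, tf's tf_echo tool prints the transform between any two frames; the two RGB frames should share a position but differ by a fixed rotation:

#rosrun tf tf_echo camera_rgb_frame camera_rgb_optical_frame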


  Now for a summary of the tf transformation workflow. The first reference is, of course, the official tutorials. Admittedly you can learn from or copy what you need out of other people's CSDN blogs, but the official documentation is more thorough by comparison: if some statement or function is unclear, you can quickly understand and adapt it. Below are the core subscriber and publisher tutorials (C++ and Python versions):

http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29 

http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28python%29

And a few CSDN posts that still serve as supplementary references:

https://blog.csdn.net/Will_Ye/article/details/79485964

https://blog.csdn.net/start_from_scratch/article/details/50762293/

https://blog.csdn.net/Will_Ye/article/details/79996311

Looking at the development flow: first inspect the object pose information we want (i.e., learn how to dig the data out of the message), then transform a single point from camera_link to base_link, and finally transform the object's live pose from camera_rgb_optical_frame into base_link.

A quick aside on how to look up a function's documentation from Python (for example inside ipython):

import tf
help(tf.TransformBroadcaster.sendTransform)

This prints the function's parameters and related information.

                   -----------------------------------------------------1---------------------------------------------------------

1. Inspecting the object pose information:

As mentioned before, the RecognizedObjectArray message holds the detected objects' pose information, including position, orientation, header, and so on. The linemod algorithm fills this array once it has located an object (and publishes it as a topic, which is why rviz can display the detected object's pose).

Below is listener.py, which reads the array where the recognized object information is stored (a simple node that checks for a detection and prints the object's pose):

#!/usr/bin/env python
#coding=utf-8
import rospy

# from std_msgs.msg import String  
from object_recognition_msgs.msg import RecognizedObjectArray

def callback(data):
    if len(data.objects)>0:
        print "coke detected~~~~~~~~~~~~~~~~~~~~"
        rospy.loginfo(rospy.get_caller_id() + 'heard %s', data.objects[0].pose.pose.pose)
        # rospy.loginfo(rospy.get_caller_id() + 'heard %s', data.objects[0].pose)
    else:
        print "nothing detected................."

def listener():
    # In ROS, nodes are uniquely named. If two nodes with the same
    # name are launched, the previous one is kicked off. The
    # anonymous=True flag means that rospy will choose a unique
    # name for our 'listener' node so that multiple listeners can
    # run simultaneously.

    rospy.init_node('listener', anonymous=True)
    rospy.Subscriber('recognized_object_array', RecognizedObjectArray, callback)
    # spin() simply keeps python from exiting until this node is stopped
    rospy.spin()
 
if __name__ == '__main__':
    listener()

 ---------------------------------------About import-----------------------------------------

from A import B is equivalent to import A followed by calling A.B(args), except that with from ... import you can call B without the module prefix. The catch is that the bare name B then resolves to whichever module bound it last, so two same-named functions from different modules with different behavior will clash. Prefer import A and call A.B(args) explicitly.
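A concrete illustration of that shadowing with a standard-library module:

from math import pow   # the name pow now means math.pow in this module
print(pow(2, 3))       # 8.0 (a float) -- the builtin pow has been shadowed

import math            # preferred: keep the module prefix
print(math.pow(2, 3))  # explicit, no risk of name collisions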

------------------------------------------------------------------------------------------------

This program only prints the object's pose plus a simple check; let's look at why it is written this way (how to dig out the data). The core is rospy.Subscriber('recognized_object_array', RecognizedObjectArray, callback), which subscribes to the recognized_object_array topic and invokes callback whenever a message arrives. The callback first checks whether the array is empty (whether anything was detected), then prints with rospy.loginfo(rospy.get_caller_id() + 'heard %s', data.objects[0].pose.pose.pose); the caller id is not important here. data is simply the formal parameter holding the whole RecognizedObjectArray message. Use the following command to inspect the data:

#rosmsg show RecognizedObjectArray

It shows the type of the published message and the fields it contains.
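Its output, abridged here to the fields used below (the original screenshot of this was lost), looks roughly like:

std_msgs/Header header
object_recognition_msgs/RecognizedObject[] objects
  std_msgs/Header header
  object_recognition_msgs/ObjectType type
    string key
    string db
  float32 confidence
  sensor_msgs/PointCloud2[] point_clouds
  shape_msgs/Mesh bounding_mesh
  geometry_msgs/Point[] bounding_contours
  geometry_msgs/PoseWithCovarianceStamped pose
    ...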

So data corresponds to the whole RecognizedObjectArray, and data.objects is its array field: data.objects[0] holds the information for the first object currently detected. Digging further, each element of data.objects contains several members, such as header, type, confidence, pose, and so on, each with its own message type. Run the listener to watch it live:

#rosrun ur10_rg2_moveit_config listener.py    (prints the detected object's pose)

The listener above digs down to data.objects[0].pose. As the nesting shows, pose is itself a stamped message with covariance, so data.objects[0].pose.pose.pose is what points at just the position and orientation data. With that, we can extract exactly the fields we want and get an intuitive feel for digging through messages.
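Spelled out, the nesting that makes three .pose levels necessary:

data.objects[0]                 # object_recognition_msgs/RecognizedObject
data.objects[0].pose            # geometry_msgs/PoseWithCovarianceStamped
data.objects[0].pose.pose       # geometry_msgs/PoseWithCovariance
data.objects[0].pose.pose.pose  # geometry_msgs/Pose: position + orientation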

                   -----------------------------------------------------2---------------------------------------------------------

2. Transforming a single point from camera_link to base_link:

First, recall that after calibrating the two frames earlier, their relationship is already being published: the tf_broadcaster node broadcasts the transform between the two frames, packed into a tf::Transform.

#rosrun robot_setup_tf tf_broadcaster
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv){
  ros::init(argc, argv, "robot_tf_publisher");
  ros::NodeHandle n;

  ros::Rate r(100);

  tf::TransformBroadcaster broadcaster;

  while(n.ok()){
    broadcaster.sendTransform(
      tf::StampedTransform(
        tf::Transform(tf::Quaternion(0.0144077350003, -0.0295559697343, -0.0183787903043, 0.999290289101), tf::Vector3(0.244332058703, -0.0155924006203, -0.0434545849503)),
        ros::Time::now(),"base_link", "camera_link"));
    r.sleep();
  }
}
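For reference, here is a rough Python equivalent of this broadcaster, an untested sketch using the same calibration values (this is also the sendTransform whose signature the help() tip above can inspect):

#!/usr/bin/env python
# Sketch: Python version of the C++ static broadcaster above.
import rospy
import tf

if __name__ == '__main__':
    rospy.init_node('robot_tf_publisher_py')
    broadcaster = tf.TransformBroadcaster()
    rate = rospy.Rate(100)
    while not rospy.is_shutdown():
        # sendTransform(translation, rotation (x, y, z, w), time, child, parent)
        broadcaster.sendTransform(
            (0.244332058703, -0.0155924006203, -0.0434545849503),
            (0.0144077350003, -0.0295559697343, -0.0183787903043, 0.999290289101),
            rospy.Time.now(),
            'camera_link',   # child frame
            'base_link')     # parent frame
        rate.sleep()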

I wrote tf_listener_OnePosetransform.cpp to transform and publish a single point on its own; this will also be useful later for position calibration:

#rosrun robot_setup_tf tf_listener_OnePosetransform
#include <ros/ros.h>
// #include <geometry_msgs/PointStamped.h>
#include "geometry_msgs/PoseStamped.h"
#include "geometry_msgs/Pose.h"
#include <tf/transform_listener.h>
#include <tf/transform_broadcaster.h>
#include <visualization_msgs/MarkerArray.h> 

geometry_msgs::PoseStamped real_pose;
geometry_msgs::PoseStamped transed_pose;

void transformPose(const tf::TransformListener& listener){
  //we'll create a point in the base_laser frame that we'd like to transform to the base_link frame
  // geometry_msgs::PointStamped laser_point;
  // geometry_msgs::PoseStamped msg;
  // laser_point.header.frame_id = "camera_link";
  // real_pose.header.frame_id = "camera_link";
  real_pose.header.frame_id = "camera_rgb_optical_frame";
  // real_pose.header.frame_id = "camera_rgb_frame";
  //we'll just use the most recent transform available for our simple example
  // laser_point.header.stamp = ros::Time();
  real_pose.header.stamp = ros::Time();
  real_pose.pose.position.x = -0.0521919690073; 
  real_pose.pose.position.y = 0.0947469994426; 
  real_pose.pose.position.z = 0.928996682167;   
  real_pose.pose.orientation.w = 0.487851679325; 
  real_pose.pose.orientation.x = 0.518789410591; 
  real_pose.pose.orientation.y = -0.516209602356; 
  real_pose.pose.orientation.z = 0.47580036521; 
  //just an arbitrary point in space

  try{
    // geometry_msgs::PointStamped base_point;
    // listener.transformPoint("base_link", laser_point, base_point);
    listener.transformPose("base_link", real_pose, transed_pose);

    ROS_INFO("camera_rgb_frame: (%.2f, %.2f. %.2f, %.2f, %.2f, %.2f, %.2f) -----> base_link: (%.4f, %.4f, %.4f, %.4f, %.4f, %.4f, %.4f) at time %.2f",
        real_pose.pose.position.x, real_pose.pose.position.y, real_pose.pose.position.z, real_pose.pose.orientation.x, real_pose.pose.orientation.y, real_pose.pose.orientation.z, real_pose.pose.orientation.w,
        transed_pose.pose.position.x, transed_pose.pose.position.y, transed_pose.pose.position.z, transed_pose.pose.orientation.x, transed_pose.pose.orientation.y, transed_pose.pose.orientation.z, transed_pose.pose.orientation.w, transed_pose.header.stamp.toSec());
  }
  catch(tf::TransformException& ex){
    ROS_ERROR("Received an exception trying to transform a point from \"camera_rgb_optical_frame\" to \"base_link\": %s", ex.what());
  }
}

int main(int argc, char** argv){
  ros::init(argc, argv, "tf_listener_Posetransform");
  ros::NodeHandle n;

  tf::TransformListener listener(ros::Duration(10));

  //we'll transform a point once every second
  ros::Timer timer = n.createTimer(ros::Duration(1.0), boost::bind(&transformPose, boost::ref(listener)));
  ros::Publisher my_publisher_realPose = n.advertise<geometry_msgs::PoseStamped>("real_pose", 1000);
  ros::Publisher my_publisher_transedPose = n.advertise<geometry_msgs::PoseStamped>("transed_pose", 1000);
  ros::Rate loop_rate(10);
  while (ros::ok())
      {
        my_publisher_transedPose.publish(transed_pose);
        my_publisher_realPose.publish(real_pose);

        ros::spinOnce();
        loop_rate.sleep();
      }
  return 0;
}

The pose values here were copied in by hand from an earlier detection; in practice you can edit them to watch how a point's position changes. Both real_pose and transed_pose are geometry_msgs::PoseStamped messages, and this is also where you will read off the transform results later.

tf_listener_OnePosetransform defines only one node, but that node both listens (subscribes) and publishes. The call listener.transformPose("base_link", real_pose, transed_pose); listens for the frame relationship broadcast earlier, and transformPose converts a pose expressed in the camera frame (our hand-written real_pose) into base_link. Also be sure to set the frame the pose belongs to: real_pose.header.frame_id = "camera_rgb_optical_frame". In the format string, %.2f means a float printed to two decimal places, which can be changed to four. The 1000 is the publisher's queue size, i.e., the maximum number of buffered messages.
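If you would rather avoid the C++ recompile cycle while experimenting, a minimal Python sketch of the same single-pose transform could look like this (untested; same frame names and pose values as above):

#!/usr/bin/env python
# Sketch: transform one hand-written pose from camera_rgb_optical_frame to base_link.
import rospy
import tf
from geometry_msgs.msg import PoseStamped

if __name__ == '__main__':
    rospy.init_node('one_pose_transform_py')
    listener = tf.TransformListener()

    real_pose = PoseStamped()
    real_pose.header.frame_id = 'camera_rgb_optical_frame'
    real_pose.header.stamp = rospy.Time(0)  # use the latest available transform
    real_pose.pose.position.x = -0.0521919690073
    real_pose.pose.position.y = 0.0947469994426
    real_pose.pose.position.z = 0.928996682167
    real_pose.pose.orientation.x = 0.518789410591
    real_pose.pose.orientation.y = -0.516209602356
    real_pose.pose.orientation.z = 0.47580036521
    real_pose.pose.orientation.w = 0.487851679325

    # wait until the base_link <- camera_rgb_optical_frame chain is available
    listener.waitForTransform('base_link', real_pose.header.frame_id,
                              rospy.Time(0), rospy.Duration(4.0))
    transed_pose = listener.transformPose('base_link', real_pose)
    rospy.loginfo('base_link pose:\n%s', transed_pose.pose)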

----------------------------------About the error: output pose is nan nan nan nan--------------------------------

If the output pose comes back as nan nan nan nan, the pose data is invalid (nan means "not a number"). A likely cause is that the publishing node is not filling in the message properly, so inspect what it is actually sending with rostopic echo /topic.

----------------------------------------------------------------------------------------------------------------------------

Next, the two poses have to be published:

ros::Publisher my_publisher_realPose = n.advertise<geometry_msgs::PoseStamped>("real_pose", 1000);

my_publisher_realPose is the publisher's name, the message type is geometry_msgs::PoseStamped, and the topic name is real_pose (so in rviz you can add a PoseStamped display, select that topic, and observe the pose).

That line only declares that the message will be published; the actual publishing happens in the next step:

my_publisher_realPose.publish(real_pose);

With that, the transform of a custom point from camera_rgb_optical_frame to base_link is complete.

You can inspect it with rosmsg; digging into its fields works exactly the same way as before:

rosmsg show geometry_msgs/PoseStamped    (prints the PoseStamped message definition)
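which prints:

std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
geometry_msgs/Pose pose
  geometry_msgs/Point position
    float64 x
    float64 y
    float64 z
  geometry_msgs/Quaternion orientation
    float64 x
    float64 y
    float64 z
    float64 w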

                   -----------------------------------------------------3---------------------------------------------------------

3. Transforming the object pose from camera_rgb_optical_frame to base_link

#rosrun robot_setup_tf tf_listener

Create tf_listener.cpp (the required dependencies and the package-creation steps are covered in the official subscriber and publisher tutorials linked at the top of this post).

(Forgive me: most of the reference code I had for tf was C++, so after every edit the program has to be recompiled with cmake, and both the subscribing and the publishing are done in C++. The final arm-control program, however, subscribes to this message from Python again, so keep this constraint of mine in mind (I am more comfortable with C++).)

#include <ros/ros.h>
#include "geometry_msgs/PoseStamped.h"
#include "geometry_msgs/Pose.h"
#include <tf/transform_listener.h>
#include <tf/transform_broadcaster.h>
// #include "std_msgs/String.h"
#include <object_recognition_msgs/RecognizedObjectArray.h> //using ORK
#include <visualization_msgs/MarkerArray.h> 

// Transform the detected coke can's position in real time

std::string Object_id = "";
double Object_assurance = 0;
geometry_msgs::PoseStamped Object_pose;
geometry_msgs::PoseStamped transed_pose;
bool firstCB = false;

void myCallback(const object_recognition_msgs::RecognizedObjectArray& objects_msg) // callback
{ 
  double confident = 0;
  int id = -1;
  
  if (firstCB == false && (int)objects_msg.objects.size() == 1) {
        Object_id.assign(objects_msg.objects[0].type.key.c_str());
        firstCB = true;
    }
  
  for (int i = 0; i < objects_msg.objects.size(); ++i) {
    if (Object_id.compare(objects_msg.objects[i].type.key.c_str()) == 0) {
      if (objects_msg.objects[i].confidence > confident) {
          
          confident = objects_msg.objects[i].confidence;
          id = i;
          }
        }
    }

  if (id >= 0) {
        Object_pose.pose = objects_msg.objects[id].pose.pose.pose;
        Object_assurance = objects_msg.objects[id].confidence;
    } else {
        confident = 0;
    }
}
 
void transformPose(const tf::TransformListener& listener){
  //we'll create a point in the base_laser frame that we'd like to transform to the base_link frame
  // geometry_msgs::PointStamped laser_point;
  // geometry_msgs::PoseStamped transed_pose;
  // laser_point.header.frame_id = "camera_link";
  // msg.header.frame_id = "camera_link";

  // Object_pose.header.frame_id = "camera_rgb_frame";
  Object_pose.header.frame_id = "camera_rgb_optical_frame";
  //we'll just use the most recent transform available for our simple example
  // laser_point.header.stamp = ros::Time();
  ROS_INFO("Waiting for the object");
  if (Object_assurance > 0.8) {
      ROS_INFO("Best Similarity = %f ", Object_assurance);
      ROS_INFO("pose x is: %f", Object_pose.pose.position.x);
      ROS_INFO("pose y is: %f", Object_pose.pose.position.y);
      ROS_INFO("pose z is: %f", Object_pose.pose.position.z);

      bool tferr = true;
      while (tferr) {
        tferr = false;
      
      try{  
        listener.transformPose("base_link", Object_pose, transed_pose);
        }
      catch (tf::TransformException &exception) {
          ROS_ERROR("Received an exception trying to transform a point from \"camera_rgb_frame\" to \"base_link\": %s", exception.what());
          tferr = true;
          ros::Duration(0.1).sleep(); 
          ros::spinOnce();
        }

      } //while

    ROS_INFO("camera_rgb_optical_frame: (%.2f, %.2f. %.2f, %.2f, %.2f, %.2f, %.2f) -----> base_link: (%.2f, %.2f, %.2f, %.2f, %.2f, %.2f, %.2f) at time %.2f",
        Object_pose.pose.position.x, Object_pose.pose.position.y, Object_pose.pose.position.z, Object_pose.pose.orientation.x, Object_pose.pose.orientation.y, Object_pose.pose.orientation.z, Object_pose.pose.orientation.w,
        transed_pose.pose.position.x, transed_pose.pose.position.y, transed_pose.pose.position.z, transed_pose.pose.orientation.x, transed_pose.pose.orientation.y, transed_pose.pose.orientation.z, transed_pose.pose.orientation.w, transed_pose.header.stamp.toSec());
  }  //if
} //void

int main(int argc, char** argv){
  ros::init(argc, argv, "robot_tf_listener");
  ros::NodeHandle n;

  ros::Subscriber my_subscriber_object= n.subscribe("recognized_object_array",1000,myCallback);
  tf::TransformListener listener(ros::Duration(10));

  //we'll transform a point once every second
  ros::Timer timer = n.createTimer(ros::Duration(1.0), boost::bind(&transformPose, boost::ref(listener)));
  ros::Publisher my_publisher_transPose = n.advertise<geometry_msgs::PoseStamped>("transformPose", 1000);

  ros::Publisher my_publisher_ObjectPose = n.advertise<geometry_msgs::PoseStamped>("ObjectPose", 1000);

  ros::Rate loop_rate(10);
  while (ros::ok())
    {
      // std_msgs::String msg;
      // msg.data = ss.str(); 
      // ROS_INFO("=====transformedPose======: (%.2f, %.2f, %.2f, %.2f, %.2f, %.2f, %.2f)", transed_pose.pose.position.x, transed_pose.pose.position.y, transed_pose.pose.position.z, transed_pose.pose.orientation.x, transed_pose.pose.orientation.y, transed_pose.pose.orientation.z, transed_pose.pose.orientation.w);
      my_publisher_transPose.publish(transed_pose);
      my_publisher_ObjectPose.publish(Object_pose);

      ros::spinOnce();
      loop_rate.sleep();
    }
    return 0;
}

This is the complete program needed for the final tf transformation. It does three things:

(1) subscribes to the object information, applies a simple check, and republishes it;

(2) transforms the object pose from camera_rgb_optical_frame to base_link;

(3) publishes the transformed pose.

The program is similar to the single-point transform above, so I will not repeat the details. One thing worth noting:

geometry_msgs::PoseStamped Object_pose;
geometry_msgs::PoseStamped transed_pose;

The object pose and the transformed pose must be declared as global variables, since the subscriber callback, the timer callback, and the publishing loop all need to share them.
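Since the final arm-control program picks this result up from Python again (as noted above), here is a minimal sketch of that subscriber side, assuming only what the C++ code above advertises (topic transformPose, type geometry_msgs/PoseStamped):

#!/usr/bin/env python
# Sketch: subscribe to the transformed object pose published above.
import rospy
from geometry_msgs.msg import PoseStamped

def pose_cb(msg):
    # msg.pose is already expressed in base_link
    rospy.loginfo('object in base_link: x=%.4f y=%.4f z=%.4f',
                  msg.pose.position.x, msg.pose.position.y, msg.pose.position.z)

if __name__ == '__main__':
    rospy.init_node('transed_pose_listener')
    rospy.Subscriber('transformPose', PoseStamped, pose_cb)
    rospy.spin()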


Other notes:

So this little project finally wrapped up at the tail end of October. Today was just a brief summary of the tf part; tomorrow I will summarize the remaining tuning and calibration work, recording how the errors were analyzed and resolved step by step until the arm finally grasped the object. The next post, the most important and most complete one, records one of the situations I was stuck on for a long time: finding my own mistakes, fixing the program, and grasping the object, along with my own test video. So far the algorithm looks quite reliable for grasping tasks, though its stability is still not great: detection is poor near the edges of the workspace (it easily stalls with no detection result). Still, whenever there is a detection the grasp succeeds, which shows the rest of my pipeline is fine; the algorithm's weaknesses are exactly where future paper contributions can come from.

That records my month. Once tomorrow's final summary is done I can move on to the next stage. Keep at it! ↖(^ω^)↗


Appendix: here is a standalone Python program that prints the current end-effector pose; think of it as a small plugin.

#rosrun ur10_rg2_moveit_config my_pose_print.py    (prints the current end-effector pose)

#!/usr/bin/env python

# Software License Agreement (BSD License)
#
# Copyright (c) 2013, SRI International
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above
#    copyright notice, this list of conditions and the following
#    disclaimer in the documentation and/or other materials provided
#    with the distribution.
#  * Neither the name of SRI International nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# Author: Acorn Pooley, Mike Lautman

## BEGIN_SUB_TUTORIAL imports
##
## To use the Python MoveIt! interfaces, we will import the `moveit_commander`_ namespace.
## This namespace provides us with a `MoveGroupCommander`_ class, a `PlanningSceneInterface`_ class,
## and a `RobotCommander`_ class. (More on these below)
##
## We also import `rospy`_ and some messages that we will use:
##
import math
import tf
import sys
import copy
import rospy
import moveit_commander
import moveit_msgs.msg
import geometry_msgs.msg
from math import pi
from std_msgs.msg import String
from moveit_commander.conversions import pose_to_list
from ur_control.srv import *
import random



class MoveGroupPythonIntefaceTutorial(object):
    """MoveGroupPythonIntefaceTutorial"""
    def __init__(self):
        super(MoveGroupPythonIntefaceTutorial, self).__init__()

        ## BEGIN_SUB_TUTORIAL setup
        ##
        ## First initialize `moveit_commander`_ and a `rospy`_ node:
        moveit_commander.roscpp_initialize(sys.argv)
        rospy.init_node('end_effector_pose_print',
                        anonymous=True)

        ## Instantiate a `RobotCommander`_ object. This object is the outer-level interface to
        ## the robot:
        robot = moveit_commander.RobotCommander()

        ## Instantiate a `PlanningSceneInterface`_ object.  This object is an interface
        ## to the world surrounding the robot:
        scene = moveit_commander.PlanningSceneInterface()

        ## Instantiate a `MoveGroupCommander`_ object.  This object is an interface
        ## to one group of joints.  Here the group is the joints of the UR10 arm,
        ## so we set ``group_name = "manipulator"``. If you are using a different
        ## robot, change this value to the name of your robot arm planning group.
        ## This interface can be used to plan and execute motions:
        group_name = "manipulator"
        group = moveit_commander.MoveGroupCommander(group_name)

        ## We create a `DisplayTrajectory`_ publisher which is used later to publish
        ## trajectories for RViz to visualize:
        display_trajectory_publisher = rospy.Publisher('/move_group/display_planned_path',
                                                       moveit_msgs.msg.DisplayTrajectory,
                                                       queue_size=20)

        ## END_SUB_TUTORIAL

        ## BEGIN_SUB_TUTORIAL basic_info
        ##
        ## Getting Basic Information
        ## ^^^^^^^^^^^^^^^^^^^^^^^^^
        # We can get the name of the reference frame for this robot:
        planning_frame = group.get_planning_frame()
        print "============ Reference frame: %s" % planning_frame

        # We can also print the name of the end-effector link for this group:
        eef_link = group.get_end_effector_link()
        print "============ End effector: %s" % eef_link

        # We can get a list of all the groups in the robot:
        group_names = robot.get_group_names()
        print "============ Robot Groups:", robot.get_group_names()

        # Sometimes for debugging it is useful to print the entire state of the
        # robot:
        print "============ Printing robot state"
        print robot.get_current_state()
        print ""
        ## END_SUB_TUTORIAL

        # Misc variables
        self.box_name = ''
        self.robot = robot
        self.scene = scene
        self.group = group
        self.display_trajectory_publisher = display_trajectory_publisher
        self.planning_frame = planning_frame
        self.eef_link = eef_link
        self.group_names = group_names


    def print_current_endeffector_pose(self):
        print self.group.get_current_pose()
        pass


def main():
  try:
    print "============ Press `Enter` to begin to pose endeffector_pose"
    # raw_input()
    tutorial = MoveGroupPythonIntefaceTutorial()

    rospy.sleep(1.0)
    while not rospy.is_shutdown():
        print("---")
        tutorial.print_current_endeffector_pose()
        rospy.sleep(0.5)
        pass

    print "endeffector_pose complete!"
  except rospy.ROSInterruptException:
    return
  except KeyboardInterrupt:
    return

if __name__ == '__main__':
  main()

--------------------------------------------------Update 2019.7.19--------------------------------------------------------------

I have spent the past half year preparing a paper submission and studying for IELTS, so I have not been logging in to CSDN to reply. Fingers crossed for the IELTS exam on 8.24. After the exam and the applications are done, I will write up the 3D object recognition, 6D pose estimation, and overlapping-object detection work from the first two and a half months of this year.

Also, many people have reached out by email and I have answered there. If you have questions, please email: [email protected] I will still not be replying on CSDN for now.

If anyone is interested in joining: the Institute of Automation is recruiting interns!

[Institute of Automation, Brain-Inspired Cognition Center: brain-inspired robotics intern recruitment]
Brain-inspired cognitive robot algorithms and applications intern: mainly responsible for ROS-based cognitive robot development. Requirements: a major in automatic control or a related field (or strong personal interest); ROS development experience or a Python programming background is a plus; ideally available to intern for 6 months or more. We provide a Baxter robot, a Robotnik mobile base, a NAO robot, and other hardware to support interns' research.
If interested, please contact [email protected]

--------------------------------------------------------7.19--------------------------------------------------------------------------------------

-------------------------------------------------------------------------------Update 2020.3.2-------------------------------------------------------------

I have been preparing for interviews recently; questions can go to the email above. The final code link, the original paper, and the implementation steps are given in the concluding post of this series.

Also, my scene-design artwork is on Zcool; feel free to take a look: www.zcool.com.cn/u/20178144
