Autonomous Mobile Robot #2: Inference as a ROS service

01 Apr 2018 » amr, deeplearning, machinelearning

Series:

  1. Autonomous Mobile Robot #1: Data collection to a trained model
  2. Autonomous Mobile Robot #2: Inference as a ROS service
  3. Autonomous Mobile Robot #3: Pairing with a PS3 Controller for teleop
  4. Autonomous Mobile Robot #4: Using GCP Storage

Inference as a service

We ended the previous post with a trained model. That model can now be integrated into a ROS system to run inference on an input feature, in this case an image.

Of the options we can consider (topics, services, and actions), services seem to provide the functionality that best fits this particular use case: a request/reply exchange that returns one prediction per image.

ROS service

The ROS Wiki provides an adequate description of a ROS service. Note that service calls are blocking; in other words, the calling process is blocked until the requested service completes.

“Request / reply is done via a Service, which is defined by a pair of messages: one for the request and one for the reply. A providing ROS node offers a service under a string name, and a client calls the service by sending the request message and awaiting the reply. Client libraries usually present this interaction to the programmer as if it were a remote procedure call.”

For detailed information and tutorials, refer to the wiki [1, 2].

Implementing the service provider

The service we would like to provide is: given an image, predict the associated throttle and steering commands. We start with the server-side implementation, which waits for a client to call the specified service. Since prediction is based on a deep net, we assume the service will be running on hardware with access to a GPU and that CUDA is available.
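As a quick sanity check before bringing up the service, the CUDA assumption can be verified with PyTorch (the same framework used for inference below); this is just a sketch:

import torch

# Confirm that a CUDA device is visible before launching the service node.
use_cuda = torch.cuda.is_available()
print('CUDA available: {}'.format(use_cuda))
if use_cuda:
    print('Device: {}'.format(torch.cuda.get_device_name(0)))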

To set up the service, we follow the steps below.

  1. Specify the service request and response message types in a .srv file, and place it in the /srv directory: PredictCommand.srv. A sketch of this definition follows the list.
  2. Update CMakeLists.txt by adding PredictCommand.srv to add_service_files(): CMakeLists.txt
  3. Implement the service interface in nn_controller_service.py, which consists of initializing the service and defining a handler; both are shown after the sketch below.
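The actual files live in the repository. A minimal sketch of what PredictCommand.srv contains, with field names inferred from the handler and client code below and assuming Command2D is the custom 2D command message used throughout the project (depending on where it is defined, it may need a package prefix), is:

# request: the compressed camera frame
sensor_msgs/CompressedImage image
---
# response: the predicted throttle/steer command
Command2D commands

With the request and response defined, the service initialization and handler (step 3) look as follows.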
def _init_nn_controller_service(self):
    """ Initialize nn controller service. """
    self._service_nn_controller = rospy.Service(
        'amr_nn_controller_service',
        PredictCommand,
        self._nn_controller_handler
    )
    rospy.loginfo("NN controller service initialized")

def _nn_controller_handler(self, req):
    """ Handler for nn controller service.
    Args:
        req - request
    Returns:
        res - ROS response, commands
    """
    try:
        cv_img = cv2.imdecode(np.fromstring(req.image.data, np.uint8), 1)
    except cv2.error as e:
        # cv2.imdecode raises cv2.error rather than CvBridgeError (cv_bridge is not
        # used here); returning None fails the service call so the client sees an error.
        rospy.logerr(e)
        return None

    output = run_inference(self._m, cv_img)

    cmd_msg = Command2D()
    cmd_msg.header = std_msgs.msg.Header()
    cmd_msg.header.stamp = rospy.Time.now()
    cmd_msg.lazy_publishing = True
    cmd_msg.x = output[0]  # throttle
    cmd_msg.y = output[1]  # steer

    return cmd_msg

Note that the handler calls the run_inference() method. This can be redefined to match the training environment used; for example, the process differs if training was done with TensorFlow as opposed to PyTorch. For this first pass we trained with PyTorch, so run_inference() is defined as below.

def run_inference(model, img, use_cuda=1):
    """ Runs inference given a PyTorch model.
    Args:
        model - pytorch model
        img - numpy darray
        use_cuda - 1 is True, 0 is False
    Returns:
        throttle - throttle command
        steer - steer command
    """
    model.eval()
    trans = imagenet_transforms()['eval_transforms']
    img = trans(torch.from_numpy(img.transpose(2,0,1))).unsqueeze(0)
    if use_cuda:
        img = img.cuda()
  
    img = torch.autograd.Variable(img)
    # A CUDA tensor can't be converted to numpy directly; .cpu() moves it to host memory.
    throttle, steer = model(img).data.cpu().numpy()[0]
    return throttle, steer
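The service node also needs an entry point that initializes the node, constructs the object holding the model, and keeps the process alive. The following is only a sketch; the class name and import path are assumptions, not the actual amr_nn_controller_service code:

#!/usr/bin/env python
# Hypothetical entry point for the service node; names below are placeholders.
import rospy
from nn_controller_service import NNControllerService  # assumed class wrapping the handlers above

if __name__ == '__main__':
    rospy.init_node('amr_nn_controller_service_node')
    node = NNControllerService()  # loads the model and registers 'amr_nn_controller_service'
    rospy.spin()                  # block so the service keeps handling requests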

Now the service is ready to be called.

Implementing the client side

In our project the client runs on a Raspi, a relatively resource-constrained SoC. The packages running on the Raspi are responsible for capturing a monocular image and publishing it. We have placed the client in the amr_nn_controller package. As on the server side, we also need to define the service in the /srv directory and update the CMakeLists.txt.

class NNControllerClient(object):

    """
    Given an image input, a dl model infers commands.
    Controller subscribes to raw image topic, and calls the
    'amr_nn_controller_service'.
    """

    def __init__(self, img_topic, cmd_topic):
        self._img_topic = img_topic
        self._cmd_topic = cmd_topic

        rospy.wait_for_service('amr_nn_controller_service')
        rospy.loginfo("Service 'amr_nn_controller_service' is available...")
        self._serve_get_prediction = rospy.ServiceProxy(
            'amr_nn_controller_service',
            PredictCommand,
            persistent=True
        )

        # Initialize subscriber and publisher
        self._image_sub = rospy.Subscriber(self._img_topic, CompressedImage, self._sub_callback)
        self._cmd_pub = rospy.Publisher(self._cmd_topic, Command2D, queue_size=10)

    def _sub_callback(self, img_msg):
        """ Handler for image subscriber.
        Args:
            img_msg - ROS image
        """
        try:
            resp = self._serve_get_prediction(img_msg)
        except rospy.ServiceException as e:
            rospy.logerr("Service call failed: {}".format(e))
            return  # skip publishing when the service call fails
        self._cmd_pub.publish(resp.commands)

We initialize a subscriber on the topic where images are published. Every time an image is published, the callback makes a request to the service to get commands for control. Once the service returns a valid response, the publisher publishes the commands to the command topic, making them available to the rest of the system.
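Tying the pieces together, the client node can be brought up along these lines; this is a sketch, and the node name and topic names are assumptions chosen for illustration, not necessarily what amr_bringup uses:

#!/usr/bin/env python
# Hypothetical bringup for the client node; topic names are placeholders.
import rospy

if __name__ == '__main__':
    rospy.init_node('amr_nn_controller_client')
    client = NNControllerClient(
        img_topic='/camera/image/compressed',  # assumed compressed image topic
        cmd_topic='/amr_command'               # assumed command topic
    )
    rospy.spin()  # keep the subscriber callback running until shutdown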

Instructions

Instructions for running the demo.

On the hardware with a GPU available (Jetson TX1/TX2, cloud instance, etc.):

Make sure you have placed the trained model in the models directory of the amr_nn_controller_service package.

$ cd amr_core/amr_master/amr_nn_controller_service/launch/
$ roslaunch amr_nn_controller_service.launch

On the raspi:

$ cd amr_core/amr_worker/amr_bringup/launch/
$ roslaunch amr_nn_bringup.launch
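Once both launch files are up, the connection can be sanity-checked from either machine with the standard rosservice tool (assuming both machines share the same ROS master):

$ rosservice list | grep amr_nn_controller_service
$ rosservice info /amr_nn_controller_service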

References

  1. http://wiki.ros.org/Services
  2. http://wiki.ros.org/rospy/Overview/Services