Grasping and Placing Operation for Labware Transportation in Life Science Laboratories using Mobile Robots


Volume 2, Issue 3, Page No 1227-1237, 2017

Authors: Mohammed Myasar Ali1,a), Hui Liu1, Norbert Stoll2, Kerstin Thurow1


1Center for Life Science Automation (celisca), Rostock University, 18119 Rostock, Germany

2Institute of Automation, Rostock University, 18119 Rostock, Germany

a)Author to whom correspondence should be addressed. E-mail: mohammed.myasar.ali@celisca.de

Adv. Sci. Technol. Eng. Syst. J. 2(3), 1227-1237 (2017); DOI: 10.25046/aj0203155

Keywords: Robotic arm manipulation, Labware transportation, Kinect sensor V2, Design of grippers and labware containers


In automated working environments, mobile robots can be used for different purposes such as material handling, domestic services, and object transportation. This work presents a grasping and placing operation for multiple labware and tube racks in life science laboratories using the H20 mobile robots. The H20 robot has dual arms, where each arm consists of 6 revolute joints (6-DOF) and a 2-DOF gripper. The labware that have to be manipulated and transported contain chemical and biological components. Therefore, an accurate approach for object recognition and position estimation is required. The recognition and pose estimation of the desired objects are essential to guide the robotic arm in the manipulation tasks. In this work, the problem statement of the H20 transportation system and the proposed methodology are presented. Different strategies (visual and non-visual) of labware manipulation using mobile robots are described. The H20 robot is equipped with a Kinect V2 sensor to identify the target and estimate its position. Local feature recognition based on the SURF algorithm (Speeded-Up Robust Features) is used. The recognition process is performed for the required labware and holder to accomplish the grasping and placing operation. A strategy is proposed to find the required holder and to check its emptiness for the placing tasks. Different styles of grippers and labware containers are used to manipulate labware of different weights and to realize a safe transportation. The parts of the mobile robot transportation system communicate with each other using asynchronous socket channels.

Received: 06 June 2017, Accepted: 07 July 2017, Published Online: 21 July 2017

1.       Introduction

This paper is an extension of work originally presented at the International Conference on Mechatronics (Mechatronika 2016) [1]. This work shows grasping and placing operations for labware and tube rack transportation in life science laboratories using mobile robots. Mobile robots can be used for different purposes such as product transportation [2], teleoperation for tasks with power tools [3], domestic services [4]-[6], or material handling [7]. The development of laboratories in the field of life science depends significantly on new automated solutions in all related areas. Different tasks such as biological testing, sample preparation, and sample analysis are performed in the laboratories using automated equipment. Robots are therefore essential in the field of life sciences. All workstations in the same or in different laboratories can be connected using stationary and mobile robots. This connection ensures a 24/7 operation and reduces tedious and routine work for the employees. Object transportation using mobile robots is one of the most important tasks in the automation field since it increases productivity and saves human resources. For safe transportation, the mobile robots have to perform the object manipulation processes in addition to path planning, mapping, and localization. H20 robots and a Kinect V2 are used in this work to perform visual manipulation of multiple labware and tube racks. The H20 robot is a wireless networked autonomous humanoid mobile robot. It has a PC tablet, dual arms, and an indoor GPS navigation system. Several technical procedures have been implemented at the Center for Life Science Automation (celisca) to develop the transportation system of the H20 mobile robots [8]-[10]. The H20 robot and different labware with tube racks are shown in Fig. 1 and Fig. 2 respectively.

Figure 1: H20 mobile robot.
Figure 2: Different labware and tube racks.

For the grasping and placing operation of multiple labware, object identification and localization are necessary to guide the robotic arm to the target. This requires an arm control based on a kinematic model to find the joint values that move the robotic arm to perform the task. Kinematic analysis describes the motion of the arm links without considering the forces causing this motion. It is classified into two parts: forward kinematics (FK) and inverse kinematics (IK). Using the forward kinematics equations, the end-effector pose relative to the arm base is found from the given joint angles. On the other hand, the inverse kinematics equations are used to find the required joint angles for a given pose of the end effector with respect to the arm base. The joint limits and workspace of the robotic arm have to be taken into consideration in the manipulation process. In general, the FK solution is easier to find than the IK solution, especially for serial arms. The IK problem can be solved using two approaches: analytic and numeric. The analytical solution is preferable for real-time applications because it is computationally faster than the numerical approach. Also, all possible solutions can be found using the analytical approach. However, the analytical IK solution can only be derived for manipulators with a particular structure. For example, if 3 consecutive joint axes are parallel, or intersect at a single point, then the analytical solution can be found. This solution often cannot be generalized to other robotic arms. Ali et al. proposed a reverse decoupling mechanism method to solve the IK problem analytically for a 6-DOF (degree of freedom) robotic arm [11]. O'Flaherty et al. applied the same method to find the closed-form IK solution for the arms of the HUBO2+ humanoid robot [12]. The reverse decoupling mechanism method has been used to find the analytical solution of the IK problem for the arms of the H20 robot [13].

The visual approach to grasping and placing requires a suitable visual sensor together with a suitable object identification algorithm to realize a successful operation. Various features can be extracted from the image to find the target in the view. Color, edges, corners, shape, and local features can be used as target characteristics for the detection task. Detecting multiple characteristics of the target improves the success rate of the process. Some applications rely on the object color as the feature to be found in the image. Different color systems can be used for this task, such as RGB, HSV (hue, saturation, value), HSL (hue, saturation, lightness), etc. The HSV color system can be considered more reliable for color segmentation than the other systems [14], [15], since it is more robust against changing lightness and brightness conditions. Additional information, such as the target shape and area, has to be used in case different objects with the same color are present in the view. Sanchez-Lopez et al. proposed a fast method for object tracking using HSV color segmentation and a service robot [16]. Yamazaki et al. used HSV color with edges and shape for the manipulation of foods and kitchen tools with a daily assistive robot [17]. To recognize a required object which has specific textures, feature matching algorithms have to be used. Generally, local feature recognition requires an off-line step in which the images of the targets are saved in a database as references. Then, the matching algorithm extracts local features from the reference image of the target to be identified and matches them with the current live image. Visual matching based on local feature descriptors such as SIFT (Scale Invariant Feature Transform), SURF (Speeded-Up Robust Features), and FAST (Features from Accelerated Segment Test) is very popular in this field [18]-[20]. These algorithms are largely invariant to changes in orientation, scale, and illumination. Chen et al. presented an identification approach for textured objects using the SIFT algorithm [21]. Grundmann et al. recognized household items using object classification based on the SIFT algorithm [22]. Anh et al. proposed an object tracking method based on the SURF algorithm for safe grasping operations [23]. To estimate the target position, stereo vision and 3D cameras are considered the most appropriate solutions. A 3D camera such as the Kinect sensor is preferable for this task, since the Kinect provides the depth data directly without the image processing steps required for stereo vision. Chung et al. proposed an intelligent service robot equipped with a Kinect sensor to help humans in object transportation [2]. Stueckler et al. described a strategy for object segmentation and manipulation using the depth frame of the Kinect sensor [24]. The design of the arm grippers and labware containers is one of the most important issues and plays a main role in achieving a safe grasping and placing operation. Some factors have to be taken into consideration to design a suitable style of grippers, such as the task requirements, the end effector structure, and the characteristics of the target (weight, shape, etc.).
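As a small illustration of why the HSV representation separates chromatic information from brightness, the following C# sketch converts an RGB pixel to HSV. It is a generic conversion for illustration only, not the segmentation code used in the cited works.

```csharp
// Minimal RGB -> HSV conversion (hue in degrees, saturation/value in [0,1]).
using System;

static class ColorConversion
{
    public static (double H, double S, double V) RgbToHsv(byte r, byte g, byte b)
    {
        double rf = r / 255.0, gf = g / 255.0, bf = b / 255.0;
        double max = Math.Max(rf, Math.Max(gf, bf));
        double min = Math.Min(rf, Math.Min(gf, bf));
        double delta = max - min;

        double h = 0.0;
        if (delta > 1e-9)
        {
            if (max == rf)      h = 60.0 * (((gf - bf) / delta) % 6.0);
            else if (max == gf) h = 60.0 * (((bf - rf) / delta) + 2.0);
            else                h = 60.0 * (((rf - gf) / delta) + 4.0);
            if (h < 0) h += 360.0;
        }

        double s = max < 1e-9 ? 0.0 : delta / max;   // saturation
        double v = max;                               // value (brightness)
        return (h, s, v);
    }
}
```

Because value (brightness) is isolated in a single channel, a hue-based threshold stays comparatively stable when the illumination changes, which is the robustness property referred to above.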

In this work, a grasping and placing operation is presented to achieve labware transportation in life science laboratories. Five H20 mobile robots are used for maneuvering between the adjacent labs and workstations to transport multiple labware and tube racks which contain chemical and biological components. Local feature matching based on the SURF algorithm is used to identify multiple labware. The navigation and arm control systems of the H20 robots are developed using the C# programming language, and the SURF algorithm is the most appropriate of these methods for implementation in C#. The inverse kinematics solutions for the head and arms of the H20 robot are given in this work. This paper is organized as follows: in section 2, the problem statement and proposed methodology are discussed. The head and arm structures of the H20 robot with the IK solutions are given in section 3. Different designs of grippers and labware containers are described in section 4. Blind (using sonar sensors) and visual (using Kinect) manipulation of the labware are presented in sections 5 and 6 respectively. Section 7 describes the target localization, followed by the labware transportation process in section 8. Finally, section 9 concludes with the current results.

2.       Problem Statement and Proposed Methodology

The future of life science laboratories depends significantly on innovations in automated solutions across the entire scope. Stationary and mobile robots play an important role in this automated environment. In the Center for Life Science Automation, different automation islands are connected with each other by the cooperation of stationary and mobile robots. For transportation tasks, mobile robots usually follow a predefined path from the starting station to the ending station using a guidance control system. The H20 mobile robots use the Stargazer sensor (Hagisonic Company, Korea) with ceiling landmarks (see Fig. 3) for maneuvering between the adjacent labs [25]. This guidance system is not accurate enough and causes positioning errors in the range of ±3 cm and ±2 cm in the X and Y axes respectively in front of the workstation. Glossy lighting and sunlight can blind the Stargazer so that it does not recognize the ceiling landmarks, which leads to positioning errors. Also, the odometry system, which updates the robot pose, accumulates errors due to factors such as unequal wheel diameters, wheel slippage, and wheel misalignment. Therefore, the mobile robot transportation requires a reliable grasping and placing operation for the labware, which contain chemical and biological components. This operation depends on 3 main factors as shown in Fig. 4. The identification and localization process for the desired labware has to be performed. The other essential issue is to calculate the arm joint values that guide the end effector to the desired pose by solving the inverse kinematics problem. Also, the design of the arm gripper and labware containers has to be taken into consideration to realize a secure manipulation.

Figure 3: Stargazer sensor with ceiling landmarks.
Figure 4: Requirements of the grasping and placing operation.

The transportation of multiple labware between different workstations and labs requires an appropriate management system. This is important to automate the individual areas in such a way that comprehensive life science processes are realized. The hierarchical workflow management system (HWMS) controls the workflow between the stationary and mobile robots [26]. For the mobile robots, the management system sends the plan to the transportation system as shown in Fig. 5. The plan information includes the starting position, the end position, and the required labware to be transported. The transportation system of the mobile robots includes 3 main parts: the robot remote center, the navigation system, and the grasping/placing system. The grasping/placing system is split into two parts: object identification and localization, and arm control. The object identification and localization software together with the visual sensor is utilized to recognize the target and to estimate its pose. The pose information is sent to the arm kinematic control to guide the robotic arm.

Figure 5: Overall structure of the mobile robot transportation.
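As an illustration of the plan information listed above, the following C# class sketches how such a plan could be represented. All field names are hypothetical; they only mirror the items named in the text (start, destination, labware, holder, task, approximate weight).

```csharp
// Hypothetical representation of the transportation plan received from the
// hierarchical workflow management system (field names are illustrative only).
public enum ArmTask { Grasp, Place }

public class TransportationPlan
{
    public string StartStation   { get; set; }   // workstation in the source lab
    public string EndStation     { get; set; }   // destination workstation
    public string LabwareId      { get; set; }   // identifier encoded in the lid mark
    public int    HolderNumber   { get; set; }   // holder position (1-8) on the workbench
    public ArmTask Task          { get; set; }   // grasp or place
    public double ApproxWeightG  { get; set; }   // approximate labware weight in grams
}
```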

3.       Kinematic Analysis of H20 Robot

The H20 mobile robot has dual 6-DOF arms with 2-DOF grippers. It also has a head with 2 revolute joints (2-DOF). Fig. 6 shows the joint structure of the head and arms. According to Fig. 6, the values of d3 and d5 are 0.236 m and 0.232 m respectively. Also, there is a distance of de = 0.069 m between the wrist joint and the end-effector (E). The Denavit-Hartenberg (D-H) representation is used to describe the translation and rotation relationship between the arm links. In this D-H representation, four parameters describe each link of the manipulator: the link length (ai-1), the link twist (αi-1), the link offset (di), and the joint angle (θi), where (i) refers to the joint number [27]. By following the D-H rules, the homogeneous transformations between adjacent links are defined. The D-H parameters and the rotational limit of each joint of the H20 arms are given in Table 1.

Figure 6: Arms and head structures of the H20 robot.

Table 1: D-H parameters and joint limits.

Left and Right Arms
θi | α(i-1) (L) | α(i-1) (R) | a(i-1) (LR) | di (m) (L) | di (m) (R) | Joint limits (LR)
θ1 | 0° | 0° | 0 | 0 | 0 | -20°~192°
θ2 | 90° | -90° | 0 | 0 | 0 | -200°~-85°
θ3 | 90° | -90° | 0 | -0.236 | 0.236 | -195°~15°
θ4 | -90° | 90° | 0 | 0 | 0 | -129°~0°
θ5 | 90° | -90° | 0 | -0.232 | 0.232 | 0°~180°
θ6 | -90° | 90° | 0 | 0 | 0 | -60°~85°
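Assuming the modified D-H convention implied by the (a(i-1), α(i-1), di, θi) parameterization of Table 1, each link transform takes the standard textbook form below, and the FK solution is the product of the six link transforms followed by the fixed end-effector offset de. This is reproduced here only as a reading aid for Table 1.

```latex
{}^{i-1}T_{i} =
\begin{pmatrix}
\cos\theta_i & -\sin\theta_i & 0 & a_{i-1}\\
\sin\theta_i \cos\alpha_{i-1} & \cos\theta_i \cos\alpha_{i-1} & -\sin\alpha_{i-1} & -d_i \sin\alpha_{i-1}\\
\sin\theta_i \sin\alpha_{i-1} & \cos\theta_i \sin\alpha_{i-1} & \cos\alpha_{i-1} & d_i \cos\alpha_{i-1}\\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad
{}^{0}T_{6} = {}^{0}T_{1}\,{}^{1}T_{2}\,{}^{2}T_{3}\,{}^{3}T_{4}\,{}^{4}T_{5}\,{}^{5}T_{6}.
```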

The solution of the IK problem for the H20 arms has been found using the reverse decoupling mechanism method [11]. As a first step, the inverse of the FK transformation matrix is computed. Eq. (1) shows this inverse matrix of the FK model, where (inx, iny, inz) is the normal vector, (iox, ioy, ioz) is the orientation vector, (iax, iay, iaz) is the approach vector, and (ipx, ipy, ipz) is the position vector.

The equations below give the solution of the IK problem, where (C) and (S) abbreviate the cosine and sine of an angle respectively, (R) and (L) denote the right and left arms respectively, and (atan2) is the two-argument arc tangent function. The equation for joint 4 is found first; the equations for the other joints follow sequentially [13], [28]:

In case the target is outside the arm workspace, complex numbers appear in the solution. Therefore, a real() function is used to keep the real part and discard the imaginary part, which yields the closest solution to the target position [12].

θ = wrapToPi(atan2( · , · ) – γ),      (4)

where γ = atan2(ipy + de, ipx), and the wrapToPi function wraps the angle to the interval between –π and π [12].
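A minimal C# sketch of such a wrapToPi helper, wrapping an arbitrary angle into the interval (–π, π], could look as follows:

```csharp
// Wraps any angle (in radians) into the interval (-pi, pi].
using System;

static class AngleUtil
{
    public static double WrapToPi(double angle)
    {
        double wrapped = Math.IEEERemainder(angle, 2.0 * Math.PI); // result in [-pi, pi]
        if (wrapped <= -Math.PI) wrapped += 2.0 * Math.PI;          // map -pi to +pi
        return wrapped;
    }
}
```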

The IK solutions for the singularity cases have been derived as well. Three singularity cases exist within the joint limits. A singularity occurs when one or more degrees of freedom are lost because some joint axes align with each other, which leads to an infinite number of solutions. The singularity cases of the H20 arms are as follows:

  A) When θ4 = 0 and θ2 ≠ -π: the axis of the third joint is aligned with the fifth joint axis.
  B) When θ2 = -π and θ4 ≠ 0: the axis of the first joint is aligned with the third joint axis.
  C) When θ4 = 0 and θ2 = -π: joints 1, 3, and 5 are collinear.

According to Eq. 2, 3, and 5, there are 8 solutions for a required pose inside the reachable workspace. To choose the suitable one, a selection algorithm has to be used [13]. The forward and inverse kinematics solutions have been validated and simulated using MATLAB with the Robotics Toolbox. For the integration of the kinematic model with the H20 system, the required angle value of each joint has to be converted into the corresponding servo motor position. This step is very crucial since the H20 arm joints are weak and can easily be bent by gravity and payload effects.
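A minimal sketch of such an angle-to-servo conversion is given below. The zero offset and ticks-per-degree values are per-joint calibration constants of the H20 servos that are not listed in the paper, so the numbers passed to this function are placeholders.

```csharp
// Hypothetical joint-angle to servo-position mapping; offsets and scale
// factors would have to be identified for each H20 joint during calibration.
static class ServoMapping
{
    public static int AngleToServoPosition(double jointAngleDeg,
                                           double zeroOffsetDeg,
                                           double ticksPerDegree)
    {
        // servo ticks = (commanded angle - mechanical zero) * resolution
        return (int)System.Math.Round((jointAngleDeg - zeroOffsetDeg) * ticksPerDegree);
    }
}
```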

Related to the H20 head, a stereo vision system is fixed on a pan-tilt module. This module consists of two revolute joints which are used for moving the head to track the desired object. The transformation matrix from the neck to the head is as follows:

Eq. 8 represents the forward kinematic solution of the head for the given joint angles. For the inverse kinematic problem, the given transformation matrix is in the form:

Thus, the IK solution for the 2 joints of the head is as follows:
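Purely as an illustration of this kind of solution, a generic pan-tilt unit whose pan axis is vertical and whose tilt axis is horizontal points its cameras at a target (x, y, z) expressed in the neck frame with the familiar relations below. The axis conventions here are an assumption and are not necessarily identical to the H20 head solution referenced above.

```latex
\theta_{\text{pan}}  = \operatorname{atan2}(y,\; x), \qquad
\theta_{\text{tilt}} = \operatorname{atan2}\!\left(z,\; \sqrt{x^{2}+y^{2}}\right)
```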

4.       Design of Grippers and Labware Containers

For labware grasping and placing with the mobile robots, different grippers and labware containers have been designed. Since the labware contain chemical and/or biological components, any kind of spilling has to be avoided. Therefore, a specific design of grippers and labware containers is required to guarantee secure grasping and placing operations. Fig. 7 shows the initial designs of the grippers and labware containers, which are attached with handles.

Figure 7: Initial designs of grippers and labware containers (A, B, C) [13].

Related to Fig. 7.A, the handle was designed to be detected using RGB color detection [29]. Each handle has a specific single color to distinguish it from the others. The Kinect sensor V1 was used to detect and localize the required handle in order to manipulate the required labware. In Fig. 7.B, the gripper and handle designs were improved. A flat panel on the upper side of the handle was provided for fixing different colored or pictorial marks to distinguish multiple handles. A proper gripper was also designed to fit this handle. The SURF algorithm and HSV color segmentation with area and shape detection were used for the identification tasks [28]. The required handle was localized using the Kinect sensor V2. The maximum payload that can be manipulated using the designs in Fig. 7.A and 7.B is 350 g. The weak wrist joint of the H20 arms as well as the labware container design limit the weight that can be manipulated. Manipulating the handle attached to the container requires a high torque due to the long lever arm at the wrist joint; the lever arm here is the distance between the center of the labware weight and the wrist joint. A vertical handle has been designed to manipulate heavier payloads as shown in Fig. 7.C. In this case, the wrist joint is rotated so that its axis is vertical to the ground. Thus, the lifting process depends on the elbow joint, which provides a higher torque [13]. With this design, a payload of 500 g can be handled. This approach is still not sufficient for the tube racks, which are heavier than 500 g. To cope with this issue, a new manipulation style has been created (see Fig. 8). A new gripper has been designed and the handle has been removed from the container. This brings the end-effector closer to the labware weight center. Thus, less torque is required to handle heavier objects since the lever arm at the wrist joint has been decreased. A rubber pad has been attached to the gripper to increase friction for secure manipulation. The maximum payload which can be handled with this style is about 700 g. Fig. 8 shows the designs of the gripper, the labware container, and its holder.

Figure 8: Final design of gripper and labware container.
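The benefit of shortening the lever arm can be illustrated with the static torque τ = m·g·l at the wrist. The lever-arm lengths used below are assumed values for illustration only, not measurements taken from the designs:

```latex
\tau = m\,g\,l:\qquad
\tau_{\text{handle}} \approx 0.35\,\text{kg}\cdot 9.81\,\tfrac{\text{m}}{\text{s}^2}\cdot 0.15\,\text{m} \approx 0.52\,\text{N·m},\qquad
\tau_{\text{direct}} \approx 0.70\,\text{kg}\cdot 9.81\,\tfrac{\text{m}}{\text{s}^2}\cdot 0.05\,\text{m} \approx 0.34\,\text{N·m}.
```

Under these assumed distances, the direct grip carries twice the payload at a lower wrist torque than the handle-based grip, which is the effect exploited by the final design.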

5.       Blind Manipulation using Sonar Sensor

A manipulation strategy has been implemented using the developed kinematic model with two sonar sensors [13]. The front base of the H20 robot has 3 built-in DUR5200 sonar sensors: one in the middle and the other two on the left and right sides. These sensors provide range information from 4 cm to 340 cm, which can be used for collision avoidance and distance measurement. For the grasping and placing operation, the mobile robot has to stand straight in front of the workstation. The labware container has a specific posture on the workstation: the pitch and roll orientation of the container relative to the robot are fixed, and the yaw orientation has to be ensured by correcting the robot orientation. This can be done using the two sonar sensors on the base sides (see Fig. 9). The distance (Z) from each sensor to the plastic board is checked and the robot rotates until the readings of both sensors are equal. Thus, the distance (Z) between the labware on the workstation and the robot arm base is known. Also, the height (Y) of the workstation relative to the robot is fixed and known. The (X) information of the labware position is provided by the workflow management system [26]. Then, the X positioning feedback has to be mapped to the arm base frame. The X-axis error of the robot position in front of the workstation is about ±2 cm. This can be compensated by the design tolerance of the gripper and labware container [13].
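A minimal sketch of this equalization check is shown below. The spacing between the two side sonar sensors (the baseline) and the stopping tolerance are assumed values, and the sign of the resulting yaw error depends on the actual sensor placement.

```csharp
// Sketch of the yaw correction from the two side sonar sensors.
using System;

static class SonarAlignment
{
    const double BaselineM  = 0.30;    // assumed lateral spacing of the two sonars (m)
    const double ToleranceM = 0.005;   // stop rotating when readings agree within 5 mm

    // Estimated yaw error (rad) of the robot relative to the reflecting board.
    public static double YawError(double leftRangeM, double rightRangeM)
        => Math.Atan2(leftRangeM - rightRangeM, BaselineM);

    // True once both readings are equalized and the robot faces the board.
    public static bool IsAligned(double leftRangeM, double rightRangeM)
        => Math.Abs(leftRangeM - rightRangeM) < ToleranceM;
}
```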

This strategy is not reliable enough for several reasons. The positions of the holders and boards, which reflect the sonar signals, have to be identical for all workstations, and in some stations it is not possible to install such a board due to the environment structure. Obstacles lead to wrong estimates of the distance and orientation of the robot in front of the workstation. Also, there is no accurate value for the position of the required labware along the X-axis due to the lack of positioning feedback in this axis. To cope with all these issues, a visual sensor can be used to identify and localize the target wherever it is located on the workstation.

Figure 9: Two sonar sensors for labware manipulation.

6.       Vision-Based Manipulation using Kinect Sensor

It is very crucial to have an intelligent concept for visual grasping of the required labware, which is then transported to the required place. Stereo vision is one of the common methods to detect a target and estimate its position. However, it is not feasible to use the head stereo cameras of the H20 robot for this task because of the head position relative to the workstation, which leads to unclear views of the wide workstation with its multiple labware container positions. Therefore, the Kinect sensor can be considered an appropriate choice for the visual approach to labware manipulation. It provides high-quality color and depth information directly, without the complicated image processing steps required for stereo vision. The Kinect V2 has been fixed on the H20 body using a holder with a suitable height and tilt angle to guarantee a clear and wide view of the whole workstation (see Fig. 1). The Microsoft Kinect sensor V2 uses time-of-flight technology to provide the depth information. It relies upon an image sensor that indirectly measures the time pulses of laser light need to travel from a laser projector to a target surface and back to the sensor. By using time-of-flight technology, the Kinect V2 can see just as well in a completely dark room as in a well-lit room. The Kinect sensor and its features are shown in Fig. 10 and Table 2.

Table 2: Features of Kinect Sensor V2.

Feature | Kinect V2
RGB camera resolution | 1920 × 1080
Depth camera resolution | 512 × 424
Maximum depth | ~4.5 m
Minimum depth | 50 cm
Horizontal FOV | 70 degrees
Vertical FOV | 60 degrees
Tilt motor | No
USB standard | 3.0
Supported OS | Win 8, Win 10

Figure 10: Kinect sensor V2.

The success of visual manipulation and transportation depends significantly on the success of identification and localization of the required target. The workstation consists of 8 labware holder positions. According to the arm workspace, the robot can deal with 4 holders (2 for each arm) for grasping and placing operations. Therefore, the robot needs 2 positions in front of the workstation to manipulate all 8 holders. It is complicated to differentiate the holders according to their appearance. Also, it is not possible to identify the required holder for the grasping task because it will be covered by a labware with its container. To cope with this issue, an initial strategy for the grasping and placing operation has been implemented. This strategy uses 2 marks as references, one for each robot position in front of the workbench. Each mark is centrally located in front of the related robot position and serves as visual feedback for 4 holders, as shown in Fig. 11. Whenever the robot arrives at the required position, the related mark is recognized and localized using the Kinect V2 with the SURF algorithm. Then, the mark position is offset to find the positions of the 4 holders relative to the robot. The workflow management system sends the plan information to the mobile robot transportation system. This information includes the required task (grasp/place) and the holder number. According to the holder number (1-8), the robot moves to the required position (1 or 2) to perform the task. In this case, each mark is a reference for its 4 related holders: the holders 1, 2, 5, and 6 belong to one robot position, whereas the holders 3, 4, 7, and 8 are related to the other robot position. For this strategy, all the holder positions relative to the marks have to be fixed and identical in the other workstations in the life science laboratories. To cope with this limitation, a specific mark has been attached to each holder for the visual grasping and placing operation, as shown in Fig. 12. The required mark is identified and the position of its center point is derived to guide the robot in the grasping and placing operation using the kinematic model.

Figure 11: Specific mark for each robot position.
Figure 12: Workbench holders with their related marks.
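A sketch of how the four holder positions could be derived from the recognized reference mark is given below. The offset values are placeholders only; the real offsets are defined by the fixed workbench geometry described above.

```csharp
// Deriving holder positions from the localized reference mark (offsets illustrative).
using System.Collections.Generic;
using System.Numerics;

static class HolderLayout
{
    // Offsets of holders relative to the mark centre, in metres (placeholder values).
    static readonly Dictionary<int, Vector3> Offsets = new Dictionary<int, Vector3>
    {
        { 1, new Vector3(-0.30f, 0f, 0.00f) },
        { 2, new Vector3(-0.10f, 0f, 0.00f) },
        { 5, new Vector3(-0.30f, 0f, 0.20f) },
        { 6, new Vector3(-0.10f, 0f, 0.20f) },
    };

    public static Vector3 HolderPosition(int holderNumber, Vector3 markPosition)
        => markPosition + Offsets[holderNumber];
}
```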

For grasping the required labware, the holder number sent to the robot transportation system may be wrong. Also, the required labware may have been moved to another holder by an employee before the mobile robot arrives at the workstation. In these cases, it is crucial to identify and localize the required labware wherever it is located on the workstation. Since the labware and tube racks are covered with transparent or white lids to protect the components from cross contamination, it is not possible to distinguish them on the workbench by appearance. Therefore, a specific mark has been fixed on each labware lid for the identification process. Fig. 13 shows two tube racks, one covered with a plain white lid and the other with a mark fixed on its lid. Different marks have been used to identify multiple labware. Each mark contains the labware information with a particular number for classification purposes. The labware information together with a background picture on the mark gives adequate features to differentiate multiple labware.

Figure 13: Labware lid with and without mark.

Several methods have been tested for object identification and position estimation under different lighting conditions [30]. The tests have been performed for labware identification at the 8 different positions of the workstation. Various procedures have been applied to the Kinect image, such as cropping, contrast and brightness correction, and histogram equalization, as preprocessing steps before applying the SURF algorithm. The histogram equalization increases the global contrast of the image, while the cropping extracts the region of interest (ROI) from the image. The required ROI in this work is the workstation with its labware, holders, and related marks. The cropping step simplifies the recognition process and decreases the required time. The identified labware is indicated by drawing a polygon around its lid mark with a cross specifying the center point, as shown in Fig. 14.

Figure 14: Recognition of the required labware.
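Of the preprocessing steps mentioned above, histogram equalization is simple to sketch in plain C#. The snippet below equalizes a grayscale image given as a byte array; it is an illustration only, not the implementation used in this work.

```csharp
// Grayscale histogram equalization: spreads the intensity distribution so that
// low-contrast images use the full [0, 255] range before feature extraction.
static class Preprocess
{
    public static byte[] EqualizeHistogram(byte[] gray)
    {
        var hist = new int[256];
        foreach (byte p in gray) hist[p]++;

        // Cumulative distribution function mapped back to [0, 255].
        var lut = new byte[256];
        int cumulative = 0;
        for (int i = 0; i < 256; i++)
        {
            cumulative += hist[i];
            lut[i] = (byte)(255.0 * cumulative / gray.Length);
        }

        var result = new byte[gray.Length];
        for (int i = 0; i < gray.Length; i++) result[i] = lut[gray[i]];
        return result;
    }
}
```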

For the grasping task, the labware orientation relative to the robot can be found using the coplanar POSIT algorithm (POS with ITerations, where POS stands for Pose from Orthography and Scaling). It requires the image coordinates of at least 4 points of the desired object, the real model coordinates of these points (so the target has to be known beforehand), and the effective focal length of the used camera [31]. The orientation angle is very important to correct the arm motion in case the robot is not straight in front of the workstation. This has been done by finding the corner positions of the lid mark in image coordinates. Also, the real physical positions of the corners relative to the mark center point have to be calculated. This information is passed to the POSIT algorithm to calculate the difference in orientation between the labware and the Kinect camera or the robot. The orientation angle represents the yaw angle of the robotic arm. Fig. 15 shows the corners of the lid mark which are used to calculate the orientation angle.

Figure 15: Corner detection of the lid mark.
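The sketch below is not the coplanar POSIT computation itself; it only illustrates, under the assumption that the mark lies flat on the lid, how a rough yaw estimate could be cross-checked from the 3D positions of two mark corners once they are available in Kinect camera space.

```csharp
// Simplified yaw estimate from the upper-left and upper-right corners of the
// lid mark expressed in Kinect camera space (metres). NOT the POSIT algorithm.
using System;
using System.Numerics;

static class YawEstimate
{
    public static double FromCorners(Vector3 upperLeft, Vector3 upperRight)
    {
        double dz = upperRight.Z - upperLeft.Z;   // depth difference between corners
        double dx = upperRight.X - upperLeft.X;   // lateral distance between corners
        return Math.Atan2(dz, dx);                // yaw in radians; sign depends on conventions
    }
}
```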

The holders' marks are used for identification and position estimation in the placing task. However, before performing the placing task, the robot has to check whether the holder is already occupied by a labware. Therefore, a checking method has been implemented by fixing the holder mark on a mechanical part. A rotational joint is used to show and hide the mark from the Kinect view to indicate whether the holder is empty or not. In other words, if the Kinect cannot identify the mark of the required holder, this holder is already occupied, as shown in Fig. 16.A. A second method uses a micro switch, a micro servo motor, and a servo controller to rotate the holder mark, as shown in Fig. 16.B.

Figure 16: Checking the availability of the required holder (A, B).

7.       Target Localization and Extrinsic Calibration

The identification step enables the recognition of the required labware or holder according to the related mark. Then, the position of the center point of the mark in image coordinates is derived. This can be done using the corner positions of the polygon drawn around the detected mark. In order to obtain the real (physical) position of this center point relative to the Kinect, mapping steps have to be performed. Since the RGB frame of the Kinect is not identical to the depth frame, the center point has to be mapped from the image frame to its related point in the depth frame. Then, another mapping step relates the point in the depth frame to the Kinect space coordinates. The result of these mapping steps is the real position of the mark center point relative to the Kinect sensor. The X-axis position determines which arm (left or right) is used for the required operation. The position of the center point has to be calibrated to obtain the position of the manipulation point on the labware container. The manipulation point is the position which the end effector of the robotic arm has to reach during the grasping and placing operation. Fig. 17 shows the manipulation point of the labware container on the workstation. For the grasping task, there are different kinds of labware with different heights. Therefore, it is not feasible to depend on the labware height to find the manipulation point position. To cope with this issue, the workstation height, which is fixed and identical for all workstations, is used as a reference to find the manipulation point height.

Figure 17: Manipulation point of the labware container.
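Assuming the Kinect for Windows SDK 2.0 is used, the two mapping steps (color frame to depth frame to camera space) can be combined with the CoordinateMapper as sketched below. The helper name and the per-call buffer allocation are illustrative only.

```csharp
// Sketch: map a detected colour pixel (the mark centre) to a 3D point in Kinect
// camera space, assuming the Kinect for Windows SDK 2.0 CoordinateMapper.
using Microsoft.Kinect;

static class MarkLocalization
{
    // depthData: latest depth frame copied with CopyFrameDataToArray();
    // (col, row): pixel coordinates of the mark centre in the 1920x1080 colour image.
    public static CameraSpacePoint ColorPixelToCameraSpace(
        KinectSensor sensor, ushort[] depthData, int col, int row)
    {
        var cameraPoints = new CameraSpacePoint[1920 * 1080];
        sensor.CoordinateMapper.MapColorFrameToCameraSpace(depthData, cameraPoints);
        return cameraPoints[row * 1920 + col];   // X, Y, Z in metres, Kinect frame
    }
}
```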

The position of the manipulation point relative to the Kinect has to be calibrated to guide the robotic arm to the required point. This is done by an extrinsic calibration step whose purpose is to transform the position information from the Kinect space into the arm shoulder space. Then, the inverse kinematic model is used to find the joint values that guide the end effector to the target. The calibration from Kinect space to arm shoulder space includes a translation and a rotation. These correspond to the difference in position and in the tilt angle (t) between the Kinect coordinates and the shoulder coordinates, which is defined by the Kinect holder as shown in Fig. 18. The Kinect holder has been designed very carefully to be suitable for the grasping and placing operation. The tilt angle of the Kinect helps to reduce lighting effects in the identification process. The holder has to be fixed very firmly to guarantee that there are no changes in the Kinect position and orientation during the robot movements.

Figure 18: Kinect holder.
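Written out, the extrinsic calibration amounts to a fixed rotation and translation. Assuming the tilt t acts about the camera's horizontal axis, the transform from Kinect space to shoulder space can be expressed as follows, with the translation vector determined by the holder geometry:

```latex
\mathbf{p}_{\text{shoulder}} = R_x(t)\,\mathbf{p}_{\text{Kinect}} + \mathbf{T}_{\text{Kinect}\rightarrow\text{shoulder}},
\qquad
R_x(t) = \begin{pmatrix} 1 & 0 & 0\\ 0 & \cos t & -\sin t\\ 0 & \sin t & \cos t \end{pmatrix}.
```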

8.       Labware Transportation Process

The transportation process of multiple labware using the mobile robots follows this scheme. In the 1st step, the navigation system receives the transportation plan from the workflow management system through the robot remote center. In the 2nd step, the robot moves to the required laboratory and workstation; then, the robot scans the workbench to identify the lid mark of the desired labware which has to be transported to the destination lab. In the 3rd step, the position of the target manipulation point relative to the arm shoulder is found. In the 4th step, the robot uses the arm kinematic model to calculate the required joint values from the position information. In the 5th step, the arm moves to the target and grasps the labware container. After the grasping operation, the robot moves to the desired destination to place the labware on the required holder. The mobile robot transportation system has been realized using three main coding platforms: the multifloor navigation system (MFNS) [25], the arm control system (ACS), and the object identification and localization system (OILS). These 3 systems are connected with each other through asynchronous sockets. The 3 coding platforms exchange orders and information using 2 client-server communication models as shown in Fig. 19.

Figure 19: Client-server models for labware transportation.
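A minimal sketch of one such client-server exchange is given below. The host, port, and message strings are hypothetical; the paper only states that the three platforms communicate through asynchronous sockets.

```csharp
// One request/response exchange between two of the coding platforms, e.g. the
// navigation system asking the arm control system to grasp from a holder.
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

static class TransportLink
{
    public static async Task<string> SendCommandAsync(string host, int port, string command)
    {
        using (var client = new TcpClient())
        {
            await client.ConnectAsync(host, port);
            NetworkStream stream = client.GetStream();

            byte[] request = Encoding.UTF8.GetBytes(command);   // e.g. "GRASP;HOLDER=3;WEIGHT=650"
            await stream.WriteAsync(request, 0, request.Length);

            byte[] buffer = new byte[1024];
            int read = await stream.ReadAsync(buffer, 0, buffer.Length);
            return Encoding.UTF8.GetString(buffer, 0, read);     // e.g. "DONE" or "TARGET_NOT_FOUND"
        }
    }
}
```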

One important piece of information which has to be sent to the transportation system is the approximate weight of the manipulated labware. Since the arms of the H20 robot are unstable and have a weak wrist joint, the weight information helps to calibrate the wrist joint. This step is crucial to keep the labware level during the grasping/placing operation and to avoid spilling its contents. Also, two labware holders have been fixed on the H20 body for the left and right arms as shown in Fig. 20. These holders keep the labware in a safe horizontal posture during the transportation tasks. To supply the Kinect sensor with electrical power during the robot movements, a 12 V battery with a voltage and current stabilizer has been installed on the robot base. If the grasping operation is performed with a particular arm (right or left), the placing operation for the grasped labware has to be carried out with the same arm; it is not possible to place with the other arm since the labware is positioned on the H20 holder according to the used arm. Therefore, it is essential to perform the placing task at a position which the required arm can reach. In case the required labware is not in the Kinect view or is outside the arm workspace, feedback is sent to the navigation control. This feedback triggers decisions such as changing the robot position or skipping to the next task. Fig. 21 shows the overall flowchart of the labware manipulation process (grasping/placing). Fig. 22 shows the arm movement steps for grasping the required labware and placing it on the H20 holder.

Figure 20: H20 holder frame for labware transportation.
Figure 21: Flowchart of the labware manipulation process.
Figure 22: Arm movement steps for the grasping operation.

In the grasping operation, the robotic arm moves from the rest configuration to the manipulation point on the labware container. At the rest configuration, the position of the end effector relative to the arm shoulder is (X = 0.56 m, Y = 0 m, Z = 0 m). This position is given in the shoulder coordinates shown in Fig. 6, where X represents the arm length in the rest configuration. Fig. 23 shows the change of the end effector position during the arm movement from the rest configuration to the manipulation point. In this example, the position of the manipulation point relative to the arm shoulder is (X = 0.18 m, Y = 0.46 m, Z = 0.03 m). The approximate time required to reach the manipulation point is about 30 seconds. Fig. 24 shows the path of the end effector in the XY plane for the same example. Furthermore, Table 3 shows the change of the arm joint values in degrees from the rest configuration to the manipulation point.

Figure 23: End effector position versus time.
Figure 24: End effector path in the XY plane.

Table 3: Joints values in degrees for grasping task.

Configuration J1 J2 J3 J4 J5 J6
Rest -90° -90° 90°
Manipulation point 32° -95° -92° -64°

The required time for performing the grasping operation is about 69 seconds, while 59 seconds are required for the placing operation. The work has been developed using Microsoft Visual Studio 2015 with the C# programming language. The project runs on a Windows 10 platform on the H20 tablet. Table 4 shows the overall success rate of the grasping and placing operation for 50 transportation attempts. Fig. 25 shows the GUI of the arm control system.

Table 4: Labware manipulation tests.

Attempts Successful grasp Successful place
50 92% 90%
Figure 25: GUI of the arm control system.

9.       Conclusion

The ability to identify and calculate the position of the target in real time plays an important role in the manipulation process of mobile robots. In this paper, a grasping and placing strategy has been presented to perform multiple labware transportation in life science laboratories. The problem statement and the proposed methodology for the mobile robot transportation system have been discussed. Different labware manipulation strategies have been described which rely on sonar sensors and the Kinect. The identification process for labware and holders, which is based on the SURF method, has been integrated into the transportation system. The pose estimation of the labware relative to the robot is very significant to guide the robotic arm in the grasping and placing operation. The Kinect V2 can be considered a powerful 3D camera which provides position information quickly. A strategy has been proposed to check whether the required holder is occupied or not for the placing operation. The design of the arm gripper and labware container has been improved to handle heavier payloads and to guarantee a secure manipulation of the labware.

Conflict of Interest

The authors declare no conflict of interest.

Acknowledgment

The study is supported by the Federal Ministry of Education and Research (FKZ: 03Z1KN11, 03Z1KI1) and the German Academic Exchange Service (Ph.D. stipend M. M. Ali).  The authors would also like to thank the Canadian DrRobot Company for the support of the H20 mobile robots, Mr. Ali A. Abdullah for controlling the multifloor navigation system with the technical collaboration, and Mr. Lars Woinar for his contribution in the 3D modelling and printing designs.

  1. M. M. Ali, H. Liu, N. Stoll, and K. Thurow, “Multiple Lab Ware Manipulation in Life Science Laboratories using Mobile Robots”, in IEEE International Conference on Mechatronics (MECHATRONIKA), Prague, Czech Republic, 2016, pp. 415-421.
  2. H. Chung, C. Hou, Y. Chen, and C. Chao, “An Intelligent Service Robot for Transporting Object”, in IEEE International Symposium on Industrial Electronics (ISIE), Taipei, Taiwan, 2013, pp. 1–6.
  3. R. O’Flaherty, P. Vieira, M. X. Grey, P. Oh, A. Bobick, M. Egerstedt, and M. Stilman, “Humanoid Robot Teleoperation for Tasks with Power Tools”, in IEEE International Conference on Technologies for Practical Robot Applications, Woburn, MA, 2013, pp. 1–6.
  4. M. Ciocarlie, K. Hsiao, E. G. Jones, S. Chitta, R. B. Rusu, and I. A. Şucan, “Towards Reliable Grasping and Manipulation in Household Environments”, in 12th International Symposium on Experimental Robotics (ISER), Springer Berlin Heidelberg, 2014, pp. 241–252.
  5. B. Graf, U. Reiser, M. Hägele, K. Mauz, and P. Klein, “Robotic Home assistant Care-O-bot® 3-Product Vision and Innovation Platform”, in IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), Tokyo, Japan, 2009, pp. 139–144.
  6. N. Vahrenkamp, D. Berenson, T. Asfour, J. Kuffner, and R. Dillmann, “Humanoid Motion Planning for Dual-arm Manipulation and Re-Grasping Tasks”, in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis, USA, 2009, pp. 2464–2470.
  7. T. J. Tsay, M. S. Hsu, and R. X. Lin, “Development of a Mobile Robot for Visually Guided Handling of Material”, in IEEE International Conference on Robotics and Automation (ICRA), Taipei, Taiwan, 2003, pp. 3397–3402.
  8. H. Liu, N. Stoll, S. Junginger, and K. Thurow, “A Common Wireless Remote Control System for Mobile Robots in Laboratory”, in IEEE Instrumentation and Measurement Technology Conference (I2MTC), Graz, Austria, 2012, pp. 688–693.
  9. H. Liu, N. Stoll, S. Junginger, and K. Thurow, “Mobile Robot for Life Science Automation,” International Journal of Advanced Robotic Systems, Vol. 10, pp. 1-14, 2013.
  10. A. A. Abdulla, H. Liu, N. Stoll, and K. Thurow, “A Backbone-Floyd Hybrid Path Planning Method for Mobile Robot Transportation in Multi-Floor Life Science Laboratories”, in IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Baden-Baden, Germany, 2016, pp. 406-411.
  11. M. A. Ali, H. A. Park, and C. G. Lee, “Closed-form Inverse Kinematic Joint Solution for Humanoid Robots”, in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Taipei, Taiwan, 2010, pp. 704–709.
  12. R. O’Flaherty, P. Vieira, M. Grey, P. Oh, A. Bobick, M. Egerstedt, and M. Stilman, “Kinematics and Inverse Kinematics for the Humanoid Robot HUBO2”, Georgia Institute of Technology, Atlanta, GA, USA, Technical Report, 2013.
  13. M. M. Ali, H. Liu, N. Stoll, and K. Thurow, “Kinematic Analysis OF 6-DOF Arms for H20 Mobile Robots and Labware Manipulation for Transportation in Life Science Labs”, Journal of Automation, Mobile Robotics & Intelligent Systems, vol. 10, no. 4, 2016, pp. 40–52.
  14. W. Chen, Y. Q. Shi, and G. Xuan, “Identifying Computer Graphics using HSV Color Model and Statistical Moments of Characteristic Functions”, in IEEE International Conference on Multimedia and Expo, Beijing, China, 2007, pp. 1123–1126.
  15. Z.-K. Huang and D.-H. Liu, “Segmentation of Color Image using EM Algorithm in HSV Color Space”, in IEEE International Conference on Information Acquisition, Jeju, Korea, 2007, pp. 316–319.
  16. J. R. Sanchez-Lopez, A. Marin-Hernandez, and E. R. Palacios-Hernandez, “Visual Detection, Tracking and Pose Estimation of a Robotic Arm End Effector”, in the Proceeding of the Robotics Summer Meeting, Veracruz, Mexico, 2011, pp. 41–48.
  17. K. Yamazaki, Y. Watanabe, K. Nagahama, K. Okada, and M. Inaba, “Recognition and Manipulation Integration for a Daily Assistive Robot Working on Kitchen Environments”, in IEEE International Conference on Robotics and Biomimetics, Tianjin, China, 2010, pp. 196–201.
  18. D. G. Lowe, “Object Recognition from Local Scale-Invariant Features”, in IEEE International Conference on Computer Vision, Corfu, Greece, 1999, pp. 1150-1157.
  19. H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, “SURF: Speeded Up Robust Features”, Journal of Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 346–359, 2008.
  20. E. Rosten and T. Drummond, “Machine Learning for High-Speed Corner Detection”, in Computer Vision–ECCV 2006, Springer, pp. 430–443.
  21. C. H. Chen, H. P. Huang and S. Y. Lo, “Stereo-Based 3D Localization for Grasping Known Objects with a Robotic Arm System”, in the World Congress on Intelligent Control and Automation (WCICA), Taipei, Taiwan, 2011, pp. 309–314.
  22. T. Grundmann, R. Eidenberger, M. Schneider, M. Fiegert, and G. v Wichert, “Robust High Precision 6D Pose Determination in Complex Environments for Robotic Manipulation”, in IEEE International Conference on Robotics and Automation, Alaska, 2010, pp. 1–6.
  23. L. T. Anh and J. B. Song, “Object Tracking and Visual Servoing using Features Computed from Local Feature Descriptor”, in International Conference on Control Automation and Systems (ICCAS), Gyeonggi, South Korea, 2010, pp. 1044-1048.
  24. J. Stueckler, R. Steffens, D. Holz, and S. Behnke, “Real-Time 3D Perception and Efficient Grasp Planning for Everyday Manipulation Tasks”, In Proceedings of 5th European Conference on Mobile Robots (ECMR), Örebro, Sweden, 2011, pp. 177-182.
  25. A. A. Abdulla, H. Liu, N. Stoll, and K. Thurow, “A New Robust Method for Mobile Robot Multifloor Navigation in Distributed Life Science Laboratories”, J. Control Sci. Eng., vol. 2016, Jul. 2016.
  26. X. Gu, S. Neubert, N. Stoll and K. Thurow, “Intelligent Scheduling Method for Life Science Automation Systems”, in IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Baden-Baden, Germany, 2016, pp. 156-161.
  27. J. Denavit and R. S. Hartenberg, “A Kinematic Notation for Lower-Pair Mechanisms Based on Matrices”, ASME Journal of Applied Mechanics, vol. 22, 1955, pp. 215 -221.
  28. M. M. Ali, H. Liu, N. Stoll, and K. Thurow, “Intelligent Arm Manipulation System in Life Science Labs Using H20 Mobile Robot and Kinect Sensor”, in IEEE International Conference on Intelligent Systems (IS’16), Sofia, Bulgaria, 2016, pp. 382-387.
  29. M. M. Ali, H. Liu, N. Stoll, and K. Thurow, “Arm Grasping for Mobile Robot Transportation using Kinect Sensor and Kinematic Analysis”, in IEEE International Conference on Instrumentation and Measurement Technology (I2MTC), Pisa, Italy, 2015, pp. 516–521.
  30. M. M. Ali, H. Liu, N. Stoll, and K. Thurow, “An Identification and Localization Approach of Different Labware for Mobile Robot Transportation in Life Science Laboratories”, in IEEE International Symposium on Computational Intelligence and Informatics (CINTI), Budapest, Hungary, 2016, pp. 353-358.
  31. D. Oberkampf, D. F. DeMenthon, and L. S. Davis, “Iterative Pose Estimation Using Coplanar Feature Points”, Computer Vision and Image Understanding, vol. 63, 1996, pp. 495–511.
