1. Introduction
Technological progress has changed human life considerably over the past few decades, and robot systems are among the technologies that have advanced the most. Intelligent robots are now used in daily life for entertainment, medical care, home security, and services in many other fields. They integrate electronics, mechanics, control, automation, and communication technologies, and different types of robots have been developed in recent years to meet a variety of needs. The development of robot systems draws on the theoretical expertise of many disciplines, and related studies and applications are extensive, including obstacle avoidance, path planning, and visual image processing. Improving the accuracy of a robot's performance is one of the main foci in the field of intelligent robotic control. The omnidirectional wheeled mobile robot (WMR) is one of the most widely discussed mobile robot models; compared with a conventional WMR, it offers advantages in its mode of motion, ease of control, and high mobility [1,2,3,4,5]. The objective of this study is to design an intelligent system that enables an omnidirectional WMR to search for and track moving objects automatically. While an omnidirectional WMR is working, it must adapt to its environment and carry out its mission efficiently. Many researchers have studied intelligent robot control, covering aspects such as obstacle avoidance, path planning, and object tracking. Lin implemented obstacle avoidance and Zigbee control functions for an omnidirectional mobile robot [6]. Tsai applied an omnidirectional mobile robot to remote monitoring [7]. Juang presented wheeled mobile robot obstacle avoidance and object-following control based on real-time image tracking and fuzzy theory [8]. Paola proposed multi-sensor surveillance of indoor environments by an autonomous mobile robot; the robot was equipped with a monocular camera, a laser scanner, encoders, and an RFID device, used a multi-layer decision and control scheme, and applied fuzzy logic to integrate information from the different sensors [9]. Zhong utilized an omnidirectional mobile robot for map-building applications [10]. Lee et al. proposed three fuzzy control systems for obstacle avoidance, target seeking, and wall following, integrated them in a mobile robot, and applied the robot to home security patrol [11]. Chen presented intelligent strategies for a WMR to avoid obstacles and move to a target location: in a short-distance obstacle avoidance mode, the WMR used ultrasonic sensor signals to avoid obstacles, while in a target-driven obstacle avoidance mode, fuzzy theory with the sensor signals was used to control the speed of the WMR and drive it toward the target location [12].
One of the purposes of this study is to use real-time images from a webcam for target recognition and tracking, including color identification of the destination room. For image recognition, we use fuzzy color histogram classification, which can quickly and accurately recognize the background colors and patterns of the environment. Martinez introduced histograms based on fuzzy naturals over fuzzy color spaces: the histograms were fuzzy probability distributions on a fuzzy color space, where the fuzzy probability was calculated as the quotient between a fuzzy natural number, a fuzzy (non-scalar) cardinality of a fuzzy set, and the number of pixels in the image [13]. Chang developed a fuzzy color histogram generated by self-constructing fuzzy clustering that reduces the interference of lightness changes in the mean shift tracking algorithm; their experimental results showed that the approach was more robust than conventional mean shift tracking with only a small increase in computation time [14]. Puranik presented an image processing approach whose output can be either an image or a set of characteristics or parameters related to the image. The color vision system first classified the pixels in a given image into a discrete set of color classes; the objective was to produce a fuzzy system for color classification and image segmentation with the fewest rules and the minimum error rate, where fuzzy sets were defined for the hue, saturation, and lightness (HSL) components of the color space [15]. Chien proposed an image segmentation scheme based on a fuzzy color similarity measure to segment out meaningful objects in an image according to human perception. The scheme defined a set of fuzzy colors based on the HLS color coordinate space, and each pixel in an image was represented by the fuzzy colors most similar to it in a human-selected color palette. A fuzzy similarity measure was then proposed for evaluating the similarity of the fuzzy colors of two pixels, and adjacent pixels were recursively merged into meaningful objects until no similar colors remained between adjacent pixels [16]. Cho proposed a scheme with swimmer detection and swimmer tracking stages: the detection stage first employed the mean shift algorithm to cluster the input image, then chose the clusters using graphical models to train a Gaussian mixture model, and finally detected swimmers in a model-based way [17].
To identify a human object, Wong presented a human detection algorithm embedded in a thermal imaging system to form a fast and effective trespasser detection system for smart surveillance purposes; a pattern recognition technique was employed in which the human head was detected and analyzed for its shape, dimension, and position [18]. Zalid presented fusion of data acquired from a CCD camera and a thermal imager; the fusion was realized by means of spatial data from a TOF camera to ensure a "natural" representation of the robot's environment, so that the thermal and color-related data comprised one stereo image presented on a binocular head-mounted display [19]. Kraubling developed four methods to identify persons during tracking: the first used color information extracted from camera images to distinguish between persons based on the color of their clothes; the second used reflectance intensities provided directly by laser range sensors; the third was also based on camera information but employed a probabilistic shape and motion model for each tracked person; and the last used a network of dedicated sensors that directly transmitted identification information for each person [20]. Tzeng proposed a system containing three components: image capture, image pre-processing, and environment determination. The image was first captured by a webcam; image pre-processing then extracted the moving pixel area by background subtraction, followed by smoothing filtering and image binarization; finally, moving objects were determined by calculating the variations within the moving region of the image [21]. Chung developed a computer-vision-based moving object detection and automatic tracking system. He used the temporal frame difference technique to find the moving pixels for moving object detection, which can quickly locate all moving pixels and improve computational efficiency. However, with a dynamic background the moving pixels also include the background, so he first applied global motion compensation to reduce the moving-background effect, then calculated the standard deviation and maximum of each block; after statistical analysis, the moving object area was obtained. Mean shift iteration was then used to accurately and efficiently locate the most similar image mass center and accomplish moving object tracking [22]. Machida et al. proposed a tracking control system for human motion using a Kinect onboard a mobile robot: the 3D position information on humans obtained from the Kinect was used to control the velocity and attitude of the robot, a Kalman filter was applied to reduce noise and estimate the human's motion state, and the mobile robot was controlled to track moving humans within the Kinect's view [23]. Šuligoj et al. proposed a frame relative displacement method in which a multi-agent robot system can be used for tracking, tooling, or handling operations with stereo vision in an unstructured laboratory environment. The relative position between the robot's tool center point and the object of interest is essential information for the robot system. The system has two robot arms: one carries a stereo vision camera system and the other is guided in relation to the object of interest, with a marker used for navigation between the robot and the object. Image processing, marker detection, 3D coordinate extraction, coordinate system transformations, offset coordinate calculation, and communication are handled with a C++ program and the TCP/IP protocol [24]. Our study differs in that a cheaper webcam is used to capture moving objects by means of the mean shift method, information on the detected object is sent to a control center via the installed Wi-Fi communication, and an omnidirectional WMR, rather than a two- or four-wheel-driven robot, is used for easy direction control.
An intelligent omnidirectional WMR integrates many functions, such as environment sensing, dynamic decision-making and tracking, and behavior control and execution. In WMR movement control, the ultrasonic sensor is one of the most commonly used sensors for obstacle avoidance [8]. In [25], fuzzy logic was used to form parking behavior patterns for path planning, and reference values were searched by a genetic algorithm to obtain optimal WMR tracking performance. Lu applied a laser sensor and the A* algorithm to a WMR so that it could avoid obstacles and plan its moving path [5]. Many researchers have tried to make robot systems easier to control, which helps humans and robots interact more harmoniously in daily life. The main purpose of this study is to integrate a webcam, ultrasonic sensors, and an RFID reader in two omnidirectional WMRs for indoor patrol. In path planning, the WMR uses the webcam to search for predefined doors for destination tracking and uses the ultrasonic sensors for obstacle detection. Fuzzy color histogram classification is used to separate moving objects from the background environment, and a mean shift algorithm is applied to distinguish the object from the background and track it. The RFID reader reads the tag on a door and checks whether it is the destination. Through image recognition and path planning, the omnidirectional WMR can automatically search for and track moving objects. The omnidirectional WMR is a nonlinear model, so a dynamic equation is needed for system analysis and control design; a fuzzy system is applied in the controller design to handle this nonlinearity, and the resulting control scheme is simple and flexible. As the WMR moves, the ultrasonic sensors receive reflected distance values, which are used to determine the locations of obstacles and serve as inputs to the fuzzy controller for obstacle avoidance. The mean shift method finds the highest color density of sample points in the fuzzy color space of the target object and keeps tracking it. The contributions of this study are as follows: the proposed control scheme can control the omnidirectional WMR to search for and track moving objects with real-time image processing and to plan a path automatically; the RFID reader re-identifies the destination as the omnidirectional WMR moves toward it; obstacle avoidance is handled by a fuzzy controller; intruders can be tracked by the mean shift algorithm, and the image of an intruder can be sent from the robot to the control center via Wi-Fi communication; and the proposed dual-robot patrol system can perform indoor security service and free up human resources.
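As a rough illustration of the tracking step only, the following Python sketch shifts a search window toward the local maximum of a color-probability map. The probability map (assumed here to come from back-projecting the target's fuzzy color histogram onto the current frame) and the window format are illustrative assumptions, not the authors' implementation.

```python
# Minimal mean shift window tracking sketch over a color-probability map.
# `prob` is a 2-D array in which each pixel holds the likelihood that it
# belongs to the target's fuzzy color class (assumed to come from
# back-projecting the fuzzy color histogram onto the frame).
import numpy as np

def mean_shift(prob, window, max_iter=20, eps=1.0):
    """Shift an (x, y, w, h) window toward the local maximum of color density."""
    x, y, w, h = window
    for _ in range(max_iter):
        roi = prob[y:y + h, x:x + w]
        total = roi.sum()
        if total == 0:                      # no target color inside the window
            break
        ys, xs = np.indices(roi.shape)
        cx = (xs * roi).sum() / total       # centroid of the color density
        cy = (ys * roi).sum() / total
        dx = cx - (w - 1) / 2.0             # displacement of the window center
        dy = cy - (h - 1) / 2.0
        x = min(max(int(round(x + dx)), 0), prob.shape[1] - w)
        y = min(max(int(round(y + dy)), 0), prob.shape[0] - h)
        if dx * dx + dy * dy < eps * eps:   # converged on the density peak
            break
    return x, y, w, h
```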
This paper is divided into five sections. The first section is this introduction. The other four sections are as follows.
Section 2 introduces image processing methods, which include fuzzy color space and mean shift tracking.
Section 3 introduces the structure of the control scheme. Fuzzy control is applied to obstacle avoidance and tracking moving objects. RFID is applied to verify the destination.
Section 4 shows the results of this study. By image processing and path planning, the omnidirectional wheeled mobile robot can automatically search for and track moving objects.
Section 5 discusses the advantages and disadvantages of the method based on the results of this study and gives the conclusions; it also provides future research directions and suggestions.
3. Control Scheme
The proposed control scheme is verified experimentally on an omnidirectional WMR, as shown in Figure 9. The chassis of the omnidirectional WMR has a radius of 240 mm; three omni wheels are spaced 120° apart and driven by three 12 V DC motors with a rated torque of 68 mNm. The robot is 600 mm long, 400 mm wide, and 850 mm high. It has mechanical arms and fingers that can grip objects, with lengths of 470 mm and 150 mm, respectively. The robot arms use six MX-64 motors, two in each shoulder and one in each elbow. The MX-64 is 40.2 mm long, 41 mm wide, and 61.1 mm high, with a stall torque of 6.0 N·m. There are two MX-28 motors, one at each wrist. The MX-28 is 35.6 mm long, 35.5 mm wide, and 50.6 mm high, with a stall torque of 2.5 N·m. The motors are shown in Figure 10.
The omnidirectional WMR kinematic model [6,7] is described as follows. First, we assume the WMR is set on a coordinate system as shown in Figure 11, where $v$ is the speed in the target direction, $v_1$, $v_2$, and $v_3$ are the forward speeds of the three wheels, $v_x$ and $v_y$ are the decomposition of $v$, and $\theta$ is the angle between the target and the WMR.
In Figure 12, the vector $v$ consists of $v_x$ and $v_y$, so the speed $v$ in the target direction is obtained as in Equation (15):

$v = \sqrt{v_x^{2} + v_y^{2}}$  (15)

The components $v_x$ and $v_y$ for the $x$ and $y$ directions are then obtained as in Equations (16) and (17):

$v_x = v\cos\theta$  (16)

$v_y = v\sin\theta$  (17)
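For illustration, this decomposition (under the reconstruction of Equations (15)–(17) above) can be written as a short Python sketch; the function names are ours.

```python
# Sketch of the velocity decomposition in Equations (15)-(17): the
# target-direction speed v is split into x/y components using the angle
# theta between the target and the WMR heading.
import math

def target_speed(vx, vy):
    """Recover the target-direction speed v from its components, Equation (15)."""
    return math.hypot(vx, vy)

def decompose_speed(v, theta):
    """Return (vx, vy) for a target-direction speed v and angle theta (rad)."""
    vx = v * math.cos(theta)   # Equation (16)
    vy = v * math.sin(theta)   # Equation (17)
    return vx, vy
```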
Six ultrasonic sensors are installed at the front of the second layer of the chassis; they are PARALLAX PING ultrasonic distance sensors with a detection range of approximately 2 cm to 3 m, a burst frequency of 40 kHz, a current of 30 mA, and a supply voltage of 5 V, as shown in Figure 13. The sensors measure distances and are used to detect and avoid obstacles. Fuzzy theory is applied to the obstacle avoidance control. The fuzzy control structure has three inputs and one output: the ultrasonic signals S1, S3, and S6 are the inputs, each valued as a distance, with fuzzy sets F (far), M (medium), and N (near); the output DA is the steering angle to the left or right, with fuzzy sets VL, L, S, R, and VR. The fuzzy rules are listed below, and the fuzzy sets are shown in Figure 14.
Rule 1: If S1 is N and S3 is N and S6 is M, then DA is VR.
Rule 2: If S1 is N and S3 is N and S6 is F, then DA is VR.
Rule 3: If S1 is N and S3 is M and S6 is N, then DA is VR.
Rule 4: If S1 is N and S3 is M and S6 is M, then DA is VR.
Rule 5: If S1 is N and S3 is M and S6 is F, then DA is R.
Rule 6: If S1 is N and S3 is F and S6 is N, then DA is R.
Rule 7: If S1 is N and S3 is F and S6 is M, then DA is R.
Rule 8: If S1 is N and S3 is F and S6 is F, then DA is S.
Rule 9: If S1 is M and S3 is N and S6 is N, then DA is L.
….
Rule 26: If S1 is F and S3 is F and S6 is M, then DA is S.
Rule 27: If S1 is F and S3 is F and S6 is F, then DA is S.
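To illustrate how such a rule base can be evaluated, the following Mamdani-style Python sketch fuzzifies the three distances, fires a small subset of the 27 rules, and defuzzifies with a weighted average. The membership breakpoints, the steering angle assigned to each output set, and the reduced rule table are illustrative assumptions, not the values used in the paper.

```python
# Minimal sketch of the obstacle-avoidance fuzzy controller: distances from
# S1, S3, S6 are fuzzified into N/M/F, a rule table maps them to the steering
# output DA (VL/L/S/R/VR), and a weighted average gives the crisp angle.
def falling(x, a, b):
    """Left-shoulder membership: 1 below a, 0 above b, linear in between."""
    return 1.0 if x <= a else 0.0 if x >= b else (b - x) / (b - a)

def rising(x, a, b):
    """Right-shoulder membership: 0 below a, 1 above b, linear in between."""
    return 1.0 - falling(x, a, b)

def tri(x, a, b, c):
    """Triangular membership with peak at b and feet at a and c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(d):
    """Distance (cm) -> membership degrees in N (near), M (medium), F (far)."""
    return {"N": falling(d, 20, 60), "M": tri(d, 30, 70, 110), "F": rising(d, 80, 150)}

# Assumed steering angle (deg) per output set; negative = left, positive = right.
DA_VALUE = {"VL": -60, "L": -30, "S": 0, "R": 30, "VR": 60}

# A few of the 27 rules, written as (S1 set, S3 set, S6 set) -> DA set.
RULES = {
    ("N", "N", "M"): "VR",  # Rule 1
    ("N", "N", "F"): "VR",  # Rule 2
    ("N", "F", "F"): "S",   # Rule 8
    ("M", "N", "N"): "L",   # Rule 9
    ("F", "F", "F"): "S",   # Rule 27
}

def steering_angle(s1, s3, s6):
    m1, m3, m6 = fuzzify(s1), fuzzify(s3), fuzzify(s6)
    num = den = 0.0
    for (a, b, c), out in RULES.items():
        w = min(m1[a], m3[b], m6[c])         # rule firing strength (AND = min)
        num += w * DA_VALUE[out]
        den += w
    return num / den if den > 0 else 0.0     # weighted-average defuzzification
```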
A flowchart of the control sequence is shown in Figure 15. First, the robot searches for the predefined door in the experimental environment. When it detects the target with the specified color, it moves forward, continuously correcting its path from the detected image region on the way to the objective. When the robot reaches the specified distance from the door, it corrects its angle to aim at the door, recognizes the room color, and checks whether it has reached the destination. If the room color represents the desired destination, the robot moves to the door and the RFID reader, shown in Figure 16, reconfirms the target at the specified distance. If the room is not the destination, the robot searches for the next similar target. The RFID check verifies the destination again to make sure it is correct. When the robot is 500 mm away from the door, it turns to patrol the corridor. When an object appears in the corridor and is intercepted, an image of the object is transmitted to the control center via Wi-Fi. Staff in the control center then check the object; if it is an intruder, they send a security guard to the scene to ensure safety. If the object is not an intruder, the robot returns to the starting position along the planned route.
A team of two robots is presented in this subsection. The two robots use the same path planning and tracking control scheme. When the first robot finds an intruder, it sends a warning message to the control center at the remote site and to the second robot, which acts as a backup security guard.
Figure 17 shows the initial setup and experimental environment, with the starting location and destination room given. In Figure 18, the target door is not detected at the initial position, so the robot turns left to face the door. When the DL section detects the target, the robot turns left and moves forward; if the DR section detects the target, the robot turns right; and when the DM section detects the target, the robot stops turning and moves forward to the door, as shown in Figure 19.
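As a rough sketch of this turn decision, and assuming DL, DM, and DR denote the left, middle, and right sections of the camera frame (the boundaries below, thirds of the frame, are our assumption), the logic can be written as:

```python
# Sketch of the turn decision based on which image section (DL: left,
# DM: middle, DR: right) contains the detected door color. Section
# boundaries at thirds of the frame are an assumption for illustration.
def turn_command(target_x, frame_width):
    """Map the horizontal position of the detected target to a motion command."""
    if target_x < frame_width / 3:        # target in DL section
        return "turn_left"
    if target_x > 2 * frame_width / 3:    # target in DR section
        return "turn_right"
    return "move_forward"                 # target in DM section
```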
The RFID reader and tags are used to verify the destination. When the omnidirectional WMR approaches a door, the RFID reader checks the tag's UID; each RFID tag has its own UID code, and each UID corresponds to a certain room number. A tag posted on a door is shown in Figure 20.
In Figure 21, the RFID reader reads the tag's UID to verify the destination at the specified distance. If it is the destination, the robot moves forward to the door.
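A minimal sketch of this UID-based check follows; the UID strings and room numbers are made-up examples, not values from the paper.

```python
# Sketch of the RFID destination check: each tag UID maps to a room number,
# and the robot compares the room of the scanned tag with its goal room.
# The UIDs and room numbers below are illustrative placeholders.
UID_TO_ROOM = {
    "04A3B21F": "Room 101",
    "04C77D02": "Room 102",
}

def is_destination(scanned_uid, goal_room):
    """Return True when the scanned tag belongs to the goal room."""
    return UID_TO_ROOM.get(scanned_uid) == goal_room
```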
Figure 22 shows that the robot turns right if the RFID verification result is correct. In Figure 23, the webcam and ultrasonic sensor verify the destination at a specified distance; if it is the destination, the second robot moves forward to the door. Figure 24 shows the second robot turning right after the webcam verifies the correct result.
In Figure 25, the second robot moves forward along the wall. If the robot gets too close to the wall, the WMR is controlled to move away from the wall.
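As a small illustration of this correction, the sketch below steers the robot away from the wall in proportion to how far the measured side distance deviates from a desired distance; the setpoint and gain are assumptions for illustration.

```python
# Proportional wall-distance correction: a positive result steers the robot
# away from the wall. Desired distance and gain are assumed values.
def wall_follow_correction(side_distance_cm, desired_cm=40.0, gain=0.5):
    """Return a steering correction (deg) to keep the desired wall distance."""
    return gain * (desired_cm - side_distance_cm)
```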
Figure 26 shows a robot in the corridor environment.
In Figure 27, the twin robots patrol the corridor and check whether there is an intruder. At the other end of the corridor, a man walks in front of the robot. The proposed control scheme can detect the environmental colors and find the intruder. When an object appears in the corridor and is intercepted, the image of the object and an alarm signal are transmitted to the control center and to the second robot via a DataSocket server connection to the specified IP address over Wi-Fi, as shown in Figure 28 and Figure 29.
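The paper transmits the image through a DataSocket server connection over Wi-Fi; as a rough stand-in only, the sketch below pushes the captured image bytes over a plain TCP socket to an assumed control-center address.

```python
# Stand-in for the image transmission step: the paper uses a LabVIEW
# DataSocket server over Wi-Fi, whereas this sketch simply sends the captured
# image bytes over a TCP socket to an assumed control-center address/port.
import socket

CONTROL_CENTER = ("192.168.0.10", 5005)   # assumed address of the control center

def send_intruder_image(image_path):
    with open(image_path, "rb") as f:
        data = f.read()
    with socket.create_connection(CONTROL_CENTER, timeout=5) as conn:
        conn.sendall(len(data).to_bytes(4, "big"))  # length prefix, then payload
        conn.sendall(data)
```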
In Figure 30, when the control center receives the object detection message and the intruder is confirmed, security guards move to the scene and check the environmental conditions.
5. Conclusions
In this study, we propose an intelligent scheme based on fuzzy control, RFID, image processing, fuzzy color classification, and mean shift tracking to control an omnidirectional WMR for indoor patrol. In image processing, we integrate the webcam and vision builder to process images and navigate the robot to the destination. A normal image is in the RGB color space; however, the composition of the three color components and the resulting colors are not intuitive and are susceptible to light interference, so we use the HSL color space to reduce the effect of light interference. We apply fuzzy color histogram classification to recognize all colors in the pattern image and to define the background color. Mean shift tracking finds the highest color density of sample points in the fuzzy color space of the target objects, so the objects can be tracked continuously. In the control scheme, we used a human-machine interface built in LabVIEW 2014 (National Instruments Corporation, Austin, TX, USA) and integrated MATLAB 2015b (The MathWorks, Inc., Natick, MA, USA) code and image information from vision builder to control the robot to track the target and reach the destination. The RFID reader and the tag's UID verify the destination and avoid recognition errors from image processing. A fuzzy controller drives the WMR to avoid obstacles easily. This study develops a dual-robot patrol system in which the second robot acts as a backup to the first robot while on patrol, and both robots can send intruder information to a remote site via the Internet. The experimental results show that the proposed control scheme enables the dual-robot system to perform indoor security service, thereby freeing up human resources. Further improvements will be to have the robots extinguish unnecessary sidewalk lights and lock unlocked office doors. Although the goal of this study has been achieved, a disadvantage of the proposed system is that the robots can only be applied to indoor service; in the future, GPS will be installed on the robots so that they can also perform outdoor patrols.