1. Introduction
With the development of computer science, mechanics, electronics, and materials, miniaturized robots have become increasingly feasible. In recent decades, micromanipulation has attracted growing attention due to increasing demand from industrial and scientific research areas such as micro-assembly and biological engineering [1,2]. Micromanipulation technology can be applied to micro-electromechanical systems (MEMSs), atomic force microscopy, scanning probe microscopy, and wafer alignment, among others [3,4,5,6].
Position control and force control are necessary for many of these applications because the manipulated objects, such as living cells and microwires, are often very fragile. Accurate position and force control have to rely on closed-loop feedback control. However, operations on the microscale are not as easy as those on the macroscale. Sensors with high precision and small size are either extremely expensive or do not exist. Biocompatibility may also need to be considered in some environments. Fortunately, the development of computer science and image processing technology has enabled micro-vision to replace traditional position and force sensors. Compared with traditional sensors, micro-vision has the advantages of extensibility, flexibility, multi-DOF measurement, lower cost, noncontact measurement, and ease of installation [7,8]. Other sensors, such as laser measurement sensors, have very strict installation requirements, and the mechanism may even need to be redesigned. Unlike capacitive or piezoelectric sensors, a micro-vision system can measure multiple DOFs, even at different locations. The micro-vision system only obtains images, either directly from a camera or from a camera mounted on a microscope. The authors of [9] demonstrated the capability of a micro-vision system for displacement measurement based on a CCD camera and a microscope, achieving an accuracy of 0.283 µm. Another study [10] presented a 3-DOF micro/nano positioning system, which employs a micro-vision system for real-time position feedback. With the assistance of a microscope, the image resolution reached 56.4 nm at a 30 Hz sampling frequency.
In addition to position measurement, micro-vision also performs well in force measurement. Different types of vision-based force measurement methods have been proposed, including investigating the relationship between force and displacement, building analytical mathematical models relating force and deformation, and slip margin analysis, among others [11,12,13]. The general idea is to retrieve the force from the deformation measured in the images. Two common methods for deformation measurement are template matching and edge detection. The authors of [14] used two CCD cameras to accomplish accurate position feedback and simple force feedback in a microgripper system. An external camera was used for coarse positioning, while the other camera, combined with a microscope, provided precise position measurements; template matching was used for a simple force estimation. The authors of [15] proposed a vision-based force measurement method that converted the measurement of a force distribution into the measurement of a linearly elastic object's contour deformation. A deformable template matching method was used to retrieve the force, and a microgripper force resolution of ±3 mN was achieved. Another study [2] utilized finite-element analysis to determine the relationship between microgripper deformation and force. As a result, the displacement and force were related linearly, which reduced the force measurement problem to one of displacement measurement. The authors of [13] demonstrated a microgripper whose force-sensing arm was designed with a compliant right R joint. The moment and deformation angle were fitted with a first-order equation, and the deformation of the microgripper was obtained by Sobel edge detection. In [16], a 3D finite-element model was used to obtain the relationship between the force and deformation of the microgripper; the nonlinear relation was fitted with a second-order equation, and pattern identification was performed to obtain the relative displacement of the microgripper. The authors of [17] proposed a vision-based force measurement method based on an artificial neural network, trained on the geometric features of biological cells extracted from images together with the measured force. Based on zebrafish embryos, a cellular force analysis model was created in [18]; the injection force was obtained according to the measured cell post-deformation. With the assistance of a microscope and camera, the vision-based force-sensing system achieved a 3.7 µN resolution at 30 Hz.
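To make the general idea concrete, the following is a minimal MATLAB sketch of force retrieval from vision-measured deformation via a fitted force-deformation relation, in the spirit of the linear and second-order fits reported in [2,16]; the calibration data and coefficients are hypothetical and purely illustrative, not taken from any of the cited works.

```matlab
% Hypothetical calibration data: jaw-tip deformation (um) vs. measured force (uN)
deform_cal = [0 2 4 6 8 10];            % deformation samples (um)
force_cal  = [0 3.1 6.4 9.9 13.6 17.5]; % corresponding forces (uN)

% Fit a second-order polynomial F = p(1)*d^2 + p(2)*d + p(3); a first-order
% fit corresponds to a linear force-deformation relation.
p = polyfit(deform_cal, force_cal, 2);

% At run time, the deformation measured from the images (e.g., by template
% matching or edge detection) is mapped directly to an estimated force.
deform_measured = 5.3;                  % um, as obtained from the vision system
force_estimated = polyval(p, deform_measured);
fprintf('Estimated grasping force: %.2f uN\n', force_estimated);
```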
Force and position control are other vital problems in the micromanipulation process. Different types of control strategies and methods have been presented to meet the challenge of a smooth transition between position and force control. A position and force switching control was adopted in [19]. An incremental PID and a discrete sliding mode control (SMC) were utilized in the position and force control stages, respectively. The indicator for switching from position to force control was contact between the microgripper and the object, detected when the sensed force exceeded a certain threshold. Similarly, an incremental-based force and position control strategy was proposed in [20]. An incremental PID and an incremental discrete sliding mode control were used in the position and force control stages, and a state observer was no longer necessary. Rather than using two types of controllers, the authors of [21] adopted a single controller to accomplish both position and force control based on impedance control, which enabled it to work in high-order and high-speed systems. Some advanced control strategies, such as fuzzy control, hybrid control, and neural network control [22,23,24,25], have also been presented, although some of them are either difficult to implement or computationally expensive.
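As a concrete illustration of the threshold-based switching idea in [19,20], the following minimal MATLAB sketch combines an incremental PID in the position stage with a simple proportional force law after contact; all gains, the toy contact model, and the signal values are assumptions made only to keep the example self-contained, and the cited works use a sliding mode controller rather than the proportional force law shown here.

```matlab
% Threshold-based position/force switching control (illustrative sketch only)
N = 400; Ts = 0.025;              % 40 Hz sampling over a 10 s horizon
x_ref = 100;                      % desired jaw position (um)
F_ref = 10;                       % desired grasping force (uN)
F_threshold = 1.0;                % contact detection threshold (uN)
x_contact = 95; k_contact = 2.5;  % toy contact model: F = k_contact*(x - x_contact)

Kp = 0.8; Ki = 0.3; Kd = 0.05;    % incremental PID gains (position stage)
Kf = 0.4;                         % proportional gain (force stage)

x = 0; u = 0; e = zeros(1,3);     % position, control output, error history
for k = 1:N
    F = max(0, k_contact*(x - x_contact));           % sensed contact force (uN)
    if F < F_threshold
        % Position control stage: incremental (velocity-form) PID
        e = [x_ref - x, e(1:2)];
        u = u + Kp*(e(1)-e(2)) + Ki*e(1) + Kd*(e(1)-2*e(2)+e(3));
    else
        % Force control stage, entered once the sensed force exceeds the threshold
        u = Kf*(F_ref - F);
    end
    x = x + u*Ts;                                     % toy first-order plant
end
fprintf('Final position %.2f um, final force %.2f uN\n', x, F);
```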
Although many hardware systems have been presented, few simulation systems exist. Rather than working on a hardware setup, this system was implemented as a software simulation. Compared with hardware systems, software simulation has several advantages: it is easier to obtain the control performance in each process, more convenient to investigate the effect of different parameters on performance, and easier to test different control strategies and vision-based force measurement methods. Compared with real experiments, simulations can be performed with grippers and objects of different structures at a lower cost in time and money. A successful simulation can provide a valuable reference for real experiment design. This simulation system is a continuation of the first generation, which was based on MATLAB and Simulink. The present generation mainly focuses on improvements in control performance and GUI design. Closed-loop position and force control are implemented based on a virtual camera, different types of control strategies are investigated in each stage, and the GUI was designed using MATLAB App Designer.
In this paper, Section 2 presents the components of the mechanical system. Section 3 is the key part, which demonstrates the design of the control system and the vision-based force measurement and control. Section 4 describes the GUI design. Section 5 discusses some assumptions. Finally, Section 6 gives the conclusion and future work.
2. Mechanical System
The simulation system can be divided into three parts: the mechanical system, the vision system, and the control system. The relationship between these three parts can be seen in Figure 1. The mechanical system is the 3D gripper model created with the Simulink Simscape library. The vision system generates images from the coordinates of the key points of the mechanical system during the simulation, thereby providing feedback. The control system outputs the gripper joints' velocities after analyzing the images from the vision system.
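To clarify how the three subsystems interact at each simulation step, the following MATLAB-style skeleton sketches the closed loop; the function names and signals are hypothetical placeholders rather than the actual Simulink blocks, and each placeholder body is trivial.

```matlab
% Schematic closed loop: mechanical system -> vision system -> control system.
% All functions below are hypothetical placeholders, not the real model.
keyPoints = [0; 0; 0];                 % key-point coordinates of the gripper (toy state)
Ts = 0.025;                            % sample time of the control loop (s)
for k = 1:100
    img      = renderImage(keyPoints);     % vision system: generate an image from key points
    meas     = analyseImage(img);          % control system: extract positions/deformation
    jointVel = controller(meas);           % control system: compute joint velocity command
    keyPoints = keyPoints + jointVel*Ts;   % mechanical system: integrate joint velocities
end

% Trivial placeholder implementations so the skeleton runs as a script
function img = renderImage(p)
    img = p;                % stands in for central projection of the key points
end
function m = analyseImage(img)
    m = img;                % stands in for image processing/feature extraction
end
function v = controller(m)
    v = -0.5*m;             % stands in for the position/force controllers
end
```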
The mechanical system includes the gripper, soft object, camera model, and force model.
Figure 2 shows the 3D gripper model, which comprises two jaws, two rigid tips, and a plunger. Both jaws open and close in parallel. The plunger is used to remove the soft object from the tip because, due to adhesion forces on the microscale, the soft object would otherwise stick to the tip. The gripper has three translational degrees of freedom (DOFs) in 3D space and one rotational DOF in the horizontal plane.
Because of the limitations of Simulink, a fully flexible ball cannot be modeled, so a soft object model was employed as an alternative. The size of the soft object was about 500 μm. Figure 3 illustrates the soft object model, which is composed of four flexible beam elements with some overlap in the center area. Each beam element consists of two rigid bodies connected by a prismatic joint. The soft object can be squeezed in the x- or y-direction, which is used to simulate the deformation of a ball along its diameter. The soft object model follows the ideas of the lumped-parameter method and the finite-element method [26].
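As a rough illustration of the lumped-parameter idea behind each beam element, the following MATLAB sketch treats the prismatic joint between the two rigid bodies as a spring-damper and integrates the compression under a constant squeezing force; all stiffness, damping, and mass values are hypothetical, not the ones used in the Simscape model.

```matlab
% One beam element of the soft object as a lumped spring-damper (illustrative only)
k_s = 50;        % joint stiffness (uN/um), hypothetical
c_s = 0.8;       % joint damping (uN*s/um), hypothetical
m_s = 1e-3;      % equivalent mass of the moving rigid body (toy value)
F_ext = 20;      % constant squeezing force applied by the jaws (uN)

x = 0; v = 0;    % prismatic joint displacement (um) and velocity (um/s)
dt = 1e-3;       % integration step (s)
for n = 1:5000
    F_joint = -k_s*x - c_s*v;       % restoring force of the spring-damper joint
    a = (F_ext + F_joint)/m_s;      % acceleration of the moving body
    v = v + a*dt;                   % explicit Euler integration
    x = x + v*dt;
end
fprintf('Steady-state compression of one beam element: %.3f um\n', x);
```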
To simulate the collision between the gripper and the soft object, the contact force model from the Simscape Multibody Contact Force Library was employed. On the microscale, the adhesion force is not negligible, whereas forces that dominate on the macroscale, such as gravity, become much less important. A simplified van der Waals force model was created to simulate the adhesion between the gripper and the soft object [27].
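For context, a widely used simplified sphere-plane van der Waals expression is F = AR/(6d²); whether the model in [27] takes exactly this form is not stated here, so the sketch below and its constants should be read as order-of-magnitude assumptions only.

```matlab
% Illustrative simplified van der Waals adhesion force between a spherical soft
% object and a flat gripper tip (sphere-plane Hamaker form); values are assumptions.
A = 1e-19;        % Hamaker constant (J), typical order of magnitude
R = 250e-6;       % soft object radius (m), for a ~500 um diameter object
d = 1e-9;         % separation distance (m), near-contact cut-off
F_vdw = A*R/(6*d^2);
fprintf('Simplified van der Waals adhesion force: %.2f uN\n', F_vdw*1e6);
```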
The camera is located right above the plane and has three translational degrees of freedom. The virtual camera is a central camera from Peter Corke's Machine Vision Toolbox, which generates images with the central projection method [28]. The coordinate transformation from the camera frame to the image frame is given in Equation (1):

u = (f/ρ)(X/Z) + u₀,   v = (f/ρ)(Y/Z) + v₀   (1)

where (u, v) are the target point coordinates with regard to the image frame; (u₀, v₀) are the principal point coordinates in the image plane; (X, Y, Z) are the target point coordinates with regard to the camera frame; and f and ρ are the focal length and pixel size of the virtual camera.
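A minimal MATLAB sketch of Equation (1) is given below; the camera parameters and the target point are hypothetical values chosen for illustration, not the parameters used in the simulation (Peter Corke's Machine Vision Toolbox provides an equivalent central camera model).

```matlab
% Central projection of a target point, following Equation (1); values are hypothetical.
f   = 0.05;                 % focal length (m)
rho = 10e-6;                % pixel size (m)
u0  = 512; v0 = 512;        % principal point (pixels)

P_cam = [1e-3; 0.5e-3; 0.1];            % target point (X; Y; Z) in the camera frame (m)

u = (f/rho)*(P_cam(1)/P_cam(3)) + u0;   % horizontal image coordinate (pixels)
v = (f/rho)*(P_cam(2)/P_cam(3)) + v0;   % vertical image coordinate (pixels)
fprintf('Image coordinates: (%.1f, %.1f) pixels\n', u, v);
```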
4. GUI Design
The GUI aims to help users run this system without changing the complicated code. The GUI was designed with MATLAB App Designer and includes two parts: an Image Generation Simulator and the main app. The Image Generation Simulator app assists users in obtaining suitable initial coordinates and camera parameters, as well as the desired position and force resolutions. The main app is used to control the whole simulation process, including parameter settings, debugging, playback, calibration, and results reporting.
Figure 14 shows the interface of the Image Generation Simulator app, which includes two functions: image display and resolution calculation. The position resolution and force resolution are recalculated automatically whenever the camera parameters are changed, and the image display area updates when the coordinates or camera parameters change. The programming structure can be seen in Figure 15; rather than attaching a separate image display routine to each edit field and slider, all sliders and edit fields call the same Show Image button callback function, which avoids a large amount of duplicated code.
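To indicate how such resolutions can be derived, the following sketch maps one pixel back to the object plane using the camera model of Equation (1) and converts it to a force resolution through an assumed linear force-deformation gain; all numbers, including the gain, are hypothetical and do not reproduce the app's actual calculation.

```matlab
% Hypothetical position/force resolution calculation from camera parameters
f   = 0.1;           % focal length (m)
rho = 10e-6;         % pixel size (m)
Z   = 0.01;          % distance from the camera to the object plane (m)
k_force = 2.8;       % assumed linear force-deformation gain (uN per um)

pos_res_um   = (rho*Z/f)*1e6;         % object-plane footprint of one pixel (um)
force_res_uN = k_force*pos_res_um;    % smallest resolvable force change (uN)
fprintf('Position resolution: %.2f um, force resolution: %.2f uN\n', ...
        pos_res_um, force_res_uN);
```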
From Figure 15, it can be seen that a complete image generation system is embedded in this app, including the coordinate generation function and the image generation function. This image generation system is the same as the one used in the simulation process. The camera parameters and gripper coordinates affect the generated images. This app has been packaged as a standalone executable, which can run on a computer even without MATLAB installed. A presentation video can be found in Appendix A.
Figure 16 presents the steps of the main app. According to their functions, the steps can be divided into three parts: software introduction, parameter setting, and model running. The first two steps are the software introduction. Steps 3, 4, and 5 are used to adjust the simulation parameters. The last four function tabs are model running, debugging, calibration, and playback.
Figure 17 shows the relationship between the tab pages. The initial parameters, camera control parameters, and controller parameters are set from the MATLAB App Designer workspace, passed to the base workspace, and finally received by the Simulink model. The simulation and debug functions send commands to the Simulink model to start the model run. After the simulation, the image data and simulation results are saved for playback and the results report. Some of the initial and controller parameters, such as the initial coordinates, initial camera parameters, and gripper position control parameters, are also used in the calibration process.
Figure 18 shows the interface of the Controller tab page. The controller parameters, and even the controller types, can be changed for each control stage. Different sampling frequencies can be chosen in the 10–40 Hz range. Groups of controller parameters for specific sampling frequencies have been saved and are loaded automatically when a different sampling frequency is chosen. Users can also set any other controller parameters for the debugging stage.
The simulation page can be seen in Figure 19. After all the initial parameters are set, the summary area shows the configured parameters. Clicking the Run button passes all the parameters to the base workspace and calls the Simulink model. The model status lamp and text area show the current state of the Simulink model. Once the model run has finished, the Check Simulation Results and Check Simulation Recording buttons become available; these two buttons call the results report app and the playback tab page, respectively. A video of the simulation process can be seen in Appendix A.
The interface of the debug panel can be seen in Figure 20. Each simulation stage has a switch and a lamp. If a stage's switch is turned on, the process up to that stage will run and the corresponding lamps will be lit, while the process after that stage will not run and its lamps will stay off. Clicking the Run Debug button runs the Simulink model.
After either a simulation or a debugging run, the Check Simulation Results button leads users to the results report app, whose interface can be seen in Figure 21. This app includes three tab pages: Gripper Position Control, Jaw Position Control, and Force Control. Each page presents the control performance of the corresponding stage and the recordings of the relevant variables. Each figure area can display any of several available figures, which is very convenient for debugging. The Jaw Position Control and Force Control pages have a similar layout and function.
The playback function can be used to review a previous simulation recording after loading the generated image file. It includes two functions: continuous play and single-frame checking. In the continuous play mode, the playback speed can be adjusted. These two functions cannot run at the same time; the other function is only enabled once the current one has finished. In the single-frame check mode, the user simply inputs the frame number and clicks the Show button, and the specified image frame is shown in the image area. The calibration function can also be executed on the last page. This process consists of two stages: gripper position control and jaw closing. After setting the jaws' velocity and turning on the calibration switch, the Simulink model is run. At the end of the simulation, the calibration results are shown in the figure area. A video of the debugging, playback, and calibration processes can also be found in Appendix A.
5. Discussion
In this paper, we have made some assumptions to simplify the experiments, which also prepares us for work on the next generation. Firstly, the soft object was assumed to be static. In some environments, the target objects can not only rotate and oscillate randomly but also flow at speed, such as cells in blood vessels. Dynamic grasping would be more interesting and meaningful. To achieve the same position/force control performance in such cases, camera motion control and autofocus might be applied. Secondly, in different tasks, such as cell grasping, wafer alignment, and micro-assembly, the shape and material of the targets may differ considerably. A rotational degree of freedom has been designed into the gripper for future attitude adjustment, and the force models may need to be adjusted for different environments. Lastly, in the holding phase, the object was simply held for several seconds in place of a subsequent complex operation, and the adhesion between the object and the plunger was assumed to be zero. The releasing process needs to be considered carefully due to the dominant adhesion forces on the microscale. Different releasing methods for detaching objects from the jaws, such as electric fields, vacuum-based tools, plungers, and mechanical vibration, were discussed in the previous paper. Compared with the adhesion force between the jaws and the soft object, the adhesion force between the plunger and the object is not significant due to the smaller contact area; the object can easily be removed by a simple vibration or by retracting the plunger at high acceleration. However, in some tasks, target objects must be released at an accurate position and attitude, and fragile objects must not be damaged. In the next generation, release technology will be explored for specific environments.
6. Conclusions and Future Work
This paper presents a vision-based micromanipulation simulation system. Compared with the original generation, this new work has mainly focused on improvements in control performance and GUI design. The new control system is faster, simpler, and more accurate. Different types of control strategies were implemented in each stage. In the gripper position control stage, incremental PID control was adopted and different sampling frequencies were tested; the shortest adjustment time was only 0.3 s, an improvement of over 50%. The new camera control algorithm maintained a larger gripper motion range together with higher position and force resolution: the gripper motion range was enlarged from 1.8 × 1.8 mm to 4.48 × 4.48 mm. The jaw position control was accomplished faster and more stably, without contact forces occurring. In the force control stage, the sampling frequency was extended from 10 Hz to a selectable 10–40 Hz range. The position resolution and force resolution were improved from 2.27 to 0.56 µm and from 2.86 to 1.56 µN, respectively. The maximum measured force error was significantly reduced from 15 µN to less than 4 µN. The steady-state error and settling time were also greatly reduced.
The GUI helps users run this complex system. The separate Image Generation Simulator app assists users in obtaining suitable initial coordinates and camera parameters, as well as the desired position and force resolutions; it has been packaged as a standalone executable file. The main app includes the functions of simulation running, debugging, playback, results reporting, and calibration. Different sampling frequencies, control parameters, and even control strategies can easily be changed in this app, which makes the simulation more adaptable to a real system.
Although some control performance improvements have been realized, several issues remain to be investigated in the next generation. New improvements can focus on the adaptability of this simulation platform so that it can simulate different environments and specific tasks, such as wafer alignment and cell grasping. For a given task, the material, structure, and size of the microgripper and the objects should be adjustable. Different kinds of vision-based force measurement methods can be integrated into this system. Camera motion control and autofocus can be explored to assist in grasping moving targets. Some simple real experiments can be performed in the future to verify this simulation; for example, a simple soft object on the microscale could be grasped with accurate position and force control and then released at another position. More complex operations, such as grasping in a liquid environment, can also be considered. Finite-element analysis can be used to determine the relationship between the contact force and the soft object deformation as the force estimation model. Image processing methods such as edge detection and the center-point method can be utilized to measure the deformation of the soft object.