Article

Active Prior Tactile Knowledge Transfer for Learning Tactual Properties of New Objects

Institute for Cognitive Systems (ICS), Technische Universität München, Arcisstrasse 21, 80333 München, Germany
* Author to whom correspondence should be addressed.
Mohsen Kaboli and Di Feng contributed equally to this work.
Sensors 2018, 18(2), 634; https://doi.org/10.3390/s18020634
Submission received: 1 November 2017 / Revised: 14 February 2018 / Accepted: 16 February 2018 / Published: 21 February 2018
(This article belongs to the Special Issue Tactile Sensors and Sensing)

Abstract

Reusing the tactile knowledge of previously-explored objects (prior objects) helps us easily recognize the tactual properties of new objects. In this paper, we enable a robotic arm equipped with multi-modal artificial skin to actively transfer prior tactile exploratory action experiences, as humans do, when it learns the detailed physical properties of new objects. These experiences, or prior tactile knowledge, are built from the feature observations that the robot perceives through multiple sensory modalities when it applies the pressing, sliding, and static contact movements on objects with different action parameters. We call our method Active Prior Tactile Knowledge Transfer (APTKT) and systematically evaluated its performance in several experiments. Results show that the robot improved its discrimination accuracy by around 10% when it used only one training sample together with the feature observations of prior objects. By further incorporating the predictions from the observation models of prior objects as auxiliary features, our method improved the discrimination accuracy by over 20%. The results also show that the proposed method is robust against transferring irrelevant prior tactile knowledge (negative knowledge transfer).

1. Introduction

1.1. Motivation

We humans perceive the tactual properties of an object (e.g., stiffness, texture, temperature, weight) by applying exploratory actions (e.g., pressing, sliding, static contact, lifting) [1]. Applying different exploratory actions to an object yields different tactile information about it. Conversely, applying the same exploratory action to different objects produces different tactile observations. Therefore, when we learn about an object, we always link its physical properties with the exploratory actions that we apply to it.
Besides the kind of exploratory action, the tactile information we perceive from an object also depends on how we apply the action. Consider pressing on two objects: object 1 is made of soft sponge, while object 2 is a solid metal core covered with a soft sponge surface. When we press our fingertips on both objects with a small normal force, we perceive similar deformations. However, if we press with a larger normal force, object 1 deforms much more than object 2, since we have reached the metal core of object 2. A similar situation occurs when we slide over object surfaces with different forces and velocities. As a result, by applying different exploratory actions in different ways, we can build a detailed knowledge of an object's tactual properties, which we call tactile exploratory action experiences.
We humans learn about new objects in an active and incremental way. We actively select the most informative exploratory actions to interact with them [2,3]. More importantly, we relate these new objects with the experiences of exploring objects that we have previously encountered. By transferring the prior tactile knowledge, or prior tactile exploratory action experiences, we can largely reduce the amount of exploratory actions required to discriminate among new objects. In this way, we humans save a lot of time and energy, and recognize new objects with high accuracy [4,5,6,7,8,9,10].
Can robotic systems with a sense of touch also perform like humans to actively transfer the past tactile exploratory action experiences when learning about new objects (transfer learning)?

1.2. Background

Over the past decades, researchers have developed various tactile sensors and mounted them on robotic systems (e.g., [11,12,13,14,15,16,17]). In this way, the robots with a sense of touch can perceive different objects’ tactual properties by applying exploratory actions. For example, a robot can slide its sensory parts on objects to sense their textural properties [18,19,20,21], establish a static contact to estimate the temperature [22], or lift objects to measure their center of mass [23]. Bhattacharjee et al. [24] developed algorithms to classify objects into four categories: (1) Hard-Unmoved; (2) Hard-Moved; (3) Soft-Unmoved; and (4) Soft-Moved using One Nearest Neighbor Classifier, Hidden Markov Models and Long Short Term Memory networks based on features of time-varying tactile sensor data (maximum force, contact area, and contact motion). Furthermore, several methods have been proposed for the active object exploration problem, in which the robot actively applies multiple exploratory actions to recognize objects (e.g., [25,26,27,28,29,30,31,32]).
However, the problem of transferring robotic tactile knowledge has rarely been investigated. Even though transfer learning techniques have been successfully applied in several areas (e.g., natural language processing [33]; WiFi-based localization [34]; computer vision [35,36,37,38]; bio-informatics [39]), our previous works were the first to introduce transfer learning to the tactile domain. Kaboli et al. [20,21] developed a novel textural descriptor with which a ShadowHand dexterous robotic hand equipped with BioTac fingertip sensors could efficiently discriminate among object surface textures. Later, we designed a transfer learning method [40,41,42] so that the robotic hand could reuse the prior texture models of 12 objects to learn about the surface textures of 10 new objects. However, since only the sliding movement was applied, the robot could transfer only the object textural properties.
In our previous works [43,44], we proposed an active touch learning method in which a UR10 robotic arm with artificial skin on its end-effector or fingertips could apply sliding, pressing, and static contact movements to learn about objects' surface texture, stiffness, and thermal conductivity, respectively. Even though this active learning method enables the robot to learn about objects efficiently, the robot still needs to learn from scratch given a new set of objects. In this regard, we recently proposed, for the first time in the robotics and tactile domains, an algorithm called Active Tactile Transfer Learning (ATTL) [45] to actively transfer multiple physical properties of prior objects. Using ATTL, the UR10 robotic arm could actively select which prior knowledge to transfer (surface texture, stiffness, and thermal conductivity, obtained by applying the sliding, pressing, and static contact movements). As a result, the robot could use fewer training samples (even a single sample) to achieve a higher recognition rate when learning about new objects.
The robotic systems in the above-mentioned works applied exploratory actions only with fixed action parameters, e.g., sliding at a fixed velocity to perceive surface textures. In order to learn objects' detailed physical properties (e.g., the vibro-tactile feedback from sliding at different speeds) and thereby better discriminate among them, robots should be able to apply exploratory actions with different action parameters.

1.3. Contribution

In this paper, we focus on actively transferring the prior tactile exploratory action experiences to learn more details about the physical properties of new objects (see Figure 1). Our contributions are two-fold:
  • We enable a robot to apply exploratory actions with multiple action parameters. In this way, the robot gains more detailed tactile information.
  • We propose an active tactile transfer learning algorithm so that the robot leverages the previously obtained detailed tactile knowledge (prior tactile exploratory action experiences) while learning about a new set of objects.
In the sequel, we first introduce the robotic system (Section 2). Then, we illustrate how the robot applies exploratory actions and obtains the physical properties of objects (Section 3). Afterwards, we describe our proposed tactile transfer learning method in detail (Section 4), followed by a systematic evaluation of the method (Section 5). We conclude the paper with a summary and a discussion of future work (Section 6).

2. System Description

2.1. Multi-modal Artificial Skin

To enable the robot to perform more human-like behaviours with multiple tactile sensing modalities, we designed and manufactured a multi-modal artificial skin (Figure 2a) made of seven active tactile modules (Figure 2b) [12]. Each module is a small hexagonal printed circuit board equipped with off-the-shelf sensors (one temperature sensor, one accelerometer, three normal force sensors, and one proximity sensor). The complete artificial skin therefore contains seven temperature sensors, seven accelerometers, 21 normal force sensors, and seven proximity sensors, which emulate the human tactile sensing of temperature, vibration, force, and light touch. The technical specifications of the sensors are summarized in Table 1.

2.2. UR10 Robotic Arm

We mounted the multi-modal artificial skin on the end-effector of a Universal Robots UR10 arm with six DoFs (Figure 2a). The UR10 was controlled in coordination with the artificial skin in order to apply different exploratory actions on objects.

3. Exploratory Actions and Perception

3.1. Exploratory Actions Definition

By applying exploratory actions on objects with different action parameters, the robot can attain different feature observations. In this work, we consider three types of exploratory actions: pressing (denoted $P$), sliding (denoted $S$), and static contact (denoted $C$). Formally, we define $N_\alpha$ exploratory actions as $\mathcal{A} = \{\alpha_n^{\theta_n}\}_{n=1}^{N_\alpha}$, where $\theta_n$ denotes the action parameters that define "how" the robot applies the exploratory action $\alpha_n$. We further define $\theta_n \in \{\theta_P, \theta_S, \theta_C\}$, where $\theta_P$, $\theta_S$, and $\theta_C$ represent the action parameters of the pressing, sliding, and static contact movements, respectively.

3.1.1. Pressing

The robotic system presses its sensory part on the object surface in order to perceive its stiffness (see Figure 3a). The pressing movement consists of pressing to a depth of $d_P$ and holding the artificial skin there for $t_P$ seconds, i.e., $\theta_P = [d_P, t_P]$. During pressing, the multi-modal artificial skin records the normal force feedback from each normal force sensor, $F_{n_f, n_s} = \{F^m_{n_f, n_s}\}_{m=1}^{t_P \cdot f_s}$, in order to measure the object stiffness. Here, $n_f$ is the index of a normal force sensor within one skincell ($n_f = 1, \dots, N_f$; in our case $N_f = 3$), $n_s$ is the index of the skincells in the artificial skin ($n_s = 1, \dots, N_s$; in our case $N_s = 7$), $f_s$ is the sampling rate of the artificial skin, and $m$ is the sampling time step. In addition to the normal force feedback, the robot also records the temperature feedback from each temperature sensor in order to measure the object thermal conductivity: $T_{n_t, n_s} = \{T^m_{n_t, n_s}\}_{m=1}^{t_P \cdot f_s}$, $n_t = 1, \dots, N_t$, with $N_t$ being the number of temperature sensors in one skincell (in our case $N_t = 1$).

3.1.2. Sliding

The robot slides the artificial skin over the object surface and perceives its textural properties [18,21] (see Figure 3b). To do so, the robot first establishes contact with the object at a normal force of $F_S$, then slides linearly over the object at a speed of $v_S$ for $t_S$ seconds, i.e., $\theta_S = [F_S, v_S, t_S]$. During sliding, the robot collects the outputs of the accelerometers along the three axes $x$, $y$, $z$: $a^{(x)}_{n_a, n_s} = \{a^{(x), m}_{n_a, n_s}\}_{m=1}^{t_S \cdot f_s}$, $a^{(y)}_{n_a, n_s} = \{a^{(y), m}_{n_a, n_s}\}_{m=1}^{t_S \cdot f_s}$, and $a^{(z)}_{n_a, n_s} = \{a^{(z), m}_{n_a, n_s}\}_{m=1}^{t_S \cdot f_s}$. The robot then combines these signals: $a = \{a_{n_a, n_s}\}_{n_a = 1, n_s = 1}^{N_a, N_s}$ with $a_{n_a, n_s} = [a^{(x)}_{n_a, n_s}, a^{(y)}_{n_a, n_s}, a^{(z)}_{n_a, n_s}]$, $n_a = 1, \dots, N_a$, where $N_a$ is the number of accelerometers in one skincell (in our case $N_a = 1$). In addition, the change of temperature during sliding is collected as extra information: $T_{n_t, n_s} = \{T^m_{n_t, n_s}\}_{m=1}^{t_S \cdot f_s}$.

3.1.3. Static Contact

The robotic system attains the object's thermal cues by applying the static contact movement: it presses its sensory part against the object surface to a depth of $d_C$ and maintains the contact for $t_C$ seconds, i.e., $\theta_C = [d_C, t_C]$ (see Figure 3c). The normal force and temperature feedbacks are recorded: $F_{n_f, n_s} = \{F^m_{n_f, n_s}\}_{m=1}^{t_C \cdot f_s}$ and $T_{n_t, n_s} = \{T^m_{n_t, n_s}\}_{m=1}^{t_C \cdot f_s}$.

3.2. Object Physical Properties Perception

3.2.1. Stiffness

We use the normal force averaged over all normal force sensors and time steps as an indicator of object stiffness. For a pressing movement with $t_P \cdot f_s$ time steps, the object stiffness is estimated by
$$\bar{F} = \frac{1}{t_P \cdot f_s} \frac{1}{N_f} \frac{1}{N_s} \sum_{m=1}^{t_P \cdot f_s} \sum_{n_f=1}^{N_f} \sum_{n_s=1}^{N_s} F^m_{n_f, n_s}.$$
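For illustration, a minimal sketch of this stiffness feature, assuming the force samples are stacked in a NumPy array of shape ($t_P \cdot f_s$, $N_f$, $N_s$); the array layout and function name are our own convention:

```python
import numpy as np

def stiffness_feature(F):
    """Average normal force over all sampling steps and sensors.

    F has shape (t_P * f_s, N_f, N_s): one reading per sampling
    step m, normal force sensor n_f, and skincell n_s, so the
    triple sum in the text reduces to a plain mean.
    """
    return float(F.mean())

# Example: 3 s of pressing sampled at 100 Hz, N_f = 3, N_s = 7
F = np.abs(np.random.randn(300, 3, 7))
print(stiffness_feature(F))
```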

3.2.2. Textural Property

In this work, we use the same textural feature extraction method as in [43]: the vibration signals $a$ from the artificial skin are used to calculate the activity, mobility, and complexity features, denoted $\mathcal{A}(a)$, $\mathcal{M}(a)$, and $\mathcal{C}(a)$. These features represent the object's tactile properties in the time domain. We also compute the linear correlations of the accelerometer signals between the different axis pairs ($x$-$y$, $y$-$z$, $x$-$z$), denoted $\mathcal{L}(a)$, since the accelerometer components are correlated with each other during the sliding movement. The final textural descriptor combines activity, mobility, complexity, and linear correlation [43]: $TD = [\mathcal{A}(a), \mathcal{M}(a), \mathcal{C}(a), \mathcal{L}(a)]$.
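The activity, mobility, and complexity features correspond to the classical Hjorth parameters; a minimal sketch under that assumption, for a single accelerometer (the function names are ours):

```python
import numpy as np

def hjorth(a):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    da, dda = np.diff(a), np.diff(a, n=2)
    activity = np.var(a)
    mobility = np.sqrt(np.var(da) / np.var(a))
    complexity = np.sqrt(np.var(dda) / np.var(da)) / mobility
    return activity, mobility, complexity

def texture_descriptor(ax, ay, az):
    """TD = [A(a), M(a), C(a), L(a)] for one accelerometer."""
    feats = []
    for axis in (ax, ay, az):
        feats.extend(hjorth(axis))               # A, M, C per axis
    for u, v in ((ax, ay), (ay, az), (ax, az)):  # L: x-y, y-z, x-z
        feats.append(np.corrcoef(u, v)[0, 1])
    return np.asarray(feats)
```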

3.2.3. Thermal Conductivity

To extract the features that describe the object's thermal cues, we first calculate the average temperature sequence over all temperature sensors: $\bar{T} = \frac{1}{N_t \cdot N_s} \sum_{n_t=1}^{N_t} \sum_{n_s=1}^{N_s} T_{n_t, n_s}$. We then calculate its gradient at each time step, $\nabla \bar{T}$, and combine it with the average temperature sequence: $[\bar{T}, \nabla \bar{T}]$. To avoid the curse of dimensionality, we further reduce this combination to 10 dimensions via Principal Component Analysis (PCA) and use the result as the final feature describing the object's thermal conductivity.
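A minimal sketch of this feature pipeline, assuming the temperature readings from several trials are stacked so that PCA can be fitted across trials (the array layout is our assumption):

```python
import numpy as np
from sklearn.decomposition import PCA

def thermal_features(T, n_components=10):
    """Project [T_bar, grad T_bar] onto its first principal components.

    T has shape (n_trials, n_steps, N_t, N_s); averaging over the
    last two axes gives one temperature sequence T_bar per trial.
    PCA needs at least n_components trials to fit.
    """
    T_bar = T.mean(axis=(2, 3))                # average over sensors
    grad = np.gradient(T_bar, axis=1)          # per-step gradient
    X = np.concatenate([T_bar, grad], axis=1)  # [T_bar, grad T_bar]
    return PCA(n_components=n_components).fit_transform(X)
```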
Table 2 summarizes the exploratory actions, the sensory feedbacks and the corresponding tactile features.

4. Transferring Prior Tactile Exploratory Action Experiences

This section describes our proposed Active Prior Tactile Knowledge Transfer (APTKT) algorithm in detail. First, we formulate the problem (Section 4.1). Then, we illustrate our transfer learning method, including its process (Section 4.3) and the questions of what to transfer (Section 4.4), how to transfer (Section 4.5), and from where and how much to transfer (Section 4.6). The motivation of our method is illustrated in Figure 1.

4.1. Problem Formulation

Assume that a robotic system has gained prior tactile knowledge of some old objects, on which the robot has previously applied different exploratory actions with different action parameters. These prior exploratory action experiences consist of the feature observations perceived by the multiple sensors and observation models from the old objects. Now, the robot is tasked to learn about a set of new objects. Since the old objects might share some similar physical properties with the new objects, by leveraging the related tactile exploratory action experiences, the robot can learn about new objects more efficiently.
We define $N^{new}$ new objects $C^{new} = \{c_j^{new}\}_{j=1}^{N^{new}}$ that the robot is tasked to learn about through different exploratory actions $\mathcal{A} = \{\alpha_n^{\theta_n}\}_{n=1}^{N_\alpha}$ (for simplicity, we denote an exploratory action by $\alpha$ in the rest of the paper). In other words, the robot should actively attain object feature observations $V_\alpha^{new} = \{V_{c_1}^{new}, V_{c_2}^{new}, \dots, V_{c_{N^{new}}}^{new}\}$ for each exploratory action $\alpha$ and construct reliable observation models $V_\alpha^{new} \xrightarrow{f_\alpha^{new}} C^{new}$. We further define the robot's prior tactile experience for an exploratory action $\alpha$ over $N^{old}$ prior objects $C^{old} = \{c_i^{old}\}_{i=1}^{N^{old}}$ as the prior object feature observations $V_\alpha^{old} = \{V_{c_1}^{old}, V_{c_2}^{old}, \dots, V_{c_{N^{old}}}^{old}\}$ together with the observation models of the old objects $V_\alpha^{old} \xrightarrow{f_\alpha^{old}} C^{old}$. These feature observations are collected by the multiple tactile sensors of the artificial robotic skin.
We formulate our problem as the transfer learning in the Gaussian Process Classification (GPC) framework [46], where each object is regarded as a class, and for each exploratory action, a GPC model is built as the observation model. The robot iteratively applies the exploratory actions and leverages prior tactile knowledge to improve the GPC observation models of new objects.

4.2. Gaussian Process Classification

The Gaussian Process Classification (GPC) model describes the mapping between an observation set $X$ and an output set $Y$: $X \xrightarrow{f} Y$. The latent function $g(x)$ in the GPC model is assumed to be sampled from a Gaussian Process (GP) prior [46]: $g(x) \sim \mathcal{GP}(m(x), K(x, x'))$, where each sample $g(x)$ is a random variable. In this work, we use one-vs-all multi-class classification: for each of the $N$ object classes, a binary GPC $f_n(\cdot)$, whose output labels are converted to $\{-1, +1\}$, is trained. Given a new sample $x^*$, each binary classifier predicts the observation probability of its label, $p(y_n | x^*)$, and the sample is assigned to the class with the largest prediction probability: $y^* = \arg\max_{y_n \in Y} p(y_n | x^*)$.
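A minimal sketch of this one-vs-all scheme with scikit-learn's GaussianProcessClassifier, which trains one binary classifier per class internally (the toy data below are placeholders):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Toy feature observations X, with one integer label per object class
X = np.random.randn(30, 4)
y = np.repeat([0, 1, 2], 10)

# One binary GPC per class (one-vs-rest), as described above
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),
                                multi_class="one_vs_rest").fit(X, y)

x_star = np.random.randn(1, 4)
probs = gpc.predict_proba(x_star)          # p(y_n | x*) per class
y_star = gpc.classes_[np.argmax(probs)]    # arg-max assignment rule
```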

4.3. Process

Following our proposed method, the robot first applies each exploratory action once to each new object in order to collect a small number of feature observations $V^{new} = \{V_{\alpha_n}^{new}\}_{n=1}^{N_\alpha}$ (initial data collection). Then, the robot reuses its prior tactile exploratory action experiences to improve the observation models of each new object (initial prior knowledge transfer). During this process, the robot compares the relatedness between its prior tactile exploratory action experiences and the new objects (Section 4.6) and chooses the most related experience from which to transfer the old object feature observations $V^{old}$ (Section 4.5). Afterwards, the robot iteratively collects and combines feature observations and updates the prior tactile knowledge in order to improve the observation models. At each iteration of prior tactile knowledge updating, the robot (1) actively selects the next object and the next exploratory action in order to attain a new feature observation; and (2) updates the prior tactile knowledge for the selected exploratory action. The iteration terminates when there is no further improvement in the observation models of the new objects. Our algorithm is summarized in Figure 4.

4.4. What to Transfer

When the robotic system applies an exploratory action on objects, it perceives multiple feature observations (e.g., with the pressing movement, the robot perceives both the object stiffness and the thermal conductivity). The prior tactile exploratory action experiences consist of the combined feature observations of the prior objects from multiple sensory modalities and the corresponding GPC observation models of the prior objects.
In order to combine the observations perceived by different tactile sensors, we first define $v_\alpha$ as the feature observation of an exploratory action $\alpha$. It comprises multiple observations, $v_\alpha = [v_\alpha^{(1)}, \dots, v_\alpha^{(m_\alpha)}, \dots, v_\alpha^{(M_\alpha)}]$, where $v_\alpha^{(m_\alpha)}$ is an observation from sensor modality $m_\alpha$ and $M_\alpha$ is the number of sensing modalities. For the pressing and static contact movements we use the normal force and temperature sensing; for the sliding movement, the accelerometer and temperature sensing (Table 2). We then assume that a kernel function $K^{(m_\alpha)}$ is given for each sensor modality $m_\alpha$. To combine the multiple feature observations and exploit the information from all sensors after applying the exploratory action $\alpha$, we linearly combine the kernels:
$$K_\alpha = \gamma_\alpha^{(1)} K^{(1)} + \dots + \gamma_\alpha^{(m_\alpha)} K^{(m_\alpha)} + \dots + \gamma_\alpha^{(M_\alpha)} K^{(M_\alpha)},$$
where $\gamma_\alpha^{(m_\alpha)} \geq 0$. This hyper-parameter controls how much the robot relies on sensor modality $m_\alpha$; it ranges between 0 and 1, with $\gamma_\alpha^{(m_\alpha)} = 0$ indicating that the sensor feedback is not informative and $\gamma_\alpha^{(m_\alpha)} = 1$ that it is highly informative. We further constrain these hyper-parameters with the $L_1$ norm: $\sum_{m_\alpha=1}^{M_\alpha} \gamma_\alpha^{(m_\alpha)} = 1$. For each exploratory action, a GPC observation model is built using $K_\alpha$. The hyper-parameters $\gamma$ and the kernel parameters are selected by maximizing the log marginal likelihood [46]. Figure 5 illustrates our multiple feature observations combination method; it is also given as Algorithm 1.
Algorithm 1 Multiple Feature Observations Combination
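A minimal sketch of the kernel combination at the core of Algorithm 1, assuming one RBF kernel per sensor modality and weights $\gamma$ chosen externally (function and argument names are ours):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

def combined_kernel(views, gammas, length_scales):
    """K_alpha = sum_m gamma_m * K^(m), with sum_m gamma_m = 1.

    views: list of M arrays, each (n_samples, d_m), one per modality.
    gammas: M non-negative weights constrained to sum to 1.
    """
    assert np.isclose(sum(gammas), 1.0) and all(g >= 0 for g in gammas)
    n = views[0].shape[0]
    K = np.zeros((n, n))
    for v, g, ls in zip(views, gammas, length_scales):
        K += g * RBF(length_scale=ls)(v)   # per-modality RBF Gram matrix
    return K

# Pressing: force modality + temperature modality, weighted 0.7 / 0.3
force_feats = np.random.randn(20, 1)
temp_feats = np.random.randn(20, 10)
K = combined_kernel([force_feats, temp_feats], [0.7, 0.3], [1.0, 1.0])
```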

4.5. How to Transfer

Taking advantage of our previously proposed method [45], the robotic system transfers the feature observations of a prior object $c_i^{old}$ to learn the GPC observation model of a new object $c_j^{new}$, based on an exploratory action $\alpha$. For simplicity, we hereafter write $i$ and $j$ for $c_i^{old}$ and $c_j^{new}$, respectively. We define $g_i^{old}$ as the Gaussian Process latent function values [46] for the old object $c_i^{old}$ and $g_j^{new}$ for the new object $c_j^{new}$. We assume that these two sets of function values are not independent of each other, but are sampled jointly from a dependent Gaussian Process (GP) prior. This dependent GP is then used to construct the GPC observation model of the new object, and the latent function is modified accordingly: $g_j^{new} \rightarrow [g_i^{old}, g_j^{new}]$ [45]. We further incorporate the relatedness between the prior object and the new object into the dependent GP model by introducing the following dependent kernel:
$$K = \begin{bmatrix} K(V_i^{old}, V_i^{old}) & \lambda K(V_i^{old}, V_j^{new}) \\ \lambda K(V_j^{new}, V_i^{old}) & K(V_j^{new}, V_j^{new}) \end{bmatrix}.$$
$K(V_i^{old}, V_i^{old})$ and $K(V_j^{new}, V_j^{new})$ are the kernel matrices that measure the similarity among the feature observations of the old object and of the new object, respectively. Each element measures the similarity between two feature observations and is calculated by the radial basis function (RBF). $\lambda K(V_j^{new}, V_i^{old})$ and $\lambda K(V_i^{old}, V_j^{new})$ are the kernel matrices between the old object and the new object. $\lambda$ controls the relatedness, or similarity, between $c_i^{old}$ and $c_j^{new}$; we constrain its range to $[0, 1]$. As Chai [47] showed, $\lambda = 0$ indicates that the old object and the new object are totally different, while $\lambda = 1$ indicates that the two objects are the same.
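A minimal sketch of this dependent kernel, assuming RBF base kernels (the function name is ours):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

def dependent_kernel(V_old, V_new, lam, length_scale=1.0):
    """Block kernel coupling old and new object observations.

    lam in [0, 1] scales the cross blocks: 0 = the objects are
    totally different, 1 = the objects are the same.
    """
    k = RBF(length_scale=length_scale)
    K_oo = k(V_old)                # old vs. old
    K_nn = k(V_new)                # new vs. new
    K_on = lam * k(V_old, V_new)   # old vs. new, scaled by relatedness
    return np.block([[K_oo, K_on],
                     [K_on.T, K_nn]])
```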

4.6. From Where and How Much to Transfer

Section 4.5 describes how to transfer the prior tactile knowledge to learn about new objects. This section illustrates how the robotic system selects the most related old object (from where to transfer) and how to determine the relatedness ( λ ) between two objects (how much to transfer).
To do this, we use our previously proposed method [45]. Let $p(c_i^{old} | v_j^{new})$ be the prediction probability that a feature observation $v_j^{new}$ of the new object is assigned to the old object $c_i^{old}$. We measure the average prediction over all observations $v_j^{new} \in V_j^{new}$ belonging to the new object: $\bar{p}(c_i^{old} | V_j^{new}) = \frac{1}{N_j^{new}} \sum_{v_j^{new} \in V_j^{new}} p(c_i^{old} | v_j^{new})$, with $N_j^{new}$ being the number of new object feature observations. This average prediction value indicates the similarity between the old object $c_i^{old}$ and the new object $c_j^{new}$: a larger value indicates that the two objects are highly similar. We can therefore use it to select the most related old object (denoted $c^{old*}$) for a new object with regard to the exploratory action $\alpha$. Furthermore, to avoid transferring irrelevant tactile information, we add a threshold $\epsilon_{neg}$ that prevents the robot from selecting any old object when the prediction value is smaller than $\epsilon_{neg}$. The final old object selection criterion is:
$$c^{old*} = \begin{cases} \arg\max_{c_i^{old} \in C^{old}} \bar{p}(c_i^{old} | V_j^{new}), & \text{if } \bar{p}(c^{old*} | V_j^{new}) \geq \epsilon_{neg} \\ \text{None}, & \text{otherwise.} \end{cases}$$
Once $c^{old*}$ is selected, we further use the predictions from the observation model of the old objects to determine the object relatedness $\lambda$: $\lambda = \bar{p}(c^{old*} | V_j^{new})$.
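A minimal sketch of this selection rule on top of a classifier with a scikit-learn-style predict_proba interface (the threshold value shown is an arbitrary placeholder, not the one used in our experiments):

```python
import numpy as np

def select_prior_object(gpc_old, V_new_j, eps_neg=0.3):
    """Pick the most related old object and its relatedness lambda.

    gpc_old: classifier trained on the old objects; V_new_j: feature
    observations of one new object. eps_neg guards against negative
    transfer by rejecting weakly related old objects.
    """
    p_bar = gpc_old.predict_proba(V_new_j).mean(axis=0)  # avg prediction
    i_star = int(np.argmax(p_bar))
    if p_bar[i_star] < eps_neg:
        return None, 0.0             # no sufficiently related old object
    return gpc_old.classes_[i_star], float(p_bar[i_star])  # c_old*, lambda
```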

4.7. Prior Exploratory Action Experiences Update

When the robot updates its prior exploratory action experiences, it iteratively collects a new feature observation by applying an exploratory action on an object. To do so, we use our previously proposed active tactile learning algorithm, Active Touch for Learning Physical Properties (AT-LPP) [43]. Using AT-LPP, the robot actively decides which new object to explore next (denoted $c^{new*}$) and which physical property to learn next, i.e., which exploratory action to apply next (denoted $\alpha^*$). In the following, we briefly summarize the AT-LPP algorithm (Algorithm 2) [43].
The robot first calculates the Shannon entropy of the object posterior for a new feature observation $v^{new}$: $H(c^{new} | v^{new}) = -\sum_{c_j^{new} \in C^{new}} p(c_j^{new} | v^{new}) \log p(c_j^{new} | v^{new})$. Then the robot estimates the uncertainty of the GPC model with regard to each exploratory action and new object by the mean Shannon entropy: $\mathrm{UNC}(\alpha_n, c_j) = \frac{1}{N_{\alpha_n, j}^{new}} \sum_{v_{\alpha_n, j}^{new} \in V_{\alpha_n, c_j}^{new}} H(c_j^{new} | v_{\alpha_n, j}^{new})$, where $v_{\alpha_n, j}^{new}$ denotes a feature observation that the robot has collected for the new object $c_j^{new}$ with exploratory action $\alpha_n$, and $N_{\alpha_n, j}^{new}$ is the number of such observations. A large $\mathrm{UNC}(\alpha_n, c_j)$ indicates that the robot is uncertain about the object feature observations from the exploratory action $\alpha_n$. As discussed in [43], an efficient selection of the next object and next action should greedily reduce this uncertainty while still allowing the robot to explore (exploration-exploitation trade-off). In this regard, the next exploratory action $\alpha^*$ and the next object $c^{new*}$ are determined by:
$$c^{new*}, \alpha^* = \begin{cases} \arg\max_{\alpha_n \in \mathcal{A},\, c_j^{new} \in C^{new}} \mathrm{UNC}(\alpha_n, c_j^{new}), & \text{if } p_{rand} > \epsilon_{explor} \\ c^{new*} \sim \mathcal{U}\{c_1^{new}, \dots, c_{N^{new}}^{new}\},\ \alpha^* \sim \mathcal{U}\{\alpha_1, \dots, \alpha_{N_\alpha}\}, & \text{otherwise}, \end{cases}$$
where $\epsilon_{explor}$ is the exploration rate and $p_{rand}$ is drawn from the uniform distribution $\mathcal{U}(0, 1)$.
Algorithm 2 Active Touch for Learning Physical Properties
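A minimal sketch of the uncertainty estimate and the selection step at the core of Algorithm 2 (the names and the exploration rate value are ours):

```python
import numpy as np

rng = np.random.default_rng()

def uncertainty(probs):
    """UNC: mean Shannon entropy of GPC predictions.

    probs has shape (n_observations, n_classes), one row of
    p(c_j | v) per feature observation of an (action, object) pair.
    """
    H = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return float(H.mean())

def next_object_and_action(unc, eps_explor=0.1):
    """Pick (c_new*, alpha*) with an exploration-exploitation rule.

    unc[n, j] = UNC(alpha_n, c_j). With probability eps_explor the
    robot explores uniformly at random; otherwise it exploits by
    choosing the most uncertain (action, object) pair.
    """
    if rng.uniform() < eps_explor:                           # explore
        n, j = rng.integers(unc.shape[0]), rng.integers(unc.shape[1])
    else:                                                    # exploit
        n, j = np.unravel_index(np.argmax(unc), unc.shape)
    return int(j), int(n)
```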
Once the robot collects a new feature observation, it updates the prior tactile exploratory action experiences only for the selected action $\alpha^*$. This update includes re-combining the feature observations, updating the object relatedness $\lambda$, and transferring the prior feature observations to the observation models of the new objects.

5. Experimental Results

5.1. Experimental Objects

In order to evaluate the performance of the proposed active prior tactile knowledge transfer algorithm (APTKT), we deliberately selected 10 everyday objects with different physical properties, which served to build the robot's prior tactile exploratory action experiences (see Figure 1, prior objects). Furthermore, we selected five new objects that the robotic system was tasked to learn about (Figure 1, new objects). For each new object, there existed one or more old objects sharing similar physical properties. For example, both the rough sponge and the smooth sponge are soft; the paper box and the hard box have similar surface textures; the metal toolbox and the biscuit box have high thermal conductivity. In this way, when learning about the new objects based on their physical properties, the robot can leverage related prior tactile knowledge.

5.2. Exploratory Action Determination and Test Data Collection

In our experiment, we defined seven exploratory actions from the pressing, sliding, and static contact movements with various action parameters (pressing: $P_1$ with $d_P = 1$ mm, $t_P = 3$ s; $P_2$ with $d_P = 2$ mm, $t_P = 3$ s; sliding: $S_1$ with $F_S = 0.1$ N, $t_S = 5$ s, $v_S = 1$ cm/s; $S_2$ with $F_S = 0.1$ N, $t_S = 1$ s, $v_S = 5$ cm/s; $S_3$ with $F_S = 0.2$ N, $t_S = 5$ s, $v_S = 1$ cm/s; $S_4$ with $F_S = 0.2$ N, $t_S = 1$ s, $v_S = 5$ cm/s; static contact: $C_1$ with $d_C = 2$ mm, $t_C = 15$ s). Before applying any of the seven exploratory actions, the robot established light contact with the object, detected once the total normal force on the artificial skin exceeded 0.05 N. Furthermore, after applying an exploratory action, the robot raised its end-effector for 30 s so that the temperature sensors could return to the ambient temperature.
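For reference, these seven parameterizations can be collected in a small configuration table (the dictionary layout is our own convention; units follow the text: depths in mm, forces in N, speeds in cm/s, durations in s):

```python
# Seven exploratory actions of Section 5.2 as a configuration table
ACTIONS = {
    "P1": {"type": "pressing", "d_P": 1.0, "t_P": 3.0},
    "P2": {"type": "pressing", "d_P": 2.0, "t_P": 3.0},
    "S1": {"type": "sliding", "F_S": 0.1, "t_S": 5.0, "v_S": 1.0},
    "S2": {"type": "sliding", "F_S": 0.1, "t_S": 1.0, "v_S": 5.0},
    "S3": {"type": "sliding", "F_S": 0.2, "t_S": 5.0, "v_S": 1.0},
    "S4": {"type": "sliding", "F_S": 0.2, "t_S": 1.0, "v_S": 5.0},
    "C1": {"type": "static_contact", "d_C": 2.0, "t_C": 15.0},
}
```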
We evaluated the performance of our proposed method on a test dataset built by the robot applying each action 20 times on each object. During this process, the objects were manually shifted and rotated so that the data was robust against variations in the contact locations between the objects and the artificial skin.

5.3. Evaluation of Multiple Feature Observations Combination Method

We first evaluated the performance of our proposed multiple feature observations combination algorithm. To do this, we formed 10 groups of objects (shown in Figure 1) for which the robot constructed GPC observation models for each of the seven exploratory actions. Each group contained five objects selected uniformly at random from both the old and the new object lists. The algorithm performance was evaluated by the discrimination accuracy on the test dataset predicted by the GPC models as the number of feature observations grew. We compared our method against baseline methods that built the GPC models using only a single sensor modality.
The experiments were conducted 10 times for each object group. For a fair comparison, we used an RBF kernel [46] for each sensor modality. The results are plotted in Figure 6. For all seven exploratory actions, our proposed algorithm either exploited the combination of different sensor modalities to reach the best discrimination accuracy ($P_1$, $P_2$, $C_1$, $S_4$ in Figure 6) or performed on par with the best single-sensor result ($S_1$, $S_2$, $S_3$ in Figure 6), indicating that the robot actively selected the most informative sensory feedback to learn about the objects.

5.4. Evaluation of the Transfer Learning Method with Different Groups of Prior Objects

In this experiment, we evaluated the performance of our proposed transfer learning method (APTKT) in learning the five new objects (see new objects in Figure 1) with different groups of prior objects (see prior objects in Figure 1). To start the learning process, the robot applied each of the seven actions once to each new object. As the robot iteratively learned the new objects' physical properties, it updated both the multiple feature observations combination and the prior tactile knowledge built by the dependent GPC models, using all feature observations collected so far. At each learning iteration, we measured the object discrimination accuracy on the test dataset. The transfer learning performance was compared with a baseline learning method that combined multiple feature observations without transferring any prior tactile knowledge.
We randomly shuffled the prior objects into ten groups following a uniform distribution. Each group consisted of the feature observations and observation models of three prior objects. We conducted the experiment with five trials per group. In each trial, the robot followed the transfer learning approach and the no-transfer approach to collect 40 feature observations in total, allowing a fair comparison between the learning strategies. Figure 7 illustrates that, with the help of prior knowledge, the robot consistently outperformed the learning process without prior knowledge by about 10% discrimination accuracy.
In order to further evaluate the robustness of APTKT, the robot was then tasked to learn about the objects by applying only one of the exploratory actions. The experimental procedure was the same as described above. As the results in Figure 8 show, the robot achieved a larger improvement with actions $P_1$, $P_2$, and $C_1$ than with actions $S_1$, $S_2$, $S_3$, and $S_4$. For example, the robot increased the discrimination accuracy by 25% when it reused the prior tactile instance knowledge from movement $P_2$. However, when learning about objects with actions $S_1$ and $S_4$, little improvement was observed. This is because different exploratory actions produce different object feature observations: for action $P_2$ there existed more highly related prior tactile knowledge than for $S_1$ and $S_4$, so the robot could benefit more from it.
In all scenarios, using our proposed transfer learning algorithm, the robot could achieve a higher discrimination accuracy than the baseline method with the same number of feature observations. Therefore, we can conclude that APTKT helps the robot build reliable observation models of new objects with fewer training samples, even when only one kind of exploratory action is applied.

5.5. Increasing the Number of Prior Objects

We further evaluated the performance of our proposed method with an increasing number of prior tactile experiences. Intuitively, as the number of old objects grows, it becomes more likely that the robot can find highly related prior tactile knowledge, so the learning performance should continue to improve. In this regard, the robot was asked to learn about the new objects via all seven exploratory actions, with the number of old objects increasing from 5 and 7 to 10. We followed the same experimental procedure described above and conducted each experiment with five trials. Unexpectedly, as Figure 9b–d show, the growing amount of prior tactile knowledge reduced the transfer learning improvement. This was because, as the number of prior objects grew, it became more difficult for the robot to classify them. As a result, the object relatedness $\lambda$ predicted by the old objects' GPC models fell below the threshold $\epsilon_{neg}$, making the robot stop transferring prior knowledge.
To compensate for this, we use our previously proposed feature augmentation trick [45]. We define $p(c_i^{old} | v)$ as the prediction probability that a feature observation $v$ from a new object is assigned to the old object $c_i^{old}$. We then augment the feature observation $v$ as:
$$v' = [\underbrace{v}_{\text{original features}}, \underbrace{p(c_1^{old} | v), \dots, p(c_i^{old} | v), \dots, p(c_{N^{old}}^{old} | v)}_{\text{predictions from old objects' observation models}}].$$
The auxiliary features $[p(c_1^{old} | v), \dots, p(c_{N^{old}}^{old} | v)]$ encode the knowledge of all prior objects. They represent the relatedness between the prior objects and the new object and can thus help the robotic system distinguish among new objects. Furthermore, since the auxiliary features can be regarded as being perceived by an auxiliary sensor, we directly applied our proposed multiple feature observations combination method to the augmented observations by assigning a weight $\gamma$ to their kernel. The augmented feature observations were then used to build the dependent GPC models of the new objects.
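A minimal sketch of this augmentation, reusing a classifier with a predict_proba interface trained on the prior objects:

```python
import numpy as np

def augment(V_new, gpc_old):
    """Append old-object prediction probabilities as auxiliary features.

    V_new: (n_obs, d) new-object observations; gpc_old: classifier
    trained on the prior objects. Returns an (n_obs, d + N_old)
    array of augmented feature observations v'.
    """
    aux = gpc_old.predict_proba(V_new)   # p(c_i_old | v) per old object
    return np.hstack([V_new, aux])
```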
We tested our proposed feature augmentation technique when the robot leveraged the tactile knowledge of 3, 5, 7, and 10 prior objects to learn about the new objects via all seven actions. The learning performance is shown by the green curves in Figure 9a–d. Clearly, by introducing the probability predictions as auxiliary features, the robot was able to reuse the prior tactile knowledge again: it achieved a similar improvement in discrimination accuracy with 3 prior objects and a higher improvement with 5, 7, and 10 prior objects compared with the other methods. Specifically, when reusing 10 prior objects, the robot achieved 20% higher discrimination accuracy than the baseline method with only a single new feature observation, demonstrating one-shot learning behaviour. This experiment also indicates that, as the number of prior objects grows further, a further improvement in discrimination accuracy is achievable.

5.6. Negative Prior Tactile Knowledge Transfer Testing

When the constructed prior tactile exploratory action experiences are not relevant to the new objects, a brute-force transfer may degrade the learning performance, a phenomenon known as negative knowledge transfer. In this case, the transfer learning algorithm should stop leveraging the irrelevant prior knowledge.
In order to evaluate our proposed transfer learning method (APTKT) against negative tactile knowledge transfer, we deliberately selected irrelevant prior objects and compared the transfer learning performance with the baseline method, following the same experimental process described in Section 5.4. To determine which objects were relevant (or irrelevant) to each other, we built object confusion matrices to roughly evaluate object similarity. For each of the seven exploratory actions, we trained a Gaussian Mixture Model (GMM) and calculated the object confusion matrix. To do this, we first used the GMM to cluster all the samples in the dataset, with the hyper-parameters optimized by the Expectation-Maximization (EM) algorithm. The number of clusters was set equal to the number of objects (in our case, 15), and each cluster centroid was initialized as the mean value of all data samples belonging to one object. The maximum number of EM iterations was set to 100, with a convergence threshold of 0.001. We further calculated the confusion matrix averaged over all exploratory actions. These matrices indicate the average similarity between objects; we rescaled their values to lie within 0–1, with 0 meaning that two objects are totally dissimilar and 1 that they are the same. The objects with low similarity values to the target objects were selected as irrelevant objects. The results are shown in Figure 10. According to Figure 10, prior objects {1, 5, 7} (of the old objects {1–10}) were dissimilar to the new objects (objects {11–15}) with regard to exploratory movement $P_1$; objects {1, 4, 7} for $P_2$; objects {4, 7, 10} for $C_1$; objects {1, 6, 9} for $S_1$; objects {1, 7, 10} for $S_2$; objects {1, 3, 9} for $S_3$; and objects {1, 3, 8} for $S_4$. We used these objects as prior objects to test the transfer learning performance via a single exploratory action. We further selected objects {1, 5, 10} to test the learning process via all exploratory actions, since these three objects shared relatively small similarity with the new objects.
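A minimal sketch of this clustering step with scikit-learn's GaussianMixture, assuming integer object labels 0–14; the final min-max rescaling is a simplification of the normalization described above:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def confusion_matrix_gmm(X, y, max_iter=100, tol=1e-3):
    """Cluster samples with a GMM (one component per object) and
    count how often each object's samples land in another's cluster."""
    classes = np.unique(y)
    # Initialize each component mean with the per-object sample mean
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    gmm = GaussianMixture(n_components=len(classes), means_init=means,
                          max_iter=max_iter, tol=tol).fit(X)
    pred = gmm.predict(X)
    M = np.zeros((len(classes), len(classes)))
    for c, p in zip(y, pred):
        M[c, p] += 1
    return M / M.max()   # rescale similarity values into [0, 1]
```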
The results in Figure 11 show that the discrimination accuracy achieved by APTKT was similar to that of the baseline method, whether the robot applied one or all seven exploratory actions. This indicates that our proposed algorithm stopped transferring negative prior tactile instance knowledge.

6. Conclusions

In this work, we proposed a transfer learning method that enables a robot equipped with multi-modal artificial skin to actively reuse its prior tactile exploratory action experiences when learning about the detailed physical properties of new objects. These prior action experiences are built from the feature observations gathered when the robotic arm applies the pressing, sliding, and static contact movements with different action parameters on previously-explored objects (prior objects); the feature observations are perceived by multiple sensory modalities. Using our proposed tactile transfer learning method, the robot gets a "warm start" to the learning process: it applies fewer exploratory actions yet gains detailed tactile knowledge of new objects (e.g., normal force feedback at different pressing depths).
One limitation of our work is that performing the static contact movement took 15 s, which prevented rapid transfer learning. Furthermore, due to the limitations of our artificial skin, the robot can only interact with objects that have flat surfaces. In the future, we will extend our method to more exploratory actions (such as tapping and lifting), so that the robot can transfer more exploratory action experiences and learn more physical properties of an object, such as auditory feedback and center of mass. Another interesting topic is how to transfer prior tactile knowledge across different exploratory actions, e.g., from the pressing to the static contact movement.

Acknowledgments

This work was supported by the German Research Foundation (DFG) and the Technical University of Munich within the Open Access Publishing Funding Programme.

Author Contributions

M.K. and G.C. developed the idea of tactile transfer learning. D.F. and M.K. conceived and designed the experiments; D.F. performed the experiments; D.F., M.K., and G.C. analyzed the data and evaluated the experimental results; G.C. supervised this research as the principal investigator of its supporting projects. D.F., M.K., and G.C. wrote the paper. M.K. and D.F. contributed equally to this work.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Lederman, S.J.; Klatzky, R.L. Hand movements: A window into haptic object recognition. Cogn. Psychol. 1987, 19, 342–368. [Google Scholar] [CrossRef]
  2. Lederman, S.J.; Klatzky, R.L. Haptic classification of common objects: Knowledge-driven exploration. Cogn. Psychol. 1990, 22, 421–459. [Google Scholar] [CrossRef]
  3. Kaboli, M.; Long, A.; Cheng, G. Humanoids learn touch modalities identification via multi-modal robotic skin and robust tactile descriptors. Adv. Rob. 2015, 29, 1411–1425. [Google Scholar] [CrossRef]
  4. Pugh, K.J.; Bergin, D.A. Motivational influences on transfer. Educational Psychologist 2006, 41, 147–160. [Google Scholar] [CrossRef]
  5. Schunk, D. Learning Theories: An Educational Perspective, 4th ed.; Pearson: Upper Saddle River, NJ, USA, 2004; p. 22. ISBN 0130384968. [Google Scholar]
  6. Cree, V. Transfer of Learning in Professional and Vocational Education; Routledge: Abingdon, UK, 2000; ISBN 0415204186. [Google Scholar]
  7. Ormrod, J.E. Human Learning, 6th ed.; Pearson: Upper Saddle River, NJ, USA, 2012; ISBN 9780132595186. [Google Scholar]
  8. Hung, W. Problem-based learning: A learning environment for enhancing learning transfer. New Directions Adult Continuing Educ. 2004, 137, 27–38. [Google Scholar] [CrossRef]
  9. Choi, S.; Meeuwsen, H.; French, R.; Sherrill, C.; McCabe, R. Motor Skill Acquisition, Retention, and Transfer in Adults with Profound Mental Retardation. Adapted Phys. Act. Q. 2001, 18, 257–272. [Google Scholar] [CrossRef]
  10. Canini, K.R.; Shashkov, M.M.; Griffiths, T.L. Modeling Transfer Learning in Human Categorization with the Hierarchical Dirichlet Process. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 151–158. [Google Scholar]
  11. Yogeswaran, N.; Dang, W.; Navaraj, W.T.; Shakthivel, D.; Khan, S.; Polat, E.O.; Gupta, S.; Heidari, H.; Kaboli, M.; Lorenzelli, L.; et al. New materials and advances in making electronic skin for interactive robots. Adv. Rob. 2015, 29, 1359–1373. [Google Scholar] [CrossRef]
  12. Mittendorfer, P.; Cheng, G. Humanoid multimodal tactile-sensing modules. IEEE Trans. Rob. 2011, 27, 401–410. [Google Scholar] [CrossRef]
  13. Jamali, N.; Sammut, C. Material classification by tactile sensing using surface textures. In Proceedings of the 2010 IEEE International Conference Robotics and Automation (ICRA), Anchorage, AK, USA, 3–7 May 2010; pp. 2336–2341. [Google Scholar]
  14. Chu, V.; McMahon, I.; Riano, L.; McDonald, C.G.; He, Q.; Perez-Tejada, J.M.; Arrigo, M.; Fitter, N.; Nappo, J.C.; Darrell, T.; et al. Using robotic exploratory procedures to learn the meaning of haptic adjectives. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013; pp. 3048–3055. [Google Scholar]
  15. Papakostas, T.V.; Lima, J.; Lowe, M. A large area force sensor for smart skin applications. Proc. IEEE. 2002, 2, 1620–1624. [Google Scholar]
  16. Song, A.; Han, Y.; Hu, H.; Li, J. A novel texture sensor for fabric texture measurement and classification. IEEE Trans. Instrum. Meas. 2014, 63, 1739–1747. [Google Scholar] [CrossRef]
  17. Watanabe, K.; Sohgawa, M.; Kanashima, T.; Okuyama, M.; Norna, H. Identification of various kinds of papers using multi-axial tactile sensor with micro-cantilevers. In Proceedings of the World Haptics Conference (WHC), Daejeon, Korea, 14–18 April 2013; pp. 139–144. [Google Scholar]
  18. Kaboli, M.; Mittendorfer, P.; Hügel, V.; Cheng, G. Humanoids learn object properties from robust tactile feature descriptors via multi-modal artificial skin. In Proceedings of the 14th IEEE International Conference on Humanoid Robots (Humanoids), Madrid, Spain, 18–20 November 2014; pp. 187–192. [Google Scholar]
  19. Friedl, K.E.; Voelker, A.R.; Peer, A.; Eliasmith, C. Human-inspired neurorobotic system for classifying surface textures by touch. IEEE Rob. Autom. Lett. 2016, 1, 516–523. [Google Scholar] [CrossRef]
  20. Kaboli, M.; Cheng, G. Robust Tactile Descriptors for Discriminating Objects from Textural Properties via Artificial Robotic Skin. IEEE Trans. Rob. 2018, 9, 1–19. [Google Scholar]
  21. Kaboli, M.; Rosa, A.D.L.T.; Walker, R.; Cheng, G. In-hand object recognition via texture properties with robotic hands, artificial skin, and novel tactile descriptors. In Proceedings of the IEEE International Conference on Humanoid Robots (Humanoids), Seoul, South Korea, 3–5 November 2015; pp. 1155–1160. [Google Scholar]
  22. Bhattacharjee, T.; Wade, J.; Kemp, C. Material Recognition from Heat Transfer given Varying Initial Conditions and Short-Duration Contact. In Proceedings of the Robotics: Science and Systems, Rome, Italy, 13–17 July 2015; pp. 1–6. [Google Scholar]
  23. Yao, K.; Kaboli, M.; Cheng, G. Tactile-based Object Center of Mass Exploration and Discrimination. In Proceedings of the IEEE International Conference on Humanoid Robots (Humanoids), Birmingham, UK, 15–17 November 2017; pp. 1–6. [Google Scholar]
  24. Bhattacharjee, T.; Rehg, M.J.; Kemp, C. Inferring Object Properties with a Tactile Sensing Array Given Varying Joint Stiffness and Velocity. Int. J. Humanoid Rob. 2017, 14, 1–32. [Google Scholar] [CrossRef]
  25. Zhang, M.M.; Atanasov, N.; Daniilidis, K. Active end-effector pose selection for tactile object recognition through monte carlo tree search. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 3258–3265. [Google Scholar]
  26. Martinez-Hernandez, U.; Dodd, T.J.; Prescott, T.J. Feeling the shape: Active exploration behaviors for object recognition with a robotic hand. IEEE Trans. Syst. Man Cybern. Syst. 2017, 99, 1–10. [Google Scholar] [CrossRef]
  27. Xu, D.; Loeb, G.E.; Fishel, J.A. Tactile identification of objects using Bayesian exploration. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013; pp. 3056–3061. [Google Scholar]
  28. Schneider, A.; Sturm, J.; Stachniss, C.; Reisert, M.; Burkhardt, H.; Burgard, W. Object identification with tactile sensors using bag-of-features. In Proceedings of the IEEE RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 243–248. [Google Scholar]
  29. Lepora, N.F.; Martinez-Hernandez, U.; Prescott, T.J. Active touch for robust perception under position uncertainty. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013; pp. 3020–3025. [Google Scholar]
  30. Fishel, J.A.; Loeb, G.E. Bayesian exploration for intelligent identification of textures. Front. Neurorobotics 2012, 6, 1–20. [Google Scholar]
  31. Saal, H.; Ting, J.A.; Vijayakumar, S. Active sequential learning with tactile feedback. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Taipei, Taiwan, 18–22 October 2010; pp. 677–684. [Google Scholar]
  32. Tanaka, D.; Matsubara, T.; Sugimoto, K. An optimal control approach for exploratory actions in active tactile object recognition. In Proceedings of the 2014 IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain, 18–20 November 2014; pp. 787–793. [Google Scholar]
  33. Guo, H.L.; Zhang, L.; Su, Z. Empirical study on the performance stability of named entity recognition model across domains. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, Sydney, Australia, 22–23 July 2006; pp. 509–516. [Google Scholar]
  34. Yang, Q. Activity Recognition: Linking Low-level Sensors to High-level Intelligence. IJCAI 2009, 9, 20–25. [Google Scholar]
  35. Tommasi, T.; Orabona, F.; Caputo, B. Safety in numbers: Learning categories from few examples with multi model knowledge transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 3081–3088. [Google Scholar]
  36. Tommasi, T.; Orabona, F.; Kaboli, M.; Caputo, B. Leveraging over prior knowledge for online learning of visual categories. In Proceedings of the British Machine Vision Conference (BMVC), Guildford, UK, 3–7 September 2012; pp. 1–8. [Google Scholar]
  37. Kaboli, M. Leveraging over Prior Knowledge for Online Learning of Visual Categories across Robots. Thesis Dissertation, The Royal Institute of Technology (KTH), Stockholm, Sweden, 2012. [Google Scholar]
  38. Rodner, E.; Denzler, J. One-shot learning of object categories using dependent gaussian processes. In Joint Pattern Recognition Symposium; Springer: Berlin/Heidelberg, Germany, 2010; Volume 637, pp. 232–241. [Google Scholar]
  39. Yang, X.; Kim, S.; Xing, E.P. Heterogeneous multitask learning with joint sparsity constraints. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 7–10 December 2009; pp. 2151–2159. [Google Scholar]
  40. Kaboli, M.; Walker, R.; Cheng, G. Re-using prior tactile experience by robotic hands to discriminate in-hand objects via texture properties. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 2242–2247. [Google Scholar]
  41. Kaboli, M.; Cheng, G. Novel Tactile Descriptors and a Tactile Transfer Learning Technique for Active In-Hand Object Recognition via Texture Properties. In Proceedings of the IEEE International Conference on Humanoid Robots (Humanoids), Cancun, Mexico, 15 November 2016; pp. 1–6. [Google Scholar]
  42. Kaboli, M.; Cheng, G. Dexterous hands learn to re-use the past experience to discriminate in-hand objects from the surface texture. In Proceedings of the 33rd Annual Conference of the Robotics Society of Japan, Tokyo, Japan, 3–5 September 2015; pp. 1–6. [Google Scholar]
  43. Kaboli, M.; Feng, D.; Yao, K.; Lanillos, P.; Cheng, G. A Tactile-based Framework for Active Object Learning and Discrimination using Multi-modal Robotic Skin. IEEE Rob. Autom. Lett. 2017, 2, 2143–2150. [Google Scholar] [CrossRef]
  44. Kaboli, M.; Yao, K.; Feng, D.; Cheng, G. Tactile-based active object discrimination and target object search in an unknown workspace. Autonom. Rob. 2018, 2, 1–35. [Google Scholar] [CrossRef]
  45. Kaboli, M.; Feng, D.; Cheng, G. Active Tactile Transfer Learning for Object Discrimination in an Unstructured Environment using Multimodal Robotic Skin. Int. J. Humanoid Rob. 2017, 15, 1–27. [Google Scholar] [CrossRef]
  46. Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA, 2006. [Google Scholar]
  47. Chai, K.M. Generalization errors and learning curves for regression with multi-task Gaussian processes. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2009; pp. 279–287. [Google Scholar]
Figure 1. The robot leverages the prior tactile exploratory action experiences built by applying the pressing, sliding, and static contact movements with different action parameters on the prior objects (with index #1–#10) to learn about new objects’ (with index #1–#5) physical properties. The feature observations of prior objects (prior tactile instance knowledge) were used to transfer the action experiences.
Figure 2. (a) The robotic arm equipped with a multi-modal artificial skin; (b) The multi-modal artificial skin.
Figure 3. The figure visualizes the multiple exploratory actions. (a) The pressing movement, defined by the action parameters $d_P$ and $t_P$; (b) The sliding movement, with action parameters $v_S$, $F_S$, and $t_S$; (c) The static contact movement, defined by $d_C$ and $t_C$.
Figure 4. Flowchart of the Active Prior Tactile Knowledge Transfer algorithm.
Figure 5. Illustration of multiple feature observations combination method. (a) The robotic system combines the normal force sensing and temperature sensing to learn about objects by applying pressing and static contact movements; (b) The robot slides on the object surface to sense its textural property and thermal conductivity.
Figure 6. Multiple feature observations combination results for the exploratory actions $P_1$, $P_2$, $C_1$, $S_1$, $S_2$, $S_3$, $S_4$ and the averaged result. STIF: building the GPC observation model based on object stiffness; Thermal-C: thermal conductivity; Texture: object surface textural properties; Multi: combined feature observations. The horizontal axis represents the number of feature observations. The vertical axis represents the discrimination accuracy on the test dataset.
Figure 7. Transferring the exploratory actions experiences from three prior objects. The small plots show the learning process from 10 groups of old objects. The large plot on the right shows the averaged results. Horizontal axis: the growing number of feature observations the robot collected. Vertical axis: the discrimination accuracy of the test dataset.
Figure 8. Transfer learning using only one exploratory action.
Figure 9. Increasing the number of prior objects from 3, 5, 7 to 10, and comparing the performance of different learning methods. Red: baseline method; Blue: the proposed active prior tactile knowledge transfer method (APTKT) without auxiliary features; Green: APTKT with auxiliary features.
Figure 10. Object confusion matrices (values normalized between 0 and 1) for each exploratory action and the average. The blue indices represent the old objects. The red indices represent the new objects, with #11–#15 indicating new objects #1–#5. Best viewed in magnification.
Figure 11. Negative prior tactile knowledge transfer testing. The prior objects that were unrelated to the new objects were deliberately selected.
Table 1. Technical information of the sensors in the artificial skin [12].

Type | Sensor | Range | Accuracy | Resolution
Proximity | VCNL4010 | 200 mm | N.A. | 0.25 lx
Acceleration | BMA250 | ±2 g | 256 LSB/g | 3.91 mg
Temperature | LM71 | −40–150 °C | ±1.5 °C | 31.25 m°C
Normal force | customized | >10 N | 0.05 N | N.A.
Table 2. Exploratory actions and perception.

Exploratory action | Action parameters ($\theta$) | Sensory feedbacks | Features
Pressing | $d_P$, $t_P$ | $F$, $T$ | $\bar{F}$, $[\bar{T}, \nabla \bar{T}]$
Sliding | $F_S$, $t_S$, $v_S$ | $a$, $T$ | $TD$, $[\bar{T}, \nabla \bar{T}]$
Static contact | $d_C$, $t_C$ | $F$, $T$ | $\bar{F}$, $[\bar{T}, \nabla \bar{T}]$
