Robot Programming from Fish Demonstrations
Abstract
1. Introduction
2. Learning from Demonstration Framework
2.1. Module 1—Task Demonstration
2.2. Module 2—Fish Tracking
2.3. Module 3—Analysis of Fish Trajectories
2.4. Module 4—Acquisition of Robot Training Data
2.5. Module 5—Generating A Perception–Action Robot Controller
2.6. Module 6—Performance Evaluation
3. Materials and Methods
3.1. Fish Learning Experiments (Task Demonstration)
3.2. Fish Tracking
3.2.1. Manual Video Annotations
3.2.2. Training DeepLabCut Models
3.2.3. Performance Evaluation of the DLC Models
3.3. Analysis of Fish Trajectories
3.3.1. Preprocessing
3.3.2. Extracting Sensor Readings and Motor Commands for the Perception–Action Controller
3.4. Perception–Action Controllers
3.4.1. The Random Walk Controller
3.4.2. The Proportional Controller
3.4.3. The ANN Controller
3.4.4. The Fuzzy Controller
3.5. Computer Simulations
3.6. Performance Evaluation of the Controllers
4. Results
4.1. Performance of the DLC Models
4.2. Performance of the Perception–Action Controllers
5. Discussion
5.1. Summary
5.2. Limitations
5.3. Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Distance | Bearing | Motor Command |
|---|---|---|
| far | far left | turn hard left |
| far | left | turn left |
| far | front | go straight |
| far | right | turn right |
| far | far right | turn hard right |
| near | far left | turn hard left |
| near | left | turn left |
| near | front | go straight |
| near | right | turn right |
| near | far right | turn hard right |
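The rule base above can be sketched as a zero-order Sugeno (weighted-average) fuzzy controller. This is an illustrative reconstruction, not the paper's implementation: the membership-function shapes, the input universes (distance in body lengths, bearing in degrees with positive values to the left), and the crisp turn rates assigned to each command are all assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership: 1 at b, falling to 0 at a and c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def shoulder(x, a, b, rising=True):
    """Saturating (shoulder) membership: ramps between a and b."""
    if rising:
        return 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)
    return 1.0 if x <= a else 0.0 if x >= b else (b - x) / (b - a)

# Illustrative linguistic terms (universes are assumptions):
# distance in body lengths, bearing in degrees (positive = target to the left).
distance_sets = {
    "near": lambda d: shoulder(d, 0.5, 1.5, rising=False),
    "far":  lambda d: shoulder(d, 0.5, 1.5, rising=True),
}
bearing_sets = {
    "far left":  lambda b: shoulder(b, 30, 60, rising=True),
    "left":      lambda b: tri(b, 5, 30, 60),
    "front":     lambda b: tri(b, -15, 0, 15),
    "right":     lambda b: tri(b, -60, -30, -5),
    "far right": lambda b: shoulder(b, -60, -30, rising=False),
}
# Zero-order Sugeno consequents: assumed crisp turn rates in deg/s.
actions = {"turn hard left": 60, "turn left": 30, "go straight": 0,
           "turn right": -30, "turn hard right": -60}

# The ten rules from the table: (distance term, bearing term, command).
rules = [(d, b, a) for d in ("far", "near")
         for b, a in zip(("far left", "left", "front", "right", "far right"),
                         actions)]

def fuzzy_turn(distance, bearing):
    """Fire all rules (min for AND), defuzzify by weighted average."""
    num = den = 0.0
    for d_term, b_term, action in rules:
        w = min(distance_sets[d_term](distance), bearing_sets[b_term](bearing))
        num += w * actions[action]
        den += w
    return num / den if den > 0 else 0.0
```

A target straight ahead (`fuzzy_turn(2.0, 0.0)`) yields a zero turn rate, while a target off to one side produces a turn of matching sign, blending adjacent rules smoothly as the bearing grows. Note that with this rule table the near and far rows prescribe the same commands, so distance does not change the output here.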
| | Dataset 1 | Dataset 2 |
|---|---|---|
| **DLC-1** | | |
| (%) | 84.9 ± 9.7 [45.4, 95.3] | 48.0 ± 24.9 [1.0, 84.9] |
| (%) | 86.2 ± 9.1 [46.7, 97.0] | 47.6 ± 24.9 [0.6, 86.2] |
| (BL) | 0.038 ± 0.002 [0.039, 0.042] | 0.040 ± 0.005 [0.033, 0.052] |
| (BL) | 0.035 ± 0.002 [0.032, 0.042] | 0.035 ± 0.002 [0.030, 0.041] |
| **DLC-2** | | |
| (%) | 89.7 ± 6.8 [54.0, 97.3] | 85.7 ± 5.3 [73.6, 92.7] |
| (%) | 90.3 ± 6.9 [54.2, 98.6] | 85.9 ± 5.6 [69.6, 95.5] |
| (BL) | 0.032 ± 0.002 [0.029, 0.034] | 0.040 ± 0.004 [0.034, 0.048] |
| (BL) | 0.030 ± 0.002 [0.027, 0.034] | 0.038 ± 0.003 [0.033, 0.044] |

Values are mean ± SD [min, max]; BL = body lengths.
| Controller | Exp. 1 S (%) | Exp. 1 T (s) | Exp. 1 D | Exp. 2 S (%) | Exp. 2 T (s) | Exp. 2 D |
|---|---|---|---|---|---|---|
| random walk | 0 | n/a | n/a | 1.2 | 12.4 ± 4.0 | 2.27 ± 0.89 |
| proportional | 93.0 | 6.8 ± 1.3 | 1.04 ± 0.04 | 86.4 | 6.9 ± 1.5 | 1.06 ± 0.05 |
| ANN | 96.3 | 6.9 ± 1.2 | 1.05 ± 0.01 | 78.0 | 7.4 ± 1.5 | 1.10 ± 0.10 |
| fuzzy | 100 | 6.8 ± 1.1 | 1.05 ± 0.02 | 98.7 | 6.8 ± 1.5 | 1.05 ± 0.02 |

Exp. 1 and Exp. 2 denote Simulation Experiments 1 and 2.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Coppola, C.M.; Strong, J.B.; O’Reilly, L.; Dalesman, S.; Akanyeti, O. Robot Programming from Fish Demonstrations. Biomimetics 2023, 8, 248. https://doi.org/10.3390/biomimetics8020248