Target State Optimization: Drivability Improvement for Vehicles with Dual Clutch Transmissions
Abstract
1. Introduction
2. Optimization Problem and Objective Functions
2.1. Customer Objectives
2.1.1. Acceleration Peak and Acceleration Build-Up Objective
2.1.2. Reaction Time Objective
2.2. Discomfort Objectives
2.2.1. Engine Speed Objective
2.2.2. Clutch Torque/Jerk Objective
Algorithm 1. Local minima detection of the clutch torque.

    Begin
        Set tfg = gradient(torque_filtered)
        Declare osc
        Set q = 0
        Set i = 1
        While i < length(tfg)
            if (tfg[i] < 0 && tfg[i − 1] ≥ 0) || (tfg[i] ≥ 0 && tfg[i − 1] < 0) then
                Set osc[q] = i
                Set q++
            end
            Set i++
        end
        Set l = 0
        Declare lp
        For Each o in osc
            if torque_filtered[o − 1] > torque_filtered[o] then
                Set lp[l] = o
                Set l++
            end
        end
        Set n = length(lp)
        Return n
    End
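The same detection can be written compactly in Python. The following is a minimal sketch of Algorithm 1, assuming the filtered clutch torque is available as a one-dimensional NumPy array; the function name is illustrative.

```python
import numpy as np

def count_clutch_torque_minima(torque_filtered: np.ndarray) -> int:
    """Count local minima of the filtered clutch torque (cf. Algorithm 1)."""
    tfg = np.gradient(torque_filtered)   # gradient of the filtered torque signal
    neg = tfg < 0                        # sign class of the gradient (< 0 vs. >= 0)
    # indices i where the gradient sign differs from the previous sample
    crossings = np.where(neg[1:] != neg[:-1])[0] + 1
    # keep only the crossings the torque falls into, i.e. local minima
    minima = [i for i in crossings if torque_filtered[i - 1] > torque_filtered[i]]
    return len(minima)
```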
2.3. The Reward
2.4. Software in the Loop Environment
2.5. Optimization Parameters
- The engine speed increases while the vehicle accelerates (red).
- The engine speed is held steady until the input-shaft speed almost equals the engine speed (blue).
- The engine speed and its gradient are adjusted to the input-shaft speed and its gradient to allow a smooth clutch engagement (yellow).
- Parameter 1 (phase 1): percentage of the engine torque used to accelerate the engine speed towards phase 2. A low value gives a quick vehicle acceleration response but raises the engine speed only slowly, resulting in sluggish acceleration; a high value raises the engine speed quickly but worsens the vehicle acceleration response.
- Parameter 2 (phase 1): minimum engine torque used to accelerate the engine towards phase 2 (active if the value of parameter 1 is too low).
- Parameter 3: P-gain of the controller that steers the engine speed towards the engine target speed.
- Parameter 4 (phase 3): time for reducing the slip speed in phase 3.
- Parameter 5 (phase 3): engine speed to be reached at the end of phase 3. A compact representation of these five parameters is sketched after this list.
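As an illustration of the resulting search space, the five calibration parameters can be grouped into a single container that an optimizer samples and the software-in-the-loop environment evaluates. Field names, units, and the example values below are assumptions for illustration only, not the labels used in the production software.

```python
from dataclasses import dataclass

@dataclass
class LaunchParameters:
    """Five calibration parameters of the launch controller (illustrative names)."""
    torque_fraction_phase1: float    # parameter 1: share of engine torque used to raise the engine speed
    torque_min_phase1: float         # parameter 2: minimum engine torque for phase 1, in Nm
    p_gain: float                    # parameter 3: P-gain on the engine target-speed error
    slip_reduction_time: float       # parameter 4: duration of slip reduction in phase 3, in s
    engine_speed_end_phase3: float   # parameter 5: engine speed at the end of phase 3, in rpm

# one candidate parameter set as the optimizer could propose it (placeholder values)
candidate = LaunchParameters(0.4, 30.0, 0.8, 0.6, 1500.0)
```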
3. Benchmark: Self-Learning Algorithms
4. Target State Optimization
4.1. Generation of the Action
4.2. Model Optimization and Hyperparameter-Tuning
4.3. Brief Introduction into Neural Networks
4.4. Relevance of the Activation Function
4.5. Batch Size
5. Results
5.1. Software in the Loop
5.2. Robustness
5.3. Test Vehicle
6. Discussion
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Objective | Value | Unit |
|---|---|---|
| Engine Speed Drop | 0.00 | rpm |
| Acceleration Peak | 3.25 | m/s² |
| Acceleration Build-up | 6.50 | m/s³ |
| Reaction Time | 0.50 | s |
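Such a table can be read as a target vector and compared against the objective values measured in a simulated launch. The distance function below is a minimal sketch of that idea, not the article's exact reward formulation; in practice each objective would be normalized to a comparable scale before taking the norm.

```python
import numpy as np

# Target values from the table above (units: rpm, m/s², m/s³, s)
TARGETS = np.array([0.00, 3.25, 6.50, 0.50])

def distance_to_target(measured: np.ndarray) -> float:
    """Euclidean distance between the measured objective vector
    (engine speed drop, acceleration peak, acceleration build-up,
    reaction time) and the target state."""
    return float(np.linalg.norm(measured - TARGETS))

# example: a launch measured at 40 rpm drop, 3.0 m/s², 6.0 m/s³, 0.6 s
print(distance_to_target(np.array([40.0, 3.0, 6.0, 0.6])))
```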
| P-Gain | Engine Speed Error: −500 rpm | −250 rpm | 0 rpm | 250 rpm | 500 rpm |
|---|---|---|---|---|---|
| Driver Request: 0 Nm | z11 | z12 | z13 | z14 | z15 |
| Driver Request: 150 Nm | z21 | z22 | z23 | z24 | z25 |
| Driver Request: 300 Nm | z31 | z32 | z33 | z34 | z35 |
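Between the calibrated grid points, a P-gain is needed for arbitrary operating points; a common way to obtain it is bilinear interpolation over the map. The sketch below uses placeholder z values and SciPy's RegularGridInterpolator for illustration, which is not necessarily how the production software implements the lookup.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# axes of the P-gain map above; z holds placeholders for the calibrated entries z11 ... z35
driver_request_nm = np.array([0.0, 150.0, 300.0])
speed_error_rpm = np.array([-500.0, -250.0, 0.0, 250.0, 500.0])
z = np.zeros((3, 5))  # z[i, j] = P-gain at (driver_request_nm[i], speed_error_rpm[j])

p_gain_map = RegularGridInterpolator(
    (driver_request_nm, speed_error_rpm), z, method="linear",
    bounds_error=False, fill_value=None)  # fill_value=None extrapolates at the map edges

# bilinear interpolation between the calibrated grid points
gain = p_gain_map([[120.0, -80.0]])[0]
```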
| Algorithm | | | | | | |
|---|---|---|---|---|---|---|
| NSGA-II | 104.2 | 15.37 | 84 | 126 | 21.0 | 13.8 |
| DDPG | 0.0 | - | 0 | 0 | - | - |
| PPO | 1.2 | 1.30 | 0 | 3 | 363.0 | 261.6 |
| A2C | 2.0 | 1.87 | 0 | 5 | 332.8 | 274.3 |
| SAC | 28.8 | 4.15 | 22 | 32 | 46.4 | 31.1 |
| Algorithm | | | | | | |
|---|---|---|---|---|---|---|
| NSGA-II | 104.2 | 15.4 | 84 | 126 | 21.0 | 13.8 |
| SAC | 28.8 | 4.2 | 22 | 32 | 46.4 | 31.1 |
| TSO (sigmoidal) | 7.8 | 5.5 | 2 | 16 | 221.6 | 332.0 |
| TSO (ReLU) | 250.0 | 160.4 | 36 | 442 | 20.6 | 11.3 |
| Test Run/Iteration | Layer | Neurons | Learning Rate |
|---|---|---|---|
| 7 | 1 | 512 | 0.0237 |
| 16 | 1 | 25 | 0.0545 |
| 23 | 1 | 512 | 0.0237 |
| 6 | 6 | 130 | 0.0247 |
| 16 | 1 | 180 | 0.0414 |
| 20 | 4 | 425 | 0.0012 |
| Test Run | ±1% | ±2% |
|---|---|---|
| 1 | 20 | 21 |
| 2 | 78 | 33 |
| 3 | 17 | 74 |
| 4 | 7 | 0 |
| Average | 30.5 | 32 |
| Objective | Value | Unit |
|---|---|---|
| Engine Speed Drop | 0.00 | rpm |
| Acceleration Peak | 3.25 | m/s² |
| Clutch Torque Minima | 0.00 | - |
| Reaction Time | 0.50 | s |
| Acceleration | Reaction Time | Successful Iterations | First Success |
|---|---|---|---|
| 3.25 m/s² | 0.5 s | 0 | - |
| 3.5 m/s² | 0.3 s | 16 | 13 |
| 3.5 m/s² | 0.4 s | 11 | 4 |
| 4 m/s² | 0.3 s | 1 | 13 |