Article
Peer-Review Record

Fresher Experience Plays a More Important Role in Prioritized Experience Replay

Appl. Sci. 2022, 12(23), 12489; https://doi.org/10.3390/app122312489
by Jue Ma 1,2, Dejun Ning 1,*, Chengyi Zhang 1 and Shipeng Liu 1,2
Submission received: 20 October 2022 / Revised: 2 December 2022 / Accepted: 3 December 2022 / Published: 6 December 2022
(This article belongs to the Special Issue Deep Reinforcement Learning for Robots and Agents)

Round 1

Reviewer 1 Report

Thank you for giving me an opportunity to review this paper. The topic of this paper is interesting. The presentation is good but should follow the journal format. I have only a few comments.

1. Please rewrite the abstract; it is not clear enough.

2. Please add a discussion section on how this work will help in the real world.

3. Please test the algorithm on some data and present your findings.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

This study focuses on Prioritized Experience Replay (PER), a technique used at a satisfactory scale in deep learning. The work helps broaden the practical application of artificial intelligence tools, showing how they can reduce the level of error in deep learning operations and, above all, reduce the time needed to learn before reaching the correct answer.
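(For context on the technique under review, the following is a minimal sketch of proportional PER sampling. The class name, constants, and the omission of importance-sampling corrections are illustrative assumptions and do not reflect the authors' FPER implementation.)

```python
import numpy as np

class SimplePER:
    """Minimal proportional prioritized replay buffer (illustrative sketch,
    not the paper's FPER implementation)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities skew sampling
        self.buffer = []            # stored transitions
        self.priorities = []        # one priority per transition
        self.pos = 0

    def add(self, transition):
        # A new experience gets the current maximum priority so it is
        # sampled at least once before its TD error is known.
        max_p = max(self.priorities, default=1.0)
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(max_p)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sampling probability is proportional to priority^alpha.
        probs = np.asarray(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        return idx, [self.buffer[i] for i in idx]

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # Priority is the magnitude of the TD error plus a small epsilon.
        for i, e in zip(idx, td_errors):
            self.priorities[i] = abs(e) + eps
```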

The paper in its general form is suitable for publication, but it needs a number of amendments before final acceptance. These observations can be summarized in the following points:

§  Avoid repeating terms that are expanded more than once, such as Prioritized Experience Replay (PER), which is spelled out repeatedly in the manuscript. In addition, some terms, such as DQN, are never given their full name.

§  The manuscript does not mention or address a realistic example of the time expected to implement a solution using the artificial intelligence algorithms proposed in this study, although such a realistic, experimental example would help build the required confidence in this approach and demonstrate its importance.

§  New experiments could be rerun frequently to improve the learning efficiency of the DRL or FPER algorithms in each of the control tasks, thereby producing results that help improve learning speed in various environments. It is therefore important to increase the efficiency of experience replacement and to strengthen the learning process through modifications of the algorithms mentioned in a number of previous studies.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

It can be considered for publication.

Author Response

Thank you for your reply.
