Performance Comparison of Different Momentum Techniques on Deep Reinforcement Learning


IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA), Gdynia, Poland, 3-5 July 2017, pp. 302-306

  • Publication Type: Conference Paper / Full Text
  • DOI Number: 10.1109/inista.2017.8001175
  • City: Gdynia
  • Country: Poland
  • Page Numbers: pp.302-306
  • Keywords: Deep reinforcement learning, momentum techniques, Nesterov momentum, neural networks
  • Çukurova University Affiliated: Yes


The increasing popularity of deep convolutional neural networks in many different areas has led to their growing use in reinforcement learning. Training a large deep neural network with plain gradient descent can take a very long time, so additional learning techniques are needed to address this problem. One such technique is momentum, which accelerates gradient descent learning. Although momentum techniques were mostly developed for supervised learning problems, they can also be applied to reinforcement learning. However, their efficiency may vary due to the dissimilarities between the two training processes. In this paper, the performance of different momentum techniques is compared on a reinforcement learning problem: the Othello game benchmark. Test results show that the Nesterov accelerated momentum technique provided the most effective generalization on the benchmark.
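To illustrate the distinction the abstract draws, the following is a minimal sketch of classical (heavy-ball) momentum versus Nesterov accelerated momentum as generic gradient-descent updates. It is not the paper's implementation: the toy objective f(w) = w², the learning rate, and the momentum coefficient are all assumptions chosen for illustration.

```python
def grad(w):
    """Gradient of the toy objective f(w) = w**2 (hypothetical example)."""
    return 2.0 * w

def classical_momentum(w, steps=100, lr=0.1, mu=0.9):
    """Heavy-ball momentum: velocity accumulates the gradient at the current point."""
    v = 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(w)  # gradient evaluated at w itself
        w = w + v
    return w

def nesterov_momentum(w, steps=100, lr=0.1, mu=0.9):
    """Nesterov momentum: gradient is evaluated at the look-ahead point w + mu*v."""
    v = 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(w + mu * v)  # look-ahead gradient
        w = w + v
    return w

print(classical_momentum(1.0))
print(nesterov_momentum(1.0))
```

The only difference between the two updates is where the gradient is evaluated: Nesterov's look-ahead point gives the method a partial correction before the step is taken, which is often credited for its faster and more stable convergence.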