Performance Comparison of Different Momentum Techniques on Deep Reinforcement Learning


SARIGÜL M. , AVCI M.

IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA), Gdynia, Poland, 3-5 July 2017, pp.302-306

  • Volume:
  • DOI: 10.1109/inista.2017.8001175
  • City: Gdynia
  • Country: Poland
  • Pages: pp.302-306

Abstract

The increasing popularity of deep convolutional neural networks in many different areas has led to their increased use in reinforcement learning. Training a huge deep neural network with simple gradient descent can take a very long time, so additional learning techniques are needed to address this problem. One such technique is momentum, which accelerates gradient descent learning. Although momentum techniques were mostly developed for supervised learning problems, they can also be applied to reinforcement learning; however, their efficiency may vary due to dissimilarities between the two learning processes. In this paper, the performances of different momentum techniques are compared on a reinforcement learning problem, the Othello game benchmark. Test results show that the Nesterov accelerated momentum technique provides more effective generalization on this benchmark.
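The abstract contrasts classical momentum with Nesterov accelerated momentum as variants of gradient descent. As a minimal illustrative sketch (not the paper's actual training setup), the two update rules can be written as follows, using a toy quadratic loss so the difference is visible: classical momentum evaluates the gradient at the current weights, while Nesterov momentum evaluates it at the "look-ahead" point reached by the accumulated velocity.

```python
def grad(w):
    # Gradient of a toy quadratic loss f(w) = 0.5 * w**2 (illustrative only;
    # in the paper the loss would come from the deep network's training signal).
    return w

def classical_momentum(w0, lr=0.1, mu=0.9, steps=500):
    """Gradient descent with classical (heavy-ball) momentum."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(w)   # gradient at the current point
        w = w + v
    return w

def nesterov_momentum(w0, lr=0.1, mu=0.9, steps=500):
    """Nesterov accelerated gradient: gradient at the look-ahead point."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(w + mu * v)  # look-ahead by the velocity first
        w = w + v
    return w

print(classical_momentum(10.0), nesterov_momentum(10.0))
```

The only difference between the two functions is where the gradient is evaluated; this small correction term is what gives Nesterov's method its improved behavior near the optimum.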