Evaluating the Robustness of RNN Models on Diverse Sequential Datasets

Authors

  • Imra Shoukat UMT, Sialkot
  • Mujeeb Rehman UMT, Sialkot

Keywords:

RNN, Sequence Modeling, Hyperparameter Tuning

Abstract

Recurrent Neural Networks (RNNs) are deep neural networks suited to sequential data because of their ability to capture and retain temporal dependencies. This characteristic makes them well suited to applications such as natural language processing, speech recognition, and time-series prediction. In this paper, RNN models are applied and analyzed on the UCI Student Performance dataset and the UCI Human Activity Recognition (HAR) dataset. The study examines the impact of hyperparameter optimization and dataset balancing on key performance metrics such as model accuracy, memory usage, and prediction time. By varying critical settings, namely the number of RNN units, the dropout rate, and the batch size, the effect of these parameters on model performance is analyzed. The results show that tuning the number of RNN units and the batch size improves model accuracy and also affects computational efficiency. The RNN achieves 99.8% test accuracy on the UCI Student Performance dataset and 95.11% on the HAR dataset. In addition, balanced datasets improve generalization by reducing bias and overfitting. A 70%-30% train-test split is found to provide the best overall accuracy relative to its computational cost. This study demonstrates how hyperparameter optimization and careful data handling improve the performance of RNN models.
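The abstract does not publish the authors' exact configuration, but the setup it describes (a simple RNN whose unit count is a tunable hyperparameter, evaluated under a 70%-30% train-test split) can be sketched as follows. This is a minimal NumPy illustration; the dimensions, weight scales, and random data are assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper does not report its exact configuration.
n_samples, seq_len, n_features, n_units = 100, 20, 9, 32

def rnn_forward(X, W_x, W_h, b):
    """Forward pass of a simple (Elman) RNN: h_t = tanh(x_t @ W_x + h_{t-1} @ W_h + b)."""
    h = np.zeros((X.shape[0], W_h.shape[0]))  # initial hidden state
    for t in range(X.shape[1]):               # step through the sequence
        h = np.tanh(X[:, t, :] @ W_x + h @ W_h + b)
    return h  # final hidden state, shape (batch, n_units)

# Randomly initialized weights; n_units controls model capacity.
W_x = rng.standard_normal((n_features, n_units)) * 0.1
W_h = rng.standard_normal((n_units, n_units)) * 0.1
b = np.zeros(n_units)

X = rng.standard_normal((n_samples, seq_len, n_features))

# 70%-30% train-test split, as reported in the abstract.
split = int(0.7 * n_samples)
X_train, X_test = X[:split], X[split:]

h_train = rnn_forward(X_train, W_x, W_h, b)
print(h_train.shape)  # (70, 32)
```

In a full experiment, a classification head, dropout, and batched gradient training would sit on top of this forward pass; varying `n_units` and the batch size is what the study reports as the main driver of the accuracy/efficiency trade-off.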


Published

2025-09-01 — Updated on 2025-12-26

How to Cite

Imra Shoukat, & Rehman, M. (2025). Evaluating the Robustness of RNN Models on Diverse Sequential Datasets. University of Sindh Journal of Information and Communication Technology, 9(1), 1–10. Retrieved from https://sujo.usindh.edu.pk/index.php/USJICT/article/view/7547 (Original work published September 1, 2025)
