In this research, an advanced machine learning model is applied to examine and analyze multiple Bangladeshi vocal cues in order to classify gender accurately. This work has applications in a variety of domains, including customer service, where determining a caller's gender can enable more personalized interactions. Voice-activated assistants likewise use gender recognition to customize responses and improve the user experience. Gender recognition through voice analysis is also used in security systems, transcription services, and sociolinguistic research to improve service delivery and provide demographic insights. The voice is affected by environmental and lifestyle factors such as smoking, acid reflux disease, urban air pollution, hot weather, and weight loss, all of which bear on health in Bangladesh. In this study, we collected voice data from male and female participants living in Bangladesh. To provide a consistent and convenient basis for analysis, we first converted each recording to Waveform Audio File Format (WAV). We then extracted the most significant acoustic features from these WAV recordings and converted them into statistical data, which we preprocessed to prepare it for in-depth analysis. After preprocessing, we used data visualization to understand the characteristics and patterns present in the voice recordings. This holistic approach enables a comprehensive assessment of the voice data in pursuit of the study's goals. The aim of this project is therefore to train a machine learning model, using up-to-date data processing techniques, that can predict gender from voice recordings. We aim to establish a reliable gender identification algorithm grounded in current findings and large-scale data.
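The pipeline described above rests on extracting acoustic cues, of which fundamental frequency (pitch) is the most discriminative for gender. The following is a minimal illustrative sketch, not the authors' exact pipeline: it estimates the fundamental frequency of a signal by autocorrelation and applies a hypothetical pitch threshold (adult male voices typically fall near 85–180 Hz, female voices near 165–255 Hz). The function names and the 165 Hz threshold are assumptions for illustration only.

```python
import math

def estimate_f0(samples, sample_rate, f_min=70.0, f_max=300.0):
    """Estimate the dominant pitch in Hz using normalized autocorrelation."""
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]          # remove DC offset
    energy = sum(v * v for v in x) or 1.0    # avoid division by zero
    best_lag, best_corr = 0, 0.0
    lag_min = int(sample_rate / f_max)       # shortest plausible period
    lag_max = int(sample_rate / f_min)       # longest plausible period
    for lag in range(lag_min, min(lag_max, n - 1)):
        corr = sum(x[i] * x[i + lag] for i in range(n - lag)) / energy
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sample_rate / best_lag if best_lag else 0.0

def classify_gender(f0, threshold_hz=165.0):
    """Hypothetical rule: pitches at or above the threshold are labelled female."""
    return "female" if f0 >= threshold_hz else "male"

if __name__ == "__main__":
    rate = 8000
    # Synthetic 120 Hz tone standing in for a male voice recording.
    tone = [math.sin(2 * math.pi * 120 * t / rate) for t in range(rate // 4)]
    f0 = estimate_f0(tone, rate)
    print(f0, classify_gender(f0))
```

In practice, a trained model such as a random forest or logistic regression over many features (pitch statistics, spectral centroid, MFCCs) replaces the single threshold, but the feature-extraction step is conceptually the same.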
Keywords: Voice analysis, Gender classification, Acoustic features, Machine learning, Vocal biometrics, Bengali language dataset, Gender recognition system, Random forest, Logistic regression, Bangla audio data.
Source of Funding:
This study did not receive any grant from funding agencies in the public, commercial, or not-for-profit sectors.
Competing Interests Statement:
The authors declare no competing financial, professional, or personal interests.
Consent for publication:
The authors declare that they consented to the publication of this study.
Authors' contributions:
All authors contributed equally to the conception and design of the work, data collection, simulation analysis, drafting of the article, and critical revision of the article. All authors have read and approved the final version of the manuscript.
Availability of data and material:
The authors are willing to share the data and materials upon reasonable request.