Classification of audio samples by convolutional networks in audio beehive monitoring | Vestnik Tomskogo gosudarstvennogo universiteta. Upravlenie, vychislitelnaja tehnika i informatika – Tomsk State University Journal of Control and Computer Science. 2018. № 45. DOI: 10.17223/19988605/45/8

Classification of audio samples by convolutional networks in audio beehive monitoring

In this study, we consider the problem of classifying audio samples obtained through audio beehive monitoring. Audio beehive monitoring is a key component of electronic beehive monitoring (EBM) that can potentially automate the identification of various stressors for honeybee colonies. We propose to use convolutional neural networks (ConvNets) and compare the developed ConvNets in classifying audio samples captured by electronic beehive monitors deployed in live beehives. Each sample is assigned to one of three non-overlapping categories: bee buzzing (B), cricket chirping (C), and ambient noise (N). We show that ConvNets trained to classify raw audio samples perform slightly better than ConvNets trained to classify spectrogram images of audio samples. We demonstrate that ConvNets can successfully operate in situ on low-voltage devices such as the credit-card-sized Raspberry Pi computer.
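To make the approach concrete, the following is a minimal, illustrative sketch (in Python with TensorFlow/Keras) of a 1-D ConvNet that assigns fixed-length raw audio clips to the three categories B, C, and N. The layer sizes, the assumed clip length of 44100 samples (one second of mono audio at 44.1 kHz), and the choice of Keras are assumptions made only for illustration; this is not the authors' published architecture.

# A minimal sketch of a 1-D ConvNet for three-class raw-audio classification.
# Assumes fixed-length, mono audio clips; all layer sizes are illustrative.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3          # bee buzzing (B), cricket chirping (C), ambient noise (N)
CLIP_LENGTH = 44100      # assumed: 1 s of mono audio sampled at 44.1 kHz

def build_raw_audio_convnet():
    """Stack of Conv1D/MaxPooling1D blocks followed by a small dense head."""
    model = models.Sequential([
        layers.Input(shape=(CLIP_LENGTH, 1)),
        layers.Conv1D(16, kernel_size=64, strides=4, activation="relu"),
        layers.MaxPooling1D(pool_size=4),
        layers.Conv1D(32, kernel_size=32, strides=2, activation="relu"),
        layers.MaxPooling1D(pool_size=4),
        layers.Conv1D(64, kernel_size=16, strides=2, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random placeholder data stands in for labeled clips from a beehive monitor.
    x = np.random.randn(8, CLIP_LENGTH, 1).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=8)
    model = build_raw_audio_convnet()
    model.fit(x, y, epochs=1, batch_size=4, verbose=0)
    print(model.predict(x[:1]).shape)   # -> (1, 3): class probabilities for B, C, N

A spectrogram-based classifier, against which the raw-audio ConvNets are compared in the paper, would differ mainly in replacing the Conv1D/MaxPooling1D stack with 2-D convolutions over spectrogram images of the same clips.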


Keywords

deep learning, machine learning, convolutional neural networks, audio classification, audio processing, electronic beehive monitoring

Authors

Name | Organization | E-mail
Kulyukin Vladimir Alekseevich | Utah State University | vladimir.kulyukin@usu.edu
Mukherjee Sarbajit | Utah State University | mukherjee@aggiemail.usu.edu
Burkatovskaya Yulia Borisovna | Tomsk Polytechnic University | tracey@tpu.ru
Total: 3
