A Deep Learning Approach for the Automatic Classification of Acoustic Events: A Case of Natural Disasters

dc.contributor.author: Obu, E.A.
dc.date.accessioned: 2023-08-14T17:04:47Z
dc.date.available: 2023-08-14T17:04:47Z
dc.date.issued: 2020-10
dc.description: MPhil. Computer Science (en_US)
dc.description.abstract: Automatic classification of acoustic events is a signal-processing task that has recently gained research interest, especially in the machine learning community, owing to its cost-effectiveness for the long-term monitoring of large areas and the collection of large amounts of data in real time. A plethora of techniques has been proposed and adopted for the classification of acoustic events such as respiratory sounds, animal calls/vocalizations, baby cries, speech disorders, and environmental sounds. This study aimed to develop a natural disaster sound classification model that enables the automatic classification of natural disasters. Accordingly, deep learning techniques, namely a Convolutional Neural Network (CNN) and a Long Short-Term Memory-based Recurrent Neural Network (RNN-LSTM), were used to develop classification models. The algorithms and sound features adopted in this study were motivated by methodologies used in the area of speech/voice recognition. To ensure relevant and rigorous research, the study adopted the design science research methodology, which consists of a five-phase cycle: awareness of the problem, suggestion, development, evaluation, and conclusion. Furthermore, to ensure real-time classification of natural disaster sounds, a detection-by-classification approach was adopted instead of detection-and-classification. The dataset consisted of five classes of natural disaster sounds extracted from the Freesound database. The sound files were preprocessed at 16,000 Hz to extract 13 Mel-Frequency Cepstral Coefficients (MFCCs), using an arbitrary time frame of 0.1 s. Finally, the performance of both models was validated using classification metrics and cross-validation. Results indicated that although the CNN performed slightly better than the RNN-LSTM, both models were effective at automatically discerning one disaster sound from another in real time. The best results, a classification accuracy of 99.95% and an area under the curve (AUC) score of 0.999, were obtained with the CNN. (en_US)
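The feature-extraction step described in the abstract (audio sampled at 16,000 Hz, 13 MFCCs per 0.1 s frame) can be sketched roughly as follows. This is a minimal NumPy illustration of the general MFCC pipeline (framing, windowed power spectrum, mel filterbank, log, DCT), not the thesis's actual code; the synthetic sine-plus-noise signal stands in for a Freesound clip, and the filter count of 26 is an assumption.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):          # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(frame, sr, n_mfcc=13, n_filters=26):
    n_fft = len(frame)
    # Windowed power spectrum of the frame
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    energies = mel_filterbank(n_filters, n_fft, sr) @ spectrum
    log_e = np.log(energies + 1e-10)
    # DCT-II to decorrelate the log filterbank energies
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * n + 1) / (2.0 * n_filters)))
    return dct @ log_e

SR = 16_000            # sampling rate used in the study
FRAME = int(0.1 * SR)  # 0.1 s frames -> 1600 samples

# Synthetic one-second "event" (placeholder for a real disaster recording)
t = np.arange(SR) / SR
signal = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(SR)

frames = signal[: len(signal) // FRAME * FRAME].reshape(-1, FRAME)
features = np.stack([mfcc(f, SR) for f in frames])
print(features.shape)  # (10, 13): ten 0.1 s frames, 13 coefficients each
```

A matrix of per-frame MFCC vectors like this is the typical input to the CNN and RNN-LSTM classifiers the study compares.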
dc.identifier.uri: http://ugspace.ug.edu.gh:8080/handle/123456789/39725
dc.language.iso: en (en_US)
dc.publisher: University of Ghana (en_US)
dc.subject: Requirement
dc.subject: Environmental Sound (en_US)
dc.subject: Research Methodology (en_US)
dc.subject: Convolutional Neural Network (en_US)
dc.subject: Respiratory Sound (en_US)
dc.title: A Deep Learning Approach for the Automatic Classification of Acoustic Events: A Case of Natural Disasters (en_US)
dc.type: Thesis (en_US)

Files

Original bundle
Name: Ekpezu, Akon Obu_2020.pdf
Size: 3.27 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission