UCSY's Research Repository

Audio Classification in Speech and Music by using Neural Network: Multilayer Perceptron


dc.contributor.author Myint, Ei Sandar
dc.contributor.author Ni, Nwe
dc.date.accessioned 2019-07-29T04:42:51Z
dc.date.available 2019-07-29T04:42:51Z
dc.date.issued 2009-12-30
dc.identifier.uri http://onlineresource.ucsy.edu.mm/handle/123456789/1440
dc.description.abstract Audio classification has many applications. The rapid growth in the amount of audio data demands an efficient method to automatically segment or classify an audio stream based on its content. This paper focuses on classifying audio as speech or music. The classification system consists of three processing stages: feature extraction, training, and classification. Spectral flux, short-time energy, and cepstrum coefficients are used to classify input audio into two types: speech and music. The classifier is a Multilayer Perceptron (MLP) neural network, trained with the back-propagation algorithm. Simulation results are included, showing that the system can classify audio files that combine speech and music. en_US
dc.language.iso en en_US
dc.publisher Fourth Local Conference on Parallel and Soft Computing en_US
dc.title Audio Classification in Speech and Music by using Neural Network: Multilayer Perceptron en_US
dc.type Article en_US
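The abstract describes a three-stage pipeline: extract frame-level features (spectral flux, short-time energy, cepstrum coefficients), then train an MLP with back-propagation to separate speech from music. A minimal NumPy sketch of that idea is below; it is not the authors' code. It uses only two of the named features (short-time energy variance and mean spectral flux, omitting cepstral coefficients), synthetic "speech-like" and "music-like" signals in place of real audio, and a small one-hidden-layer MLP trained by hand-written back-propagation — all of these specifics are illustrative assumptions.

```python
# Illustrative sketch of speech/music classification with an MLP (not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def frames(x, size=256, hop=128):
    n = 1 + (len(x) - size) // hop
    return np.stack([x[i * hop : i * hop + size] for i in range(n)])

def features(x):
    F = frames(x)
    energy = (F ** 2).sum(axis=1)                    # short-time energy per frame
    mag = np.abs(np.fft.rfft(F, axis=1))             # magnitude spectrum per frame
    flux = np.sqrt((np.diff(mag, axis=0) ** 2).sum(axis=1))  # spectral flux
    # Two summary features: relative energy variation and normalized mean flux.
    return np.array([energy.std() / (energy.mean() + 1e-9),
                     flux.mean() / (mag.mean() + 1e-9)])

def speech_like(n=4096):
    # Bursts of noise separated by silence -> high energy variance, high flux.
    gates = rng.random(n // 512) > 0.5
    gates[0] = True                                  # ensure at least one burst
    return rng.standard_normal(n) * gates.repeat(512)

def music_like(n=4096):
    # Steady sum of tones -> nearly constant energy, low spectral flux.
    t = np.arange(n)
    return sum(np.sin(2 * np.pi * f * t / 8000.0) for f in (220, 330, 440))

X = np.array([features(speech_like()) for _ in range(40)] +
             [features(music_like() + 0.05 * rng.standard_normal(4096))
              for _ in range(40)])
y = np.array([1] * 40 + [0] * 40)                    # 1 = speech, 0 = music
X = (X - X.mean(0)) / X.std(0)                       # normalize features

# One-hidden-layer MLP with sigmoid units, trained by back-propagation.
W1 = rng.standard_normal((2, 8)) * 0.5; b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5; b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.1
for _ in range(2000):
    h = sig(X @ W1 + b1)                             # forward: hidden layer
    p = sig(h @ W2 + b2).ravel()                     # forward: output probability
    g2 = (p - y)[:, None]                            # output-layer error signal
    g1 = (g2 @ W2.T) * h * (1 - h)                   # backprop through hidden layer
    W2 -= lr * h.T @ g2 / len(X); b2 -= lr * g2.mean(0)
    W1 -= lr * X.T @ g1 / len(X); b1 -= lr * g1.mean(0)

acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

On these cleanly separable synthetic features the network should reach high training accuracy; with real audio, the cepstral coefficients the paper also uses would typically be needed for robust separation.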

