Show simple record

dc.contributor.author: Tong, H.
dc.contributor.author: Sharifzadeh, Hamid
dc.contributor.author: McLoughlin, I.
dc.date.accessioned: 2020-11-23T23:11:16Z
dc.date.available: 2020-11-23T23:11:16Z
dc.date.issued: 2020-10
dc.identifier.uri: https://hdl.handle.net/10652/5017
dc.description.abstract: Dysarthria is a speech disorder that can significantly impact a person's daily life, yet may be amenable to therapy. To automatically detect and classify dysarthria, researchers have proposed various computational approaches, ranging from traditional speech processing methods focusing on speech rate, intelligibility, and intonation to more advanced machine learning techniques. Recently developed machine learning systems rely on audio features for classification; however, research in other fields has shown that audio-video cross-modal frameworks can improve classification accuracy while simultaneously reducing the amount of training data required, compared to uni-modal (i.e. audio-only or video-only) systems. In this paper, we propose an audio-video cross-modal deep learning framework that takes both audio and video data as input to classify dysarthria severity levels. Our novel cross-modal framework achieves over 99% test accuracy on the UASPEECH dataset, significantly outperforming current uni-modal systems that use audio data alone. More importantly, it accelerates training while improving accuracy, and does so with reduced training data requirements. (en_NZ)
dc.language.iso: en (en_NZ)
dc.publisher: ISCA (International Speech Communication Association) (en_NZ)
dc.rights: Copyright © 2020 ISCA (en_NZ)
dc.subject: dysarthria (en_NZ)
dc.subject: motor speech disorders (en_NZ)
dc.subject: dysarthric patients (en_NZ)
dc.subject: assessment (en_NZ)
dc.subject: audio data processing systems (en_NZ)
dc.subject: video data processing systems (en_NZ)
dc.subject: deep-learning algorithms (en_NZ)
dc.subject: algorithms (en_NZ)
dc.subject: UASPEECH (dataset of dysarthric speech) (en_NZ)
dc.subject: Convolutional Neural Networks (CNN) (en_NZ)
dc.title: Automatic assessment of dysarthric severity level using audio-video cross-modal approach in deep learning (en_NZ)
dc.type: Conference Contribution - Paper in Published Proceedings (en_NZ)
dc.date.updated: 2020-11-10T13:30:08Z
dc.subject.marsden: 080108 Neural, Evolutionary and Fuzzy Computation (en_NZ)
dc.subject.marsden: 119999 Medical and Health Sciences not elsewhere classified (en_NZ)
dc.identifier.bibliographicCitation: Tong, H., Sharifzadeh, H., & McLoughlin, I. (2020). Automatic assessment of dysarthric severity level using audio-video cross-modal approach in deep learning. INTERSPEECH 2020 (pp. 4786-4790). doi:http://dx.doi.org/10.21437/Interspeech.2020-1997 Retrieved from http://www.interspeech2020.org/Program/Technical_Program/# (en_NZ)
unitec.publication.spage: 4786 (en_NZ)
unitec.publication.lpage: 4790 (en_NZ)
unitec.conference.title: INTERSPEECH 2020 "Cognitive Intelligence for Speech Processing" (en_NZ)
unitec.conference.org: ISCA (International Speech Communication Association) (en_NZ)
unitec.conference.location: Shanghai, China (en_NZ)
unitec.conference.sdate: 2020-10-25
unitec.conference.edate: 2020-10-29
unitec.peerreviewed: yes (en_NZ)
dc.contributor.affiliation: Unitec Institute of Technology (en_NZ)
dc.contributor.affiliation: Singapore Institute of Technology (en_NZ)
unitec.identifier.roms: 64998 (en_NZ)
unitec.institution.studyarea: Computing




© Unitec Institute of Technology, Private Bag 92025, Victoria Street West, Auckland 1142