DeepLung: Smartphone Convolutional Neural Network-Based Inference of Lung Anomalies for Pulmonary Patients

Mohsin Y. Ahmed, Md. Mahbubur Rahman, Jilong Kuang


DeepLung is an end-to-end deep-learning-based audio sensing and classification framework that detects lung anomalies (e.g., cough, wheeze) in pulmonary patients from streaming audio and inertial sensor data captured by a chest-held smartphone. We design and develop 1-D and 2-D convolutional neural networks for DeepLung and train them using the Interspeech 2010 Paralinguistic Challenge features. We compare two audio windowing schemes: i) natural windowing based on real-time respiration cycles, and ii) static-length windowing. Classifiers are developed for two system architectures: i) a mobile-cloud hybrid architecture, and ii) a mobile in-situ architecture. Patient privacy is preserved on the phone by filtering out speech with a shallow classifier. To evaluate DeepLung, a novel and rigorous lung activity dataset is built by collecting audio and inertial sensor data from more than 131 real pulmonary patients and healthy subjects, accurately annotated through professional crowdsourcing. Experimental results show that the best DeepLung convolutional neural network configuration is 15–27% more accurate than a state-of-the-art smartphone-based body sound detection system, with a best F1 score of 98%.
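
To make the 1-D convolutional classifier idea concrete, the sketch below shows a small network that maps a static-length audio window to lung-sound class logits. This is not the authors' model: the window length, layer sizes, class set, and the use of raw samples rather than the Interspeech 2010 Paralinguistic Challenge features are assumptions made purely for illustration.

# Minimal sketch (assumed architecture, not the paper's): a 1-D CNN over a
# fixed-length audio window for lung-sound classification.
import torch
import torch.nn as nn

class LungSound1DCNN(nn.Module):
    def __init__(self, n_classes: int = 3):  # e.g. cough / wheeze / other (assumed classes)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),            # global pooling -> fixed-size embedding
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                       # x: (batch, 1, samples)
        z = self.features(x).squeeze(-1)        # (batch, 32)
        return self.classifier(z)               # class logits

if __name__ == "__main__":
    # One 3-second static-length window at 16 kHz (assumed sampling rate and length).
    window = torch.randn(1, 1, 3 * 16000)
    logits = LungSound1DCNN()(window)
    print(logits.shape)                         # torch.Size([1, 3])

A respiration-cycle ("natural") windowing scheme, as described in the abstract, would feed variable-length segments instead of the fixed 3-second window; the global average pooling layer above is one common way such a network can accept varying input lengths.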


 DOI: 10.21437/Interspeech.2019-2953

Cite as: Ahmed, M.Y., Rahman, M.M., Kuang, J. (2019) DeepLung: Smartphone Convolutional Neural Network-Based Inference of Lung Anomalies for Pulmonary Patients. Proc. Interspeech 2019, 2335-2339, DOI: 10.21437/Interspeech.2019-2953.


@inproceedings{Ahmed2019,
  author={Mohsin Y. Ahmed and Md. Mahbubur Rahman and Jilong Kuang},
  title={{DeepLung: Smartphone Convolutional Neural Network-Based Inference of Lung Anomalies for Pulmonary Patients}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2335--2339},
  doi={10.21437/Interspeech.2019-2953},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2953}
}