Tuberc Respir Dis > Volume 86(4); 2023 > Article
Kim, Hyon, Woo, Lee, Lee, Ha, and Chung: Evolution of the Stethoscope: Advances with the Adoption of Machine Learning and Development of Wearable Devices

Abstract

The stethoscope has long been used for the examination of patients, but the importance of auscultation has declined due to its several limitations and the development of other diagnostic tools. Nevertheless, auscultation is still recognized as a primary diagnostic method because it is non-invasive and provides valuable information in real time. To overcome the limitations of existing stethoscopes, digital stethoscopes with machine learning (ML) algorithms have been developed. It is now possible to record and share respiratory sounds, and artificial intelligence (AI)-assisted auscultation using ML algorithms can distinguish among types of sounds. Recently, the demand for remote care and for non-face-to-face treatment of diseases requiring isolation, such as coronavirus disease 2019 (COVID-19), has increased. To address these needs, wireless and wearable stethoscopes are being developed, aided by advances in battery technology and integrated sensors. This review provides the history of the stethoscope and a classification of respiratory sounds, describes ML algorithms, and introduces new auscultation methods based on AI-assisted analysis and wireless or wearable stethoscopes.


Introduction

Auscultation has long been used for the examination of patients because it is non-invasive, and provides valuable information in real-time [1-3]. Thus, the stethoscope is a primary diagnostic device especially for respiratory diseases [4]. Abnormal respiratory sounds provide information on pathological conditions involving the lungs and bronchi. However, the importance of auscultation is declining, in part due to the development of other diagnostic methods [5] but mainly because of the largest drawback of auscultation, i.e., its inherent subjectivity. The discrimination of abnormal sounds largely depends on the experience and knowledge of the listeners; this problem is being addressed by implementing a standardized system to analyze respiratory sounds accurately. For example, lung sounds can be recorded with a digital stethoscope and then shared [6]. Artificial intelligence (AI)-assisted auscultation and digital stethoscopes that make use of machine learning (ML) algorithms are changing the clinical role of auscultation [7-24].
Another limitation of existing stethoscopes is that they cannot support remote care for patients with chronic diseases who are confined to nursing facilities or their homes, or who cannot readily access a doctor [7,24]. Auscultation requires contact between the stethoscope and the patient’s body, and thus cannot be performed remotely. The utility of non-face-to-face treatment was well demonstrated by the coronavirus disease 2019 (COVID-19) crisis [25-28]. This limitation is being addressed by recent advances in battery technology and integrated sensors, which have led to the development of wireless stethoscopes that can be worn by the patient and allow auscultation to be performed remotely [29-32].
In this review, we briefly examine the history of the stethoscope, from its development to the present, and the various respiratory sounds. We then describe ML in a step-by-step manner, including its use in analyzing respiratory sounds. New auscultation methods based on AI-assisted analysis and wireless or wearable stethoscopes are considered, and the results of recent clinical trials examining AI-based analyses of respiratory sounds are discussed.

Classification of Respiratory Sounds

Respiratory sounds are generated by airflow in the respiratory tract and may be normal or abnormal (Table 1). Normal respiratory sounds include tracheal, bronchovesicular, and vesicular sounds [33,34]. Abnormal sounds are caused by diseases of the lungs or bronchi [34,35] and can be identified according to their location, mechanism of production, characteristics (pitch, continuity, time when typically heard), and acoustic features (waveform, frequency, and duration) [36]. Crackles are short, discontinuous, explosive sounds that occur on inspiration and sometimes on expiration [3,37]. Coarse crackles are caused by gas passing through an intermittent airway opening and are a feature of secretory diseases such as bronchitis and pneumonia [38]. Fine crackles are induced by an inspiratory opening of the small airways and are associated with interstitial pneumonia, idiopathic pulmonary fibrosis (IPF), and congestive heart failure [39]. Stridor is a high-pitched, continuous sound produced by turbulent airflow through a narrowed airway of the upper respiratory tract [3]. It is usually a sign of airway obstruction and thus requires prompt intervention. Wheezes are produced in a narrowed or obstructed airway [3], are of high frequency (>100 to 5,000 Hz), and have a sinusoidal pattern of oscillation [40]. They usually occur in obstructive airway diseases such as asthma and chronic obstructive pulmonary disease (COPD) [38]. Rhonchi are caused by narrowing of the airways due to secretions and may thus disappear after coughing [3]. In patients with pleural inflammation, such as pleurisy, the visceral pleura becomes rough, and friction with the parietal pleura generates crackling sounds, i.e., a friction rub [41]. Sometimes a mixture of two or more sounds or noises is heard. Respiratory sounds may also be ambiguous, such that even an expert will have difficulty distinguishing them accurately. In these challenging situations, an AI-assisted stethoscope would be useful.

Development of Stethoscopes

The word “stethoscope” is derived from the Greek words stethos (chest) and scopos (examination). In the 5th century B.C., Hippocrates listened to chest sounds to diagnose disease (Table 2) [1,6,29-32,42-48]. In the early 1800s, before the development of the stethoscope, physical examinations included percussion and direct auscultation, with doctors placing an ear on the patient’s chest to listen to internal sounds. In 1817, the French doctor Rene Laennec invented an auscultation tool. Over time, the stethoscope gradually became a binaural device with flexible tubing and a rigid diaphragm. Throughout the 1900s, minor improvements were made to reduce the weight of the stethoscope and improve its acoustic quality. Electronic versions of the stethoscope allowed further sound amplification. Since the 2000s, with advances in battery technology, low power embedded processors, and integrated sensors, wearable and wireless digital stethoscopes are emerging; some devices are now able to record and transmit sound, which can then be automatically analyzed using AI algorithms.

Artificial Intelligence

1. Overview

The development of an AI model typically consists of four steps (although in the case of deep learning, three steps are involved), as follows (Figure 1).

1) Data preparation

A dataset appropriate for the target, i.e., the output of the model, is obtained. During preprocessing, outliers are screened out and missing values are imputed from the given data. Since more data generally results in higher accuracy and better generalization, after data preprocessing it should be confirmed whether the remaining data are sufficient for model construction.
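The screening and imputation described above can be sketched in a few lines. This is an illustrative toy example, not any specific library's API: the `prepare` helper, the median-based imputation, and the median-absolute-deviation threshold are all assumptions made here for demonstration.

```python
# Toy sketch of the data-preparation step: impute missing values, then
# screen outliers. The helper name, imputation strategy, and threshold
# are hypothetical choices for illustration only.
from statistics import median

def prepare(values, k=5.0):
    """Fill None entries with the median, then drop values whose absolute
    deviation from the median exceeds k times the median absolute deviation."""
    observed = [v for v in values if v is not None]
    fill = median(observed)
    imputed = [fill if v is None else v for v in values]
    med = median(imputed)
    mad = median(abs(v - med) for v in imputed)
    return [v for v in imputed if mad == 0 or abs(v - med) <= k * mad]

# Example: crackle durations (msec) with one missing value and one
# implausible outlier; the gap is filled and the outlier is dropped.
durations = [14.0, 15.5, None, 13.8, 15.1, 300.0]
clean = prepare(durations)
```

A median-based screen is used here because a mean/standard-deviation screen on a small dataset can be masked by the very outlier it is meant to catch.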

2) Feature extraction

After data preparation, features appropriate for the target are extracted. All available features, whose relevance is judged based on expert insight, should be considered; even seemingly uncorrelated features might improve model performance. However, there may also be redundancy among the features of interest. In such cases, dimensionality reduction in the feature domain will greatly improve the computational performance and efficiency of the model. In deep learning, this step is merged with the subsequent training and validation steps. Deep learning is designed to generate features automatically from the data, but the process leading to the model’s output is poorly understood, which is why deep learning is called a “black box”-type algorithm.
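As a minimal illustration of turning a raw waveform into a compact feature vector, the sketch below sums spectral power inside a few frequency bands. The band edges, sampling rate, and function name are hypothetical; a real pipeline would use an FFT and richer features (e.g., mel-frequency cepstral coefficients).

```python
# Illustrative sketch of feature extraction: reduce a waveform to a few
# frequency-band energies, the kind of compact feature vector a classifier
# could consume. Bands and names are hypothetical, for demonstration only.
import cmath
import math

def band_energies(signal, sample_rate, bands=((100, 1000), (1000, 4000))):
    """Naive DFT, then sum spectral power inside each frequency band."""
    n = len(signal)
    spectrum = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))) ** 2
                for k in range(n // 2)]
    freqs = [k * sample_rate / n for k in range(n // 2)]
    return [sum(p for f, p in zip(freqs, spectrum) if lo <= f < hi)
            for lo, hi in bands]

# A 400 Hz tone sampled at 8 kHz should place almost all of its energy in
# the 100-1,000 Hz band, as expected for a low-pitched sound.
sr = 8000
tone = [math.sin(2 * math.pi * 400 * t / sr) for t in range(256)]
low, high = band_energies(tone, sr)
```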

3) Training and validation

These key steps in ML and other data-driven algorithms are performed using methods such as support vector machines (SVMs), decision trees, and deep learning. From the prepared data, the algorithm trains the model by optimizing a cost function for the target. Defining an appropriate and efficient cost function is therefore an important task in model construction. After the model has been trained, it is validated using data held out from the training data. The most well-known validation method is n-fold cross-validation.
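The training-and-validation loop can be illustrated with n-fold cross-validation wrapped around a deliberately simple nearest-centroid classifier. The two-dimensional (frequency, duration) features below are synthetic stand-ins, not real auscultation data.

```python
# Illustrative sketch of training and validation: n-fold cross-validation
# around a minimal nearest-centroid classifier. All data are synthetic.
def nearest_centroid_fit(X, y):
    """Train: compute the mean feature vector (centroid) of each class."""
    centroids = {}
    for label in set(y):
        pts = [x for x, l in zip(X, y) if l == label]
        centroids[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return centroids

def predict(centroids, x):
    """Assign the class whose centroid is closest in squared distance."""
    return min(centroids,
               key=lambda l: sum((a - b) ** 2 for a, b in zip(x, centroids[l])))

def cross_validate(X, y, n_folds=4):
    """Mean accuracy over n folds, each fold held out once for validation."""
    folds = [list(range(i, len(X), n_folds)) for i in range(n_folds)]
    accs = []
    for fold in folds:
        train = [i for i in range(len(X)) if i not in fold]
        model = nearest_centroid_fit([X[i] for i in train], [y[i] for i in train])
        hits = sum(predict(model, X[i]) == y[i] for i in fold)
        accs.append(hits / len(fold))
    return sum(accs) / n_folds

# Synthetic (frequency Hz, duration msec) features: wheezes high/long,
# crackles low/short, well separated so every fold classifies perfectly.
X = [(4000, 90), (4200, 85), (3900, 100), (4100, 95),
     (350, 15), (400, 12), (300, 18), (380, 14)]
y = ["wheeze"] * 4 + ["crackle"] * 4
acc = cross_validate(X, y, n_folds=4)
```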

4) Testing

After a model with the desired performance is obtained, it is run on a test dataset, which is prepared either by holding data out from the training and validation datasets or by adding new data. A model that performs better during training and validation than during testing usually suffers from overfitting; improving the generalization of the data is one solution to this problem.
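Overfitting, mentioned above, is easy to demonstrate: a model that memorizes its training data (here, a 1-nearest-neighbor classifier) scores perfectly on the data it was trained on yet falls off on a held-out test set. All features and labels below are synthetic.

```python
# Illustrative sketch of the testing step: a memorizing model (1-NN) gets
# 100% training accuracy but only 50% test accuracy because it also
# memorized the label noise -- the signature of overfitting.
def knn1_predict(train_X, train_y, x):
    """Return the label of the single nearest training point."""
    i = min(range(len(train_X)),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(train_X[j], x)))
    return train_y[i]

def accuracy(train_X, train_y, X, y):
    return sum(knn1_predict(train_X, train_y, x) == l
               for x, l in zip(X, y)) / len(y)

# Two well-separated clusters, with the last two recordings mislabeled
# (label noise) to give the memorizer something harmful to memorize.
train_X = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1),
           (5.0, 5.0), (5.2, 4.9), (4.9, 5.1),
           (1.1, 1.0), (5.1, 5.0)]
train_y = ["normal"] * 3 + ["wheeze"] * 3 + ["wheeze", "normal"]  # noise
# Held-out test set with correct labels.
test_X = [(1.12, 1.0), (5.12, 5.0), (0.92, 1.06), (5.0, 4.95)]
test_y = ["normal", "wheeze", "normal", "wheeze"]

train_acc = accuracy(train_X, train_y, train_X, train_y)
test_acc = accuracy(train_X, train_y, test_X, test_y)
```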

2. AI-based classification of respiratory sounds

Most current AI algorithms for the identification of respiratory sounds are data-driven, i.e., trained from a given dataset. In this approach, the performance of the AI algorithms is highly dependent on the data type and preparation. Thus far, deep learning methods have performed best; however, regardless of the method, clinicians should understand how AI-based respiratory sound analysis applies to clinical practice. Studies assessing the utility of ML and deep learning to analyze respiratory sounds are summarized in Table 3.
In a previous study, we developed a deep learning algorithm for respiratory sounds [49], using a convolutional neural network (CNN) classifier with transfer learning for appropriate feature extraction. The model detected abnormal sounds among 1,918 respiratory sounds recorded in a routine clinical setting; the detection accuracy was 86.5%. Furthermore, the model was able to classify abnormal lung sounds, such as wheezes, rhonchi, and crackles, with an accuracy of 85.7%. Chen et al. [18] proposed a novel deep learning method for the classification of wheeze, crackle, and normal sounds using an optimized S-transform (OST) and deep residual networks (ResNets) to overcome the limitations of artifacts and constrained feature extraction methods. The proposed ResNet using the OST had an accuracy of 98.79%, sensitivity of 96.27%, and specificity of 100% for the classification of wheeze, crackle, and normal respiratory sounds.
An ML-based analysis using artificial neural networks (ANNs) was applied to identify four respiratory sounds: wheezes, rhonchi, and coarse and fine crackles [20]. The AI approach was more efficient than an evaluation by five physicians. In another study using ANNs, the accuracy for classifying crackle, rhonchi, and normal respiratory sounds was 85.43% [12]. The classification performance of ANNs for pediatric respiratory sounds was independently validated; detection accuracy was high for crackles and wheezes, at 95% and 93%, respectively [10]. A recent study developed a semi-supervised deep learning algorithm to identify wheezes and crackles in 284 patients with pulmonary diseases [23]. The area under the receiver operating characteristic curve obtained with the algorithm was 0.86 for wheezes and 0.74 for crackles. The study showed that a semi-supervised algorithm with an SVM enables the analysis of large datasets without the need for additional labeling of lung sounds.
Analysis of respiratory sounds using a deep learning algorithm (deep belief networks [DBNs]) successfully distinguished patients with COPD from healthy individuals [8]. The proposed DBN classifier distinguished COPD from non-COPD patients with accuracy, sensitivity, and specificity values of 93.67%, 91%, and 96.33%, respectively. The authors concluded that a deep learning-based model could aid the assessment of obstructive lung diseases, including COPD. Using a decision tree forest algorithm with additional wavelet features, Fernandez-Granero et al. [50] investigated the respiratory sounds of 16 COPD patients telemonitored at home with a respiratory sensor device. The detection accuracy of 78.0% demonstrated the potential of the proposed system for early prediction of COPD exacerbations, which would allow patients to obtain timely medical attention.
A recent study investigated the use of deep learning to recognize pulmonary diseases based on respiratory sounds [13]. Electronically recorded lung sounds were obtained from controls and patients suffering from asthma, COPD, bronchiectasis, pneumonia, and heart failure; the classification accuracy of the deep learning algorithm was 98.80%, 95.60%, 99.00%, 100%, 98.80%, and 100%, respectively. In that study, the data were initially preprocessed and used to train two deep learning network architectures, a CNN and a bidirectional long short-term memory network (BDLSTM), which attained a precision of 98.85%. Other studies have also applied deep learning models to classify pulmonary diseases based on respiratory sounds [51]. A study using the International Conference on Biomedical and Health Informatics (ICBHI) database used crackles, wheezes, and both as the basis for AI-guided diagnosis of pneumonia, lower respiratory tract infection (LRTI), upper respiratory tract infection (URTI), bronchiectasis, bronchiolitis, COPD, and asthma [52]. Respiratory sounds representing those diseases were synthesized using a variety of variational autoencoders. The results showed that unconditional generative models effectively evaluated the synthetic data. The sensitivity of the multi-layer perceptron, CNN, long short-term memory (LSTM), ResNet-50, and Efficient Net B0 models was 97%, 96%, 92%, 98%, and 96%, respectively. A hybrid model combining a CNN and LSTM was also proposed for accurate pulmonary disease classification [53]. Four different sub-datasets were generated from the ICBHI [52] and King Abdullah University Hospital [54] databases. The four datasets contained data on asthma, fibrosis, bronchitis, COPD, heart failure, heart failure+COPD, heart failure+lung fibrosis, lung fibrosis, pleural effusion, and pneumonia.
The best results were obtained with the hybrid model, which had the highest classification accuracy (99.81%) for 1,457 respiratory sounds from controls and patients with asthma, bronchiectasis, bronchiolitis, COPD, LRTI, pneumonia, and URTI. By expanding the number of training datasets, the overall classification accuracy improved by 16%.
These studies suggest that AI-based respiratory sound analysis offers an innovative approach to the diagnosis of respiratory diseases. Although limitations remain, such as the interpretation of complex sounds, dependency on the data, and the presence of undesired noise, AI-based algorithms are expected to play a key role in supporting clinicians in the evaluation of respiratory diseases. Additionally, noise reduction or removal and data standardization are needed to enhance the prediction performance of AI algorithms.

Clinical Trials Using Digital Stethoscopes and Artificial Intelligence

Many ongoing studies are investigating the use of digital stethoscopes in combination with AI, beginning with the classification of respiratory sounds as normal or abnormal. AI algorithms are then developed to analyze the recorded sounds for the detection of COPD, asthma, IPF, and other lung diseases (e.g., using the StethoMe electronic stethoscope, Poznań, Poland). Other clinical trials are examining the use of AI to screen COVID-19 patients based on their voice, breathing, and cough, and testing the feasibility of smart stethoscopes for remote telemedicine and for monitoring patients with respiratory diseases (by checking respiratory and cough sounds) (Table 4).
With advances in mechanical technology and AI, smart stethoscopes able to analyze and classify respiratory sounds will soon be implemented in many clinical fields. This may lead to renewed awareness of the clinical importance of auscultation, which has been underestimated since the advent of image technologies such as computed tomography and sonography.

New Devices

Wireless and wearable stethoscopes, which are gradually becoming commercially available, are particularly useful for treating patients who must be isolated, such as those with COVID-19, in whom auscultation cannot be performed with a regular stethoscope. For example, a wireless stethoscope with a Bluetooth connection system can be used to monitor hospitalized patients with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pneumonia [55]. More advanced wearable stethoscopes may improve the monitoring of patients with pulmonary diseases. Zhang et al. [55] developed a wearable, Bluetooth 5.0 Low Energy (LE)-enabled, multimodal sensor patch that combines several modalities, including a stethoscope, ambient noise sensing, electrocardiography, and impedance pneumography, thus providing multiple types of clinical data simultaneously. Lee et al. [56] developed a soft wearable stethoscope with embedded ML and evaluated its ability to classify respiratory sounds, demonstrating the feasibility of wearable smart stethoscopes. Joshitha et al. [57] designed a Wi-Fi-enabled contactless electronic stethoscope. The prototype, which contained a receiver module and circuit diagram, demonstrated the potential of this novel platform for use in smart stethoscopes. Recently, a smart stethoscope with contactless radar measurement ability has been introduced as an advanced method for the acquisition of respiratory sounds in many clinical settings. An important point in developing a new stethoscope is to reduce the motion artifact caused by the lack of adhesion between the rigid stethoscope and the patient’s dry skin. Because such artifacts can lead to inaccurate interpretation of breath sounds, there are attempts to reduce them by making the stethoscope from bendable materials or attaching it to the body with bandages (Table 5) [29,55-58].

Conclusion

With the development of digital stethoscopes and sound transmission technology, it has become possible to record and share respiratory sounds and accurately distinguish them using ML algorithms. Advances in battery technology, embedded processors with low power consumption, integrated sensors, Wi-Fi, and radar technology are contributing to the development of stethoscopes and other wireless and wearable medical devices. In the case of stethoscopes, these significant modifications have overcome the limitations of existing models. Importantly, they allow the identification of respiratory diseases without a specialist, thereby enhancing the clinical effectiveness of auscultation. Accurate auscultation can result in more rapid diagnosis and earlier treatment, in addition to reducing the burden of radiation exposure and high examination costs by avoiding unnecessary imaging tests. As the use of telemedicine has expanded, driven by its successful implementation during the COVID-19 pandemic, the monitoring of chronic respiratory diseases in hard-to-reach patients will be vastly improved.
With the development of wearable stethoscopes and the transmission of breath sounds via Bluetooth, there have been attempts to continuously monitor the breath sounds of critically ill patients in the intensive care unit. In a study comparing continuous auscultation using a wearable stethoscope with intermittent auscultation using a conventional stethoscope for wheeze detection, extending the duration of auscultation with a wearable stethoscope, even in a noisy environment, showed performance comparable to that of conventional auscultation [59]. Another study showed that the quantification of abnormal respiratory sounds could predict complications after extubation [60]. In addition, one study attempted to detect the position of a double-lumen tube from the intensity of respiratory sounds in patients undergoing lung surgery [61].
However, smart stethoscopes must still overcome several problems. Coexisting noises make it difficult to precisely discriminate respiratory sounds, such that noise filtering is an important albeit challenging task. In addition, two or more sounds may be mixed and heard at the same time, which complicates their differentiation. Further refinement of deep learning algorithms may improve noise filtering, including the ability to distinguish the characteristics of coexisting sounds.
Point-of-care ultrasound (POCUS), which has been used for the diagnosis and treatment of various diseases since the 1990s, has also seen rapidly increasing use with the introduction of portable devices, and is useful for the primary care of patients alongside the stethoscope [62]. Like the stethoscope, POCUS is non-invasive and can be performed at the bedside without moving the patient to an examination room. The combination of image-based POCUS and the sound-based stethoscope serves as an effective diagnostic tool in the clinical field [63-65].
In conclusion, the stethoscope is expected to continue developing through steady research and to become a more widely used medical device through the application of AI-based analysis and the development of wearable devices. These breakthroughs have the potential to pave the way for a future defined by personalized digital healthcare.

Notes

Authors’ Contributions

Conceptualization: Ha T, Chung C. Methodology: Lee SI. Investigation: Lee S. Writing - original draft preparation: Kim Y, Woo SD, Chung C. Writing - review and editing: Hyon YK. Approval of final manuscript: all authors.

Conflicts of Interest

No potential conflict of interest relevant to this article was reported.

Funding

This study was supported by a 2021-Grant from the Korean Academy of Tuberculosis and Respiratory Diseases, National Research Foundation of Korea (2022R1F1A1076515), and National Institute for Mathematical Sciences (NIMS) grant funded by the Korean government (No. B23910000).

Fig. 1.
Machine learning processes. (A) General machine learning process. (B) Deep learning (DL) process.
Table 1.
Classification of normal and abnormal respiratory sounds
Origin Mechanism Characteristics Acoustics Related diseases
Normal Tracheal sound Proximal/Upper airway Turbulent flow impinging on airway walls High-pitched Heard on inspiration and expiration White noise Frequency 100-5,000 Hz
Bronchovesicular sound Turbulent flow, vortices Intermediate-pitched Heard on inspiration and expiration Strong expiratory component Intermediate between tracheal and vesicular
Vesicular Distal/Lower airway Turbulent flow, vortices Low-pitched Heard on inspiration > early expiration Low-pass- filtered noise Frequency 100-1,000 Hz
Abnormal Stridor Proximal/Upper airway Airway narrowing Continuous High-pitched Heard on inspiration or expiration Sinusoid Frequency > 500 Hz Vocal cord palsy
Unaffected by cough Duration > 250 msec Post extubation laryngeal edema
Tracheomalacia
Wheeze Heard on expiration > inspiration Frequency > 100-5,000 Hz Asthma, COPD
Duration >80 msec Endobronchial tumor
Foreign body in bronchus
Rhonchus Rupture of fluid films of secretion, Airway vibration Low-pitched Frequency about 150 Hz Bronchitis
Affected by cough Duration >80 msec Pneumonia
Coarse crackle Intermittent airway opening (related to secretion) Discontinuous Low-pitched Begin at early inspiration, extend into expiration Rapidly dampened wave deflection Lower frequency (about 350 Hz) Pneumonia
Affected by cough Longer duration (about 15 msec) Pulmonary edema
Fine crackle Opening of small airway closed by surface forces during the previous expiration (unrelated to secretion) High-pitched Heard on late inspiration Higher frequency (about 650 Hz) Interstitial lung disease
Distal/Lower airway Unaffected by cough Shorter duration (about 5 msec) Congestive heart failure
Pleural friction rub Pleura Movement of inflamed and roughened pleural surfaces Continuous Low-pitched Heard throughout inspiration and expiration Rhythmic succession of short sounds Frequency <350 Hz Pleurisy
Unaffected by cough Duration >15 msec Pericarditis
Pleural tumor

COPD: chronic obstructive pulmonary disease.

Table 2.
History of stethoscope development
Table 3.
Recent studies performing AI-based analyses of respiratory sounds
Study Database (accessibility) Datasets Respiratory sounds Methods Results
Kim et al. (2021) [49] Chungnam National University Hospital, Korea (private) 297 Crackles, 298 wheezes, 101 rhonchi, and 1,222 normal sounds from 871 patients Crackle, wheeze, rhonchi, and normal sounds CNN classifier with transfer learning Accuracy: 85.7%
AUC: 0.92
Chen et al. (2019) [18] ICBHI dataset (public) 136 Crackles, 44 wheezes, and 309 normal sounds Crackle, wheeze, and normal sounds OST and ResNets Accuracy: 98.79%
Sensitivity: 96.27%
Specificity: 100%
Grzywalski et al. (2019) [20] Karol Jonscher University Hospital in Poznan, Poland (private) 522 Respiratory sounds from 50 pediatric patients Wheezes, rhonchi, and coarse and fine crackles ANNs F1-score: higher than that of pediatricians (8.4%)
Meng et al. (2020) [12] China-Japan Friendship Hospital, China (private) 240 Crackles, 260 rhonchi, and 205 normal sounds from 130 patients Crackle, rhonchi, and normal sounds ANNs Accuracy: 85.43%
Kevat et al. (2020) [10] Monash Children’s Hospital, Melbourne, Australia (private) 192 Respiratory sounds from 25 pediatric patients Crackle and wheeze ANNs True-positive rate
Crackles: Clinicloud, 0.95; Littman, 0.75
Wheeze: Clinicloud, 0.93; Littman, 0.8
Chamberlain et al. (2016) [23] Four separate clinical sites in Maharashtra, India (private) 890 Respiratory sounds from 284 patients Crackle and wheeze SVM AUC
Crackle: 0.74
Wheeze: 0.86
Altan et al. (2020) [8] Respiratory Database@TR (public) 600 Respiratory sounds from 50 patients Wheeze (COPD and non-COPD patients) DBN classifier Accuracy: 93.67%
Sensitivity: 91%
Specificity: 96.33%
Fernandez-Granero et al. (2018) [50] Puerta del Mar University Hospital in Cadiz, Spain (private) 16 Patients with COPD Wheeze (acute exacerbation of COPD) DTF classifier with additional wavelet features Accuracy: 78.0%
Fraiwan et al. (2022) [13] ICBHI dataset, KAUH dataset (public) 1,483 Respiratory sounds from 213 patients Asthma, COPD, bronchiectasis, pneumonia, heart failure and normal (control) patients BDLSTM Highest average accuracy: 99.62%
Total agreement: 98.26%
Sensitivity: 98.43%
Specificity: 99.69%
Saldanha et al. (2022) [51] ICBHI dataset (public) 6,898 Respiratory sounds from 126 patients Healthy, pneumonia, LRTI, URTI, bronchiectasis, bronchiolitis, COPD, asthma MLP, CNN, LSTM, ResNet-50, Efficient Net B0 Sensitivity:
97% (MLP)
96% (CNN)
92% (LSTM)
98% (ResNet-50)
96% (Efficient Net B0)
Alqudah et al. (2022) [53] ICBHI dataset, KAUH dataset (public) 1,457 Respiratory sounds Normal, asthma, bronchiectasis, bronchiolitis, COPD, LRTI, pneumonia, and URTI CNN, LSTM, and hybrid model (CNN- LSTM) Accuracy:
99.62% (CNN)
99.25% (LSTM)
99.81% (CNN- LSTM)

CNN: convolutional neural network; AUC: area under the curve; ICBHI: International Conference on Biomedical and Health Informatics; OST: optimized S-transform; ResNet: deep residual network; ANN: artificial neural networks; SVM: support vector machine; COPD: chronic obstructive pulmonary disease; DBN: deep belief network; DTF: decision tree forest; KAUH: King Abdullah University Hospital; BDLSTM: bidirectional long short-term memory network; LRTI: lower respiratory tract infection; URTI: upper respiratory tract infection; MLP: multi-layer perceptron; LSTM: long short-term memory.

Table 4.
Ongoing clinical trials using digital stethoscopes and artificial intelligence
Trial Stethoscope NCT number
1 Evaluating the Feasibility of Artificial Intelligence Algorithms in Clinical Settings for the Classification of Normal, Wheeze and Crackle Sounds Acquired from a Digital Stethoscope Smart stethoscope NCT05268263
2 Clinical Evaluation of AI-aided Auscultation with Automatic Classification of Respiratory System Sounds (AIR) StethoMe electronic stethoscope NCT04208360
3 AI Evaluation of COVID-19 Sounds (AI-EChOS) NA NCT05115097
4 Deep Learning Diagnostic and Risk-Stratification for IPF and COPD (Deep Breath) Digital stethoscope NCT05318599
5 Acoustic Cough Monitoring for the Management of Patients with Known Respiratory Disease Hyfe Cough Tracker (Hyfe) App in smartphone NCT05042063
6 Telemedicine System and Intelligent Monitoring System Construction of Pediatric Asthma Based on the Electronic Stethoscope Electronic stethoscope NCT05659225
7 Blue Protocol and Eko Artificial Intelligence Are Best (BEA-BEST) Eko digital stethoscope NCT05144633
8 Personalized Digital Health and Artificial Intelligence in Childhood Asthma (Asthmoscope) Digital Stethoscope NCT04528342

NCT: National Clinical Trial; NA: not available.

Table 5.
Recently developed wireless and wearable stethoscopes

REFERENCES

1. Roguin A. Rene Theophile Hyacinthe Laennec (1781-1826): the man behind the stethoscope. Clin Med Res 2006;4:230-5.
crossref pmid pmc
2. Bloch H. The inventor of the stethoscope: René Laennec. J Fam Pract 1993;37:191.
pmid
3. Bohadana A, Izbicki G, Kraman SS. Fundamentals of lung auscultation. N Engl J Med 2014;370:744-51.
crossref pmid
4. Coucke PA. Laennec versus Forbes: tied for the score! How technology helps us interpret auscultation. Rev Med Liege 2019;74:543-51.

5. Arts L, Lim EH, van de Ven PM, Heunks L, Tuinman PR. The diagnostic accuracy of lung auscultation in adult patients with acute pulmonary pathologies: a meta-analysis. Sci Rep 2020;10:7347.
crossref pmid pmc pdf
6. Swarup S, Makaryus AN. Digital stethoscope: technology update. Med Devices (Auckl) 2018;11:29-36.
crossref pmid pmc pdf
7. Fernandez-Granero MA, Sanchez-Morillo D, Leon-Jimenez A. Computerised analysis of telemonitored respiratory sounds for predicting acute exacerbations of COPD. Sensors (Basel) 2015;15:26978-96.
crossref pmid pmc
8. Altan G, Kutlu Y, Allahverdi N. Deep learning on computerized analysis of chronic obstructive pulmonary disease. IEEE J Biomed Health Inform 2020;24:1344-50.
crossref
9. Mondal A, Banerjee P, Tang H. A novel feature extraction technique for pulmonary sound analysis based on EMD. Comput Methods Programs Biomed 2018;159:199-209.
crossref pmid
10. Kevat A, Kalirajah A, Roseby R. Artificial intelligence accuracy in detecting pathological breath sounds in children using digital stethoscopes. Respir Res 2020;21:253.
crossref pmid pmc pdf
11. Jung SY, Liao CH, Wu YS, Yuan SM, Sun CT. Efficiently classifying lung sounds through depthwise separable CNN models with fused STFT and MFCC features. Diagnostics (Basel) 2021;11:732.
crossref pmid pmc
12. Meng F, Shi Y, Wang N, Cai M, Luo Z. Detection of respiratory sounds based on wavelet coefficients and machine learning. IEEE Access 2020;8:155710-20.
crossref
13. Fraiwan M, Fraiwan L, Alkhodari M, Hassanin O. Recognition of pulmonary diseases from lung sounds using convolutional neural networks and long short-term memory. J Ambient Intell Humaniz Comput 2022;13:4759-71.
crossref pmid pmc pdf
14. Aras S, Ozturk M, Gangal A. Automatic detection of the respiratory cycle from recorded, single-channel sounds from lungs. Turk J Electr Eng Comput Sci 2018;26:11-22.
crossref
15. Gurung A, Scrafford CG, Tielsch JM, Levine OS, Checkley W. Computerized lung sound analysis as diagnostic aid for the detection of abnormal lung sounds: a systematic review and meta-analysis. Respir Med 2011;105:1396-403.
crossref pmid pmc
16. Palaniappan R, Sundaraj K, Sundaraj S. Artificial intelligence techniques used in respiratory sound analysis: a systematic review. Biomed Tech (Berl) 2014;59:7-18.
crossref pmid
17. Altan G, Kutlu Y, Gokcen A. Chronic obstructive pulmonary disease severity analysis using deep learning onmulti-channel lung sounds. Turk J Electr Eng Comput Sci 2020;28:2979-96.
18. Chen H, Yuan X, Pei Z, Li M, Li J. Triple-classification of respiratory sounds using optimized s-transform and deep residual networks. IEEE Access 2019;7:32845-52.
19. Hsu FS, Huang SR, Huang CW, Huang CJ, Cheng YR, Chen CC, et al. Benchmarking of eight recurrent neural network variants for breath phase and adventitious sound detection on a self-developed open-access lung sound database: HF_Lung_V1. PLoS One 2021;16:e0254134.
20. Grzywalski T, Piecuch M, Szajek M, Breborowicz A, Hafke-Dys H, Kocinski J, et al. Practical implementation of artificial intelligence algorithms in pulmonary auscultation examination. Eur J Pediatr 2019;178:883-90.
21. Aykanat M, Kilic O, Kurt B, Saryal S. Classification of lung sounds using convolutional neural networks. EURASIP J Image Video Process 2017;2017:65.
22. Altan G, Kutlu Y, Pekmezci AO, Nural S. Deep learning with 3D-second order difference plot on respiratory sounds. Biomed Signal Process Control 2018;45:58-69.
23. Chamberlain D, Kodgule R, Ganelin D, Miglani V, Fletcher RR. Application of semi-supervised deep learning to lung sound analysis. Annu Int Conf IEEE Eng Med Biol Soc 2016;2016:804-7.
24. Murphy RL, Vyshedskiy A, Power-Charnitsky VA, Bana DS, Marinelli PM, Wong-Tse A, et al. Automated lung sound analysis in patients with pneumonia. Respir Care 2004;49:1490-7.
25. Arun Babu T, Sharmila V. Auscultating with personal protective equipment (PPE) during COVID-19 pandemic: challenges and solutions. Eur J Obstet Gynecol Reprod Biol 2021;256:509-10.
26. Vasudevan RS, Horiuchi Y, Torriani FJ, Cotter B, Maisel SM, Dadwal SS, et al. Persistent value of the stethoscope in the age of COVID-19. Am J Med 2020;133:1143-50.
27. White SJ. Auscultation without contamination: a solution for stethoscope use with personal protective equipment. Ann Emerg Med 2015;65:235-6.
28. Mun SK. Non-face-to-face treatment in Korea: suggestions for essential conditions. Korean J Med 2023;98:1-3.
29. Klum M, Leib F, Oberschelp C, Martens D, Pielmus AG, Tigges T, et al. Wearable multimodal stethoscope patch for wireless biosignal acquisition and long-term auscultation. Annu Int Conf IEEE Eng Med Biol Soc 2019;2019:5781-5.
30. Liu Y, Norton JJ, Qazi R, Zou Z, Ammann KR, Liu H, et al. Epidermal mechano-acoustic sensing electronics for cardiovascular diagnostics and human-machine interfaces. Sci Adv 2016;2:e1601185.
31. Yilmaz G, Rapin M, Pessoa D, Rocha BM, de Sousa AM, Rusconi R, et al. A wearable stethoscope for long-term ambulatory respiratory health monitoring. Sensors (Basel) 2020;20:5124.
32. Klum M, Urban M, Tigges T, Pielmus AG, Feldheiser A, Schmitt T, et al. Wearable cardiorespiratory monitoring employing a multimodal digital patch stethoscope: estimation of ECG, PEP, LVET and respiration using a 55 mm single-lead ECG and phonocardiogram. Sensors (Basel) 2020;20:2033.
33. Bardou D, Zhang K, Ahmad SM. Lung sounds classification using convolutional neural networks. Artif Intell Med 2018;88:58-69.
34. Zimmerman B, Williams D. Lung sounds. In: StatPearls. Treasure Island: StatPearls Publishing; 2022 [cited 2023 Aug 21]. Available from: https://www.ncbi.nlm.nih.gov/books/NBK537253.
35. Reichert S, Gass R, Brandt C, Andres E. Analysis of respiratory sounds: state of the art. Clin Med Circ Respirat Pulm Med 2008;2:45-58.
36. Sengupta N, Sahidullah M, Saha G. Lung sound classification using cepstral-based statistical features. Comput Biol Med 2016;75:118-29.
37. Serbes G, Sakar CO, Kahya YP, Aydin N. Feature extraction using time-frequency/scale analysis and ensemble of feature sets for crackle detection. Annu Int Conf IEEE Eng Med Biol Soc 2011;2011:3314-7.
38. Faustino P, Oliveira J, Coimbra M. Crackle and wheeze detection in lung sound signals using convolutional neural networks. Annu Int Conf IEEE Eng Med Biol Soc 2021;2021:345-8.
39. Vyshedskiy A, Alhashem RM, Paciej R, Ebril M, Rudman I, Fredberg JJ, et al. Mechanism of inspiratory and expiratory crackles. Chest 2009;135:156-64.
40. Gavriely N, Shee TR, Cugell DW, Grotberg JB. Flutter in flow-limited collapsible tubes: a mechanism for generation of wheezes. J Appl Physiol (1985) 1989;66:2251-61.
41. Pasterkamp H, Brand PL, Everard M, Garcia-Marcos L, Melbye H, Priftis KN. Towards the standardisation of lung sound nomenclature. Eur Respir J 2016;47:724-32.
42. Cheng TO. Hippocrates and cardiology. Am Heart J 2001;141:173-83.
43. Hajar R. The art of listening. Heart Views 2012;13:24-5.
44. Bishop PJ. Evolution of the stethoscope. J R Soc Med 1980;73:448-56.
45. Barrett PM, Topol EJ. To truly look inside. Lancet 2016;387:1268-9.
46. Permin H, Norn S. The stethoscope: a 200th anniversary. Dan Medicinhist Arbog 2016;44:85-100.
47. Choudry M, Stead TS, Mangal RK, Ganti L. The history and evolution of the stethoscope. Cureus 2022;14:e28171.
48. Lee SH, Kim YS, Yeo WH. Advances in microsensors and wearable bioelectronics for digital stethoscopes in health monitoring and disease diagnosis. Adv Healthc Mater 2021;10:e2101400.
49. Kim Y, Hyon Y, Jung SS, Lee S, Yoo G, Chung C, et al. Respiratory sound classification for crackles, wheezes, and rhonchi in the clinical field using deep learning. Sci Rep 2021;11:17186.
50. Fernandez-Granero MA, Sanchez-Morillo D, Leon-Jimenez A. An artificial intelligence approach to early predict symptom-based exacerbations of COPD. Biotechnol Biotechnol Equip 2018;32:778-84.
51. Saldanha J, Chakraborty S, Patil S, Kotecha K, Kumar S, Nayyar A. Data augmentation using Variational Autoencoders for improvement of respiratory disease classification. PLoS One 2022;17:e0266467.
52. Rocha BM, Filos D, Mendes L, Serbes G, Ulukaya S, Kahya YP, et al. An open access database for the evaluation of respiratory sound classification algorithms. Physiol Meas 2019;40:035001.
53. Alqudah AM, Qazan S, Obeidat YM. Deep learning models for detecting respiratory pathologies from raw lung auscultation sounds. Soft Comput 2022;26:13405-29.
54. Fraiwan M, Fraiwan L, Khassawneh B, Ibnian A. A dataset of lung sounds recorded from the chest wall using an electronic stethoscope. Data Brief 2021;35:106913.
55. Zhang P, Wang B, Liu Y, Fan M, Ji Y, Xu H, et al. Lung auscultation of hospitalized patients with SARS-CoV-2 pneumonia via a wireless stethoscope. Int J Med Sci 2021;18:1415-22.
56. Lee SH, Kim YS, Yeo MK, Mahmood M, Zavanelli N, Chung C, et al. Fully portable continuous real-time auscultation with a soft wearable stethoscope designed for automated disease diagnosis. Sci Adv 2022;8:eabo5867.
57. Joshitha C, Kanakaraja P, Rooban S, Prasad BSDR, Rao BG, Teja SVS. Design and implementation of Wi-Fi enabled contactless electronic stethoscope. In: 2022 International Conference on Sustainable Computing and Data Communication Systems (ICSCDS); 2022 Apr 7-9; Erode, India. IEEE; 2022.
58. Lenz I, Rong Y, Bliss D. Contactless stethoscope enabled by radar technology. Bioengineering (Basel) 2023;10:169.
59. Au YK, Muqeem T, Fauveau VJ, Cardenas JA, Geris BS, Hassen GW, et al. Continuous monitoring versus intermittent auscultation of wheezes in patients presenting with acute respiratory distress. J Emerg Med 2022;63:582-91.
60. Kikutani K, Ohshimo S, Sadamori T, Ohki S, Giga H, Ishii J, et al. Quantification of respiratory sounds by a continuous monitoring system can be used to predict complications after extubation: a pilot study. J Clin Monit Comput 2023;37:237-48.
61. Wang H, Toker A, Abbas G, Wang LY. Application of a continuous respiratory sound monitoring system in thoracic surgery. J Biomed Res 2021;35:491-4.
62. Sonko ML, Arnold TC, Kuznetsov IA. Machine learning in point of care ultrasound. POCUS J 2022;7(Kidney):78-87.
63. Chen H, Wu L, Dou Q, Qin J, Li S, Cheng JZ, et al. Ultrasound standard plane detection using a composite neural network framework. IEEE Trans Cybern 2017;47:1576-86.
64. Knackstedt C, Bekkers SC, Schummers G, Schreckenberg M, Muraru D, Badano LP, et al. Fully automated versus standard tracking of left ventricular ejection fraction and longitudinal strain: the FAST-EFs Multicenter Study. J Am Coll Cardiol 2015;66:1456-66.
65. Baloescu C, Toporek G, Kim S, McNamara K, Liu R, Shaw MM, et al. Automated lung ultrasound B-line assessment using a deep learning algorithm. IEEE Trans Ultrason Ferroelectr Freq Control 2020;67:2312-20.
ORCID iDs

Yoonjoo Kim: https://orcid.org/0000-0002-9028-0872
Taeyoung Ha: https://orcid.org/0000-0002-9440-1918
Chaeuk Chung: https://orcid.org/0000-0002-3978-0484
Funding Information

National Research Foundation of Korea (https://doi.org/10.13039/501100003725): 2022R1F1A1076515
National Institute for Mathematical Sciences: B23910000

Copyright © 2024 by The Korean Academy of Tuberculosis and Respiratory Diseases. All rights reserved.