Article

Automatic Bowel Motility Evaluation Technique for Noncontact Sound Recordings

1 Graduate School of Advanced Technology and Science, Tokushima University, Tokushima 770-8506, Japan
2 Graduate School of Technology, Industrial and Social Sciences, Tokushima University, Tokushima 770-8506, Japan
* Authors to whom correspondence should be addressed.
Appl. Sci. 2018, 8(6), 999; https://doi.org/10.3390/app8060999
Submission received: 9 May 2018 / Revised: 8 June 2018 / Accepted: 13 June 2018 / Published: 19 June 2018
(This article belongs to the Special Issue Modelling, Simulation and Data Analysis in Acoustical Problems)

Abstract: Information on bowel motility can be obtained via magnetic resonance imaging (MRI) and X-ray imaging. However, these approaches require expensive medical instruments and are unsuitable for frequent monitoring. Bowel sounds (BS) can be conveniently obtained using electronic stethoscopes and have recently been employed for the evaluation of bowel motility. More recently, our group proposed a novel method to evaluate bowel motility on the basis of BS acquired using a noncontact microphone. However, that method required BS to be detected manually in the sound recordings, and manual segmentation is inconvenient and time consuming. To address this issue, we herein propose a new method to automatically evaluate bowel motility from noncontact sound recordings. In simulation experiments using the sound recordings obtained from 20 human participants, we showed that the proposed method achieves an accuracy of approximately 90% in automatic bowel sound detection when power-normalized cepstral coefficients are used as acoustic-feature inputs to an artificial neural network. Furthermore, we showed that bowel motility can be evaluated on the basis of three acoustic features in the time domain extracted by our method: BS per minute, signal-to-noise ratio, and sound-to-sound interval. The proposed method has the potential to contribute towards the development of noncontact evaluation methods for bowel motility.

1. Introduction

A decrease in or loss of bowel motility is a problem that seriously affects patients' quality of life (QOL) and daily eating habits; an example is functional gastrointestinal disorder (FGID), in which patients experience bloating and pain when bowel motility is impaired due to stress or other factors. Such bowel disorders are diagnosed by evaluating bowel motility. Bowel motility is currently measured using X-ray imaging or endoscopy; however, these methods require complex testing equipment and place immense mental, physical, and financial burdens on patients, making them unsuitable for repeated monitoring.
In recent years, acoustic features obtained from bowel sounds (BS) have been used to evaluate bowel motility. BS are produced when peristaltic movement transports gas and digestive contents through the digestive tract [1]. BS can be easily recorded by applying an electronic stethoscope to the surface of the body, and methods have been developed for evaluating bowel motility by automatically extracting BS from audio data recorded in this way [2,3,4,5,6,7]. In quiet conditions, BS can be perceived at a slight distance without an electronic stethoscope. Accordingly, our recent research demonstrated that bowel motility can be evaluated from BS even when the data are acquired with a noncontact microphone, in the same manner as with an electronic stethoscope [8]. However, in that study, BS had to be manually extracted from the audio recordings, and a large amount of time was spent carefully labeling the sounds. The sound pressure of BS recorded with noncontact microphones is lower than that of BS recorded with electronic stethoscopes placed directly on the body surface, and sounds other than BS may be mixed in at higher relative volumes. A BS extraction system that is robust against such extraneous noise must therefore be developed to reduce the time- and labor-intensive work of BS labeling.
To resolve these issues, this study proposes a new system for evaluating bowel motility on the basis of results obtained by automatically extracting BS from the audio data recorded with a noncontact microphone. The proposed method is primarily made up of the following four steps: (1) segment detection using the short-term energy (STE) method; (2) automatic extraction of two acoustic features—mel-frequency cepstral coefficients (MFCC) [9,10] and power-normalized cepstral coefficients (PNCC) [11,12,13,14]—from segments; (3) automatic classification of segments as BS/non-BS based on an artificial neural network (ANN); and (4) evaluation of bowel motility on the basis of the acoustic features in the time domain of the BS that were automatically extracted. On the basis of audio data recorded from 20 human participants before and after they consumed carbonated water, we verified (i) the validity of automatic BS extraction by the proposed method and (ii) the validity of bowel motility evaluation based on acoustic features in the time domain.

2. Materials and Methods

2.1. Subject Database

This study was conducted with the approval of the research ethics committee of the Institute of Technology and Science at Tokushima University in Japan. A carbonated water tolerance test was performed on 20 male participants (age: 22.9 ± 3.4 years; body mass index (BMI): 22.7 ± 3.8) who had consented to the research content and their participation. The test was conducted after the participants had fasted for 12 or more hours, over a 25-min period comprising a 10-min rest before and a 15-min rest after consuming carbonated water. During the test, sound data were recorded using a noncontact microphone (NT55, RODE), an electronic stethoscope (E-Scope2, Cardionics), and a multitrack recorder (R16, ZOOM). The primary frequency components of BS have generally been reported to lie between 100 Hz and 500 Hz [15]. Based on these reports, sound data were stored at a sampling frequency of 4000 Hz and a digital resolution of 16 bits, and filtered by a third-order Butterworth bandpass filter with cutoff frequencies of 100 and 1500 Hz. The participants were in a supine position during testing, with the electronic stethoscope positioned 9 cm to the right of the navel and the microphone 20 cm above the navel [8].
BS present in the sound data obtained using the noncontact microphone were also present in the sound data obtained using the electronic stethoscope. Based on this, as in our previous studies, we used audio playback software to listen carefully to both types of sound recordings, and classified as a BS episode any episode that was 20 ms or more in duration and could be distinguished by the ear at the same time position in both recordings [7].
For the analysis, we divided the sound data into sub-segments with a window range of 256 samples and a shift range of 64 samples. The STE method was used to calculate the power of each window range, making it possible to detect sub-segments above a certain signal-to-noise ratio (SNR). SNR, as used in this study, is defined as follows:
SNR = 10 log10(P_S / P_N)
Here, P_S represents the signal power and P_N represents the noise power. P_N is a time-averaged value calculated from a one-second interval of silence identified during the abovementioned listening process. Sub-segments detected successively by the STE method are treated as a single segment, also called a sound episode (SE). If a detected segment corresponds to a BS episode, it is defined as a BS segment; otherwise, it is defined as a non-BS segment.
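The segmentation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function and parameter names are our own, while the window size (256 samples) and shift size (64 samples) follow the text.

```python
import numpy as np

def ste_segments(x, noise_power, win=256, hop=64, snr_db=0.0):
    """Detect sub-segments whose short-term energy exceeds an SNR
    threshold (dB) relative to the noise power P_N, then merge runs of
    consecutive detected sub-segments into single segments (SEs)."""
    n_windows = 1 + (len(x) - win) // hop
    flags = []
    for i in range(n_windows):
        frame = x[i * hop : i * hop + win]
        p_s = max(np.mean(frame ** 2), 1e-12)        # short-term energy
        snr = 10.0 * np.log10(p_s / noise_power)     # SNR = 10 log10(P_S / P_N)
        flags.append(snr >= snr_db)
    segments, start = [], None
    for i, f in enumerate(flags):
        if f and start is None:
            start = i                                # a run of sound begins
        elif not f and start is not None:
            segments.append((start * hop, (i - 1) * hop + win))
            start = None
    if start is not None:                            # run reaches the end
        segments.append((start * hop, (n_windows - 1) * hop + win))
    return segments
```

Each returned pair gives the sample range of one sound episode; episodes coinciding with a labeled BS episode would then be treated as BS segments, and the rest as non-BS segments.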

2.2. Automatic BS Extraction on the Basis of Acoustic Features

The acoustic feature presented to the ANN is either MFCC or PNCC. MFCC is widely used in fields such as speech recognition and the analysis of biological sounds such as lung or heart sounds [9,16,17,18]. MFCC is calculated by applying a discrete cosine transformation to the log output of triangular filter banks evenly spaced along the mel scale, a logarithmic frequency axis that approximates the human auditory frequency response. PNCC is a feature developed to improve the robustness of speech recognition systems in noisy environments [11,12,13,14]; it modifies the MFCC computation to model human auditory physiology more closely. Because BS captured using noncontact microphones are generally low in volume and have degraded SNR, PNCC can be expected to be effective here. PNCC differs from MFCC primarily in the following three ways. First, instead of the triangular filter banks used in MFCC, PNCC uses gamma-tone filter banks based on an equivalent rectangular bandwidth to imitate the workings of the cochlea. Second, it applies bias subtraction based on the ratio of the arithmetic mean to the geometric mean (AM-to-GM ratio) to the intermediate power signal, which is not done in the MFCC calculation. Third, it replaces the logarithmic nonlinearity used in MFCC with a power nonlinearity. Owing to these differences, PNCC is expected to provide excellent resistance to noise. For BS extraction in this work, an SE is divided into frames with a frame size of 200 samples and a shift size of 100 samples. Considering the dimensionality commonly used in speech recognition, we use 13-dimension MFCC and PNCC obtained from 24-channel filter banks, averaged over all the frames in each episode.
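As a rough sketch of the MFCC pipeline just described, the following is an illustrative reconstruction rather than the authors' implementation; the 512-point FFT and Hamming window are our own assumptions, while the 4000 Hz sampling rate, 24-channel filter bank, and 13 coefficients follow the text. PNCC would replace the triangular filters with gamma-tone filters, add AM-to-GM bias subtraction, and use a power-law rather than logarithmic nonlinearity.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(frame, fs=4000, n_filters=24, n_ceps=13):
    """13-dimension MFCC of one frame via a 24-channel triangular mel
    filter bank; episode features are the average over all frames."""
    n_fft = 512
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame)), n_fft)) ** 2
    # Filter center frequencies evenly spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for j in range(1, n_filters + 1):
        lo, c, hi = bins[j - 1], bins[j], bins[j + 1]
        for k in range(lo, c):                       # rising edge of triangle
            fbank[j - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):                       # falling edge of triangle
            fbank[j - 1, k] = (hi - k) / max(hi - c, 1)
    energies = np.maximum(fbank @ spectrum, 1e-10)
    # Log nonlinearity, then DCT decorrelation; keep the first 13 coefficients
    return dct(np.log(energies), type=2, norm='ortho')[:n_ceps]
```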
On the basis of these acoustic features, an artificial neural network (ANN) is used as a classifier to categorize the segments detected with the STE method into BS and non-BS segments. The ANN is a hierarchical neural network with three layers, namely the input, intermediate, and output layers, containing 13, 25, and 1 units, respectively. The output function of the intermediate-layer units is a hyperbolic tangent function, and the transfer function of the output-layer unit is a linear function. As the target signal, a value of 1 is assigned to segments containing BS, and 0 to non-BS segments. The ANN is trained with an error back-propagation algorithm based on the Levenberg–Marquardt method [19,20]. By computing sensitivity and specificity from the trained ANN output, a receiver operating characteristic (ROC) curve can be drawn. Through analysis of the ROC curve, an optimum threshold (Th) is estimated for classifying the testing data sets: the threshold whose point on the ROC curve lies at the shortest Euclidean distance from the point at which sensitivity = 1 and specificity = 1 [21]. Applying this threshold to the ANN test output b̂, the classification performance can be quantified using sensitivity (Sen), specificity (Spe), positive predictive value (PPV), negative predictive value (NPV), and accuracy (Acc).
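A minimal sketch of the ROC-based threshold selection and the performance measures described above, under our own naming (not the authors' code):

```python
import numpy as np

def confusion(y_true, y_pred):
    """Counts of true/false positives and negatives for binary labels."""
    tp = int(np.sum((y_true == 1) & y_pred))
    tn = int(np.sum((y_true == 0) & ~y_pred))
    fp = int(np.sum((y_true == 0) & y_pred))
    fn = int(np.sum((y_true == 1) & ~y_pred))
    return tp, tn, fp, fn

def optimal_threshold(y_true, y_score):
    """Threshold Th whose (sensitivity, specificity) point on the ROC
    curve is closest (Euclidean) to sensitivity = specificity = 1."""
    best_th, best_d = 0.5, np.inf
    for th in np.unique(y_score):
        tp, tn, fp, fn = confusion(y_true, y_score >= th)
        sen = tp / max(tp + fn, 1)
        spe = tn / max(tn + fp, 1)
        d = np.hypot(1.0 - sen, 1.0 - spe)
        if d < best_d:
            best_d, best_th = d, th
    return best_th

def metrics(y_true, y_pred):
    """Sen, Spe, PPV, NPV, and Acc as defined in the text."""
    tp, tn, fp, fn = confusion(y_true, y_pred)
    return {"Sen": tp / (tp + fn), "Spe": tn / (tn + fp),
            "PPV": tp / (tp + fp), "NPV": tn / (tn + fn),
            "Acc": (tp + tn) / len(y_true)}
```

Here `y_score` stands for the ANN test output b̂ on a validation set; the chosen threshold is then applied to the test output to compute the five measures.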
As shown in Figure 1, the automatic BS extraction performance of this ANN-based method is evaluated by dividing the BS and non-BS segments obtained from the 20-person sound database at a ratio of 3:1 into training and testing data, respectively. This study calculated the average classification accuracy over multiple trials of ANN training and testing, in which (1) the initial values of the connection weights were randomly assigned and (2) the test data were randomly assigned.

2.3. Evaluation of Bowel Motility Based on Automatically Extracted BS

Our past research demonstrated significant differences in the following time-domain acoustic features extracted before and after consumption of carbonated water by the participants: BS detected per minute, SNR, length of BS, and the interval between BS (sound-to-sound (SS) interval). These differences suggest that bowel motility can be evaluated on the basis of these acoustic features [8]. As such, this study examines whether bowel motility can be automatically evaluated based on these acoustic features, as investigated in the previous study. To evaluate bowel motility from the data of one participant, the time-domain acoustic features were extracted from the BS automatically detected by the proposed method under leave-one-out cross validation. As in past studies, the differences in the abovementioned acoustic features before and after the participants consumed carbonated water were evaluated using a Wilcoxon signed-rank test. The block diagram in Figure 2 shows the process leading up to the evaluation of bowel motility.
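The statistical comparison can be sketched as follows. The feature values below are illustrative placeholders, not data from the study, and scipy's Wilcoxon signed-rank implementation stands in for whatever software the authors used.

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired before/after values of one time-domain feature (e.g. BS
# detected per minute), one value per participant. These numbers are
# made up for illustration only.
before = np.array([3.2, 4.1, 2.8, 5.0, 3.6, 4.4, 2.9, 3.8])
after = before + np.array([2.1, 2.5, 3.0, 1.8, 2.7, 3.3, 2.2, 2.9])

# Wilcoxon signed-rank test on the paired differences
stat, p = wilcoxon(before, after)
significant = p < 0.05
```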

3. Results

To investigate the effect of the SNR threshold used in the STE method on automatic BS extraction performance and on the evaluation of bowel motility, experiments were performed with SNR thresholds of 0, 0.5, 1 and 2 dB.

3.1. Automatic Bowel Sound Detection

Table 1 lists the number and length of BS and non-BS segments obtained at each SNR threshold used in the STE method.
Table 1 reveals the following pattern both before and after consumption of carbonated water by the participants: as the SNR threshold decreases, the numbers of BS and non-BS segments increase until a certain threshold, after which they decrease. The values in the table also confirm that the lengths of both segment types increase as the SNR threshold decreases. The number and length of both segment types were larger after consumption of carbonated water than before, and BS segments were longer than non-BS segments.
To evaluate the automatic extraction performance of the proposed method, the respective segments were divided in a ratio of 3:1 for training data and testing data. Table 2 and Table 3, respectively, present the results of 100 ANN-based approach trials that used MFCC and PNCC as acoustic features to derive the average classification accuracy.
Table 2 reveals that before consumption of carbonated water, accuracy degraded slightly as the SNR threshold decreased, whereas after consumption, accuracy increased as the SNR threshold decreased. Table 3 demonstrates that when PNCC is used, classification accuracy increases as the SNR threshold decreases, both before and after consumption of carbonated water; the highest accuracy is obtained at an SNR threshold of 0 dB. Figure 3 shows a comparative analysis of extraction accuracy before and after consumption of carbonated water when using MFCC and PNCC, respectively. Table 3 also shows that PNCC is more accurate than MFCC for all SNR thresholds; at an SNR threshold of 0 dB before the consumption of carbonated water, the average accuracy with PNCC is substantially higher than that with MFCC. In general, BS with lower sound pressure occur before consumption of carbonated water than after, which suggests that PNCC is effective in classifying such sounds. On the basis of these observations, the subsequent automatic evaluation of bowel motility was conducted using the PNCC-based ANN approach.

3.2. Bowel Motility Evaluation

In this study, leave-one-out cross validation was performed over the participants, and the classification accuracy of the ANN-based approach using PNCC was verified. Table 4 presents the average classification accuracies, taking for each participant the result with the highest accuracy among 50 repetitions of leave-one-out cross validation.
As was noted in a prior study [8], Table 5 shows that the acoustic features of BS detected per minute, SNR, and SS interval capture the differences in bowel motility before and after a participant consumes carbonated water, even as the SNR threshold decreases to 0 dB. Note that these results depend on the accuracy of automatic BS extraction. However, unlike in the prior study [8], no significant difference in BS length before and after consumption of carbonated water was found. This suggests that when the SNR threshold is reduced to 0 dB, BS detected per minute, SNR, and SS interval can still be used to evaluate bowel motility without being affected by the reduction in SNR threshold, whereas BS length cannot.

4. Discussion and Conclusions

This study proposes a system for the automatic evaluation of bowel motility on the basis of time-domain acoustic features of BS automatically extracted from sound data recorded using a noncontact microphone. Although studies of bowel motility using BS have been conducted previously [2,3,4,5,6,7], those studies used electronic stethoscopes applied to the surface of the body. Our recent research demonstrated that bowel motility can be evaluated from sound data recorded with a noncontact microphone in the same way as from data recorded with a stethoscope [8]; however, the extraction of BS in that study was based on manual labeling. The sound pressure of BS recorded using noncontact microphones is lower than that of BS recorded using electronic stethoscopes applied to the surface of the body, and fewer BS are perceptible. As such, using sound data recorded without contact requires an automatic BS extraction method that is resistant to extraneous noise. The results suggest that the system proposed herein, which uses the noise-robust PNCC features, is able to automatically extract BS with approximately 90% accuracy at an SNR threshold of 0 dB. Furthermore, even at this threshold, the results suggest that bowel motility can be evaluated using the time-domain acoustic features other than BS length, namely BS detected per minute, SNR, and SS interval.
By decreasing the SNR threshold used in the STE method, the proposed method can detect more sound, and the resulting extension of segment length increases the information available to the system for BS/non-BS differentiation. We believe this extension contributed to the improved performance of automatic BS extraction. However, it also means that the true BS length cannot be measured accurately, because lowering the SNR threshold artificially extends the BS segment length.
Compared to the results of the performance evaluation based on random sampling, the results based on leave-one-out cross validation tended to have a larger standard deviation and decreased sensitivity in the proposed method, particularly before the consumption of carbonated water by participants. The cause of this was thought to be the small number of participants, meaning that sufficient BS segments were not available for use in leave-one-out cross validation. As such, we expect an improvement with increase in the number of subjects. To further improve system performance, a combination of the following two measures would likely be useful: (1) replacing the STE method with another method for detecting segments having sound; and (2) selecting acoustic features with excellent resistance to extraneous noise.
In this study, we have provided new knowledge for noncontact automatic evaluation of bowel motility. It is hoped that the foundations of the system developed in this study can assist in the further development of the evaluation of bowel motility using noncontact microphones and research related to diagnostic support for bowel disorders.

Author Contributions

T.E., R.S., and Y.G. conceived and designed the experiments; R.S. and Y.G. performed the experiments; R.S. analyzed the data; R.S. and M.A. contributed materials/analysis tools; T.E. and R.S. wrote the paper.

Acknowledgments

This study was partly supported by the Ono Charitable Trust for acoustics.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zaloga, G.P. Blind bedside placement of enteric feeding tubes. Tech. Gastrointest. Endosc. 2001, 3, 9–15.
  2. Shono, K.; Emoto, T.; Abeyratne, U.R.; Yano, H.; Akutagawa, M.; Konaka, S.; Kinouchi, Y. Automatic evaluation of gastrointestinal motor activity through the analysis of bowel sounds. In Proceedings of the 10th IASTED International Conference on Biomedical Engineering, BioMed 2013, Innsbruck, Austria, 11–13 February 2013; ACTA Press: Calgary, AB, Canada, 2013; pp. 136–140.
  3. Ulusar, U.D. Recovery of gastrointestinal tract motility detection using Naive Bayesian and minimum statistics. Comput. Biol. Med. 2014, 51, 223–228.
  4. Craine, B.L.; Silpa, M.L.; O'Toole, C.J. Two-dimensional positional mapping of gastrointestinal sounds in control and functional bowel syndrome patients. Dig. Dis. Sci. 2002, 47, 1290–1296.
  5. Goto, J.; Matsuda, K.; Harii, N.; Moriguchi, T.; Yanagisawa, M.; Sakata, O. Usefulness of a real-time bowel sound analysis system in patients with severe sepsis (pilot study). J. Artif. Organs 2015, 18, 86–91.
  6. Dimoulas, C.; Kalliris, G.; Papanikolaou, G.; Petridis, V.; Kalampakas, A. Bowel-sound pattern analysis using wavelets and neural networks with application to long-term, unsupervised, gastrointestinal motility monitoring. Expert Syst. Appl. 2008, 34, 26–41.
  7. Ranta, R.; Louis-Dorr, V.; Heinrich, C.; Wolf, D.; Guillemin, F. Digestive activity evaluation by multichannel abdominal sounds analysis. IEEE Trans. Biomed. Eng. 2010, 57, 1507–1519.
  8. Emoto, T.; Abeyratne, U.R.; Gojima, Y.; Nanba, K.; Sogabe, M.; Okahisa, T.; Kinouchi, Y. Evaluation of human bowel motility using non-contact microphones. Biomed. Phys. Eng. Express 2016, 2, 45012.
  9. Lu, X.; Dang, J. An investigation of dependencies between frequency components and speaker characteristics for text-independent speaker identification. Speech Commun. 2008, 50, 312–322.
  10. Karunajeewa, A.S.; Abeyratne, U.R.; Hukins, C. Multi-feature snore sound analysis in obstructive sleep apnea–hypopnea syndrome. Physiol. Meas. 2010, 32, 83–97.
  11. Kim, C.; Stern, R.M. Power-normalized cepstral coefficients (PNCC) for robust speech recognition. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012; pp. 4101–4104.
  12. Chenchah, F.; Lachiri, Z. A bio-inspired emotion recognition system under real-life conditions. Appl. Acoust. 2017, 115, 6–14.
  13. Kim, C.; Stern, R.M. Feature extraction for robust speech recognition based on maximizing the sharpness of the power distribution and on power flooring. In Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Dallas, TX, USA, 14–19 March 2010; pp. 4574–4577.
  14. Kim, C.; Stern, R.M. Feature extraction for robust speech recognition using a power-law nonlinearity and power-bias subtraction. In Proceedings of Interspeech 2009, Tenth Annual Conference of the International Speech Communication Association, Brighton, UK, 6–10 September 2009; pp. 28–31.
  15. Cannon, W.B. Auscultation of the rhythmic sounds produced by the stomach and intestines. Am. J. Physiol. Legacy Content 1905, 14, 339–353.
  16. Chauhan, S.; Wang, P.; Lim, C.S.; Anantharaman, V. A computer-aided MFCC-based HMM system for automatic auscultation. Comput. Biol. Med. 2008, 38, 221–233.
  17. Rubin, J.; Abreu, R.; Ganguli, A.; Nelaturi, S.; Matei, I.; Sricharan, K. Classifying heart sound recordings using deep convolutional neural networks and mel-frequency cepstral coefficients. In Proceedings of the 2016 Computing in Cardiology Conference (CinC), Vancouver, BC, Canada, 11–14 September 2016; pp. 813–816.
  18. Duckitt, W.D.; Tuomi, S.K.; Niesler, T.R. Automatic detection, segmentation and assessment of snoring from ambient acoustic data. Physiol. Meas. 2006, 27, 1047.
  19. Levenberg, K. A method for the solution of certain problems in least squares. Q. Appl. Math. 1944, 2, 164–168.
  20. Moré, J.J. The Levenberg–Marquardt algorithm: Implementation and theory. In Numerical Analysis; Lecture Notes in Mathematics 630; Watson, G.A., Ed.; Springer: Berlin/Heidelberg, Germany, 1978; pp. 105–116.
  21. Akobeng, A.K. Understanding diagnostic tests 3: Receiver operating characteristic curves. Acta Paediatr. 2007, 96, 644–647.
Figure 1. Block diagram showing the proposed method for automatic BS extraction based on acoustic features. SE: sound episode; MFCC: Mel Frequency Cepstral Coefficients; PNCC: Power Normalized Cepstral Coefficients; ANN: artificial neural network; ROC: receiver operating characteristic; BS: bowel sound; b: ANN test output; Th: threshold obtained via ROC analysis.
Figure 2. Block diagram showing the proposed method for automatic evaluation of bowel motility.
Figure 3. Comparison of accuracies of ANN-based approaches based on MFCC and PNCC, respectively.
Table 1. Number and length of BS and non-BS segments obtained at each SNR threshold used in the STE method.

Before soda intake:
Threshold of SNR (dB) | No. of BS | No. of Non-BS | BS Length (s) | Non-BS Length (s)
2   | 396 | 4840  | 0.42 ± 0.59 | 0.34 ± 1.49
1   | 439 | 10202 | 0.59 ± 0.91 | 0.36 ± 1.28
0.5 | 441 | 13444 | 0.90 ± 2.27 | 0.39 ± 1.35
0   | 409 | 14904 | 1.51 ± 3.83 | 0.47 ± 1.72

After soda intake:
Threshold of SNR (dB) | No. of BS | No. of Non-BS | BS Length (s) | Non-BS Length (s)
2   | 1538 | 6372  | 0.59 ± 1.82 | 0.30 ± 1.15
1   | 1463 | 15614 | 0.85 ± 2.47 | 0.29 ± 0.82
0.5 | 1378 | 21202 | 1.23 ± 3.61 | 0.32 ± 0.53
0   | 1264 | 23522 | 1.86 ± 4.59 | 0.39 ± 0.65
Table 2. Results of automatic BS extraction using an ANN-based approach based on MFCC (performance evaluation through random sampling). Sen: sensitivity; Spe: specificity; PPV: positive predictive value; NPV: negative predictive value; Acc: accuracy.

Before soda intake (MFCC):
Threshold of SNR (dB) | Sen (%) | Spe (%) | PPV (%) | NPV (%) | Acc (%)
2   | 78.8 ± 4.7 | 83.9 ± 2.1 | 28.8 ± 2.3 | 98.0 ± 0.4 | 83.5 ± 1.8
1   | 79.7 ± 4.9 | 83.3 ± 2.1 | 17.1 ± 1.4 | 99.0 ± 0.2 | 83.2 ± 1.9
0.5 | 78.9 ± 4.1 | 83.2 ± 2.0 | 13.5 ± 1.4 | 99.2 ± 0.2 | 83.1 ± 1.9
0   | 78.0 ± 4.8 | 81.1 ± 2.0 | 10.2 ± 1.0 | 99.3 ± 0.2 | 81.0 ± 1.9

After soda intake (MFCC):
Threshold of SNR (dB) | Sen (%) | Spe (%) | PPV (%) | NPV (%) | Acc (%)
2   | 77.6 ± 2.6 | 79.9 ± 1.6 | 48.2 ± 1.6 | 93.7 ± 0.6 | 79.4 ± 1.0
1   | 81.9 ± 2.4 | 83.8 ± 1.1 | 32.2 ± 1.3 | 98.0 ± 0.2 | 83.6 ± 0.9
0.5 | 82.4 ± 2.4 | 84.7 ± 1.2 | 26.0 ± 1.3 | 98.7 ± 0.2 | 84.6 ± 1.0
0   | 82.6 ± 2.8 | 83.6 ± 1.4 | 21.4 ± 1.2 | 98.9 ± 0.2 | 83.6 ± 1.2
Table 3. Results of automatic BS extraction using an ANN-based approach based on PNCC (performance evaluation through random sampling).

Before soda intake (PNCC):
Threshold of SNR (dB) | Sen (%) | Spe (%) | PPV (%) | NPV (%) | Acc (%)
2   | 79.4 ± 4.5 | 85.3 ± 2.0 | 30.9 ± 2.6 | 98.1 ± 0.4 | 84.9 ± 1.7
1   | 81.9 ± 3.8 | 86.6 ± 1.7 | 20.9 ± 1.8 | 99.1 ± 0.2 | 86.4 ± 1.6
0.5 | 82.2 ± 4.0 | 87.4 ± 1.4 | 17.8 ± 1.4 | 99.3 ± 0.1 | 87.3 ± 1.3
0   | 85.6 ± 4.3 | 87.8 ± 1.6 | 16.3 ± 1.6 | 99.6 ± 0.1 | 87.8 ± 1.5

After soda intake (PNCC):
Threshold of SNR (dB) | Sen (%) | Spe (%) | PPV (%) | NPV (%) | Acc (%)
2   | 80.6 ± 2.3 | 83.5 ± 1.6 | 54.1 ± 2.1 | 94.7 ± 0.6 | 82.9 ± 1.1
1   | 84.4 ± 1.9 | 87.6 ± 1.1 | 38.9 ± 1.9 | 98.4 ± 0.2 | 87.3 ± 0.9
0.5 | 87.1 ± 1.8 | 88.2 ± 0.9 | 32.4 ± 1.4 | 99.1 ± 0.1 | 88.1 ± 0.8
0   | 87.0 ± 2.1 | 88.3 ± 0.9 | 28.7 ± 1.4 | 99.2 ± 0.1 | 88.2 ± 0.8
Table 4. Results of automatic BS extraction using an ANN-based approach based on PNCC (performance evaluation through leave-one-out cross validation).

Before soda intake (PNCC):
Threshold of SNR (dB) | Sen (%) | Spe (%) | PPV (%) | NPV (%) | Acc (%)
2   | 71.5 ± 23.1 | 85.0 ± 12.0 | 25.5 ± 21.5 | 96.5 ± 6.6 | 85.5 ± 9.6
1   | 75.2 ± 23.8 | 88.8 ± 8.0  | 17.6 ± 15.1 | 98.8 ± 2.0 | 88.7 ± 7.3
0.5 | 74.1 ± 23.5 | 90.2 ± 5.8  | 15.4 ± 13.7 | 99.2 ± 1.0 | 90.0 ± 5.5
0   | 72.4 ± 23.0 | 90.4 ± 6.4  | 15.2 ± 13.5 | 99.4 ± 0.8 | 90.2 ± 6.3

After soda intake (PNCC):
Threshold of SNR (dB) | Sen (%) | Spe (%) | PPV (%) | NPV (%) | Acc (%)
2   | 78.6 ± 8.1 | 85.2 ± 6.0 | 54.7 ± 12.8 | 94.4 ± 2.8 | 84.4 ± 4.1
1   | 82.9 ± 8.7 | 88.6 ± 5.3 | 40.0 ± 12.9 | 98.2 ± 1.1 | 88.3 ± 4.7
0.5 | 85.4 ± 7.2 | 90.1 ± 3.4 | 34.5 ± 12.9 | 98.9 ± 0.7 | 89.9 ± 3.1
0   | 84.4 ± 8.7 | 90.0 ± 3.6 | 30.7 ± 12.8 | 99.1 ± 0.7 | 89.8 ± 3.2
Table 5. Results of automatic bowel motility evaluation using four time-domain acoustic features: BS detected/min, SNR (dB), length of BS (s), and SS interval (s). * indicates a significant difference (p < 0.05).

BS detected/min:
Threshold of SNR (dB) | Before Soda Intake | After Soda Intake | P Value
2   | 3.97 ± 3.62 | 6.97 ± 3.83  | <0.001 *
1   | 6.59 ± 4.70 | 9.54 ± 4.28  | 0.015 *
0.5 | 7.79 ± 5.48 | 10.69 ± 3.82 | 0.008 *
0   | 7.90 ± 5.35 | 11.18 ± 3.70 | 0.014 *

SNR (dB):
Threshold of SNR (dB) | Before Soda Intake | After Soda Intake | P Value
2   | 5.38 ± 0.88 | 6.20 ± 0.64 | 0.007 *
1   | 3.83 ± 0.77 | 4.65 ± 0.56 | 0.002 *
0.5 | 3.05 ± 0.82 | 3.86 ± 0.56 | 0.002 *
0   | 2.44 ± 0.94 | 3.06 ± 0.54 | 0.036 *

Length of BS (s):
Threshold of SNR (dB) | Before Soda Intake | After Soda Intake | P Value
2   | 0.56 ± 0.46 | 0.52 ± 0.15 | 0.126
1   | 0.85 ± 0.80 | 0.67 ± 0.20 | 0.457
0.5 | 1.20 ± 1.30 | 0.98 ± 0.45 | 0.323
0   | 1.89 ± 2.55 | 1.30 ± 0.61 | 0.379

SS interval (s):
Threshold of SNR (dB) | Before Soda Intake | After Soda Intake | P Value
2   | 23.32 ± 17.38 | 10.26 ± 5.66 | 0.001 *
1   | 12.59 ± 10.11 | 6.84 ± 3.55  | 0.013 *
0.5 | 9.83 ± 8.25   | 5.37 ± 2.51  | 0.014 *
0   | 9.54 ± 7.94   | 4.74 ± 2.57  | 0.021 *

Share and Cite

Sato, R.; Emoto, T.; Gojima, Y.; Akutagawa, M. Automatic Bowel Motility Evaluation Technique for Noncontact Sound Recordings. Appl. Sci. 2018, 8, 999. https://doi.org/10.3390/app8060999
