Advanced Arab Academy of Audio-Vestibulogy Journal

Year : 2016  |  Volume : 3  |  Issue : 2  |  Page : 25–34

Templates for speech-evoked auditory brainstem response performance in cochlear implantees

Mona I Mourad1, Mohamed Eid2, Hicham G Elmongui3, Mohamed M Talaat1, Mirhan K Eldeeb1
1 Unit of Audiovestibular Medicine, Faculty of Medicine, Department of Otorhinolaryngology, Alexandria University, Alexandria, Egypt
2 Department of Diagnostic Imaging, Faculty of Medicine, Alexandria University, Alexandria, Egypt
3 Department of Computer and Systems Engineering, Faculty of Engineering, Alexandria University, Alexandria, Egypt

Correspondence Address:
Mirhan K Eldeeb
Unit of Audiovestibular Medicine, Department of Otorhinolaryngology, Faculty of Medicine, Al Sultan Hussein Street, Al Khartoom Square, Al Azareeta, Alexandria, 21111


Introduction
Speech-evoked auditory brainstem response (ABR) has been used to assess the fidelity of encoding speech stimuli at the subcortical level in normal individuals in noise and in special populations such as learning-impaired children and musicians. The neural code generated by cochlear implants (CIs) in the auditory brainstem pathway, and its similarity to the stimulus, may account for variable speech development in cochlear implantees.

Objective
The aim of this study was to describe speech ABR recorded in CI individuals and establish measurement parameters for the neural response and its reproducibility.

Participants and methods
Children between 5 and 10 years of age, implanted in the right ear with fully inserted 12-electrode CIs, were selected. All participants had normal morphology of the cochlea and auditory nerve on preoperative computed tomographic scan and MRI. The 40 ms speech syllable /da/ was used to elicit speech ABR. Response traces for intensity input/output functions were harvested. Grand averages were constructed for peak picking. Individual patient responses were analyzed for reproducibility, latency of wave V, root mean square amplitude of the response, and correlation to the stimulus.

Results
Grand averages showed wave V followed by the frequency following response. Wave V is a vertex-positive peak, equivalent to that elicited by a click, which reflects stimulation by the transient /d/. The mean latency of wave V was 2.59±0.7 ms at 70 dBHL. The frequency following response showed multiple sequenced troughs corresponding to the sustained vowel /a/. Individual responses collected for similar stimulus parameters showed high reproducibility, being 99.65% at 60 dBHL and 52.8% at 30 dBHL. Participants showed variable latency and root mean square amplitude-intensity input–output function slopes. The mean stimulus-to-response correlation was 18.1±3.1%.

Conclusion
Speech ABR in CI participants shows morphology similar to that recorded in norms. CIs thus transcribe the speech signal with high fidelity to the brainstem pathways.

How to cite this article:
Mourad MI, Eid M, Elmongui HG, Talaat MM, Eldeeb MK. Templates for speech-evoked auditory brainstem response performance in cochlear implantees.Adv Arab Acad Audio-Vestibul J 2016;3:25-34

How to cite this URL:
Mourad MI, Eid M, Elmongui HG, Talaat MM, Eldeeb MK. Templates for speech-evoked auditory brainstem response performance in cochlear implantees. Adv Arab Acad Audio-Vestibul J [serial online] 2016 [cited 2017 Jun 25 ];3:25-34

Full Text


Sound transduction in the cochlea follows propagation of the mechanical traveling wave along the basilar membrane, stimulating the outer and inner hair cells, and evoking the eighth nerve action potential. In profound hearing loss, this function is substantially disturbed with subsequent failure to provoke an auditory nerve action potential.

A cochlear implant (CI) transduces acoustic signals into electrical signals, bypassing the damaged cochlea and provoking auditory nerve action potentials. The transduction process extracts prominent features of the target speech from the acoustic signal and feeds them to the auditory nerve through electrical biphasic pulses, tonotopically mapped in the cochlear scala using place coding strategies. In addition, temporal coding, conveyed through pulse rate, enhances low-frequency perception [1].

Auditory brainstem response (ABR) is a series of potentials with robust timing and reproducibility for transient and sustained acoustic signals. It has been measured for speech stimuli [2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12],[13],[14],[15],[16],[17],[18],[19],[20],[21],[22]. Transient stimuli, whether a click [23] or a stop consonant [20], yield a series of peaks labeled I–V. Sustained stimuli such as phrases [24], monosyllabic words [25], and vowels [26],[27],[28] yield a series of potentials composing the frequency following response (FFR). The vertex-negative FFR peaks in response to the speech syllable /da/ of 40 ms duration are named B, C, D, E, F, and O [20]. Transduction of acoustic speech with CI into neural codes in the brainstem may therefore be studied by speech-evoked ABR.

The aim of the present study was to describe speech ABR morphology in CI individuals and establish parameters for neural response reproducibility.

In this investigation it was hypothesized that the CI processor-electrode coupling transduces the speech syllable reflecting its temporal and spectral components. In this respect, it mimics speech ABR reported in normal individuals.

 Participants and methods


Ten prelingually deafened children using CIs were selected. Participants’ ages ranged from 5 to 10 years (four male and six female). The study was approved by the local ethics committee, and an informed consent was obtained from each participant’s parent before inclusion. Criteria for selection were as follows:

1. Preoperative computed tomography (CT) and MRI of the petrous bone indicating normal anatomy of the cochlea and eighth nerve.
2. Postoperative CT of the petrous bone showing full insertion of the 12-electrode standard array (Med-EL, Innsbruck, Austria).
3. Implantation in the right ear.
4. All electrodes enabled.
5. The same coding strategy, fine structure four (FS4), used for all participants.

Each CI participant was subjected to the following protocol of evaluation.

Behavioral assessment

All children were examined using warble tones in decibel hearing level (dBHL) at 250, 500, 1000, 2000, and 4000 Hz to obtain aided free-field thresholds using their final map adjustments.

Speech auditory brainstem response recording

Speech stimulus

A speech-like /da/ syllable of 40 ms duration, provided by the Kraus brainstem toolbox, was presented at a repetition rate of 2.1/s in alternating polarity. The stimulus is a five-formant synthetic speech syllable produced using a Klatt cascade/parallel formant synthesizer. A detailed description of the stimulus is provided in the study by Banai et al. [29].

Stimulus calibration

The stimulus /da/ was calibrated for each participant at the level of the implanted ear. Stimulus intensity was measured in decibel sound pressure level (dBSPL) using a Radio Shack sound level meter and corrected to dBHL by subtracting 20 dB from the sound level meter reading.
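The correction step above amounts to a fixed offset. A minimal sketch, using a hypothetical helper name; the 20 dB offset is the stimulus-specific value reported in the text, not a universal dBSPL-to-dBHL constant:

```python
def spl_to_hl(db_spl):
    """Correct a sound level meter reading in dBSPL to dBHL.

    The 20 dB correction is the value reported for the /da/ stimulus
    in this study (illustrative helper, not from the original paper).
    """
    return db_spl - 20

# A 90 dBSPL meter reading corresponds to a 70 dBHL presentation level.
```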

Recording parameters

Responses were differentially recorded using a forehead-to-contralateral (left) mastoid electrode montage, with the chin as ground. This contralateral montage was chosen to minimize stimulus artifact by increasing the distance between the device and speaker and the reference electrode. Disposable electrodes with conductive paste were used. Electrode sites were cleaned with alcohol and rubbed with rough gauze to lower skin resistance. To ensure balanced inputs to the differential amplifier and optimize the signal-to-noise ratio, electrode impedance did not exceed 3000 Ω and differences between electrode pairs were kept below 2000 Ω.

Responses were averaged over 1000 stimuli within a 60-ms recording window (including a 10-ms prestimulus period) and online filtered through a 30–500 Hz band-pass filter. Two control traces were averaged: a no-stimulus run with the processor turned on, and a run with the processor turned off.
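The 30–500 Hz band-pass can be sketched offline as a zero-phase Butterworth filter. This is an illustrative assumption: the recording platform implements its own online filter, and the filter order and SciPy implementation here are not from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_trace(trace, fs=4000.0, lo=30.0, hi=500.0, order=4):
    """Zero-phase 30-500 Hz band-pass of an averaged trace.

    fs=4000 Hz matches the exported trace sampling rate reported in
    the data analysis section; the 4th-order Butterworth design is an
    illustrative assumption.
    """
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, np.asarray(trace, dtype=float))
```

Zero-phase (forward-backward) filtering preserves peak latencies, which matters when wave V timing is the measurement of interest.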

In a pilot study on CI participants, when the parameters used for normal-hearing children in the literature (a stimulus rate of 4.1/s and a band-pass filter of 30–3000 Hz [8]) were applied, the traces showed poor definition of response peaks and troughs. Changing the stimulus rate to 2.1/s and the band-pass filter to 30–500 Hz yielded clearer recordings with less noise contamination and better visually rated reproducibility. Wave V could then be traced down to 30 dBHL in most cases.

Test procedure

Responses were recorded using Bio-logic Navigator Pro version 7.0.0 (Natus Medical Incorporated, San Carlos, California, United States). Measures were obtained in a quiet room, and all participants were tested either in a comfortable state while watching a silent cartoon or while sleeping. The stimulus /da/ was delivered through a speaker located 30 cm from the participant’s head at 90° azimuth. The response input–output intensity function was recorded starting at 70 dBHL and then at successively lower intensities in 10 dB decrements, down to the level at which no visual response could be attained. Two traces were recorded at each stimulus intensity to ensure that the response was repeatable; the two traces were used to assess waveform reproducibility and were then added to create an average.

Data analysis of the response

Traces were analyzed using the MATLAB Digital Signal Processing toolbox and the Kraus brainstem toolbox (MATLAB; The MathWorks, Inc., Natick, Massachusetts, United States).

Traces were exported to ASCII format using the AEP2ASCII software provided with Bio-logic Navigator Pro version 7.0.0 (Natus). The digital signals extracted from the ASCII files had a sampling rate of 4 kHz. The stimulus /da/ was retrieved as a WAV file with a sampling rate of 48 kHz.

For the analysis, both the stimulus and the responses (traces) were converted into 8 kHz digital signals. The stimulus sampling rate was reduced using a sampling rate compressor implementing xd[n]=x[6n], where xd[n] is the compressed stimulus and x[n] is the WAV-file stimulus. The responses, on the other hand, were upsampled by a factor of 2 and then low-pass filtered to interpolate the missing values. The upsampling was implemented as yi[n]=y[n/2] for even n and yi[n]=0 for odd n, where yi[n] is the upsampled response and y[n] is the extracted trace; the subsequent low-pass filter fills in the zero-inserted samples.
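The two rate conversions described above can be sketched as follows. This is an illustrative reimplementation (the authors used the MATLAB DSP toolbox); `resample_poly` performs the zero insertion and low-pass interpolation in one step.

```python
import numpy as np
from scipy.signal import resample_poly

def compress_stimulus(x):
    """48 kHz -> 8 kHz: keep every 6th sample, i.e. xd[n] = x[6n]."""
    return np.asarray(x)[::6]

def upsample_response(y):
    """4 kHz -> 8 kHz: insert a sample between each pair and low-pass
    filter to fill in the missing values (resample_poly does both)."""
    return resample_poly(np.asarray(y, dtype=float), up=2, down=1)
```

After these two steps the stimulus and response share an 8 kHz rate, which is what makes the sample-wise cross-correlation in the next section well defined.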

All upsampled responses and compressed stimulus were converted to an AVG format that the Kraus brainstem toolbox uses.

To analyze the responses and their correlation with the stimulus, normalized cross-correlation was used: the maximum correlation was searched across different lag times. The FFR was correlated to the vowel part of the stimulus, which bracketed the temporal window (11–40 ms), at 60 and 70 dBHL. The FFR comprised the segment from 3 ms after the wave V trough to the end of the trace. Normalized cross-correlation allows meaningful comparison even when the signals have diverse energy levels, because it is the cross-correlation of the normalized signals. Let ŷ[n], μy, and σy denote the normalized signal, the mean, and the SD of y[n], respectively; then ŷ[n]=(y[n]−μy)/σy, and the normalized cross-correlation of two signals at a given lag is the cross-correlation of their normalized versions at that lag.
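The lag search above can be sketched as follows, assuming z-score normalization over the full analysis segments (an illustrative sketch, not the authors' MATLAB code):

```python
import numpy as np

def normalized_xcorr(x, y, max_lag):
    """Search the normalized cross-correlation of x and y across lags.

    Each signal is normalized (mean removed, divided by its SD), so
    the result is insensitive to the signals' overall energy; returns
    the lag (in samples) of maximum correlation and that correlation.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xn = (x - x.mean()) / x.std()
    yn = (y - y.mean()) / y.std()
    n = min(len(xn), len(yn))
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = xn[lag:n], yn[:n - lag]
        else:
            a, b = xn[:n + lag], yn[-lag:n]
        corr = float(np.dot(a, b) / len(a))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag, best_corr
```

Identical signals yield a correlation of 1 at zero lag; a stimulus-to-response comparison like the one reported here peaks well below 1 at some nonzero lag.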

The root mean square (RMS) amplitude was obtained for the whole response waveform. RMS amplitude was also measured for wave V peak to its following trough. RMS amplitude measurement for the FFR included the segment 3 ms after wave V trough to the end of the trace. The RMS amplitude ratio of wave V to FFR was calculated at all stimulus intensities.

For each patient, the intertrace normalized correlation was performed for the following:

1. Speech ABR traces of similar intensities.
2. The speech ABR trace at 60 dBHL versus a control trace recorded and averaged with no stimulus presented but the processor turned on.
3. The speech ABR trace at 60 dBHL versus a control trace recorded and averaged with the processor turned off.

The prestimulus baseline RMS amplitude was subtracted from the response to remove any noise during recording.
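The RMS measures and the baseline correction above can be sketched as follows (segment boundaries in samples are the caller's responsibility; at the 4 kHz trace rate the 10 ms prestimulus period is 40 samples):

```python
import numpy as np

def rms(segment):
    """Root mean square amplitude of a response segment."""
    seg = np.asarray(segment, dtype=float)
    return float(np.sqrt(np.mean(seg ** 2)))

def baseline_corrected_rms(trace, prestim_samples=40):
    """Response RMS minus the prestimulus-baseline RMS, discounting
    recording noise. Assumes the trace starts with the prestimulus
    period (40 samples = 10 ms at the 4 kHz trace rate)."""
    trace = np.asarray(trace, dtype=float)
    noise = rms(trace[:prestim_samples])
    return rms(trace[prestim_samples:]) - noise
```

The same `rms` helper applies to the whole waveform, the wave V peak-to-trough segment, and the FFR segment, so the V/FFR amplitude ratio is just the quotient of two such calls.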

Labeling of wave V at threshold followed visual inspection of the peak in the input–output function. Wave V was marked as the first positive peak at 6 ms or earlier, followed by a sharp trough at high stimulus intensities. To estimate wave V threshold, a bracketing procedure of ‘down-ten/up-ten dBHL’ was applied; threshold corresponded to the level at which responses were obtained for two ascending runs. Wave V was identified as the positive peak immediately before the negative slope, and wave A was selected as the bottom of the downward slope following wave V [8],[29]. A reliable peak was judged as having a peak-to-peak amplitude larger than the prestimulus baseline activity; ambiguous peaks were visually assessed by three raters. Wave V and the FFR were expected to be earlier than in speech ABR of normal individuals despite possible maturational delays, owing to the loss of the cochlear travel time, estimated at ∼6 ms in normal individuals [30],[31]. A delay of 0.8 ms due to the speaker distance from the ear was also expected and was corrected after peak marking.

Designation of the FFR thresholds followed the normalized correlation procedure. Two parameters were measured: first, the percentage correlation between the two traces when wave V was absent, based on morphology and RMS similarity; second, the lag time between the traces in ms. The FFR threshold was marked as the minimum intensity at which maximal correlation was obtained at ∼0 ms lag time, with the maximum required to be a single peak exceeding 50% correlation for the trace to be judged as the FFR threshold.

A grand average was constructed for traces of similar intensities across patients to create a template for peak marking. Initially, the average latency of wave V of individual traces at a given intensity was calculated to give wave V latency of a grand average. Individual traces were aligned at the average latency of wave V for a given intensity.
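The alignment step above can be sketched as follows (a hypothetical reimplementation; samples shifted out of the window are zero-padded):

```python
import numpy as np

def grand_average(traces, wave_v_latencies_ms, fs=4000.0):
    """Average traces across participants after shifting each one so
    that its wave V peak falls at the mean wave V latency for a given
    stimulus intensity (sketch of the grand-average construction)."""
    mean_lat = float(np.mean(wave_v_latencies_ms))
    aligned = []
    for trace, lat in zip(traces, wave_v_latencies_ms):
        trace = np.asarray(trace, dtype=float)
        shift = int(round((mean_lat - lat) * fs / 1000.0))  # ms -> samples
        out = np.zeros_like(trace)
        if shift >= 0:
            out[shift:] = trace[:len(trace) - shift]
        else:
            out[:len(trace) + shift] = trace[-shift:]
        aligned.append(out)
    return np.mean(aligned, axis=0)
```

Aligning on wave V before averaging keeps inter-participant latency variability from smearing the grand-average peaks used as a peak-picking template.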


Map levels and aided free-field thresholds

[Table 1] shows the mean and SDs of threshold (T) and most comfortable (C) electrical stimulation levels in charge units (qu) in final maps and aided free-field thresholds.{Table 1}

Speech auditory brainstem response morphology

Speech auditory brainstem response grand average

The response consisted of an early segment (wave V peak followed by wave A trough) and a later segment (series of vertex-negative peaks, which represent the FFR). The most prominent FFR troughs in the grand averages, particularly that at 70 dBHL, were waves C, D, and E. Waves B, F, and O of FFR in normal individuals were not detected. [Figure 1] shows labeled grand averages for speech ABR at 70, 50, and 30 dBHL. [Table 2] shows speech ABR grand average latencies and amplitudes of the response segments at 70 dBHL.{Figure 1}{Table 2}

Speech auditory brainstem response individual traces

Waveform reproducibility: the response morphology to the /da/ stimulus was maintained across the input–output intensity function (70–30 dBHL). Speech ABR trace reproducibility was maximal at high and moderate intensities (reaching 99.65% at 60 dBHL). [Table 3] shows speech ABR mean trace reproducibility in percentage and mean lag time in ms at high, moderate, and low stimulus intensity levels. [Figure 2] shows the normalized correlation between two traces of the same intensity. [Figure 3] shows the normalized correlation between a true trace and an averaged raw waveform with the processor turned off ([Figure 3]a) and with no stimulus input ([Figure 3]b). The latter two averaged traces served as controls (no stimulus fed to the brainstem pathway).{Table 3}{Figure 2}{Figure 3}

Thresholds of speech ABR ranged from 30 to 50 dBHL between participants. At threshold the full morphology (V-FFR) of the speech ABR was detected (n=7). In one participant, wave V alone was manifest at threshold. However, in other individuals (n=2), only the FFR was manifest at threshold. [Figure 4] shows the respective trace morphologies. [Table 4] shows speech ABR mean thresholds for wave V and FFR in dBHL.{Figure 4}{Table 4}

Latency of speech auditory brainstem response wave V: the mean latency of wave V was 2.59±0.7 ms at 70 dBHL, with a range of 1.81–4.82 ms. [Figure 5]a shows the speech ABR wave V latency-intensity function with the best linear fitted lines. The slope of the average best linear fitted line was 0.038.{Figure 5}

Root mean square amplitude of the speech auditory brainstem response: [Figure 5]b shows the speech ABR RMS amplitude-intensity function with best linear fitted lines. The slope of the average best linear fitted line was 0.091. [Figure 6] shows the mean interamplitude (RMS) ratio of wave V to FFR (V/FFR) in percentage at different stimulus intensities. The ratio was greater than 1 at 40 and 50 dBHL.{Figure 6}

Stimulus-to-response correlations: the vowel segment /a/ of the stimulus was correlated with the FFR segment of the response at 60 and 70 dBHL; these intensities were chosen because their grand averages displayed the best morphology. Maximum correlation was based on the best morphology and optimum RMS similarity as searched across different lag times. The mean stimulus–response correlation was 18.1±3.1% and ranged between 12.66 and 25.88%. The mean lag time between stimulus and response was 6±7.6 ms and ranged from −3.625 to 24 ms. [Figure 7] shows the normalized correlation between the response (FFR) and the stimulus (/a/) at 70 dBHL; the maximum correlation was 25.88%, attained at 11 ms lag time. [Figure 8] shows a bar chart of the stimulus–response correlation in percentage at the two stimulus intensities of 60 and 70 dBHL.{Figure 7}{Figure 8}

Radiological profiles

[Table 5] shows measurements of anatomical structures depicted in postoperative CT scan: cochlear length, internal auditory canal diameter, electrode array angular insertion, and distribution of electrodes along the basal, middle, and apical turns.{Table 5}


In the present study, acoustic speech ABR was recorded in 10 CI children. Responses were evaluated for intertrace reproducibility, stimulus-to-response correlation, and latency-intensity and RMS amplitude-intensity input–output functions.

Speech auditory brainstem response responses

Because highly reproducible transient and sustained neural responses to /da/ were present in CI individuals, our results suggest that the neural codes provoked by CI faithfully transcribe the speech signal. The response shows many fine details compared with speech ABR in normal cochleae [8],[31]. These details may represent the difference between auditory nerve firing provoked by electrical stimulation through the CI and that provoked by acoustic stimulation through the traveling wave. Electrical stimulation of the auditory nerve produces a deterministic firing pattern that is tightly phase locked to the stimulus; this phase-locked response follows the all-or-none rule of the nerve action potential [32],[33],[34]. In contrast, acoustic stimulation through cochlear transduction follows stochastic firing with unequal intervals between the peaks due to the probabilistic nature of the hair cell-neural connection [35],[36],[37]. In addition, the deterministic nature of electrical stimulation and the tight phase locking explain the high amplitude of the waves ([Table 2]) compared with norms [29].

Variability in morphology, latency, and amplitude was noted among implanted patients. The grand average constructed at different intensities limited this variation among individuals. Peak and trough picking in the grand average showed wave V and waves C, D, and E of the FFR described in norms. In literature norms, the FFR represents phase locking to the fundamental frequency of the stimulus. It occurs in response to the periodic information present in the vowel at the frequency of the sound source (i.e. the glottal pulse); subsequently, the period between peaks D, E, and F of the FFR corresponds to the fundamental frequency of the stimulus. Wave C marks the transition from the consonant /d/ to the vowel /a/, whereas waves D, E, and F represent phase locking to the first formant [29]. Absence of wave F in CI individuals, which is prominent in norms, may be attributed to the following:

1. Limited coding of the higher frequencies of the first formant through the CI due to frequency-place shift of the apical electrodes; the latter shift is promoted by deeper electrode insertion and/or larger angles of insertion [38].
2. The band-pass filter used in the present study to record the responses was narrower (30–500 Hz) than that used in literature norms (70–2000 Hz); the selected bandwidth may hinder recording of the first formant’s higher frequencies.

Speech ABR as reported in the literature has been limited to grand average morphologies at moderate intensity. In the present study, the grand averages as well as individual responses are reported for wave V and FFR latency-intensity functions. The variability, expressed in SDs and the wide range of wave V latency for CI individuals, may be an expression of variable neural survival [39],[40],[41],[42],[43] and/or differential electrical stimulation levels in the different cochlear turns [44].

The grand average latency of speech ABR wave V in normal individuals is reported to be 6.6 ms when recorded using insert phones in 8–12-year-old children [8],[29]. The electrical wave V latency recorded using biphasic pulses for single electrodes is 3–4 ms [45],[46],[47],[48]. In the present study, the grand average latency of wave V recorded in CI individuals using speakers was 2.59 ms at 70 dBHL. This early onset is attributed to bypassing the cochlear traveling wave delay, which is ∼6 ms [47]. The earlier acoustic speech ABR wave V latency in CIs compared with electrical wave V latency is explained by summation and overlap of the electrical fields caused by complex signals: the acoustic speech ABR leads to simultaneous and overlapping stimulation of most electrodes, and this simultaneous stimulation of multiple electrodes produces synchrony of neural firing and earlier latencies.

There was a clear growth of the RMS amplitude with the increase in acoustic signal intensity (refer to [Figure 5]b). The growth function of the biological neural responses may be an indication of the following:

1. Appreciable neural density, with subsequently increased voltage capacity.
2. Decreased RMS amplitude at low intensities, or with elevated thresholds, indicating decreased neural firing in the former and a smaller surviving neural population in the latter.

The correlation of the vowel /a/ with the FFR was generally similar to the norms reported in the literature. The range of FFR–vowel correlation in CI was 12.66–25.88% at 60 and 70 dBHL, closely similar to literature norms (20–30%). However, the lag at which maximum correlation occurred spanned a larger range in the current study (−3.625 to 24 ms) than in literature norms (5.6–8.1 ms) [20].

Because the morphology of speech ABR response in CI individuals mimics the speech signal in its transient and sustained portions, brainstem responses to complex stimuli are viewed as biomarkers for encoding a speech syllable in the subcortical auditory system. Speech ABR in individuals with CI showed both rapid deflections (wave V) and some of the discrete peaks corresponding to the periodic peaks of the stimulus waveform in a robust manner.

Role of the fine structure four strategy, speech auditory brainstem response root mean square amplitude, and lag time in the present research

The FS4 strategy was implemented in the current research. In this strategy, the input signal is band-pass filtered and fed into channels that stimulate the electrode array tonotopically placed in the cochlea. In the low-frequency channels, the fine structure is encoded by stimulating the first four apical electrodes at a rate equal to the instantaneous frequency of the signals, with biphasic pulse amplitudes equal to the instantaneous envelope of the signal. This temporally weighted strategy emphasizes phase-locked stimulation, simulating the normal cochlea and extending the low-frequency information conveyed to the apical portions of the cochlea up to 970 Hz [49]. This explains the approximation of the speech ABR FFR morphology and reproducibility in CI individuals to that in normal individuals. Muller et al. [50] reported that the FS4 strategy improves vowel identification and speech understanding in CI individuals through phase-locking mechanisms. The high-frequency channels in the basal electrodes process signals according to the continuous interleaved sampling principle, in which the envelope of the signal is amplitude modulated at a constant rate [51]. This principle is applied to the remaining eight electrodes and simulates the place theory of frequency coding in the normal cochlea.
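The low-frequency fine-structure principle described above can be illustrated with a toy sketch (emphatically not MED-EL's implementation): pulses are placed at the negative-to-positive zero crossings of a band-limited channel signal, so the pulse rate tracks the instantaneous frequency, while each pulse's amplitude is read from the instantaneous (Hilbert) envelope.

```python
import numpy as np
from scipy.signal import hilbert

def fine_structure_pulses(channel_sig, fs):
    """Toy illustration of fine-structure coding on one apical channel:
    pulse times follow the signal's zero crossings (instantaneous
    frequency); pulse amplitudes follow its instantaneous envelope."""
    sig = np.asarray(channel_sig, dtype=float)
    envelope = np.abs(hilbert(sig))
    crossings = np.where((sig[:-1] < 0) & (sig[1:] >= 0))[0]
    return crossings / fs, envelope[crossings]
```

For a pure tone this yields one pulse per cycle at the tone's amplitude, which is the sense in which such a strategy phase locks stimulation to the signal's fine structure.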

RMS similarity and lag time were used to evaluate trace reproducibility in addition to stimulus–response correlation. As these parameters reflected response consistency, they may provide a prognostic measure of speech neural encoding in CI. Assessment of intertrace correlation based on RMS and lag time gauges the strength of the phase-locking abilities of the auditory nerve and the brainstem, and may also provide useful information about the effectiveness of a particular coding strategy in transducing low-frequency signals into neural code.

Electrode array insertion angle and depth

The standard electrode array used in our cases is 31 mm in length, which allows an insertion angle of ∼720° [52]. A long electrode would allow the stimulation of more apical regions of the cochlea with better coding of the low-frequency information in the vowel /a/. FFR will therefore display most of the described fundamental and formant frequencies harbored in the stimulus.


1. Brainstem auditory responses provoked by CI signal transduction faithfully transcribe the complex input signal.
2. Response lag time and RMS are reasonable biomarkers for response consistency.
3. User-friendly software programs for clinical implementation will provide a valuable tool to assess CI signal transduction.


The authors thank the Cochlear Implant Unit, Faculty of Medicine, Alexandria University, Egypt, for providing the participants of the study. They also thank Dr Nina Kraus Laboratory for providing the brainstem toolbox.

Financial support and sponsorship


Conflicts of interest

There are no conflicts of interest.


1Clark G. The multi-channel cochlear implant and the relief of severe-to-profound deafness. Cochlear Implants Int 2012; 13:69–85.
2Wible B, Nicol T, Kraus N. Abnormal neural encoding of repeated speech stimuli in noise in children with learning problems. Clin Neurophysiol 2002; 113:485–494.
3Wible B, Nicol T, Kraus N. Atypical brainstem representation of onset and formant structure of speech sounds in children with language-based learning problems. Biol Psychol 2004; 67:299–317.
4Johnson KL, Nicol TG, Zecker SG, Kraus N. Auditory brainstem correlates of perceptual timing deficits. J Cogn Neurosci 2007; 19:376–385.
5Abrams DA, Nicol T, Zecker SG, Kraus N. Auditory brainstem timing predicts cerebral asymmetry for speech. J Neurosci 2006; 26:11131–11137.
6Kraus N, McGee TJ, Carrell TD, Zecker SG, Nicol TG, Koch DB. Auditory neurophysiologic responses and discrimination deficits in children with learning problems. Science 1996; 273:971–973.
7Russo NM, Nicol TG, Zecker SG, Hayes EA, Kraus N. Auditory training improves neural timing in the human brainstem. Behav Brain Res 2005; 156:95–103.
8Russo N, Nicol T, Musacchia G, Kraus N. Brainstem responses to speech syllables. Clin Neurophysiol 2004; 115:2021–2030.
9Kraus N, Banai K. Auditory-processing malleability − Focus on language and music. Curr Dir Psychol Sci 2007; 16:105–110.
10Song JH, Banai K, Kraus N. Brainstem timing deficits in children with learning impairment may result from corticofugal origins. Audiol Neurootol 2008; 13:335–344.
11Banai K, Nicol T, Zecker SG, Kraus N. Brainstem timing: implications for cortical processing and literacy. J Neurosci 2005; 25:9850–9857.
12Chandrasekaran B, Hornickel J, Skoe E, Nicol T, Kraus N. Context-dependent encoding in the human auditory brainstem relates to hearing speech in noise: implications for developmental dyslexia. Neuron 2009; 64:311–319.
13Wible B, Nicol T, Kraus N. Correlation between brainstem and cortical auditory processes in normal and language-impaired children. Brain 2005; 128:417–423.
14King C, Warrier CM, Hayes E, Kraus N. Deficits in auditory brainstem pathway encoding of speech sounds in children with learning problems. Neurosci Lett 2002; 319:111–115.
15Hornickel J, Kraus N. Unstable representation of sound: a biological marker of dyslexia. J Neurosci 2013; 33:3500–3504.
16Wong PC, Skoe E, Russo NM, Dees T, Kraus N. Musical experience shapes human brainstem encoding of linguistic pitch patterns. Nat Neurosci 2007; 10:420–422.
17Hayes EA, Warrier CM, Nicol TG, Zecker SG, Kraus N. Neural plasticity following auditory training in children with learning problems. Clin Neurophysiol 2003; 114:673–684.
18Anderson S, Skoe E, Chandrasekaran B, Kraus N. Neural timing is linked to speech perception in noise. J Neurosci 2010; 30:4922–4926.
19Song JH, Banai K, Russo NM, Kraus N. On the relationship between speech- and nonspeech-evoked auditory brainstem responses. Audiol Neurootol 2006; 11:233–241.
20Cunningham J, Nicol T, Zecker SG, Bradlow A, Kraus N. Neurobiologic responses to speech in noise in children with learning problems: deficits and strategies for improvement. Clin Neurophysiol 2001; 112:758–767.
21Song JH, Nicol T, Kraus N. Test-retest reliability of the speech-evoked auditory brainstem response. Clin Neurophysiol 2011; 122:346–355.
22Rocha-Muniz CN, Befi-Lopes DM, Schochat E. Sensitivity, specificity and efficiency of speech-evoked ABR. Hear Res 2014; 317:15–22.
23Moller AR. Neural mechanisms of BAEP. Electroencephalogr Clin Neurophysiol Suppl 1999; 49:27–35.
24Galbraith GC, Amaya EM, de Rivera JM, Donan NM, Duong MT, Hsu JN et al. Brain stem evoked response to forward and reversed speech in humans. Neuroreport 2004; 15:2057–2060.
25Krishnan A, Xu Y, Gandour JT, Cariani PA. Human frequency-following response: representation of pitch contours in Chinese tones. Hear Res 2004; 189:1–12.
26Ananthakrishnan S, Krishnan A, Bartlett E. Human frequency following response: neural representation of envelope and temporal fine structure in listeners with normal hearing and sensorineural hearing loss. Ear Hear 2016; 37:e91–e103.
27Krishnan A. Human frequency-following responses: representation of steady-state synthetic vowels. Hear Res 2002; 166:192–201.
28Aiken SJ, Picton TW. Envelope and spectral frequency-following responses to vowel sounds. Hear Res 2008; 245:35–47.
29Banai K, Abrams D, Kraus N. Sensory-based learning disability: insights from brainstem processing of speech sounds. Int J Audiol 2007; 46:524–532.
30Anderson S, Kraus N. Sensory-cognitive interaction in the neural encoding of speech in noise: a review. J Am Acad Audiol 2010; 21:575–585.
31Johnson KL, Nicol TG, Kraus N. Brain stem response to speech: a biological marker of auditory processing. Ear Hear 2005; 26:424–434.
32Clark GM. Hearing due to electrical stimulation of the auditory system. Med J Aust 1969; 1:1346–1348.
33Clark GM. Middle ear and neural mechanisms in hearing and the management of deafness [thesis]. Sydney, New South Wales: University of Sydney; 1970.
34Clark GM. Responses of cells in the superior olivary complex of the cat to electrical stimulation of the auditory nerve. Exp Neurol 1969; 24:124–136.
35Paolini A, Clark GM. The effect of pulsatile intracochlear electrical stimulation on intracellularly recorded cochlear nucleus neurons. In: Clark GM editor. Cochlear Implants: XVI World Congress of Otorhinolaryngology Head and Neck Surgery. Bologna, Italy: Monduzzi Editore; 1997:119–124.
36Siebert WM. Frequency discrimination in the auditory system: place or periodicity mechanisms? Proc IEEE Inst Electr Electron Eng 1970; 58:723–730.
37Burkitt AN, Clark GM. Synchronization of the neural response to noisy periodic synaptic input. Neural Comput 2001; 13:2639–2672.
38Schatzer R, Vermeire K, Visser D, Krenmayr A, Kals M, Voormolen M et al. Electric-acoustic pitch comparisons in single-sided-deaf cochlear implant users: frequency-place functions and rate pitch. Hear Res 2014; 309:26–35.
39Kikkawa YS, Nakagawa T, Ying L, Tabata Y, Tsubouchi H, Ido A et al. Growth factor-eluting cochlear implant electrode: impact on residual auditory function, insertional trauma, and fibrosis. J Transl Med 2014; 12:280.
40Sly DJ, Hampson AJ, Minter RL, Heffer LF, Li J, Millard RE et al. Brain-derived neurotrophic factor modulates auditory function in the hearing cochlea. J Assoc Res Otolaryngol 2013; 13:1–16.
41Gillespie LN, Zanin MP, Shepherd RK. Cell-based neurotrophin treatment supports long-term auditory neuron survival in the deaf guinea pig. J Control Release 2015; 198:26–34.
42Fransson A, Jarlebark LE, Ulfendahl M. In vivo infusion of UTP and uridine to the deafened guinea pig inner ear: effects on response thresholds and neural survival. J Neurosci Res 2009; 87:1712–1717.
43Landry TG, Wise AK, Fallon JB, Shepherd RK. Spiral ganglion neuron survival and function in the deafened cochlea following chronic neurotrophic treatment. Hear Res 2011; 282:303–313.
44Firszt JB, Chambers RD, Kraus N, Reeder RM. Neurophysiology of cochlear implant users I: effects of stimulus current level and electrode site on the electrical ABR, MLR, and N1-P2 response. Ear Hear 2002; 23:502–515.
45Gyo K, Yanagihara N. Electrically and acoustically evoked brain stem responses in guinea pig. Acta Otolaryngol 1980; 90:25–31.
46Starr A, Brackmann DE. Brain stem potentials evoked by electrical stimulation of the cochlea in human subjects. Ann Otol Rhinol Laryngol 1979; 88:550–556.
47Guiraud J, Gallego S, Arnold L, Boyle P, Truy E, Collet L. Effects of auditory pathway anatomy and deafness characteristics? (1): on electrically evoked auditory brainstem responses. Hear Res 2007; 223:48–60.
48Lundin K, Stillesjo F, Rask-Andersen H. Prognostic value of electrically evoked auditory brainstem responses in cochlear implantation. Cochlear Implants Int 2015; 16:254–261.
49Riss D, Hamzavi JS, Blineder M, Honeder C, Ehrenreich I, Kaider A et al. FS4, FS4-p, and FSP: a 4-month crossover study of 3 fine structure sound-coding strategies. Ear Hear 2014; 35:e272–e281.
50Muller J, Brill S, Hagen R, Moeltner A, Brockmeier SJ, Stark T et al. Clinical trial results with the MED-EL fine structure processing coding strategy in experienced cochlear implant users. J Otorhinolaryngol Relat Spec 2012; 74:185–198.
51Wilson BS, Finley CC, Lawson DT, Wolford RD, Eddington DK, Rabinowitz WM. Better speech recognition with cochlear implants. Nature 1991; 352:236–238.
52Brill S, Muller J, Hagen R, Moltner A, Brockmeier SJ, Stark T et al. Site of cochlear stimulation and its effect on electrically evoked compound action potentials using the MED-EL standard electrode array. Biomed Eng Online 2009; 8:40.