Corpus search results (sorted by the word one position after the keyword)

Clicking a serial number displays the corresponding PubMed page.
1 -talker masker for both natural and monotone speech.
2 y ultimately reflect a special type of overt speech.
3 ption by people with physical limitations of speech.
4 stened to several hours of natural narrative speech.
5 y between produced gestures and co-occurring speech.
6 d (in tone languages) lexical information in speech.
7 l communication resembles that of gesture in speech.
8 p gradually at further angles, more than for speech.
9 each an articulatory target for intelligible speech.
10 n for the recognition of words in continuous speech.
11  aerodigestive theory and the development of speech.
12 ct perception for either form of incongruent speech.
13 ated communication system, as do gesture and speech.
14 xperiment while subjects listened to natural speech.
15 me informational versus energetic masking of speech.
16 rbitrary combinations of auditory and visual speech.
17 al, articulatory, and semantic properties of speech.
18 responses to phoneme instances in continuous speech.
19 med the low levels observed for intelligible speech.
20 erarchy of language structures in continuous speech.
21 places whether presented as images, text, or speech [9, 10].
22         These results demonstrate that inner speech - a purely mental action - is associated with an
23 ll have intellectual disability with delayed speech, a history of febrile and/or non-febrile seizures
24 llary expansion techniques to develop normal speech, achieve functional occlusion for nutrition intak
25 nt 1 tested feedforward control by examining speech adaptation across trials in response to a consist
26 uggest that sign should not be compared with speech alone but should be compared with speech-plus-ges
27 ld be compared with speech-plus-gesture, not speech alone" (sect.
28 ronal oscillations in processing accelerated speech also relates to their scale-free amplitude modula
29 temporal modulations in a range relevant for speech analysis ( approximately 2-4 Hz) were reconstruct
30 ts to standard approaches that use segmented speech and block designs, which report more laterality i
31 needed to account for differences between co-speech and co-sign gesture (e.g., different degrees of o
32 more integrative view embraces not only sign/speech and co-sign/speech gesture, but also indicative g
33       Listeners extract pitch movements from speech and evaluate the shape of intonation contours ind
34  researchers must differentiate between sign/speech and gesture.
35  of a workshop examining possible futures of speech and hearing science out to 2030.
36                                   In humans, speech and language are species-specific signals of fund
37 ous or obtrusive to function directed toward speech and language at a given moment.
38 in both human patients and rodent models and speech and language deficits.
39       We have provided 3-D and 4D mapping of speech and language function based upon the results of d
40 y observed in human patients who demonstrate speech and language impairments.
41 and comprehensive dysphagia assessments by a speech and language therapist (SALT) were associated wit
42 pants to either 3 weeks or more of intensive speech and language therapy (>/=10 h per week) or 3 week
43 t guidelines for aphasia recommend intensive speech and language therapy for chronic (>/=6 months) ap
44         INTERPRETATION: 3 weeks of intensive speech and language therapy significantly enhanced verba
45 imed to examine whether 3 weeks of intensive speech and language therapy under routine clinical condi
46 h per week) or 3 weeks deferral of intensive speech and language therapy.
47 ly improved from baseline to after intensive speech and language treatment (mean difference 2.61 poin
48  signed, is dominant over laughter, and that speech and manual signing involve similar mechanisms.
49 rocognitive adverse events comprising slowed speech and mentation and word-finding difficulty).
50 ain gateway to communication with others via speech and music, and it also plays an important role in
51 category and between-category differences in speech and nonspeech contexts.
52 er the distinct pitch processing pattern for speech and nonspeech stimuli in autism was due to a spee
53 ults experience with respect to attending to speech and other salient acoustic signals.
54 ients gained improvements in facial contour, speech and swallow.
55 terature shows that power energizes thought, speech, and action and orients individuals toward salien
56 uctures, such as animal vocalizations, human speech, and music.
57 ant factor in the perceptual organization of speech, and reveal a widely distributed neural network s
58 prehensible and incomprehensible accelerated speech, and show that neural phase patterns in the theta
59 /left posterior superior-frontal regions and speech arrest.
60 reased reliance on sensory feedback to guide speech articulation in this population.
61 stently reflect the syllabic rate, even when speech becomes too fast to be intelligible.
62 l part of the feedforward control system for speech but is not essential for online, feedback control
63  maintaining accurate feedforward control of speech, but relatively uninvolved in feedback control.SI
64 ectroencephalography responses to continuous speech by obtaining the time-locked responses to phoneme
65                  Additionally, both sign and speech can employ modifying components that convey iconi
66                      Noise-induced errors in speech communication, for example, make it difficult for
67 activity at rates directly relevant to human speech communication.
68 s reveal persistent movements between stable speech communities facilitated by kinship rules.
69 urage sustained directional movement between speech communities, then languages should be channeled a
70 esidence rule turned social communities into speech communities.
71 ral processing of the formant frequencies of speech, compared to non-native nonmusicians, suggesting
ith bvFTD laughed less relative to their own speech compared with healthy controls.
73               Phonological training improved speech comprehension abilities and was particularly effe
74 dressed the dynamics of auditory decoding in speech comprehension by challenging syllable tracking an
75 ons in 20 patients with chronic aphasia with speech comprehension impairment following left hemispher
76 tivity along the ventral pathway facilitates speech comprehension in multisensory environments.
77 ork for a comprehensive bottom-up account of speech comprehension in the human brain.SIGNIFICANCE STA
78 sistent with previous studies, we found that speech comprehension involves hierarchical representatio
79                                We found that speech comprehension is related to the scale-free dynami
80 d evidence for the efficacy or mechanisms of speech comprehension rehabilitation.
81                                              Speech comprehension requires that the brain extract sem
82              The primary outcome measure was speech comprehension score on the comprehensive aphasia
83 CANCE STATEMENT We know that, during natural speech comprehension, a broad network of perisylvian cor
84 udiovisual (AV) integration is essential for speech comprehension, especially in adverse listening si
85                                        Human speech comprehension, poorly understood as a neurobiolog
86 ed in a small but significant improvement in speech comprehension, whereas donepezil had a negative e
87 ve role of inferior frontal gyrus in natural speech comprehension.
88 but it had an unpredicted negative effect on speech comprehension.
89 er in the high informational masking natural speech condition, where the musician advantage was appro
90 we studied Bengalese finch song, which, like speech, consists of variable sequences of "syllables." W
91 ng us to communicate using a rich variety of speech cues [1, 2].
92 ng that automatic encoding of these relevant speech cues are sensitive to language experience.
93 rtant tasks, integrating auditory and visual speech cues to allow us to communicate with others.
94 cesses triggered by asynchronous audiovisual speech cues.
95 hension by challenging syllable tracking and speech decoding using comprehensible and incomprehensibl
96 ental delay/intellectual disability (10/10), speech delay (10/10), postnatal microcephaly (7/9), and
97 on of auditory cortex by visual and auditory speech developed in synchrony after implantation.
98 lay, intellectual disability, and expressive speech disorder and carry de novo variants in EBF3.
99 tational changes in learning, attention, and speech disorders.SIGNIFICANCE STATEMENT We characterized
100 the musicians were better able to understand speech either in noise or in a two-talker competing spee
101          However, if the auditory and visual speech emanate from different talkers, integration decre
102                     During high acoustic SNR speech encoding by temporally entrained brain activity w
103   Across both groups, intelligible sine-wave speech engaged a typical left-lateralized speech process
104        Moreover, the strength of LRTC of the speech envelope decreased at the maximal rate, suggestin
105 g severe intellectual disability with absent speech, epilepsy, and hypotonia was observed in all affe
106 a dynamic neural transformation of low-level speech features as they propagate along the auditory pat
107              Our results show that low-level speech features propagate throughout the perisylvian cor
108 e used models based on a hierarchical set of speech features to predict BOLD responses of individual
109 uning of the perisylvian cortex to low-level speech features.
110  auditory cortex disrupts the segregation of speech from background noise, leading to deficits in spe
111 tivation of auditory brain regions by visual speech from before to after implantation and its relatio
112 tivation of auditory brain regions by visual speech from before to after implantation is associated w
113 f modulated sounds disrupt the separation of speech from modulated background noise in auditory corte
114 ropagation of low-level acoustic features of speech from posterior superior temporal gyrus toward ant
115  brain that were central to the emergence of speech functions in humans.
116 ew embraces not only sign/speech and co-sign/speech gesture, but also indicative gestures irrespectiv
117  variable, and similar to spoken language co-speech gesture.
118  how the spontaneous gestures that accompany speech have been studied.
119 tion is deciding whether auditory and visual speech have the same source, a process known as causal i
120 o severe intellectual disability with absent speech, hypotonia, brachycephaly, congenital heart defec
121          In my writings, classes, and public speeches, I've tried to convey one important take-home m
122 n left posteromedial auditory cortex predict speech identification in modulated background noise.
123 en magnified envelope coding and deficits in speech identification in modulated noise has been absent
124  (GDD), intellectual disability (ID), severe speech impairment and gait abnormalities.
125 y, epilepsy, developmental delay, hypotonia, speech impairments, and minor dysmorphic features.
126                                Understanding speech in background noise that fluctuates in intensity
127        In utero experience, such as maternal speech in humans, can shape later perception, although t
128 s illustrate the importance of using natural speech in neurolinguistic research.
129 st time the cortical processing of ambiguous speech in people without psychosis who regularly hear vo
130                        While the encoding of speech in the auditory cortex is modulated by selective
131 l frequency (F0) to better understand target speech in the presence of interfering talkers.
132  the ability of SNHL listeners to understand speech in the presence of modulated background noise.
133 hearers reported recognizing the presence of speech in the stimuli before controls, and before being
134 we show that the natural statistics of human speech, in which voices co-occur with mouth movements, a
135 ssociation between cognitive performance and speech-in-noise (SiN) perception examine different aspec
136 ted that musicians have an advantage in some speech-in-noise paradigms, but not all.
137 s including filtered phoneme recognition and speech-in-noise recognition.
138 musicians outperform nonmusicians on a given speech-in-noise task may well depend on the type of nois
139                          We demonstrate that speech-induced phasic dopamine release into the dorsal s
140                                  Audiovisual speech integration combines information from auditory sp
141 , the periodic cues of TFS are essential for speech intelligibility and are encoded in auditory neuro
142                                     Gains in speech intelligibility could be predicted from gameplay
143  Although the VGHA has been shown to enhance speech intelligibility for fixed-location, frontal targe
144                                 As expected, speech intelligibility improved with increasing F0 diffe
145             Seeing a speaker's face enhances speech intelligibility in adverse environments.
146 esis that musical training leads to improved speech intelligibility in complex speech or noise backgr
147                  Our study demonstrated that speech intelligibility in humans relied on the periodic
148                                      Whereas speech intelligibility was unchanged after WM training,
149 h is limited by the brain's ability to parse speech into syllabic units using delta/theta oscillation
150                                Comprehending speech involves the rapid and optimally efficient mappin
151 it that perceptual resilience to accelerated speech is limited by the brain's ability to parse speech
152 es most of the information conveyed by human speech, is not principally determined by basilar membran
153 ormalized ratio measured, HbA1c measurement, speech language pathology consultation, anticoagulation
154 either in noise or in a two-talker competing speech masker.
155            These findings suggest that inner speech may ultimately reflect a special type of overt sp
156  These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex
157 d in feedback control.SIGNIFICANCE STATEMENT Speech motor control is a complex activity that is thoug
158  cerebellum has been shown to be part of the speech motor control network, its functional contributio
159 n hypothesized to form a crucial part of the speech motor control network.
160 ss both anticipatory and reactive aspects of speech motor control, comparing the performance of patie
161 opamine release into the dorsal striatum and speech motor cortex exerts direct modulation of neuronal
162 entary examines this claim in the context of speech motor learning and biomechanics, proposing that s
163  articulatory rehearsal system that controls speech motor output.
164 ould provide a basis for both swallowing and speech movements, and provides biomechanical simulation
165 tomical and physiological differences in the speech neural networks of adults who stutter.
166 brain synchrony was unrelated to episodes of speech/no-speech or general content of conversation.
167 ces in the MMRs to categorical perception of speech/nonspeech stimuli or lack thereof, neural oscilla
168 btained while human participants listened to speech of varying acoustic SNR and visual context.
169  the relationship between gesture, sign, and speech offers a valuable tool for investigating how lang
170  different stages of language planning until speech onset.
171 (CI) processors, the temporal information in speech or environmental sounds is delivered through modu
172 hrony was unrelated to episodes of speech/no-speech or general content of conversation.
173 o improved speech intelligibility in complex speech or noise backgrounds.
174 hen we hear an auditory stream like music or speech or scan a texture with our fingertip, physical fe
175 f theta rhythm to follow syllabic rhythms in speech, or constrained by a more endogenous top-down mec
176 ulation (p=0.0622), cognition (p=0.0040) and speech (p=0.0423).
177 ng-recognized [24] variability in individual speech patterns, or idiolects.
178 ed model of causal inference in multisensory speech perception (CIMS) that predicts the perception of
179 spheres are equally and actively involved in speech perception and interpretation.
180 aring loss can produce prolonged deficits in speech perception and temporal processing.
181 her electrophysiological, psychophysical, or speech perception effects.
182 rom background noise, leading to deficits in speech perception in modulated background noise.SIGNIFIC
183         Therefore, a key step in audiovisual speech perception is deciding whether auditory and visua
184       Recent psychophysics data suggest that speech perception is not limited by the capacity of the
185 me measures were audibility, scores from the speech perception tests, and scores from a questionnaire
186 imately .3 between cognitive performance and speech perception, although some variability in associat
187 S), a brain region known to be important for speech perception, is complex, with some regions respond
188 ate or frequency, plays an important role in speech perception, music perception, and listening in co
189 he hierarchical generative models underlying speech perception.
190 al sulcus (pSTS) is known to be critical for speech perception.
191 ural processes that are specific to auditory speech perception.
192 ignals: EEQ1, which operated on the wideband speech plus noise signal, and EEQ4, which operated indep
193                  One key question is whether speech-plus-gesture and sign-with-iconicity really displ
194  article, that "sign should be compared with speech-plus-gesture, not speech alone" (sect.
195 ith speech alone but should be compared with speech-plus-gesture.
196 ock designs, which report more laterality in speech processing and associated semantic processing to
197 ks harness distinct capabilities to activate speech processing areas.
198                   We propose that successful speech processing imposes constraints on the self-organi
199 rebral hemispheres were actively involved in speech processing in large and equal amounts.
200 he benefit of our subjects' hearing aids for speech processing in noisy listening conditions.
201 ve speech engaged a typical left-lateralized speech processing network.
202 ively by the involvement of visual cortex in speech processing, and negatively by the cross-modal rec
203 eported comparable expectations for improved speech processing, thereby controlling for placebo effec
204 mantic elements occurs early in the cortical speech-processing stream.
205 hat each instance of a phoneme in continuous speech produces multiple distinguishable neural response
206 hat each instance of a phoneme in continuous speech produces several observable neural responses at d
207         We hypothesized that the recovery of speech production after left hemisphere stroke not only
208      In language, semantic prediction speeds speech production and comprehension.
209 ic lateralization of neural processes during speech production has been known since the times of Broc
210  neural activation during natural, connected speech production in children who stutter demonstrates t
211 een few neurophysiological investigations of speech production in children who stutter.
212  promise for the use of fNIRS during natural speech production in future research with typical and at
213 onses over neural regions integral to fluent speech production including inferior frontal gyrus, prem
214 tering, atypical functional organization for speech production is present and suggests promise for th
215 nd drives left-hemispheric lateralization of speech production network.
216 ted to language production (sentential overt speech production-Speech task) and activation related to
217  property in high-gamma fluctuations mirrors speech rate.
218 the greater the comprehension of the fastest speech rate.
219                     On average, ETS improved speech reception thresholds by 2.2 dB over cochlear impl
220                                    Automatic Speech Recognition (ASR) systems with near-human levels
221 ocial cognition and communication (affective speech recognition (ASR), reading the mind in the eyes,
222  the TFS in natural speech sentences on both speech recognition and neural coding.
223                                              Speech recognition in a single-talker masker differed on
224        The masking release (MR; i.e., better speech recognition in fluctuating compared with continuo
225 onsiderable overlap in the audiograms and in speech recognition performance in the unimplanted ear be
226 This research aims to bridge the gap between speech recognition processes in humans and machines, usi
227 re has direct translational implications for speech recognition technology.
228 o and audio processing, computer vision, and speech recognition, their applications to three-dimensio
229 ren's multitasking abilities during degraded speech recognition.
230 ask costs while multitasking during degraded speech recognition.
231 opulations, we collected genetic markers and speech recordings in the admixed creole-speaking populat
232  -0.90, p < 0.05 corrected), suggesting that speech recovery is related to structural plasticity of l
233 selective neurodegeneration of human frontal speech regions results in delayed reconciliation of pred
234                We found distinctly different speech-related hemodynamic responses in the group of chi
235 geting temporal processing may improve later speech-related outcomes.
236 ying network mechanisms by quantifying local speech representations and directed connectivity in MEG
237 e of auditory-frontal interactions in visual speech representations and suggest that functional conne
238 ally contribute to the emergence of "coarse" speech representations in inferior frontal gyrus typical
239           In 22,627 audio samples of natural speech sampled from the daily interactions of 143 health
240 or non-musicians, it was not correlated with speech scores.
241  the effects of modifying the TFS in natural speech sentences on both speech recognition and neural c
242                           The TFS of natural speech sentences was modified by distorting the phase an
243  continuous and various types of fluctuating speech-shaped Gaussian noise including those with both r
244          However, it remains unclear how the speech signal is transformed and represented in the brai
245 ing block designs and segmented or synthetic speech.SIGNIFICANCE STATEMENT To investigate the process
246 g scan while passively listening to degraded speech ('sine-wave' speech), that was either potentially
247 rmed by the human brain to transform natural speech sound into meaningful language, we used models ba
248 pants judged whether a given consonant-vowel speech sound was large or small, round or angular, using
249 gs that exist between phonetic properties of speech sounds and their meaning.
250 lling new evidence for dynamic processing of speech sounds in the auditory pathway.
251 e experience affect the neural processing of speech sounds throughout the auditory system.
252 in human subjects as they manipulated stored speech sounds.
253  right-frontal regions during recognition of speech sounds.
254 icted by their relative similarity to voiced speech sounds.
255 al network supporting perceptual grouping of speech sounds.
256 omplex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical s
257 omplex acoustic scene consisting of multiple speech sources is represented in separate hierarchical s
258                                           In speech, speakers adjust their articulatory movement magn
259 and nonspeech stimuli in autism was due to a speech-specific deficit in categorical perception of lex
260     Increasingly coarse temporal features of speech spreading from posterior superior temporal cortex
261 ions in the sensory tracking of the attended speech stream and frontoparietal activity during selecti
262 al areas, by contrast, represent an attended speech stream separately from, and with significantly hi
263 y sensory areas is dominated by the attended speech stream, whereas competing input is suppressed.
264 ory representation of a selectively attended speech stream.
265              Here, both attended and ignored speech streams are represented with almost equal fidelit
266 ning task, in which one out of two competing speech streams had to be attended selectively.
267 ory scene, with both attended and unattended speech streams represented with almost equal fidelity.
268 gnificantly higher fidelity than, unattended speech streams.
269 sm with the addition of progressive balance, speech, swallowing, eye movement and cognitive impairmen
270  auditory speech (talker's voice) and visual speech (talker's mouth movements) to improve perceptual
271 tegration combines information from auditory speech (talker's voice) and visual speech (talker's mout
272 ar associations were shown for the different speech target and masker types.
273 oduction (sentential overt speech production-Speech task) and activation related to cognitive process
274 and sensitivity to interaural phase; and (c) speech tests including filtered phoneme recognition and
275 ity in humans relied on the periodic cues of speech TFS in both quiet and noisy listening conditions.
276 ase locking patterns to the periodic cues of speech TFS that disappear when reconstructed sounds do n
277 TD would laugh less in response to their own speech than other dementia groups or controls, while tho
278 for left-hemispheric lateralization of human speech that is due to left-lateralized dopaminergic modu
279 ntence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded
280 ly listening to degraded speech ('sine-wave' speech), that was either potentially intelligible or uni
281 y employed a novel design to show that inner speech - the silent production of words in one's mind -
282 ants with chronic aphasia received intensive speech therapy for 3 weeks, with standardized naming tes
283 sis of machine-extracted regularities in the speech to lexicon mapping process.
284 uditory evoked potential (CAEP) responses to speech tokens was introduced into the audiology manageme
285 omputerized CL auditory training can enhance speech understanding in levels of background noise that
286 onal deficits that contribute to the loss of speech understanding in the elderly.
287 c dysfunction at the level of MGB may affect speech understanding negatively in the elderly populatio
288 after implantation is associated with better speech understanding with a CI.
289 unt of how auditory processing of continuous speech unfolds in the human brain.
290 s compared with original clinician or family speech using the qualitative research methods of directe
291 tion (AV) and visual enhancement of auditory speech (VA).
292 nes capable of recognizing patterns (images, speech, video) and interacting with the external world i
293 i that are most relevant for behavior (i.e., speech, voice).
294                                  Interpreted speech was compared with original clinician or family sp
295 and sensorimotor processes, selectively when speech was potentially intelligible.
296 ccipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory c
297         We recorded 24 mothers' naturalistic speech while they interacted with their infants and with
298 onemes), and also identification of degraded speech, while manipulating audiovisual asynchrony.
299     Patients' ability to understand auditory speech with their CI was also measured following 6 mo of
300 ific hypotheses about the representations of speech without using block designs and segmented or synt

Technical terms (or usages) not yet included in WebLSD can be submitted via "新規対訳" (New Translation Pair).
 