Corpus search results (sorted by the word one position after the node)

Click a line's serial number to open the corresponding PubMed page.
1 e descriptions of reception of the resultant sound.
2 these spaces due to anthropogenic underwater sound.
3  detect a mistuned harmonic within a complex sound.
4 lea are mechanosensors for the perception of sound.
5 typical correspondences between spelling and sound.
6  signal timing changes to that of the nearby sound.
7 alisation processing across infant and adult sounds.
8 ymptom of FXS is extreme sensitivity to loud sounds.
9 tuations in intensity in amplitude-modulated sounds.
10 e, knocking at a door results in predictable sounds.
11 n subjects as they manipulated stored speech sounds.
12 daptive filter for cancelling self-generated sounds.
13 t neurons can also encode the probability of sounds.
14 frontal regions during recognition of speech sounds.
15 lus in a sequence with similar or dissimilar sounds.
16 se zinc to influence how the brain processes sounds.
17 ion, the auditory analysis of self-generated sounds.
18  preference for vocal compared with nonvocal sounds.
19  be detected and converted into controllable sounds.
20 ork supporting perceptual grouping of speech sounds.
21 cending pathway in the perception of complex sounds.
22 mallet, and then again listening to recorded sounds.
23                               These findings sound a strong note of caution on the employment of live
24 e inner ear (IE) to convey information about sound, acceleration, and orientation to the brain.
25  near the lateral border of V1, responded to sound alone.
26 equired rats to use a joystick to manipulate sound along a continuous frequency axis.
27 ant is suggested as a green and economically sound alternative to physico-chemical treatment.
28 mplexity, amplitude modulation, and enhanced sound amplitude.
29              One ion channel is activated by sound and is responsible for sensory transduction.
30  perisylvian cortical regions is involved in sound and language processing.
31 al formal features and the interplay between sound and meaning.
32 ntacts, type II afferents are insensitive to sound and only weakly depolarized by glutamate release f
33 s selective immediately after the onset of a sound and then become highly selective in the following
34 eview of the literature data on the speed of sound and ultrasound absorption in pure ionic liquids (I
35 ees, including 122 that reliably produced AG sounds and 118 that did not.
36 ying the perception of odors, taste, vision, sound, and gravity.
37  predicts canal responses to angular motion, sound, and mechanical stimulation.
38  to avoid presentation of uncomfortably loud sounds, and (c) to ensure that subjects have control ove
39 00 ms) between actions and action-associated sounds, and we recorded magnetoencephalography (MEG) dat
40 uminance for vision, pitch and intensity for sound-and assemble a stimulus set that systematically va
41                      We show that particular sounds are indicators of nonlinearity and can be used to
42 ferent orofacial movement patterns and these sounds are used in communicatively relevant contexts.
43 he songs where both songs contained "similar sounds arranged in a similar pattern." Songs appear to b
44                                   Concurrent sound associated with very bright meteors manifests as p
45 , it is not clear whether the learned action-sound association modifies subsequent perception.
46  -12 brightness meteors can generate audible sound at 25 dB SPL.
47                                 The speed of sound at 3 MHz in the neutrally buoyant suspensions is m
48 show that auditory cortex neurons respond to sound at very young ages, even before the opening of the
49                The relative arrival times of sounds at both ears constitute an important cue for loca
50 e the relative arrival time of low-frequency sounds at both ears.
51  TMS increased the disadvantage for spelling-sound atypical words more for the individuals with stron
52 ditory system can identify the category of a sound based on the global features of the acoustic conte
53 bnormal salience is attributed to particular sounds based on the abnormal activation and functional c
54 s: a phonological input buffer that captures sound-based information and an articulatory rehearsal sy
55 uring sleep biased AC activity patterns, and sound-biased AC patterns predicted subsequent hippocampa
56          Results are specific to Long Island Sound, but the approach is transferable to other urban e
57 and suppressed by alternating (asynchronous) sounds, but only when the animals engaged in task perfor
58 t female, five male), who triggered recorded sounds by a key press.
59 cant individual differences in the use of AG sounds by chimpanzees and, here, we examined whether cha
60 e shown that some can learn to produce novel sounds by configuring different orofacial movement patte
61 nding mystery about generation of concurrent sounds by fireballs.
62 ties such as the edge stiffness and speed of sound can change by orders of magnitude.
63                                        These sounds cannot be attributed to direct acoustic propagati
64 4 days, beginning 2 days before a calibrated sound challenge (4 h of pre-recorded music delivered by
65 All participants who received the calibrated sound challenge and at least one dose of study drug were
66 ntinued from the study before the calibrated sound challenge because they no longer met the inclusion
67 t 4 kHz measured 15 min after the calibrated sound challenge by pure tone audiometry; a reduction of
68  listeners are more sensitive to approaching sounds compared with receding sounds, reflecting an evol
69 ter in chimpanzees that reliably produced AG sounds compared with those who did not.
70  rate of vocal development, producing mature-sounding contact calls earlier than the other twin.
71                                      Natural sounds convey perceptually relevant information over mul
72                                        These sound diffusers are rigidly backed slotted panels, with
73 speech TFS that disappear when reconstructed sounds do not show periodic patterns anymore.
74 uld immediately modify interpretation of the sound during subsequent listening.
75                                   Delivering sounds during sleep biased AC activity patterns, and sou
76 esented with a cyclic pattern of three vowel sounds (/ee/-/ae/-/ee/).
77  showed that in misophonic subjects, trigger sounds elicit greatly exaggerated blood-oxygen-level-dep
78 ed by head movements or shape changes of the sound-emitting mouth or nose.
79 e (FFR) is a measure of the brain's periodic sound encoding.
80 e the mammalian phono-receptors, transducing sound energy into graded changes in membrane potentials,
81 ns of scaling relationships and a physically sound estimation of hydrodynamic characteristics.
82               Together our results show that sound-evoked activity and topographic organization of th
83 m single SGNs showed reduced spontaneous and sound-evoked firing rates.
84 he future thalamocortical input layer 4, and sound-evoked spike latencies were longer in layer 4 than
85 itory brainstem of cats, spatial patterns of sound-evoked Ve can resemble, strikingly, Ve generated b
86                                            A sound excitable drug (SED) that is non-cytotoxic to cell
87                                  Thus, early sound experience can activate and potentially sculpt sub
88 supported more accurate decoding of temporal sound features in the inferior colliculus and auditory c
89 e vocal organ can generate specific, complex sound features.
90  restoring firing rate codes for rudimentary sound features.
91 icture of cortical processes for analysis of sound features.
92 wever, there remains a lack of statistically sound frameworks to model the underlying transmission dy
93 erves are tuned to respond best to different sound frequencies because basilar membrane vibration is
94  formed discrete firing fields at particular sound frequencies.
95 red detection of changes in sound intensity, sound frequency or sound location.
96 pect to the nuclear tonotopic position (i.e. sound frequency selectivity).
97 n close agreement with the expected speed of sound from experiments.
98  of the brain while participants listened to sounds from artificial and natural environments.
99 ate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonanc
100  particularly challenging in audition, where sounds from various sources and localizations, degraded
101 with animals' impairments in detecting brief sound gaps, which is often considered a sign of tinnitus
102 aracteristics of auditory nerve responses to sound have been described extensively.
103 onal integration of generating and detecting sound in a single device.
104 ed to date, the significance of the speed of sound in ILs is regarded.
105 mportantly, a critical analysis of speeds of sound in ILs vs those in classical molecular solvents is
106 ure and pressure dependences on the speed of sound in ILs, as well as the impact of impurities in ILs
107 ility to generate, amplify, mix and modulate sound in one simple electronic device would open up a ne
108 ctive experience of a rhythmically modulated sound in real time, even when the perceptual experience
109 es our ability to study neural processing of sound in the MSO.
110 each syllable to the most spectrally similar sound in the target, regardless of its temporal position
111 straint of continuity for binding successive sounds in a probabilistic manner.
112 ogical correlates in the production of these sounds in chimpanzees.
113 of Vagus Nerve Stimulation (VNS) paired with sounds in chronic tinnitus patients.
114                                      Trigger sounds in misophonics were associated with abnormal func
115 ew evidence for dynamic processing of speech sounds in the auditory pathway.
116 ortant cue for localization of low-frequency sounds in the horizontal plane.
117 ocal sounds than a range of nonvocal control sounds, including scrambled voices, environmental noises
118                                 The speed of sound increased from 1514 to 1532 m/s over the first 24
119 sition, and the illumination patterns of the sound-indication devices allow us to discriminate multip
120 echnique that utilizes multiple, distributed sound-indication devices and a miniature LED backpack to
121               They do this by converting the sound-induced movement of their hair bundles present at
122 ical limitations and non-invasively measured sound-induced vibrations at four locations distributed o
123 vestigate the auditory brainstem where basic sound information is first processed.
124 elopment and, as such, are unable to process sound information.
125                           Hence, for natural sounds, inhibition at SBCs plays an even stronger role i
126 first the presence and then the content of a sound input.
127 trol group using a key press to generate the sounds instead of learning to play the musical instrumen
128  low-latency encoding of onset and offset of sound intensity in the cochlea's base and submillisecond
129  tasks that required detection of changes in sound intensity, sound frequency or sound location.
130 of looming bias without manipulating overall sound intensity.
131 s for changes in source distance, or only in sound intensity.
132 ever, these studies only manipulated overall sound intensity; therefore, it is unclear whether loomin
133 duced transparency based on a coherent light-sound interaction, with the coupling originating from th
134 lea such that the timing and/or intensity of sound is encoded with high precision.
135 elation between ions structure and speeds of sound is presented by highlighting existing correlation
136 integrated: cortical ILD tuning to broadband sounds is a composite of separate, frequency-specific, b
137 ause they would affect how information about sounds is conveyed to higher-order areas for further pro
138                          Hypersensitivity to sounds is one of the prevalent symptoms in individuals w
139 sion of auditory responses to self-generated sounds is well known, it is not clear whether the learne
140 he ability to perceive and memorize rhythmic sounds is widely shared among humans [6] but seems rare
141 ied to explain the "mystery" of Stradivari's sound, it is only recently that studies have addressed t
142 ormatics searches for enzyme candidates with sound kinetic measurements, evolutionary considerations
143                                              Sound knowledge of neoplasms affecting the sternum and t
144                       Imbalanced early-stage sound level processing could partially explain the audit
145 source position that is robust to changes in sound level.SIGNIFICANCE STATEMENT Sensory neurons' resp
146 protocol compared with those without maximum sound levels 81 dB (95% CI, 79-83) versus 77 dB (95% CI,
147                     The relationship between sound levels and densities was variable across the durat
148 rated to estimate abundance and biomass from sound levels at FSAs.
149                                              Sound levels ranged from 0.02 to 12,738 Pa(2), with larg
150            However, difficulties in relating sound levels to abundance have impeded the use of passiv
151                             Maximum and mean sound levels were 78 dB (SD, 9) and 62 dB (SD, 8), respe
152                                      Maximum sound levels were higher in ICUs with a sleep policy or
153  with 14% experiencing a 10-fold increase in sound levels.
154       MSO neurons perform a critical role in sound localization and binaural hearing.
155                The primary cues for binaural sound localization are comprised of interaural time and
156 l superior olive (MSO) play a unique role in sound localization because of their ability to compare t
157 n for input timing adjustment in a brainstem sound localization circuit.
158 nd abnormal coding of frequency and binaural sound localization cues.
159 e this method to probe the representation of sound localization in auditory neurons of chinchillas an
160                                              Sound localization is one of the sensory abilities disru
161 e auditory brainstem and participates in the sound localization process with fast and well-timed inhi
162 pacity for auditory spatial awareness (e.g., sound localization).
163                      One instance of this is sound localization, which improves with increasing bandw
164 naptic properties to the specific demands of sound localization.
165 ezoid body (MNTB) plays an important role in sound localization.
166 ice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1-9).
167          The processing of binaural cues for sound location has been studied extensively.
168 anges in sound intensity, sound frequency or sound location.
169 d good performance in distinguishing between sound maize and undesirable materials, with cross-valida
170 trast, P2 became larger when listening after sound making compared with the initial naive listening.
171                 The N1 was attenuated during sound making, while P2 responses were unchanged.
172 changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of S
173                          Ultrasonic speed of sound measurements are used to quantify the variation in
174 rovides a key-step towards realizing spatial sound modulators.
175 alities include the ability to project their sound more effectively in a concert hall-despite seeming
176 he inner ear that enable the transduction of sound, motion, and gravity into neuronal impulses.
177 onductance, and pupil area responses to loud sounds (multivariate p = .007) compared with trauma-expo
178 opose that P2 characterizes familiarity with sound objects, whereas beta-band oscillation signifies i
179 ests as popping, hissing, and faint rustling sounds occurring simultaneously with the arrival of ligh
180 w that newborns are capable of retaining the sound of specific words despite hearing other stimuli du
181 ppium and B. pitanga, are insensitive to the sound of their own calls.
182  while first passively listening to recorded sounds of a bell ringing, then actively striking the bel
183 ft hand while they were presented with brief sounds of rising, falling or constant pitches, and in th
184 ndividually tailored spectral cues to create sounds of similar intensity but different naturalness.
185 ucing color sensations was the name, not the sound, of the note; behavioral experiments corroborated
186 However, evidence on the impact of affective sounds on perception and attention is scant.
187 at related to the latency of the response to sound onset, which is found in left auditory cortex.
188 erences in the timing of initial response to sound onset.
189 ther perceptual forms of interaction such as sound or feel.
190                                Are sight and sound out of synch?
191 w nonlinear facilitation to harmonic complex sounds over inharmonic sounds, selectivity for particula
192 s in that responses were more consistent for sounds perceived as approaching than for sounds perceive
193 for sounds perceived as approaching than for sounds perceived as receding.
194 d and spontaneous activity are important for sound perception.
195 a indicate that the non-linear processing of sound performed by the guinea pig cochlea varies substan
196                Using these values we predict sound pitch to range from 350-800 Hz by VS modulation, c
197 n vitro and focus on two muscles controlling sound pitch.
198 re, research opportunities and barriers, and sound practices to guide providers, patients, and famili
199 at significant phase locking to the rhythmic sounds preceded participants' detection of them.
200 alcium sensor for exocytosis and encoding of sound preferentially over the neuronal calcium sensor sy
201                                 Increases in sound pressure level appeared to be largely driven by la
202  the device's performance and applicability, sound pressure level is characterized in both space and
203 ect operates by continuously integrating the sound pressure level of background noise through tempora
204 tial translation across samples are based on sound principles, but require users to choose between ac
205  of any behavioral effects in suprathreshold sound processing indicated that, even if cochlear synapt
206        We related the day-by-day recovery of sound processing to dynamic changes in the strength of i
207                   Misophonia is an affective sound-processing disorder characterized by the experienc
208                Songs are distinct, patterned sounds produced by a variety of animals including baleen
209 presentations by the manipulation of natural sounds produced when one's body impacts on surfaces have
210 asurements to quantify elastic properties of sound producing medial labia (ML).
211 e directly proportional to the size of their sound-producing organs.
212                                   Blue whale sound production has been thought to occur by Helmholtz
213    Here we show that models of measured fish sound production versus independently measured fish dens
214  central sulcus (CS) were associated with AG sound production.
215                   Zero index materials where sound propagates without phase variation, holds a great
216 ions are, in part, based on methodologically sound randomized controlled trials (RCTs), demonstrating
217 e different systems: a music-playing flag, a sound recording film and a flexible microphone for secur
218 to approaching sounds compared with receding sounds, reflecting an evolutionary pressure.
219 icit memory also characterized the impact of sound regularities in benefitting dyslexics' oral readin
220 of clinical development strategies to enable sound regulatory assessment, with a goal toward licensur
221           Pitch, the perceptual correlate of sound repetition rate or frequency, plays an important r
222 r sensory processing by dynamically changing sound representation and by controlling the pattern of s
223        Here, we show that modeling of neural sound representations in terms of frequency-specific spe
224 oduces an approach to embed models of neural sound representations in the analysis of fMRI response p
225  results suggest that learning to produce AG sounds resulted in region-specific cortical reorganizati
226 gnal assignment is fundamental for obtaining sound results when interpreting statistical data from me
227                                    Microwave sounding reveals weather features at pressures deeper th
228 e first example of this phenomenon involving sound rhythm.
229                    Despite the prevalence of sounds rich in harmonic structures in our everyday heari
230 tion of human exposure to chemicals in food, sound risk assessments, and more focused risk abatement
231 ntial preclinical evidence needed to build a sound scientific basis for increased medicinal use of CB
232                                            A sound scientific principle is that the body is constantl
233 n to harmonic complex sounds over inharmonic sounds, selectivity for particular harmonic structures b
234 ted behaviours such as locomotion, touch and sound sensation across different species including Caeno
235 itory stream segregation-the organization of sound sequences into perceptual streams reflecting diffe
236 impanzees that have learned to produce these sounds show significant differences in central sulcus (C
237 collection of simultaneous events, combining sound, sight, and tactile sensation.
238  map the seabed using intense, low-frequency sound signals that penetrate kilometers into the Earth's
239 ons in the apex for precise phase-locking to sound signals.
240                                         This sound-size mapping emerges without visual experience, an
241 ience can change the way we perceive sights, sounds, smells, tastes, and touch.
242  than in the rate of tissue vibration in the sound source ("pitch").
243 f discriminating the individual identity and sound source distance in conspecific communication calls
244 nd source location in the face of changes in sound source level by neurons of the auditory midbrain.
245 thod to the problem of the representation of sound source location in the face of changes in sound so
246 properties contribute to a representation of sound source position that is robust to changes in sound
247                          Instead of the main sound source, the tracheal membranes constitute a morpho
248 ifferences (ITD and ILD)-that correlate with sound-source locations.
249 ly represent the largest described number of sound sources for a vocal organ.
250 into perceptual streams reflecting different sound sources in the environment.
251 ing, as well as the acoustical properties of sound sources in the natural environment, thereby provid
252 on devices allow us to discriminate multiple sound sources including loudspeakers broadcasting calls
253 can be captured by an array of tongue-driven sound sources located along the side of the mouth, and t
254     Here we show tracheophones possess three sound sources, two oscine-like labial pairs and the uniq
255 tory neurons involved in the localization of sound sources.
256              It also includes shaping of the sound spectrum by a dc current and modulating its amplit
257 allenging for airborne acoustics because the sound speed (inversely proportional to the refractive in
258 we present longitudinal (c L) and transverse sound speeds (c T) versus pressure from higher than room
259 ction procedures in both stages are based on sound statistical support.
260                       In this investigation, sound stimuli were chosen to allow observation of fixed
261 s may be active during consecutive cycles of sound stimuli, somatic EPSP normalization renders spike
262  not known whether this "inaudible" rhythmic sound stream also induces entrainment.
263 es additional high-quality, methodologically sound studies to clearly elucidate the role of palliativ
264                                      Natural sounds such as wind or rain, are characterized by the st
265  (anger and anxiety) in response to everyday sounds, such as those generated by other people eating,
266 e frequency-modulated sweeps (Posit Science 'Sound Sweeps' exercise).
267 d psychological mechanisms that give rise to sound symbolism are not, as yet, altogether clear.
268                       These findings portray sound symbolism as a process that is not based merely on
269  gyrus and sulcus that respond more to vocal sounds than a range of nonvocal control sounds, includin
270 or delays of low- relative to high-frequency sounds than vice versa.
271 n (PS) model of auditory stream segregation, sounds that activate the same or separate neural populat
272   Listeners were presented with sequences of sounds that varied in either fundamental frequency (elic
273 ficits, highlighting the potential for using sound therapy soon after cochlear damage to prevent the
274                                 It generates sound thermoacoustically by Joule heating in graphene.
275 h sufficient intensity can create concurrent sounds through radiative heating of common dielectric ma
276 ience affect the neural processing of speech sounds throughout the auditory system.
277                      The middle ear conducts sound to the cochlea for hearing.
278 tory cortex switches its input modality from sound to vision but preserves its task-specific activati
279  analyze echoes of dedicated, self-generated sounds to assess space around them.
280 rts temporal representations of time-varying sounds to firing rate-based representations.
281 rophysiology has mapped acoustic features of sounds to the response properties of neurons; however, g
282 ina and integrates it with information about sound, touch, and state of the animal that is relayed fr
283 ensory hypersensitivity (aversion to certain sounds, touch, etc., or increased ability to make sensor
284 heir skeletal derivatives in jaw support and sound transduction.
285  complications may be accomplished through a sound understanding of the hemodynamic and physiological
286                                              Sound vibration (SV), a mechanical stimulus, can trigger
287 nly utilising the two parameters velocity of sound (VOS) and broadband ultrasound attenuation (BUA),
288  CIs and normal hearing with similar time-in-sound was explored in the present study.
289 udged whether a given consonant-vowel speech sound was large or small, round or angular, using a size
290                                      Ambient sound was measured for 1 minute using an application dow
291              These pseudomagnetic fields for sound waves are the analogue of what electrons experienc
292  the timing and intensity differences of the sound waves arriving at the two ears.
293 .The control and manipulation of propagating sound waves on a surface has applications in on-chip sig
294 owing effort in creating chiral transport of sound waves.
295 as remarkable for slightly asymmetric breath sounds, which appeared to be diminished on the right sid
296   Thus, the neural encoding of low-frequency sounds, which includes most of the information conveyed
297  M100 component over time for self-generated sounds, which indicates cortical adaptation to the intro
298 tual-room size with either active or passive sounds while measuring their brain activity with fMRI.
299 ons are largely unaffected by self-generated sounds while remaining sensitive to external acoustic st
300 tex may play an important role in processing sounds with harmonic structures, such as animal vocaliza
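The listing above is ordered KWIC-style: lines are sorted alphabetically by the first word to the right of the node word ("sound"/"sounds"), so excerpts where the node ends the sentence appear first. A minimal sketch of that sort in Python (the function name and sample excerpts are illustrative, not part of the corpus tool):

```python
import re

def kwic_sort(lines, node="sound"):
    """Sort concordance lines by the first token to the right of the
    node word (the '1 word after' sort). Lines where the node ends
    the excerpt sort first, as in the listing above."""
    def key(line):
        # Locate the node word, allowing inflected forms like "sounds".
        m = re.search(r"\b" + node + r"\w*\b(.*)", line, re.IGNORECASE)
        tail = m.group(1).strip() if m else ""
        # First token after the node; an empty tail sorts before words.
        return tail.split()[0].lower() if tail else ""
    return sorted(lines, key=key)

excerpts = [
    "speed of sound at 3 MHz",
    "the perception of complex sounds.",
    "responded to sound alone.",
]
print(kwic_sort(excerpts))
```

Running this places the sentence-final "sounds." excerpt first, then "alone.", then "at", mirroring the order of the concordance lines above.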

Technical terms (or usages) not yet covered in WebLSD can be submitted via "新規対訳" (New Translation Pair).