ELIZA cgi-bash version rev. 1.90
- Medical English LInking keywords finder for the PubMed Zipped Archive (ELIZA) -

Returning KWIC search results for "data" (out of >500 occurrences).
617,251 occurrences (No. 12 in the rank) over 5 years in PubMed. [no cache] 500 found
146) The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data.
--- ABSTRACT ---
PMID:23978654 DOI:10.1093/cercor/bht228
2015 Cerebral cortex (New York, N.Y. : 1991)
* Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.
- Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration.
--- ABSTRACT END ---
[right kwic]
[frequency of next (right) word to "data"]
(1)70 from        (2)55 were        (3)32 *null*      (4)26 on
(5)19 and         (6)18 for         (7)17 suggest     (8)13 are
(9)11 set         (10)11 to         (11)9 of          (12)7 collected
(13)7 indicate    (14)7 revealed    (15)7 support     (16)7 was
(17)6 analysis    (18)6 obtained    (19)5 collection  (20)5 sets
(21)5 showed      (22)5 we          (23)4 demonstrate (24)4 is
(25)4 provide     (26)4 regarding   (27)3 analyses    (28)3 can
(29)3 concerning  (30)3 in          (31)3 indicated   (32)3 may
(33)3 the         (34)3 which       (35)3 with        (36)3 would
(37)2 about       (38)2 also        (39)2 as          (40)2 at
(41)2 by          (42)2 clearly     (43)2 clustering  (44)2 have
(45)2 here        (46)2 highlight   (47)2 mining      (48)2 show
(49)2 shows
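The frequency table above counts the word immediately to the right of the keyword "data" in each matched sentence, with *null* marking sentence-final occurrences. A minimal sketch of that counting step follows; the tokenizer, the tiny example corpus, and the `right_word_frequency` helper are illustrative assumptions, not the tool's actual implementation.

```python
import re
from collections import Counter

def right_word_frequency(sentences, keyword="data"):
    """Count the word immediately following `keyword` in each sentence.

    An occurrence of the keyword at sentence end is counted as '*null*',
    mirroring the table's sentinel for "no following word".
    """
    counts = Counter()
    for sentence in sentences:
        # Crude lowercase word tokenizer -- an assumption for this sketch.
        tokens = re.findall(r"[a-z]+", sentence.lower())
        for i, tok in enumerate(tokens):
            if tok == keyword:
                nxt = tokens[i + 1] if i + 1 < len(tokens) else "*null*"
                counts[nxt] += 1
    return counts

# Toy corpus standing in for the retrieved PubMed sentences.
sentences = [
    "The data were collected from two sites.",
    "These results suggest that the data support the hypothesis.",
    "We then analyzed the data.",
]
print(right_word_frequency(sentences))
```

Ranking the resulting counter with `Counter.most_common()` yields the ordered (rank)count word listing shown in the table.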


--- WordNet output for data --- => data, materials
Overview of noun data
The noun data has 1 sense (first 1 from tagged texts)
1. (76) data, information -- (a collection of facts from which conclusions may be drawn; "statistical data")
Overview of noun datum
The noun datum has 1 sense (first 1 from tagged texts)
1. (5) datum, data point -- (an item of factual information derived from measurement or research)
--- WordNet end ---