Language influences perception and concept formation

A neurobiologically constrained model of semantic learning in the human brain was used to simulate the acquisition of concrete and abstract concepts, either with or without verbal labels.
Concept acquisition and semantic learning were simulated using Hebbian learning mechanisms. The network's category-learning performance is defined as the extent to which it successfully:
(i) grouped partly overlapping perceptual instances into a single (abstract or concrete) conceptual representation, while
(ii) still keeping the representations of distinct concepts apart (a simplified sketch of such a measure is given below).
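
As a rough illustration of these two criteria, the following sketch pairs a textbook Hebbian weight update with a generic within- versus between-concept similarity score. The function names, the cosine-similarity measure and the learning rate are illustrative assumptions and are not claimed to match the learning rule or evaluation measure used in the actual model.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """Textbook Hebbian rule: strengthen connections between co-active
    pre- and postsynaptic units ('what fires together wires together')."""
    return weights + lr * np.outer(post, pre)

def category_learning_score(responses, concept_ids):
    """Criterion (i): instances of the same concept should evoke similar
    network responses. Criterion (ii): instances of different concepts
    should evoke dissimilar ones. Returned score = mean within-concept
    cosine similarity minus mean between-concept cosine similarity."""
    within, between = [], []
    for a in range(len(responses)):
        for b in range(a + 1, len(responses)):
            ra, rb = responses[a], responses[b]
            cos = ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb) + 1e-12)
            (within if concept_ids[a] == concept_ids[b] else between).append(cos)
    return float(np.mean(within) - np.mean(between))
```

Under this toy measure, a network that maps all three instances of, say, HAMMER onto nearly identical activity patterns while keeping those patterns distinct from the ones evoked by instances of another concept scores close to 1; a network failing on either criterion scores near 0 or below.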

Co-presence of linguistic labels with the perceptual instances of a given concept generally improved the network's learning of categories, with a significantly larger beneficial effect for abstract than for concrete concepts.
These results offer a neurobiological explanation for causal effects of language structure on concept formation and on the perceptuomotor processing of instances of these concepts: supplying a verbal label during concept acquisition supports the cortical mechanisms by which experiences with objects and actions, together with the learning of words, lead to the formation of neuronal ensembles for specific concepts and meanings. Furthermore, the present results make a novel prediction, namely that such ‘Whorfian’ effects should be modulated by the concreteness or abstractness of the semantic categories being acquired, with language labels supporting the learning of abstract concepts more than that of concrete ones.

Schematic illustration of a structural difference between concrete (left) and abstract (right) concepts (semantic feature overlap versus family resemblance). We model the semantic features of any given concept as shared neuronal elements of three ‘grounding patterns’, with 12 neurons per grounding pattern and modality (sensory/visual and motor, i.e. 24 per grounding pattern across both modalities). Only one modality is shown for clarity; procedures were identical for grounding patterns used as input to the visual and motor ‘cortices’ of the model.
Top left panel: concrete concepts were modelled as containing 12 neurons per grounding pattern in total, six shared between all three instances (and therefore representing semantic features) and six unique to each instance (representing instance-specific perceptual or action-related features). In the example of HAMMER, the six shared and therefore ‘semantic’ neurons represent general visual features, such as shape features including a long handle and a head attached at a 90-degree angle, along with general action-related ones, including the typical motor trajectories that characterize hammering. The six instance-specific sensory and motor neurons represent unique features of each hammer exemplar, including idiosyncratic properties (e.g. differing sizes, materials, shapes of the head, presence or absence of a wedge), along with the specific sensorimotor adjustments each individual hammer requires when being used.
Top right panel: abstract concepts were modelled by an implementation of family resemblance, whereby the grounding pattern of each instance is again represented by 12 neurons: four shared with each of the other two instances (i.e. two pairwise-shared sets of four neurons) and four unique to that instance alone. In the example of DEMOCRACY, pairwise-shared neurons might represent hand actions involved in casting a vote (shared between i2/i3) or the visual image of several people coming together (shared between i1/i2). Unique features might represent differences in the hand movements for raising one’s hand versus throwing a ballot into a ballot box (i2 versus i3), or differences in the size and layout between an official parliament room and a smaller room where people cast votes in an informal setting.
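
To make the two overlap structures in the top panels explicit, here is a minimal sketch of how such 12-neuron grounding patterns could be constructed for a single modality. The 24-unit ‘area’ size and the particular index layout are arbitrary assumptions made only for illustration; only the overlap structure follows the description above.

```python
import numpy as np

AREA = 24  # arbitrary number of model neurons per area in this toy example

def concrete_instances():
    """Concrete concept: each of the 3 instances = 6 neurons shared by all
    instances (semantic features) + 6 neurons unique to that instance."""
    patterns = []
    for i in range(3):
        p = np.zeros(AREA)
        p[0:6] = 1.0                            # features shared by all instances
        p[6 + 6 * i: 6 + 6 * (i + 1)] = 1.0     # instance-specific features
        patterns.append(p)
    return patterns

def abstract_instances():
    """Abstract concept (family resemblance): each instance = two pairwise
    shared sets of 4 neurons (one per other instance) + 4 unique neurons."""
    pair_shared = {frozenset({0, 1}): np.arange(0, 4),
                   frozenset({1, 2}): np.arange(4, 8),
                   frozenset({0, 2}): np.arange(8, 12)}
    patterns = []
    for i in range(3):
        p = np.zeros(AREA)
        for pair, idx in pair_shared.items():
            if i in pair:
                p[idx] = 1.0                     # features shared with one other instance
        p[12 + 4 * i: 12 + 4 * (i + 1)] = 1.0    # instance-specific features
        patterns.append(p)
    return patterns

# Every instance pattern activates exactly 12 neurons in both schemes:
assert all(p.sum() == 12 for p in concrete_instances() + abstract_instances())
```

The same construction would be applied once for the ‘visual’ and once for the ‘motor’ input area, giving the 24 neurons per grounding pattern across both modalities mentioned above.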
Bottom panel: all wordform patterns (supplied as input to perisylvian areas during training in the label conditions) consisted of 12 neurons per pattern that were always identical for all three instances of a concept. Brown lines: illustration of the correlation structure (i) among shared neurons in conceptual grounding patterns and (ii) between these shared neurons and the neurons of wordform patterns; for clarity, only example correlations are drawn (brown solid and dashed lines). Whereas for concrete concepts (left) the average correlation is p = 1, both among shared-conceptual neurons and between shared-conceptual and wordform neurons, for abstract concepts there is a higher correlation between shared neurons and wordform neurons (p = 2/3, as the wordform always co-occurs with two thirds of the set of neurons formed by the union of all pairwise-shared neuronal sets) than among the shared neurons themselves (p = 1/3, as any two instances share only a third of such a union set). Hence, this difference in correlations, which is present for abstract concepts only, might exert a ‘pull’ on the emerging cell assemblies during training, such that the wordform neurons, because of their relatively higher correlation, end up playing a more important role in the structure (and hence dynamics) of the entire cell assembly.
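
The correlation values quoted in the caption follow directly from this overlap structure by simple counting, as the short self-contained check below illustrates; co-occurrence fractions are used here as a stand-in for the correlations discussed above.

```python
# Abstract concepts: three pairwise-shared sets of 4 neurons each.
shared_per_pair = 4
union_of_shared = 3 * shared_per_pair        # 12 neurons in the union set

# The wordform accompanies every instance, and each instance activates two of
# the three pairwise-shared sets, i.e. 8 of the 12 union neurons:
wordform_vs_shared = 2 * shared_per_pair / union_of_shared   # 2/3

# Any two instances overlap on exactly one pairwise-shared set of that union:
shared_vs_shared = shared_per_pair / union_of_shared         # 1/3

# Concrete concepts: the 6 semantic neurons occur on every instance, as does
# the wordform, so both co-occurrence values equal 1.
print(wordform_vs_shared, shared_vs_shared)  # 0.666..., 0.333...
```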

A brain-constrained neurocomputational simulation study explored putative brain mechanisms for associating conceptual categories (each constituted by three distinct but related grounding patterns) with linguistic labels.
There is a clear Whorfian effect of category labels on the processing of conceptual instances: the model's activity in response to perceptuomotor grounding patterns was modulated depending on whether or not labels had been provided during the earlier training phase. Labels were highly beneficial for semantic category-learning performance; this benefit was more pronounced for abstract than for concrete concepts, and even more so in the deeper-lying semantic ‘hub’ areas of the model than in the primary areas where stimulation was given. Thus, these effects of linguistic relativity are substantially modulated by the similarity structure of concepts, being more effective and relevant for the formation of abstract concepts with a family-resemblance structure than for concrete concepts with shared semantic features.
