
Is an artificial limb embodied as a hand? Brain decoding in prosthetic limb users

  • Roni O. Maimon-Mor,

    Roles Formal analysis, Methodology, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Institute of Cognitive Neuroscience, University College London, London, United Kingdom, WIN Centre, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, United Kingdom

  • Tamar R. Makin

    Roles Conceptualization, Data curation, Supervision, Writing – original draft, Writing – review & editing

    t.makin@ucl.ac.uk

    Affiliations Institute of Cognitive Neuroscience, University College London, London, United Kingdom, WIN Centre, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, United Kingdom, Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom


Abstract

The potential ability of the human brain to represent an artificial limb as a body part (embodiment) has been inspiring engineers, clinicians, and scientists as a means to optimise human–machine interfaces. Using functional MRI (fMRI), we studied whether neural embodiment actually occurs in prosthesis users’ occipitotemporal cortex (OTC). Compared with controls, different prosthesis types were visually represented more similarly to each other, relative to hands and tools, indicating the emergence of a dissociated prosthesis categorisation. Greater daily life prosthesis usage correlated positively with greater prosthesis categorisation. Moreover, when comparing prosthesis users’ representation of their own prosthesis to controls’ representation of a similar-looking prosthesis, prosthesis users represented their own prosthesis more dissimilarly to hands, challenging current views of visual prosthesis embodiment. Our results reveal a use-dependent neural correlate for wearable technology adoption, demonstrating adaptive use-related plasticity within the OTC. Because these neural correlates were independent of the prostheses’ appearance and control, our findings offer new opportunities for prosthesis design by lifting restrictions imposed by the embodiment theory for artificial limbs.

Introduction

The development of wearable technology for substitution (e.g., prosthetic limbs [1], exoskeletons [2]) and augmentation (e.g., supernumerary fingers and arms [3]) is rapidly advancing. Clinical research on prosthetic limbs, the most established form of wearable motor technology to date, teaches us that technological development is necessary but not sufficient for successful device adoption and usage. For example, only 45% of all arm amputees choose to use their prosthesis regularly [4]. The causes of prosthesis rejection are manifold and include awkward control over the device, lack of tactile feedback, and complex training requirements [4–6]. Crucially, successful prosthesis adoption depends on the human brain’s ability to effectively represent and operate it [7]. A popular (and yet untested) assumption is that amputees reject their prostheses—tools designed to substitute hand function—because the prosthesis does not feel like a real body part (i.e., embodied) [8] and that embodiment can improve prosthesis usage [9–16].

The first challenge in harnessing the potential powers of embodiment to improve prosthesis design and usage is measuring embodiment. Embodiment is an umbrella term for the multisensory association of external objects with body representation, which engages multiple levels from both conceptual and perceptual perspectives (see “Discussion”). From a neural standpoint, embodiment is defined by the successful allocation of brain resources, originally devoted to controlling one’s own body, to represent and operate external objects [17]. Here, we focus on visual embodiment—the successful allocation of visual hand-related neural resources. Visual embodiment is particularly relevant for studying prosthesis representation because prosthesis usage is highly visually guided [18]. Moreover, visual internal models of the body have been suggested as essential gateways for processing multisensory integration that will result in successful bodily ownership [19], a desired feature of prosthesis usage [9,10]. Previous efforts to test visual prosthesis embodiment have been centred on an illusion of body ownership over the prosthesis (most commonly, the rubber hand illusion, which relies on visual manipulations) with mixed success [11,20–23]. Crucially, such efforts focused on measuring visual embodiment but did not associate it with improved prosthesis usage [11,20–26] (though see the work by Graczyk and colleagues [27] for results from 2 individuals). We have recently found that as a whole, one-handed individuals tend to respond neutrally to embodiment statements in relation to their own prostheses [28]. Moreover, these measures of embodiment tend to rely on an explicit sense of body ownership, which might not be a necessary consequence of implicit neuronal embodiment (i.e., reallocation of body part resources to represent or control the prosthesis [17]).

As a more direct means of measuring neural visual embodiment, or how the brain represents prosthetic limbs, we recently assessed activity levels in prosthesis users’ occipitotemporal cortex (OTC) while participants were viewing images of prosthetic limbs, using functional MRI (fMRI) [29]. The OTC is known to contain distinct visual representations of different object categories [30], e.g., hands [31,32], tools [33], faces [34], and bodies [35]. It was previously shown to contain overlapping visual and motor body part selective representations [31,36] and even to respond to touch [37,38]. OTC has been previously implicated in multisensory integration processing related to embodiment [39–41], and OTC’s connectome associates it with hand actions. For example, hand- and body-selective visual areas uniquely connect to the primary sensorimotor hand area [37]. These characteristics qualify the OTC as an ideal candidate for studying action-related visual body representation. We previously found that individuals who used their prosthesis more in daily life also showed greater activity in OTC’s hand-selective visual areas when viewing images of prostheses and greater functional coupling with sensorimotor hand areas [29]. This result demonstrates that prosthesis users are able to engage visuomotor hand-selective areas while passively representing a prosthesis. However, it is yet unknown whether prosthesis visual representation actually mimics that of a hand (i.e., neural embodiment). Alternatively, because the visual hand-selective regions partially overlap with handheld tool representation [42], the observed activity gains may reflect the prosthesis being represented as a tool. As a third option, because object categorisation in OTC is thought to be based on objects’ semantic and functional properties [43], expert prosthesis usage may result in the emergence of prosthesis representation as a new ‘category’, diverging from its existing natural categories (e.g., ‘hands’, ‘tools’). This third alternative is consistent with recent evidence showing that visual expertise can contribute to the shaping of categorical representation in OTC [44] (see ‘Discussion’).

Brain decoding techniques that take advantage of multivoxel representational patterns allow us to reveal the representational structure underlying fMRI activity, e.g., to dissociate overlapping hand and tool representations within the lateral OTC [33,45]. Here, we utilised fMRI data of 32 individuals missing a hand, with varying levels of prosthesis usage (hereafter, ‘prosthesis users’; see Table 1) and 24 two-handed control participants, who have varying life experience of viewing prostheses (see S1 Table). The OTC’s extrastriate body-selective area [35] was independently localised. Two main hypotheses were tested: the embodiment hypothesis, assessing whether prosthetic limbs are in fact represented visually as hands and not tools, and the categorisation hypothesis, assessing whether a new ‘prosthesis’ category has formed. To provide distinct predictions for each of these hypotheses, we studied representational similarities between hand, tool, and upper-limb prosthesis images (both cosmetic—designed to resemble hand appearances—and active—designed to afford a grip, e.g., a ‘hook’; Fig 1A) and compared prosthesis users to controls. Broadly speaking, the visual hand embodiment hypothesis predicts that compared to controls, the various prosthesis conditions in prosthesis users will be more similar to hands than to tools (notice that this prediction also allows us to test the inverse prediction—that prostheses are represented more like tools in one-handers). The categorisation hypothesis predicts that, in prosthesis users, the prosthesis conditions will be more similar to each other relative to hands and tools.

Fig 1. Example stimuli and ROI.

(A) Example stimuli from the main 4 experimental conditions (columns, left to right): hands (upper limbs), active prostheses, cosmetic prostheses, hand-held tools. One-handers also observed images from multiple viewpoints of their own prosthesis. One image was shown per trial in an event-related design. (B) Probability maps of the body-selective ROI. For each participant and hemisphere, the top 250 most activated voxels within the OTC were chosen based on a headless bodies > objects contrast, providing an independent ROI. ROIs from all participants (n = 56) were superimposed, yielding ROI probability maps. Warmer colours represent voxels that were included in greater numbers of individual ROIs. See S1 Fig for the probability maps of each group separately. Data used to create this figure can be found at https://osf.io/4mw2t/. ROI, region of interest.

https://doi.org/10.1371/journal.pbio.3000729.g001

Table 1. Prosthesis users’ demographic details and daily prosthesis usage habits.

https://doi.org/10.1371/journal.pbio.3000729.t001

Results

Clustering of prostheses types in prosthesis users but not in controls

Analysis was focused on the OTC’s extrastriate body-selective area (EBA) [46,47]. This region of interest (ROI) was independently localised for each participant by choosing the 250 voxels in each hemisphere showing the strongest preference to images of headless bodies over everyday objects (Fig 1B).
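The per-participant ROI selection described above can be sketched as follows. This is a minimal illustration, not the study's code: the function and variable names are invented, and a random toy volume stands in for a real bodies > objects contrast map.

```python
import numpy as np

def select_roi_voxels(contrast_map, anatomical_mask, n_voxels=250):
    """Return a boolean ROI with the n_voxels voxels showing the strongest
    contrast values (e.g., headless bodies > objects) inside an anatomical
    mask for one hemisphere."""
    masked = np.where(anatomical_mask, contrast_map, -np.inf)  # restrict to mask
    top = np.argsort(masked.ravel())[-n_voxels:]               # strongest voxels last
    roi = np.zeros(contrast_map.size, dtype=bool)
    roi[top] = True
    return roi.reshape(contrast_map.shape)

# Toy 10x10x10 volume; the mask covers half of it (500 candidate voxels)
rng = np.random.default_rng(0)
cmap = rng.standard_normal((10, 10, 10))
mask = np.zeros((10, 10, 10), dtype=bool)
mask[:5] = True
roi = select_roi_voxels(cmap, mask)  # 250 voxels, all inside the mask
```

In the study this would be run once per hemisphere per participant, yielding an independently localised 250-voxel ROI for each.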

To investigate the underlying representational structure within this region, we first characterised multivoxel activity patterns for each participant and condition (hands, tools, cosmetic prostheses that look like hands, and active prostheses that tend to resemble tools rather than hands, Fig 1A; all exemplars are available at https://osf.io/kd2yh/). Distances between each pair of activity patterns (e.g., hands and cosmetic prostheses) were calculated (noise-normalised cross-validated Mahalanobis distances [48]). More similar activity patterns result in smaller distances; the more dissimilar two patterns are, the greater their relative distance. Because these are multidimensional patterns, one way to visualise the structure is using a dendrogram, or linkage tree, which illustrates how the different conditions cluster. Using this method on the average distances for each group, we found qualitative differences in the representational structure between controls and prosthesis users (Fig 2). For control participants, cosmetic prostheses were clustered with hands, and active prostheses were clustered with tools, reflecting their native intercategorical similarities across conditions (see S2 Fig for visual similarity analysis). For prosthesis users, however, the 2 prosthesis types were clustered together, potentially reflecting a newly formed prosthesis category, with tools and hands being further away from both prosthesis conditions.
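A minimal sketch of this pipeline (cross-validated Mahalanobis distances between condition patterns, followed by hierarchical clustering for the dendrogram) is shown below. The data are synthetic and the noise precision matrix is assumed to be the identity; this is an illustration of the technique, not a reproduction of the study's analysis.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage

def crossnobis_rdm(run1, run2, precision):
    """Cross-validated (noise-normalised) Mahalanobis distances between all
    condition pairs. run1/run2: (n_conditions, n_voxels) pattern estimates
    from independent partitions of the data; precision: (n_voxels, n_voxels)
    noise precision matrix. Cross-validation makes the estimator unbiased,
    so estimates can dip slightly below zero for identical patterns."""
    n_cond, n_vox = run1.shape
    rdm = np.zeros((n_cond, n_cond))
    for i in range(n_cond):
        for j in range(n_cond):
            d1 = run1[i] - run1[j]
            d2 = run2[i] - run2[j]
            rdm[i, j] = d1 @ precision @ d2 / n_vox
    return rdm

# Synthetic patterns: conditions 0-1 ('hands', 'tools') are unrelated, while
# conditions 2-3 ('cosmetic', 'active') are built to be mutually similar,
# mimicking the prosthesis-user cluster
rng = np.random.default_rng(1)
base = rng.standard_normal((4, 50))
base[3] = base[2] + 0.1 * rng.standard_normal(50)
run1 = base + 0.05 * rng.standard_normal((4, 50))
run2 = base + 0.05 * rng.standard_normal((4, 50))
rdm = crossnobis_rdm(run1, run2, np.eye(50))

# Linkage tree (dendrogram) over the pairwise distances; negative
# crossnobis estimates are clipped to zero before clustering
condensed = squareform(np.maximum(rdm, 0), checks=False)
Z = linkage(condensed, method='average')  # (n_cond - 1, 4) merge table
```

With these synthetic data, the two prosthesis-like conditions end up far closer to each other than to the other conditions, so they merge first in the linkage tree, just as in the users' dendrogram.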

Fig 2. Representational structure in body-selective visual cortex.

Multidimensional distances between activity patterns of each of the main conditions (hands, tools, cosmetic prostheses, active prostheses) were calculated using representational similarity analysis. (A) Representational dissimilarity matrices for each group showing pairwise distances between the 2 prostheses conditions (active and cosmetic), hands, and tools. (B) To visualise the underlying representational structure, a linkage tree (dendrogram) was calculated in each group of participants, combining information from all pairwise distances (two-handed controls, left; one-handed prosthesis users, right). Pairs of stimuli that are closer together in the multidimensional space are clustered together under the same branch. Longer connections across the vertical axis indicate greater relative distances. In controls, cosmetic prostheses are clustered with hands and active prostheses with tools, reflecting their visual similarities. In prosthesis users, however, the 2 prosthesis types (cosmetic and active) are clustered together, with tools and hands represented dissimilarly from both prostheses. Data used to create this figure can be found at https://osf.io/4mw2t/.

https://doi.org/10.1371/journal.pbio.3000729.g002

Prosthesis-like (categorical) and not hand-like (embodied) representation of prostheses in prosthesis users

Next, we set out to quantify prosthesis representation using the 2 alternative theoretical frameworks—embodiment versus categorisation. According to the embodiment hypothesis, prosthesis representation should resemble hand representation. This hypothesis predicts that in prosthesis users, each of the 2 prosthesis conditions will be more similar (smaller distance) to hands than to tools, compared to controls (quantified as a hand-similarity index; see “Methods”). Notice that the hand-similarity index is the inverse of a tool-similarity index; therefore, a significantly negative index should be taken as evidence for association of the prosthesis with a tool. When comparing hand-similarity indices based on the multidimensional distances across participant groups, we found no significant differences (t(54) = 0.47, p = 0.64; Fig 3A). A Bayesian t test provided substantial evidence in support of the null hypothesis (BF = 0.2), i.e., that on average amputees do not visually represent an unfamiliar prosthesis more similarly to a hand than controls do. Similar results were observed across the 2 hemispheres (right OTC: t(54) = 0.46, p = 0.65; left OTC: t(54) = 0.54, p = 0.60).
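A hand-similarity index of this kind could plausibly be computed from the pairwise distances along the following lines. The exact formula is defined in the paper's Methods (not included here), so this weighting is an assumption; the dictionary keys and toy numbers are purely illustrative.

```python
def hand_similarity_index(dist):
    """Average degree to which each prosthesis condition lies closer to
    hands than to tools, from pairwise pattern distances (illustrative
    formula; the study's Methods define the exact index). Swapping hands
    and tools flips the sign, so a significantly negative index would
    instead indicate tool similarity."""
    return ((dist['cosmetic', 'tools'] - dist['cosmetic', 'hands'])
            + (dist['active', 'tools'] - dist['active', 'hands'])) / 2

# Toy distances in which both prosthesis types sit closer to hands
toy = {('cosmetic', 'hands'): 1.0, ('cosmetic', 'tools'): 3.0,
       ('active', 'hands'): 2.0, ('active', 'tools'): 4.0}
index = hand_similarity_index(toy)  # positive -> hand-like representation
```

Because the index is a difference of tool and hand distances, negating it yields the tool-similarity index mentioned in the text, which is why only one of the two needs to be tested.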

Fig 3. Assessing the embodiment and categorisation hypotheses.

(A–B) A hand similarity index was calculated for each participant to quantify the degree to which both prostheses conditions (cosmetic and active) are more similar to hands than tools. A higher index value indicates greater embodiment (hand similarity of prostheses). (A) A group comparison of the hand similarity index between controls and prosthesis users showed no significant differences (t(54) = 0.47, p = 0.64). Horizontal lines indicate group means and dots indicate individual participants. (B) Correlation between the hand similarity index and prosthesis usage was not significant across users (Pearson’s r(30) = −0.03, p = 0.86). Dark/light circles indicate congenital/acquired one-handers, respectively, and grey shading indicates a bootstrapped 95% confidence interval. (C) Hand (left) and tool (right) distances from users’ ‘own’ prosthesis. Individual distances were normalised by the controls’ group mean distance, depending on the visual features of the ‘own’ prosthesis (hand likeness). A value of 1 indicates a hand/tool distance similar to controls’. Users showed significantly greater distances between their own prosthesis and hands (t(25) = 4.33, p < 0.001), contrary to the embodiment hypothesis. Together, these findings demonstrate that hand similarity under the embodiment hypothesis does not adequately explain differences in prosthesis representation in users’ OTC. (D–F) A prosthesis similarity index was calculated for each participant to quantify the degree to which the prostheses representation moved away from their natural categories (hands for cosmetic prostheses and tools for active prostheses) and towards one another. (D) A visual illustration of the prosthesis similarity index formula. Arrows pointing outward indicate distances that should grow in users compared to controls (e.g., hands and cosmetic prostheses) and are therefore positively weighted. Arrows pointing inward indicate distances that should shrink in users compared to controls (i.e., cosmetic and active prostheses) and are therefore negatively weighted. (E) Group comparison of the prosthesis similarity index between controls and prosthesis users showed greater prosthesis similarity in users (t(54) = −2.55, p = 0.01). (F) The prosthesis similarity index significantly correlated with prosthesis usage; a higher prosthesis index was associated with greater prosthesis usage (based on wear frequency and skill; Pearson’s r(30) = 0.43, p = 0.01). Together, these findings demonstrate that the categorisation hypothesis explains differences in users’ prosthesis representation in the OTC, both with respect to controls and interindividual prosthesis usage. Data used to create this figure can be found at https://osf.io/4mw2t/. n.s., no significance; OTC, occipitotemporal cortex.

https://doi.org/10.1371/journal.pbio.3000729.g003

It is possible that the effects of embodiment are present in the subset of users who rely most on their prosthesis for daily function but that this effect is masked at the group level by the wide range of usage in our group of prosthesis users. We therefore further tested the relationship between visual embodiment and prosthesis usage by correlating the hand-similarity index with a prosthesis usage score, reflecting usage time and habits (see ‘Methods’). According to the embodiment hypothesis, hand-like prosthesis representation should scale with prosthesis usage. We found no significant correlation between the hand-similarity index and everyday prosthesis usage (or with tool similarity; Pearson’s r(30) = −0.03, p = 0.86), suggesting that prosthesis embodiment is not a strong predictor of usage (Fig 3B).

According to the categorisation hypothesis, prosthesis usage should promote a more independent representation of prostheses. This hypothesis predicts that within our multidimensional space, both prosthesis conditions would move away from the existing natural categories (e.g., the distance between cosmetic prostheses and hands will become larger whereas their distance from tools will become smaller; see Fig 3D) and towards one another (smaller distances between cosmetic and active prostheses). In other words, the 2 different prosthesis types will form a prosthesis cluster within the hand–tool axis. We calculated a prosthesis-similarity index for each participant (see ‘Methods’ and S3 Fig for intercategory pairwise distances). We found a significant group difference of the prosthesis-similarity index (t(54) = −2.55, p = 0.01; Fig 3E). This indicates that using a prosthesis alters one’s visual representation of different prosthesis types into a distinct category and does so in a way that is more complex than prostheses simply becoming more hand- or tool-like. Although previous studies suggest tool representation is left-lateralised [42,49], the reported effect was robust across both hemispheres (right OTC: t(54) = −2.25, p = 0.03; left OTC: t(54) = −2.66, p = 0.01). Exploratory data-driven analysis run on the one-handers’ pairwise distances comprising the prosthesis-similarity index further revealed that the departure of the active prostheses in particular away from their native ‘tool’ category is linked with their increased association with cosmetic prostheses (see S4 Table).
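Following the logic of Fig 3D (outward distances positively weighted, the inter-prosthesis distance negatively weighted), a prosthesis-similarity index could be sketched as below. This is an assumed, simplified weighting; the exact formula is in the paper's Methods, and the toy distance dictionaries are invented for illustration.

```python
def prosthesis_similarity_index(dist):
    """'Outward' distances (each prosthesis type to its natural category)
    enter positively; the 'inward' cosmetic-active distance enters
    negatively. Illustrative weighting only; the study's Methods define
    the exact formula."""
    return (dist['cosmetic', 'hands'] + dist['active', 'tools']
            - dist['cosmetic', 'active'])

# Control-like structure: prostheses near their natural categories
controls = {('cosmetic', 'hands'): 1.0, ('active', 'tools'): 1.0,
            ('cosmetic', 'active'): 3.0}
# User-like structure: prostheses clustered together, away from both
users = {('cosmetic', 'hands'): 3.0, ('active', 'tools'): 3.0,
         ('cosmetic', 'active'): 1.0}
```

Under this weighting, the user-like structure scores higher than the control-like one, mirroring the reported group difference.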

Supporting the interpretation that different neural representational structure in prosthesis users is associated with prosthesis usage, we found a significant positive correlation between the prosthesis-similarity index and the prosthesis usage score described above (Pearson’s r(30) = 0.43, p = 0.01; Fig 3F). In other words, the more users use a prosthesis, the more they represented the different prostheses types as a separate category. Conversely, individuals who do not use a prosthesis frequently have a more similar representational structure to that of controls.

The 2 indices for hand- and prosthesis-similarity are not statistically independent, and, as such, we were not able to compare them directly. However, we could use each index as a model for predicting daily-life prosthesis usage. Comparing the correlations between prosthesis usage and the indices for the 2 hypotheses (embodiment and categorisation) revealed a significant difference (z = −2.52, p = 0.01). Therefore, at least when considering unfamiliar prostheses, the prosthesis-similarity index is a better predictor of prosthesis usage than the hand-similarity index.
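The section does not name the specific test used for this comparison of two dependent correlations. One common choice is the Meng–Rosenthal–Rubin (1992) z-test for overlapping correlations, sketched here; the intercorrelation of the two indices is not reported in this section, so the value 0.2 below is a placeholder assumption.

```python
import math

def dependent_corr_z(r1, r2, rx, n):
    """Meng-Rosenthal-Rubin (1992) z-test comparing two dependent,
    overlapping correlations: r1 = corr(y, a), r2 = corr(y, b), with
    rx = corr(a, b) and sample size n."""
    z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher z-transform
    r_sq_bar = (r1 ** 2 + r2 ** 2) / 2
    f = min(1.0, (1 - rx) / (2 * (1 - r_sq_bar)))
    h = (1 - f * r_sq_bar) / (1 - r_sq_bar)
    return (z1 - z2) * math.sqrt((n - 3) / (2 * (1 - rx) * h))

# Illustrative inputs: r1 = categorisation-usage correlation (0.43),
# r2 = embodiment-usage correlation (-0.03), assumed rx = 0.2, n = 32
z = dependent_corr_z(0.43, -0.03, 0.2, 32)
```

When the two correlations are equal, the statistic is exactly zero, as expected for a difference test.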

Prosthesis categorisation does not depend on users’ developmental period of hand loss or prosthesis type

When considering hand-selective neural resources, individuals who were born without a hand might develop different representational structures than those with an acquired amputation [50]. Considering this, we tested whether the hand-similarity index differed between the 2 prosthesis user subgroups (congenital versus acquired) and found no significant differences (t(30) = −0.615, p = 0.54). Moreover, the reported prosthesis-similarity effects prevailed even when accounting for any potential differences between the 2 subgroups, as demonstrated by an analysis of covariance (ANCOVA) with subgroup (congenital versus acquired) as a fixed effect and usage as a covariate. The ANCOVA revealed no significant subgroup effect (F(1,29) = 2.02, p = 0.17), demonstrating that prosthesis similarity does not significantly differ because of cause/developmental period of hand loss, and a nearly significant usage effect (F(1,29) = 4.11, p = 0.052), indicating that the relationship between prosthesis categorisation and usage is relatively independent from potential subgroup effects.

Beyond cause of limb loss, the users tested in this study also diverged with respect to prosthesis type, shape, and history of usage, involving primary users of cosmetic (40%) and active (41%, comprising mechanical hook [body-powered; 25%] and myoelectric [motor-powered; 16%]) prostheses, as well as nonprosthesis users (16%) and a hybrid user (3%; see Table 1). A key question is what aspects of the prosthesis itself might affect neural prosthesis adoption in OTC. Because our key focus is on prosthesis usage, we looked at whether individuals who primarily use a prosthesis that has a degree of active grip control (e.g., a mechanical hook) show different effects from those who primarily use a cosmetic prosthesis that affords more limited motor control (no grip). Comparing the prosthesis-similarity index of users of the 2 prosthesis types revealed no significant effect (t(20) = 0.055, p = 0.96). Using an ANCOVA with primary prosthesis type in daily life (cosmetic/active prosthesis) as a fixed effect and usage as a covariate revealed no prosthesis type effect (F(1,19) = 0.432, p = 0.52), indicating that the categorisation effect might not depend on the type of prosthesis individuals primarily use. The usage effect remained significant (F(1,19) = 6.01, p = 0.02), indicating that the correlation between usage and categorisation is independent of prosthesis type. Repeating the same analysis with the hand-similarity index revealed no significant effects, indicating that even when accounting for prosthesis type, no relationship is found with visual embodiment. Though null results should be interpreted with caution, our analysis indicates that the categorisation effect observed in prosthesis users is not driven by the prosthesis design or control mechanism but by the functionality the user extracts from it (as reflected in our daily usage scores).

In the active prosthesis condition, a minority of active prosthesis users (n = 5) were shown images of a myoelectric prosthesis (not their own; marked with asterisks in Table 1). Because these are arguably visually distinct from the mechanical hooks seen by the control participants, we repeated the analysis of the prosthesis-similarity index by replacing the subset of relevant pairwise distances relating to the active prosthesis with the mean distances of the prosthesis users’ group (see ‘Methods’). In this analysis, the observed effect remained but was reduced to a trend (t(54) = −1.87, p = 0.067). Importantly, the correlation between categorisation and usage remained significant (r(30) = 0.48, p = 0.006), even when excluding the myoelectric users from the analysis altogether (r(25) = 0.46, p = 0.015). This further analysis confirms that our findings were not driven by the subset of myoelectric active prostheses.

Own prosthesis representation

The results discussed so far were derived from visual presentation of prosthesis exemplars that each user was not personally familiar with, allowing us to easily compare the results between the users and controls. However, under the embodiment framework, it could be argued that embodiment can only be achieved for the user’s own prosthesis. To account for this possibility, in addition to the general prosthesis types shown to all participants, most prosthesis users (n = 26; see ‘Methods’ and Table 1) were also shown images of their own prosthesis (for the many prosthesis users who use more than one prosthesis, this refers to the prosthesis each user wore on the day of testing). Because controls do not have a prosthesis of their own, in this analysis, we compared each user’s own prosthesis to the same prosthesis type shown to controls. Therefore, cosmetic ‘own’ prostheses (n = 15) were matched with controls’ general cosmetic condition, and active ‘own’ prostheses (n = 11) were matched with the controls’ general active condition. To allow us to group values across the cosmetic and active prosthesis users, the distances between the ‘own’ prosthesis and hands and tools were normalised by the mean distances measured from the control group (using the above-mentioned conditions). Because we hypothesised that altered prosthesis representation is driven by usage, the controls’ averaged distance is used here as a ‘baseline’ measure of how the representation is structured before prosthesis use. This normalised score was entered into a one-sample t test for statistical comparison.
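The normalisation and the one-sample test described above can be sketched as follows; all numbers are hypothetical placeholders, not the study's data, and the function name is invented.

```python
import numpy as np
from scipy import stats

def normalised_own_distances(own_dists, control_means, prosthesis_types):
    """Divide each user's own-prosthesis distance (to hands or tools) by
    the control group's mean distance for the matching prosthesis type.
    A value of 1 matches controls; > 1 means the user's own prosthesis is
    represented as more dissimilar than in controls."""
    return np.array([d / control_means[t]
                     for d, t in zip(own_dists, prosthesis_types)])

# Hypothetical placeholder numbers, for illustration only
own = [2.4, 2.6, 1.8]
types = ['cosmetic', 'active', 'cosmetic']
control_means = {'cosmetic': 2.0, 'active': 2.0}
scores = normalised_own_distances(own, control_means, types)

# One-sample t test against 1, i.e., against the controls' baseline
res = stats.ttest_1samp(scores, popmean=1.0)
```

A significantly positive test statistic here would correspond to the paper's finding of normalised distances greater than 1.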

Based on the embodiment hypothesis, users should show greater similarity between their own prosthesis and hands. Instead, we found that users showed significantly greater dissimilarity (greater distances), relative to controls, indicated by a normalised distance that was greater than 1 (t(25) = 2.85, p = 0.009). This analysis therefore shows that users’ own prostheses are less similar to hands, providing direct evidence against the embodiment hypothesis. The normalised ‘own’ prosthesis distance from tools was also found to be significantly greater than 1 (t(25) = 2.91, p = 0.008), further supporting the categorisation interpretation. We also repeated the analysis as described above, but this time we standardised distances for 7 users with an ‘own’ active prosthesis with hand-like visual features (see Table 1) against controls’ cosmetic prosthesis condition. This complementary analysis produced similar results for hands (t(25) = 4.33, p < 0.001) but not for tools (t(25) = 1.62, p = 0.118; Fig 3C). This means that even when taking both visual-feature and operational considerations into account, prosthetic limbs are not represented as hands in their owner’s visual cortex.

Prosthesis representation beyond EBA

To demonstrate that our results are not specific to our EBA ROI definition within OTC, we repeated the same analysis and found similar results in ‘hand’ and ‘tool’ ROIs within OTC, generated from the meta-analysis tool Neurosynth [51] (see S1 Text and S4 Fig).

We next explored prosthesis representation beyond OTC. The stimuli used in the current study were specifically designed to interrogate the information content underlying prosthesis representation in body-selective regions in the OTC. Nevertheless, as demonstrated in S5A Fig, the visual stimuli also significantly activated the intraparietal sulcus (IPS), providing us with an opportunity to explore visual prosthesis representation in a dissociated brain area (though notably, this activity was observed less consistently within individual subjects). Because 11 of the participants did not have enough significantly activated voxels within IPS to meet our ROI criteria, we constructed an anatomical ROI based on the Jülich Histological Atlas in FMRIB’s Software Library (FSL), including human intraparietal (hIP)1–3 bilaterally [52,53] (see S5B Fig). With respect to the representational structure, we did not find significant differences between prosthesis users and controls when comparing both the hand- and prosthesis-similarity indices, even when waiving corrections for multiple comparisons, which are customary for exploratory analysis (hand-similarity: t(54) = 0.71, p = 0.48; prosthesis-similarity: t(54) = −0.45, p = 0.66). This could be due to insufficient power to explore this representational structure or possibly due to a different organising principle in this region. However, we did find that users showed significantly greater dissimilarity (greater distances), relative to controls, when comparing the representation of their own prosthesis to that of both hands and tools (hand: t(25) = 10.11, p < 0.001; tool: t(25) = 4.62, p < 0.001, corrected for 2 comparisons; see S5C Fig). This analysis indicates that in parietal cortex, similar to the OTC, users’ own prostheses are less similar to hands and tools, providing further support against the embodiment hypothesis.

Discussion

Here, we used an fMRI brain decoding technique to probe the information content underlying visual prosthesis representation in the OTC. Previous research shows that prosthesis users are able to activate hand-selective areas in the OTC when viewing a prosthesis [29]. These areas, however, encompass a rich representational structure, spanning well beyond visual hand processing. It is therefore unknown whether, by utilising these hand-selective areas, the users’ brain actually represents the prosthesis as a hand or whether it follows a different representational principle. We pitted the embodiment theory against another prominent theory—the categorisation theory—which is well established in visual neuroscience [43] (as detailed below) but to our knowledge has not been explored for wearable devices. Although not directly opposed, the two theories generate different predictions about how a prosthesis should be represented in the brain of its user. Contrary to the embodiment theory, we found that prosthesis users represented their own prosthesis unlike a hand. For unfamiliar prostheses, users formed a prosthesis category, distinct from their natural (existing) categories (hands and tools), as demonstrated in controls. Importantly, this shift scales with usage, such that people who use their prosthesis more in daily life show more independence of the prosthesis category. When comparing the 2 models’ success in explaining interindividual differences between prosthesis users, we find that the prosthesis category model was significantly more correlated with prosthesis usage. This indicates that, for visual representation of unfamiliar prostheses, categorisation provides a better conceptual framework than embodiment.
Together with preliminary results showing that prosthesis users exhibited a less hand-like representation of their own prosthesis in parietal cortex, our results collectively show that neural visual embodiment does not predict successful adoption of wearable technology by the human brain. Despite benefiting from hand-selective cortical resources [29], the prosthesis is not ‘embodied’ into people’s visual hand representation. We also did not find any evidence for greater representation of prostheses as tools. However, because the experimental design and analysis we implemented were a priori focused on visual hand representation in OTC, it is possible that other brain areas might find a different representational ‘solution’ to support the daily use of an artificial limb.

As stated above, an intuitive and increasingly popular view that has been inspiring biomedical design and innovation is that embodiment will optimise the usage of technology, such as upper-limb prostheses [10,11,13–16,54,55]. How can this view be reconciled with our findings? One potential solution originates from the fact that embodiment is a multifaceted phenomenon [17], with distinct levels (i.e., phenomenological—does the prosthesis feel like my hand?; cognitive—do I react to/with the prosthesis like I would to/with my own hand?; neural—is the prosthesis represented via the same mechanisms as a hand? [7]). In another recent study in which we probed the phenomenological and cognitive levels of prosthesis embodiment, we found both to correlate with prosthesis usage [28]. Here, we focused our research on the neural level, because it is entirely unknown whether objective representation of the prosthesis as a body part associates with prosthesis acceptance and usage, let alone benefits it. It is possible, and even likely, that embodiment manifests differently in each of these distinct levels, which may also vary in their importance or even relevance for successful technological implementation [7]. To disentangle the complex concept of ‘embodiment’, future studies should aim to acquire measurements of the different levels of embodiment together with a detailed account of prosthesis skill and use.

A second important consideration is that embodiment is a multisensory process, involving multiple brain areas beyond visual cortex [41,56,57]. Here, we focused on visuomotor processing, and our experimental approach does have some limitations that should be considered. For example, our use of static images was designed to drive activity in the OTC but not in other sensorimotor-related regions in the parietal and frontal regions [58], thereby limiting our conclusions to high-level visual cortex. The use of generic hand images that are not the participants’ own hands can also be limiting when approaching questions of embodiment. Here, it is important to mention that despite using a profoundly nonecological task in the scanner, the resulting brain representations, as captured with our prosthesis-similarity index, correlated significantly with the extent of everyday prosthesis usage. Therefore, despite the inherent limitations of our fMRI task, our task outcomes are ecologically relevant. Still, it is possible that other brain areas involved more directly in motor planning would produce other representational structures with respect to hand representation. Future research aimed at this question should take into consideration that at present, commercially available prosthesis motor control is fundamentally different from that of motor control of the hand, producing further potential challenges for sensorimotor embodiment.

Thirdly, and most speculatively, it is possible that the prosthesis may still be represented in OTC as a body part but one that is not a hand. After all, prosthesis users have strong semantic and sensorimotor knowledge of a hand (all users in the study had one intact hand; acquired amputees had prior experience of their missing hand, including lingering phantom sensations; see also the work by Striem-Amit and colleagues [59] for related findings showing normal visual hand representation in individuals born without both hands). Their experience with operating a prosthesis is fundamentally different from that of a hand. If body representation is not strictly tuned to the specific fine-grained features of a given body part (e.g., the digits of the hand) but is instead also represented at a higher level (e.g., effectors [60] or based on other functionality features [61]), then the dissociation of prostheses from hand representation observed in our study should not be taken as evidence for lack of embodiment per se but rather lack of embodiment as a hand. In this context, the previously reported recruitment of hand-specific visual cortical regions could reflect an underlying embodied representation of a prosthesis as a (nonhand) body part. Therefore, we propose that future studies of artificial limb embodiment should not be limited to identifying and/or quantifying hand representation (as is current common practice, e.g., using the rubber hand illusion).

Instead of visual hand embodiment (or tool-like representation), we found that a significant amount of individual differences in prosthesis usage can be predicted by the extent of prosthesis categorisation within the visual body-selective area, providing a significantly better model than hand embodiment. This result is also consistent with the known organising principle of the OTC, in which categorical representation reflects our knowledge of grouping and patterns, which are not necessarily present in the bottom-up sensory inputs [43,62–64]. Moreover, categorical representation in OTC was shown to reflect individual differences in conceptual knowledge [65]. Accordingly, people who acquire a specific visual expertise, such as car experts, show increased activity in object-selective areas in OTC (for a review, see the work by Harel [66]). Research on object experts, therefore, provides compelling evidence for the role of visual learning and experience in shaping and fine-tuning categorical representation. Although various studies have demonstrated a relation between expertise and activation, few studies performed multivariate analyses, and those that did reported mixed results [44,67–70]. For example, Martens and colleagues, 2018, found that expertise did not alter the representational configuration of the category of expertise, whereas McGugin and colleagues, 2015, who studied car representation of experts in the Fusiform Face Area, report that car representation became more similar to that of faces with expertise. In this context, our present results provide a novel perspective on visual expertise. This is because our results show divergence of prosthesis representation from the ‘natural’ categories normally existing in this brain area, consistent with the formation of a new categorical representation. In other words, rather than refining a category, prosthesis usage results in the formation of a new category.
Gomez and colleagues, 2019, recently reported that childhood experience with playing a Pokémon videogame relates to increased classification of Pokémon, compared with other related categories, in the ventral occipital cortex [44]. Extending this finding, we report that categorical prosthesis representation correlates with (ongoing) visuomotor experience in adulthood. As these effects were found in both congenital and acquired one-handers, this prosthesis categorisation effect does not seem to relate to a specific developmental period. Because prosthesis usage relies on visuomotor expertise, it is difficult for us to disentangle the relative contribution of perceptual expertise to the observed prosthesis categorisation. Further research examining the representation of prostheses in purely perceptual experts (e.g., prosthetists) will provide an interesting test case for our expertise hypothesis.

A further distinguishing feature of prosthesis users compared to other experts is that they not only have increased experience with prosthetic limbs but also, arguably, a reduction in exposure to hands, at least from a first-person perspective. Reorganisation is the process by which a specific brain area qualitatively changes its input–output dynamics to support a novel representation. This raises the question of whether or not the OTC becomes reorganised to support a new visual function (prosthesis control, known to strongly rely on visual feedback [18,71]). In this context, in recent years, adaptive behaviour has been suggested [60,72–75] and challenged [59,76,77] as a causal driver of brain reorganisation. According to this framework, the function of the deprived cortex is not reassigned; instead, it is the input (prosthesis versus hand) that changes, while the local processing and resulting output persist (domain specificity [78,79]). For example, in a study conducted in individuals with a congenitally missing limb, multiple body parts used to compensate for the missing limb’s function benefited from increased activity in the missing hand’s sensorimotor cortical area [60], replicated in the recent work by Hahamy and Makin [80]. It was, therefore, suggested that opportunities for reorganisation may be restricted by a predetermined functional role of a given brain area, e.g., hand resources will only support processing related to hand-substitution behaviours (other body parts or a prosthesis). This framework has been successfully implemented to demonstrate that the categorical organisation of OTC is preserved following congenital blindness [81–83]. For example, the OTC body area was shown to be selectively activated by tactile [84] and auditory [85] inputs conveying hand/body information.
Our findings advance beyond these studies by demonstrating that a parallel form of reorganisation can occur even when the relevant sensory pathway (vision) is largely unaffected, further highlighting the role of daily behaviour in shaping brain organisation across the life span.

Finally, our results suggest that the relationship between prosthesis representation and usage is independent of key design and usage features of the artificial device (such as visual mimicry of the human hand) and cause of limb loss (congenital or acquired). This should inspire future efforts in neurologically-informed substitution and augmentative artificial limb design to not be strictly confined to biomimetics, a design principle that is currently highlighted in the development of substitution technology [86] (e.g., the vine prosthesis [87]).

To conclude, we provide a novel neural correlate for the adoption of wearable technology that is distinct from visual embodiment of a prosthesis as a hand. Successful prosthesis usage, in terms of both wear time and habit in daily life, was predicted not by visual embodiment (hand-similarity) or tool-similarity but by a more distinct categorical representation of artificial limbs. Understanding whether the brain can treat a prosthesis as a hand and whether this hand-like representation provides a real advantage for prosthesis users will have important implications on future design and assessment of wearable technology. Considering the limitations related to our focus on visual prosthesis representation in passive settings, we are currently unable to offer a sweeping answer to how the entire brain represents artificial limbs. However, our findings provide an important alternative to the highly prominent embodiment theory that needs to be considered. As such, much more research is necessary to provide a comprehensive understanding of the neural basis of successful prosthesis usage in the human brain.

Methods

Ethics statement

This study was approved by the Oxford University’s Medical Sciences interdivisional research ethics committee (Ref: MSD-IDREC- C2-2014-003). Written informed consent and consent to publish were obtained in accordance with ethical standards set out by the Declaration of Helsinki.

Participants

Thirty-two individuals missing an upper limb (one-handed prosthesis users, mean age [SD] = 42.3 [11.8], 12 females, 8 missing their right hand) were recruited to take part in the study (Table 1). Sixteen prosthesis users lost their hand following an amputation, and sixteen had a unilateral congenital upper-limb below-elbow deficiency (due to complete arrest of hand development). One additional one-hander was recruited to the study but did not participate in the scanning session because of claustrophobia. In addition, 24 age- and gender-matched two-handed controls (age = 41.7 [13.1]; 12 females; 8 left-handed) took part in the study. Fourteen of the control participants were family members, friends, or had professional relationships with prosthesis users, resulting in passive visual experience of prosthesis usage. All participants took part in a larger study, involving multiple tasks (https://osf.io/kd2yh/). Univariate data from the fMRI task reported here was previously published [29].

Prosthesis usage measurements

Prosthesis usage was assessed by combining two highly correlated measurements of usage: prosthesis wear frequency and a prosthesis activity log (PAL) [29,88]. Participants rated their prosthesis wear frequency on a scale: 0 = never, 1 = rarely, 2 = occasionally, 3 = daily (<4 hours), 4 = daily (4–8 hours), 5 = daily (>8 hours). Some participants use more than one type of prosthesis; in those cases, the measurement from the most frequently used prosthesis was used. The PAL is a revised version of the motor activity log [89] as described in the work by Makin and colleagues [74]. In brief, participants were requested to rate how frequently (0 = never, 1 = sometimes, 2 = always) they incorporate their prosthesis in an inventory of 27 daily activities, with varying degrees of motor control. PAL is calculated as the sum of the levels of frequencies in all activities divided by the maximum possible sum (27 × 2), creating a scale of 0 to 1. This questionnaire, indexing bimanual usage, was previously validated using limb acceleration data [74] and behavioural lab testing [60], collected in ecological settings. Because neither measure alone fully captures prosthesis use, both the prosthesis wear frequency and PAL were Z-transformed and summed to create a prosthesis usage score.
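The scoring described above can be sketched in a few lines of code (an illustrative Python/NumPy sketch, not the authors’ analysis code; function and variable names are ours):

```python
import numpy as np

def pal_score(ratings):
    """Prosthesis activity log (PAL): frequency ratings (0 = never,
    1 = sometimes, 2 = always) over 27 daily activities, with the sum
    divided by the maximum possible sum (27 * 2), giving a 0-1 scale."""
    ratings = np.asarray(ratings)
    return ratings.sum() / (27 * 2)

def usage_score(wear_freq, pal):
    """Z-transform each measure across participants, then sum the two,
    to create the combined prosthesis usage score."""
    z = lambda x: (np.asarray(x, float) - np.mean(x)) / np.std(x)
    return z(wear_freq) + z(pal)
```

Summing Z-scores (rather than raw values) weights the two measures equally despite their different native scales.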

Stimuli

Participants viewed still object images of the following categories: (i) hands (upper limbs with and without the arm, in multiple orientations and configurations, and of different sizes, genders, and skin colours; hereafter hands); (ii) man-made hand-held tools; (iii) cosmetic prostheses (aesthetically hand-like but nonoperable); (iv) active prostheses (affording a grip; either mechanical hooks or myoelectric prostheses); and (v) (when available) participants’ own prosthesis (more details below). For hands and prosthesis images, the effector was matched to the prosthesis users’ missing-hand side or the nondominant hand in controls (e.g., participants missing their left hand were presented with ‘left-handed’ hands/prostheses). Headless bodies and typically nonmanipulable man-made object images were also included for localising independent ROIs (see “OTC ROI” section). Additional conditions that were also included in the fMRI paradigm and initial analysis but not reported here were dismorphed images (originally intended to account for low-level visual activity but discarded after univariate analysis revealed increased activity in OTC) and lower limbs (intended as a control body part but not included in final analysis).

Images used for the main stimulus categories can be found at https://osf.io/kd2yh/. All images had their background removed, normalised for size, placed on an equiluminant grey background, and overlaid with a red fixation square. Nonprosthesis conditions were taken from an online database and involved multiple exemplars, whereas each of the 3 prosthesis conditions involved multiple shots of a single prosthesis of different orientations. We chose to use multiple configurations of hands and prostheses to probe the experience of congenital one-handers and control participants (who mostly see prostheses/hands-of-the-missing-side from a third person perspective, respectively). We note that previous studies probing visual hand representation in similar populations [88], including specifically in OTC [59], used a similar approach. We further note that the few studies finding differences between egocentric/allocentric [90] or self/others [91] visual hand representation in OTC identified lateralised effects to the right OTC, whereas our effects are comparable across hemispheres. Prosthesis images of other users’ prostheses (in the cosmetic and active conditions) or of the participant’s prosthesis (in the ‘own’ condition) were taken by the experimenters prior to the functional MRI session. The subset of the active prosthesis users (marked with an asterisk in Table 1) were shown another myoelectric prosthesis in the active prosthesis condition.

In the ‘own’ prosthesis condition, all prosthesis users who had brought their prosthesis to the study were presented with images of their own prostheses, either cosmetic or active (n = 26; see Table 1). All other participants (i.e., the remaining 6 prosthesis users who did not bring a prosthesis and all control participants) were shown pictures of their own shoe instead. Shoes were selected as a familiar external object that was intended to exert similar cognitive effects (e.g., in terms of arousal) as the prosthesis and therefore minimise differences in the scan time course across groups. Because we had no a priori interest in studying shoe representation, the shoe condition was not included in further analysis.

Post hoc shape similarity analysis [92] confirmed that the prosthesis images spanned a diverse range, resulting in similar shape dissimilarity for the 2 prosthesis types with respect to hand and tool exemplars (S2 Fig). It is highly likely that other measurements of visual similarity (e.g., based on contrast/colour comparison or perceptual judgements) would reveal more distinct intercategorical (dis)similarities. However, any such visual (dis)similarities should impact intercategorical representational similarity in the control group. As such, in the present study, all key analyses are interpreted with respect to the controls, providing us with a representational ‘baseline’.

Experimental design

Each condition consisted of 8 different images. In each trial, a single image from one of the conditions was shown for 1.5 s, followed by 2.5 s of fixation. Participants were engaged in a one-back task and were asked to report whenever an image was repeated twice in succession. This occurred once for each condition within each functional run, resulting in 9 trials per condition per run (8 distinct exemplars and 1 repetition). This design was repeated in 4 separate functional runs, resulting in 36 events per condition. First-order counterbalancing of the image sequences was performed using Optseq (http://surfer.nmr.mgh.harvard.edu/optseq), which returns an optimised image presentation schedule. Run order was varied across participants. The specifics of this design were validated against an event-related design with a jittered interstimulus interval and a block design during piloting (n = 4). Stimuli were presented on a screen located at the rear end of the scanner and were viewed through a mirror mounted on the head coil. Stimulus presentation was controlled by a MacBook Pro running the Psychophysics Toolbox in MATLAB (The MathWorks, Natick, MA).

MRI data acquisition

The MRI measurements were obtained using a 3-Tesla Verio scanner (Siemens, Erlangen, Germany) with a 32-channel head coil. Anatomical data were acquired using a T1-weighted magnetization prepared rapid acquisition gradient echo sequence with the parameters: TR = 2040 ms, TE = 4.7 ms, flip angle = 8°, and voxel size = 1 mm isotropic resolution. Functional data based on the blood oxygenation level-dependent signal were acquired using a multiband gradient echo-planar T2*-weighted pulse sequence [93] with the parameters: TR = 1300 ms, TE = 40 ms, flip angle = 66°, multiband factor = 6, voxel size = 2 mm isotropic, and imaging matrix = 106 × 106. Seventy-two slices with slice thickness of 2 mm and no gap were oriented in the oblique axial plane, covering the whole cortex, with partial coverage of the cerebellum. The first dummy volume of each scan was saved and later used as a reference for coregistration. Additional dummy volumes were acquired and discarded before the start of each scan to reach equilibrium. Each functional run consisted of 256 volumes.

Preprocessing and first-level analysis

fMRI data processing was carried out using FEAT (FMRI Expert Analysis Tool) version 6.00, part of FSL (www.fmrib.ox.ac.uk/fsl). Registration of the functional data to the high resolution structural image was carried out using the boundary based registration algorithm [94]. Registration of the high resolution structural to standard space images was carried out using FLIRT [95,96] and was then further refined using FNIRT nonlinear registration [97,98]. The following prestatistics processing was applied: motion correction using MCFLIRT [96]; nonbrain removal using BET [99]; B0-unwarping using a separately acquired field-map; spatial smoothing using a Gaussian kernel of FWHM 4 mm; grand-mean intensity normalisation of the entire 4D data set by a single multiplicative factor; highpass temporal filtering (Gaussian-weighted least-squares straight line fitting, with sigma = 50 s). Time-series statistical analysis was carried out using FILM with local autocorrelation correction [100]. The time series model included trial onsets and button presses convolved with a double gamma HRF function; 6 motion parameters were added as confound regressors. Indicator functions were added to model out single volumes identified to have excessive motion (>1 mm). A separate regressor was used for each high-motion volume; no more than 9 such volumes were found in any individual run (3.5% of the entire run).
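The motion ‘scrubbing’ step above (one indicator regressor per high-motion volume) can be sketched as follows (illustrative Python/NumPy, not the FSL implementation; the function name and input format are our own):

```python
import numpy as np

def spike_regressors(displacement, thresh=1.0):
    """Build one indicator ('spike') regressor per volume whose
    estimated motion exceeds thresh (mm), so each flagged volume is
    modelled out of the GLM. Returns an (n_volumes, n_spikes) block
    to be appended to the design matrix."""
    displacement = np.asarray(displacement)
    spikes = np.flatnonzero(displacement > thresh)
    block = np.zeros((displacement.size, spikes.size))
    block[spikes, np.arange(spikes.size)] = 1.0
    return block
```

Using a separate column per flagged volume (rather than one shared column) lets each spike absorb its own arbitrary signal value.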

OTC ROI

Because the focus of the study was on visual representation of prostheses in the OTC, the representational similarity analysis was restricted to individualised ROIs. ROIs of the extrastriate body area (EBA) [47] were identified bilaterally in each participant, by selecting the top 250 activated voxels, in each hemisphere, in the headless bodies > nonmanipulable objects contrast. Voxel selection was restricted to the lateral occipital cortex, inferior and middle temporal gyri, occipital fusiform gyrus, and temporal occipital fusiform cortex (all bilateral), as defined by the Harvard-Oxford atlas [101]. Voxels from both hemispheres were treated as a single ROI (see S3 Table for all analyses repeated for each hemisphere separately). Mean activity within the ROI for each participant was calculated by averaging the parameter estimate (beta) for each condition across all 500 voxels.
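The voxel-selection logic can be sketched as follows (an illustrative NumPy sketch rather than the authors’ pipeline; in practice, selection was run per hemisphere on FSL contrast maps within the atlas-defined mask):

```python
import numpy as np

def individual_roi(contrast_map, anatomical_mask, n_voxels=250):
    """Select the n most strongly activated voxels of a localiser
    contrast (here, bodies > objects) within an anatomical mask.
    Returns a boolean ROI volume of the same shape."""
    # Exclude out-of-mask voxels by setting them to -inf
    vals = np.where(anatomical_mask, contrast_map, -np.inf).ravel()
    top = np.argpartition(vals, -n_voxels)[-n_voxels:]
    roi = np.zeros(vals.size, dtype=bool)
    roi[top] = True
    return roi.reshape(contrast_map.shape)
```

Fixing the voxel count (rather than a statistical threshold) equates ROI size across participants, which matters when comparing multivariate distance estimates between groups.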

Representational similarity analysis

To assess the hand–prosthesis–tool representation structure within the ROI, pairwise distances between conditions were calculated using a multivariate approach, generally known as representational similarity analysis [102]. Prior to performing the multivariate analysis, we first examined differences in univariate activity in the ROI, which could drive differences in the multivariate analysis. When comparing activity levels (averaged across all voxels) between controls and prosthesis users within this region, no significant differences were found for each of the image conditions of hands, tools, and prostheses (p > 0.1 for all; see S2 Table), indicating that the 2 groups did not activate this region significantly differently. We then continued with the multivariate analysis. For each participant, parameter estimates of the different conditions and GLM residuals of all voxels within the ROI were extracted from each run’s first-level analysis. To increase the reliability of the distance estimates, parameter estimates underwent multidimensional normalisation based on the voxels’ covariance matrix calculated from the GLM residuals. This was done to ensure that parameter estimates from noisier voxels would be down-weighted [48]. Cross-validated (leave-run-out) Mahalanobis distances (also known as LDC—linear discriminant contrast) [48,103] were then calculated between each pair of conditions. Analysis was run on an adapted version of the RSA Toolbox in MATLAB [103], customised for FSL [104]. Visualisation of the distances in a dendrogram was performed using the plotting functions available in the RSA Toolbox.
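A minimal sketch of the cross-validated Mahalanobis (LDC) distance for one condition pair, assuming per-run beta estimates and GLM residuals have already been extracted (illustrative Python, not the RSA Toolbox implementation; the regularisation term is our own addition for numerical stability):

```python
import numpy as np

def crossval_mahalanobis(betas, residuals):
    """Leave-one-run-out cross-validated Mahalanobis distance (LDC)
    between two conditions.
    betas:     (n_runs, 2, n_voxels) per-run parameter estimates
    residuals: (n_timepoints, n_voxels) GLM residuals, used to
               estimate the noise covariance that down-weights
               noisy voxels (multivariate noise normalisation)."""
    n_runs, _, n_voxels = betas.shape
    cov = np.cov(residuals, rowvar=False) + 1e-6 * np.eye(n_voxels)
    prec = np.linalg.inv(cov)
    folds = []
    for test in range(n_runs):
        train = np.delete(np.arange(n_runs), test)
        # Contrast (condition A - condition B) on independent data splits
        d_train = betas[train, 0].mean(0) - betas[train, 1].mean(0)
        d_test = betas[test, 0] - betas[test, 1]
        folds.append(d_train @ prec @ d_test / n_voxels)
    return float(np.mean(folds))
```

Because the two pattern contrasts come from independent runs, the expected distance is zero when the conditions do not differ, making the estimate unbiased (unlike a within-run Mahalanobis distance, which is inflated by noise).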

Hand-similarity and prosthesis-similarity indices

Two indices were calculated to test each of the aforementioned predictions—one based on embodiment and the other on the categorical structure of OTC. Each index’s formula was designed so that positive values support the prediction. Therefore, similar to GLM contrasts, each of the relevant pairwise distances was weighted with a positive multiplier if, under the specific prediction, that distance should grow (decreased similarity) in prosthesis users, and a negative multiplier if it should shrink (greater similarity). For instance, the embodiment hypothesis predicts that for both prostheses (active and cosmetic) the distance from tools would grow (become less similar), whereas the distance from hands would shrink (become more similar). Therefore, the formula used to calculate the Hand-Similarity index was (Tools↔CosmP + Tools↔ActP) − (Hands↔CosmP + Hands↔ActP), in which ‘↔’ indicates the distance between a pair of conditions. Using the same logic, the prosthesis-similarity index was calculated as a measurement of how much the 2 prosthesis conditions are represented as a separate cluster away from their native condition (cosmetic prostheses resembling hands and active prostheses resembling tools). The index was calculated using the following formula: 3(Hands↔CosmP + Tools↔ActP) − 2(Hands↔ActP + Tools↔CosmP + ActP↔CosmP); see Fig 3D for a visualisation of the formula. To control for individual differences in absolute distance values, both indices were standardised by the individuals’ distances between hands and tools (i.e., the residuals after accounting for the variance in the Hands↔Tools distance; see S3 Table for all analyses performed with the raw index values).
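The two index formulas translate directly to code; for instance (an illustrative Python sketch; the dictionary-of-distances input format and condition labels are our own):

```python
def hand_similarity(d):
    """Embodiment prediction: both prostheses closer to hands
    (distances shrink, negative weight) and further from tools
    (distances grow, positive weight) -> positive index."""
    return (d['Tools', 'CosmP'] + d['Tools', 'ActP']) \
         - (d['Hands', 'CosmP'] + d['Hands', 'ActP'])

def prosthesis_similarity(d):
    """Categorisation prediction: the two prosthesis conditions
    cluster together, away from their 'native' categories (cosmetic
    near hands, active near tools) -> positive index."""
    return 3 * (d['Hands', 'CosmP'] + d['Tools', 'ActP']) \
         - 2 * (d['Hands', 'ActP'] + d['Tools', 'CosmP'] + d['ActP', 'CosmP'])
```

As in GLM contrasts, the multipliers in the second formula (3 and 2) balance the number of positively and negatively weighted distance terms.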

As mentioned earlier (see ‘Stimuli’), 5 active prosthesis users viewed images of myoelectric prostheses as the active prosthesis condition while all other participants viewed mechanical hooks under that condition. Because this creates a possible bias in our analysis, we attempted to remedy it in two ways. The first was to replace all distances that involved the active prosthesis condition with the mean distances of the rest of the users’ group. In other words, before calculating the indices, we replaced the distances (Tools↔ActP, Hands↔ActP, ActP↔CosmP) for these 5 individuals with the mean distances of the remaining 27 users. The second was to remove these individuals from the analysis altogether.

Analysis by prosthesis type

To test the influence of prosthesis type on users’ prosthesis-similarity index, we repeated the same analysis as described above and compared the hand- and prosthesis-similarity indices between individuals using a cosmetic prosthesis (n = 13) and active prosthesis users (n = 9). The following participants were excluded from this analysis: (1) 5 individuals not using a prosthesis (usage time of 0 = never); (2) the 5 participants who viewed myoelectric prostheses as the active prosthesis condition mentioned above.

Own prosthesis representation

In this analysis we aimed to assess each user’s own prosthesis representation, defined by its distance from hands and tools. Our approach was designed to overcome 2 challenges: First, controls do not have an ‘own’ prosthesis; and second, the visual and operational features of different prostheses may vary significantly and need to be accounted for before similarity measures can be averaged across all prosthesis users. Therefore, for each participant, we normalised (divided) the ‘own’ prosthesis distance by the mean distance of a similar-looking/operating prosthesis as found in controls. This produced a measure that reflects the magnitude of the representational shift of an individual’s ‘own’ prosthesis from the average distance of the control participants. A value of one would therefore indicate no difference between controls and the user’s representation of their own prosthesis. For the 7 prosthesis users wearing an active prosthesis that had hand-like visual features (see Table 1 for a full breakdown of prostheses types), we repeated the analysis twice, standardising their distances once with controls’ active and once with controls’ cosmetic prosthesis distances. This allowed us to account for both visual and operational features.
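The normalisation described above amounts to a simple ratio (an illustrative sketch; the function name is ours):

```python
import numpy as np

def own_prosthesis_shift(own_dist, matched_control_dists):
    """Normalise a user's own-prosthesis distance (e.g., to hands)
    by the control group's mean distance for a similar-looking or
    similarly operating prosthesis. A value of 1.0 indicates no
    representational shift relative to controls."""
    return own_dist / np.mean(matched_control_dists)
```

Dividing by the matched control mean factors out distance differences that are driven by a particular prosthesis’s appearance or operation rather than by ownership.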

IPS ROI

The IPS ROI was generated using the Jülich Histological Atlas, including all voxels that have more than 30% probability of being within the grey matter of the IPS areas hIP1, hIP2, and hIP3 [52,53], in both hemispheres.

Statistical analysis

All statistical testing was performed using SPSS (version 24), with the exception of the Bayesian analysis which was run on JASP [105]. Comparisons between prosthesis users and two-handed controls were performed using a two-tailed Student t test. To test the relationship between the indices and individuals’ prosthesis use, a usage score, described above, was correlated with the indices using a two-tailed Pearson correlation. An ANCOVA with prosthesis usage as a covariate was used to test the contribution of several factors such as cause of limb loss and type of prosthesis used. Own prosthesis analyses were performed using a one-sample t test, comparing the mean to one as the control means were used to calculate the individual indices. To interpret key null results, we ran a Bayesian one-tailed t test. The Cauchy prior width was set at 0.707 (default). We interpreted the test based on the well-accepted criterion of a Bayes factor smaller than 1/3 [106,107] as supporting the null hypothesis. Corrections for multiple comparisons were included for exploratory analysis when resulting in significant differences, as indicated in the results section. To minimise flexible analysis, which could lead to p-hacking [108], only a limited set of factors, prespecified in the initial stages of the analysis or based on the reviewers’ comments, were tested in this manuscript. Further exploratory analysis on other recorded clinical factors can be conducted using the full demographic details: https://osf.io/kd2yh/.

Supporting information

S1 Text. Supplementary results: ‘Hand’ and ‘Tool’ ROI analysis.

Additional analyses conducted on ‘hand’ and ‘tool’ ROIs generated from the meta-analysis tool Neurosynth. ROI, region of interest

https://doi.org/10.1371/journal.pbio.3000729.s001

(DOCX)

S1 Table. Control participants demographics.

This table was taken from the supplementary material of van den Heiligenberg and colleagues, 2018, using the same controls cohort.

https://doi.org/10.1371/journal.pbio.3000729.s002

(XLSX)

S2 Table. Group comparison of average activity levels.

Group comparison of average activity levels between controls and prosthesis users within the visual body-selective ROI. Results are shown for both the bilateral ROI (500 voxels) and for each hemisphere separately (250 voxels each). ROI, region of interest

https://doi.org/10.1371/journal.pbio.3000729.s003

(XLSX)

S3 Table. Confirmatory additional analyses.

A summary table for analyses of hand-similarity index and prosthesis-cluster index: group comparisons (controls versus prosthesis users) and correlation with prosthesis usage. Including the results reported in the paper (bilateral ROI), results within the same ROI with the raw indices without controlling for the hand-tool distance (bilateral ROI raw), and for the results of the indices for each hemisphere separately. ROI, region of interest

https://doi.org/10.1371/journal.pbio.3000729.s004

(XLSX)

S4 Table. PCA of pairwise distances.

To explore which of the pairwise distances contributed to the underlying observed effect of prosthesis categorisation we ran a data-driven analysis (PCA) on the 5 distances of the one-handed group. Values in the table are the weights given to each distance within a component. The first component shows a ‘main effect’ of interindividual differences across participants, in which some individuals have overall larger distances than others across all condition pairs. In our calculated indices, we control for this effect by normalising the individual’s selectivity indices by their Hands ↔ Tools distance (see ‘Methods‘). The second component explains almost half of the remaining variance (after accounting for the interindividual differences in component 1). In the second component, individuals showing greater distances between the active prostheses and the tool condition also show greater similarity between the active prosthesis and the cosmetic prosthesis conditions. In other words, when the active prosthesis condition moves away from the tool category, it also tends to get closer to the cosmetic prostheses (as can be seen by the high weights and opposite signs of these 2 distances in the second component). This data-driven analysis provides further support for the hypothesised categorical shift of prosthesis representation. PCA, principal component analysis

https://doi.org/10.1371/journal.pbio.3000729.s005

(XLSX)
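The data-driven analysis described in S4 Table can be sketched with a plain SVD-based PCA. The subject count and distance values below are synthetic stand-ins (with a shared per-subject gain mimicking the ‘main effect’ of component 1), not the study’s measurements.

```python
import numpy as np

# Synthetic per-subject pairwise distances: a shared per-subject gain
# (analogous to component 1 in the table) scaling 5 condition-pair distances
rng = np.random.default_rng(1)
n_subjects, n_pairs = 21, 5
gain = rng.lognormal(0.0, 0.3, (n_subjects, 1))
distances = gain * rng.normal(1.0, 0.1, (n_subjects, n_pairs))

X = distances - distances.mean(axis=0)            # centre each distance
U, S, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD of centred data
explained = S**2 / np.sum(S**2)                   # variance explained per component
weights = Vt                                      # row k: weights of component k+1
```

With this construction the first component captures the shared gain (overall larger or smaller distances per subject), mirroring the interindividual ‘main effect’ the table describes.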

S1 Fig. Group probability maps for visual body-selective ROIs in control participants and prosthesis users.

All individual visual ROIs were superimposed per group, yielding corresponding probability maps. Warmer colours represent voxels that were included in greater numbers of individual ROIs. Data used to create this figure can be found at https://osf.io/4mw2t/. ROI, region of interest.

https://doi.org/10.1371/journal.pbio.3000729.s006

(TIF)

S2 Fig. Intercategorical shape dissimilarities.

(A) Two exemplars from the ‘hand’ and ‘active prosthesis’ categories. (B) All exemplars shown to each individual participant were submitted to a visual shape similarity analysis (Belongie and colleagues, 2002), in which intercategorical pairwise shape dissimilarity was assessed. (C) A histogram showing intercategorical dissimilarity of one participant’s cosmetic (blue) and active (red) prosthesis exemplars with respect to hand exemplars (all exemplars are available at https://osf.io/kd2yh/). As demonstrated in this example, the dissimilarity ranges were largely overlapping. (D) This intercategorical dissimilarity analysis was repeated for each participant (based on the specific prosthesis exemplars shown to them), and mean histogram values were averaged. As indicated in the resulting matrix, cosmetic and active prostheses did not show strong differences in dissimilarity, on average, likely due to the wide range of exemplar shapes used in the study data set. Data used to create this figure can be found at https://osf.io/4mw2t/.

https://doi.org/10.1371/journal.pbio.3000729.s007

(TIF)
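A minimal sketch of an intercategorical shape-dissimilarity comparison in the spirit of S2 Fig. The log-polar histogram below is a crude stand-in for the full shape-context descriptor and matching procedure of Belongie and colleagues (2002); the binning, shapes, and function names are illustrative assumptions, not the study’s implementation.

```python
import numpy as np

def shape_histogram(points, n_r=5, n_theta=12):
    """Log-polar histogram of contour points about their centroid --
    a crude stand-in for the full shape-context descriptor."""
    pts = points - points.mean(axis=0)
    r = np.hypot(pts[:, 0], pts[:, 1])
    theta = np.arctan2(pts[:, 1], pts[:, 0])
    r_edges = np.logspace(-1, 0, n_r + 1) * (r.max() + 1e-9)
    t_edges = np.linspace(-np.pi, np.pi, n_theta + 1)
    h, _, _ = np.histogram2d(np.clip(r, r_edges[0], None), theta,
                             bins=(r_edges, t_edges))
    return h.ravel() / h.sum()

def chi2_distance(h1, h2):
    """Chi-squared distance between two normalised histograms."""
    denom = h1 + h2
    mask = denom > 0
    return 0.5 * np.sum((h1[mask] - h2[mask]) ** 2 / denom[mask])

# Two toy contours: a circle versus a stretched ellipse
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.column_stack([np.cos(angles), np.sin(angles)])
ellipse = np.column_stack([2 * np.cos(angles), np.sin(angles)])

d_same = chi2_distance(shape_histogram(circle), shape_histogram(circle))
d_diff = chi2_distance(shape_histogram(circle), shape_histogram(ellipse))
```

Comparing every exemplar pair across categories with such a distance, and then histogramming the pairwise values per category, yields distributions analogous to panel C.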

S3 Fig. Pairwise distances in EBA.

(A) Pairwise distances between patterns of activation for hands, cosmetic prostheses, active prostheses, and tools. In the labels, ‘↔’ indicates the distance between a pair of conditions; within the plot, x marks the group mean. (B) Same as panel A, but with each distance standardised by the individual’s hands ↔ tools distance. (C) A table illustrating the direction of the effect predicted by each index. Data used to create this figure can be found at https://osf.io/4mw2t/. EBA, extrastriate body area.

https://doi.org/10.1371/journal.pbio.3000729.s008

(TIF)
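The standardisation applied in panel B of S3 Fig — dividing each pairwise distance by the same individual’s hands ↔ tools distance — can be sketched as follows, using invented distance values (the study’s actual distances are available on OSF).

```python
def normalise_by_hand_tool(dist, ref_key="Hands<->Tools"):
    """Standardise each pairwise distance by the individual's
    hands<->tools distance, so that Hands<->Tools maps to 1."""
    ref = dist[ref_key]
    return {pair: d / ref for pair, d in dist.items()}

# One hypothetical participant's cross-validated pattern distances (arbitrary units)
dist = {"Hands<->Tools": 0.8, "Hands<->Active": 0.5, "Hands<->Cosmetic": 0.4,
        "Active<->Tools": 0.6, "Active<->Cosmetic": 0.3}
norm = normalise_by_hand_tool(dist)
```

This removes the overall-scale differences between individuals (the ‘main effect’ identified by the PCA in S4 Table) before the indices are compared across groups.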

S4 Fig. Neurosynth ‘hand’ and ‘tool’ ROIs.

Using the Neurosynth association maps for the terms ‘hand’ (blue) and ‘tools’ (green), ROIs were defined from all significant voxels within the OTC. The ROIs are projected onto an inflated brain for visualisation. Surface and volume masks can be found at https://osf.io/4mw2t/. OTC, occipitotemporal cortex; ROI, region of interest.

https://doi.org/10.1371/journal.pbio.3000729.s009

(TIF)

S5 Fig. IPS analysis.

(A) Univariate activations in prosthesis users. The group-level univariate contrast (Active Prosthesis + Cosmetic Prosthesis) > Objects shows that the IPS is also activated. (B) The IPS region of interest was taken from the Juelich Histological Atlas (30% probability of hIP1, hIP2, and hIP3). (C) Hand (left) and tool (right) distances from users’ ‘own’ prosthesis in the IPS. Individual distances were normalised by the control group’s mean distance for a prosthesis with matching visual features (hand-likeness). A value of 1 indicates a hand/tool distance similar to controls. Users showed significantly greater distances between their own prosthesis and hands (t(25) = 10.11, p < 0.001), contrary to the embodiment hypothesis. A significant increase in the distance of the ‘own’ prosthesis from tools was also observed (t(25) = 4.62, p < 0.001). Data used to create this figure can be found at https://osf.io/4mw2t/. hIP, human intraparietal; IPS, intraparietal sulcus.

https://doi.org/10.1371/journal.pbio.3000729.s010

(TIF)
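The tests in panel C of S5 Fig compare normalised distances against a value of 1, which amounts to a one-sample t-test. A minimal sketch on invented values (the normalisation by the matched control-group mean is assumed to have already been applied; the study’s actual values are on OSF):

```python
import numpy as np

def one_sample_t(values, popmean=1.0):
    """t statistic for a one-sample test against `popmean` (df = n - 1)."""
    v = np.asarray(values, float)
    return (v.mean() - popmean) / (v.std(ddof=1) / np.sqrt(v.size))

# Hypothetical normalised own-prosthesis<->hand distances for 26 users,
# each already divided by the matched control-group mean (1 = control-like)
rng = np.random.default_rng(2)
normalised = rng.normal(1.4, 0.2, 26)
t = one_sample_t(normalised)   # positive t: farther from hands than controls
```

A positive t against a population mean of 1 corresponds to the reported result that users’ own prosthesis sits farther from hands (and from tools) than it does for controls.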

Acknowledgments

We thank the authors of our previous manuscript [29] and, in particular, Fiona van den Heiligenberg and Jody Culham for their contributions to experimental design and data collection and for helpful advice on data analysis. We thank Opcare for help in participant recruitment; Chris Baker and Stephania Bracci for their comments on the manuscript; and our participants and their families for their ongoing support of our research.

References

  1. Osborn LE, Dragomir A, Betthauser JL, Hunt CL, Nguyen HH, Kaliki RR, et al. Prosthesis with neuromorphic multilayered e-dermis perceives touch and pain. Sci Robot. 2018;3: eaat3818. pmid:32123782
  2. Soekadar SR, Witkowski M, Gómez C, Opisso E, Medina J, Cortese M, et al. Hybrid EEG/EOG-based brain/neural hand exoskeleton restores fully independent daily living activities after quadriplegia. Sci Robot. 2016;1: eaag3296.
  3. Penaloza CI, Nishio S. BMI control of a third arm for multitasking. Sci Robot. 2018;3: eaat1228.
  4. Jang CH, Yang HS, Yang HE, Lee SY, Kwon JW, Yun BD, et al. A Survey on Activities of Daily Living and Occupations of Upper Extremity Amputees. Ann Rehabil Med. 2011;35: 907. pmid:22506221
  5. Engdahl SM, Christie BP, Kelly B, Davis A, Chestek CA, Gates DH. Surveying the interest of individuals with upper limb loss in novel prosthetic control techniques. J Neuroeng Rehabil. 2015;12: 53. pmid:26071402
  6. Østlie K, Lesjø IM, Franklin RJ, Garfelt B, Skjeldal OH, Magnus P. Prosthesis rejection in acquired major upper-limb amputees: a population-based survey. Disabil Rehabil Assist Technol. 2012;7: 294–303. pmid:22112174
  7. Makin TR, de Vignemont F, Faisal AA. Neurocognitive barriers to the embodiment of technology. Nat Biomed Eng. 2017;1: 0014.
  8. Murray CD. Embodiment and Prosthetics. In: Gallagher P, Desmond D, MacLachlan M, editors. Psychoprosthetics: State of the Knowledge. London: Springer London; 2008. pp. 119–129. https://doi.org/10.1007/978-1-84628-980-4_9
  9. Beckerle P, Kirchner EA, Christ O, Kim S-P, Shokur S, Dumont AS, et al. Feel-Good Robotics: Requirements on Touch for Embodiment in Assistive Robotics. Front Neurorobot. 2018;12: 1–7. pmid:30618706
  10. Giummarra MJ, Gibson SJ, Georgiou-Karistianis N, Bradshaw JL. Mechanisms underlying embodiment, disembodiment and loss of embodiment. Neurosci Biobehav Rev. 2008;32: 143–160. pmid:17707508
  11. Rognini G, Petrini FM, Raspopovic S, Valle G, Granata G, Strauss I, et al. Multisensory bionic limb to achieve prosthesis embodiment and reduce distorted phantom limb perceptions. J Neurol Neurosurg Psychiatry. 2019;90: 833–836. pmid:30100550
  12. Hellman RB, Chang E, Tanner J, Helms Tillery SI, Santos VJ. A robot hand testbed designed for enhancing embodiment and functional neurorehabilitation of body schema in subjects with upper limb impairment or loss. Front Hum Neurosci. 2015;9: 1–10. pmid:25653611
  13. Tyler DJ. Neural interfaces for somatosensory feedback: bringing life to a prosthesis. Curr Opin Neurol. 2015;28: 574–581. pmid:26544029
  14. Pazzaglia M, Molinari M. The embodiment of assistive devices—from wheelchair to exoskeleton. Phys Life Rev. 2016;16: 163–175. pmid:26708357
  15. Longo MR, Sadibolova R, Tamè L. Embodying prostheses–how to let the body welcome assistive devices: Comment on “The embodiment of assistive devices—from wheelchair to exoskeleton” by M. Pazzaglia and M. Molinari. Phys Life Rev. 2016;16: 184–185. pmid:26830704
  16. Marasco PD, Hebert JS, Sensinger JW, Shell CE, Schofield JS, Thumser ZC, et al. Illusory movement perception improves motor control for prosthetic hands. Sci Transl Med. 2018;10: 1–13. pmid:29540617
  17. De Vignemont F. Embodiment, ownership and disownership. Conscious Cogn. 2011;20: 82–93. pmid:20943417
  18. Antfolk C, D’Alonzo M, Rosén B, Lundborg G, Sebelius F, Cipriani C. Sensory feedback in upper limb prosthetics. Expert Rev Med Devices. 2013;10: 45–54. pmid:23278223
  19. Tsakiris M. My body in the brain: A neurocognitive model of body-ownership. Neuropsychologia. 2010;48: 703–712. pmid:19819247
  20. Ehrsson HH, Rosén B, Stockselius A, Ragnö C, Köhler P, Lundborg G. Upper limb amputees can be induced to experience a rubber hand as their own. Brain. 2008;131: 3443–3452. pmid:19074189
  21. Marasco PD, Kim K, Colgate JE, Peshkin MA, Kuiken TA. Robotic touch shifts perception of embodiment to a prosthesis in targeted reinnervation amputees. Brain. 2011;134: 747–758. pmid:21252109
  22. Mulvey MR, Fawkner HJ, Radford HE, Johnson MI. Perceptual Embodiment of Prosthetic Limbs by Transcutaneous Electrical Nerve Stimulation. Neuromodulation Technol Neural Interface. 2012;15: 42–47. pmid:22151561
  23. D’Alonzo M, Clemente F, Cipriani C. Vibrotactile Stimulation Promotes Embodiment of an Alien Hand in Amputees With Phantom Sensations. IEEE Trans Neural Syst Rehabil Eng. 2015;23: 450–457. pmid:25051556
  24. Rosén B, Ehrsson HH, Antfolk C, Cipriani C, Sebelius F, Lundborg G. Referral of sensation to an advanced humanoid robotic hand prosthesis. Scand J Plast Reconstr Surg Hand Surg. 2009;43: 260–266. pmid:19863429
  25. Schmalzl L, Kalckert A, Ragnö C, Ehrsson HH. Neural correlates of the rubber hand illusion in amputees: A report of two cases. Neurocase. 2014;20: 407–420. pmid:23682688
  26. Collins KL, Guterstam A, Cronin J, Olson JD, Ehrsson HH, Ojemann JG. Ownership of an artificial limb induced by electrical brain stimulation. Proc Natl Acad Sci U S A. 2017;114: 166–171. pmid:27994147
  27. Graczyk EL, Resnik L, Schiefer MA, Schmitt MS, Tyler DJ. Home use of a neural-connected sensory prosthesis provides the functional and psychosocial experience of having a hand again. Sci Rep. 2018;8: 1–17. pmid:29311619
  28. Maimon-Mor RO, Obasi E, Lu J, Odeh N, Kirker S, MacSweeney M, et al. Communicative hand gestures as an implicit measure of artificial limb embodiment and daily usage. medRxiv. 2020; 2020.03.11.20033928.
  29. Van Den Heiligenberg FMZ, Orlov T, MacDonald SN, Duff EP, Henderson Slater D, Beckmann CF, et al. Artificial limb representation in amputees. Brain. 2018;141: 1422–1433. pmid:29534154
  30. Kriegeskorte N, Mur M, Ruff DA, Kiani R, Bodurka J, Esteky H, et al. Matching Categorical Object Representations in Inferior Temporal Cortex of Man and Monkey. Neuron. 2008;60: 1126–1141. pmid:19109916
  31. Orlov T, Makin TR, Zohary E. Topographic Representation of the Human Body in the Occipitotemporal Cortex. Neuron. 2010;68: 586–600. pmid:21040856
  32. Bracci S, Caramazza A, Peelen M V. Representational Similarity of Body Parts in Human Occipitotemporal Cortex. J Neurosci. 2015;35: 12977–12985. pmid:26400929
  33. Bracci S, Peelen M V. Body and object effectors: The organization of object representations in high-level visual cortex reflects body-object interactions. J Neurosci. 2013;33: 18247–18258. pmid:24227734
  34. Tsao DY, Livingstone MS. Mechanisms of Face Perception. Annu Rev Neurosci. 2008;31: 411–437. pmid:18558862
  35. Downing PE, Peelen M V. Body selectivity in occipitotemporal cortex: Causal evidence. Neuropsychologia. 2016;83: 138–148. pmid:26044771
  36. Astafiev S V, Stanley CM, Shulman GL, Corbetta M. Extrastriate body area in human occipital cortex responds to the performance of motor actions. Nat Neurosci. 2004;7: 542–548. pmid:15107859
  37. Tal Z, Geva R, Amedi A. The origins of metamodality in visual object area LO: Bodily topographical biases and increased functional connectivity to S1. Neuroimage. 2016;127: 363–375. pmid:26673114
  38. Beauchamp MS, LaConte S, Yasar N. Distributed representation of single touches in somatosensory and visual cortex. Hum Brain Mapp. 2009;30: 3163–3171. pmid:19224618
  39. Limanowski J, Lutti A, Blankenburg F. The extrastriate body area is involved in illusory limb ownership. Neuroimage. 2014;86: 514–524. pmid:24185016
  40. Limanowski J, Blankenburg F. Integration of visual and proprioceptive limb position information in human posterior parietal, premotor, and extrastriate cortex. J Neurosci. 2016;36: 2582–2589. pmid:26937000
  41. Gentile G, Guterstam A, Brozzoli C, Henrik Ehrsson H. Disintegration of multisensory signals from the real hand reduces default limb self-attribution: An fMRI study. J Neurosci. 2013;33: 13350–13366. pmid:23946393
  42. Bracci S, Cavina-Pratesi C, Ietswaart M, Caramazza A, Peelen M V. Closely overlapping responses to tools and hands in left lateral occipitotemporal cortex. J Neurophysiol. 2012;107: 1443–1456. pmid:22131379
  43. Reddy L, Kanwisher N. Coding of visual objects in the ventral stream. Curr Opin Neurobiol. 2006;16: 408–414. pmid:16828279
  44. Gomez J, Barnett M, Grill-Spector K. Extensive childhood experience with Pokémon suggests eccentricity drives organization of visual cortex. Nat Hum Behav. 2019; 1. pmid:31061489
  45. Lingnau A, Downing PE. The lateral occipitotemporal cortex in action. Trends Cogn Sci. 2015;19: 268–77. pmid:25843544
  46. Peelen M V., Downing PE. The neural basis of visual body perception. Nat Rev Neurosci. 2007;8: 636–648. pmid:17643089
  47. Downing PE, Jiang Y, Shuman M, Kanwisher N. A cortical area selective for visual processing of the human body. Science. 2001;293: 2470–2473. pmid:11577239
  48. Walther A, Nili H, Ejaz N, Alink A, Kriegeskorte N, Diedrichsen J. Reliability of dissimilarity measures for multi-voxel pattern analysis. Neuroimage. 2016;137: 188–200. pmid:26707889
  49. Chao LL, Martin A. Representation of Manipulable Man-Made Objects in the Dorsal Stream. Neuroimage. 2000;12: 478–484. pmid:10988041
  50. Wesselink DB, van den Heiligenberg FM, Ejaz N, Dempsey-Jones H, Cardinali L, Tarall-Jozwiak A, et al. Obtaining and maintaining cortical hand representation as evidenced from acquired and congenital handlessness. Elife. 2019;8: 1–19. pmid:30717824
  51. Yarkoni T, Poldrack RA, Nichols TE, Van Essen DC, Wager TD. Large-scale automated synthesis of human functional neuroimaging data. Nat Methods. 2011;8: 665–670. pmid:21706013
  52. Choi HJ, Zilles K, Mohlberg H, Schleicher A, Fink GR, Armstrong E, et al. Cytoarchitectonic identification and probabilistic mapping of two distinct areas within the anterior ventral bank of the human intraparietal sulcus. J Comp Neurol. 2006;495: 53–69. pmid:16432904
  53. Scheperjans F, Eickhoff SB, Hömke L, Mohlberg H, Hermann K, Amunts K, et al. Probabilistic maps, morphometry, and variability of cytoarchitectonic areas in the human superior parietal cortex. Cereb Cortex. 2008;18: 2141–2157. pmid:18245042
  54. Hellman RB, Chang E, Tanner J, Helms Tillery SI, Santos VJ. A Robot Hand Testbed Designed for Enhancing Embodiment and Functional Neurorehabilitation of Body Schema in Subjects with Upper Limb Impairment or Loss. Front Hum Neurosci. 2015;9: 26. pmid:25745391
  55. Valle G, Mazzoni A, Iberite F, D’Anna E, Strauss I, Granata G, et al. Biomimetic Intraneural Sensory Feedback Enhances Sensation Naturalness, Tactile Sensitivity, and Manual Dexterity in a Bidirectional Prosthesis. Neuron. 2018;100: 37–45.e7. pmid:30244887
  56. Makin TR, Holmes NP, Ehrsson HH. On the other hand: Dummy hands and peripersonal space. Behav Brain Res. 2008;191: 1–10. pmid:18423906
  57. Ehrsson HH. Multisensory processes in body ownership. In: Sathian K, Ramachandran VS, editors. Multisensory Perception: From Laboratory to Clinic. Elsevier; 2020. pp. 179–200. https://doi.org/10.1016/b978-0-12-812492-5.00008-5
  58. Macdonald S, van den Heiligenberg F, Makin T, Culham J. Videos are more effective than pictures at localizing tool- and hand-selective activation in fMRI. J Vis. 2017;17: 991.
  59. Striem-Amit E, Vannuscorps G, Caramazza A. Sensorimotor-independent development of hands and tools selectivity in the visual cortex. Proc Natl Acad Sci U S A. 2017;114: 4787–4792. pmid:28416679
  60. Hahamy A, Macdonald SN, van den Heiligenberg FMZ, Kieliba P, Emir U, Malach R, et al. Representation of Multiple Body Parts in the Missing-Hand Territory of Congenital One-Handers. Curr Biol. 2017;27: 1350–1355. pmid:28434861
  61. Graziano MSA, Aflalo TN. Mapping behavioral repertoire onto the cortex. Neuron. 2007;56: 239–251. pmid:17964243
  62. Seger CA, Miller EK. Category Learning in the Brain. Annu Rev Neurosci. 2010;33: 203–219. pmid:20410144
  63. Braunlich K, Liu Z, Seger CA. Occipitotemporal Category Representations Are Sensitive to Abstract Category Boundaries Defined by Generalization Demands. J Neurosci. 2017;37: 7631–7642. pmid:28674173
  64. Op de Beeck HP, Pillet I, Ritchie JB. Factors Determining Where Category-Selective Areas Emerge in Visual Cortex. Trends Cogn Sci. 2019;23: 784–797. pmid:31327671
  65. Braunlich K, Love BC. Occipitotemporal representations reflect individual differences in conceptual knowledge. J Exp Psychol Gen. 2018; advance online publication. pmid:30382719
  66. Harel A. What is special about expertise? Visual expertise reveals the interactive nature of real-world object recognition. Neuropsychologia. 2016;83: 88–99. pmid:26095002
  67. Martens F, Bulthé J, van Vliet C, Op de Beeck H. Domain-general and domain-specific neural changes underlying visual expertise. Neuroimage. 2018;169: 80–93. pmid:29223739
  68. McGugin RW, Van Gulick AE, Tamber-Rosenau BJ, Ross DA, Gauthier I. Expertise Effects in Face-Selective Areas are Robust to Clutter and Diverted Attention, but not to Competition. Cereb Cortex. 2015;25: 2610–2622. pmid:24682187
  69. Bilalić M, Grottenthaler T, Nägele T, Lindig T. The Faces in Radiological Images: Fusiform Face Area Supports Radiological Expertise. Cereb Cortex. 2016;26: 1004–1014. pmid:25452573
  70. Ross DA, Tamber-Rosenau BJ, Palmeri TJ, Zhang J, Xu Y, Gauthier I. High-resolution Functional Magnetic Resonance Imaging Reveals Configural Processing of Cars in Right Anterior Fusiform Face Area of Car Experts. J Cogn Neurosci. 2018;30: 973–984. pmid:29561239
  71. Tan DW, Schiefer MA, Keith MW, Anderson JR, Tyler J, Tyler DJ. A neural interface provides long-term stable natural touch perception. Sci Transl Med. 2014;6: 257ra138. pmid:25298320
  72. Philip BA, Frey SH. Compensatory Changes Accompanying Chronic Forced Use of the Nondominant Hand by Unilateral Amputees. J Neurosci. 2014;34: 3622–3631. pmid:24599461
  73. Stoeckel MC, Seitz RJ, Buetefisch CM. Congenitally altered motor experience alters somatotopic organization of human primary motor cortex. Proc Natl Acad Sci U S A. 2009;106: 2395–2400.
  74. Makin TR, Cramer AO, Scholz J, Hahamy A, Henderson Slater D, Tracey I, et al. Deprivation-related and use-dependent plasticity go hand in hand. Elife. 2013;2: 1–15.
  75. Hahamy A, Sotiropoulos SN, Henderson Slater D, Malach R, Johansen-Berg H, Makin TR. Normalisation of brain connectivity through compensatory behaviour, despite congenital hand absence. Elife. 2015;4. pmid:25562885
  76. Striem-Amit E, Vannuscorps G, Caramazza A. Plasticity based on compensatory effector use in the association but not primary sensorimotor cortex of people born without hands. Proc Natl Acad Sci U S A. 2018;115: 7801–7806. pmid:29997174
  77. Yu XJ, He HJ, Zhang QW, Zhao F, Zee CS, Zhang SZ, et al. Somatotopic reorganization of hand representation in bilateral arm amputees with or without special foot movement skill. Brain Res. 2014;1546: 9–17. pmid:24373804
  78. Mahon BZ, Caramazza A. What drives the organization of object knowledge in the brain? The distributed domain-specific hypothesis. Trends Cogn Sci. 2011;15: 97–103. pmid:21317022
  79. Heimler B, Striem-Amit E, Amedi A. Origins of task-specific sensory-independent organization in the visual and auditory brain: neuroscience evidence, open questions and clinical implications. Curr Opin Neurobiol. 2015;35: 169–177. pmid:26469211
  80. Hahamy A, Makin TR. Remapping in cerebral and cerebellar cortices is not restricted by somatotopy. J Neurosci. 2019; 2599–18. pmid:31611305
  81. Peelen M V, Bracci S, Lu X, He C, Caramazza A, Bi Y. Tool Selectivity in Left Occipitotemporal Cortex Develops without Vision. J Cogn Neurosci. 2013;25: 1225–1234. pmid:23647514
  82. He C, Peelen M V., Han Z, Lin N, Caramazza A, Bi Y. Selectivity for large nonmanipulable objects in scene-selective visual cortex does not require visual experience. Neuroimage. 2013;79: 1–9. pmid:23624496
  83. Striem-Amit E, Dakwar O, Reich L, Amedi A. The Large-Scale Organization of “Visual” Streams Emerges Without Visual Experience. Cereb Cortex. 2012;22: 1698–1709. pmid:21940707
  84. Kitada R, Yoshihara K, Sasaki AT, Hashiguchi M, Kochiyama T, Sadato N. The brain network underlying the recognition of hand gestures in the blind: The supramodal role of the extrastriate body area. J Neurosci. 2014;34: 10096–10108. pmid:25057211
  85. Striem-Amit E, Amedi A. Visual Cortex Extrastriate Body-Selective Area Activation in Congenitally Blind People “Seeing” by Using Sounds. Curr Biol. 2014;24: 687–692. pmid:24613309
  86. Bensmaia SJ, Miller LE. Restoring sensorimotor function through intracortical interfaces: progress and looming challenges. Nat Rev Neurosci. 2014;15: 313–325. pmid:24739786
  87. de Oliveira Barata S, Clode D, Taylor J, Elias H. Vine Arm. In: The Alternative Limb Project [Internet]. 2017. Available from: http://www.thealternativelimbproject.com/project/vine/
  88. van den Heiligenberg FMZ, Yeung N, Brugger P, Culham JC, Makin TR. Adaptable Categorization of Hands and Tools in Prosthesis Users. Psychol Sci. 2017;28: 395–398. pmid:28095186
  89. Uswatte G, Taub E, Morris D, Light K, Thompson PA. The Motor Activity Log-28: assessing daily use of the hemiparetic arm after stroke. Neurology. 2006;67: 1189–94. pmid:17030751
  90. Saxe R, Jamal N, Powell L. My body or yours? The effect of visual perspective on cortical body representations. Cereb Cortex. 2006;16: 178–182. pmid:15858162
  91. Myers A, Sowden PT. Your hand or mine? The extrastriate body area. Neuroimage. 2008;42: 1669–1677. pmid:18586108
  92. Belongie S, Malik J, Puzicha J. Shape Matching and Object Recognition Using Shape Contexts. IEEE Trans Pattern Anal Mach Intell. 2002;24: 509–522.
  93. Uğurbil K, Xu J, Auerbach EJ, Moeller S, Vu AT, Duarte-Carvajalino JM, et al. Pushing spatial and temporal resolution for functional and diffusion MRI in the Human Connectome Project. Neuroimage. 2013;80: 80–104. pmid:23702417
  94. Greve DN, Fischl B. Accurate and robust brain image alignment using boundary-based registration. Neuroimage. 2009;48: 63–72. pmid:19573611
  95. Jenkinson M, Smith S. A global optimisation method for robust affine registration of brain images. Med Image Anal. 2001;5: 143–156. pmid:11516708
  96. Jenkinson M, Bannister P, Brady M, Smith S. Improved Optimization for the Robust and Accurate Linear Registration and Motion Correction of Brain Images. Neuroimage. 2002;17: 825–841. pmid:12377157
  97. Andersson JLR, Jenkinson M, Smith S. Non-linear registration, aka spatial normalisation. FMRIB Technical Report TR07JA2. 2007.
  98. Andersson JLR, Jenkinson M, Smith S. Non-linear optimisation. FMRIB Technical Report TR07JA1. 2007.
  99. Smith SM. Fast robust automated brain extraction. Hum Brain Mapp. 2002;17: 143–155. pmid:12391568
  100. Woolrich MW, Ripley BD, Brady M, Smith SM. Temporal autocorrelation in univariate linear modeling of FMRI data. Neuroimage. 2001;14: 1370–86. pmid:11707093
  101. Desikan RS, Ségonne F, Fischl B, Quinn BT, Dickerson BC, Blacker D, et al. An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. Neuroimage. 2006;31: 968–980. pmid:16530430
  102. Diedrichsen J, Kriegeskorte N. Representational models: A common framework for understanding encoding, pattern-component, and representational-similarity analysis. Cichy R, editor. PLoS Comput Biol. 2017;13: e1005508. pmid:28437426
  103. Nili H, Wingfield C, Walther A, Su L, Marslen-Wilson W, Kriegeskorte N. A Toolbox for Representational Similarity Analysis. PLoS Comput Biol. 2014;10. pmid:24743308
  104. Wesselink DB, Maimon-Mor RO. RSA toolbox extension for FSL. 2018.
  105. JASP Team. JASP. 2019.
  106. Wetzels R, Matzke D, Lee MD, Rouder JN, Iverson GJ, Wagenmakers EJ. Statistical evidence in experimental psychology: An empirical comparison using 855 t tests. Perspect Psychol Sci. 2011;6: 291–298. pmid:26168519
  107. Dienes Z. Using Bayes to get the most out of non-significant results. Front Psychol. 2014;5: 1–17. pmid:24474945
  108. Makin TR, Orban de Xivry JJ. Ten common statistical mistakes to watch out for when writing or reviewing a manuscript. Elife. 2019;8: 1–13. pmid:31596231