Fig 1.
Proposed avatar with left- and right-side sharing between two people.
(A) A conventional avatar controlled by a single individual. (B) A shared, or co-embodiment, avatar is controlled by two individuals, whose movements are averaged and reflected in the shared avatar. (C) The proposed joint avatar, in which different body parts are controlled by different individuals. (D) Example posture of our joint avatar and of the two individuals controlling it. The figures were created using Unity 2017.4.1f1 (https://unity3d.com/get-unity/download/archive).
Fig 2.
The participants worked in dyads and stood back-to-back, resting against two ‘support poles’. They wore the brace on their backs, similar to wearing a backpack. The brace consisted of four stiff horizontal rods set between the backs of the two bodies it connected. The participants worked in two conditions. (A) In the Tied condition, the backs of the partners were connected by the brace, ensuring (C) that any body twists made by one participant during a reaching movement were transmitted to the other. (B) In the Separated condition, the shoulders of the participants were connected to a mannequin. (D) Thus, while the participants still saw their partner's arm movements, they did not receive any passive upper-body movements corresponding to those movements. The figures were created using Blender 2.93.4 (https://www.blender.org/download/lts/2-93/) and Unity 2017.4.1f1 (https://unity3d.com/get-unity/download/archive).
Fig 3.
Sense of body ownership towards the controlled and non-controlled arms of the joint virtual avatar in Tied and Separated conditions.
The ownership for the non-controlled arm in the Tied condition was significantly higher than the ownership for the non-controlled arm in the Separated condition. In both Tied and Separated conditions, the ownership for the controlled arm was higher than the ownership for the non-controlled arm.
Fig 4.
Sense of agency towards the controlled and non-controlled arms of the joint virtual avatar in Tied and Separated conditions.
The agency for the non-controlled arm in the Tied condition was significantly higher than the agency for the non-controlled arm in the Separated condition. In both Tied and Separated conditions, the agency for the controlled arm was higher than the agency for the non-controlled arm.
Fig 5.
(A) In the Tied condition, the pointed shoulder position drifted significantly from 0 towards the virtual shoulder, while the mean in the Separated condition was not significantly different from 0, suggesting a drift towards the virtual shoulder position only after the Tied condition. However, the difference in drift between the two conditions did not reach significance. No significant changes were observed in (B) Test 2 or (C) Test 3.
Fig 6.
Area in which targets appeared in the VE for participants controlling left (red) and right (blue) sides.
Targets appeared inside the 2D squares indicated by dotted outlines. Targets for the right participant appeared towards the left side of the joint avatar, and targets for the left participant appeared towards the right side. Targets inside the blue square were symmetrical to those inside the red square, balancing the amount of work done by each participant.
Fig 7.
Test 1—Visually measuring the position of the real shoulder.
A. A wall of height 3 m appeared 1 m in front of each participant, parallel to their bodies (shoulders), and the avatar was made invisible. On the wall, a horizontal straight line appeared at a random height within ±10 cm vertically of the participant's shoulder position (shoulder height was read from the motion capture marker attached to the participant's shoulder). A controller was handed to the controlled hand of each participant, and they were asked to point, on the line, to the position corresponding to the edge of their non-controlled shoulder. The figure shows the ideal case, in which the right participant points perfectly at the edge of their left shoulder. The task was mirrored for the left participant, who had to use their left hand to point to their right shoulder.

B. Test 2—Measuring the width of the real body. A wall of height 3 m appeared 2 m in front of the participants and the avatar was made invisible. On the wall, a door of height 2 m and width 30 cm appeared within ±10 cm horizontally of the center of the body. Using the touchpad buttons of the controllers given to them, the participants were asked to adjust the width of the door to match the width of their real bodies. This figure demonstrates a perfectly matched scenario.

C. Test 3—Measuring the proprioceptive position of the shoulder. During this test, the HMD was blacked out so that the participants could see neither the VR scene nor their real bodies. They were asked to move their controlled hand so as to place the motion capture marker on its knuckle just in front of the edge of the non-controlled shoulder of their real bodies (participants were asked to memorize the position of this marker, attached to the knuckle of the controlled hand, before the experiment started). They were instructed to make the localization in one smooth movement and not to touch or search for the shoulder. This figure demonstrates a perfectly pointed scenario.
Fig 8.
Design flow of the experiment.
First, the participants performed the three proprioception tests described in Fig 7, reporting the location of their real non-controlled shoulder and the width of their body before the reaching task. After this pre-test, the reaching task was performed for 5–7 minutes (100 target reaches in total), and a session similar to the pre-test was carried out immediately afterwards to measure proprioception of the real non-controlled shoulder. The participants then answered an 8-item questionnaire, which we used to calculate the senses of agency and ownership. After a 30-minute break, they performed a similar session in the other condition (Tied or Separated).