
## Abstract

Motor control is a challenging task for the central nervous system, since it involves redundant degrees of freedom, nonlinear dynamics of actuators and limbs, as well as noise. When an action is carried out, which factors does the nervous system consider to determine the appropriate set of muscle forces among redundant degrees of freedom? Important factors determining motor output likely encompass effort and the resulting motor noise. However, the tasks used in many previous motor control studies could not identify these two factors uniquely, as signal-dependent noise monotonically increases as a function of the effort. To address this, a recent paper introduced a force control paradigm involving one finger in each hand that can disambiguate these two factors. It showed that the central nervous system considers both force noise and amplitude, with a larger weight on the absolute force and lower weights on both noise and normalized force. While these results are valid for the relatively low force range considered in that paper, the magnitude of the force shared between the fingers for large forces is not known. This paper investigates this question experimentally, and develops an appropriate Markov chain Monte Carlo method in order to estimate the weightings given to these factors. Our results demonstrate that the force sharing strongly depends on the force level required, so that for higher force levels the normalized force is considered as much as the absolute force, whereas the role of noise minimization becomes negligible.

**Citation: **Kolossiatis M, Charalambous T, Burdet E (2016) How Variability and Effort Determine Coordination at Large Forces. PLoS ONE 11(3):
e0149512.
doi:10.1371/journal.pone.0149512

**Editor: **Robert J. van Beers,
VU University Amsterdam, NETHERLANDS

**Received: **August 19, 2015; **Accepted: **February 1, 2016; **Published: ** March 2, 2016

**Copyright: ** © 2016 Kolossiatis et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

**Data Availability: **All relevant data are within the paper and its Supporting Information files.

**Funding: **This study was supported by the European Commission grants EU-FP7 HUMOUR (ICT 231554), CONTEST (ITN 317488), ICT-601003 BALANCE, ICT-611626 SYMBITRON and EU-H2020 COGIMON (ICT 644727). MK was supported by travel grants from the University of Kent and Lancaster University. No additional external funding was received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

**Competing interests: ** The authors have declared that no competing interests exist.

## Introduction

Motor control involves the coordination of multiple effectors (muscles, joints and limbs) for the implementation of a task. Even the most basic movements, such as grasping and reaching, can be performed in many ways because the human body uses more degrees-of-freedom (DoF) than needed [1]: the number of effectors involved exceeds the dimensionality of the task requirements. However, many tasks are shown to be consistently implemented via a narrow set of options. Based on this observation, a fundamental research question is how the central nervous system (CNS) selects a particular set of movements among the vast set of available options.

Several theories have attempted to answer this question. Some suggest that there is an inherent set of constraints in the CNS such that certain combinations of effectors are stable for certain types of task (see, for example, [2–5] and references therein), thus restricting the options considerably and making the problem tractable. Other theories suggest that coordination is achieved among multiple effectors by setting common parameters during the planning process [6]. On the other hand, many behavioral goals can be uniquely specified by a control policy obtained by minimizing some cost function, suggesting that the CNS performs some sort of optimal control (see, for example, [7]).

Approaches based on optimal control theory could replicate patterns of reaching movements observed in the human arm [8] as well as account for the structure of the force variability in finger movement [9]. There are several factors that affect motor coordination and for which optimal control has been proposed, such as jerk, torque, torque change and energy; however, recent works have shown that the most representative factors are (i) variability of motor output, and (ii) effort involved [7]. As pointed out in [10], the predictions concerning the temporal shape of the optimal movement of these two costs are similar. This is expected since the requirement to reduce variability under signal-dependent noise leads to a term in the cost function that penalizes the sum of squared motor commands over the movement, the same term usually used for penalizing the effort involved [11]. As a result, for motor behaviors with temporal redundancy, it is hard to dissociate the costs for variability of motor output and effort involved.

An experimental paradigm was developed in [10] in which the influence of these two factors could be dissociated by studying how subjects combined the total force of two fingers, one from each hand. As the deviation due to motor noise grows linearly but with a different gradient for each finger, it was possible to observe how the CNS considers effort and variability by combining different fingers. The results demonstrated that the absolute force determined over 70% of how the subjects combined the fingers, while the influence of the normalized force (i.e., the force divided by the maximum strength of the effectors) and of the force variability counted for less than 15% each.

The question addressed in this study is whether the cost function determining how the effort is shared between the effectors varies with the applied force level. In particular, the highest force level required in [10] was limited to 16 Newtons (N), while subjects can generally exert forces exceeding 30 N. Would the nervous system modify the way it considers effort at high force levels? Our hypothesis is that during the maintenance of a *relatively large* force level, the distribution of forces, when normalized by the individual maxima, becomes proportionate to the relative strengths of the fingers. The rationale behind this hypothesis is that at large force levels people would aim to distribute the difficulty of the task proportionally, to avoid uncomfortable situations where one part (here one of the fingers) exerts maximum effort while the other expends little. To examine this hypothesis we carried out an experiment similar to that in [10], first confirming the results for the small range of forces used in that study. Extending the range of applied forces up to 28 N (a level that was feasible for all participants), our results show that the participants largely changed their cost function with the required force level; in particular, the normalized effort becomes a significant factor in the sharing of effort between the effectors.

## Methods

### Participants

In this experiment, 14 adults (on average 23 years old with standard deviation 4 years) without known sensorimotor impairment from the postgraduate population of Imperial College London participated. All participants were right handed and 4 of them were females. The experiment was specifically approved by the Ethics Committee of Imperial College London, and each subject was informed of the details of the procedure of this study and signed a consent form prior to starting it.

### Apparatus

Participants sat in front of a computer monitor with forearms supported on a flat desktop. Two force transducers (Phidgets sensors, with a maximum load of 20 *kg* and a repeatability error of at most 0.1 *N*) integrated into keyboard buttons were placed on the table in front of the participant. Participants kept their wrists on the table throughout whilst pressing the buttons.

Participants received continuous visual feedback about the total force produced as well as the individual absolute force levels via an array of LEDs, as shown in Fig 1. In contrast to [10], where only the sum of forces in the two fingers was displayed, in this experiment the individual forces were also displayed, as we wanted to investigate whether this would affect the cost function with respect to [10]. Note that the instructions were to concentrate on the total force produced. The target force level was represented on screen by two lines on either side of the total force bar, such that the participant could easily identify the target force and determine his/her error exactly throughout the process. Force was sampled at a rate of 200 Hz.

The subjects were required to match the goal force level (a horizontal line in the central bar) as accurately as possible with the sum of the two fingers' force levels (the sum is given by the LEDs in the central bar, and the individual force levels of the left and right hands are given by the left and right bars, respectively).

### Experimental trials

The trials were divided into two parts, the *unimanual* and the *bimanual* trials, carried out on consecutive days. In order to avoid fatigue, participants had regular breaks of about a minute between tasks; in addition, participants took a break of a few minutes when changing fingers.

#### Unimanual trials.

For the unimanual trials the force was recorded over 7 *s* while the participant tried to maintain forces of 2, 4, 6, … N, increasing in 2 N steps up to the force that the participant could no longer maintain for 7 *s*. The trials were executed for the index and little fingers of both the right and left hands. The participants were instructed to press and hold the force transducer and try to reach a line on the screen indicating that they had reached the required force.

#### Bimanual trials.

For the bimanual trials we recorded the forces over 7 *s* for all 4 possible combinations of the index and little fingers, using a transducer for each finger. For all but the combination of both index fingers, the participant tried to maintain forces of 2, 4, 6 N and so on, in 2 N steps, until the participant's maximum force was reached. For the combination of both index fingers, the participant tried to maintain all forces for 7 *s* up to a force level of 28 *N*. We did not exceed 28 *N*, since few participants could pass this limit and the data would thus be limited.

### Optimal control model

Let *x*_{i} denote the force of finger *i*, with mean equal to the motor command *u*_{i}; *k*_{i} is the coefficient of variation determined by minimizing the Mean Squared Error (MSE) for finger *i* over all forces (i.e., *σ*(*x*_{i}) = *k*_{i}*u*_{i}) and *MVC*_{i} is its maximum voluntary contraction. Finally, *g* denotes the target force level using both fingers. The association between *k*_{i} and *MVC*_{i} is weak (see subsection MVC measurement), so we include both these effects in the cost function. We use a method that attaches weights λ, *μ*, and *ν* to the cost terms that may affect the decision of the participants. These weights correspond to the non-normalized effort, the normalized effort and the squared error, respectively. Therefore, the cost function of each participant is the following:

*J*(*u*_{i}, *u*_{j}) = λ*E*_{a} + *μE*_{n} + *νE*_{e},   (1)

where

*E*_{a} = *u*_{i}^{2} + *u*_{j}^{2},  *E*_{n} = (*u*_{i}/*MVC*_{i})^{2} + (*u*_{j}/*MVC*_{j})^{2},   (2)

*E*_{e} = E[(*x*_{i} + *x*_{j} − *g*)^{2}] = (*u*_{i} + *u*_{j} − *g*)^{2} + *k*_{i}^{2}*u*_{i}^{2} + *k*_{j}^{2}*u*_{j}^{2},   (3)

and λ, *μ*, and *ν* are the normalized weights (so that the estimates of λ, *μ*, and *ν* will be equal in the case the effect of the non-normalized effort, the normalized effort and the squared error on the optimal solution are the same).

In order to estimate the weight *ν* of the expected squared error, the coefficient of variation (*k*) for each finger of each participant is calculated (more details in subsection Noise measurement). In addition, in order to estimate the weight *μ* of the normalized effort cost, the maximum voluntary contraction (MVC) for each finger of each participant is calculated (more details in subsection MVC measurement). Based on the cost function used, the optimal force command was derived in [10]; it is given as a proposition below.

**Proposition 1**. *For finger i working with finger j, the optimal force command* *u*_{i}^{*} *is given by*

*u*_{i}^{*} = *νgc*_{j}/(*c*_{i}*c*_{j} + *ν*(*c*_{i} + *c*_{j})),   (4)

*where ν is the weight of the expected squared error, and*

*c*_{i} = λ + *μ*/*MVC*_{i}^{2} + *νk*_{i}^{2},  *c*_{j} = λ + *μ*/*MVC*_{j}^{2} + *νk*_{j}^{2},   (5)

*where* λ, *μ*, *k*_{i}, *k*_{j}, *MVC*_{i} *and* *MVC*_{j} *are as defined above*.

*Similar results hold for finger j when working with finger i*.

From the optimal force commands, the optimal force distribution is given in the corollary below.

**Corollary 1**. *For finger i working with finger j, the optimal force distribution* *p*_{i} *is given by*

*p*_{i} = *u*_{i}^{*}/(*u*_{i}^{*} + *u*_{j}^{*}) = *c*_{j}/(*c*_{i} + *c*_{j}),   (6)

*with* *c*_{i} = λ + *μ*/*MVC*_{i}^{2} + *νk*_{i}^{2} *and* *c*_{j} *defined analogously, as in Proposition 1*.

In the rest of this paper we focus on analyzing the force distribution for the combination of the left little and right index fingers (*i* and *j* in the above equations, respectively). The reason we focus on these is that this combination exhibits the largest difference in absolute individual performance of the two fingers, and it is therefore easier to distinguish between the effect of the normalized and the absolute force. However, analysis with other finger combinations showed similar tendencies.
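As a concrete illustration of the optimal command, setting the gradient of the quadratic cost described above (λ on squared commands, *μ* on squared normalized commands, *ν* on the expected squared error under signal-dependent noise) to zero yields a 2×2 linear system in the two motor commands. The sketch below solves that system numerically; all parameter values (weights, *k*, MVC expressed relative to a reference force) are purely illustrative and are not the paper's estimates.

```python
import numpy as np

def optimal_commands(lam, mu, nu, k, mvc, g):
    """Minimize lam*sum(u^2) + mu*sum((u/MVC)^2)
    + nu*[(u1 + u2 - g)^2 + sum((k*u)^2)] over the two commands.
    Setting the gradient to zero gives a 2x2 linear system."""
    c = lam + mu / mvc**2 + nu * k**2        # per-finger quadratic terms
    A = np.array([[c[0] + nu, nu],
                  [nu, c[1] + nu]])
    return np.linalg.solve(A, np.array([nu * g, nu * g]))

# Illustrative values: finger 1 = left little, finger 2 = right index.
k = np.array([0.08, 0.05])      # coefficients of variation (hypothetical)
mvc = np.array([0.50, 1.20])    # MVC relative to a reference force (hypothetical)
u = optimal_commands(lam=0.05, mu=0.05, nu=0.90, k=k, mvc=mvc, g=10.0)
share_little = u[0] / u.sum()   # predicted force share of the weaker finger
```

Note that with *ν* < 1 the summed command stays slightly below the target *g*: some error is traded against effort, since the error term no longer dominates the cost.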

Assume that we have *M* participants and for each force level *l*, we have *N*_{lm} observations from participant *m*, *m* = 1, 2, …, *M*. We denote these observations by *y*_{lm}(*n*), *n* = 1, 2, …, *N*_{lm}. Let **y**_{l} denote the data from all participants and *N*_{l} = Σ_{m} *N*_{lm} denote the total data size, for force level *l*. Assuming Gaussian noise with variance *σ*_{lm}^{2} for force level *l* and participant *m*, the log-likelihood of the force distribution **y**_{l}, conditional on the parameters (*ν*, λ, *μ*, {*σ*_{lm}^{2}}), is given by

ln *L*(**y**_{l}|*ν*, λ, *μ*, {*σ*_{lm}^{2}}) = −(1/2) Σ_{m=1}^{M} [*N*_{lm} ln(2*πσ*_{lm}^{2}) + (1/*σ*_{lm}^{2}) Σ_{n=1}^{N_{lm}} (*y*_{lm}(*n*) − *p*_{lm})^{2}],   (7)

where *p*_{lm} is the optimal force distribution as in Corollary 1 for *i*, *j*: left little and right index finger, respectively, force level *l* and participant *m*.
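The per-participant contribution to this log-likelihood can be sketched as follows; summing it over participants gives the level-*l* log-likelihood. The data below are synthetic, generated around a hypothetical predicted share of 0.6.

```python
import numpy as np

def loglik_participant(y, pred, sigma2):
    """Gaussian log-likelihood of one participant's observed force
    shares y around the model prediction pred, with variance sigma2.
    This is one term of the sum over m in the level-l log-likelihood."""
    n = len(y)
    return -0.5 * (n * np.log(2 * np.pi * sigma2)
                   + np.sum((y - pred)**2) / sigma2)

rng = np.random.default_rng(0)
y = 0.60 + 0.02 * rng.standard_normal(50)   # synthetic shares around 0.6
ll = loglik_participant(y, 0.60, 0.02**2)
```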

Since cost functions are unit-less, and an overall scaling factor does not matter, we constrain the sum of the free parameters to be equal to 1, i.e.,

*ν* + λ + *μ* = 1.   (8)

Therefore, we essentially have two free weight parameters (in addition to the variance parameters). Without loss of generality, we assume that *ν*, λ are the free parameters and *μ* is derived from these two.

We assume a noninformative joint prior for *ν*, λ and *μ*, under condition Eq (8), as well as an improper prior for each variance *σ*_{lm}^{2}:

*p*(*ν*, λ, *μ*) ∝ 1_{{ν+λ+μ=1, ν,λ,μ≥0}},   (9)

*p*(*σ*_{lm}^{2}) ∝ 1/*σ*_{lm}^{2},   (10)

where 1_{{A}} is the indicator function that takes the value 1 if condition *A* is satisfied and 0 otherwise. The prior for *σ*_{lm}^{2} corresponds to an inverse gamma distribution, *InvGa*(0,0) (a random variable *X* is said to follow an inverse gamma distribution *InvGa*(*α*, *β*) if its density function is given by *f*(*x*) = (*β*^{α}/Γ(*α*)) *x*^{−α−1} e^{−β/x}, *x* > 0). Although this prior is improper, the posterior distribution of *σ*_{lm}^{2} is proper. We notice also that the case of *InvGa*(*α* = −1, *β* = 0), which corresponds to *p*(*σ*_{lm}^{2}) ∝ 1, as used in [10], was also implemented, with essentially identical results. Under the priors in Eqs (9) and (10), the joint posterior distribution of all parameters in each force level *l* is given up to a constant of proportionality by

*p*(*ν*, λ, *μ*, {*σ*_{lm}^{2}}|**y**_{l}) ∝ *L*(**y**_{l}|*ν*, λ, *μ*, {*σ*_{lm}^{2}}) 1_{{ν+λ+μ=1, ν,λ,μ≥0}} Π_{m=1}^{M} (*σ*_{lm}^{2})^{−1}.

The assumption of having different variances for each participant and at each force level is a generalization of the structure of [10], where it is assumed that at each force level all participants have the same variance (say, *σ*_{l}^{2}). See subsection Model fitting and comparisons for more details on this.

### Estimation method

Due to condition Eq (8) and the corresponding prior distribution of *ν*, λ, *μ*, these parameters are also dependent a posteriori, a fact that makes simulations more challenging. In order to circumvent this problem, we use a reparametrization *w*, *z* ∈ (0,1), where

*ν* = *w*,  λ = (1 − *w*)*z*,  *μ* = (1 − *w*)(1 − *z*).   (11)

It is easy to show that the corresponding priors for *w* and *z* are independent *U*(0,1). The posterior distribution of (*w*, *z*, {*σ*_{lm}^{2}}) is therefore

*p*(*w*, *z*, {*σ*_{lm}^{2}}|**y**_{l}) ∝ *L*(**y**_{l}|*w*, *z*, {*σ*_{lm}^{2}}) 1_{{0<*w*<1}} 1_{{0<*z*<1}} Π_{m=1}^{M} (*σ*_{lm}^{2})^{−1},   (12)

where the optimal force distribution in the likelihood is now expressed using *w*, *z* and *μ*. As in [10], the force distributions were measured quite reliably, with *SE* = 0.0176 *N*.
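The reparametrization can be sketched as below. The paper fixes *μ* = (1 − *w*)(1 − *z*) (step (c) of Algorithm 1); the assignment of *w* and *z* to the remaining two weights is an assumption here, chosen so that the three weights are positive and sum to one.

```python
def weights_from(w, z):
    """Map (w, z) in the unit square to weights summing to one.
    mu = (1-w)(1-z) follows the paper; nu = w, lam = (1-w)*z is
    an assumed assignment consistent with that constraint."""
    nu = w
    lam = (1 - w) * z
    mu = (1 - w) * (1 - z)
    return nu, lam, mu

nu, lam, mu = weights_from(0.2, 0.7)   # any point in (0,1)^2 is valid
```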

Markov chain Monte Carlo (MCMC) methods provide an approach to simulating the posterior distributions in complex multi-parameter problems without resorting to a search for the maximum likelihood solution. The posterior distribution Eq (12) is not of standard form, so in order to take samples from it we use MCMC methods, and in particular *Metropolis-within-Gibbs* [12]. For this algorithm we need to calculate the full conditional distributions (i.e., the distributions conditional on the data and all the other parameters) of each parameter in this model (except for *μ*, which can be deterministically calculated from *w*, *z*). For force level *l*, we have:

*p*(*w*|*z*, {*σ*_{lm}^{2}}, **y**_{l}) ∝ exp(−Σ_{m} (1/2*σ*_{lm}^{2}) Σ_{n} (*y*_{lm}(*n*) − *p*_{lm})^{2}) 1_{{0<*w*<1}},

where *w* appears in the expression for *p*_{lm}, the optimal force distribution of Corollary 1 for participant *m* at force level *l*. Similarly,

*p*(*z*|*w*, {*σ*_{lm}^{2}}, **y**_{l}) ∝ exp(−Σ_{m} (1/2*σ*_{lm}^{2}) Σ_{n} (*y*_{lm}(*n*) − *p*_{lm})^{2}) 1_{{0<*z*<1}},

where *z* appears in the expression for *p*_{lm}. The full conditional distribution of *σ*_{lm}^{2} is simply an inverse gamma distribution,

*σ*_{lm}^{2}|*w*, *z*, **y**_{l} ∼ *InvGa*(*N*_{lm}/2, (1/2) Σ_{n=1}^{N_{lm}} (*y*_{lm}(*n*) − *p*_{lm})^{2}),   (13)

so we can directly simulate from it.
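Simulating from an inverse gamma full conditional of this form only requires a gamma draw in the precision, since 1/*X* ∼ *InvGa*(*α*, *β*) when *X* ∼ Gamma(*α*, rate *β*). A minimal sketch, with synthetic residuals:

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_sigma2(resid):
    """Gibbs draw of sigma^2 from its InvGa(n/2, sum(r^2)/2) full
    conditional (under the InvGa(0,0) prior): sample a Gamma on the
    precision scale and invert it."""
    a = len(resid) / 2.0                 # shape
    b = np.sum(resid**2) / 2.0           # rate of the inverse gamma
    return 1.0 / rng.gamma(shape=a, scale=1.0 / b)

resid = np.full(200, 0.1)                # residuals with sum r^2 = 2.0
draws = np.array([draw_sigma2(resid) for _ in range(5000)])
```

For these residuals the full conditional is *InvGa*(100, 1), whose mean is 1/99, so the empirical mean of the draws should sit close to 0.0101.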

Unlike *σ*_{lm}^{2}, the posterior distributions of *w* and *z* are not of standard form. We therefore use independent Random Walk Metropolis Hastings (RWMH) updating steps for these parameters. In order to ensure good mixing for these steps, we use the Adaptive RWMH method of [13].

To sum up, the full MCMC procedure for each force level *l* (we note that this algorithm is performed for each force level independently) is as shown in Algorithm 1.

**Algorithm 1** MCMC procedure for each force level

**Initialization**: We give appropriate, arbitrary starting values for all the parameters (*w*, *z*, *μ* and *σ*_{lm}^{2}).

1. For *T* cycles, we iteratively:

(a) simulate *w*, given the values of all the other parameters and the data, using a RWMH step.

(b) simulate *z*, given the values of all the other parameters and the data, using a RWMH step.

(c) calculate *μ* = (1 − *w*)(1 − *z*).

(d) simulate *σ*_{lm}^{2} for each *m*, given the values of all the other parameters and the data, from Eq (13).

2. We discard the initial tail of the chain as a *burn-in* period. The values of the parameters in the rest of the chain correspond to random samples from their corresponding posterior distributions.

**Output**: By acquiring a sample for *w*, *z* and *μ*, we can then use Eq (11) to obtain samples for *ν*, λ and *μ*.
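The whole procedure can be sketched end-to-end on synthetic data. This is a simplified stand-in, not the paper's implementation: the RWMH steps use a fixed proposal scale rather than the adaptive scheme of [13]; the predicted share of finger *i* is taken as *c*_{j}/(*c*_{i} + *c*_{j}) with *c* = λ + *μ*/*MVC*² + *νk*² (the optimal distribution under the quadratic cost of the Methods); *ν* = *w*, λ = (1 − *w*)*z*, *μ* = (1 − *w*)(1 − *z*), where only the *μ* mapping is stated in the paper; and the *k* and MVC values (MVC relative to a reference force, so the cost terms have comparable scale) are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def share(w, z, k, mvc):
    """Predicted force share of finger i: c_j / (c_i + c_j)."""
    nu, lam, mu = w, (1 - w) * z, (1 - w) * (1 - z)
    c = lam + mu / mvc**2 + nu * k**2
    return c[1] / (c[0] + c[1])

# Synthetic data for M = 3 participants (k and MVC values illustrative).
ks = [np.array([0.09, 0.05]), np.array([0.07, 0.06]), np.array([0.10, 0.04])]
mvcs = [np.array([0.50, 1.20]), np.array([0.60, 1.10]), np.array([0.45, 1.30])]
w_true, z_true = 0.10, 0.80
data = [share(w_true, z_true, ks[m], mvcs[m]) + 0.02 * rng.standard_normal(40)
        for m in range(3)]

def loglik(w, z, sig2):
    """Gaussian log-likelihood of all participants' shares, Eq (7)-style."""
    return sum(-0.5 * (len(y) * np.log(2 * np.pi * s2)
                       + np.sum((y - share(w, z, ks[m], mvcs[m]))**2) / s2)
               for m, (y, s2) in enumerate(zip(data, sig2)))

# Metropolis-within-Gibbs: fixed-scale RWMH on w and z,
# exact inverse-gamma Gibbs draws for each participant's variance.
w, z = 0.5, 0.5
sig2 = [float(np.var(y)) for y in data]
samples = []
for t in range(8000):
    for which in ("w", "z"):
        cur = w if which == "w" else z
        prop = cur + 0.05 * rng.standard_normal()
        if 0.0 < prop < 1.0:                       # U(0,1) prior support
            cand = (prop, z) if which == "w" else (w, prop)
            if np.log(rng.uniform()) < loglik(*cand, sig2) - loglik(w, z, sig2):
                w, z = cand
    sig2 = [1.0 / rng.gamma(len(y) / 2.0,
                            2.0 / np.sum((y - share(w, z, ks[m], mvcs[m]))**2))
            for m, y in enumerate(data)]
    if t >= 4000:                                  # discard burn-in
        samples.append((w, z))
post = np.array(samples)
```

After burn-in, the retained (*w*, *z*) samples should reproduce each participant's mean observed share through the model prediction.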

## Results

### Noise measurement

As already mentioned in the “Optimal control model” subsection, the force of each finger *x*_{i} was modeled as a random variable with a mean equal to the motor command (i.e., E[*x*_{i}] = *u*_{i}) and a noise with a standard deviation proportional to the motor command (i.e., *σ*(*x*_{i}) = *k*_{i}*u*_{i}), with *k*_{i} being the coefficient of variation determined by minimizing the Mean Squared Error (MSE) for each finger over all force levels. The instructed task (enforced by the feedback on the screen) was to minimize the error between produced and required force.

In our experiments, the coefficient of variation was computed by robust regression estimates via iteratively re-weighted least squares with a bisquare weighting function [14–17]. The results for all fingers are depicted in Table 1. The values of the mean and standard deviation for *k* are slightly higher than those in [10], consistently for all fingers. This may stem from the slightly different setups used in the two experiments, or from individual differences.
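The robust fit can be sketched as below: a regression through the origin of the force standard deviation on the motor command, with Tukey bisquare weights that downweight outlying trials. The data are synthetic (true *k* = 0.06 with one gross outlier); the tuning constant 4.685 is the standard choice for bisquare weights, not a value from the paper.

```python
import numpy as np

def bisquare_irls(u, s, c=4.685, iters=50):
    """Estimate k in s ~ k*u (regression through the origin) by
    iteratively re-weighted least squares with Tukey bisquare weights."""
    k = np.sum(u * s) / np.sum(u * u)              # ordinary LS start
    for _ in range(iters):
        r = s - k * u                              # residuals
        scale = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD)
        t = r / (c * scale)
        w = np.where(np.abs(t) < 1.0, (1.0 - t**2)**2, 0.0)
        k = np.sum(w * u * s) / np.sum(w * u * u)  # weighted LS update
    return k

rng = np.random.default_rng(3)
u = np.linspace(2.0, 24.0, 12)                     # motor commands (N)
s = 0.06 * u + 0.002 * rng.standard_normal(12)     # noisy SDs, true k = 0.06
s[3] += 1.0                                        # one gross outlier
k_robust = bisquare_irls(u, s)
k_ols = np.sum(u * s) / np.sum(u * u)              # outlier-biased estimate
```

The bisquare weights drive the outlier's contribution to zero, so `k_robust` lands near the true slope while the plain least-squares estimate is pulled upward.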

### MVC measurement

The mean and standard deviation of the maximum voluntary contraction (MVC) found for all fingers are depicted in Table 2.

Note that the MVC was measured in a different way in the two papers. In our experiment, the subjects had to reach their MVC at the end of that part of the session and they had to maintain this for 7*s*. In [10], the mean of the highest 5% of the samples was determined for each trial, and the highest score of the three available trials was taken as the MVC measurement for the finger. These differences may explain why the mean value of MVC was higher in the previous paper.

Fig 2 shows the correlation between *k* and MVC for all fingers and all participants. The dashed line indicates the fitted linear regression line, when all data are considered. It is clear from this figure that there is a negative relationship between the two quantities. However, this relationship, as indicated by the small slope of the fitted line (−0.00065), although significant (*t*(54) = −3.42, p-value = 0.0012), is not particularly strong. Moreover, the coefficient of correlation between *k* and *MVC* (*r* = −0.4224) also indicates a weak relationship between the two (the value of *r* found is also very close to the value found in [10]). As a result, it is sensible to include variability as an additional term in the cost function.

The dashed line is the line of least-squares fit for the whole data set.
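The two statistics reported here, the regression slope and the correlation coefficient between *k* and MVC, can be computed as below. The (MVC, *k*) pairs are hypothetical values for illustration only, chosen to show the same negative but modest relationship; they are not the study's data.

```python
import numpy as np

# Hypothetical (MVC, k) pairs for illustration only.
mvc = np.array([18.0, 25.0, 33.0, 41.0, 52.0, 60.0])
k = np.array([0.085, 0.075, 0.068, 0.060, 0.055, 0.050])

r = np.corrcoef(mvc, k)[0, 1]              # Pearson correlation coefficient
slope, intercept = np.polyfit(mvc, k, 1)   # least-squares regression line
```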

### Model fitting and comparisons

For each force level the above algorithm was implemented with *T* = 90000 iterations, from which the first 40000 samples were discarded as burn-in.

Fig 3 exhibits the results for *ν*, *μ* and λ. The continuous lines present the posterior median for each parameter, whereas the dashed lines show 95% credible intervals (we also computed posterior means, instead of medians, with identical results). The results in Fig 3 show that *for small force levels* the weight attributed to the absolute effort (λ) is much larger than the weight for the normalized effort (*μ*). This is consistent with the findings in [10], which studied this behavior up to a force level of 16 N. However, as the force level increases, the difference between these two weights diminishes, and at about 20 N and higher the weights become more or less equal. The results support our hypothesis that as the task becomes more difficult, the weight for the normalized effort increases. Finally, the weight attributed to the squared error (*ν*) is generally small for all force levels.


Fig 4 shows the optimal solution for fitted (x-axis) versus produced forces (y-axis) based on the assumption that only variability was optimized (left) and under the best fitting model (right) and force target 18N (top), 22N (middle) and 26N (bottom). It is evident that the variability-only cost function performed worse than the one that includes all effort and variability terms in predicting the chosen distribution, as the points in these graphs (actual pairs of fitted-produced forces) are not so close to the dotted line (which corresponds to the ideal scenario that the predicted and the observed forces coincide).

In order to verify that all three terms contribute significantly to the fit, we also fitted all models with only one term and all models with pairs of terms included. This is achieved by setting the value of some of *μ*, *ν*, λ equal to 0 and fitting the new model to the data.

As a first measure of model comparison, we calculated the log-likelihood of the data, averaged over all MCMC samples, under each model. This is a measure of model fit, with higher values indicating better fit. The results are shown in the first row of Table 3. The full model (i.e., the model with all *ν*, *μ* and λ included) provides the best fit, and the model closest to it was the model with only *μ* and λ included (the model with *ν* = 0). This is consistent with the results above, since the weight *ν* is the smallest of the three for most force levels. In order to see if the difference between the two models is significant, we use Bayes factors (see, for example, [18]). The Bayes factor for two models, say *M*_{1}, *M*_{2}, is defined as

*B*_{12} = *L*(**y**|*M*_{1})/*L*(**y**|*M*_{2}),

where *L*(**y**|*M*) is the likelihood under model *M* and **y** is the full data set (including data from all participants and all force levels). For the two best fitting models stated above, the Bayes factor indicates a very strong preference for the full model.

Additionally, we also consider the Akaike Information Criterion (AIC) (see, among others, [19]):

AIC = 2*d* − 2 ln *L̂*,   (14)

where ln *L̂* is the maximum log-likelihood and *d* is the number of free parameters in the model. This statistic therefore takes into account both the model fit (measured by ln *L̂*) and the model complexity (measured by *d*, penalizing models with a higher number of parameters). A lower value of AIC indicates a better fitting model. The results for AIC (Table 3, second row) indicate that the full model, despite having more parameters than the simpler models, is still the preferred one, by a large margin.
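The comparison in Eq (14) is a one-liner; the log-likelihood values below are hypothetical placeholders (not the values in Table 3), used only to show how a better-fitting model can win despite its extra parameter.

```python
def aic(max_loglik, d):
    """Akaike Information Criterion: 2d - 2 * max log-likelihood.
    Lower values indicate a better trade-off of fit and complexity."""
    return 2 * d - 2 * max_loglik

# Hypothetical maximized log-likelihoods for illustration only:
full = aic(max_loglik=150.0, d=5)       # all of nu, lam, mu free
reduced = aic(max_loglik=120.0, d=4)    # model with nu = 0
```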

In conclusion, both model comparison measures used indicate that the model including all terms is the model that best describes the behavior of the data. All three terms have been demonstrated to be significant in explaining how the CNS assigns the task among the effectors when trying to reach a specific force target.

For comparison purposes we repeated the above analysis for the case in which all participants have the same error variance within each force level, as in [10] (in other words, having a single *σ*_{l}^{2} instead of a separate *σ*_{lm}^{2} for each participant *m*). We found that the behavior of the weights *μ*, *ν* and λ was in accordance with the one found before: at lower force targets the parameter λ is quite large, whereas the other two are small; then, as the target increases, the value of *μ* increases and that of λ decreases, whereas *ν* stays at low values. In other words, the change in the effect of the two effort terms and the variability term as the force target increases seems to be robust to the specification of the variances. On the other hand, when comparing the two models (using both the log-likelihood and the AIC), the model with different variances had a clear advantage over the model with the same variance for each force level, so using the model shown here is justified.

## Discussion

Our experiment extends the research on how effort (normalized or absolute) and variability costs affect the way our CNS distributes work across different effectors in order to implement a task that admits numerous alternative implementations. The influence of a finger's individual maximum maintainable force was investigated with respect to the contribution of each finger in a dual finger task (one force target shared between two fingers). The main question addressed is the following: is the distribution of forces, when normalized by the individual maxima, proportionate to their relative strengths?

In summary, we show that during the maintenance of a *relatively large* force level by the combination of a strong finger and a weak finger, for example an index finger and a little finger, the little finger's force gradually decreases while the stronger finger's force increases in an anticipatory manner. It is therefore concluded that the distribution of effort changes over the range of target forces. For a combination of little and index fingers, the little finger provides most of its absolute and normalized force to start with, and the proportion of its maximum force increases more than that of the index finger, up until the dual target reaches the maximum individual force of the index finger. Above this point about 50% of each finger's maximum force is used, suggesting that the sharing of effort serves to minimize effort and strain on each finger, as well as the variability of the force output over time.

The participants were asked to attend to the middle column depicting the total force level, while we also allowed them visual feedback of the force levels of the individual fingers. This feature was not present in the experimental setup of [10], and one could suspect that the participants would aim to equalize the two side columns, i.e., aim at maintaining the normalized effort only. However, this did not occur: the results of [10] were confirmed at low force levels, and at high force levels the individual feedback did not prevent the increase of the normalized effort. It is possible that without the two extra columns the increase in the normalized effort would be more immediate or would even dominate. Hence, the extra columns probably made the transition smoother, but did not prevent the normalized effort from becoming of significant importance.

## Supporting Information

### S1 Dataset. The data used in this experiment appear in zipped form in S1 Dataset.

doi:10.1371/journal.pone.0149512.s001

(ZIP)

## Author Contributions

Conceived and designed the experiments: TC, EB. Performed the experiments: TC. Analyzed the data: MK. Contributed reagents/materials/analysis tools: MK, EB, TC. Wrote the paper: MK, EB, TC.

## References

- 1.
Bernstein NA (1967) The Co-Ordination and Regulation of Movements. Oxford: Pergamon Press.
- 2. d’Avella A, Portone A, Fernandez L, Lacquaniti F (2006) Control of fast-reaching movements by muscle synergy combinations. J Neurosci 26:7791–7810. doi: 10.1523/JNEUROSCI.0830-06.2006. pmid:16870725
- 3. Bizzi E, Cheung VC (2013) The neural origin of muscle synergies. Front Comput Neurosci 7:51. doi: 10.3389/fncom.2013.00051. pmid:23641212
- 4. Lacquaniti F, Ivanenko YP, d’Avella A, Zelik KE, Zago M (2013) Evolutionary and developmental modules. Front Comput Neurosci 7:61. doi: 10.3389/fncom.2013.00061. pmid:23730285
- 5. Coscia M, Monaco V, Martelloni C, Rossi B, Chisari C, Micera S (2015) Muscle synergies and spinal maps are sensitive to the asymmetry induced by a unilateral stroke. J Neuroeng Rehabil 12(1):39. doi: 10.1186/s12984-015-0031-7. pmid:25928264
- 6.
Schmidt RA, Heuer H, Ghodsian D, Young DE (1998) Generalized motor programs and units of action in bimanual coordination. In Progress in Motor Control, Vol. 1: Bernstein’s traditions in motor control, pp. 329–360, Champaign, IL: Human Kinetics Publishers.
- 7. Todorov E, Jordan MI, (2002) Optimal feedback control as a theory of motor coordination. Nat. Neurosci. 5, 1226–1235. doi: 10.1038/nn963. pmid:12404008
- 8. Chhabra M, Jacobs RA (2006) Properties of synergies arising from a theory of optimal motor behavior. Neural Comput. 18, 2320–2342 doi: 10.1162/neco.2006.18.10.2320. pmid:16907628
- 9. Kutch JJ, Kuo AD, Bloch AM, Rymer WZ (2008) Endpoint force fluctuations reveal flexible rather than synergistic patterns of muscle cooperation. J Neurophysiol. 100(5):2455–71. doi: 10.1152/jn.90274.2008. pmid:18799603
- 10. O’Sullivan I, Burdet E, Diedrichsen J (2009) Dissociating variability and effort as determinants of coordination. PLoS Comput Biol 5(4):e1000345. doi: 10.1371/journal.pcbi.1000345. pmid:19360132
- 11. Todorov E (2005) Stochastic optimal control and estimation methods adapted to the noise characteristics of the sensory motor system. Neural Comput. 17, 1084–1108. doi: 10.1162/0899766053491887. pmid:15829101
- 12. Metropolis N, Rosenbluth A, Rosenbluth M, Teller A, Teller E (1953) Equations of state calculations by fast computing machines. Journal of Chemical Physics, 21(6), 1087–1092. doi: 10.1063/1.1699114.
- 13. Atchadé YF, Rosenthal JS (2005) On adaptive Markov chain Monte Carlo algorithms. Bernoulli 11(5): 815–828. doi: 10.3150/bj/1130077595.
- 14. Holland PW, Welsch RE (1977) Robust Regression Using Iteratively Reweighted Least-Squares. Communications in Statistics: Theory and Methods, A6, pp. 813–827. doi: 10.1080/03610927708827533.
- 15.
Huber PJ (1981) Robust Statistics. Hoboken, NJ: John Wiley & Sons, Inc.
- 16. Street JO, Carroll RJ, Ruppert D (1988) A Note on Computing Robust Regression Estimates via Iteratively Reweighted Least Squares. The American Statistician. Vol. 42, pp. 152–154. doi: 10.2307/2684491.
- 17.
DuMouchel W H, O’Brien F L (1989) Integrating a Robust Option into a Multiple Regression Computing Environment. Computer Science and Statistics: Proc of the Symp on the Interface. Alexandria, VA: American Statistical Association.
- 18.
Jeffreys H (1939) Theory of probability. Oxford University Press.
- 19.
Davison AC (2003) Statistical models. Cambridge University Press.