Dose-Response Analysis in the Joint Action of Two Effectors. A New Approach to Simulation, Identification and Modelling of Some Basic Interactions

In systems with several effectors, the results of dose-response (DR) experiments are usually assessed by checking them against two hypotheses: independent action (IA) and concentration addition (CA). Both are useful simplifications, but they do not represent the only possible responses, and they avoid to a large extent the analysis of the interactions that are possible in the system. In addition, they are often applied in such a way that they produce insufficient descriptions of the problem that raises them, frequent inconclusive cases and doubtful decisions. In this work a generative approach is attempted, starting from some simple mechanisms necessarily underlying the response of an elementary biological entity to an effector agent. A set of simulations is then formulated through an equally simple system of logical rules, and several families of virtual responses are thus generated. These families include typical responses of the IA and CA modes of action, others no less probable from a physiological point of view, and even others derived from common and expectable forms of interaction. The analysis of these responses made it possible, firstly, to relate some phenomenological regularities with some general mechanistic principles, and to detect several causes by which the IA-CA dualism is necessarily ambiguous. Secondly, it allowed different forms of synergy and antagonism to be identified, which contributes to explaining some controversial aspects of these notions. Finally, it led to the proposal of two sets of explicit algebraic equations that accurately describe a wide diversity of possible and realistic responses.


Introduction
The response of a population of biological entities to the action of an effector is typically sigmoidal and requires for its algebraic description (the dose-response model: DR) an equation with at least three parameters. If the response is altered by a perturbation agent, variations depending on the perturbator concentration must be expected in one or more of these parameters. If two effectors interact, one or more parameters corresponding to the action of each effector will vary, in the description of the joint response, as a function of the concentration of the other one. Although these premises are hardly debatable, their practical application has the disadvantage of requiring a solution whose complexity increases more than linearly with the number of effectors considered.
This justifies the common use of two simplifications: the IA (independent action) [1] and the CA (concentration addition) [2,3] hypotheses. Both avoid the mentioned disadvantage by postulating conditions that allow verifiable predictions about the joint response, using the individual DR models without adding new parameters. We will discuss the details of these hypotheses below; for now we will point out only that their formalizations are generally considered empirical models lacking in mechanistic content, which is not completely true. DR models are considered empirical (phenomenological, macroscopic) because they describe the sensitivity distribution of an effector in a target population. Although this provides DR models with a statistical basis, ultimately the response depends on processes that take place at the level of the interactions between the effector quanta (ions, atoms, molecules, electric pulses, radiations) and the receptor structures of the biological system, a level that is ignored by the model. However, using a thermodynamic analogy, the (macroscopic) sensitivity distribution can be broken down into the (microscopic) distributions of other elements that are response-determining at a finer resolution level. These elements can be physical structures whose reduction to simpler ones makes no sense (such as the number of receptors per biological entity), or more complex physiological limits (such as a response threshold), but in any case they can be linked in biological systems with the effector quanta of an agent through hypotheses about some general forms of molecular interaction.
Under this perspective, the IA and CA hypotheses postulate modes of action that can be associated with general mechanisms or microscopic conditions, which makes it possible to propose variations capable of generating specific responses. Classifying these variations from bibliographic data is difficult because the experimental error interferes, the required categories are not usually considered in toxicodynamic studies, and designs suitable for a given hypothesis can rarely be used to prove facts outside their conceptual framework. In this sense, these difficulties can be eluded by performing simulation ''experiments''. Both the statistical basis and the general types of mechanism underlying the DR relationships (interactions between cell receptors, effectors and interfering agents) are sufficiently known for simulating microscopic conditions able to produce the corresponding macroscopic (populational) results.
In the simulations used in this work, simple properties for the microscopic determinants of the response were postulated, and a set of basic ''sigmoidal scenes''-among them those associated with the IA and CA hypotheses-were generated with the sole assistance of logical (Boolean) rules. Additionally, more specific response surfaces were obtained by including in such rules some algebraic expressions describing concrete interactions such as those that can take place in many physiological contexts (activation/deactivation, competition/cooperation, steric hindrance). The results made it possible to illustrate the status of the IA and CA hypotheses within the field of possible responses, to characterize several types of perturbations and interactions, and to propose explicit algebraic models that translate the mechanics of the response into specific parametric variations. Although in some cases the practical utility of these models can be limited by a low number of observations and a high experimental error, the simulations always constitute a useful reference for interpreting a complex response, inferring the type of mechanism involved and suggesting complementary experiments.

Numerical methods
Hereafter we will call effector any agent able to cause a (typically sigmoidal) response in a population of biological entities, and perturbator any agent that can alter the response to an effector while being unable to cause it by itself. A receptor is any biological structure acting as a binding site for effectors or perturbators. The term dose is reserved for the concentration of an effector.
In the simulation procedures described later, Weibull random numbers w:(h;a) were obtained from the uniform random numbers u:[0,1] provided by the spreadsheet through the inverse transform [4,5]:

w = h[-ln(1-u)]^(1/a)

where the mean (m_w) and standard deviation (s_w) of the corresponding distribution are:

m_w = h Γ(1+1/a) ; s_w = h [Γ(1+2/a) - Γ²(1+1/a)]^(1/2)

To facilitate comparisons, doses were coded into the [0-(0.1)-1] interval, and responses were calculated by taking Y as the total number of biological entities and S as the surviving population at a given dose, so that the response R can be expressed in the useful coded range [0,1] as R = 1 - (S/Y). The simulated and experimental results were fitted to the proposed models by non-linear least squares (quasi-Newton) in a Microsoft Excel spreadsheet, using the Solver add-in for parametric estimates, and the SolverAid macro [6,7] for confidence intervals and model consistency (Student's t and Fisher's F tests, respectively, with α = 0.05 in both cases).
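The inverse-transform step can be sketched outside the spreadsheet; the following is a minimal Python version (function and variable names are ours, not the paper's), checking the empirical mean of a sample against the theoretical Weibull mean:

```python
import math
import random

def weibull_random(h, a, rng=random):
    """Inverse-transform sampling: map a uniform u in [0, 1) to a
    Weibull variate with scale h and shape a."""
    u = rng.random()
    return h * (-math.log(1.0 - u)) ** (1.0 / a)

def weibull_mean_sd(h, a):
    """Theoretical mean m_w and standard deviation s_w of Weibull(h, a)."""
    m_w = h * math.gamma(1.0 + 1.0 / a)
    s_w = h * math.sqrt(math.gamma(1.0 + 2.0 / a)
                        - math.gamma(1.0 + 1.0 / a) ** 2)
    return m_w, s_w

rng = random.Random(1)
sample = [weibull_random(2.0, 1.5, rng) for _ in range(50_000)]
m_w, s_w = weibull_mean_sd(2.0, 1.5)
emp_mean = sum(sample) / len(sample)   # close to m_w
```

The scale h = 2.0 and shape a = 1.5 are arbitrary illustrative values; any positive pair behaves the same way.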
In the most complex cases (models with interactions), the fitting process always involved progressive hypothesis testing, as is usual in any stepwise multiple regression method, to select the interactive mode providing the most statistically consistent interpretation. Parametric confidence intervals, the coefficient of multiple determination, residual bias and sensitivity analysis were applied as selection criteria. An efficient way of proceeding follows these steps: 1) calculation of the sigmoidal parameters from the individual responses; 2) use of these estimates as provisional fixed values of the model, and assay of different interaction hypotheses, rejecting those that lead to statistically non-significant coefficients; 3) refinement of the model by recalculation, now allowing the variation of all the accepted parameters.
Although the initial number of parameters is high, this means only a high number of potential alternatives, some of which are mutually exclusive, and others easily rejected in the course of the fitting to a concrete data set. The trial of initial values is facilitated by following, in graphic displays of the response surface and residuals, the variations produced by the Solver results, and convergence is usually immediate. Details about experimental design are provided as Supporting Information (see Figure S1).

Null interaction, synergy and antagonism. IA and CA hypotheses
In any system (in Bertalanffy's sense: a set of interacting elements), an important and characteristic problem is to know whether the joint effect of two or more elements on the system behaviour is deducible from their individual effects. This issue, with a long history of controversy whose first known treatment goes back to Aristotle, is often stated by replacing «deducible» with «the sum», which leads to defining the notions of synergy and antagonism as those interactions by virtue of which the joint effect of two (or more) effectors is greater (synergy) or lesser (antagonism) than the sum of the individual effects.
For examining the meaning of this inadequate statement, let us consider a system in which two effector elements (E_1 and E_2) can act, and denote the possible behaviours or responses of such a system as R_1 and R_2 (if only E_1 or E_2 is present), R_1,2 (if E_1 and E_2 do not interact), and R_1&2 (if E_1 and E_2 interact). Under these conditions, any initial analysis of the possible interactions essentially requires comparing the responses R_1,2 and R_1&2, defining synergy and antagonism, respectively, as those interactions in which R_1,2 < R_1&2 and R_1,2 > R_1&2.
It is obvious that the key aspect in assessing any interaction is the response of the system in the absence of interactions (the null interaction). Thus, to define the null interaction, some mechanism must be postulated as underlying the specific behaviour of the system. Addition is the simplest mechanism, and its most immediate options suppose that the added magnitudes are either the effectors (acting as a single one) or the effects on the system of their independent actions (the responses of the system to such actions). Now, imagine that the response has an upper limit, as happens, for example, in the death-survival alternative of a microbial population under increasing doses of two toxics. If the added magnitudes are the concentrations of the toxics, the response obtained would be identical to that produced by a single toxic at the summed dose, although it can be expressed as a function of two independent variables. The additive response is more problematic, since it is obvious that if one cell dies when one of the doses reaches a given level, it will die regardless of the level of the other dose, simply because it cannot die twice.
These two types of sum are the foundations of the two accepted basic modes for describing the joint action of two effectors under null interaction conditions: concentration addition and independent action. Although both can be applied, at least in theory, to any number of effectors, here they will be discussed in their simplest form, for two effectors (their generalizations are, in any case, immediate).
2.1. Independent action hypothesis. It supposes that the effectors act through different mechanisms, whose asymptotic maxima are reached through statistically independent phenomena. Under this premise, probability theory defines the joint response as the sum of the probabilities of the individual phenomena minus the probability of their joint occurrence [1,8]. Consequently, if R_c is the response to the joint action of the concentrations c_1 and c_2, and R_c1 and R_c2 the individual responses at the same concentrations, it can be established that:

R_c = R_c1 + R_c2 - R_c1 R_c2 (1)

An expression easily generalizable to more than two effectors is obtained by writing the first R_c1 in the second member of (1) as 1-(1-R_c1):

R_c = 1 - (1-R_c1)(1-R_c2) (2)

2.2. Concentration addition hypothesis. In its classical formulation [2,3], null interaction is not defined as a relation between the individual responses, but through the following criterion: since the concentration c of an effector whose action obeys the equation R = f(c) can be considered as a fictitious combination of the concentrations c_1 and c_2 (c = c_1+c_2), it is obvious that the response to c will be described by R = f(c), with c = c_1+c_2. If the response to a mixed dose of two effectors behaves as the response to the ''mixed'' dose of the same effector, the interaction between them is accepted as null, implying that any effector concentration can be substituted by the equieffective concentration of the other one.
The conventional practice avoids an explicit algebraic formulation when evaluating the joint response of effectors, solving the lack of a clear mathematical treatment through isobolographic analysis [9], based on isoboles: lines on the plane of the independent variables that represent the dose combinations producing an equal response. Thus, if D_1 and D_2 are the doses of two effectors that individually produce the response R_a, and d_1 and d_2 any dose combination producing the same joint response R_a (Figure 1), under null interaction conditions the isobole of the response R_a will necessarily be described by the following linear equation:

d_1/D_1 + d_2/D_2 = 1 (3)

Consequently, if the individual DR models are R_i = f_i(D_i) and their reciprocal functions D_i = g_i(R_i) exist, it can be established that:

d_1/g_1(R_a) + d_2/g_2(R_a) = 1 (4)

In other words: straight isoboles indicate null interaction, whereas concave and convex up isoboles indicate synergy and antagonism, respectively. The dimensionless quotients d_i/g_i(R_a) are called toxic units and represent the relative contribution of each effector to the joint response R_a.
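The IA formula (2) and the toxic-unit criterion (4) can be illustrated with a small sketch, assuming a hypothetical Weibull-type individual DR curve with unit asymptote (all names and parameter values below are illustrative, not taken from the paper):

```python
import math

def dr(dose, m=0.5, a=2.0):
    """Hypothetical individual DR curve (Weibull form with K = 1):
    R = 1 - exp(-ln2 * (D/m)^a). Half-maximum response at D = m."""
    return 1.0 - math.exp(-math.log(2.0) * (dose / m) ** a)

def dr_inverse(R, m=0.5, a=2.0):
    """Reciprocal function g(R): the dose producing response R."""
    return m * (math.log(1.0 / (1.0 - R)) / math.log(2.0)) ** (1.0 / a)

def ia_response(R1, R2):
    """Equation (2): joint response under independent action."""
    return 1.0 - (1.0 - R1) * (1.0 - R2)

# Toxic units, equation (4): along a CA null-interaction isobole the
# dimensionless quotients d_i/g_i(Ra) sum to exactly 1.
Ra = 0.5
d1 = 0.3 * dr_inverse(Ra)          # a point on the straight isobole
d2 = 0.7 * dr_inverse(Ra)
tu_sum = d1 / dr_inverse(Ra) + d2 / dr_inverse(Ra)   # 1.0
```

A sum of toxic units below 1 would suggest synergy, above 1 antagonism, in the CA framework.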

Some problems associated with the IA-CA approach
A first unsatisfactory aspect of this approach is the difference between the formal criteria applied to each mode of action. The IA hypothesis proposes an explicit response surface model, but no general agreement exists about the ways in which synergy and antagonism must be formulated. The CA hypothesis avoids the explicit model, but equation (4) is accepted as a criterion for detecting synergy and antagonism. This criterion, however, is not transferable to the IA framework, whose mathematical form prevents straight isoboles under null interaction if, as occurs in most DR relationships, the individual responses R_1 and R_2 are not proportionally constant. As a general rule, IA isoboles under null interaction are convex up at low doses, concave up at high doses, and have two branches of opposite curvature in a transitional zone where R_1 ≈ R_2 (Figure 1). But the factual meaning of this formal property cannot be attributed to synergy or antagonism; it only implies that the probability that at least one dose is lethal is low at low doses and high at high doses (another outcome of statistical independence, like the above mentioned impossibility of dying twice).
In the CA hypothesis, as already pointed out, the absence of an explicit mathematical model is another disadvantage. Equation (4) provides a measure of the sign and degree to which an isobole deviates from linearity, and isoboles with a variable degree of curvature and asymmetry can be defined by using alternative expressions such as equation (5), described in [10], or equation (6), described in [11], in which the additional parameters b or c are more precise indexes of synergy and antagonism. Although these equations would enable the construction of isobole maps by solving them numerically, the procedure is laborious and in practice is usually applied only to the isobole at the half-maximum response. Furthermore, this approach postulates a homogeneous isobolic curvature over the whole considered domain, a property which, as we shall see later, is not necessarily true. Some joint responses to chemically similar/dissimilar effectors were suitably described with CA/IA models, respectively [12,13]. However, there is evidence as well that these modes of action are not always obeyed in reality. Jonker et al. [14] have proposed, besides synergy and antagonism, other deviations from null interaction, defined a priori as effects depending on the absolute dose levels or on the dose ratios, thus enabling synergistic and antagonistic behaviour in different regions of the same response surface. But even so, reality seems to be richer: in a review of 158 cases [15] it was found that 20% of the responses could be adequately predicted by IA, 10% by CA, another 20% admitted both models, and half of the cases were not correctly described by either of them. Moreover, neither of the models was significantly better than the other at assessing synergy and antagonism at the 50% effective doses.
These results are not really surprising. Figure 2 represents a simple hypothetical metabolic pathway that can lead to a set of different situations depending on the inhibition mechanisms considered (competitive, non-competitive, uncompetitive), the kinetic constants that are null, the values of the rest of such constants and the definition of the response (individual or simultaneous drops of the products P_1, P_2 or P_3). But even without including an experimental error, none of these possible situations unambiguously obeys the IA or CA modes of action. In our laboratory, these ambiguities have been detected not only in the joint action of hydrocarbons and dispersants on the larval growth of sea urchin [16], but also, as we will see, in the simpler context of the oxidation inhibition of a substrate by the joint action of two antioxidants.

DR model for a single effector
The natural form of a DR model is a cumulative (mass) probability function, and it translates the response of a population with a given sensitivity distribution to an effector. Four additional conditions seem reasonable as well: 1) the model should have an explicit algebraic form; 2) it should lack an intercept (null response at null dose); 3) an asymptote equal to or less than 1 should be allowed; and 4) the parameters with important factual meaning should be explicitly included, to facilitate the trial of initial values and the calculation of confidence intervals when nonlinear fitting methods are applied.
Although normal and log-normal distributions have been the basis of classical DR analysis, they have the disadvantage of lacking an explicit form for their mass functions. Logistic-type equations are more useful and can be expressed in forms easily modifiable to comply with the above mentioned conditions [17,18]. However, their derivatives (their density functions) show only right bias, which can be a scarcely realistic restriction. Another option is the Gompertz equation [19], but its use is prolix, especially with the modifications required to apply it in the DR context. The mass function of the Weibull distribution [20] can be expressed in a suitable reparametrized form [21][22][23], providing an adequate analytical DR model, which can be written as:

R = K {1 - exp[-ln2 (D/m)^a]} ; briefly: R = W(D; K, m, a) (7)

where D is the dose and R the response, with K the asymptotic maximum (not necessarily 1), m the dose producing the half-maximum response, and a a shape parameter related to the maximum slope of the response (r_m) by:

r_m = a K ln2 / (2m) (8)

It should be noted that m is the abscissa of the inflection point, which represents the accumulated modal response. The basic parameter of DR analysis is ED50 or EC50, defined as the effective dose for 50% of the assay population. Thus, m and EC50 are coincident values if the maximum response implies the whole population (K = 1). The reciprocal function of equation (7), required when performing the isobole analysis under the CA hypothesis, can be written as:

D = m [ln(K/(K-R)) / ln2]^(1/a) (9)

The use of equation (7) as a DR model is interesting for several reasons. Its density function (the sensitivity distribution of the population) can be symmetrical or asymmetrical with right or left bias, which makes it very versatile.
It produced the best fittings, among the above mentioned alternatives, when applied to the simulations that will be described in the next sections, and this result had been repeatedly confirmed before with experimental data [21,[24][25][26][27]. Moreover, the Weibull distribution is the conventional model for the failure of complex devices, which makes the equation more attractive, since it unifies phenomena underlain by a profound analogy.
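A minimal sketch of equations (7)-(9), with hypothetical parameter values, can make the reparametrization concrete (the numeric derivative check below uses the slope at the half-maximum dose, the quantity related to the shape parameter in (8)):

```python
import math

def weibull_dr(D, K, m, a):
    """Equation (7): R = W(D; K, m, a) = K*(1 - exp(-ln2*(D/m)^a))."""
    return K * (1.0 - math.exp(-math.log(2.0) * (D / m) ** a))

def weibull_dr_inverse(R, K, m, a):
    """Equation (9): dose producing response R (requires 0 <= R < K)."""
    return m * (math.log(K / (K - R)) / math.log(2.0)) ** (1.0 / a)

def slope_at_m(K, m, a):
    """Slope of (7) at D = m: a*K*ln2/(2m), the quantity used in (8)
    to relate the shape parameter a to the maximum slope r_m."""
    return a * K * math.log(2.0) / (2.0 * m)

K, m, a = 0.9, 0.4, 1.8          # hypothetical parameter values
R_mid = weibull_dr(m, K, m, a)   # half-maximum response: K/2 = 0.45
D_back = weibull_dr_inverse(weibull_dr(0.25, K, m, a), K, m, a)   # 0.25
h = 1e-5
slope_num = (weibull_dr(m + h, K, m, a) - weibull_dr(m - h, K, m, a)) / (2 * h)
```

Note that here K < 1, illustrating condition 3) above (an asymptote below the whole population).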

Simulation of the response to a single effector
Since the basic sigmoidal profile of DR relationships translates a macroscopic, statistical phenomenon, it should be possible to simulate it as the result of the microscopic behaviour of a population of elementary biological entities, which we will call cells. Such a simulation can be carried out on the following basis:

B1. A cell is defined by means of three random magnitudes: r, the number of receptors of an effector; a, the number of active receptors (ready for binding the effector) at a given instant; and l, the threshold or minimum number of active receptors that must be bound to the effector to produce a response.

B2. The dose D is defined as the number of effector units per cell, accepting that every unit is capable of binding to one receptor.

B3. The cell response r is limited to two modalities, death (r = 0) and survival (r = 1), obeying the following logical rule:

R0: r = IF(AND(D ≥ l; a ≥ l); 0; 1) (10)

Notice that a < l implies r = 1 at any dose. This translates possible limitations in effector bioavailability, resistant cells or other conditions which, relatively frequent in DR assays, produce asymptotes lower than 1.
Although r (the upper limit of a) is interesting in some cases, hereafter it will be omitted without loss of generality, and a cell will be defined through the pair (a, l). Thus, if we assign to a and l probability distributions defined by their mean and variance values, we can create, in a spreadsheet, a virtual cell population whose sensitivity distribution depends on the parameters of a and l. The population response R to an increasing dose series is simulated by applying rule (10) to each cell and defining R = 1 - (S/Y) when S cells, of a total number Y, survive at a given dose. The typical shape of the response arises even at moderate population sizes (Y < 100) and becomes highly stable when Y ≥ 1,000. At low population sizes, the variability of the result simulates the natural variability, the experimental error, or both.
These premises define the minimum complexity of a system able to generate a sigmoidal response and, despite their schematic character, they produce a great variety of profiles, depending on the parameters of a and l. Although the microscopic solution of the system is determined by such parameters, the profiles obtained can be macroscopically described by the conventional equations of DR analysis.
The use of the Weibull distribution for defining a and l is not essential. With normal, log-normal, Poisson, binomial, or even uniform distributions, the responses generated are still best described by the Weibull model, as mentioned above. However, distributions with domain [0, ∞) are obviously preferable, since low means and high variances in (-∞, ∞) distributions lead to a finite probability of negative values of a and l, which lack physical meaning.
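The simulation scheme B1-B3 with rule (10) can be sketched in a few lines; the distribution parameters below are hypothetical, chosen only to produce a visible sigmoid with an asymptote below 1:

```python
import random

def simulate_response(doses, Y=2000, scale_a=40.0, shape_a=3.0,
                      scale_l=20.0, shape_l=4.0, seed=1):
    """Create Y virtual cells (pairs a, l) and apply rule (10) at each
    dose: a cell dies (r = 0) only when both the dose D and its active
    receptors a reach its threshold l. Population response: R = 1 - S/Y."""
    rng = random.Random(seed)
    cells = [(rng.weibullvariate(scale_a, shape_a),   # active receptors a
              rng.weibullvariate(scale_l, shape_l))   # threshold l
             for _ in range(Y)]
    R = []
    for D in doses:
        S = sum(1 for a, l in cells if not (D >= l and a >= l))  # survivors
        R.append(1.0 - S / Y)
    return R

doses = [0, 5, 10, 15, 20, 25, 30, 40, 60]
R = simulate_response(doses)
# R rises sigmoidally with dose; cells with a < l never die, so the
# asymptote can stay below 1 (the situation noted in B3).
```

Because the cell population is fixed across doses, the simulated response is exactly non-decreasing in D.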

Simulation of perturbations of the response to an effector
Using the elements defined above, we will admit that a perturbator does not produce a response by itself, but can modify the response to an effector by altering the number of active receptors (a), the threshold (l) or the effective dose corresponding to the nominal dose (D) (hereafter a-, l- and D-perturbations, respectively). a- and D-perturbations can be illustrated in molecular terms through the well-known key-lock analogy (Figure 3). l-perturbations require supposing an intermediate fast process that modifies the cell sensitivity.
All of them can be exemplified by common physiological mechanisms such as those taking place in trans-membrane proteins, second messengers or enzymatic systems, in which any DR assay allows distinguishing at least between a- and D-perturbations. Indeed, in the presence of an excess of effector, a moderate increment of a perturbator modifies the response or not depending on whether the perturbation acts on the receptors or on the effector.
The effect of these perturbations on the response can be simulated by using rule R0 (10) and adding a vector that represents increasing perturbator concentrations, as well as a criterion for modifying the values of a, l or D as a function of the perturbator concentration. Since direct or inverse ratios are the simplest criteria, a perturbation term can be formulated as:

U_e = 1 + p_e P (11)

where P is the perturbator concentration, p_e the proportionality coefficient, and the subscript e indicates the element affected by the perturbation. The term U_e multiplies or divides the values of D, a or l depending on the effect we are trying to achieve, and it can be replaced by any other algebraic expression able to describe any other type of alteration. Now, it can be pointed out that if a DR curve is sigmoidal because the most sensitive elements of a population die at lower doses than the most resistant ones, then a time-response curve will be sigmoidal if the sensitivity distribution of the population is translated into responses at shorter or longer times. Although this problem will not be considered here, the time-course of the response could also be treated in terms of molecular interactions, which in this case would accelerate or delay the progress of the effect (Figure 3). In some fields (i.e. marine toxins), the time of death of a target organism often replaces the response in the framework of empirical dose-time death models [28][29][30][31]. We do not allude here to this approach, but to the time-response relationships, which can be described using the same models as DR analysis [26].
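A D-perturbation via the term (11) can be sketched as follows, reusing the single-effector rule (10); the proportionality coefficient and the distribution values are hypothetical:

```python
import random

def perturbed_response(D, P, p_D=2.0, Y=2000, seed=7):
    """D-perturbation: the effective dose is the nominal dose divided by
    the term U_D = 1 + p_D * P (equation (11)), so increasing perturbator
    concentration P depresses the response. Cells follow rule (10)."""
    rng = random.Random(seed)
    cells = [(rng.weibullvariate(40.0, 3.0),   # active receptors a
              rng.weibullvariate(20.0, 4.0))   # threshold l
             for _ in range(Y)]
    D_eff = D / (1.0 + p_D * P)
    S = sum(1 for a, l in cells if not (D_eff >= l and a >= l))
    return 1.0 - S / Y

R_no_pert = perturbed_response(30.0, 0.0)   # U_D = 1, effective dose 30
R_pert = perturbed_response(30.0, 1.0)      # U_D = 3, effective dose 10
```

Since the same cell population is used at both perturbator levels, the depressed response is guaranteed to be no higher than the unperturbed one.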

Simulation of the response to two effectors
In this case, the perturbator must be replaced by a second effector, but the essential issue is the need to include in the simulation algorithm, even in the absence of any interaction, some rule about the joint action. The simplest hypothesis in this regard admits that: 1) the receptors are specific to each effector; 2) the cell response is r = 0 if either of the doses exceeds the corresponding threshold and there is a sufficient number of receptors. Thus, the rule is:

r = IF(OR(AND(D_1 ≥ l_1; a_1 ≥ l_1); AND(D_2 ≥ l_2; a_2 ≥ l_2)); 0; 1)

These conditions are reminiscent of the IA hypothesis and in fact, as we shall see later, they produce the typical IA response surfaces. Now, both conditions can be denied, which seems quite reasonable for the second one [32], since it prevents accepting that two biological subsystems (e.g. glycolysis and b-oxidation) can be affected at individually sub-lethal, but jointly lethal, levels. By denoting the first condition (specificity) as S+ and the second one (independence) as I+, three additional rules, besides S+I+, can be considered:

S+I-. It admits that: 1) the effect G_i of a dose D_i below the threshold l_i is G_i = D_i/l_i; 2) the G_i values are additive; 3) r = 0 if G_1+G_2 ≥ 1 and there are enough receptors.

S-I+. It admits that each effector has access to the whole set of receptors (a_1+a_2), which produces competition if they are insufficient. Competition depends on factors such as the relative doses of the effectors, their diffusivity or their affinity for the receptors but, to simplify, only relative doses will be considered here: C_1 = D_1/(D_1+D_2) and C_2 = D_2/(D_1+D_2). Thus, the number of receptors bound to D_i will be C_i(a_1+a_2), which leads to the rule:

r = IF(AND(OR(Σa_i < l_1; C_1(a_1+a_2) < l_1); OR(Σa_i < l_2; C_2(a_1+a_2) < l_2)); 1; 0)

S-I-. It admits simultaneously competition (if the receptors are insufficient) and additivity of below-threshold effects.

None of these rules implies the sum of the doses required by the CA hypothesis.
The dose-addition condition required by CA plays down the receptor specificity, but again competition arises if the receptors are insufficient. If both effectors have the same threshold, the response is not modified by the competition. If the thresholds are different, the number of occupied receptors will depend on the relative doses and will be C_i(a_1+a_2), turning the dose-addition effect into a below-threshold addition effect: G_i = C_i(a_1+a_2)/l_i. Therefore, two rules are possible, either of which produces the response surfaces with straight isoboles that characterize null interaction in the CA hypothesis. Without competition (equal thresholds), the rule is:

R5a: r = IF(AND(Σa_i < l_1; Σa_i < l_2); 1; IF(AND(ΣD_i < l_1; ΣD_i < l_2); 1; 0))
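Single-cell rules of the S+I+ and S+I- types can be sketched as follows; reading ''enough receptors'' through the bound quantity min(a_i, D_i) is our simplifying assumption, not a formulation from the paper:

```python
def rule_s_plus_i_plus(D1, D2, a1, a2, l1, l2):
    """S+I+: specific receptors, independent thresholds. The cell dies (0)
    if either effector reaches its own threshold with enough receptors."""
    dead = (D1 >= l1 and a1 >= l1) or (D2 >= l2 and a2 >= l2)
    return 0 if dead else 1

def rule_s_plus_i_minus(D1, D2, a1, a2, l1, l2):
    """S+I-: additive sub-threshold effects. The effector units actually
    bound are min(a_i, D_i); the fractional effects G_i = min(a_i, D_i)/l_i
    add up, and the cell dies when G1 + G2 >= 1."""
    G = min(a1, D1) / l1 + min(a2, D2) / l2
    return 0 if G >= 1.0 else 1

# Two individually sub-lethal doses: jointly lethal under S+I-, not S+I+.
survives_indep = rule_s_plus_i_plus(6, 6, 10, 10, 10, 10)   # 1: neither dose reaches l
survives_addit = rule_s_plus_i_minus(6, 6, 10, 10, 10, 10)  # 0: G = 0.6 + 0.6 >= 1
```

The example makes concrete the objection to the independence condition noted above: under S+I- the glycolysis/b-oxidation type of jointly lethal sub-lethal doses is possible, whereas under S+I+ it is not.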

Inherent and accidental mechanisms
Concrete assumptions (about specificities, thresholds, competition and dose or effect addition) such as those included in rules R1 to R5 are indispensable for any joint action hypothesis, and they represent mechanisms that are inherent to a given null interaction model. Over these minimal inherent modes of action, accidental mechanisms (interactions) can be superimposed, in which each effector can perturb the response to the other by modifying the number of active receptors, the threshold or the effective dose corresponding to the nominal dose. Thus, when the effector E_1 perturbs the response to E_2, we can write an interaction term like (11), with the dose D_1 playing the role of the perturbator concentration P, which is included in rules R1 to R5 as a factor or divisor of D_2, a_2 or l_2. The interactions can be unidirectional (E_1 alters some factor related to E_2, but E_2 does not alter E_1 accordingly) or reciprocal (mutual alterations), and the latter can be symmetrical or not (the same or different strength in each direction). Whereas the reference of a perturbation is the response in the absence of the perturbator, the reference of an interaction is the inherent mechanism or null interaction. This implies that a given interaction has different consequences depending on the inherent mechanism and the frame of application. A systematics in this regard is proposed in Table 1, which attempts to preserve the main senses of the usual terminology [33], and whose categories allow simulations and specific formal descriptions. Although this terminology can be used without ambiguity, it seems preferable to simplify it by defining stimulation/inhibition as perturbations that increase/decrease the response to an effector, and synergy/antagonism as interactions that increase/decrease the joint response to two effectors with respect to the response produced under null interaction.

Perturbations of the response to a single effector
Figure 4 shows simulations of the response to an effector in the presence of increasing concentrations (P) of a perturbator, under the three conditions that depress the response (decrease of the effective dose, decrease of the number of active receptors and increase of the threshold). The individual fittings of the resulting profiles with model (7) proved that, in each series, the estimates of the K and m parameters varied as a function of P in specific ways (Table 2), thus enabling identification of the underlying perturbation. Although the maximum slope varied as a consequence of the variations of K and m, in agreement with (8), the parameter a remained constant in all cases.
Such constancy of a is not surprising, since the simulations with the rule R0 prove that the variation of this parameter is related to variations in the variances of a and l, a condition that was not considered in any case. Such a restriction simplifies the analysis and is not arbitrary. From the microscopic point of view adopted for the effector and perturbator actions, variations in the number of receptors and in the threshold are clear possibilities, but it is more difficult to justify an action on a population-level property such as the variance.
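This microscopic rationale can be illustrated with a minimal, rule-R0-style simulation (the log-normal threshold distribution, sample size and parameter values below are illustrative assumptions, not taken from the paper): each entity responds in an all-or-nothing way when the dose exceeds its own threshold, and the spread of thresholds across the population governs the steepness of the macroscopic sigmoid, i.e. the shape parameter a.

```python
import math, random

def simulate_population(dose, n=20000, thr_median=1.0, thr_sd=0.3, seed=1):
    """Fraction of entities whose threshold is exceeded by the dose.

    Hypothetical microsimulation: each entity responds (all-or-nothing)
    when dose > its own threshold, thresholds being log-normally
    distributed across the population around thr_median.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        thr = rng.lognormvariate(math.log(thr_median), thr_sd)
        if dose > thr:
            hits += 1
    return hits / n

# A narrow threshold distribution yields a steeper sigmoid than a wide
# one: just above the median threshold, the narrow-variance population
# is already almost fully responding, the wide-variance one is not.
steep = simulate_population(1.2, thr_sd=0.1)
shallow = simulate_population(1.2, thr_sd=0.6)
assert steep > shallow
```

This is why, macroscopically, a perturbator acting on receptor numbers or thresholds moves K and m but, unless it alters the population variances, leaves a unchanged.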
1.1. The perturbation function. To obtain a simultaneous solution for every series of profiles, an auxiliary function (a perturbation function p_h) is required to describe the possible variations of any parameter h as a function of P (Figure 5). This can be achieved by means of different biparametric (exponential, polynomial or hyperbolic) expressions, where the subscript h represents the modified parameter (K, m), h0 is the parametric value when P = 0, and the pairs b_h, c_h are fitting coefficients. It should be noted that, in the absence of perturbation, the first function requires b_h = 0 and c_h = 1, whereas the other two require b_h = 0 and c_h = 0. Moreover, in the third function a singularity is obtained if c_hP = -1. In order to avoid it, when P is coded in the interval [0,1] it is advisable to include the restriction c_h > -0.999 in the fitting algorithm. With these perturbation functions inserted, the model (7) turns into the model (22). With any of the mentioned forms of p_h, equation (22) led to excellent simultaneous fittings, which confirmed the specificity of the parametric variations found in the individual descriptions. Figure 4 and Table 3 show the results obtained with the hyperbolic function (21), which will be the form used from now on. The description of the inhibition of the haemolytic activity of palytoxin by ouabain [22] illustrates the application of equation (22) for perturbations involving a time-response profile.
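A sketch of models (7), (21) and (22) in code may clarify the construction. The cumulative Weibull form of (7) and the exact algebraic shape of the hyperbolic p_h are assumptions consistent with the constraints stated above, since the display equations themselves are not reproduced in this excerpt:

```python
import math

def weibull(D, K, m, a):
    """Assumed form of model (7): asymptote K, half-response dose m,
    shape parameter a (so that R(m) = K/2)."""
    return K * (1.0 - math.exp(-math.log(2.0) * (D / m) ** a))

def p_hyper(P, b, c):
    """Hyperbolic perturbation function, an assumed form of (21):
    p = 1 when b = 0 and c = 0 (no perturbation); singular at
    c*P = -1, hence the fitting restriction c > -0.999 for P in [0,1].
    """
    return 1.0 + b * P / (1.0 + c * P)

def perturbed(D, P, K0, m0, a, bK=0.0, cK=0.0, bm=0.0, cm=0.0):
    """Model (22) sketch: K and m vary with the perturbator level P."""
    return weibull(D, K0 * p_hyper(P, bK, cK), m0 * p_hyper(P, bm, cm), a)

# A receptor perturbation depresses the asymptote K as P grows:
r0 = perturbed(5.0, 0.0, K0=1.0, m0=1.0, a=2.0, bK=-0.5)
r1 = perturbed(5.0, 1.0, K0=1.0, m0=1.0, a=2.0, bK=-0.5)
assert r1 < r0
```

In the same way, a D-perturbation would be expressed through bm, cm alone, reproducing the specific parametric signatures of Table 2.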
When the model (22) is satisfied with a uniparametric p_h (b_h = 0 or c_h = 0), the ratios between confidence intervals (CI) and parametric values are approximately of the same size in all cases. When b_h ≠ 0 and c_h ≠ 0 are required, both CI are penalized by the linear correlation between the two coefficients; therefore, with high experimental error and a low number of observations, a b_h, c_h pair can fail to reach statistical significance, even in highly predictive models. This problem is solved if the model is recalculated fixing the value of one of the coefficients and excluding it from the Student's t-test, or (with a small loss of predictive capability) setting to zero the coefficient of the pair whose suppression does not alter the increasing or decreasing trend of the parametric variation due to the perturbation function. Finally, in D-perturbations it is interesting to observe that two equivalent solutions are possible. One of them is that provided by the model (22), describing the variation of m (the only affected parameter) due to the presence of the perturbator. The other one describes the variation of the effective dose directly, through an equation in which the perturbation function is applied to the dose itself. Later on we will see that this dual solution is not possible in interactions between two effectors.

Table 2. Variations (+: increase; -: decrease; 0: no change) in the parameters of the response to an effector, as described by the equation (7), due to the presence of an agent which produces the specified perturbations.

The response surfaces to two effectors
Among the five basic types of response produced by the rules R1 to R5, only two were in accordance with those corresponding to the IA and CA hypotheses, and the diversity increases as interactions are included. This diversity cannot be dismissed as unrealistic, for two reasons: it involves options that are implicit in the very rules that produce IA and CA responses, and it translates common physiological mechanisms. In fact, the situation seems to be just the opposite one. As Rovati et al. [34] pointed out regarding the cases of partial agonism, or of effectors able to interact with receptors promoting opposite effects, there are responses «that were often disregarded by the experimentalists, or considered as artefacts, in the absence of a biological and/or mathematical theory to justify them».
The parametric variations due to interactions showed the same specific increasing and decreasing trends as those due to perturbations (Table 2), in linear or asymptotic forms (Figure 5) depending on quantitative factors, a fact that is hardly surprising.

Modelling of interactions under the independent action hypothesis
As expected, the response surfaces produced by the rule R1 were consistent with the IA hypothesis (Figure 6, Table 4) and could be accurately described by transferring model (7) to equation (2), yielding model (24). Since each effector can act as a perturbator of the response to the other, by altering effective doses, active receptors or thresholds, auxiliary functions p_hi can be defined in the same terms as those applied to the perturbations, and (24) can thus be written in its most complex, interaction-bearing form. When this equation was used to describe the corresponding simulations, the specific variations in the parameters K_i and m_i made it possible to discriminate all the modalities of interaction (Table 2), whose main types are summarized in Figure 6, Figure 7 and Table 4. For interactions affecting the effective dose, the dual solution that is possible in the homologous case of the perturbations reduces here to parametric variations affecting only m_i.
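The structure of the IA null model and its perturbation-style extension can be sketched as follows. The Weibull form of the individual responses and the entry of the p_hi functions through m_i are illustrative assumptions, and the parameter values are arbitrary:

```python
import math

def weibull(D, K, m, a):
    """Assumed form of model (7): R(m) = K/2, asymptote K."""
    return K * (1.0 - math.exp(-math.log(2.0) * (D / m) ** a))

def ia_joint(D1, D2, K1, m1, a1, K2, m2, a2, p_m1=1.0, p_m2=1.0):
    """IA model (2)/(24): joint response as a probabilistic union,
    R = R1 + R2 - R1*R2.

    p_m1 and p_m2 stand in for the auxiliary functions p_hi: under a
    D-interaction each effector rescales the other's half-response
    dose, which (as noted in the text) shows up through m_i alone.
    """
    R1 = weibull(D1, K1, m1 * p_m1, a1)
    R2 = weibull(D2, K2, m2 * p_m2, a2)
    return R1 + R2 - R1 * R2

# Null interaction (p_m1 = p_m2 = 1): the joint response exceeds each
# individual response (union of probabilities) without exceeding 1.
r = ia_joint(1.0, 1.0, 0.8, 1.0, 2.0, 0.7, 1.0, 2.0)
assert weibull(1.0, 0.8, 1.0, 2.0) < r < 1.0
```

Setting p_m1 or p_m2 different from 1 generates the unidirectional or reciprocal D-interactions discussed above.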
3.1. Partial forms of independent action. Rules R2 to R4 embody the three alternatives to the two conditions of R1, which represents typical independent action. The corresponding simulations showed that unspecific receptors produce a competition that depresses the response, while additive below-threshold effects promote a cooperative effect with the opposite, enhancing result. Since equation (1), which describes the IA mode, contains a term (the product of the individual responses) translating the joint probability, it can be supposed that the cooperative and competitive effects could be described by including in (1) a coefficient s modifying the contribution of that term.
However, the fitting tests proved that the coefficient s is necessary but not sufficient, and that accurate descriptions (Figure 6) require parametric structures including interaction terms (Table 4). This seems to contradict the definitions of inherent mechanism and accidental interaction proposed in the ''Simulation of the response to two effectors'' section, since the S+I-, S-I+ and S-I- modes (like the IA = S+I+ one) represent inherent mechanisms, without modifications of effective doses, receptors or thresholds. Nevertheless, the lack of specificity in the receptors and the additivity of the below-threshold effects mean that the action of one effector is not indifferent to the presence of the other, which constitutes an interaction, albeit of a passive character.
Thus, a generalized IA model can be written in its most complex form (in practice several p_hi = 1 are to be expected) by combining equation (1), the coefficient s and the auxiliary functions p_hi. It should be noted that the need for s ≠ 1 detects the relaxation of some of the conditions defining the IA mode, but in such a case the identification of possible D, a or l interactions becomes confused.

Modelling of interactions under the concentration addition hypothesis
As has already been mentioned, the application of the CA hypothesis is usually carried out through isobole analysis. However, the definition of null interaction according to Berenbaum provides the key for establishing an explicit model. Indeed, if the response to a mixed dose of two effectors should behave as the response to a fictitious equivalent dose of a single effector, the model is necessarily equation (27), in which the summed doses enter a single sigmoidal function. In fact, the simulations obtained with either of the rules R5 were accurately described by this equation, which produced straight isoboles with equal intersection points on the dose axes (Figure 8 and Table 5). Although the competition affects the parametric values, it does not alter the functional form, making the two alternatives of the rule R5 equivalent.
As a consequence of equation (27), the CA hypothesis can be accepted when the individual responses differ in their m parameters and (given the relation (8)) in their maximum slopes, but should be rejected when such responses differ in their K parameters. This is consistent with the isobole approach: if the asymptotes differ, expressions such as (4), (5) or (6) could only be applied to responses lower than the lowest asymptote, since the inverse function (9) only exists if R < K.
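A sketch of (27), under the same assumed Weibull form used above, shows both properties at once: the straight isoboles with equal axis intercepts, and the R < K condition for the existence of the inverse (the argument of the logarithm must stay positive):

```python
import math

def ca_joint(D1, D2, K, m, a):
    """Model (27): the summed dose enters one sigmoid with a single
    parameter set (K, m, a); the Weibull form is an assumption."""
    return K * (1.0 - math.exp(-math.log(2.0) * ((D1 + D2) / m) ** a))

def isobole(level, K, m, a, n=5):
    """Dose pairs giving the same response level. The inverse only
    exists if level < K (the R < K condition), and D1 + D2 is constant
    along the isobole, so it is a straight line with equal intercepts
    on both dose axes."""
    Dsum = m * (-math.log(1.0 - level / K) / math.log(2.0)) ** (1.0 / a)
    return [(Dsum * i / (n - 1), Dsum * (1 - i / (n - 1))) for i in range(n)]

pairs = isobole(0.5, K=1.0, m=2.0, a=1.5)
# Every pair on the isobole produces the same joint response:
assert all(abs(ca_joint(d1, d2, 1.0, 2.0, 1.5) - 0.5) < 1e-9
           for d1, d2 in pairs)
```

Asking for a level above K would make the logarithm undefined, which is the algebraic face of the rejection criterion stated above.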
When interactions are included, the key notion of concentration addition should be preserved in any modification of (27), which means that the doses should act as an additive block in a function with a single set of sigmoidal parameters (K, m, a). These conditions enable the cases described next.
4.1. Effectors with different potency. This case was simulated by using the rule R5a and multiplying one of the doses by a tox factor (tox = 1 for effectors with equal potency). Results were fitted (with coded doses in the same interval) to equation (28), which provided a precise description and produced straight isoboles with different intersection points on the dose axes (Figure 8). The u coefficient (u > 1 if the first effector has more potency than the second one) means that if a joint response is described by equation (28), the m2 parameter of the individual response to the second effector is m2 = m×u.
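A sketch of (28), in an assumed form in which the second dose is converted into equivalents of the first by the potency coefficient u, reproduces the stated relation m2 = m×u (the placement of u and the Weibull form are assumptions of this sketch):

```python
import math

def ca_potency(D1, D2, u, K, m, a):
    """Model (28) sketch (hypothetical form): the second dose is
    divided by the potency coefficient u before concentration
    addition, so u > 1 means effector 1 is the more potent and the
    individual half-response dose of effector 2 becomes m2 = m * u."""
    Deq = D1 + D2 / u
    return K * (1.0 - math.exp(-math.log(2.0) * (Deq / m) ** a))

# Effector 2 alone reaches half response at D2 = m * u, i.e. it needs
# u times more dose than effector 1 to do the same job:
u, m = 2.0, 1.5
assert abs(ca_potency(0.0, m * u, u, 1.0, m, 2.0) - 0.5) < 1e-9
assert abs(ca_potency(m, 0.0, u, 1.0, m, 2.0) - 0.5) < 1e-9
```

The isoboles remain straight, but their intercepts on the two dose axes now differ by the factor u, as in Figure 8.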

4.2. Synergy and antagonism.
To obtain surfaces with isoboles like those associated with synergy and antagonism, the rule R5a must include the condition that an effector alters, unidirectionally or reciprocally, the effective dose of the other one. By using p_Di terms like the p_hi defined by equation (22), the corresponding simulations (Figure 8 and Figure 9) were described by means of equation (29). Contrary to the case of the perturbations, the equivalent dual solution based on the variation of the m parameter is not possible here.

[Figure 7 caption fragment: Active receptors (a_i) and thresholds (l_i) were defined as in Table 3 (26). Keys and graphic criteria as in Figure 6. See also Table 4. doi:10.1371/journal.pone.0061391.g007]

[Figure 8 caption: Joint response to two effectors under CA mode of action. Concrete types of interactions are specified and adjusted to the generalized model (32). Keys and graphic criteria as in Figure 6. See also Table 5. doi:10.1371/journal.pone.0061391.g008]
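A sketch of (29) along the lines above (the hyperbolic form of the p_Di terms and the Weibull sigmoid are assumptions of this illustration):

```python
import math

def p_hyper(x, b, c):
    """Assumed hyperbolic auxiliary function (cf. (21)): equals 1 when
    b = 0 and c = 0, i.e. in the absence of interaction."""
    return 1.0 + b * x / (1.0 + c * x)

def ca_interacting(D1, D2, K, m, a, b1=0.0, c1=0.0, b2=0.0, c2=0.0):
    """Model (29) sketch: each effector rescales the *effective* dose
    of the other through a p_Di term, preserving the additive dose
    block of CA. Positive b raises the partner's effective dose
    (synergy); negative b lowers it (antagonism)."""
    D1_eff = D1 * p_hyper(D2, b2, c2)  # effector 2 perturbs dose 1
    D2_eff = D2 * p_hyper(D1, b1, c1)  # effector 1 perturbs dose 2
    return K * (1.0 - math.exp(-math.log(2.0) * ((D1_eff + D2_eff) / m) ** a))

# Reciprocal symmetrical synergy raises, and antagonism lowers, the
# joint response with respect to null interaction:
null = ca_interacting(1.0, 1.0, 1.0, 2.0, 1.5)
syn = ca_interacting(1.0, 1.0, 1.0, 2.0, 1.5, b1=0.5, b2=0.5)
anti = ca_interacting(1.0, 1.0, 1.0, 2.0, 1.5, b1=-0.5, b2=-0.5)
assert anti < null < syn
```

Because the effective doses depend on the partner's dose, mixed-dose points no longer fall on the straight null isoboles, producing the concave/convex series of Figures 8 and 9.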

4.3. Interactions effector-receptor (a) and effector-threshold (l).
If it is admitted that an effector E1 alters the number of active receptors a2 or the threshold l2 of the effector E2, the conservation of the CA hypothesis requires admitting also that E1 alters the a1 or l1 values, which in turn means admitting autoinhibitory or autocatalytic effects. In fact, when this type of interaction is included in the rules R5, the response surfaces from the resulting simulations are limited by individual responses that decrease after a maximum or increase without reaching an asymptote. Moreover, the estimates of the sigmoidal parameters from the joint response are only apparent (such as the Michaelian parameters in the presence of inhibitors): useful to predict the response surface, but without direct physical meaning in connection with the individual responses, whose real parameters should be separately calculated using the model (7).
Leaving aside for now the realism of these behaviours, it can be pointed out that the response of some lactic acid bacteria to lactic acid seems to illustrate them, at least in their autoinhibitory modality [35][36][37]. Thus, the response of Leuconostoc mesenteroides to this acid [38] showed a profile (like that resulting from an enzymatic kinetics under substrate inhibition) which was described by including in the logistic equation a dose-dependent term depressing the asymptotic value. But the description is also feasible, and more accurate, using a modified Weibull model. In any case, these a and l interactions under CA response can be described, in their most complex forms, by modifying the K and m parameters with interaction terms (notations as in Table 4; see also Figure 9 and Figure 10). Although these effects increase or decrease the response with respect to that expected in null interaction, the isoboles differ markedly (Figure 9) from those produced by equation (29) and corresponding to synergy and antagonism.
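The modified Weibull equation itself is not reproduced in this excerpt; a minimal sketch of the kind of profile described, using a hypothetical exponential factor that depresses the asymptote as the dose grows, might read:

```python
import math

def weibull_autoinhibited(D, K, m, a, b):
    """Hypothetical modified Weibull: the standard cumulative term is
    damped by a dose-dependent factor exp(-b*D), so the response rises
    to a maximum and then declines, as in an enzymatic kinetics under
    substrate inhibition. The damping form is an assumption, not the
    paper's equation."""
    return K * (1.0 - math.exp(-math.log(2.0) * (D / m) ** a)) * math.exp(-b * D)

# Non-monotonic profile: increases, peaks, then decreases.
lo, mid, hi = (weibull_autoinhibited(d, 1.0, 1.0, 2.0, 0.2)
               for d in (0.5, 3.0, 15.0))
assert lo < mid and hi < mid
```

Profiles of this shape are exactly the non-sigmoidal individual responses that, as noted above, bound the joint surfaces produced by a and l interactions.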
As in the IA mode, all the possible options under the CA hypothesis can be unified in a generalized model which, in its most complex form (again, in practice simpler cases with several p_hi = 1 should be expected), can be formulated as the generalized model (32).

Broad and strict sense of the notions of synergy and antagonism
The frequent and not always enlightening discussions about synergy and antagonism have led some authors to consider synergy an «ineffective mixture risk definition» [10]. However, we believe that both concepts are important, and that their confused sides can be clarified by suppressing some common but unjustified assumptions.
Firstly, their formal aspects should be distinguished from the factual ones. From a factual perspective, as has been said, synergy and antagonism are the results of interactions that increase or decrease the response with respect to that expected in null interaction. This is a broad and unambiguous definition, but it requires taking into account the following issues:

1)
Synergistic and antagonistic responses can be generated by any of the interactions (D, a or l) considered in preceding sections. This implies that such responses can have diverse origins, that these origins require different formal descriptions, and that these descriptions are dependent not only on the elements involved in a given interaction, but also on the inherent mechanism of the system under null interaction conditions.

2)
The association between concave/convex isoboles and synergy/antagonism is of limited validity and can be misleading. In IA mode, as has been seen, it is meaningless, and in CA mode it lacks general validity. As an example, the reciprocal synergistic a-interaction (Figure 9) increases the response over the entire domain with respect to null interaction, in spite of which its isoboles are straight. In fact, that association is only applicable to D-interactions in CA mode, where the corresponding regular series of concave/convex isoboles enables clear contrasts with the straight series that are typical (although not exclusive) of the null interaction.

3)
If the usual isobolographic convention in the CA framework is followed, synergy/antagonism could be defined in a strict sense as those effector-effector interactions that increase/decrease the response. Such a definition, however, would arbitrarily exclude other interactions (such as a or l ones) with similar net effects.

4)
Unfortunately, no satisfactory expedient exists for typifying synergy/antagonism by means of a single value or a particular isobole, since the differences between these conditions and null interaction vary throughout the corresponding response surfaces (Figure 10C2).

5)
Furthermore, there are no theoretical reasons preventing interactions with opposite regional effects on the response surface (e.g. E1 increases the effective dose of E2, and E2 increases the parameter m of the response to E1). Consequently, synergy and antagonism can be simultaneously detected in different regions of a given response surface.
6. More insufficiencies and ambiguities of the IA-CA dualism

The frequently inconclusive character of the IA and CA hypotheses is a well-documented experimental fact [39], a justification of which was suggested in theoretical section 3. A different justification is provided by the simulations described in the ''Simulation of perturbations of the response to an effector'' and ''Partial forms of independent action'' sections. Indeed, when the conditions that define the IA hypothesis are altered in biologically plausible forms, response surfaces with cooperative or competitive effects are obtained, which cannot be acceptably described by any of the possibilities of the IA-CA scheme.
An additional cause of ambiguity, difficult to detect in the absence of explicit models, derives from a more formal issue. The IA and CA response surfaces are in general clearly distinguishable when the asymptotes of the individual responses are less than 1, as in the examples of Figure 6, Figure 7, Figure 8 and Figure 9. But the distinction becomes more problematic as these asymptotes move closer to 1, since in such a case the IA surface loses its peculiar top region, in which the joint response surpasses the asymptotes of the individual responses. Figure 10 shows an IA surface obtained by assigning arbitrary parametric values (with K1 = K2 = 1) to the model (1), which could be significantly typified as a case of antagonism in CA mode by any assessment method, especially if the results are ''blurred'' by the experimental error. Similarly, a reciprocal asymmetrical synergy in IA mode is practically indistinguishable from synergy in CA mode. In such cases, the false hypothesis could only be detected by the lack of randomness of the residuals, if the number of observations is sufficient and the experimental error is reasonable.
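The loss of the IA ''top region'' when the asymptotes reach 1 can be checked numerically (the Weibull form of the individual responses is an assumption; doses and parameters are arbitrary):

```python
import math

def weibull(D, K, m, a):
    return K * (1.0 - math.exp(-math.log(2.0) * (D / m) ** a))

def ia_joint(D1, D2, K1, K2, m=1.0, a=2.0):
    """IA model (1): R = R1 + R2 - R1*R2."""
    R1, R2 = weibull(D1, K1, m, a), weibull(D2, K2, m, a)
    return R1 + R2 - R1 * R2

big = 50.0  # a dose far into the plateau of both effectors
# Sub-unit asymptotes: the joint response overshoots each individual
# asymptote (the top region that betrays IA against CA)...
assert ia_joint(big, big, 0.6, 0.6) > 0.6
# ...but with K1 = K2 = 1 the overshoot disappears, and the IA surface
# can mimic a CA surface (e.g. an apparent antagonism).
assert ia_joint(big, big, 1.0, 1.0) <= 1.0
```

With K1 = K2 = 0.6 the plateau of the joint response is 0.84, well above either asymptote; with K1 = K2 = 1 the plateau is simply 1, leaving nothing for the IA-CA contrast to grip.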
All these reasons cast doubt on the generalization of any IA-CA discrimination method to more than two effectors, since the probability of every kind of ambiguity increases with the number of agents considered. The main justification of this generalization is the experimental economy in researching the joint effect of many effectors at moderate levels, an important issue in environmental toxicology which seems practically unapproachable through the assay of binary combinations. However, it also seems that the simplification of the experimental arrays will only produce even more ambiguous results.

Experimental examples
Some elements of the approach proposed here are exemplified by two cases of study mentioned above: the larval growth inhibition in sea urchin by the joint action of hydrocarbons and dispersants [16], and the inhibitory perturbation by ouabain of the haemolytic action of palytoxin [27]. In fact, the need to clarify problems such as those arising in these cases was the origin of the simulation-based systematics we have attempted in this work.
We present now an experimental example in connection with antioxidant activity, a field in which the possible interactions between natural and synthetic products are formally equivalent to the toxicological ones, and raise similar discussions. The example refers to the inhibition of crocin oxidation by the joint action of two well-known antioxidants: ascorbic acid and trolox. Triplicate analytical results were obtained by monitoring the oxidation kinetics in 8×8 concentration arrays, using a microplate method [40,41]. Inhibitory responses were quantified through the ratio between the areas under the kinetic profiles at the end point of the control [40,42-44].
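The quantification criterion can be sketched as follows (the numerical kinetic profiles below are invented for illustration; only the area-ratio idea comes from the text):

```python
def auc(times, values):
    """Trapezoidal area under a kinetic profile."""
    return sum((t2 - t1) * (v1 + v2) / 2.0
               for (t1, v1), (t2, v2) in zip(zip(times, values),
                                             zip(times[1:], values[1:])))

def response(times, sample, control):
    """Inhibitory response as the ratio of areas under the kinetic
    profiles, evaluated up to the end point of the control."""
    return auc(times, sample) / auc(times, control)

t = [0, 10, 20, 30, 40]                # sampling times (illustrative)
control = [1.0, 0.8, 0.5, 0.3, 0.2]    # crocin bleaching, no antioxidant
sample = [1.0, 0.95, 0.85, 0.7, 0.6]   # bleaching slowed by antioxidant
r = response(t, sample, control)
assert r > 1.0  # more crocin preserved than in the control
```

A ratio above 1 indicates antioxidant protection, and it is these ratios, over the full concentration array, that constitute the response surface analysed next.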
Antioxidants can compete with the oxidizable substrate for oxygen or the source of radicals (primary antioxidants), or for radicalized products that are formed in more advanced oxidation phases (secondary antioxidants). Therefore, their modes of action are in agreement with the diagrams of Figure 2 and they can be analysed using the general equations [31] or [37].
In this regard, the example of ascorbic acid and trolox is interesting, because it shows some of the features that make a clear decision difficult (Figure 11 and Table 6). Although null interaction cannot be accepted under either the IA or the CA hypothesis, when a synergistic effect is admitted the difference between the two (statistically significant) options becomes small. The residual distribution inclines the decision towards IA, but a more accurate characterization is a predominantly independent action, with synergy and a cooperative unspecific effect. On the other hand, it should be pointed out that a conventional analysis (use of a model (1) for the IA hypothesis and a contrast on the 50% isobole for the CA hypothesis) would lead to the decision of a CA mode of action.

Discussion
The approach we have defended here rests on three statements: S1) in any algebraic macroscopic model, any interaction between effectors is necessarily translated into (reciprocal or not) modifications of the parametric values describing the individual action of each effector; in our modelling, such modifications can adopt increasing or decreasing forms, with constant, increasing or decreasing slope in each case. S2) The algorithms used for describing microscopic facts underlying common chemical and biological interactions produce realistic simulations as long as any hypothetical mechanism considered can be: a) defined in terms of levels of effectors, enhancers, inhibitors, receptors and response thresholds; b) expressed by involving these elements in Boolean propositions. S3) The macroscopic consequences of all the microscopic mechanisms considered were satisfactorily described by the proposed macroscopic modelling.
With these premises, two complementary queries emerge. The first concerns the degree in which the microscopic interactions considered are important for defining observable consequences as synergy or antagonism. The second one refers to the possibility that a concrete microscopic mechanism can be specifically identified through the macroscopic model obtained from a given data set.
Regarding the first query, it can be underlined that the two suppositions under which our approach would be insufficient are: 1) parametric variations (with discontinuities, singularities, or very pronounced inflection points) which cannot be described according to S1; 2) mechanisms which cannot be reduced, in the last analysis, to the terms of S2. Since neither of these suppositions is very plausible, it can be accepted that the considered interactions are considerably relevant (an absolute answer seems impossible in a factual science), and that, in any case, our double approach will help in the search for alternative types of interactions.
The second query leads us to admit that our modelling is mechanistic in the microscopic-to-macroscopic direction, but phenomenological in the opposite one. Therefore, it obeys the general fact that no mechanism can be unequivocally deduced from any macroscopic description, because different mechanisms can generate the same phenomenological profile. In perturbator-effector interactions, a reasonably mechanistic systematics is possible on the basis of the relationships between the parametric variations (Table 2). But in effector-effector interactions, these relations provide only mechanistic suggestions, whose validation requires additional experiments such as those applied in enzymatic kinetics to identify the different inhibition modalities. Nevertheless, our modelling produces explicit algebraic equations able to describe accurately a set of situations more diverse and realistic than those considered in other alternatives, and allows classifying (at least according to the parametric variations) different modalities of synergy and antagonism. The proposed approach solves some recalcitrant and controversial aspects of these concepts, as well as establishing the necessary distinction between their factual and formal sides, and it exposes several types of theoretical reasons that explain the abundance of experimental results that are inconclusive in the IA-CA framework, as pointed out by other authors [11,12]. These reasons perhaps have diverse origins, but they are related, firstly, to the fact that the IA and CA hypotheses are far from elemental statements. As proven by the rules R1 to R4, the IA hypothesis involves conditions that can be combined in ways that do not obey either of the two modes of action, but whose macroscopic consequences can be satisfactorily described in the frame of our proposal.
Hypotheses of null interaction and synergy are compared, under the suppositions of independent action and concentration addition, through the fitting of the experimental results to the respective generalized models. Adj. r²: adjusted coefficient of multiple determination; res. sk.: residual skewness. See Figure 11 and text for details. doi:10.1371/journal.pone.0061391.t006