
MS and SD conceived and designed the experiments. MS performed the experiments and analyzed the data. MS and SD wrote the paper.

The authors have declared that no competing interests exist.

Parsing a mental operation into components, characterizing the parallel or serial nature of this flow, and understanding what each process ultimately contributes to response time are fundamental questions in cognitive neuroscience. Here we show how a simple theoretical model leads to an extended set of predictions concerning the distribution of response time and its alteration by simultaneous performance of another task. The model provides a synthesis of psychological refractory period and random-walk models of response time. It merely assumes that a task consists of three consecutive stages—perception, decision based on noisy integration of evidence, and response—and that the perceptual and motor stages can operate simultaneously with stages of another task, while the central decision process constitutes a bottleneck. We designed a number-comparison task that provided a thorough test of the model by allowing independent variations in number notation, numerical distance, response complexity, and temporal asynchrony relative to an interfering probe task of tone discrimination. The results revealed a parsing of the comparison task in which each variable affects only one stage. Numerical distance affects the integration process, which is the only step that cannot proceed in parallel and has a major contribution to response time variability. The other stages, mapping the numeral to an internal quantity and executing the motor response, can be carried out in parallel with another task. Changing the duration of these processes has no significant effect on the variance.

The inability to operate two cognitive processes simultaneously (a mental bottleneck) can be explained by a model in which evidence is accumulated stochastically to reach a decision.

Even the simplest behaviour involves a chain of computations, which link perception, decision making, and action [

When two tasks are presented simultaneously (or sequentially at a short interval), a delay in the execution of the second task has been systematically observed [

A separate line of psychological research has investigated how the decision to respond is achieved. The decision-making process has been modelled as a noisy integrator that accumulates evidence provided by the sensory system [

In the present work, we propose a single assumption that unifies those two lines of research. We postulate that only the integration process establishes a serial bottleneck, while all other stages can proceed in parallel with stages of another task (

Each task involves a sequence of three processing stages. The perceptual and motor stages have fixed durations and can be carried out in parallel with stages of another task, while the central stage consists of a noisy integration (a random walk) that runs until a decision threshold is reached. The central stage of task 2 cannot start until the central stage of task 1 is finished; this constraint establishes a bottleneck and makes the central stage serial. The distribution of RTs for the second task is wider than that for the first, because it combines the intrinsic variance of task 2 (the time to reach threshold) with the variance in the onset of its central stage, which is set by the ending of the central stage of task 1.
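As an illustration, this architecture can be simulated directly: two tasks, each with fixed perceptual (P) and motor (M) stages and a random-walk central (C) stage, where C2 must wait for C1 to finish. This is a minimal sketch, not the fitting procedure used in the paper; the stage durations and random-walk parameters (drift, noise, threshold) are arbitrary illustrative values.

```python
import random

def central_time(drift=0.5, noise=1.0, threshold=40.0, dt=1.0):
    """Time for a noisy accumulator (random walk) to reach threshold."""
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + noise * random.gauss(0, 1) * dt ** 0.5
        t += dt
    return t

def dual_task_trial(delta, P1=100, M1=80, P2=100, M2=80):
    """One trial; all durations in ms. Only the central (C) stages are
    serial: C2 cannot start before C1 has finished."""
    c1_end = P1 + central_time()           # central stage of task 1 ends here
    rt1 = c1_end + M1
    c2_start = max(delta + P2, c1_end)     # bottleneck: wait for C1 if needed
    rt2 = c2_start + central_time() + M2   # RT2 measured from trial onset
    return rt1, rt2
```

Because the waiting time inherits the variance of C1, simulated RT2 distributions at short delays are wider than RT1 distributions, as the model predicts.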

We designed a behavioural task to test the validity of the model. This number-comparison task involves deciding whether a number presented on the screen is larger or smaller than 45. Different manipulations presumably render the task more difficult at different stages of processing: notation (whether the number was presented in Arabic digits or in spelled words), distance (the numerical distance between the presented number and 45), and response complexity (whether subjects were asked to tap once or twice to indicate their choice). Previous studies have shown that all of these manipulations change the difficulty of the task: RTs increase when numerical distance decreases and when numbers are presented in spelled words [

Subjects were asked to perform a dual task. One of the two tasks was presented visually and involved a number comparison: subjects decided whether a number presented on the screen was larger or smaller than 45. Hereafter we will refer to it as the “number task”. The other was a tone-discrimination task that involved deciding whether the frequency of a single pure tone, presented for 150 ms, was high (880 Hz) or low (440 Hz) (subjects heard both tones repeatedly before the beginning of the experiment). Hereafter we will refer to it as the “tone task”. Two different populations of subjects performed the task in the two possible orders, tone task followed by number task or vice versa.

The number task was our main task of study, and was manipulated using three different factors: notation (whether the number was presented in Arabic digits or in spelled words), distance (the numerical distance between the presented number and 45), and response complexity (whether subjects were asked to tap once or twice to indicate their choice). The tone task was never varied throughout the experiment. The rationale underlying this experimental design is that the tone task is used as a probe to study, through interference experiments, the different stages of processing of the number task. This asymmetry between the two tasks, which might be helpful to keep in mind, was of course not stated to the subjects, who were just asked to attend equally to both tasks.

The results section is organized as follows. We first report an analysis of basic measures of central tendency and dispersion. We then address how different manipulations (within the number task or through interference with the tone task) change the mean RTs and their dispersion. These analyses allow us to test the additivity of the effects of each factor and, through the interference analysis, to determine whether each factor affects the perceptual, the central, or the motor stage. A second level of analysis involves a more detailed characterization of the shapes of the RT distributions. Fitting the distributions allows us to evaluate the accumulation model of response decision and to relate its components to those identified by their patterns of interference in the first-level analysis.

For clarity and as a reference throughout the paper, all the definitions, components of the models, and experimental manipulations are summarized in

DOI: 10.1371/journal.pbio.0030037.t001

The first analysis involved studying the effects of the different manipulations of notation, distance, and response complexity on mean RTs and response dispersion on the number task when it came first. Our model predicted that manipulations that affect separate stages should have additive effects on mean RTs, and that only manipulations that affect the central stage should significantly increase response dispersion.

For this analysis (and throughout the paper unless otherwise specified) distance, the absolute value of the difference between the presented number and 45, was binned into two groups: close (≤12) and far (>12). Central tendency was measured by estimating the mean RT after trimming outliers, i.e., discarding responses slower than 1,200 ms. Response dispersion was measured by the interquartile range, i.e., the difference between the 75th and 25th percentiles of the RT distribution. (Identical results were obtained with other measures, e.g., the median and standard deviation of RTs. Note that, in general, serial-stage models predict that factors affecting distinct stages should have additive effects on mean RTs, but not necessarily on median RTs. Our model, however, supposes that factors affecting the P and M components do not add to response dispersion, but merely add a constant offset to RTs. Under this hypothesis, factors selectively affecting the P, C, and M components should also have additive effects on median RTs. In fact, the effects of perceptual and motor factors should be quantitatively the same on mean and median RTs.)
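For concreteness, the two summary measures can be computed as follows. This is a sketch using Python's standard library; the `rts` argument (a list of single-trial RTs in milliseconds) and the function name are ours:

```python
import statistics

def central_and_dispersion(rts, cutoff=1200):
    """Trimmed central tendency and dispersion of a list of RTs (ms).
    Responses slower than `cutoff` are discarded as outliers; dispersion
    is the interquartile range (75th minus 25th percentile)."""
    kept = [rt for rt in rts if rt <= cutoff]
    q1, _median, q3 = statistics.quantiles(kept, n=4, method="inclusive")
    return statistics.mean(kept), q3 - q1
```

For example, `central_and_dispersion([400, 500, 600, 700, 5000])` drops the 5,000 ms outlier before computing both measures.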

As we expected from several prior experiments [

(A) Changes in the mean RT of the number task, when it comes first, across the different experimental manipulations. Changing the notation or the response complexity increases mean RT, and within each condition, responses are slower for close than for far distances. The difference between far and close conditions is independent of the experimental manipulation, indicating an additive effect that is tested in the ANOVAs (see

(B) A different pattern is observed for the interquartile range, which provides a measure of dispersion. While the distance manipulation results in a major change of the interquartile range, there is no major effect of notation or response complexity.

Red indicates a significant effect

DOI: 10.1371/journal.pbio.0030037.t002

Interestingly, the effects of the different manipulations on response dispersion did not follow the effects on the mean, indicating that some factors slowed RTs without significantly increasing their dispersion. The distance manipulation produced a significant increase of the interquartile range, typical of a stochastic process, in which dispersion increases with the mean. In contrast, notation and response complexity, while causing an important change in the mean, did not significantly increase the interquartile range (

To fully address whether the number-comparison task involves three separate stages, each selectively affected by one experimental factor (distance, notation, or response complexity), a complete “additive factors” experimental design is needed, in which the different factors are crossed so that all interactions can be tested. However, such a factorial design, if run within the dual-task experiment, would involve an exceedingly large number of conditions, which would be very difficult to test on a subject-by-subject basis within a single session. Instead, we ran it as an independent experiment, in which a new group of subjects was asked to perform only the number task.

The results of this new experiment, summarized in

In this experiment, the condition Words 2 Taps was included to allow a full factorial design, permitting tests of all the double interactions between the three factors and their triple interaction. The same results are obtained for different measures of central tendency (mean and median) and of dispersion (standard deviation and interquartile range). Red indicates a significant effect. Confidence intervals (CI) are reported, with all values in milliseconds

DOI: 10.1371/journal.pbio.0030037.t003

Taken together with the assumption of our model that only the central stage contributes to response variance, our observations suggest that the numerical distance factor affects the C decision component, while notation and response complexity affect noncentral P or M components.

In addition to tests of additivity, a useful experimental technique for addressing the separable nature of different components and understanding their organization in time is the interference analysis, in which the task of study (the number-comparison task) is performed together with a probe task (the tone task). The delay between the onsets of the two tasks is controlled experimentally, and to achieve a full separation of the three components, the two tasks must be presented in both possible orders (

(A) Scheme of the main PRP effect. The vertical axis represents RT. The column on the left indicates the first task, and each coloured box within the column represents a different stage of processing: P component (dark green), C component (red), and M component (blue). The series of columns on the right indicates the processing time for task 2 at different delays (Δ), labelled on the x-axis. For each column, the three boxes represent the three stages of task 2: P component (green), C component (orange), and M component (cyan). As Δ increases, the P component starts later. All components can be performed in parallel except for the C component, which establishes a bottleneck. This results in the following predictions: (1) the response to the first task is independent of Δ, and (2) RT2 (from onset of the trial), represented by the black line, is unchanged for small Δ, while at sufficiently large Δ (noninterference regime) it increases linearly with Δ, with a slope of one.

(B) The predicted RT1 and RT2 (from trial onset) as a function of Δ is represented by the grey and black lines, respectively.

(C) The model also establishes definite predictions for experiments in which one of the tasks is changed. The six panels indicate all possible manipulations: first task changed (left column) or second task changed (right column), with the change affecting the P component (top row), C component (middle row), or M component (bottom row). The changed component is labelled with a highlighted box and an arrow. For simplicity, we assume that the task manipulation always increases the duration of one component. RTs before the manipulation (which are the same across all panels) are represented with a solid line, grey for RT1 and black for RT2, and the RTs of the manipulated task are represented with a dotted line in the same colour code. If the first task is changed (left column), different effects are observed depending on whether the change is in the M component or in the P or C component (the latter two cannot be distinguished with this manipulation). If the M component is affected (bottom row), RT1 changes, but the response to the second task is unchanged. If the locus of the change is in either the P or the C component (top and middle rows), there is a longer delay until execution of task 2, and the following effect is observed: for small Δ (interference regime), RT2 is increased, and the interference regime is extended, indicated by a rightward shift of the kink. If the second task is changed (right column), different effects are observed depending on whether the change is in the P component or in the C or M component. If the change is in the P component (top row), for small Δ there is no net change in the response to the second task (because there was a waiting period at the end of the P component, so extending it does not change total execution time), but there is less waiting and thus the kink is shifted to the left.
If the change is made in either the C or M component (middle and bottom rows), the result is a rigid shift, independent of Δ. By performing experiments in which the two tasks are presented in different orders, all task components can be differentiated. All task manipulations, according to the PRP model, should fall into one of three categories, perceptual, central, or motor, each defined by its characteristic RT signature.
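These predictions reduce to one expression: RT2 (from trial onset) equals the later of "Δ + P2" and "end of C1", plus C2 + M2. A deterministic sketch with illustrative stage durations (all values ours, chosen only to make the kink visible):

```python
def predicted_rts(delta, P1=100, C1=500, M1=80, P2=100, C2=200, M2=80):
    """Mean-RT predictions of the PRP model (all durations in ms).
    Only the central stages are serial: C2 cannot start before C1 ends."""
    rt1 = P1 + C1 + M1                   # task 1 is unaffected by the delay
    c2_start = max(delta + P2, P1 + C1)  # bottleneck wait, if any
    rt2 = c2_start + C2 + M2             # RT2 measured from trial onset
    return rt1, rt2
```

With these values the kink sits at Δ = P1 + C1 − P2 = 500 ms: below it RT2 is flat (interference regime), above it RT2 grows with Δ at slope one.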

We begin by describing the mean RT results when the number task (for which experimental parameters were varied) was performed first and the tone task came second. The PRP model predicts that each of the manipulated variables (notation, distance, and response complexity) should have a main effect on the first, number-comparison task, but only some of those effects (those affecting the P and C components of the first task) should propagate to RT2, and should do so only at short interstimulus delays (Δ) (

To evaluate these predictions, mean RTs were calculated within each condition and each subject, and submitted to ANOVAs with subjects as a random factor and delay and the variable of interest as within-subject factors. The detailed results of those ANOVAs are reported in

Each column corresponds to a different ANOVA. Each line represents a different effect: task manipulation, delay, and their interaction. Red indicates a significant effect. All 18 data cells follow the predictions of the PRP model

DOI: 10.1371/journal.pbio.0030037.t004

The ANOVAs on number-comparison RTs (RT1) revealed the expected main effects of number notation (74 ms, slower for verbal than for Arabic numbers), numerical distance (91 ms, slower for close digits than for far digits), and response complexity (175 ms, slower for two-tap responses than for one-tap responses). There was no main effect of Δ, and none of the task effects interacted with delay. These results suggest that, as instructed, participants performed the number comparison as task 1 independently of the delay of presentation of the subsequent tone task.

Similar ANOVAs on tone-decision RTs (RT2) revealed a main effect of delay, characteristic of the PRP phenomenon. As shown in

In the left column the number task is performed first and the tone task second. In the right column the tone task is performed first and the number task second. In both cases, the number task is manipulated by the three factors of notation, distance, and response complexity. In all panels the code is identical: RT1 is coloured grey while RT2 is coloured black. The “easy” condition is represented by a solid line and the “difficult” condition by a dotted line. All the data can be explained in terms of the PRP model: notation (top row) affects the P component, distance (middle row) affects C, and response complexity (bottom row) affects M (see also

Each column corresponds to a different ANOVA. Each line represents a different effect: task manipulation, delay, and their interaction. Red indicates a significant effect. All 18 data cells follow the predictions of the PRP model

DOI: 10.1371/journal.pbio.0030037.t006

Crucially, our three experimental factors had differential effects on those two segments of the RT curve. Notation and distance showed both a main effect and an interaction with delay (

As a further test, we analysed the data separately for short delays, within the interference regime (Δ ≤ 350 ms), and for long delays (Δ ≥ 600 ms) (

Effect sizes are shown in milliseconds (in parentheses). Manipulations that differentially affect the short and long delays are responsible for the interactions reported in

DOI: 10.1371/journal.pbio.0030037.t005

The situation was quite different for the response-complexity variable. The ANOVAs revealed neither a main effect of response complexity nor an interaction with delay on RT2 (see

We now describe the mean RT results when the tone task was performed first and the number task (for which experimental parameters were varied) came second. In this case the PRP model predicts that the manipulated variables of notation, distance, and response complexity should have no effect on the first, tone task; in addition, RT2 should exhibit a constant increase (independent of delay) when the change affects the M or C component, and should change only at large delays when the change affects the P component (see

The ANOVAs on the tone-task RTs (first task, RT1) revealed no effect of the task manipulation, as predicted by the PRP model, because the response to task 1 should be independent of the nature of task 2. The ANOVAs on the number-comparison RTs (second task, RT2) again revealed a highly significant nonlinear effect of delay, characteristic of the PRP effect. In addition, for the distance and response-complexity manipulations, we observed a task effect that did not interact with delay (see

These observations were consistent with the

The dependence of RT on delay follows the prediction of the PRP model for all conditions, task manipulations, and task orders. However, we find a small departure from the model when we compare the mean RTs for both tasks when they were presented either first or second at the maximum delay (1,025 ms). In both cases we find that the response is slower when the task is presented first: number task, 756 ms when presented first and 678 ms when presented second; tone task, 720 ms when presented first and 518 ms when presented second. Thus there is a fixed component (independent of delay) of approximately 150 ms, which needs to be added to RT1 to fully explain the data.

The shape of the RT distributions (for correct trials) was analysed for each task when it was presented first. For the number task we analysed six different cases corresponding to the three different manipulations (Digits 1 Tap, Words 1 Tap, and Digits 2 Taps) and two levels of numerical distance: close distances (≤12) and far distances (>12). For each of these distributions, the histograms of RTs and their cumulative distributions were calculated, and the latter were fitted to a simple model of RTs. The model was based on a fixed onset delay, T_{o}, followed by a forced random walk to a threshold; the fixed delay (T_{o}) corresponds to the sum of the P and the M components (see

(A) RT histograms (when the number task was presented first) fitted by a simple random-walk model, separately for far distances (left column) and close distances (right column) and for the three different tasks: Digits 1 Tap (top row), Words 1 Tap (middle row), and Digits 2 Taps (bottom row).

(B) Cumulative plots of the same data. The effect of both notation and response appears to be a shift of the distribution to the right while the distance effect is a change in the slope. Within each panel, we have overlapped the corresponding fit (blue line) and the fit to the easiest condition—Digits 1 Tap, Far Digits (red line)—to make the change between the different distributions apparent.

(C) The two fitted values (fixed delay and integration time) as a function of numerical distance for the three different tasks. The integration time decreases with distance but is independent of the task. In contrast, the fixed delay does not change with distance but changes with the task. The sum of the delay and the integration time fits the mean reaction times for each distance (solid circles).

(D) Statistics performed on the fit reveal that the fixed delay has a slope not significantly different from zero (i.e., it does not depend on distance) but changes with the task. In contrast, the slope of the integration time is significantly different from zero, while the integration time does not change with the task.

The applicability of random-walk models to RT data has been widely studied in numerous tasks [. In our fits, σ was held fixed while α and T_{o} could vary (we verified that none of the results depended qualitatively on the particular choice of σ). The best-fitting values were determined by exhaustive search using a minimum-squares criterion. The value of 1/α characterizes the integration time (which explains all the variance), while T_{o} captures fixed components that do not contribute to the variance. Thus, our purpose was to test the prediction of our model that the notation and response-complexity manipulations should affect the parameter T_{o} while the distance manipulation should affect the parameter α.
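Under these assumptions the central stage is a first-passage problem: a drift-diffusion with drift α and noise σ crossing a threshold θ has an inverse-Gaussian (Wald) first-passage time, shifted by the fixed delay. A sketch of the resulting cumulative distribution (parameter names and default values are ours; an exhaustive grid search, as in the text, would minimize squared error between this CDF and the empirical one):

```python
import math

def shifted_wald_cdf(t, t_o, alpha, sigma=1.0, theta=1.0):
    """CDF of RT = t_o + first-passage time of a drift-diffusion process
    (drift alpha, noise sigma) through threshold theta.
    Mean integration time is theta / alpha (i.e., 1/alpha for theta = 1)."""
    if t <= t_o:
        return 0.0
    s = t - t_o
    mu = theta / alpha              # mean of the inverse-Gaussian
    lam = (theta / sigma) ** 2      # shape parameter
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    a = math.sqrt(lam / s)
    return phi(a * (s / mu - 1.0)) + math.exp(2.0 * lam / mu) * phi(-a * (s / mu + 1.0))
```

The CDF is zero up to t_o (the fixed, variance-free contribution) and rises monotonically to one, with all the spread coming from the integration process, which is the property the fits exploit.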

For a finer-grained analysis, and to test the significance of this phenomenon, we binned the data into 24 bins based on distance from the reference 45 used for numerical comparison. For each bin, we calculated the α and T_{o} that provided the best fit. We found that T_{o} changes from task to task but does not depend on distance. In contrast, 1/α does not change across tasks but changes with numerical distance (. For T_{o}, the value of the slope (for the three tasks) does not differ significantly from zero (, confirming that notation and response complexity affect T_{o} while numerical distance affects 1/α. These results are consistent with the prior analysis, which showed that the response-complexity and notation manipulations did not significantly affect the interquartile range (another measure of dispersion) while the distance manipulation did.

Here we attempt to explain the precise shape of the RT2 distributions by combining, based on the PRP model, the distributions obtained for each task when presented first. If the two tasks were completely sequential, the resulting distribution would simply be the convolution of the two original distributions. However, the PRP model states that only the C component is sequential; thus, because some operations can be done in parallel, the resulting RT2s are shorter than expected from a convolution. The operation performed is not completely trivial and is described step by step in
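Under the bottleneck assumption the combination is a max-then-sum rather than a plain convolution. Working with samples (a sketch; the function and argument names are ours): given draws of the time at which the first task's central stage ends and draws of the second task's integration time, each predicted RT2 is the later of "Δ + P2" and "C1 end", plus C2 + M2, plus an optional rigid shift.

```python
def predict_rt2_samples(c1_end_samples, c2_samples, delta, p2, m2, t_d=0.0):
    """Predicted RT2 samples (from trial onset) under the bottleneck model.
    c1_end_samples: draws of P1 + C1 (when the central stage of task 1 ends);
    c2_samples: draws of the task-2 integration time; t_d: rigid time shift."""
    return [max(delta + p2, c1_end) + c2 + m2 + t_d
            for c1_end, c2 in zip(c1_end_samples, c2_samples)]
```

At large Δ the max is always won by Δ + p2 and the prediction reduces to the ordinary convolution of the task-2 stages; at small Δ the wait for C1 compresses and widens the RT2 distribution.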

For each task (Digits 1 Tap, Words 1 Tap, and Digits 2 Taps) we tried to fit the 20 distributions of RT2 (ten values of the delay for each of the two possible task orders, tone–number or number–tone) from the distributions of RT1, with three free parameters, M1, P2, and t_{d} (a rigid shift in time of all distributions of RT2; see Discussion for the rationale of this parameter). We then found good fits for the ensemble of distributions (. The fitted values of t_{d} were: Digits 1 Tap, 125 ms; Words 1 Tap, 125 ms; and Digits 2 Taps, 75 ms.

Left: Cumulative plots of RTs to the number task when it is presented second (dots) and the predicted distribution based on the PRP model (solid lines). Each curve (coded in different colours) represents one of the ten possible values of Δ.

Right: Same data for RTs to the tone task when it is presented second (dots) and the predicted distribution from the PRP model (solid lines). Each row corresponds to a different task: Digits 1 Tap (first row), Digits 2 Taps (second row), and Words 1 Tap (third row). Each panel was fit with three parameters: M1, P2, and a fixed delay.

Each fit provides the parameters

The PRP fitting allowed us to estimate the values of P2 + M1. Depending on which task is presented first, we can calculate

Finally, the parameters obtained from the interference experiment may be compared to those of the previous fit, which was based on the shape of the RT distributions for the first task and which yielded estimates of 1/α (the integration time) and T_{o} (a fixed delay). As expected from our model, across the different conditions summarized in , T_{o} is always approximately equal to the sum of the durations of the P and M components, while 1/α is equal to the duration of the C component. This provides further evidence that the process of accumulation of evidence does indeed constitute the characteristic bottleneck (the C component) in dual-task experiments.

When the fit method is “RT model”, parameters were obtained by fitting the shape of the RT distribution when the number task is the first task; when the fit method is “PRP fit”, parameters were obtained following the PRP model of interference, from RTs measured when the number task is the second task. T_{o} is the fixed delay and 1/α the integration time. The 1/α row also shows the percentage of the total RT dedicated to the central integration process. The PRP fit estimates the parameters M1, P2, and t_{d}. The comparison of the two methods indicates a good quantitative convergence: when summed, the noncentral P and M components of the PRP model account for the same amount of time as the fixed contribution T_{o} in the RT distribution

DOI: 10.1371/journal.pbio.0030037.t007

We proposed a basic model that relates the organization of parallel and serial components to the process of accumulation of evidence to reach a decision. The model, although simple, yields a wide range of predictions that, as we have shown, hold across a large variety of manipulations. We show that the perceptual transformation of sensory information into an abstract quantity representation can be carried out in parallel with another task and is a low-variability process (whose variability does not increase with the mean); that the accumulation of evidence establishes a bottleneck and is an intrinsically variable process; and that the execution of the response constitutes yet another parallel, low-variability process. Our data suggest that the integration of evidence in time to reach a decision constitutes the only central process in a simple cognitive task that links perceptions to actions.

While dual-task experiments (in which two tasks are presented at variable delays) allow different interpretations, experiments in which one of the two tasks is parametrically manipulated provide a severe test of the PRP model [

Here we have tested, within the number-comparison task, three different manipulations, in the two possible orderings of the sequential tasks, thus providing an exhaustive test of the model. Our finding that all manipulations fall reliably within one of the PRP components provides strong evidence of the generality of this phenomenon. In addition, while it had been shown previously that the distribution of dual-task RTs was wider than that predicted by noninterfering processing of the two tasks [

A striking result, however, is the duration of the C component, which even in a simple task represents about 70% of the total RT. Considering the simplest version of our task (comparison of Arabic numerals, one tap), our results indicate that 180 ms is taken by the sum of P and M components, while a full 550 ms is taken by the C component. Previous event-related potential experiments suggested that it takes approximately 190 ms to identify an Arabic digit and begin to access a quantity representation [

While all the PRP predictions held, the only discrepancy with the model arose from an unexpected slowness of responses to the first task. As predicted, RT1 was independent of the delay. However, the mean RT was larger than found previously when subjects performed only the number-comparison task [

Here, as in other PRP experiments, we have designed the tasks in order to maximally separate the inputs and outputs to the system (different perceptual modalities and different response hands). Under these conditions, as described above, we still find a source of central interference. Moreover we find that the transformation from a word form to an abstract semantic representation does not participate in this central process, nor does the execution of two consecutive and repetitive motor actions. The generality of these findings, however, has obvious bounds. We do not state here that any motor manipulation should result in a change in a parallel component; more complex motor responses, however, might require central supervision and create a bottleneck. Similarly, while we claim that mapping a word form to an abstract number representation can be done in parallel, we do not mean that it would not interfere with any possible stimulus. Finally, under some situations that lead to high automaticity, either through extensive training [

There is a vast literature on the analysis of the shape of RT distributions as a source of knowledge about the human information-processing system, and many different models of these distributions have been proposed [

A second important type of alternative to our model concerns the nature of the central process. Instead of a unique integrator, there might be a network of interacting integrators with lateral connections, which collectively implement the decision-making process and whose interactions create a functional bottleneck [

Finally, for simplicity our model assumed a constant decision threshold. When the probability of a go response, p_{go}, is manipulated, it has been shown that p_{go} affects

While typically even simple tasks result in highly variable distributions of RTs, under some particular circumstances, including extensive practice, very precise (almost invariant) distributions of RTs can be obtained, e.g., in subjects trained to estimate a fixed duration [

While we characterized the different processing stages through behavioural observations, it is an essential issue to relate these findings to brain anatomy and physiology. At the single-task level, the neurophysiological bases of simple perceptual decision making have been widely studied in tactile- [

In humans, a similar indicator of accumulated evidence towards a motor decision is provided by scalp recordings of the lateralized readiness potential (LRP) [

Indeed, functional magnetic resonance imaging studies of the comparison task show that intraparietal and precentral cortices are systematically activated and that their activation correlates with the distance between the objects to be compared [

What happens to those physiological decision processes during dual-task performance? At present, we know of no neurophysiological study and only a handful of human physiological studies of the PRP phenomenon. In event-related potentials, when the C and P components were manipulated, the perceptual manipulation led to a change in the P2 event-related component (generally associated with perceptual processing), while the central manipulation affected the amplitude and the onset of the P3 component [

Altogether, neurophysiological and brain-imaging studies suggest that, beyond an initial perceptual delay of about 200 ms, there begins a process of accumulation of evidence, which involves the joint activation of a distributed network of areas, with partially changing topography as a function of the nature of the task, but with frequent coactivation of parietal and premotor regions. Our results suggest that this accumulation system is responsible for establishing the PRP bottleneck. This bottleneck might occur because the cerebral accumulation system is broadly distributed and largely shared across tasks, and thus must be entirely “mobilized”, at any given moment, by whichever task is currently performed (for a simulation of this process, see [

A total of 42 participants, all right-handed, were involved in this study (24 males). Sixteen participants (aged 25 y ± 5 y) performed the experiment in which the tone task was presented first, and the other 16 (aged 24 y ± 4 y) performed the experiment in which the number-comparison task was presented first. Ten participants (aged 22 y ± 2 y) performed the numeric task with the addition of the Words 2 Taps condition. Participants were all native French speakers and were remunerated for their participation.

Participants were asked to perform two tasks, with the clear instruction that they had to respond to each of them accurately and as fast as possible. The delay between the onsets of the two tasks varied randomly from trial to trial, from 0 ms (simultaneous presentation) to 1,025 ms. Subjects responded to both tasks with key presses, with the right hand for the number-comparison task and with the left hand for the tone task. In the number-comparison task, a number was flashed in the centre of the screen for 150 ms, and subjects had to indicate whether the number was larger or smaller than 45. The presented number ranged between 21 and 69, excluding 45. In different blocks, subjects performed three different versions of the number task. In the first version, the number was presented in Arabic digits and subjects were asked to respond by tapping the corresponding key once (Digits 1 Tap). In the second version, the number was presented as a written word (in French), and subjects were again asked to respond with a single key press (Words 1 Tap; we refer to this as the “notation manipulation”). Finally, in the third version, the number was presented in Arabic digits, but subjects were asked to respond by tapping the corresponding key twice (Digits 2 Taps; we refer to this as the “response-complexity manipulation”). Within each block, both the numerical distance between the target and 45 and the delay between the presentation of the two stimuli varied randomly, and trials were presented with an intertrial interval that fluctuated between 2,600 and 3,000 ms.
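As a concrete illustration, the factorial structure of a block can be sketched in Python (a hypothetical reconstruction, not the original experiment code: the function names and the particular set of candidate delays are ours):

```python
import random

# Sketch of the factorial trial structure described above. The delay values
# below are an assumed sampling of the stated 0-1,025 ms range.
NOTATIONS = ["Digits 1 Tap", "Words 1 Tap", "Digits 2 Taps"]
DELAYS_MS = [0, 125, 250, 425, 625, 1025]          # hypothetical set of SOAs
TARGETS = [n for n in range(21, 70) if n != 45]    # 21-69, excluding 45

def make_block(version, n_trials=40, seed=None):
    """Build one 40-trial block: each trial pairs a comparison target
    (so numerical distance to 45 varies) with a random tone-task delay."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        target = rng.choice(TARGETS)
        trials.append({
            "version": version,
            "target": target,
            "distance": abs(target - 45),   # numerical distance to the standard
            "delay_ms": rng.choice(DELAYS_MS),
        })
    return trials

block = make_block("Digits 1 Tap", seed=1)
assert len(block) == 40 and all(t["target"] != 45 for t in block)
```

Within a block the version is fixed, while distance and delay vary trial by trial, matching the design described above.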

In each block, which lasted almost 2 min, subjects performed 40 trials. Before the beginning of each block, instructions on the screen indicated which version of the number task would be performed in that block. Subjects practiced one block of each version to become familiar with the tasks. After this brief training, they performed a total of 18 blocks (six for each version) in an approximately 45-min session.

Stimuli were shown on a black-and-white display on a 17-in. monitor with a refresh rate of 60 Hz. Subjects sat 1 m from the screen. Stimuli were always presented in the fovea, and their size was 1° for the Arabic digits and 2.5° for the words. Auditory stimuli were pure tones of 150-ms duration and 440- or 880-Hz frequency. Auditory stimulation was provided through headphones.

All the analyses described here were done only on correct responses (which comprised 83% of the trials). Since there were two tasks and each task had two possible responses, chance level for this experiment is 25%. Errors (17%) included errors in either the first or the second task and trials in which subjects failed to respond to one of the tasks, or to both. One subject was discarded from the analysis because the data clearly revealed that he had not performed the task as required: his RT1 systematically followed the onset of the second task by a few hundred milliseconds, indicating that he was waiting for both tasks to be presented before responding and not, as instructed, responding to both tasks as fast as possible. For similar reasons, for all analyses, trials in which the RT to the first task was larger than 1,200 ms (<5% of the trials) were excluded. All the statistics were done using the R software package (
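The exclusion rules just described amount to a simple filter; a minimal sketch (variable and function names are ours, not from the original analysis scripts):

```python
import numpy as np

# Keep only trials where both responses are correct, and drop trials with
# RT1 above the stated 1,200 ms cutoff.
def select_trials(rt1, rt2, correct1, correct2, rt1_cutoff=1200.0):
    rt1, rt2 = np.asarray(rt1, float), np.asarray(rt2, float)
    both_correct = np.asarray(correct1, bool) & np.asarray(correct2, bool)
    keep = both_correct & (rt1 <= rt1_cutoff)
    return rt1[keep], rt2[keep]

rt1 = [480.0, 1500.0, 620.0, 700.0]
rt2 = [900.0, 1700.0, 950.0, 1000.0]
kept1, kept2 = select_trials(rt1, rt2, [1, 1, 0, 1], [1, 1, 1, 1])
# trial 2 (RT1 > 1,200 ms) and trial 3 (error on task 1) are excluded
assert list(kept1) == [480.0, 700.0]
```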

RTs were fitted to a model based on a fixed delay onset (T_{o}) followed by a forced random walk with drift μ and noise σ, accumulating toward a fixed threshold B. The distribution of first-passage times of such a walk is the inverse Gaussian (Wald) distribution:

$$P_R(t) = \frac{B}{\sigma\sqrt{2\pi t^{3}}}\,\exp\!\left(-\frac{(B-\mu t)^{2}}{2\sigma^{2} t}\right)$$

Changing the onset by a fixed delay T_{o} and setting the threshold to one simply shifts the distribution, which then becomes

$$P_R(t) = \frac{1}{\sigma\sqrt{2\pi (t-T_o)^{3}}}\,\exp\!\left(-\frac{\left(1-\mu (t-T_o)\right)^{2}}{2\sigma^{2} (t-T_o)}\right)$$
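As a sanity check on this density, a direct simulation of the forced random walk should reproduce it; the sketch below (notation and parameter values are ours, assuming a unit threshold, drift μ, noise σ, and onset delay T_o) verifies, for instance, that the mean simulated RT approaches T_o + 1/μ:

```python
import math
import random

def shifted_wald_pdf(t, mu, sigma, t_o):
    """First-passage density of a drift-mu, noise-sigma accumulator toward a
    unit threshold, shifted by the non-decision onset delay t_o."""
    s = t - t_o
    if s <= 0:
        return 0.0
    return (1.0 / (sigma * math.sqrt(2.0 * math.pi * s ** 3))
            * math.exp(-(1.0 - mu * s) ** 2 / (2.0 * sigma ** 2 * s)))

def simulate_rt(mu, sigma, t_o, dt=1e-3, rng=random):
    """One RT: wait t_o, then accumulate Gaussian steps until crossing 1."""
    t, x = t_o, 0.0
    while x < 1.0:
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return t

# For a unit threshold, the mean first-passage time of the walk is 1/mu,
# so the mean RT should be close to t_o + 1/mu.
rng = random.Random(0)
rts = [simulate_rt(mu=4.0, sigma=0.5, t_o=0.3, rng=rng) for _ in range(2000)]
mean_rt = sum(rts) / len(rts)
assert abs(mean_rt - (0.3 + 1.0 / 4.0)) < 0.03
```

The walk is "forced" in the sense that there is no lower absorbing bound, so every trial eventually reaches the threshold.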

This is the equation we used to fit the RT distributions. All six distributions resulting from the different experimental manipulations corresponding to (Digits 1 Tap, Digits 2 Taps, Words 1 Tap) × (Distance Far, Distance Close) were fit with a single fixed value of T_{o}, common to all conditions, and with drift and noise parameters (μ, σ) that were allowed to vary across the different experimental conditions. The best parameters were obtained through exhaustive search using a minimum-squares criterion. For each value of T_{o}, the best μ and σ were found for each experimental condition, and the mean square residuals were averaged across all distributions. It was found that the
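The exhaustive-search procedure can be sketched as follows (an illustrative reconstruction, not the original fitting code: the parameter grids, synthetic data, and function names are ours):

```python
import numpy as np

def shifted_wald_pdf(t, mu, sigma, t_o):
    """Vectorized shifted inverse-Gaussian density with unit threshold."""
    s = np.asarray(t, float) - t_o
    out = np.zeros_like(s)
    pos = s > 0
    sp = s[pos]
    out[pos] = (1.0 / (sigma * np.sqrt(2.0 * np.pi * sp ** 3))
                * np.exp(-(1.0 - mu * sp) ** 2 / (2.0 * sigma ** 2 * sp)))
    return out

def fit_shared_onset(t_grid, empirical, mus, sigmas, onsets):
    """For each candidate shared onset T_o, fit (mu, sigma) independently to
    each condition's RT distribution, average the mean square residuals, and
    return the onset with the smallest averaged residual."""
    best = (None, np.inf, None)
    for t_o in onsets:
        residuals, params = [], []
        for dist in empirical:  # one RT distribution per condition
            cond_best = min(
                ((mu, sg, np.mean((shifted_wald_pdf(t_grid, mu, sg, t_o) - dist) ** 2))
                 for mu in mus for sg in sigmas),
                key=lambda c: c[2])
            params.append(cond_best[:2])
            residuals.append(cond_best[2])
        avg = float(np.mean(residuals))
        if avg < best[1]:
            best = (t_o, avg, params)
    return best

# recover known parameters from synthetic "data"
t_grid = np.linspace(0.05, 2.0, 200)
true = [(4.0, 0.5), (2.5, 0.5)]  # (mu, sigma) for two toy conditions
data = [shifted_wald_pdf(t_grid, mu, sg, 0.3) for mu, sg in true]
t_o, err, params = fit_shared_onset(t_grid, data, mus=[2.5, 3.0, 4.0],
                                    sigmas=[0.4, 0.5], onsets=[0.2, 0.3, 0.4])
assert t_o == 0.3 and params == [(4.0, 0.5), (2.5, 0.5)]
```

The key design point, mirrored from the text, is that T_o is shared across conditions while μ and σ are free per condition.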

Here we describe how RTs for task 2 can be predicted from the distributions of RTs obtained for each task when it is presented first. Because of the presence of the PRP wait (which depends on the response time to the first task), this operation is not strictly a convolution. Since the method is not trivial and, to our knowledge, has not been described elsewhere, we describe it step by step:

In a serial sequence of two processes (in which one needs to be finished before the next one starts), each with a probability distribution of RTs given respectively by R_{1} and R_{2}, the probability of performing the sequence at time t is

$$R(t) = \int_{0}^{t} R_1(s)\,R_2(t-s)\,ds$$

This formula is simply the convolution of the two original distributions.
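On a discretized time grid this convolution can be computed directly; a minimal sketch with toy densities:

```python
import numpy as np

# On a grid with step dt, the serial-composition formula is a plain discrete
# convolution: R(t) = sum_s R1(s) * R2(t - s) * dt.
def serial_rt(r1, r2, dt):
    """Distribution of total RT for two strictly serial processes."""
    return np.convolve(r1, r2)[: len(r1)] * dt

dt = 0.01
t = np.arange(0, 3, dt)
# two toy RT densities: uniform on [0.2, 0.4) s and on [0.3, 0.5) s
r1 = ((t >= 0.2) & (t < 0.4)).astype(float) / 0.2
r2 = ((t >= 0.3) & (t < 0.5)).astype(float) / 0.2
r = serial_rt(r1, r2, dt)
# the result is a proper density whose mean is mean1 + mean2 = 0.7 s
assert abs(np.sum(r) * dt - 1.0) < 1e-6
assert abs(np.sum(t * r) * dt - 0.7) < 0.02
```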

In a PRP experiment, however, the execution of the two tasks is not serial, since there are both serial (central) and parallel (noncentral) components. The first difference is that task 2 waits not for the complete execution of task 1 but rather for the completion of the P and C components of task 1 (see above); the distribution of bottleneck-release times is therefore the distribution of RT1 shifted by the motor time, R_1^{*}(t) = R_1(t + M1). The second modification, because of the nature of the PRP experiment, is that task 2 obviously cannot start until it is presented, and thus the onset time is actually given by R_1^{**}(t), obtained by collapsing all the mass of R_1^{*}(t) up to Δ (which results in a spike at Δ) followed by the tail of R_1^{*}(t) beyond Δ.
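The two transformations of task 1's RT distribution (the shift by the motor time M1, and the collapse of the early mass into a spike at Δ) can be sketched on a discrete grid (toy densities and parameter values are ours):

```python
import numpy as np

def release_distribution(r1, dt, m1):
    """Shift RT1's density earlier by M1: the bottleneck is released at
    RT1 - M1, so the release-time density is R1(t + M1)."""
    k = int(round(m1 / dt))
    return np.concatenate([r1[k:], np.zeros(k)])

def censor_at_delta(r1_star, dt, delta):
    """Collapse all mass below Delta into a spike at Delta; keep the tail."""
    k = int(round(delta / dt))
    out = np.zeros_like(r1_star)
    out[k] = np.sum(r1_star[:k]) + r1_star[k]  # spike absorbs the early mass
    out[k + 1:] = r1_star[k + 1:]
    return out

dt = 0.01
t = np.arange(0, 2, dt)
r1 = ((t >= 0.5) & (t < 0.7)).astype(float) / 0.2   # toy RT1 density
r1_star = release_distribution(r1, dt, m1=0.1)      # support moves to [0.4, 0.6)
r1_cens = censor_at_delta(r1_star, dt, delta=0.5)   # early half spikes at 0.5
assert abs(np.sum(r1_cens) * dt - 1.0) < 1e-6       # mass is conserved
assert r1_cens[int(round(0.5 / dt))] * dt > 0.4     # ~55% of mass at the spike
```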

The last consideration has to do with the time it takes to respond to task 2. If Δ is sufficiently large (in the independent regime), the probability of executing task 2 at time t is simply given by its own distribution, R_2(t). Otherwise, the central component of task 2 cannot start until both its perceptual component (which finishes at Δ + P2) and the central processing of task 1 (which releases the bottleneck at time t_1) are finished (see above). Thus the probability of responding to task 2 at time t_2, given that task 1 has released the bottleneck at time t_1, is R_2(t_2 − w), where w = max[0, t_1 − (Δ + P2)] is the postponement of the central stage of task 2.
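This composition can be checked with a Monte Carlo sketch, under the assumption that bottleneck-release times and unhindered task 2 RTs can be sampled independently (the sampling distributions below are illustrative, not fitted):

```python
import random

# Task 2's central stage waits for both its own perceptual stage (done at
# Delta + P2) and the release of the task 1 bottleneck (at t1), so its RT is
# postponed by max(0, t1 - Delta - P2).
def sample_mean_rt2(delta, p2, draw_release, draw_rt2, n=20000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t1 = draw_release(rng)                  # task 1 bottleneck release
        wait = max(0.0, t1 - delta - p2)        # postponement of task 2
        total += draw_rt2(rng) + wait
    return total / n

release = lambda rng: rng.uniform(0.4, 0.6)     # toy release-time distribution
rt2 = lambda rng: rng.uniform(0.5, 0.7)         # toy unhindered RT2 distribution
# long Delta: no postponement; short Delta: mean RT2 grows markedly
far = sample_mean_rt2(delta=1.0, p2=0.1, draw_release=release, draw_rt2=rt2)
near = sample_mean_rt2(delta=0.0, p2=0.1, draw_release=release, draw_rt2=rt2)
assert abs(far - 0.6) < 0.005   # independent regime: mean of RT2 alone
assert near > far + 0.3         # interference regime: strong PRP slowing
```

This reproduces the qualitative PRP signature: RT2 (measured from task 2 onset) is unchanged at long Δ and increases as Δ shrinks.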

The final formula (adapting the serial formula above) is thus obtained by integrating over all possible bottleneck-release times of task 1:

$$R(t_2) = \int R_1^{**}(t_1)\, R_2\!\left(t_2 - \max[0,\; t_1 - (\Delta + P2)]\right)\, dt_1$$

Since all these transformations depend on Δ, M1, and P2, this prediction is parametric. The data were fit by exhaustive search according to a mean-squares criterion. We fitted all the data simultaneously (for each task and for all the different values of Δ) to obtain the values of M1 and P2. As described in Results, this model was not sufficient to fit the data (note that we are simultaneously fitting a family of 30 curves), so we included a third fixed delay parameter (T_{d}) in the fit. With the inclusion of the parameter T_{d}, the errors, measured as the mean square residual (i.e., the mean of the squares of the differences between the data and the fit across all the points of the ten distributions corresponding to all possible delays), were consistently below 0.015 (20 times smaller than could be obtained without this parameter), and we observed a parabolic type of distribution, with a clear minimum (reported in

We thank Sarah Addleman, Sarah Kouhou, and Jerome Sackur for helping us in data acquisition, and Christophe Pallier for useful suggestions on the statistical procedures. MS was supported by a Human Frontiers Science Program fellowship, and SD by a centennial fellowship of the McDonnell Foundation.

Δ, interstimulus delay

C, central component

LRP, lateralized readiness potential

M, motor component

M1, motor component of the first task

P, perceptual component

P2, perceptual component of the second task

PRP, psychological refractory period

RT, response time

RT1, response time for the first task

RT2, response time for the second task