
Fig 1.

Two different sampling assumptions for language learning.

(a) Under the weak sampling assumption, the learner infers a mapping from sentence constructions (C1, C2, etc.) to grammaticality labels without making assumptions about how the sentences are generated. (b) Under the strong sampling assumption, sentences are assumed to be generated from the distribution that the learner seeks to estimate.
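The contrast between the two assumptions can be illustrated with a minimal Bayesian sketch (hypothetical code, not the paper's actual model; the hypothesis names and uniform prior are assumptions for illustration). Under strong sampling, each observed sentence is treated as a draw from the set of grammatical sentences, so the likelihood follows the size principle (1/|h| per example) and the absence of V4 in C2 becomes evidence for the smaller, exception-bearing hypothesis. Under weak sampling, any hypothesis consistent with the observed labels is equally likely, so the absent construction carries no such evidence.

```python
from itertools import product

verbs = ["V1", "V2", "V3", "V4"]
constructions = ["C1", "C2"]

# Hypothetical hypotheses: sets of grammatical (verb, construction) pairs.
h_full = set(product(verbs, constructions))      # every pairing is grammatical
h_exception = h_full - {("V4", "C2")}            # V4 is ungrammatical in C2

# Training data: every pairing except the absent construction.
data = sorted(h_exception)

def posterior(hypotheses, data, strong_sampling):
    """Normalized posterior over hypotheses, assuming a uniform prior."""
    scores = {}
    for name, h in hypotheses.items():
        if not all(d in h for d in data):
            scores[name] = 0.0                   # hypothesis cannot generate the data
        elif strong_sampling:
            # Size principle: each example has likelihood 1/|h|.
            scores[name] = (1.0 / len(h)) ** len(data)
        else:
            # Weak sampling: flat likelihood for any consistent hypothesis.
            scores[name] = 1.0
    z = sum(scores.values())
    return {k: v / z for k, v in scores.items()}

hyps = {"full": h_full, "exception": h_exception}
strong = posterior(hyps, data, strong_sampling=True)   # favors "exception"
weak = posterior(hyps, data, strong_sampling=False)    # leaves hypotheses tied
```

With seven observed pairings, strong sampling multiplies a (8/7)-per-example advantage for the smaller hypothesis, so the exception hypothesis dominates; weak sampling leaves the two hypotheses at their prior odds.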


Table 1.

Artificial language used in initial simulations and Experiment 1.


Fig 2.

Model predictions.

Model predictions for grammaticality judgments under strong sampling and weak sampling assumptions. The exception verb V4 is never shown in C2.


Fig 3.

Presentation of linguistic input in Experiment 1.

The strong sampling condition presented (a) positive examples generated by a speaker of the language and (b) negative examples generated by a non-speaker. The weak sampling condition presented (c) positive and (d) negative examples as feedback to a prediction about grammaticality. Note that because verb-action pairings were randomized between subjects, the same verb does not correspond to the same actions in the different conditions.


Fig 4.

Results of Experiment 1.

In this language, the absent construction was verb V4 in sentence structure C2. (a) Grammaticality judgments, showing the proportion of times each sentence was judged grammatical for each of the four verbs (V1-V4) in the artificial language, averaged over all judgments for each sentence and over all participants. The black and white bars indicate the strong sampling and weak sampling conditions, respectively. The horizontal axis shows the different sentence constructions (i.e., particular verb orders). The results suggest that participants in both conditions learned much of the grammatical structure. Moreover, participants in the weak sampling condition rated the exception construction, V4 in C2, as significantly more grammatical than participants in the strong sampling condition did, as our models predict. (b) Production results, showing the proportion of productions made in each sentence structure for each verb; X denotes productions that did not match any of the sentence structures. Again, results are averaged over all responses for each sentence and over all participants.


Table 2.

Artificial language used in simulations and Experiment 2.


Fig 5.

Results of Experiment 2.

In this language, the absent construction was verb V5 in sentence structure C2. (a) Grammaticality judgments, showing the proportion of times each sentence was judged grammatical for each of the five verbs (V1-V5) in the artificial language. As in Experiment 1, participants in the weak sampling condition rated the exception construction, V5 in C2, as significantly more grammatical than participants in the strong sampling condition did, as our models predict. (b) Human production results, showing the proportion of productions made in each sentence structure for each verb; X denotes productions that did not match any of the sentence structures.


Table 3.

Artificial language used in simulations and Experiment 3.


Fig 6.

Results of Experiment 3.

This language involved learning rules governing modifier contraction. All modifiers could appear in both positions, but only M1 was shown to be grammatical when contracted in both positions; M2 and M3 were grammatical only when contracted in one position. Thus, in the weak sampling condition, M2 and M3 were shown to be ungrammatical when contracted in positions P1 and P2, respectively. The exception modifier-construction was M4 in P2; that is, M4 was never shown contracted in position P2 during training, for the models as well as the human participants. (a) Strong sampling and weak sampling model predictions for grammaticality judgments for contraction of each modifier, M1-M4. The vertical axis shows predicted grammaticality; the horizontal axis shows the two positions, P1 and P2, in which contraction could occur. (b) Human grammaticality judgments, showing the proportion of times each sentence was judged grammatical. (c) Human sentence-completion results, showing the proportion of times contraction was chosen over no contraction for each modifier in each position.
