
Communicative competence assessment for learning: The effect of the application of a model on teachers in Spain

  • Juan Jesús Torres-Gordillo,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    juanj@us.es

    Affiliation Department of Educational Research Methods and Diagnostics, Educational Sciences Faculty, University of Seville, Seville, Spain

  • Fernando Guzmán-Simón,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Supervision, Validation, Writing – original draft, Writing – review & editing

    Affiliation Department of Language and Literature Teaching, Educational Sciences Faculty, University of Seville, Seville, Spain

  • Beatriz García-Ortiz

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Department of Educational Research Methods and Diagnostics, Educational Sciences Faculty, University of Seville, Seville, Spain

Abstract

The evolution of the results of the Progress in International Reading Literacy Study (PIRLS) in 2006, 2011 and 2016, as well as the difficulties teachers have found in implementing the core competences, has led to the need to reflect on new assessment models. The objective of our research was to design a communicative competence assessment model and verify its effect on primary education teachers. The method applied was a focus group study. Participants came from four primary education schools in the province of Seville (Spain). The data were gathered through discussion groups, and the COREQ checklist was followed. Qualitative thematic analysis of the data was carried out using Atlas-ti, and an inductive coding scheme was established. The results enabled the construction of a communicative competence assessment model and its application in primary education classrooms through HERACLES, a custom-designed computer application. The effects of the assessment model and the software differed according to teachers' profiles. On the one hand, teachers open to educational innovation remained positive when facing the systematic and thorough assessment model. On the other hand, teachers less receptive to change considered the model complex and difficult to apply in the classroom. In conclusion, HERACLES had a beneficial effect on communicative competence assessment throughout the curriculum and made teachers aware of the different dimensions of communicative competence (speaking, listening, reading and writing) and discourse levels (genre, macrostructure and microstructure).

1 Introduction

Assessments carried out by the International Association for the Evaluation of Educational Achievement (IEA) in Spain have provided new evidence for the effects of the educational improvement measures applied in primary education in the last two decades. In particular, the assessments performed in the Progress in International Reading Literacy Study (PIRLS) in 2006, 2011 and 2016 have shown that competence in communication in Spanish primary education has not progressed at the rate of other European countries [1–3].

The Spanish government and different regional authorities have implemented diverse improvement plans, focused on modifications of the official curriculum and on educational legislation, to reverse this situation [4–6]. However, the results have not been as expected in the area of communicative competence. Today, we are familiar with numerous definitions of communicative competence [7–13]. The publication in 2001 of the Common European Framework of Reference for Languages [14] has enabled us to describe the skills required for communication and their levels of achievement related to reading, writing, listening and speaking.

Moreover, the development of communicative competence in the educational curriculum must be related to ‘accountability’ within teaching programmes, which leads us to delve deeper into the link between the school curriculum, based on key competences, and its assessment. Training in the assessment of competences in general, and of communicative competence in particular, presents numerous deficiencies in the initial and continuing training of primary education teachers. Similarly, the difficulty of adapting the theoretical concept of communicative competence to classroom assessment has led numerous authors to analyse the need to incorporate linguistic, cultural and social elements into the current educational context [15, 16]. Consequently, our paper focuses on the design and evaluation of a model for the assessment of communicative competence based on the Spanish curriculum through the use of a custom-designed computer application.

1.1 Assessment for learning

The design of an assessment model of communicative competence in the school context requires prior reflection on, first, the assessment model itself and, second, the assessment of communicative competence. Our research started with a reflection on which assessment model for learning was the most appropriate for incorporating communicative competence assessment into the primary education classroom. Assessment for learning is understood as assessment that fosters students’ learning [17–19]. Wiliam and Thompson [20] have developed five key strategies that enable this process to become an educational assessment:

  1. Clarify and share the learning intentions and criteria for success.
  2. Conduct effective discussions in the classroom and other learning tasks, which provide us with evidence of students’ comprehension.
  3. Provide feedback, which allows students to progress.
  4. Activate the students themselves as didactic resources for their peers.
  5. Foster the students as masters of their own learning.

The assessment that truly supports learning has two characteristics [21]: the feedback generated must provide information about learning activities for the improvement of performance, and the student must participate in actions for the improvement of learning based on heteroassessment, peer assessment and self-assessment.

Assessment for learning must set out by gathering information, which enables teachers and learners to be able to use it for feedback; that is, the result of the assessment must be information that both the teacher and the student can interpret for the improvement of the task. Wiliam [21] proposes an assessment that is incorporated into classroom programming and the information of which is relevant for the improvement of the teaching-learning process. Decision making for the improvement of the task must be based on the information that the assessment indicators contribute to the learning process. In conclusion, the effort that the school makes to emphasise the learning assessment is justified for the following reasons:

  1. Assessment must not be limited to marking (summative assessment); rather, it has to do with helping students learn [22].
  2. Assessment is a key element in effective teaching, as it measures the results of learning addressed in the teaching-learning process [21].
  3. Feedback plays a fundamental role and requires the information received to be used by students to improve their learning [23].
  4. Instead of being content with solving obstacles in students’ learning, teachers must offer opportunities from the assessment to develop learning strategies [24].

1.2 Communicative competence in the European educational framework

The theoretical construct on which our research is based has different sources. Since the 1960s, communicative competence has been approached in different ways [25]: from Chomsky’s cognitive focus [26], followed by Hymes’ social approximation [11, 27], to Wiemann’s relational competence [28] and the language-development approaches of Bloom and Lahey [29] and Bryan [30]. Communicative competence in our research is founded on the works of Bachman [7], Canale [31] and Hymes [11] and on the Common European Framework of Reference for Languages: Learning, Teaching, Assessment (CEFR) [14].

The first allusion to the concept of ‘communicative competence’ came from Hymes [11]. He defined it as a competence that draws on specific knowledge of a language’s structure; usually, speakers are not aware of having such knowledge, nor do they know spontaneously how it was acquired. However, the development of communication requires the presence of communicative competence between speakers [25].

Consequently, communicative competence is not only linked with formal aspects imposed by the structure of the language itself (grammar) but also acquires meaning according to the sociocultural context in which it is developed. The incorporation of these sociocultural elements of communication became the pillars of the models developed by Canale [31] and Bachman [7], which is the framework that the CEFR has adopted. In turn, educational legislation in Spain has also carried out its particular adaptation to the national and regional context through the state regulation [5] and the regional law [32]. The particularity of this adaptation of communicative competence in primary education has brought about a certain confusion in the Spanish educational panorama. Nevertheless, the diverse conceptualisations of the communicative competence construct, found in the contributions of Canale [31] and Bachman [7] and in the different Spanish legislations (national and regional), maintain the same basic scheme of communicative competences.

Table 1 shows the correspondences between the different competences in the theoretical proposals of Canale [31] and Bachman [7] and in the state [33] and regional [34] legislation in Spain. A careful reading of this table highlights how the concept of communicative competence is not affected by the diverse terms used for its designation. Different authors and the legal texts propose the same parameters but present different degrees of specification and depth. The fundamental differences between the theoretical constructs of Canale [31] and Bachman [7] and the state [33] and regional [34] legislation lie in the creation of new competences: ‘personal competence’ (made up of three dimensions regarding communicative competence: attitude, motivation and individual differences) in the former and ‘literary competence’ (referring to the reading area, the capacity to enjoy literary texts, etc.) in the latter.

Table 1. Comparative table of the communicative competence components.

https://doi.org/10.1371/journal.pone.0233613.t001

1.3 Communicative competence assessment

Communicative competence assessment must be considered within the process of communicative language teaching (CLT). The axis of this teaching model is ‘communicative competence’ [35]. This perspective aligns with Halliday’s systemic functional linguistics [36] and its definitions of the contexts of culture and situation [37]. Savignon’s CLT model [38] expands the previous research of Canale and Swain [8] and Canale [31] and adapts communicative competence to a school model (or framework of a competence-based curriculum). This model develops communicative competence with regard to ‘context’ and stresses communication’s functional character and its interdependence with the context in which it is developed. The communicative competence learning process in primary education is related to the implementation of programmes that foster the participation of students in a specific communicative context and the regulation of the distinct competences to the social context of the classroom where learning takes place.

Communicative competence assessment in our research expands upon Lave and Wenger’s notion of ‘community of practice’ [39], the ‘theories of genre’, which underline the use of language in a specific social context [36, 40], and the ‘theory of the socialisation of language’ [41, 42]. These notions are integrated into the acts of communication [11, 38, 43] and give rise to diverse communicative competences, which are disaggregated to be assessed.

The changes introduced into the curriculum (with the inclusion of key competences) and into theories of learning (with cognitive and constructivist conceptions) have forced a rethinking of assessment [44]. From this perspective, a new evaluation of communicative competence has been constructed around the improvement of learning processes rather than around technical measurement requirements [45]. Assessment based on competences, or assessment as inquiry, has become an excellent model for solving the problem of communicative competence assessment.

Moreover, the modalities of heteroassessment and self-assessment [25] enhance the impact of assessment on children’s cognitive development. Basically, there are three factors that influence communicative competence assessment: (a) the culture and context of observation (the culture of the observers is different and makes use of distinct criteria), (b) standards (they cannot be applied to all the individuals of the same community) and (c) conflicts of observation (the valuations of the observations can apply the assessment criteria with a different measurement). Furthermore, Canale and Swain [8] previously underlined the differences between the assessment of the metadiscursive knowledge of competence and the capacity to demonstrate correct use in a real communicative situation. In their reflections, they proposed the need to develop new assessment formats and criteria, which must be centred on communicative skills and their relation between verbal and non-verbal elements.

The perspective adopted in this article approaches communicative competence assessment from the analysis of Halliday’s systemic functional linguistics [36] and its adaptation in the School of Sydney’s pedagogy of genres developed by Rose and Martin [46]. The School of Sydney’s proposal takes as its starting point the development of an awareness of genre in the speaker or writer [47]. Similarly, the discourse’s adaptation to the social context at which it is aimed (situational and cultural contexts) has to be taken into account.

In summary, communicative competence assessment sets out from the tools supplied by discourse analysis [48], taking up elements of diverse discursive traditions, such as pragmatics, conversation analysis and the grammar of discourse (for more information, see [49, 50]). These tools respond to the levels of genre, register and language (textual macrostructure and microstructure) [51, 52].

1.4 Aims

Setting out from these suppositions, this paper addresses the following aims:

  1. To design a communicative competence assessment model based on the Spanish primary education curriculum.
  2. To check the effect of the communicative competence assessment model on primary education teachers using a computer application.

2 Method

The research design is based on the focus group technique applied to the study of a single reality, developed through four focus groups. Each group represents a school with different characteristics and profiles (see Table 2), enabling a multi-perspective approach in which the schools contribute different opinions and experiences. The COREQ checklist was followed. All the participants were informed of the nature and aim of the research, thus conforming to the rules of informed consent, and signed written consent forms [dx.doi.org/10.17504/protocols.io.bd8ei9te]. In addition, this research was approved by, and adhered to the standards of, the Social Sciences Ethical Committee of Experimentation of the University of Seville.

2.1 Participants

Twenty teachers of the second, fourth and sixth years, belonging to four primary education schools in the province of Seville, took part in this study. Before giving consent, participants were informed of the objectives of the research project and the profiles of the researchers, and they agreed to collaborate voluntarily in the project. Participants were purposively selected, in person, to ensure diversity of school type; in this way, participants were obtained from public, private and charter schools. Two of the initially contacted schools declined to participate, owing to technical problems with Internet connectivity in the school and the staff’s lack of time to attend the training in communicative competence assessment. Participating teachers undertook a training course on communicative competence assessment. The course was delivered in blended-learning (b-learning) mode using the Moodle e-learning platform. During the training, teachers learned how to use a computer application to assess communicative competence using tablets. This custom-designed tool is called the ‘tool for the assessment of linguistic communication competence’ (hereafter, HERACLES). Teachers then had the opportunity to implement what they had learned in their classes during one term. The tool was applied with 368 students in the experimental group and 285 in the group without the application (see Table 2).

After the application of the tool, the teachers were invited to participate in discussion groups to discuss the results of the experience and the effect that HERACLES had had on their training. The focus groups were conducted in the teachers’ workplaces by the three PhD authors of this paper, one female senior lecturer and two male senior lecturers from the universities of [authors] and experts in educational research. In two of the four schools, members of the management team also attended the focus groups, in addition to the participating teachers. The discussion groups were audio-recorded and took place in the schools between June and September 2017.

2.2 Instruments

The analysis of the audio recordings of the discussion groups and of the field notes taken generated a system of inductive categories (see Table 3). This category system was compiled from the information provided by teachers in the discussion groups and was structured through a thematic frame based on the teaching staff’s experience of using a computer application to assess competence in communication in the classroom. The indicators focused on the ease of use of the computer tool, its usefulness in classroom evaluation and teachers’ assessment of the tool itself. The coding of the discussion group transcripts was performed by the three authors of the current paper. This system was applied both in the codification phase and in the later analysis of relations with Atlas-ti. The focus group script was designed by the team of authors of this paper and was evaluated by six experts in educational research, whose analysis addressed the understandability of the interview questions and the questions’ pertinence to the purpose of the research. Each focus group lasted approximately two hours. Transcriptions of the recordings were sent to the schools for review; the participants did not make any corrections to the content of the transcripts.

Table 3. System of inductive categories for communicative competence assessment through a computer application.

https://doi.org/10.1371/journal.pone.0233613.t003

2.3 Data analysis

The first aim was accomplished through a comparative analysis of the communicative competence’s main components gathered in the models of Canale [31] and Bachman [7] and their relations with both national legislation [33] and regional legislation [32]. This analysis was the basis of the development of a communicative competence assessment model.

The second aim was approached through a qualitative thematic analysis [53, 54] of the discussion groups. The data analysis of the discussion groups’ recordings was carried out with Atlas-ti version 6.2. In the operationalisation phase [55], the system of inductive categories [56] was elaborated after listening to all the recordings. The codification of each discussion group was then performed by three researchers, and the coefficient of agreement between coders was calculated via Fleiss’ kappa [57, 58].

The Fleiss’ kappa calculation yielded a value of K = 0.91 (see Table 4), which can be described as excellent interjudge concordance [57]. The disagreements between coders stemmed from their interpretations of how to apply the categories to the transcriptions, a result of the inductive process by which the category system was created. These disagreements were resolved through an iterative process of review and clarification of the indicators of the category scheme. After the categorisation of the focus group transcriptions, the three authors of this paper carried out a synthesis and summary of the data. The final report with the results of the research was sent to the different schools for review and feedback.
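For readers unfamiliar with the statistic, Fleiss’ kappa compares the observed agreement among a fixed number of coders with the agreement expected by chance. A minimal sketch of the calculation in Python follows; the rating counts are invented for illustration and are not the study’s data:

```python
def fleiss_kappa(table):
    """Fleiss' kappa for a table where table[i][j] is the number of
    raters who assigned item (text segment) i to category j."""
    n_items = len(table)
    n_raters = sum(table[0])  # same number of raters for every item
    n_cats = len(table[0])
    # Proportion of all assignments falling in each category
    totals = [sum(row[j] for row in table) for j in range(n_cats)]
    p_j = [t / (n_items * n_raters) for t in totals]
    # Per-item observed agreement
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in table
    ]
    p_bar = sum(p_i) / n_items          # mean observed agreement
    p_e = sum(p * p for p in p_j)       # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: three coders, three categories, five segments.
ratings = [
    [3, 0, 0],
    [3, 0, 0],
    [0, 3, 0],
    [0, 0, 3],
    [2, 1, 0],  # one segment with partial disagreement
]
print(round(fleiss_kappa(ratings), 2))  # 0.78
```

By the convention often attributed to Fleiss, values above 0.75 indicate excellent agreement, which is the band in which the K = 0.91 obtained in this study falls.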

Finally, we used different analyses of associations and semantic networks [59]. In the search for relations between the codes, we relied on the Atlas-ti Query Tool; we similarly used the Network tool to produce the graphic representation of these associations.

3 Results

3.1 A new communicative competence assessment model

The communicative competence assessment model proposed by Bachman [7] established a clear trend to measure competence as an interpersonal communication product. The elements that it proposes are based on an assessment of both the analysis of the environment of the assessment tasks (environment and type of test) and the indicators that differentiate diverse degrees of achievement of communicative competence in primary education (format, nature of the language, facet of response expected and relation between the input and output information).

The assessment model elaborated (see Table 5) presents the assessment indicators in general terms. These indicators must, however, be adapted to each of the tasks and genres evaluated in the classroom. The assessment tool was based on the application of distinct elements of discourse analysis and on the selection and transformation of these elements into assessment indicators in the different dimensions. Table 5 presents examples of the assessment indicators related to the following aspects:

  1. the levels of discourse (genre, macrostructure and microstructure);
  2. the four communicative competence dimensions (speaking, listening, reading and writing); and
  3. the classification of each indicator according to its belonging to various competences (textual, discursive, sociocultural, pragmatic, strategic or semiological).
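As an illustration of how such a three-way classification can be operationalised in software, the sketch below represents an indicator as a record combining a dimension, a discourse level and a set of competences. Only the taxonomy is taken from the model above; the example indicator itself is hypothetical and is not one of the actual Table 5 items:

```python
from dataclasses import dataclass

# Taxonomy taken from the assessment model described above
DIMENSIONS = {"speaking", "listening", "reading", "writing"}
DISCOURSE_LEVELS = {"genre", "macrostructure", "microstructure"}
COMPETENCES = {"textual", "discursive", "sociocultural",
               "pragmatic", "strategic", "semiological"}

@dataclass(frozen=True)
class Indicator:
    description: str
    dimension: str          # one of DIMENSIONS
    level: str              # one of DISCOURSE_LEVELS
    competences: frozenset  # subset of COMPETENCES

    def __post_init__(self):
        # Validate the classification against the model's taxonomy
        assert self.dimension in DIMENSIONS
        assert self.level in DISCOURSE_LEVELS
        assert self.competences <= COMPETENCES

# Hypothetical example: an oral-production indicator at the genre level
example = Indicator(
    description="Adapts the opening of an oral presentation to its audience",
    dimension="speaking",
    level="genre",
    competences=frozenset({"discursive", "sociocultural"}),
)
```

A record of this shape makes it straightforward for an application such as HERACLES to filter indicators by dimension or discourse level when a teacher selects a task to assess.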

The assessment of all these indicators in a school context made the development of the HERACLES computer application for tablets necessary. With this assessment tool (see Figs 1 and 2), it is possible to address not only the broad diversity of assessment indicators but also the heterogeneity of the students themselves, considering their individual variables.

Fig 1. HERACLES’ upper menu.

Reprinted from the COMPLICE project under a CC-BY license.

https://doi.org/10.1371/journal.pone.0233613.g001

Fig 2. HERACLES’ assessment area.

Reprinted from the COMPLICE project under a CC-BY license.

https://doi.org/10.1371/journal.pone.0233613.g002

This application enables a learning assessment to be carried out, providing information on students’ communicative competence teaching-learning process over a prolonged period of time. The process assessment can be performed through diverse techniques, such as observation, thinking aloud or stimulated-recall interviews. Similarly, HERACLES can relate the assessment of the process with that of the product through its tools for analysing oral and written discourse. It was designed to facilitate the daily follow-up of students’ work, streamline the recording of students’ communicative competence development, gather information on the teaching-learning process and facilitate decision making in the programming of communicative-competence-related tasks. With this tool, the communicative competence learning assessment process is systematised, and the assessment of tasks can be carried out efficiently and without excessive resource costs in the teaching work [60].

3.2 Effects of the use of the computer application of the communicative competence assessment

The second aim of this research has been addressed from the perspective of the qualitative thematic analysis of the discussion groups. The study of the effect is divided into two perspectives: the positive effects regarding the applicability of HERACLES and teachers’ methodological changes and the negative effects of its use. The positive effects have been characterised through causal relations (‘cause-effect’) or associative relations (‘related to’) (see Fig 3). The analyses performed have not shown any significant differences between the cases studied. Consequently, in this section, the different cases have not been described separately.

Fig 3. Graphic representation of the relations between codes about the use of the communicative competence assessment computer application carried out in Atlas-ti.

https://doi.org/10.1371/journal.pone.0233613.g003

The positive effects are organised into three groups of relations. The first, composed of the causal relation between the applicability of daily use and methodological changes, concerns the changes detected in methodology when HERACLES was used with the tablets. In particular, the application of the communicative competence assessment criteria enabled the improvement of the teaching-learning process in the schools analysed (‘the criteria of assessment (…) have helped me to focus on teaching’ [GD 1]). The communicative competence assessment led some teachers to modify the assessment process, incorporating feedback (‘Yes, there are things I have proposed changing in the assessment: different forms of feedback with the students in the oral expositions and in the reading’ [GD 2]) and a process based on assessment for learning and adapted to the context of the classroom (‘Everything that is the theme of oral exposition and everything written (summaries) is something that I have had to introduce changes in to spotlight the assessment of the competence’ [GD 4]).

The second consists of the associative relation between the methodological change and the incorporation of assessment indicators into teachers’ daily activity. This has allowed for the evaluation of communicative competence dimensions that were not previously assessed in the classroom (‘I have used the tablet (…) when the children were speaking: if they gesticulated, if they stared, or if they used the appropriate vocabulary’ [GD 1]). In particular, the assessment of oral communication developed thanks to the simple use of the tablet as an assessment instrument during the teaching-learning process (‘Not a specific activity or day, but rather, it depends on the tasks of each subject’ [GD 1]). Moreover, the ease of assessing communicative competence in very disparate circumstances within the school day permits this assessment to be extended to different areas of the curriculum (‘It was not specifically in the language class but in the classes in which they carried out a task or an activity’ [GD 1]). Finally, the use of indicators has generated among teachers the perception of a more ‘objective’ assessment in the classroom (‘Assessment is an attitude, and it is very subjective. (…) The tool helps me to be more objective’ [GD 4]).

A third associative relation is established between maintaining a positive opinion about the use of HERACLES to assess communicative competence and the daily use of the tablet as an assessment instrument. Teachers perceived assessment with tablets as simple and intuitive (‘It seemed to me quite simple and intuitive’ [GD 1]). The use of the assessment tools and their indicators led to their being conceived as easy and practical for communicative competence assessment (‘It has been much more practical to assess according to the item they asked you’ [GD 3]). Similarly, the use of tablets links HERACLES and its assessment to specific techniques, such as assessment through observation in the classroom (‘I would like to use it because it seems handier’ [GD 1]), making them quicker and more efficient in the current educational context.

The negative evaluations of the teachers concentrated on the shortcomings of the computer application; the relation between these shortcomings and the methodological change is causal, as the difficulties found in the use of HERACLES reduced its effect on methodological change. On the one hand, the difficulties centred on the lack of a button to cancel the different notes recorded (‘I would have put the Yes/No option, but I missed the delete option’ [GD 1]). On the other hand, they arose from the students being listed in alphabetical order of their first names (rather than their surnames) and from the impossibility of selecting assessment indicators to adapt them to the task assessed and the age of the students (‘We did not have the option of marking which indicators we wanted to assess and which we did not’ [GD 1]). Finally, teachers suggested greater flexibility in incorporating data on the group and the students into HERACLES. In this sense, the computer application does not allow for adaptation to a specific context or for modification of the communicative competence assessment model to adapt it to classroom programming (‘I cannot continue using the material because it is closed’ [GD 2]).

4 Discussion

Our research has addressed the design and the effect of a communicative competence assessment model implemented through a computer application. The first aim was to design an assessment model that provides teachers with a tool to manage the complex process of assessment in the primary education classroom context.

The construction of an assessment model for communicative competence was based on the concept of assessment for learning in the context of the primary education curriculum. This model encourages a deeper analysis of communicative competence, incorporating the different competences involved (linguistic, pragmatic, strategic, etc.). Thus, the assessment of communicative competence (considered a formative assessment) requires a complex process of systematic data collection in the classroom, open to the different indicators determined by the model. In this way, teachers can evaluate communicative competence in different school subjects and develop improvement strategies aimed at one competence or another in a specific and personalised way. This proposal raises clear awareness of how discourse has to be assessed, irrespective of the particularities of the assessment activities. The model gives simple and systematic access to the analysis of oral and written discourse, making it usable by both teachers (in summative assessment) and students (through the feedback of assessment for learning).

The application of this model for the assessment of communicative competence in primary education poses several problems. One problem teachers face in communicative competence assessment is the time cost that individualised attention requires. The proposed model advocates a sustainable assessment [61]. The difficulty of communicative competence assessment requires teachers to address the complexity of the communicative competence teaching-learning process from an individualised perspective. This assessment model allows the time for this assessment to be reduced and, in turn, diversity to be addressed while respecting learning rhythms. The learning assessment will only have an effect in the medium and long term when it is maintained over time. That is, both the investment in teachers’ training time and the handling of the data obtained with computer applications must be preserved to offer greater rapidity in feedback and feedforward [18, 62–64].

The second aim concerned the effect on teachers of applying the communicative competence assessment model through a custom-designed computer application in primary education. The results reveal a polarisation between two teacher profiles. The first brings together those who have a positive attitude towards the implementation of new assessment tools. For these teachers, the tool has been useful and has helped improve the communicative competence teaching-learning process. The second profile groups those teachers who resist changes to the assessment models. For this group, the implementation of the new model presented numerous difficulties. The reasons lay in their conceptual understanding of technology in general, and of tablets in particular, and in their resistance to change in an area such as the culture of school assessment. This resistance to the model's implementation has revealed how primary education assessment processes are the least porous to change in teachers' continuous training [65].

The assessment model's application has enabled teachers of the first profile to incorporate communicative competence assessment into other curricular areas. These teachers understood that communicative competence assessment must not be confined to Spanish Language and Literature. The model's implementation has helped them raise their awareness of the assessment of key primary education competences [66].

5 Limitations and prospective research directions

The analysis of the research developed in this article has revealed some limitations. The first refers to the communicative competence assessment model itself: the indicators require teachers to adapt them to the different assessment tasks. This must be taken into account in the future development of the HERACLES assessment tool, with a view to training teachers and optimising its use in the classroom.

The effect of the model's implementation in the centres studied showed that processes of change in assessment require a longer time period. In this sense, some teachers did not attain a higher degree of proficiency and systematicity in the use of the assessment tool, as individual variables affected the pace of the model's implementation. Future research projects will have to examine teachers' own learning rhythms when implementing improvements in the assessment of the associated key competences.

Relatedly, the use of the HERACLES application presented some difficulties stemming from teachers' limited digital competence. This meant a greater investment of time and effort in adapting to the assessment model, and brought about a certain dissatisfaction among participants due to their slow progress in adopting the changes to communicative competence assessment.

Future studies could extend the research to more educational centres interested in improving learning assessment, increasing its potential impact on primary education. Similarly, the HERACLES tool must be able to be completed and modified by teachers, with the aim of adapting it to each classroom's teaching-learning processes. HERACLES must provide a model that each teacher can later adapt in order to undertake communicative competence assessment systematically and efficiently.

References

  1. Mullis IVS, Martin MO, Foy P, Drucker KT. PIRLS 2011 international results in reading. Chestnut Hill: International Association for the Evaluation of Educational Achievement (IEA); 2012.
  2. Mullis IVS, Martin MO, Foy P, Hooper M. PIRLS 2016 international results in reading. Chestnut Hill: International Association for the Evaluation of Educational Achievement (IEA); 2016.
  3. Mullis IVS, Martin MO, Kennedy AM, Foy P. PIRLS 2006 international report. Chestnut Hill: International Association for the Evaluation of Educational Achievement (IEA); 2007.
  4. Ley Orgánica 10/2002, de 23 de diciembre, de Calidad de la Educación [Organic Law 10/2002, of 23 December, on the Quality of Education]. Available from: https://www.boe.es/eli/es/lo/2002/12/23/10
  5. Ley Orgánica 2/2006, de 3 de mayo, de Educación [Organic Law 2/2006, of 3 May, on Education]. Available from: https://www.boe.es/eli/es/lo/2006/05/03/2/con
  6. Ley Orgánica 8/2013, de 9 de diciembre, para la Mejora de la Calidad Educativa [Organic Law 8/2013, of 9 December, on the Improvement of Educational Quality]. Available from: https://www.boe.es/eli/es/lo/2013/12/09/8/con
  7. Bachman LF. Fundamental considerations in language teaching. Oxford: Oxford University Press; 1990.
  8. Canale M, Swain M. Theoretical bases of communicative approaches to second language teaching and testing. Applied Linguistics. 1980; 1(1): 1–47.
  9. Celce-Murcia M. Rethinking the role of communicative competence in language teaching. In: Alcón E, Safont MP, editors. Intercultural language use and language learning. Netherlands: Springer; 2007. p. 41–57.
  10. Coseriu E. Competencia lingüística. Elementos de la teoría del hablar [Linguistic competence. Elements of the theory of speaking]. Madrid: Gredos; 1992.
  11. Hymes D. On communicative competence. In: Pride JB, Holmes J, editors. Sociolinguistics. Baltimore, USA: Penguin Education, Penguin Books Ltd; 1972. p. 269–293.
  12. Savignon SJ. Communicative competence: Theory and classroom practice. Texts and contexts in second language learning. Reading, MA: Addison-Wesley Publishing Company; 1983.
  13. Widdowson HG. Learning purpose and language use. Oxford: Oxford University Press; 1983.
  14. Council of Europe. Common European Framework of Reference for Languages: Learning, teaching, assessment. Cambridge: Cambridge University Press; 2001.
  15. Leung C. Convivial communication: recontextualizing communicative competence. International Journal of Applied Linguistics. 2005; 15(2): 119–144.
  16. Leung C, Lewkowicz J. Language communication and communicative competence: a view from contemporary classrooms. Language and Education. 2013; 27(5): 398–414.
  17. Black P, et al. Working inside the black box: Assessment for learning in the classroom. Phi Delta Kappan. 2004; 86(1): 4–17.
  18. Carless D. Scaling up assessment for learning: Progress and prospects. In: Carless D, Bridges SM, Chan CKY, Glofcheski R, editors. Scaling up assessment for learning in higher education. Singapore: Springer; 2017. p. 3–17.
  19. Hutchinson C, Young M. Assessment for learning in the accountability era: Empirical evidence from Scotland. Studies in Educational Evaluation. 2011; 37: 62–70.
  20. Wiliam D, Thompson M. Integrating assessment with instruction: What will it take to make it work? In: Dwyer CA, editor. The future of assessment: Shaping teaching and learning. Mahwah, NJ: Erlbaum; 2007. p. 53–82.
  21. Wiliam D. What is assessment for learning? Studies in Educational Evaluation. 2011; 37: 3–14. http://doi.org/10.1016/j.stueduc.2011.03.001
  22. Crooks TJ. The impact of classroom evaluation practices on students. Review of Educational Research. 1988; 58(4): 438–481.
  23. Black P, Wiliam D. Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability. 2009; 21: 5–31. http://doi.org/10.1007/s11092-008-9068-5
  24. Poehner ME, Lantolf JP. Dynamic assessment in the language classroom. Language Teaching Research. 2005; 9(3): 233–65.
  25. Tsai MJ. Rethinking communicative competence for typical speakers. An integrated approach to its nature and assessment. Pragmatics & Cognition. 2013; 21(1): 158–77. http://doi.org/10.1075/pc.21.1.07tsa
  26. Chomsky N. Aspects of the theory of syntax. Cambridge: MIT Press; 1965.
  27. Hymes D. Foundations in sociolinguistics: an ethnographic approach. London: Tavistock Publications; 1977.
  28. Wiemann JM. Explication and test of a model of communicative competence. Human Communication Research. 1977; 3: 195–213.
  29. Bloom L, Lahey M. Language development and language disorders. New York: John Wiley & Sons, Inc; 1978.
  30. Bryan T. A review of studies on learning disabled children's communicative competence. In: Schiefelbusch RL, editor. Language competence: Assessment and intervention. San Diego, CA: College-Hill Press; 1986. p. 227–59.
  31. Canale M. From communicative competence to communicative language pedagogy. In: Richards JC, Smith R, editors. Language and communication. London: Longman; 1983. p. 2–14.
  32. Decreto 230/2007, de 31 de julio, por el que se establece la ordenación y las enseñanzas correspondientes a la educación primaria en Andalucía [Decree 230/2007, of 31 July, establishing the organisation and curriculum of primary education in Andalusia]. Available from: https://www.juntadeandalucia.es/boja/2007/156/1
  33. Orden ECD/65/2015, de 21 de enero, por la que se describen las relaciones entre las competencias, los contenidos y los criterios de evaluación de la educación primaria, la educación secundaria obligatoria y el bachillerato [Order ECD/65/2015, of 21 January, describing the relationships between the competences, contents and assessment criteria of primary education, compulsory secondary education and baccalaureate]. Available from: https://www.boe.es/buscar/doc.php?id=BOE-A-2015-738
  34. Orden de 17 de marzo de 2015, por la que se desarrolla el currículo correspondiente a la Educación Primaria en Andalucía [Order of 17 March 2015, developing the curriculum for primary education in Andalusia]. Available from: https://www.juntadeandalucia.es/boja/2015/60/1
  35. Savignon SJ. Communicative competence. In: Liontas JI, editor. The TESOL encyclopedia of English language teaching. John Wiley & Sons; 2018. p. 1–7. http://doi.org/10.1002/9781118784235.eelt0047
  36. Halliday MAK. Language as social semiotic: the social interpretation of language and meaning. London: Edward Arnold; 1978.
  37. Halliday MAK, Hasan R. Language, context, and text: Aspects of language in a social-semiotic perspective. Victoria: Deakin University Press; 1985.
  38. Savignon SJ. Communicative language teaching: Linguistics theory and classroom practice. In: Savignon SJ, editor. Interpreting communicative language teaching. Contexts and concerns in teacher education. New Haven & London: Yale University Press; 2002. p. 1–28.
  39. Lave J, Wenger E. Situated learning: Legitimate peripheral participation. Cambridge: Cambridge University Press; 1991.
  40. Miller C. Genre as social action. Quarterly Journal of Speech. 1984; 70: 151–167.
  41. Lee JS, Bucholtz M. Language socialization across learning spaces. In: Markee N, editor. The handbook of classroom discourse and interaction. New York: John Wiley & Sons; 2015. p. 319–336.
  42. Ochs E. Introduction. In: Schieffelin B, Ochs E, editors. Language socialization across cultures. New York: Cambridge University Press; 1986. p. 1–13.
  43. Halliday MAK. Learning how to mean: Explorations in the development of language. London: Edward Arnold; 1975.
  44. Shepard LA. The role of classroom assessment in teaching and learning. Los Angeles, CA: CRESST–CSE Technical Report 517; 2000.
  45. Boud D. Great designs: what should assessment do? International Online Conference sponsored by the REAP Project: Assessment design for learner responsibility; 2007 May 29–31. Available from: http://www.reap.ac.uk/reap07/Portals/2/CSL/boudpres/AssessmentREAPConference07Boud.zip
  46. Rose D, Martin JR. Learning to write, reading to learn. Genre, knowledge and pedagogy in the Sydney School. Sheffield: Equinox; 2012.
  47. de Silva Joyce H, Feez S. Text based language and literacy education. Programming and methodology. Putney (Australia): Phoenix Education; 2012.
  48. Paltridge B. Genre, frames, and writing in research settings. Amsterdam & Philadelphia: John Benjamins; 1997.
  49. Gee JP, Handford M, editors. The Routledge handbook of discourse analysis. New York: Routledge; 2012.
  50. Paltridge B. Discourse analysis. An introduction. London: Continuum; 2006.
  51. Paltridge B. Working with genre: a pragmatic perspective. Journal of Pragmatics. 1995; 24: 393–406.
  52. Van Dijk TA, Kintsch W. Strategies of discourse comprehension. Orlando: Academic Press; 1983.
  53. Fereday J, Muir-Cochrane E. Demonstrating rigor using thematic analysis: A hybrid approach of inductive and deductive coding and theme development. International Journal of Qualitative Methods. 2006; 5(1): 80–92.
  54. Tuckett AG. Applying thematic analysis theory to practice: A researcher's experience. Contemporary Nurse. 2005; 19(1–2): 75–87. http://doi.org/10.5172/conu.19.1-2.75
  55. Corbetta P. Metodología y técnicas de investigación social [Social research methodology and techniques]. Madrid: McGraw-Hill; 2003.
  56. McMillan JH, Schumacher S. Investigación educativa [Educational research]. 5th ed. Madrid: Pearson Educación; 2005.
  57. Fleiss JL. Statistical methods for rates and proportions. New York: John Wiley & Sons; 1981.
  58. Viera AJ, Garrett JM. Understanding interobserver agreement: The kappa statistic. Family Medicine. 2005; 37(5): 360–3.
  59. Friese S. ATLAS.ti 7 user manual [Internet]. Berlin, Germany: Scientific Software Development; 2013 [revised 2018; cited 2019 Mar 29]. Available from: http://atlasti.com/manual.html
  60. Krasch D, Carter D. Monitoring and evaluating classroom behavior in early childhood settings. Early Childhood Education Journal. 2009; 36(6): 475–82.
  61. Boud D. Sustainable assessment: rethinking assessment for the learning society. Studies in Continuing Education. 2000; 22(2): 151–67. http://doi.org/10.1080/713695728
  62. Carless D, Salter D, Yang M, Lam J. Developing sustainable feedback practices. Studies in Higher Education. 2011; 36(4): 395–407.
  63. Dawson P, Henderson M, Mahoney P, Phillips M, Ryan T, Boud D, et al. What makes for effective feedback: staff and student perspectives. Assessment & Evaluation in Higher Education. 2019; 44(1): 25–36. http://doi.org/10.1080/02602938.2018.1467877
  64. García-Jiménez E. La evaluación del aprendizaje: de la retroalimentación a la autorregulación. El papel de las tecnologías [Learning assessment: from feedback to self-regulation. The role of technology]. RELIEVE. 2015; 21(2). http://doi.org/10.7203/relieve.21.2.7546
  65. Monarca H, Rappoport S. Investigación sobre los procesos de cambio educativo: el caso de las competencias básicas en España [Research on processes of educational change: the case of core competences in Spain]. Revista de Educación. 2013; special issue: 54–78. http://doi.org/10.4438/1988-592X-RE-2013-EXT-256
  66. Perrenoud P. Cuando la escuela pretende preparar para la vida. ¿Desarrollar competencias o enseñar otros saberes? [When school claims to prepare for life. Developing competences or teaching other knowledge?]. Barcelona: Graó; 2012.