
Differentiating the learning styles of college students in different disciplines in a college English blended learning setting

  • Jie Hu ,

    Roles Conceptualization, Formal analysis, Funding acquisition, Methodology, Supervision, Writing – original draft, Writing – review & editing

    huj@zju.edu.cn

    Affiliations Department of Linguistics, School of International Studies, Zhejiang University, Hangzhou City, Zhejiang Province, China, Center for College Foreign Language Teaching, Zhejiang University, Hangzhou City, Zhejiang Province, China, Institute of Asian Civilizations, Zhejiang University, Hangzhou City, Zhejiang Province, China

  • Yi Peng,

    Roles Formal analysis, Project administration, Writing – review & editing

    Affiliation Department of Linguistics, School of International Studies, Zhejiang University, Hangzhou City, Zhejiang Province, China

  • Xueliang Chen,

    Roles Formal analysis, Writing – original draft

    Affiliation Department of Linguistics, School of International Studies, Zhejiang University, Hangzhou City, Zhejiang Province, China

  • Hangyan Yu

    Roles Writing – review & editing

    Affiliation Department of Linguistics, School of International Studies, Zhejiang University, Hangzhou City, Zhejiang Province, China

Abstract

Learning styles are critical to educational psychology, especially when investigating various contextual factors that interact with individual learning styles. Drawing upon Biglan’s taxonomy of academic tribes, this study systematically analyzed the learning styles of 790 sophomores in a blended learning course with 46 specializations using a novel machine learning algorithm called the support vector machine (SVM). Moreover, an SVM-based recursive feature elimination (SVM-RFE) technique was integrated to identify the differential features among distinct disciplines. The findings of this study shed light on the optimal feature sets that collectively determined students’ discipline-specific learning styles in a college blended learning setting.

Introduction

Research background

Learning style, as an integral and vital part of a student’s learning process, has been constantly discussed in the field of education and pedagogy. Originally developed from the field of psychology, psychological classification, and cognitive research several decades ago [1], the term “learning style” is generally defined as the learner’s innate and individualized preference for ways of participation in learning practice [2]. Theoretically, learning style provides a window into students’ learning processes [3, 4], predicts students’ learning outcomes [5, 6], and plays a critical role in designing individualized instruction [7]. Knowing a student’s learning style and personalizing instruction to students’ learning style could enhance their satisfaction [8], improve their academic performance [9], and even reduce the time necessary to learn [10].

Researchers in recent years have explored students’ learning styles from various perspectives [11–13]. However, knowledge of the learning styles of students from different disciplines in blended learning environments is limited. In an effort to address this gap, this study aims to achieve two major objectives. First, it investigates how disciplinary background affects students’ learning styles in a blended learning environment based on data collected in a compulsory college English course. Students across 46 disciplines were enrolled in this course, providing a rich pool of disciplinary backgrounds for investigating learning styles. Second, it introduces a novel machine learning method, the SVM, to the field of education to identify an optimal set of factors that can simultaneously differentiate students of different academic disciplines. Based on data for students from 46 disciplines, this research delves into the effects of a large number of variables related to students’ learning styles with the help of a powerful machine learning algorithm. By covering a wide range of academic disciplines and detecting latent interactions between a large number of variables, this study aims to provide a clear picture of the relationship between disciplinary factors and students’ learning styles in a blended learning setting.

Literature review

Theories of learning styles.

Learning style is broadly defined as the inherent preferences of individuals as to how they engage in the learning process [2], and the “cognitive, affective and physiological traits” of students have received special attention [14]. To date, there has been a proliferation of learning style definitions proposed to explain people’s learning preferences, each focusing on different aspects. Efforts to dissect learning style have been contested, with some highlighting the dynamic process of the learner’s interaction with the learning environment [14] and others underlining the individualized ways of information processing [15]. One vivid explication involved the metaphor of an onion, pointing out the multilayer nature of learning styles. It was proposed that the outermost layer of the learning style could change in accordance with the external environment, while the inner layer is relatively stable [16, 17]. In addition, a strong concern in this field during the last three decades has led to a proliferation of models that are germane to learning styles, including the Kolb model [18], the Myers-Briggs Type Indicator model [19] and the Felder-Silverman learning style model (FSLSM) [20]. These learning style models have provided useful analytical lenses for analyzing students’ learning styles. The Kolb model focuses on learners’ thinking processes and identifies four types of learning, namely, diverging, assimilating, converging, and accommodating [18]. The Myers-Briggs Type Indicator model classifies learners into extraversion and introversion types, with the former preferring to learn from interpersonal communication and the latter inclining to benefit from personal experience [19]. As the most popular available model, the FSLSM identifies eight categories of learners according to the four dimensions of perception, input, processing and understanding [20]. 
In contrast to other learning style models, which divide students into only a few groups, the FSLSM describes students’ learning styles in a more detailed manner. The four paired dimensions finely distinguish students’ engagement in the learning process, providing a solid basis for a steady and reliable learning style analysis [21]. In addition, it has been argued that the FSLSM is the most appropriate model for a technology-enhanced learning environment because it incorporates important theories of cognitive learning behavior [22, 23]. Therefore, a large number of scholars have based their investigations of students’ learning styles in e-learning/computer-aided learning environments on the FSLSM [24–28].

Learning styles and FSLSM.

Different students receive, process, and respond to information with different learning styles. A theoretical model of learning style can be used to categorize people according to their idiosyncratic learning styles. In this study, the FSLSM was adopted as a theoretical framework to address the collective impacts of differences in students’ learning styles across different disciplines (see Fig 1).

Fig 1. The adapted Felder-Silverman learning style model.

This model specifies the four dimensions of the construct of learning style: visual/verbal, sensing/intuitive, active/reflective, and sequential/global. These four dimensions correspond to four psychological processes: input, perception, processing, and understanding.

https://doi.org/10.1371/journal.pone.0251545.g001

The FSLSM comprises learning styles distributed along four dimensions.

Visual learners process information best when it is presented as graphs, pictures, etc., while verbal learners prefer spoken cues and remember best what they hear. Sensory learners like working with facts, data, and experimentation, while intuitive learners prefer abstract principles and theories. Active learners like to try things and learn through experimentation, while reflective learners prefer to think things through before taking action. Sequential learners absorb knowledge in a linear fashion and make progress step by step, while global learners tend to grasp the big picture before filling in all the details.

Learning styles and academic disciplines.

Learning styles vary depending on a series of factors, including but not limited to age [29], gender [30], personality [2, 31], learning environment [32] and learning experience [33]. In the higher education context, academic discipline appears to be an important variable influencing students’ distinctive learning styles, which echoes a multitude of investigations [29, 34–41]. One notable study explored the learning styles of students from four clusters of disciplines in an academic English language course and proposed that academic discipline is a significant predictor of students’ learning styles, with students from the soft-pure, soft-applied, hard-pure and hard-applied disciplines each favoring different learning modes [42]. In particular, researchers used the Inventory of Learning Styles (ILS) questionnaire and found prominent disparities in learning styles between students from four different disciplinary backgrounds in the special educational field of vocational training [43]. These studies have found significant differences between the learning styles of students from different academic disciplines, thus supporting the concept that learning style could be domain dependent.

Learning styles in an online/blended learning environment.

Individuals’ learning styles reflect their adaptive orientation to learning and are not fixed personality traits. Consequently, learning styles can vary among diverse contexts, and related research in different contexts is vital to understanding learning styles in greater depth. Web-based technologies eliminate barriers of space and time and have become integrated into individuals’ daily lives and learning habits. Online and blended learning have begun to pervade virtually every aspect of the education landscape [40], and this warrants close attention. In addition to a series of studies that reflected upon the application of information and communication technology in the learning process [44, 45], recent studies have found a mixed picture of whether students in a web-based/blended learning environment have a typical preference for learning.

Online learning makes it possible for students to set their own goals and develop individualized study plans, equipping them with more learning autonomy [46]. Generally, students with a more independent learning style, greater self-regulating behavior and stronger self-efficacy have been found to be more successful in an online environment [47]. To date, researchers have made substantial contributions to the identification and prediction of learning styles in an online learning environment [27, 48–51]. For instance, one study focused on the manifestation of college students’ learning styles in a purely computer-based learning environment to evaluate the different learning styles of web learners in online courses, indicating that students’ learning styles were significantly related to online participation [49]. Students’ learning styles in interactive e-learning have also been meticulously investigated, and online tutorials have been found to contribute to students’ academic performance regardless of their learning styles [51].

As a flexible learning method, blended courses combine the advantages of online learning and traditional teaching methods [52]. Researchers have investigated students’ learning styles within this context and have identified a series of prominent factors, including perceived satisfaction and technology acceptance [53], the dynamics of the online/face-to-face environment [54], and curriculum design [55]. Based on the Visual, Aural, Read/Write and Kinesthetic model, a comprehensive study scrutinized the learning styles of K-12 students in a blended learning environment, elucidating the effect of the relationships among personality, learning style and satisfaction on educational outcomes [56]. A recent study in the blended learning context underscored the negative effect of a kinesthetic learning style on students’ academic performance, as well as the positive effects of visual and auditory learning styles [57].

Considering that academic discipline and learning environment are generally regarded as essential predictors of students’ learning styles, some studies have also concentrated on the effects of academic discipline in a blended learning environment. Focusing on college students’ learning styles in a computer-based learning environment, one study evaluated the different learning styles of web learners, namely, visual, sensing, global and sequential learners, in online courses. According to the analysis, compared with students from other colleges, liberal arts students are more susceptible to the uneasiness that may result from remote teaching because of their learning styles [11]. A similar effort used CMS tool usage logs and course evaluations to explore the learning styles of the disciplinary quadrants in the online learning environment; the results indicated noticeable differences in tool preferences between students from different domains [12]. In comparison, within the context of blended learning, a comprehensive study employed chi-square statistics on the basis of the Community of Inquiry (CoI) presences framework, arguing that soft-applied discipline learners in the blended learning environment prefer the kinesthetic learning style, while no correlations between the CoI presences and the learning styles of soft-pure and hard-pure discipline students were identified. However, it was noted that students’ blended learning experience depends heavily on academic discipline, especially for students in hard-pure disciplines [13].

Research gaps and research questions

Overall, this line of research seems to be gaining traction, and new perspectives are continually introduced. The recent literature on learning styles mostly focuses on disciplinary effects on the variation in learning styles, and some of these studies were conducted within the blended environment. However, most studies focused on only a few discrete disciplines or included only small samples of students [34–41]. Data in these studies were gathered through specialized courses such as academic English [42] rather than compulsory courses available to students from all disciplines. Even though certain investigations did include large samples [49], they emphasized the role of teaching rather than students’ learning styles. In addition, it is often overlooked that a large number of learning style variables could jointly distinguish students from different academic disciplines in a blended learning environment, and a comprehensive analysis that considers the effects of such a large set of variables has remained absent. Therefore, one goal of the present study is to fill this gap and shed light on this topic.

Another issue addressed in this study is the selection of an optimal measurement that can effectively identify and differentiate individual learning styles [58]. The effective identification and differentiation of individual learning styles can not only help students develop greater awareness of their learning but also provide teachers with the necessary input to design tailor-made instruction in pedagogical practice. Currently, there are two general approaches to identifying learning styles: a literature-based approach and a data-driven approach. The literature-based approach tends to borrow established rules from the existing literature, while the data-driven approach tends to construct statistical models using algorithms from fields such as machine learning, artificial intelligence, and data mining [59]. Research related to learning styles has been performed using predominantly traditional instruments, such as descriptive statistics, Spearman’s rank correlation coefficient R [39], multivariate analysis of variance [56] and analysis of variance (ANOVA) [38, 43, 49, 57]. Admittedly, these instruments have been applied and validated in numerous studies, in different disciplines, and across multiple timescales. Nevertheless, some of the studies using these statistical tools did not identify significant results [36, 53, 54] or reached only loose conclusions [60]; this might be due to the inability of these methods to probe the synergistic effects of multiple variables. However, the limited functions of comparison, correlation, prediction, etc. are being complemented by a new generation of technological innovations that promise more varied approaches to addressing social and scientific issues. Machine learning is one such approach that has received much attention both in academia and beyond.
As a subset of artificial intelligence, machine learning deals with algorithms and statistical models on computer systems, performing tasks based on patterns and inference instead of explicit instruction. As such, it can deal with high volumes of data at the same time, perform tasks automatically and independently, and continuously improve its performance based on past experience [54]. Similar machine learning approaches have been proposed and tested by different scholars to identify students’ learning styles, with varying results regarding the classification of learning styles. For instance, a study that examined the precision levels of four computational intelligence approaches, i.e., artificial neural network, genetic algorithm, ant colony system and particle swarm optimization, found that the average precision of learning style differentiation ranged between 66% and 77% [61]. Another study that classified learning styles through SVM reported accuracy levels ranging from 53% to 84% [62]. A comparison of the prediction performance of SVM and artificial neural networks found that SVM has higher prediction accuracy than the latter [63]. This was further supported by another study, which yielded a similar result between SVM and the particle swarm optimization algorithm [64]. Moreover, when complemented by a genetic algorithm [65] and ant colony system [66], SVM has also shown improved results. These findings across different fields point to the reliability of SVM as an effective statistical tool for identification and differentiation analysis.

Therefore, a comprehensive investigation across the four general disciplines in Biglan’s taxonomy using a strong machine learning approach is needed. Given the existence of the research gaps discussed above, this exploratory study seeks to address the following questions:

  1. Can students’ learning styles be applied to differentiate various academic disciplines in the blended learning setting? If so, what are the differentiability levels among different academic disciplines based on students’ learning styles?
  2. What are the key features that can be selected to determine the collective impact on differentiation by a machine learning algorithm?
  3. What are the collective impacts of optimal feature sets?

Materials and methods

This study adopted a quantitative approach for the analysis. First, a modified and translated version of the original ILS questionnaire was administered to collect scores for students’ learning styles. Then, two alternate data analyses were performed separately. One analysis involved a traditional ANOVA, which tested the main effect of discipline on students’ learning styles in each ILS dimension. The other analysis involved the support vector machine (SVM) technique to test its performance in classifying students’ learning styles in the blended learning course among 46 specializations. Then, SVM-based recursive feature elimination (SVM-RFE) was employed to specify the impact of students’ disciplinary backgrounds on their learning styles in blended learning. By referencing the 44 questions (operationalized as features in this study) in the ILS questionnaire, SVM-RFE could rank these features based on their relative importance in differentiating different disciplines and identify the key features that collectively differentiate the students’ learning style. These steps are intended to not only identify students’ learning style differences but also explain such differences in relation to their academic disciplinary backgrounds.
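The SVM-RFE procedure described above can be sketched as follows. This is a minimal illustration using scikit-learn as a stand-in for the LIBSVM toolchain actually used in the study; the data here are synthetic, and the shapes (790 students, 44 ILS questions) merely mirror the text:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Illustrative data: 790 students x 44 ILS questions (Likert scores 1-5),
# with binary labels for one pairwise discipline comparison (e.g., HA vs. HP).
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(790, 44)).astype(float)
y = rng.integers(0, 2, size=790)

# SVM-RFE: repeatedly fit a linear-kernel SVM, eliminate the feature with
# the smallest weight, and record the elimination order as a ranking.
selector = RFE(SVC(kernel="linear"), n_features_to_select=1, step=1)
selector.fit(X, y)

ranking = selector.ranking_              # 1 = retained longest, 44 = eliminated first
top_features = np.argsort(ranking)[:10]  # indices of the ten strongest questions
```

With real data, the top-ranked questions would constitute a candidate optimal feature set for that pairwise discipline model.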

Participants

The participants included 790 sophomores taking the blended English language course from 46 majors at Z University. Sophomore students were selected for this study for two reasons. First, sophomores are one of only two groups of students (the other being college freshmen) who take a compulsory English language course, namely, the College English language course. Second, of these two groups of students, sophomores have received academic discipline-related education, while their freshmen counterparts have not had disciplinary training during the first year of college. In the College English language course, online activities, representing 55% of the whole course, include e-course teaching designed by qualified course teachers or professors, courseware usage for online tutorials, forum discussion and essay writing, and two online quizzes. Offline activities, which represent 45% of the whole course, include role-playing, ice-breaker activities, group presentations, an oral examination, and a final examination. Therefore, the effects of academic discipline on sophomores’ learning styles might be sufficiently salient to warrant a comparison in a blended learning setting [67]. Among the participants, 420 were male, and 370 were female. Most participants were aged 18 to 19 years and had taken English language courses for at least 6 years. Based on Biglan’s typology of disciplinary fields, the students’ specializations were classified into the four broad disciplines of hard-applied (HA, 289/37.00%), hard-pure (HP, 150/19.00%), soft-applied (SA, 162/20.00%), and soft-pure (SP, 189/24.00%).

Biglan’s classification scheme of academic disciplines (hard (H) vs. soft (S) disciplines and pure (P) vs. applied (A) disciplines) has been credited as the most cited organizational system of academic disciplines in tertiary education [6870]. Many studies have also provided evidence supporting the validity of this classification [69]. Over the years, research has indicated that Biglan’s typology is correlated with differences in many other properties and serves as an appropriate mechanism to organize discipline-specific knowledge or epistemologies [38] and design and deliver courses for students with different learning style preferences [41]. Therefore, this classification provides a convenient framework to explore differences across disciplinary boundaries. In general, HA disciplines include engineering, HP disciplines include the so-called natural sciences, SA disciplines include the social sciences, and SP disciplines include the humanities [41, 68, 71].
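The mapping from specialization to Biglan quadrant can be expressed as a simple lookup. The handful of majors below is hypothetical and merely illustrates the scheme; the study's actual 46-major assignment is not reproduced here:

```python
# Hypothetical mapping of a few specializations onto Biglan's four
# quadrants (hard/soft x pure/applied). Engineering-type majors fall
# under HA, natural sciences under HP, social sciences under SA, and
# humanities under SP, per the text.
BIGLAN = {
    "mechanical engineering": "HA",
    "computer science": "HA",
    "physics": "HP",
    "chemistry": "HP",
    "economics": "SA",
    "education": "SA",
    "history": "SP",
    "philosophy": "SP",
}

def biglan_quadrant(major: str) -> str:
    """Return the Biglan quadrant code (HA/HP/SA/SP) for a specialization."""
    return BIGLAN[major.lower()]
```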

Instrument

In learning style research, it is difficult to select an instrument to measure the subjects’ learning styles [72]. The criteria used for the selection of a learning style instrument in this study include the following: 1) successful use of the instrument in previous studies, 2) demonstrated validity and reliability, 3) a match between the purpose of the instrument and the aim of this study and 4) open access to the questionnaire.

Felder and Soloman’s ILS questionnaire, which was built on the FSLSM, was adopted in the present study to investigate students’ learning styles across different disciplines. First, the FSLSM is recognized as the most commonly used model for measuring individual learning styles on a general scale [73] in higher education [74] and has remained popular for many years across different disciplines in university settings and beyond. In the age of personalized instruction, this model has breathed new life into areas such as blended learning [75], online distance learning [76], courseware design [56], and intelligent tutoring systems [77, 78]. Second, the FSLSM builds on previous learning style models, integrating their advantages; it is thus more comprehensive in delineating students’ learning styles [79, 80]. Third, the FSLSM has good predictive ability with independent testing sets (i.e., unknown learning style objects) [17] and has been repeatedly proven to be a more accurate, reliable, and valid model than most other models for predicting students’ learning performance [10, 80]. Fourth, the ILS is a free instrument that can be openly accessed online (URL: https://www.webtools.ncsu.edu/learningstyles/) and has been widely used in the research context [81, 82].

The modified and translated version of the original ILS questionnaire includes 44 questions in total, with 11 questions corresponding to each dimension of the Felder-Silverman model as follows: questions 1–11 correspond to dimension 1 (active vs. reflective), questions 12–22 correspond to dimension 2 (sensing vs. intuitive), questions 23–33 correspond to dimension 3 (visual vs. verbal), and questions 34–44 correspond to dimension 4 (sequential vs. global). Each question is followed by five choices on a five-point Likert scale: “strongly agree with A (1)”, “agree with A (2)”, “neutral (3)”, “agree with B (4)” and “strongly agree with B (5)”. Option A and option B represent the two choices offered in the original ILS questionnaire.
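The question-to-dimension structure above lends itself to a per-dimension score. A minimal sketch, assuming (as one plausible scoring convention, not stated in the text) that the 1-5 answers within each dimension are summed, with low totals leaning toward option A and high totals toward option B:

```python
# Dimension slices follow the text: Q1-11 active/reflective, Q12-22
# sensing/intuitive, Q23-33 visual/verbal, Q34-44 sequential/global.
DIMENSIONS = {
    "active_reflective": slice(0, 11),
    "sensing_intuitive": slice(11, 22),
    "visual_verbal": slice(22, 33),
    "sequential_global": slice(33, 44),
}

def dimension_scores(responses):
    """Sum a respondent's 1-5 Likert answers within each ILS dimension.

    `responses` is a list of 44 integers in questionnaire order. Each
    dimension total ranges from 11 (strongly A throughout) to 55
    (strongly B throughout), with 33 as the neutral midpoint.
    """
    assert len(responses) == 44, "expected exactly 44 ILS answers"
    return {name: sum(responses[s]) for name, s in DIMENSIONS.items()}
```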

Ethics statements

The free questionnaires were administered in a single session by specialized staff who collaborated on the investigation. The participants completed all questionnaires individually. The study procedures were in accordance with the ethical standards of the Helsinki Declaration and were approved by the Ethics Committee of the School of International Studies, Zhejiang University. All participants signed written informed consent to authorize their participation in this research. After completion of the informed consent form, each participant was provided a gift (a pen) in gratitude for their contribution and participation.

Data collection procedure

Before the questionnaires were distributed, the researchers involved in this study contacted faculty members from various departments and requested their help. After permission was given, the printed questionnaires were administered to students under the supervision of their teachers at the end of their English language course. The students were informed of the purpose and importance of the study and asked to carefully complete the questionnaires. The students were also assured that their personal information would be used for research purposes only. All students provided written informed consent (see S2 File). After the questionnaires were completed and returned, they were thoroughly examined by the researchers such that problematic questionnaires could be identified and excluded from further analysis. All questionnaires eligible for the data analysis had to meet the following two standards: first, all questions must be answered, and second, the answered questions must reflect a reasonable logic. Regarding the few missing values, the median number of a given individual’s responses on 11 questions per dimension included in the ILS questionnaire was used to fill the void in each case. In statistics, using the median number to impute missing values is common and acceptable because missing values represent only a small minority of the entire dataset and are assumed to not have a large impact on the final results [83, 84].
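The per-dimension median imputation described above can be sketched as follows (a minimal illustration; the function name and the use of `None` for a missing answer are ours):

```python
import statistics

def impute_dimension(answers):
    """Fill missing answers (None) with the median of the respondent's
    own non-missing answers on the same 11-question ILS dimension,
    following the study's imputation rule."""
    observed = [a for a in answers if a is not None]
    med = statistics.median(observed)
    return [med if a is None else a for a in answers]
```

Because missing values were rare, this within-person, within-dimension median is assumed to have little effect on the final results, as the text notes.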

In total, 850 questionnaires were administered to the students, and 823 were retrieved. Of the retrieved questionnaires, 790 were identified as appropriate for further use. After data screening, these questionnaires were organized, and their results were entered into an Excel spreadsheet.

Data analysis method

During the data analysis, the free package LIBSVM (https://www.csie.ntu.edu.tw/~cjlin/libsvm/), a library for the SVM, was first applied. Then, a traditional ANOVA was performed to examine whether there was a main effect of academic discipline on Chinese students’ learning styles. The ANOVA was performed using SPSS, a robust data analysis software package that supports a wide range of statistical analyses. For examining the effect of a single independent variable or a few independent variables, SPSS ANOVA can produce satisfactory results. However, the SVM, a classic data mining algorithm, outperforms ANOVA on datasets in which a large number of multidimensional variables are intertwined and their combined/collective effects influence the classification results. In this study, the research objective was to efficiently differentiate the disciplines and detect the key features among the 44 factors. Alone, a single factor or a few factors might not be significant enough to discriminate the learning styles of the different disciplines; selected by the SVM, the effects of multiple features may collectively enhance the classification performance. A further reason for selecting the SVM over ANOVA is that in the latter case, the responses on all questions in a single dimension are summed instead of treated as individual scores; thus, the by-item variation is concealed. In addition, the SVM is especially suitable for statistical analysis with high-dimensional factors (usually > 10; 44-dimensional factors were included in this study) and can detect the effects collectively imposed by a feature set [85].
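The contrast between the two analyses can be illustrated in a few lines: an ANOVA tests the main effect of discipline on one summed dimension at a time, whereas an SVM classifies on all 44 item scores jointly. The sketch below uses scipy and scikit-learn as stand-ins for SPSS and LIBSVM, with synthetic data:

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.integers(1, 6, size=(790, 44)).astype(float)  # 44 ILS item scores
groups = rng.integers(0, 4, size=790)                 # 4 Biglan disciplines

# ANOVA: main effect of discipline on ONE summed dimension (items 1-11),
# concealing by-item variation within the dimension.
dim1 = X[:, :11].sum(axis=1)
F, p = f_oneway(*[dim1[groups == g] for g in range(4)])

# SVM: a pairwise classifier over ALL 44 item scores at once,
# capturing the collective effect of the full feature set.
mask = groups < 2  # e.g., HA vs. HP
clf = SVC(kernel="linear").fit(X[mask], groups[mask])
```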

Originally proposed in 1992 [86], the SVM is a supervised learning model related to machine learning algorithms that can be used for classification, data analysis, pattern recognition, and regression analysis. The SVM is an efficient classification model that optimally divides data into two categories and is ranked among the top methods in statistical learning theory due to its originality and practicality [85]. Owing to its robustness and its accurate classification and prediction performance [87–89], the SVM has high reproducibility [90, 91]. Because the computing process of the SVM lacks visualization, it has been described as a “black box” method [92]; however, future studies in the emerging field of explainable artificial intelligence may help solve this problem and convert this approach into a “glass box” method [67]. This algorithm has proven to have a solid theoretical foundation and excellent empirical applications in the social sciences, including education [93] and natural language processing [94]. The mechanism underlying the SVM is presented in Fig 2.

Fig 2. The mechanism underlying the support vector machine.

Hyperplanes 1 and 2 are two regression lines that divide the data into two groups. Hyperplane 1 is considered the best fitting line because it maximizes the distance between the two groups.

https://doi.org/10.1371/journal.pone.0251545.g002

The SVM contains the following two modules: one module is a general-purpose machine learning method, and the other module is a domain-specific kernel function. The SVM training algorithm is used to build a training model that is then used to predict the category to which a new sample instance belongs [95]. When a set of training samples is given, each sample is given the label of one of two categories. To evaluate the performance of SVM models, a confusion matrix, which is a table describing the performance of a classifier on a set of test data for which the true values are known, is used (see Table 1).

Based on the confusion matrix, several indicators were developed to measure the performance of SVM models; the five most common are accuracy (ACC), specificity (SPE), sensitivity (SEN) (also known as ‘recall’), area under the receiver operating characteristic curve (AUC), and F-measure. All five values were used in this study as performance evaluators of the SVM models and generally range from 0 to 1. In terms of the true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN) of the confusion matrix, the standard formulae are as follows, along with a brief explanation of their functions:

ACC = (TP + TN) / (TP + TN + FP + FN) (1)

SPE = TN / (TN + FP) (2)

SEN = TP / (TP + FN) (3)

AUC = the area under the curve obtained by plotting the true positive rate against the false positive rate across all classification thresholds (4)

F-measure = 2 × Precision × SEN / (Precision + SEN), where Precision = TP / (TP + FP) (5)

where

ACC represents the proportion of true results, including both positive and negative results, in the selected population;

SPE represents the proportion of actual negatives that are correctly identified as such;

SEN represents the proportion of actual positives that are correctly identified as such;

AUC is a ranking-based measure of classification performance that can distinguish a randomly chosen positive example from a randomly chosen negative example; and

F-measure is the harmonic mean of precision (another performance indicator) and recall.
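As a hedged sketch of how the five indicators can be computed in practice (scikit-learn and the toy labels/scores below are assumptions, not taken from the study):

```python
# Sketch: the five SVM performance indicators on hypothetical data.
from sklearn.metrics import (accuracy_score, recall_score,
                             roc_auc_score, f1_score)

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]                      # actual labels
y_pred  = [1, 0, 0, 1, 0, 1, 1, 0]                      # predicted labels
y_score = [0.9, 0.2, 0.4, 0.8, 0.3, 0.6, 0.7, 0.1]      # decision values

acc = accuracy_score(y_true, y_pred)                    # ACC
sen = recall_score(y_true, y_pred)                      # SEN (recall)
spe = recall_score(y_true, y_pred, pos_label=0)         # SPE
auc = roc_auc_score(y_true, y_score)                    # AUC (needs scores)
f1  = f1_score(y_true, y_pred)                          # F-measure
print(acc, sen, spe, auc, f1)
```

Note that the AUC is computed from continuous decision scores rather than hard labels, which is why it can rank a random positive above a random negative.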

The ACC is frequently used as a single measure of classification performance, but combining the SPE, SEN, AUC and F-measure with the ACC provides an enhanced performance assessment and has frequently been applied in recent studies [96]. In particular, the AUC is a good metric for validating the general performance of models [97]. Its advantage is that it is invariant to relative class distributions and class-specific error costs [98, 99]. Moreover, to some extent, the AUC is statistically consistent and more discriminating than the ACC for both balanced and imbalanced real-world data sets [100], which makes it especially suitable for unequal samples, such as the HA-HP model in this study. After all data preparations were completed, the data used for the comparisons were extracted separately. First, the processed training set was run using optimized parameters. Second, the constructed model was used to predict the test set, and the five indicators for each of the five folds, as well as the fivefold average, were obtained. Cross-validation is a general validation procedure used to assess how well the results of a statistical analysis generalize to an independent data set and to evaluate the stability of the statistical model. K-fold cross-validation is commonly used to search for the best hyperparameters of an SVM to achieve the highest accuracy [101]. In particular, fivefold, tenfold, and leave-one-out cross-validation are the most commonly used versions of k-fold cross-validation [102, 103]. Fivefold cross-validation was selected because it generally achieves good prediction performance [103, 104] and is a popular rule of thumb supported by empirical evidence [105].
In this study, the entire data set was randomly divided into five folds (groups); in each round, four folds (the training sample) were used to develop a prediction model, while the remaining fold (the test sample) was used for validation. These functions were all implemented in Python version 3.7.0 (URL: https://www.python.org/).
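A minimal sketch of this fivefold procedure, assuming scikit-learn and synthetic data in place of the study's 44-feature ILS responses:

```python
# Sketch: fivefold cross-validation of a linear SVM.
# X/y are synthetic stand-ins for the 790 students' ILS data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 44))        # 44 ILS features per student
y = rng.integers(0, 2, size=100)      # binary discipline label

# Each of the 5 folds serves once as the test sample; the other 4 train.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=cv,
                         scoring="accuracy")
print(scores.mean())                  # fivefold average accuracy
```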

Then, SVM-RFE, an embedded feature selection strategy first applied to identify differentially expressed genes between patients and healthy individuals [106], was adopted. SVM-RFE has proven more robust to data overfitting than other feature selection techniques and has shown its power in many fields [107]. The approach iteratively removes the feature with the smallest weight at each step, building a feature ranking, until a group of highly weighted features remains. After this feature selection procedure, SVM models were reconstructed based on the selected features, and the performance of the new models was compared with that of the original models containing all features. The experimental process is provided in Fig 3 for ease of reference.
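The iterative elimination described above can be sketched with scikit-learn's RFE wrapper around a linear SVC; the study's exact parameters are not reported, so the settings below are assumptions on synthetic data:

```python
# Sketch: SVM-RFE ranking of 44 questionnaire features.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 44))        # 44 ILS questionnaire features
y = rng.integers(0, 2, size=100)      # binary discipline label

# step=1: drop the single lowest-weight feature per iteration
# until 20 highly weighted features remain.
rfe = RFE(SVC(kernel="linear"), n_features_to_select=20, step=1).fit(X, y)
top20 = np.where(rfe.support_)[0]     # indices of the selected features
print(len(top20))                     # → 20
```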

Fig 3. Experimental process and working mechanism of SVM and SVM-RFE.

https://doi.org/10.1371/journal.pone.0251545.g003

Results

The classification results produced by SVM and the ranking of the top 20 features produced by SVM-RFE are listed in Table 2. Twenty variables were selected in this study for two reasons, one data-based and one literature-based. First, models composed of 20 features generally performed better than the original models, whereas the performance of models with more than 20 features was negatively influenced. Second, SVM-based studies in the social sciences have identified 20 to 30 features as a good size for an optimal feature set [108], and 20 features have previously been selected for inclusion in an optimal feature set [95]. Therefore, in this study, the top 20 features were selected for subsequent analysis, following previous analyses that yielded accepted measurement rates. These 20 features retained most of the useful information from all 44 factors with far fewer features, which showed satisfactory representation [96].

Results of RQ (1) What are the differentiability levels among different academic disciplines based on students’ learning styles?

To measure how well students’ disciplines could be differentiated, the collected data were examined with the SVM algorithm. As shown in Table 2, five performance indicators, namely, the ACC, SPE, SEN, AUC and F-measure, were used to evaluate the SVM models. In terms of the two general performance indicators, i.e., the ACC and AUC, the models based on the HA-HP, HA-SA, and HA-SP disciplines yielded a classification capacity of approximately 70.00%, indicating that the students in these disciplines showed relatively large differences. In contrast, the models based on the H-S, A-P, HP-SA, HP-SP, and SA-SP disciplines showed only a moderate classification capacity (above 55.00%). This finding suggests that these five SVM models were not as effective as the other three in differentiating students among disciplines based on their learning styles. The highest ACC and AUC values were obtained in the HA-HP model, and the lowest in the HP-SA model. As shown in Table 2, the AUCs of the different models ranged from 57.76% (HP-SA) to 73.97% (HA-HP).

To compare the results of the SVM model with another statistical analysis, an ANOVA was applied. Prior to the main analysis, the students’ responses in each ILS dimension were summed to obtain a composite score. All assumptions of ANOVA were checked, and no serious violations were observed. Then, an ANOVA was performed with academic discipline as the independent variable and the students’ learning styles as the dependent variable. The results of the ANOVA showed that there was no statistically significant difference in the group means of the students’ learning styles in Dimension 1, F(3, 786) = 2.56, p = .054, Dimension 2, F(3, 786) = 0.422, p = .74, or Dimension 3, F(3, 786) = 0.90, p = .443. However, in Dimension 4, a statistically significant difference was found in the group means of the students’ learning styles, F(3, 786) = 0.90, p = .005. As the samples in the four groups were unbalanced, post hoc comparisons using Scheffé’s method were performed, demonstrating that the means of the students’ learning styles significantly differed only between the HA (M = 31.04, SD = 4.986) and SP (M = 29.55, SD = 5.492) disciplines, 95.00% CI for MD [0.19, 2.78], p = .016, whereas the other disciplinary comparisons showed no significant differences. Compared with these ANOVA results, the three SVM models (HA-HP, HA-SA, and HA-SP) presented a satisfactory differentiation capability of approximately 70.00% across the five indicators.
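For reference, a one-way ANOVA of this kind can be sketched with SciPy; the group scores below are synthetic stand-ins following the paper's HA/HP/SA/SP labels, not the study's data:

```python
# Sketch: one-way ANOVA on composite ILS scores across four
# discipline groups (synthetic, unbalanced samples).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
ha = rng.normal(31.0, 5.0, size=200)   # composite scores, HA group
hp = rng.normal(30.5, 5.0, size=200)   # HP group
sa = rng.normal(30.0, 5.0, size=200)   # SA group
sp = rng.normal(29.5, 5.5, size=190)   # SP group (unequal n)

# F statistic and p-value for the null of equal group means.
f_stat, p_value = f_oneway(ha, hp, sa, sp)
print(f_stat, p_value)
```

A significant result here would still require post hoc comparisons (e.g., Scheffé's method, as in the study) to locate which pair of groups differs.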

With a significant ANOVA result, it was difficult to determine which questions drove the significant difference; with a nonsignificant result, it remained possible that certain questions were nevertheless relevant in differentiating the participants. This problem was circumvented in the SVM, where each individual question was treated as a variable and assigned a value indicating its relative importance in the questionnaire. Using SVM also circumvented inherent problems with traditional significance testing, especially the reliance on p-values, which can become biased in the case of multiple comparisons [109].

Results of RQ (2) What are the key features that can be selected to determine the collective impact on differentiation by a machine learning algorithm?

To examine whether model performance improved as a result of this feature selection procedure, the 20 selected features were submitted to another round of SVM analysis, with the same five performance indicators used to measure model performance (see Table 2). A comparison of the SVM and SVM-RFE results in Table 2 shows that, except for the HA-SP model, all models presented similar or improved performance after feature selection. In particular, the improvement in the HA-HP and HP-SA models was remarkable. For instance, in the HA-HP model, the ACC increased from 69.32% with the SVM model to 82.59% with the SVM-RFE model, and the AUC increased substantially from 73.97% to 89.13%. This finding suggests that the feature selection process refined the models’ classification accuracy and that the 20 selected features, out of all 44 factors, carry substantive information that may be informative for exploring disciplinary differences. Although the indicator values for the 20 selected features were not very high, all five indicators remained above 65.00%, showing that the models were still representative even though only 20 of the 44 factors supplied the classification capability. Considering the significant reduction in the number of questions used for model construction in SVM-RFE compared with the SVM model, the top 20 features newly identified by SVM-RFE were effective enough to preserve the differential ability of all 44 questions. Thus, these top 20 factors can be recognized as key differential features for distinguishing two distinct disciplines.

To identify these top 20 features in eight models (see Table 2), SVM-RFE was applied to rank order all 44 features contained in the ILS questionnaire. To facilitate a detailed understanding of what these features represent, the questions related to the top 20 features in the HA-HP model are listed in Table 3 for ease of reference.

Table 3. Question descriptions of the top 20 features in the HA-HP model.

https://doi.org/10.1371/journal.pone.0251545.t003

Results of RQ (3) What are the collective impacts of optimal feature sets?

The collective impacts of the optimal feature sets can be interpreted from four aspects, namely, the complexity of students’ learning styles, the appropriateness of the SVM, the ranking produced by SVM-RFE and the multiple detailed comparisons between students from different disciplines. First, the FSLSM recognizes that students’ learning styles are shaped by a series of factors that intertwine and interact during students’ development. Given these complex dynamics, an approach that can detect the combined effects of a group of variables is needed. Second, recent years have witnessed the emergence of data mining approaches for exploring students’ learning styles [28, 48–50, 110]. Specifically, as one of the top machine learning algorithms, the SVM excels at identifying the combined effects of high-order factors [87]. In this study, the SVM proved to perform well in classifying students’ learning styles across different disciplines, with every indicator at an acceptable level. Third, combining SVM with RFE enables the simultaneous discovery of multiple features that collectively determine classification. Notably, although SVM-RFE can rank the importance of the features, they should be regarded as an entire optimal feature set; in other words, it is the combination of these 20 features, rather than any single factor, that differentiates students’ learning styles across academic disciplines. Last but not least, the multiple comparisons between the different discipline-based SVM models reveal the most effective learning style factors, giving researchers clues to the nuanced differences between students’ learning styles. Students from different academic disciplines evidently understand, see and reflect on things from individualized perspectives. The 20 most effective factors for each model were scattered across Questions 1 to 44, confirming students’ different learning styles in all four dimensions.
Therefore, the FSLSM provides a useful and effective tool for evaluating students’ learning styles from a rather comprehensive point of view.

Discussion

The following discussions address the three research questions explored in the current study.

Levels of differentiability among various academic disciplines based on students’ learning styles with SVM

The results suggest that SVM is an effective classification approach in the blended learning context, in which students with diverse disciplinary backgrounds can be distinguished from each other according to their learning styles. All performance indicators presented in Tables 2 and 3 remain above the baseline of 50.00%, suggesting that between any two disciplines, differences in students’ learning styles can be identified. To some extent, these differences can be identified with a relatively satisfactory classification capability (e.g., an ACC of 69.32% and an AUC of 73.97% in the HA-HP model shown in Table 2). Further support for the SVM algorithm comes from the SVM-RFE models constructed to rank the factors’ classification capacity: all values also remained above the baseline, and some reached a relatively high classification capability (e.g., an ACC of 82.59% and an AUC of 89.13% in the HA-HP model shown in Table 2). While the results mostly show moderate ACC and AUC values, they still provide validity evidence supporting the role of SVM as an effective binary classifier in the educational context. However, while these differences are noteworthy, the similarities among students in different disciplines also deserve attention. The results reported above indicate that for some disciplines the classification capacity was not relatively high; this was the case for the model based on the SA-SP disciplines.

Regarding the low differentiability, one explanation might be the indistinct classification of some emerging “soft disciplines.” It has been noted that psychology, for example, could be identified as “a discipline that can be considered predominantly ‘soft’ and slightly ‘purer’ than ‘applied’ in nature” [111] (p. 43–53), which could blur the line between the SA and SP disciplines. As there is no longer an impassable gulf separating the SA and SP disciplines, their disciplinary differences may have diminished in the common practice of classroom lecturing. Another reason lies in the different cultivation models of the “soft disciplines” and “hard disciplines” experienced by the sampled students. In high school, these students were generally divided into liberal arts and science tracks and then trained in different environments of knowledge transmission. Two years of unrelenting, intensive training allows liberal arts students to develop similar and persistent thinking and cognitive patterns. After the college entrance examination, most liberal arts students select SA or SP majors, and a year or more of university study does not exert strong effects on their learning styles. This explains why many researchers have traditionally investigated the SA and SP disciplines together, simply calling them “social science” or “soft disciplines” in contrast to “natural science” or “hard disciplines”. Numerous contributions have pointed out similarities in the learning styles of students from “soft disciplines” [37, 112–114]. In contrast, students majoring in natural science exhibit considerable differences in learning styles, suggesting that the talent cultivation model of the “hard disciplines” in universities is, to some extent, more influential on students’ learning styles than that of the “soft disciplines”.
Further compelling interpretations of this phenomenon await the accumulation of sufficient knowledge among scholars in this area.

In general, these results are consistent with those reported in many previous studies based on the Felder-Silverman model. These studies tested the precision of different computational approaches in identifying and differentiating the learning styles of students. For example, by means of a Bayesian network (BN), an investigation obtained an overall precision of 58.00% in the active/reflective dimension, 77.00% in the sensing/intuitive dimension and 63.00% in the sequential/global dimension (the visual/verbal dimension was not considered) [81]. With the help of the keyword attributes of learning objects selected by students, a precision of 70.00% in the active/reflective dimension, 73.30% in the sensing/intuitive dimension, 73.30% in the sequential/global dimension and 53.30% in the visual/verbal dimension was obtained [115].

These results add to a growing body of evidence expanding the scope of the application of the SVM algorithm. Currently, the applications of the SVM algorithm still reside largely in engineering or other hard disciplines despite some tentative trials in the humanities and social sciences [26]. In addition, as cross-disciplines increase in current higher education, it is essential to match the tailored learning styles of students and researchers studying interdisciplinary subjects, such as the HA, HP, SA and SP disciplines. Therefore, the current study is the first to incorporate such a machine learning algorithm into interdisciplinary blended learning and has broader relevance to further learning style-related theoretical or empirical investigations.

Verification of the features included in the optimal feature sets

The features included in the optimal feature sets provided mixed findings compared with previous studies, although some of the 20 identified features are consistent with earlier work. A close examination of the individual questions included in the feature sets offers useful insights into the underlying psychological processes. For example, in six of the eight models constructed, Question 1 (“I understand something better after I try it out/think it through”) appears as the top-ranked feature, highlighting its importance. This question mainly reflects the dichotomy between experimentation and introspection. A possible implication is that students across disciplines differ dramatically in how they process tasks, with the possible exception of the SA-SP disciplines. This difference has been supported by many previous studies. For example, technical students were found to be more tactile than those in the social sciences [116], and engineering students (HA in this study) were more inclined toward concrete and pragmatic learning styles [117]. Similarly, one study found that engineering students prefer “a logical learning style over visual, verbal, aural, physical or solitary learning styles” [37] (p. 122), while social sciences students (SA in this study) prefer a social learning style to a logical one. Although these studies differ somewhat in focus, they provide an approximate picture of the potential differences among students in their respective disciplines. In general, students in the applied disciplines show a tendency to experiment with tasks, while those in the pure disciplines are more inclined toward introspective practices, such as a preoccupation with theories. For instance, in Biglan’s taxonomy of academic disciplines, students in HP disciplines prefer abstract rules and theories, while students in SA disciplines favor application [67].
Additionally, Question 10 (“I find it easier to learn facts/to learn concepts”) is similar to Question 1, as both questions indicate a certain level of abstraction or concreteness. The difference between facts and concepts is closely related to the classification difference between declarative knowledge and procedural knowledge in cognitive psychology [35, 38]. Declarative knowledge is static and similar to facts, while procedural knowledge is more dynamic and primarily concerned with operational steps. Students’ preferences for facts or concepts closely correspond to this psychological distinction.

In addition, Questions 2, 4, 7, and 9 also occur frequently among the 20 features selected for the different models. Question 2 (“I would rather be considered realistic/innovative”) concerns taking chances. It reflects a difference in perspective, i.e., whether the focus should be on obtaining pragmatic results or on seeking original solutions. This difference cannot easily be connected to the disciplinary factor alone; numerous factors, e.g., genetic, social and psychological ones, may play a strong role in defining this trait, with the academic discipline serving only to strengthen or diminish the difference. For instance, decades of research in psychology have shown that males are more inclined toward risk taking than females [118–121]. A careful examination of the current academic landscape reveals a gender difference: more females than males choose soft disciplines, and more males than females choose hard disciplines. This situation builds a disciplinary wall that sorts students into specific categories, potentially strengthening the disciplinary effect. Similarly, Question 9 (“In a study group working on difficult material, I am more likely to jump in and contribute ideas/sit back and listen”) emphasizes the distinction between active participation and introspective thinking, reflecting an underlying psychological propensity in blended learning. Within this context, the significance of this question could also be explained by the psychological evaluation of “loss and gain”, as students’ different learning styles are associated with expected reward values and their internal motivational drives, which are determined by their personality traits [122]. When faced with the risk of “losing face”, whether students will express their ideas in front of a group depends largely on their risk and stress management capabilities and the presence of an appropriate motivation system.

The other two questions also convey similar messages regarding personality differences. Question 4 concerns how individuals perceive the world, while Question 7 concerns the preferred modality of information processing. Evidence of disciplinary differences in these respects has also been reported [35, 123–125]. The other questions, such as Questions 21, 27, and 39, reflect different aspects of potential personality differences and are largely consistent with the preceding discussion. This may also be a vivid reflection of the multifaceted effects of blended learning, which may differ in their consonance with the features of each discipline. First, teachers from different domains use technology in different ways, and students from different disciplines may view blended learning differently. For instance, the characteristics of soft-applied fields entail specialized customization in blended courses, further broadening the gulf between subjects [126]. Second, although blended learning is generally recognized as a stimulus to students’ innovation [127], some students who are used to an instructivist approach in which the educator acts as a ‘sage on the stage’ will find it difficult to adapt to a social constructivist approach in which the educator serves as a ‘guide on the side’ [128]. This difficulty might not only negatively affect students’ academic performance but also latently magnify the effects of different academic disciplines.

Interpretation of the collective impact of optimal feature sets

In each two-discipline SVM model, the 20 key features selected (collectively known as an optimal feature set) exert a concerted effect on students’ learning styles across disciplines (see Table 2). Examining the collective impact of each 20-feature set across the eight discipline models is especially pertinent given the emerging cross-disciplines in academia. Current higher education often involves courses with crossed disciplines and students with diverse disciplinary backgrounds. In addition, with the rise of technology-enhanced learning, the design of personalized tutoring systems requires more nuanced information on student attributes to provide greater adaptability [59]. Identifying these optimal feature sets makes such information accessible. Therefore, understanding such interdisciplinary factors and designing tailor-made instruction are essential for promoting learning success [9]. For example, in an English language classroom in which the students are a blend of HP and SP disciplines, instructors might integrate a guiding framework at the beginning of the course and stepwise guidelines during the process so that the needs of both groups are met. Knowing that the visual style is dominant across disciplines, instructors might include more graphic presentations (e.g., Question 11) in language classrooms rather than continue to use slides or boards filled with words. Furthermore, to communicate and teach effectively, instructors may target these students’ combined learning styles. While some of these methods are already practiced, this study serves as a further reminder of the rationale underlying them and thus increases the confidence of both learners and teachers in these practices.
Therefore, the practical implications of this study mainly concern classroom teachers and educational researchers, who may draw some inspiration for interdisciplinary curriculum design and the tailored application of learning styles to the instructional process.

Conclusions

This study investigated learning style differences among students with diverse disciplinary backgrounds in a blended English language course based on the Felder-Silverman model. By introducing a novel machine learning algorithm, namely, SVM, for the data analysis, the following conclusions can be reached. First, the multiple performance indicators used in this study confirm that it is feasible to apply learning styles to differentiate various disciplines in students’ blended learning processes. These disciplinary differences impact how students engage in their blended learning activities and affect students’ ultimate blended learning success. Second, some questions in the ILS questionnaire carry more substantive information about students’ learning styles than other questions, and certain underlying psychological processes can be derived. These psychological processes reflect students’ discipline-specific epistemologies and represent the possible interaction between the disciplinary background and learning style. In addition, the introduction of SVM in this study can provide inspiration for future studies of a similar type along with the theoretical significance of the above findings.

Despite its notable findings, this study is subject to some limitations that could be addressed in further research. First, the current analysis examined learning styles without allowing for the effects of other personal or contextual factors. The educational productivity model proposed by Walberg underlines the significance of the collective influence of contextual factors on individuals’ learning [129]. For example, teachers from different backgrounds and academic disciplines are inclined to select various teaching methods and to create divergent learning environments [130], which should also be investigated thoroughly. The next step is therefore to take into account the effects of educational background, experience, personality and learning experience to gain a more comprehensive understanding of students’ learning processes in the blended setting.

In conclusion, the findings of this research validate previous findings and offer new perspectives on students’ learning styles in a blended learning environment, which provides future implications for educational researchers, policy makers and educational practitioners (i.e., teachers and students). For educational researchers, this study not only highlights the merits of using machine learning algorithms to explore students’ learning styles but also provides valuable information on the delicate interactions between blended learning, academic disciplines and learning styles. For policy makers, this analysis provides evidence for a more inclusive but personalized educational policy. For instance, in addition to learning styles, the linkage among students’ education in different phases should be considered. For educational practitioners, this study plays a positive role in promoting student-centered and tailor-made teaching. The findings of this study can help learners of different disciplines develop a more profound understanding of their blended learning tendencies and assist teachers in determining how to bring students’ learning styles into full play pedagogically, especially in interdisciplinary courses [131–134].

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments on this paper and Miss Ying Zhou for her suggestions during the revision on this paper.

References

  1. Sternberg R, Grigorenko E. Are cognitive styles still in style? American Psychologist. 1997;52: 700–712.
  2. Ehrman M, Oxford R. Adult language learning styles and strategies in an intensive training setting. The Modern Language Journal. 1990;74: 311–327.
  3. Moser S, Zumbach J. Exploring the development and impact of learning styles: An empirical investigation based on explicit and implicit measures. Computers & Education. 2018;125: 146–157.
  4. Van Waes L, van Weijen D, Leijten M. Learning to write in an online writing center: The effect of learning styles on the writing process. Computers & Education. 2014;73: 60–71.
  5. Chen CC, Chen CY. Exploring the effect of learning styles on learning achievement in a u-Museum. Interactive Learning Environments. 2018;26(5): 664–681.
  6. Komarraju M, Karau SJ, Schmeck RR, Avdic A. The big five personality traits, learning styles, and academic achievement. Personality and Individual Differences. 2011;51(4): 472–477.
  7. Buckley P, Doyle E. Individualising gamification: An investigation of the impact of learning styles and personality traits on the efficacy of gamification using a prediction market. Computers & Education. 2017;106: 43–55.
  8. Popescu E. Adaptation provisioning with respect to learning styles in a Web-based educational system: An experimental study. Journal of Computer Assisted Learning. 2010;26(4): 243–257.
  9. Kuo YC, Chu HC, Huang CH. A learning style-based grouping collaborative learning approach to improve EFL students’ performance in English courses. Educational Technology & Society. 2015;18(2): 284–298. Retrieved from: https://pdfs.semanticscholar.org/28b6/068e6971cc2207f5d07726dbfe1c6c793276.pdf
  10. Felder RM, Spurlin J. Applications, reliability and validity of the Index of Learning Styles. International Journal of Engineering Education. 2005;21(1): 103–112. Retrieved from: https://www.ijee.ie/articles/Vol21-1/IJEE1553.pdf
  11. Ku DT, Chang CS. The effect of academic discipline and gender difference on Taiwanese college students’ learning styles and strategies in web-based learning environments. Turkish Online Journal of Educational Technology. 2011;10(3): 265–272. Retrieved from: http://www.tojet.net/articles/v10i3/10330.pdf
  12. Smith GG, Heindel AI, Torres-Ayala AT. E-learning commodity or community: Disciplinary differences between online courses. Internet & Higher Education. 2008;11(3): 152–159.
  13. Chang-Tik C. Impact of learning styles on the community of inquiry presences in multi-disciplinary blended learning environments. Interactive Learning Environments. 2018;26(6): 827–838.
  14. Reid JM. The learning style preferences of ESL students. TESOL Quarterly. 1987;21(1): 87–110.
  15. Dunn R, Dunn K, Perrin J. Teaching young children through their individual learning styles. Boston, MA: Allyn & Bacon, Inc; 1994.
  16. Curry L. An organisation of learning styles theory and construct. Cognitive style. 1983;28: 1–28. Retrieved from: https://files.eric.ed.gov/fulltext/ED235185.pdf
  17. Curry L. Integrating concepts of cognitive or learning style: A review with attention to psychometric standards. Ottawa, ON: Canadian College of Health Service Executives; 1987.
  18. Kolb DA. Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice-Hall; 1984.
  19. Keefe JW. Learning Style: Cognitive and Thinking Skills. Reston: National Association of Secondary School Principals; 1991.
  20. Felder RM, Silverman LK. Learning styles and teaching styles in engineering education. Engineering Education. 1988;78: 674–681.
  21. Anitha D, Deisy C, Lakshmi SB, Meenakshi MK. Proposing a Classification Methodology to Reduce Learning Style Combinations for Better Teaching and Learning. 2014 IEEE Sixth International Conference on Technology for Education, Amritapuri, India, 2014; 208–211. https://doi.org/10.1109/T4E.2014.5
  22. 22. Graf S, Viola SR, Leo T, Kinshuk. In-depth analysis of the felder-silverman learning style dimensions. Journal of Research on Technology in Education. 2007; 40(1): 79–93.
  23. 23. Kuljis J, Liu F. A comparison of learning style theories on the suitability for e-learning. Web Technologies, Applications, and Services, 2005; 191–197. Retrieved from: https://www.mendeley.com/catalogue/da014340-bfb5-32d1-b144-73545a86d440/
  24. 24. Abdullah MA. Learning style classification based on student’s behavior in moodle learning management system. Transactions on Machine Learning and Artificial Intelligence, 2015; 3(1), 28. Retrieved from: http://www.scholarpublishing.org/index.php/TMLAI/article/view/868
  25. 25. Azzi I, Jeghal A, Radouane A, Yahyaouy A, Tairi H. A robust classification to predict learning styles in adaptive e-learning systems. Education & Information Technologies, 2020; 25: 437–448.
  26. 26. Garcia P, Amandi A, Schiaffino S, Campo M. Evaluating Bayesian networks’ precision for detecting students’ learning styles. Computers & Education, 2007; 49(3): 794–808.
  27. 27. Hmedna B, El Mezouary A, Baz O. A predictive model for the identification of learning styles in MOOC environments. Cluster Computing, 2020; 23:1303–1328.
  28. 28. Kolekar SV, Pai RM, Pai MMM. Prediction of Learner’s profile based on learning styles in adaptive E-learning system. International Journal of Emerging Technologies in Learning, 2017; 12(6): 31–51.
  29. 29. Vermunt JD. Relations between student learning patterns and personal and contextual factors and academic performance. Higher Education, 2005; 49: 205–234.
  30. 30. Richardson JTE. Researching student learning. Buckingham: SRHE and Open University Press; 2000.
  31. 31. Furnham A, Jackson C, Miller T. Personality, learning style and work performance. Personality & Individual Differences. 1999;27(6): 1113–1122.
  32. 32. Richardson JTE, Morgan A, Woodley A. Approaches to studying in distance education. Higher Education. 1999;37: 23–55.
  33. 33. Marton F, Säljö R. Approaches to learning. Edinburgh: Scottish Academic Press; 1984.
  34. 34. Tawil NM, Zaharim A, Asshaari I, Nopiah ZM, Ismail NA, Osman H. A study on engineering undergraduate learning styles towards mathematics engineering. Procedia Social & Behavioral Sciences. 2012;60: 212–220.
  35. 35. Lee CK, Sidhu . Engineering students learning preferences in UNITEN: Comparative study and patterns of learning styles. Educational Technology & Society. 2015;18 (3): 266–281. Retrieved from: https://www.ds.unipi.gr/et&s/journals/18_3/21.pdf
  36. 36. Engels PT, de Gara C. Learning styles of medical students, general surgery residents, and general surgeons: implications for surgical education. BMC Medical Education. 2010;10: 51. pmid:20591159
  37. 37. Mccrow J, Yevchak A, Lewis P. A prospective cohort study examining the preferred learning styles of acute care registered nurses. Nurse Education in Practice. 2014;14(2): 170–175. pmid:24075793
  38. 38. Naserieh F, Anani Sarab MR. Perceptual learning style preferences among iranian graduate students. System. 2013;41(1): 122–133.
  39. 39. Oravcova J. Learning styles of university students in relation to educational methods, The New Educational Review. 2009;19(3): 72–82.
  40. 40. Hill F, Tomkinson B, Hiley A, Dobson H. Learning style preferences: An examination of differences amongst students with different disciplinary backgrounds. Innovations in Education and Teaching International. 2016;53(2): 122–134.
  41. 41. Karimi MN. Disciplinary variations in English domain-specific personal epistemology: Insights from disciplines differing along Biglan’s dimensions of academic domains classification. System. 2014;44: 89–100.
  42. 42. Slaats A, Lodewijks H, Van der Sanden J. Learning styles in secondary vocational education: disciplinary differences. Learning & Instruction, 1999;9(5): 475–492.
  43. 43. Lau K, Gardner D. Disciplinary variations in learning styles and preferences: Implications for the provision of academic English. System. 2018;80: 257–268.
  44. 44. Chen X, Hu J. ICT-related behavioral factors mediate the relationship between adolescents’ ICT interest and their ICT self-efficacy: Evidence from 30 countries. Computers & Education. 2020;159, Article 104004.
  45. 45. Xiao Y, Liu Y, Hu J. Regression analysis of ICT impact factors on early adolescents’ reading proficiency in five high-performing countries. Frontiers in Psychology. 2019;10: 1–14. pmid:30713512
  46. 46. Zacharis NZ. The effect of learning style on preference for web‐based courses and learning outcomes. British Journal of Educational Technology, 2011; 42: 790–800.
  47. 47. Chang CC, Yang FY. Exploring the cognitive loads of high-school students as they learn concepts in web-based environments. Computers & Education, 2010; 55(2): 673–680.
  48. 48. Allioui YE. Advanced prediction of learner’s profile based on Felder Silverman learning styles using web usage mining approach and fuzzy c-means algorithm. International Journal of Computer Aided Engineering & Technology, 2019;11(4–5):495–512.
  49. 49. Cheng G, Chau J. Exploring the relationships between learning styles, online participation, learning achievement and course satisfaction: An empirical study of a blended learning course. British Journal of Educational Technology. 2016;47(2): 257–278.
  50. 50. Khamparia A, Pandey B. SVM and PCA based learning feature classification approaches for e-learning system. International Journal of Web Based Learning & Teaching Technologies, 2018; 13(2): 32–45.
  51. 51. Bolliger DU, Supanakorn S. Learning styles and student perceptions of the use of interactive online tutorials. British Journal of Educational Technology, 2011; 42: 470–481.
  52. 52. Thorne K. Blended learning: How to integrate online and traditional learning, London: Kogan Page; 2003.
  53. 53. Al-Azawei A, Parslow P, Lundqvist K. Investigating the effect of learning styles in a blended e-learning system: an extension of the technology acceptance model (tam). Australasian Journal of Educational Technology, 2017; 33(2):1–23.
  54. 54. Akkoyunlu B, Soylu MY. A study of student’s perceptions in a blended learning environment based on different learning styles. Educational Technology & Society. 2008;11 (1): 183–193.
  55. 55. Tekane R, Pilcher LA, Potgieter M. Blended learning in a second year organic chemistry class: Students’ perceptions and preferences of the learning support. Chemistry Education Research and Practice, 2020; 21:24–36.
  56. 56. Vasileva-Stojanovska T, Malinovski T, Vasileva M, Jovevski D, Trajkovik V. Impact of satisfaction, personality and learning style on educational outcomes in a blended learning environment. Learning and Individual Differences. 2015;38: 127–135
  57. 57. Yusoff S, Yusoff R, Noh NHM. Blended learning approach for less proficient students. Sage Open. 2017;7(3):1–8.
  58. 58. Labib AE, Canós JH, Penadés MC. On the way to learning style models integration: A learner’s characteristics ontology. Computers in Human Behavior, 2017;73: 433–445.
  59. 59. Graf S. Adaptivity in learning management systems focusing on learning styles. Vienna, Austria: Vienna University of Technology;2007.
  60. 60. Tóth P. Learning strategies and styles in vocational education. Acta Polytechnica Hungarica. 2012;9(3): 195–216.
  61. 61. Bernard J, Chang TW, Popescu E, Graf S. Learning style identifier: Improving the precision of learning style identification through computational intelligence algorithms. Expert Systems with Applications. 2017;75: 94–108.
  62. 62. Wei YE, Yang QX, Chen JP, Hu J. The exploration of a machine learning approach for the assessment of learning styles changes. Mechatronic Systems and Control. 2018;46(3): 121–126.
  63. 63. Wu QD, Yan B, Zhang C, Wang L, Ning GB, Yu B. Displacement prediction of tunnel surrounding rock: A comparison of support vector machine and artificial neural network. Mathematical Problems in Engineering. 2014: 351496.
  64. 64. Kuo YC, Chu HC, Huang CH. A learning style-based grouping collaborative learning approach to improve EFL students’ performance in English courses. Journal of Educational Technology & Society. 2015;18(2): 284–298. Retrieved from http://web.a.ebscohost.com/ehost/pdfviewer/pdfviewer?vid=1&sid=ec95f2a8-8173-4427-9126-5f21c2f73a77%40sdc-v-sessmgr01
  65. 65. Huang J, Hu X, Yang F. Support vector machine with genetic algorithm for machinery fault diagnosis of high voltage circuit breaker. Measurement. 2011;44(6): 1018–1027.
  66. 66. Li X, Zheng A, Zhang X, Li C, Zhang L. Rolling element bearing fault detection using support vector machine with improved ant colony optimization. Measurement. 2013;46(8): 2726–2734.
  67. 67. Holzinger A. Introduction to machine learning & knowledge extraction (MAKE). Machine Learning and Knowledge Extraction. 2017;1(1): 1–20.
  68. 68. Xiao Y, Hu J. Assessment of optimal pedagogical factors for Canadian ESL learner’s reading literacy through artificial intelligence algorithms. International Journal of English Linguistics. 2019;9(4): 1–14.
  69. 69. Huang F, Hoi CKW, Teo T. The influence of learning style on English learning achievement among undergraduates in Mainland China. Journal of Psycholinguistic Research. 2018;47(5):1069–1084. pmid:29582221
  70. 70. Biglan A. The characteristics of subject matter in different academic areas. Journal of Applied Psychology. 1973;57: 195–203.
  71. 71. Simpson A. The surprising persistence of Biglan’s classification scheme. Studies in Higher Education. 2017;42(8): 1520–1531.
  72. 72. Del Favero M. Disciplinary variation in preparation for the academic dean role. Higher Education Research & Development. 2006;25(3): 277–292.
  73. 73. Becher T. The significance of disciplinary differences. Studies in Higher Education. 1994;19(2): 151–161.
  74. 74. Wintergerst AC, DeCapua A, Ann Verna M. Conceptualizing learning style modalities for ESL/EFL students. System. 2003;31(1): 85–106.
  75. 75. Akbulut Y, Cardak CS. Adaptive educational hypermedia accommodating learning styles: A content analysis of publications from 2000 to 2011. Computers & Education. 2012;58(2): 835–842.
  76. 76. Ültanir E, Ültanir YG, Temel G. The examination of university students’ learning styles by means of Felder-Silverman Index. Egitim ve Bilim. 2012;37: 29–42. Retrieved from: http://egitimvebilim.ted.org.tr/index.php/EB/index.php/EB/article/download/480/335
  77. 77. Heidrich L, Barbosa JLV, Cambruzzi W, Rigo SJ, Martins MG, dos Santos RBS. Diagnosis of learner dropout based on learning styles for online distance learning. Telematics and Informatics. 2018;35(6): 1593–1606.
  78. 78. Anitha D, Deisy C. Proposing a novel approach for classification and sequencing of learning objects in E-learning systems based on learning style. Journal of Intelligent & Fuzzy Systems. 2015; 29(2): 539–552.
  79. 79. Crockett K, Latham A, Mclean D, Bandar Z, O’Shea J. On predicting learning styles in conversational intelligent tutoring systems using fuzzy classification trees. IEEE International Conference on Fuzzy Systems. 2011; 2481–2488. https://doi.org/10.1109/FUZZY.2011.6007514
  80. 80. Latham A, Crockett K, McLean D, Edmonds B. A conversational intelligent tutoring system to automatically predict learning styles. Computers & Education. 2012; 59(1): 95–109.
  81. 81. Jing YP, Li B, Chen N, Li XF, Hu J, Zhu F. The discrimination of learning styles by Bayes-based statistics: An extended study on ILS system. Control and Intelligent Systems. 2015;43(2): 68–75.
  82. 82. Coffield, Ecclestone K, Moseley, Hall E. Learning styles and pedagogy in post 16 education: A critical and systematic review. London, UK: Learning and Skills Research Centre;2004.
  83. 83. Wei Y, Hu J. A cross-sectional evaluation of EFL students’ critical thinking dispositions in digital learning. Advances in Social Science, Education and Humanities Research (ASSEHR), 2018;195: 27–30.
  84. 84. Lee C, Yeung AS, Ip T. Use of computer technology for English language learning: Do learning styles, gender, and age matter? Computer Assisted Language Learning. 2016;29(5): 1035–1051.
  85. 85. Cui XJ, Yang QX, Li B, Tang J, Zhang XY, Li S, et al. Assessing the effectiveness of direct data merging strategy in long-term and large-scale pharmacometabonomics. Frontiers in Pharmacology. 2019;10. pmid:30733675
  86. 86. Acuna E, Rodriguez C. The Treatment of Missing Values and its Effect on Classifier Accuracy. In Banks D, House L, McMorris FR, Arabie P, Gaul W, editors. Classification, Clustering, and Data Mining Applications. Springer Berlin Heidelberg;2004. p. 639–647.
  87. 87. Dosenbach NUF, Nardos B, Cohen AL, Fair DA, Power JD, Church JA, et al. Prediction of individual brain maturity using fMRI. Science. 2010;329(5997): 1358–1361. pmid:20829489
  88. 88. Boser BE, Guyon IM, Vapnik VN. A training algorithm for optimal margin classifiers. A training algorithm for optimal margin classifiers. Proceedings of The Fifth Annual Workshop on Computational Learning Theory. New York: ACM Press, 1992: 144–152. https://doi.org/10.1145/130385.130401
  89. 89. Shao Y, Lunetta RS. Comparison of support vector machine, neural network, and CART algorithms for the land-cover classification using limited training data points. Journal of Photogrammetry and Remote Sensing. 2012;70: 78–87.
  90. 90. Tang J, Fu JB, Wang YX, Luo YC, Yang QX, Li B, et al. Simultaneous improvement in the precision, accuracy, and robustness of label-free proteome quantification by optimizing data manipulation chains. Molecular & Cellular Proteomics. 2019;18(8): 1683–1699. pmid:31097671
  91. 91. Shafri HZM, Ramle FSH.A comparison of support vector machine and decision tree classifications using satellite data of Langkawi Island. Information Technology Journal. 2009;8:373–395.
  92. 92. Chang CC, Lin CJ. LIBSVM: A library for support vector machines. Acm Transactions on Intelligent Systems and Technology. 2011;2(3).
  93. 93. Fan RE, Chen PH, Lin CJ. Working set selection using second order information for training support vector machines. Journal of Machine Learning Research. 2005;6: 1889–1918. Retrieved from: http://www.jmlr.org/papers/volume6/fan05a/fan05a.pdf
  94. 94. Wei XL, Li KC. Exploring the within- and between-class correlation distributions for tumor classification. Proceedings of the National Academy of Sciences of the United States of America. 2010;107(15): 6737–6742. pmid:20339085
  95. 95. Dong X, Hu J. An exploration of impact factors influencing students’ reading literacy in Singapore with machine learning approaches. International Journal of English Linguistics. 2019;9(5): 52–65.
  96. 96. Pembe FC, Gungor T. A tree-based learning approach for document structure analysis and its application to web search. Natural Language Engineering. 2015;21(4): 569–605.
  97. 97. Chen J, Zhang Y, Wei Y, Hu J. Discrimination of the contextual features of top performers in scientific literacy using a machine learning approach. Research in Science Education. [Preprint]. Retrieved from https://rdcu.be/btN56
  98. 98. Pham BT, Pradhan B, Bui DT, Prakash I, Dholakia MB. A comparative study of different machine learning methods for landslide susceptibility assessment: A case study of Uttarakhand area (India). Environmental Modelling & Software. 2016;84: 240–250.
  99. 99. Pradhan B. A comparative study on the predictive ability of the decision tree, support vector machine and neuro-fuzzy models in landslide susceptibility mapping using GIS. Computers & Geosciences. 2013;51: 350–365.
  100. 100. Maloof MA. Learning when data sets are imbalanced and when costs are unequal and unknown. Proceedings of the 20th International Conference on Machine Learning (ICML-2003). 2003. Retrieved from: http://www.site.uottawa.ca/~nat/Workshop2003/maloof-icml03-wids.pdf
  101. 101. Yan L, Rodier R, Mozer M, Wolniewicz R. Optimizing classifier performance via the Wilcoxon-Mann-Withney statistics. Proceedings of the 20th International Conference on Machine Learning (ICML-2003). 2003. Retrieved from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.324.1091&rep=rep1&type=pdf
  102. 102. Huang J, Ling CX. Using AUC and accuracy in evaluating learning algorithms. Proceedings of the IEEE Transactions on Knowledge and Data Engineering. 2005;17(3):299–310.
  103. 103. Li Q, Salman R, Test E, Strack R, Kecman V. Parallel multitask cross validation for support vector machine using GPU. Journal of Parallel and Distributed Computing. 2013;73(3): 293–302.
  104. 104. Meijer RJ, Goeman JJ. Efficient approximate k-fold and leave-one-out cross-validation for ridge regression. Biometrical Journal. 2013;55(2): 141–155. pmid:23348970
  105. 105. James G, Witten D, Hastie T, Tibshirani R. An introduction to statistical learning: with applications in R (Springer Texts in Statistics). 1st ed. New York, NY: Springer Verlag;2013.
  106. 106. Rodriguez JD, Perez A, Lozano JA. Sensitivity analysis of k-Fold cross validation in prediction error estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2010;32(3): 569–575. pmid:20075479
  107. 107. Guyon I, Weston J, Barnhill S, Vapnik V. Gene selection for cancer classification using support vector machines. Machine Learning. 2002;46(1–3): 389–422.
  108. 108. Mou WJ, Liu ZQ, Luo Y, Zou M, Ren C, Zhang CY, et al. Development and cross-validation of prognostic models to assess the treatment effect of cisplatin/pemetrexed chemotherapy in lung adenocarcinoma patients. Medical Oncology. 2014;31(9): 1–9. pmid:25119500
  109. 109. Saeys Y, Inza I, Larranaga P. A review of feature selection techniques in bioinformatics. Bioinformatics. 2007;23(19): 2507–2517. pmid:17720704
  110. 110. Krishnamoorthy D, Lokesh D. Process of building a dataset and classification of vark learning styles with machine learning and predictive analytics models. Journal of Contemporary Issues in Business and Government, 2020; 26(2): 903–910.
  111. 111. Garcia S, Fernandez A, Luengo J, Herrera F. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Information Sciences. 2010;180(10): 2044–2064.
  112. 112. Gorostiaga A, Rojo-Álvarez JL. On the use of conventional and statistical-learning techniques for the analysis of PISA results in Spain. Neurocomputing. 2016;171: 625–637.
  113. 113. Smith SN, Miller RJ. Learning approaches: examination type, discipline of study, and gender. Educational Psychology. 2005;25(1): 43–53.
  114. 114. North S. Different values, different skills? A comparison of essay writing by students from arts and science backgrounds. Studies in Higher Education. 2005;30(5): 517–533. pmid:19906328
  115. 115. Parpala A, Lindblom-Ylänne Sari, Komulainen E , Litmanen T , Hirsto L. Students’ approaches to learning and their experiences of the teaching-learning environment in different disciplines. British Journal of Educational Psychology. 2010;80(2): 269–282. pmid:19906328
  116. 116. Ozpolat E, Akar GB. Automatic detection of learning styles for an e-learning system. Computers & Education. 2009;53(2): 355–367.
  117. 117. Jersakova R, Allen RJ, Booth J, Souchay C, O’Connor AR. Understanding metacognitive confidence: Insights from judgment-of-learning justifications. Journal of Memory and Language. 2017;97: 187–207.
  118. 118. Anderson JR. Language, memory, and thought. Mahwah, NJ: Lawrence Erlbaum Associates;1976.
  119. 119. Anderson JR. The architecture of cognition. Cambridge, Massachusetts: Harvard University Press;1983.
  120. 120. Charness G, Gneezy U. Strong evidence for gender differences in risk taking. Journal of Economic Behavior & Organization. 2012;83(1): 50–58.
  121. 121. Chen XL, Hu J. Evolution of U.S. presidential discourse over 230 years: A psycholinguistic perspective. International Journal of English Linguistics. 2019;9(4): 28–41.
  122. 122. Aberg KC, Doell KC, Schwartz S. Linking individual learning styles to approach-avoidance motivational traits and computational aspects of eeinforcement learning. PLoS ONE. 2017;12(2): e0172379. pmid:28192524
  123. 123. Bem SL. The measurement of psychological androgyny. Journal of Consulting and Clinical Psychology. 1974;42(2): 155–162. pmid:4823550
  124. 124. Prentice DA, Carranza E. What women and men should be, should not be, are allowed to be, and don’t have to be: The contents of prescriptive gender stereotypes. Psychology of Women Quarterly. 2002;26: 269–281. Retrieved from: https://psych.princeton.edu/file/214/download?token=yuumEVlO
  125. 125. Baragash RS, Al-Samarraie H. Blended learning: Investigating the influence of engagement in multiple learning delivery modes on students’ performance. Telematics and Informatics. 2018;35(7): 2082–2098.
  126. 126. Pektas ST, Gürel MÖ. Blended learning in design education: An analysis of students’ experiences within the disciplinary differences framework, Australasian Journal of Educational Technology. 2014;30(1): 31–44.
  127. 127. Maor D. Teacher’s and students’ perspectives on on-line learning in a social constructivist learning environment. Technology, Pedagogy and Education. 2003;12(2):201–18.
  128. 128. Acosta ML, Sisley A, Ross J, Brailsford I, Bhargava A, Jacobs R, et al. Student acceptance of e-learning methods in the laboratory class in optometry. PLoS ONE. 2018;13(12): e0209004. pmid:30543719
  129. 129. Psaltou-Joycey A, Kantaridou Z. Major, minor, and negative learning style preferences of university students. System. 2011;39(1): 103–112.
  130. 130. Nam C, Oxford RL. Portrait of a future teacher: Case study of learning styles, strategies, and language disabilities. System. 1998;26(1): 51–63.
  131. 131. Walberg HJ, Pascarella E, Hacrtel GD, Junker LK, Boulanger FB. Probing a model of educational productivity in high school science with National Assessment samples. Journal of Educational Psychology.1982;74: 295–307.
  132. 132. Hu J, Chen KZ, Liu DF. Chinese university faculty members’ visiting experience and professional growth in American universities. Social Behavior and Personality: An International Journal, 2020;48(5): 1–13.
  133. 133. Chen J, Zhang Y, Hu J. Synergistic effects of instruction and affect factors on high- and low-ability disparities in elementary students’ reading literacy. Reading and Writing: An Interdisciplinary Journal. 2021;34(1): 199–230.
  134. 134. Yang X, Zhou X, Hu J. Students’ preferences for seating arrangements and their engagement in cooperative learning activities in college English blended learning classrooms in higher education. Higher Education Research & Development. 2021;40(3): 487–501.