Comparison between parameter-efficient techniques and full fine-tuning: A case study on multilingual news article classification

Adapters and Low-Rank Adaptation (LoRA) are parameter-efficient fine-tuning techniques designed to make the training of language models more efficient. Previous results demonstrated that these methods can even improve performance on some classification tasks. This paper complements existing research by investigating how these techniques influence classification performance and computation costs compared to full fine-tuning. We focus specifically on multilingual text classification tasks (genre, framing, and persuasion techniques detection; with different input lengths, number of predicted classes and classification difficulty), some of which have limited training data. In addition, we conduct in-depth analyses of their efficacy across different training scenarios (training on the original multilingual data; on the translations into English; and on a subset of English-only data) and different languages. Our findings provide valuable insights into the applicability of parameter-efficient fine-tuning techniques, particularly for multilabel classification and non-parallel multilingual tasks which are aimed at analysing input texts of varying length.


Introduction
The development of language models has led to a significant increase in the number of trainable parameters needed to fine-tune such models, with state-of-the-art models comprising millions or even billions of parameters [1,2]. This poses a serious constraint on the process of fine-tuning such models, which often relies on significant computational resources. Many recent research efforts are therefore focused on the development of more efficient training techniques [3][4][5]. Decreasing computational costs makes (large) language models more accessible to researchers and practitioners with limited computational resources, and reduces the carbon footprint of their training.
In this study, we investigate two out of five parameter-efficient fine-tuning techniques (PEFTs) that have been evaluated on large ranges of data (>20B) in prior research [6]. Adapter-based fine-tuning represents a family of efficiency techniques that work by freezing a pre-trained language model and adding a small number of trainable parameters in the layers of the language model [7][8][9]. This significantly reduces the training time at the cost of a small or no performance penalty. Another method of reducing the number of trainable parameters is based on performing Low-Rank Adaptation (LoRA) [10]. The main idea behind the LoRA approach is to freeze the weights of pre-trained language models and insert low-rank decomposition matrices into the transformer layers.
The motivation behind this paper is the lack of prior research on the advantages of these two parameter-efficient fine-tuning techniques specifically on complex NLP classification tasks. In particular, we consider NLP tasks to be complex when they combine both limited amounts of multilingual data and a high number of predicted classes or multiple labels.
Prior studies on different non-complex tasks compared adapter-based models to full fine-tuning (FFT) in multilingual scenarios only [11][12][13]. Prior comparisons between FFT and LoRA suggest that, in addition to being parameter-efficient, LoRA can, for certain models, outperform FFT [10]. The tasks considered for these comparisons, however, are not the complex multilingual and multilabel tasks that we focus on here, as they only address binary sentence classification for grammaticality detection and sentence pair classification for inference, textual entailment, question answering and paraphrase detection. All of the aforementioned tasks are performed on large, monolingual training sets with a small number of labels.
This paper aims to fill this gap through a systematic, comparative investigation of how an adapter-based method and LoRA techniques perform on multilingual multilabel classification tasks, both in terms of classification performance and computation costs. In particular, we study the behaviour of these PEFTs on three multilingual multilabel news article classification tasks introduced as three separate sub-tasks of the recent SemEval 2023 Shared Task 3 [14]: news genre, framing and persuasion technique detection.
Further motivation for this research came from the success of our three original best-performing approaches in each of these three sub-tasks [15,16]. Our solution for sub-task 1 was based on an ensemble of FFT and adapter-based models, as well as language-specific checkpoint selection. Our best sub-task 2 approach was based on mono- and multilingual ensembles, one of which combined FFT and adapter methods with task-adaptive pre-training [17]. Finally, our models for sub-task 3 included language-specific classification threshold selection and the incorporation of unlabelled data into the training corpus. Overall, we found that adapters improved performance in certain monolingual scenarios in sub-task 1 and in multilingual ones in sub-task 2.
The novel contribution of this paper is the consistent analysis of the effectiveness of the adapter method across all three complex classification sub-tasks, coupled with a new comparison against a second promising PEFT (LoRA).
The main contributions of this paper are the following:
• We provide an evaluation of PEFTs on complex text classification tasks with different properties. This series of experiments demonstrates how they compare against fully fine-tuned models and against each other on classification tasks with varying numbers of labels, input lengths and overall classification difficulty.
• We improve on our original SemEval 2023 results. For sub-task 3, we achieved better performance on eight out of the nine languages compared to the top results in the official leaderboard. For sub-tasks 1 and 2, the results reported here are mostly comparable to our original SemEval 2023 submissions, despite the fact that in this study we used significantly less complex models (the original solutions utilised multiple sub-task-tailored steps and complex ensembles).

Related work
Parameter-efficient fine-tuning techniques As already mentioned above, PEFTs are computationally efficient due to limiting the number of trainable parameters. All such techniques freeze the pre-trained model, but differ in the location of the inserted trainable parameters. We focus specifically on Bottleneck adapters and on Low-Rank Adaptation (LoRA).
Bottleneck adapters [8] have a structure similar to autoencoders. The transformer hidden state h is first down-projected to a smaller dimensionality d_bottleneck with matrix W_down, passed through a non-linearity function f, and then up-projected to the original dimensionality with matrix W_up, followed by a residual connection. This is defined formally in Eq 1:

h ← h + f(h W_down) W_up (1)
The location of the adapter layer depends on the adapter type. Houlsby adapters [8] place adapter layers after both the multi-head attention and feed-forward blocks, whereas Pfeiffer adapters [18] place the adapter layer only after the feed-forward block. Although adding the extra layer reduces the number of trainable parameters and thus increases the speed of fine-tuning, it also increases the overall number of parameters permanently, slowing down inference.
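Both adapter variants share the same bottleneck computation; only its placement in the transformer block differs. The forward pass of Eq 1 can be sketched in NumPy as follows (an illustrative sketch, not the AdapterHub implementation; the ReLU non-linearity and the zero-initialised up-projection, which makes the adapter start as an identity mapping, are assumptions in line with common practice):

```python
import numpy as np

def bottleneck_adapter(h, W_down, W_up):
    """Bottleneck adapter forward pass (Eq 1):
    down-project, apply non-linearity f (ReLU here), up-project,
    then add the residual connection."""
    z = np.maximum(h @ W_down, 0.0)  # f(h W_down)
    return h + z @ W_up              # residual connection

# Toy dimensions: 4 tokens, hidden size 8, bottleneck dimensionality 2
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))
W_down = rng.normal(size=(8, 2)) * 0.1
W_up = np.zeros((2, 8))  # zero init: adapter is an identity map at the start
out = bottleneck_adapter(h, W_down, W_up)
```

With the up-projection initialised to zero, the adapter initially passes the hidden state through unchanged, so fine-tuning starts from the behaviour of the frozen pre-trained model.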
LoRA [10] adds low-rank decomposition matrices to the query (Q) and value (V ) matrices of the self-attention sub-layers of the transformer.
Given a layer expressed as the matrix multiplication h ← W_0 x (where W_0 ∈ R^{d×k}), during the fine-tuning process the value of W_0 is modified by some ∆W. LoRA represents this delta as the low-rank decomposition ∆W = BA (where B ∈ R^{d×r}, A ∈ R^{r×k}, and the rank r ≪ min(d, k)). Here, W_0 is frozen, while A is initialised randomly, B is initialised to zero, and both are updated during fine-tuning. The decomposition is scaled by the hyperparameter α divided by the rank r, thus giving the new expression in Equation 2:

h ← W_0 x + (α/r) BA x (2)
Once the fine-tuning stage has been completed, the additional matrices can be removed by merging W_0, A and B into a single matrix W'_0, thus giving the same number of parameters as the original pre-trained model. This solves the problem of increased inference time. When performing hyperparameter search, the α hyperparameter can be fixed, since its effect is proportional to that of the learning rate [19].
Evaluating LoRA adaptation in other parts of the model (all attention layers, all feed-forward layers, all layers, and attention and feed-forward output layers) revealed that inserting LoRA adaptation in all layers results in the highest performance, and that in this configuration the hyperparameter r has no effect [19].
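The decomposition of Eq 2 and the post-training merge described above can be sketched as follows (a NumPy illustration under the paper's definitions; as in the original LoRA formulation, A is randomly initialised and B starts at zero, so fine-tuning begins from the frozen pre-trained weights):

```python
import numpy as np

def lora_forward(x, W0, A, B, alpha, r):
    """LoRA forward pass (Eq 2): frozen W0 plus scaled low-rank update BA."""
    return x @ W0.T + (alpha / r) * (x @ A.T @ B.T)

def lora_merge(W0, A, B, alpha, r):
    """Fold the low-rank update into a single matrix W'_0 for inference."""
    return W0 + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
d, k, r, alpha = 6, 4, 2, 8
W0 = rng.normal(size=(d, k))       # frozen pre-trained weight
A = rng.normal(size=(r, k)) * 0.01  # A: random initialisation
B = np.zeros((d, r))                # B: zero initialisation
x = rng.normal(size=(3, k))

merged = lora_merge(W0, A, B, alpha, r)
```

Because the merged matrix W'_0 has the same shape as W_0, inference with the merged weights is exactly as fast as with the original pre-trained model.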

Application of adapters and LoRA
Despite significantly reducing the number of trainable parameters, bottleneck adapters were previously found to have minimal negative impact on the performance of fine-tuned models for simple sentence classification tasks. In particular, the evaluation on the GLUE benchmark dataset [20] (a collection of sentence and sentence-pair classification tasks) showed the bottleneck adapter performance to be within 0.8% of the performance of FFT, whilst only training 3.6% of parameters [8]. Additionally, in the field of machine translation, Bapna and Firat [21] found that adapters produce equivalent or even better results compared to FFT. He et al. [11] investigated the cases in which adapters outperform FFT and found that the former method is beneficial in low-resource and cross-lingual tasks, since it mitigates forgetting effects by minimising the differences between the representations of the fine-tuned and the pre-trained model. The authors also found adapters to be less prone to overfitting and more stable over a wider range of learning rates, therefore widening the range of good hyperparameters. Additionally, they found that the behaviour of the bottleneck dimension varies depending on the task, with some tasks not affected by its fluctuation and others benefiting from a higher dimensionality.
Chalkidis et al. [12] found that bottleneck adapters outperformed full fine-tuning, and provided better zero-shot cross-lingual capability. Their findings are based on the MultiEURLEX dataset, which consists of 65,000 EU law texts in 23 languages, categorised at multiple levels of detail (between 21 and 567 categories). In some respects, this dataset is comparable to sub-task 3, as both datasets are multilabel and multilingual, and have a comparable number of labels at MultiEURLEX's lowest level. However, the style of EU law is naturally much more rigid and very different from that of news articles. Additionally, due to the specific data collection methodology used for MultiEURLEX, this dataset is not likely to contain irrelevant text.
More recent research by Xenouleas et al. [13] has questioned whether the findings of Chalkidis et al. would generalise to other datasets, since MultiEURLEX consists mostly of parallel texts (the same content in multiple languages). When the dataset is modified to include only non-parallel documents, they found that translation-based methods outperform multilingual models. We note that this is likely dependent on the domain and on whether the relevant properties are significantly affected by translation: legal documents are more likely to be properly represented in the target language than, for example, the language-specific linguistic properties signalling certain persuasion techniques.
To our knowledge, no comparison of LoRA to the FFT method in a similar multilingual scenario exists.
It should also be noted that three of the systems that participated in the original SemEval-2023 Task 3 [14] evaluation exercise used adapters. Teams HHU [22] and NAP [23] entered only sub-task 3, in which they used adapters, whereas SheffieldVeraAI [15] applied adapters to sub-tasks 1 and 2. Initial performance analysis in these sub-tasks showed the effect of adapters to be inconsistent across the different sub-tasks. Namely, adapters achieved higher average performance for monolingual models in sub-task 1, while hindering the performance of monolingual models in sub-task 2 but achieving better results there for multilingual models.
To gain a better understanding of the effectiveness of adapter methods across a range of complex classification tasks, this paper performs a more detailed validation. In order to ensure comparable results across the three SemEval 2023 Shared Task 3 sub-tasks, we keep the same models and settings across all three sub-tasks and use the original data split provided by the organisers for training. Moreover, this paper presents a new, comparable investigation of the properties of LoRA on the same multilingual multilabel classification problems.

Dataset
The dataset selected for our comparative experiments was created recently as part of SemEval-2023 Task 3: "Detecting the genre, the framing, and the persuasion techniques in online news in a multi-lingual setup" [14]. Prior to SemEval 2023, a number of other related, challenging multilingual misinformation and propaganda detection tasks were addressed in SemEval (https://semeval.github.io) shared tasks, including detection of hyperpartisan content [24], sarcasm [25], and a smaller set of persuasion techniques in textual [26] and multimodal [27] data. Shared Task 3 within the SemEval 2023 challenge extended this prior work on persuasion techniques by introducing new kinds of persuasion techniques, as well as addressing two other related sub-tasks, namely news genre categorisation and framing detection.
Sub-task 1: News Genre Categorisation. Given a news article, determine whether it is objective news reporting, an opinion piece, or satire.
Sub-task 2: Framing Detection. Given a news article, identify one or more of the fourteen framing dimensions used: Economic; Capacity and resources; Morality; Fairness and equality; Legality, constitutionality and jurisprudence; Policy prescription and evaluation; Crime and punishment; Security and defense; Health and safety; Quality of life; Cultural identity; Public opinion; Political; External regulation and reputation. The set of framing techniques used in this shared task was defined following a pre-existing taxonomy [28].
Sub-task 3: Persuasion Techniques Detection. Given a paragraph of a news article, identify zero or more of the 23 persuasion techniques used (see S1 Appendix for a detailed list of the techniques). The set of techniques represents an extension of the taxonomy used in previous SemEval datasets [27,29]. The task additionally provides 6 high-level categories that subsume similar persuasion techniques. Although the task is paragraph-level, each of the articles has at least one labelled paragraph.
These three sub-tasks use broadly overlapping data, with the characteristics and summary statistics for the datasets of each sub-task shown in Table 1. Dataset collection: The data is extracted from both mainstream and alternative media sources, collected through news aggregators (e.g. Google News, Europe Media Monitor) and fact-checking organisations (e.g. MediaBiasFactCheck, NewsGuard), respectively. All of the news articles were published between 2020 and mid-2022. The text of each article was extracted automatically from the HTML source of each web page using either the text-gathering tool Trafilatura [30] or a site-specific procedure. Notably, this process is error-prone, as it sometimes includes textual content which is not part of the news article itself, such as web polls, newsletter sign-up forms, and author information. For English, a pre-existing dataset was also utilised [29], but the organisers of the shared task did not make it sufficiently clear what other English data was included in the new dataset.
Dataset splits and exploratory analysis: Three sets of data are provided for each language and task: labelled training and development (except for unseen languages), and unlabelled testing.
The task organisers provided test data in nine languages: English (EN), French (FR), German (DE), Georgian (KA), Greek (EL), Italian (IT), Polish (PL), Russian (RU), and Spanish (ES).Three of the languages (Georgian, Greek and Spanish) are 'surprise' languages, meaning that no corresponding labelled training data exists in the dataset.Therefore, in order to make predictions for these languages, their test set must either be translated to a 'seen' language, or a multi-lingual approach capable of supporting zero-shot evaluation must be applied.For the remaining 6 languages, labelled training and validation data is included in the dataset.
It must be noted that the task organisers have not yet released the gold labels for the test set, in order to prevent researchers from overfitting their systems. This means that detailed error analysis can only be carried out on the six languages for which the development sets are available.
Tables 2, 3 and 4 show the detailed analysis of the training, development and test data used in sub-tasks 1, 2 and 3, respectively. The average length, calculated as the number of tokens, was estimated using the tokenizer for the RoBERTa-large model [31], since this is the model we use in our experiments. For the training and development sets in sub-task 3, the average length was calculated for the paragraphs that have at least one persuasion technique assigned. For the test sets in sub-task 3, the average length includes every paragraph, due to the lack of gold-standard labels for the test data. This is why the number of examples in the test sets for sub-task 3 is significantly higher than that for the training and development sets; however, not all of these examples are expected to contain at least one persuasion technique. Sub-tasks 1 and 2 use the same set of articles in the test set; the accumulative set of articles used in the development and training sets is also identical for these two sub-tasks, although their assignment to a particular set varies slightly. This is why, as we can see from Tables 2 and 3, the data statistics for these sub-tasks are quite similar. As can be seen, the distribution of the training examples across the languages is not even, with EN accounting for almost 4 times as many articles as DE, RU and PL. We can also observe that the average length of the articles is highly dependent on the language, with articles in the test set for Georgian (KA) being more than 4.5 times shorter than those in the test set for Polish (PL). This aspect is important for our experiments, since it suggests that the models are more likely to omit important information for certain languages than for others, due to the input-length limitation of transformer models. Another observation is that the training set is not always representative of the test set. For example, the articles in the test set for Russian (RU) are, on average, half the length of the articles in the training set for this language. The difference in input length between training and test data is less significant for sub-task 3, which uses paragraphs as input, and for which all inputs are within the limit of transformer models.
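The average-length statistics discussed above can be reproduced with a small helper of this kind (a sketch; `average_token_length` is a hypothetical helper name, and the whitespace tokenizer used in the example is an illustrative stand-in for the RoBERTa-large subword tokenizer used in the paper):

```python
from statistics import mean

def average_token_length(texts, tokenize):
    """Average length, in tokens, over a collection of texts,
    using the supplied tokenizer function."""
    return mean(len(tokenize(t)) for t in texts)

# In practice a subword tokenizer would be passed in, e.g.:
#   from transformers import AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("roberta-large")
#   average_token_length(articles, tok.tokenize)
# Here whitespace splitting stands in for illustration:
texts = ["a short article", "a slightly longer news article here"]
avg = average_token_length(texts, str.split)
```

Averaging over subword tokens rather than words matters here, since it is the subword length that is compared against the 512-token input limit of the model.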
Class Imbalance: In addition to posing complex multilingual and multiclass classification challenges, the data for these three sub-tasks is highly imbalanced, which adds further complexity. In particular, the class distribution for sub-task 1 is highly skewed, with 76% of the data falling into the opinion class and satire accounting for less than 6%. For sub-task 2, the distribution of classes is also uneven, but less skewed than for sub-task 1. The most common frame is Political, which appears in 49.4% of the training articles; the least common frame is Cultural Identity, which appears in just 10.8% of the articles. Finally, in sub-task 3, loaded language, doubt and name calling are the most common persuasion techniques, accounting for 22%, 15.6% and 12.8% of the training paragraphs, respectively. The remaining 20 classes account, on average, for 2.5% of the training set each, totalling 49.6% together. In particular, appeal to time, whataboutism and red herring are the least frequent persuasion techniques, representing 0.5%, 0.5% and 0.7% of the training paragraphs, respectively.

Classification methods
We experiment with XLM-RoBERTa Large [32], using the following training techniques:
• Full fine-tuning (FFT): All parameters of the model are updated during fine-tuning.
• Low-Rank Adaptation (LoRA): The model's parameters are frozen and LoRA matrices (key, query, value) are added to both the MLP and attention layers.
• Bottleneck Adapter (Adapter): The model's parameters are frozen and bottleneck adapters in the Pfeiffer configuration [18] are added.

Training scenarios
While our primary focus is on the multilingual fine-tuning scenario, we introduce two additional settings where models are trained on English-only data in order to investigate whether the effectiveness of each training method differs depending on the composition and the size of the training set. These three training scenarios are summarised below:
• Multilingual Joint (many-to-many): models are fine-tuned using all training data in the original 6 languages.
• English + Translations (one-to-many): models are fine-tuned on all the original English training data and English translations of the training data in the other 5 languages.It is important to mention that the test data in this scenario was kept in its original languages, meaning that the predictions on all the languages except for English were made in a zero-shot cross-lingual way.
• English Only (one-to-many): models are fine-tuned on only the original English data in the training set.Similarly to the 'English + Translations' scenario, the test set was not translated into English.
The choice of the three training scenarios above is motivated by the fact that we want to evaluate the effect of two different factors on each of the classification methods:
1. The 'English + Translations' training scenario enables the analysis of the effect of language diversity, since it provides the same amount of training data as the 'Multilingual Joint' scenario, but in English only. This eliminates the possibility that differences in performance across the three methods could be due to the size of the training data for each language. At the same time, machine translation, as a specific transfer paradigm for cross-lingual learning [33], may introduce some level of noise and thus break the required correspondence between the original and translated sample.
2. The 'English Only' training scenario enables the analysis of the effect of training data size on each method.This can be achieved by comparing performance on 'English Only' training data against performance on 'English + Translations' data, where the only difference between the two is in the number of training examples.This training scenario, however, is not directly comparable against the multilingual training scenario, since it does not eliminate the possibility that differences in the method's performance could be due to the different linguistic characteristics of the multilingual training data.
The next section describes the methodology behind each classification method and training scenario.

Hyperparameters:
We first perform a search for the best hyperparameters for each training scenario and classification method within each sub-task. The search is performed on the original development set as provided by the organisers of SemEval 2023 Task 3. The best configuration obtained for each method can be found in Table 5. One needs to bear in mind that our objective is to maximise model performance per training scenario for each sub-task, rather than to minimise computational costs. In other words, greater parameter efficiency could be possible, but at the cost of model performance. For sub-task 1, articles longer than 512 tokens are separated into sentences, which are then sampled sequentially from the beginning and the end of the article, preserving the original order, until the maximum of 512 tokens is reached. This truncation approach is motivated by our experiments on sub-task 1 data during the competition stage of SemEval 2023 Task 3 [15]; it yielded a significant improvement in the F1 macro score over the setting that simply truncates texts to the first 512 tokens. This improvement can potentially be explained by the fact that the instructions for the human annotators in sub-task 1 highlighted the importance of opinionated sentences, which tend to be found towards the end of articles.
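The head-and-tail truncation strategy can be sketched as follows (an illustrative reconstruction: `head_tail_truncate` is a hypothetical helper name, and the exact sampling policy, here alternating between the front and the back of the article, is an assumption not spelled out in the paper):

```python
def head_tail_truncate(sentences, lengths, budget=512):
    """Select sentences alternately from the beginning and the end of an
    article, preserving the original order, until the token budget is
    reached. `lengths` gives the token count of each sentence."""
    keep = set()
    lo, hi = 0, len(sentences) - 1
    used = 0
    take_front = True
    while lo <= hi:
        idx = lo if take_front else hi
        if used + lengths[idx] > budget:
            break  # next sentence would exceed the 512-token maximum
        keep.add(idx)
        used += lengths[idx]
        if take_front:
            lo += 1
        else:
            hi -= 1
        take_front = not take_front
    return [sentences[i] for i in sorted(keep)]  # restore original order
```

Returning the kept indices in sorted order ensures that, although sentences are drawn from both ends, the model still sees them in their original document order.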
We perform text preprocessing for sub-tasks 1 and 2 by applying the following steps for all languages:
• a full stop is added at the end of each title;
• duplicate sentences directly following each other are removed;
• the @ symbol is removed from any Twitter handles;
• hyperlinks to websites and images are removed.
English articles were further preprocessed as follows:
• text promoting sharing on social media platforms was removed from the bottom of the articles;
• sentences encouraging user participation in online polls, comments, or advertisements were removed;
• sentences stipulating the site's terms of use were removed;
• sentences indicating licensing, containing phrases such as 'reprinted with permission', 'posted with permission' and 'all rights reserved', were removed;
• sentences detailing author biographies were removed.
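A minimal sketch of the language-independent cleanup steps (the regular expressions and the simple sentence-splitting heuristic are illustrative assumptions, not the exact pipeline used in the paper):

```python
import re

def preprocess(title, body):
    """A sketch of the cleanup applied to all languages: add a full stop
    after the title, remove hyperlinks, strip @ from Twitter handles, and
    drop duplicate sentences that directly follow each other."""
    title = title.rstrip()
    if not title.endswith("."):
        title += "."
    text = title + " " + body
    text = re.sub(r"https?://\S+", "", text)  # remove hyperlinks
    text = re.sub(r"\s+", " ", text).strip()  # normalise whitespace
    text = re.sub(r"@(\w+)", r"\1", text)     # strip @ from handles
    # remove duplicate sentences directly following each other
    sentences = [s.strip() for s in text.split(". ") if s.strip()]
    deduped = [s for i, s in enumerate(sentences)
               if i == 0 or s != sentences[i - 1]]
    return ". ".join(deduped)
```

In practice sentence splitting would be done with a proper segmenter rather than splitting on ". ", but the sequence of steps mirrors the list above.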
For sub-task 3, preliminary experiments found no performance gains when text preprocessing was applied, so our experiments for this sub-task use the original text directly. Importantly, for the sub-task 3 experiments, we include sentences that do not have assigned labels in the training data by assigning them a vector of zeros to indicate that they do not belong to any class. This approach was shown to significantly improve classification performance on this sub-task in our initial experiments [15,16]. The size of the training set displayed in Table 1 includes these additional unlabelled sentences. The multilabel sub-tasks 2 and 3 use confidence thresholds of 50% and 30%, respectively, after applying a sigmoid activation function to the logits. The confidence threshold for sub-task 3 is purposefully lower and was selected according to our previous experiments [16], which revealed that its careful calibration can significantly influence the performance of the model.
Training scenarios: We experiment with the three training scenarios described previously. All models are trained on the original training split provided by the task organisers, using either data in all 6 seen languages; all EN data and the translations into English of the data in the other 5 languages; or only the EN part of the training data. Each model is then evaluated on the task organisers' test split (6 seen and 3 surprise languages), without translation.
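The sigmoid-and-threshold decision rule for the multilabel sub-tasks can be sketched as follows (a minimal illustration; `multilabel_predict` is a hypothetical helper name):

```python
import math

def multilabel_predict(logits, threshold=0.5):
    """Apply a sigmoid to each class logit, then keep the classes whose
    confidence meets the threshold. Sub-task 2 uses a 0.5 threshold;
    sub-task 3 uses the lower 0.3."""
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [int(p >= threshold) for p in probs]

# Example: one confident positive, one borderline class, one near-neutral
logits = [2.0, -0.5, 0.1]
```

Lowering the threshold from 0.5 to 0.3, as done for sub-task 3, turns borderline classes into positive predictions, which matters for its many rare persuasion-technique labels.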
The three unseen test set languages -Greek, Georgian, and Spanish -allow us to evaluate the zero-shot cross-lingual transfer learning capabilities of the classification methods trained in the first, fully multilingual setting. In the other two training scenarios ('English + Translations' and 'English Only'), the remaining 8 languages (FR, DE, IT, PL, RU, ES, EL and KA) provide insight into the models' performance in the cross-lingual zero-shot setting.

Evaluation metrics:
The performance of the different classification methods is then compared using two sets of criteria: (1) computational resource efficiency; and (2) classification performance. For the latter, both F1 micro and F1 macro are reported as performance metrics for all three sub-tasks. However, it must be noted that the SemEval 2023 Task 3 organisers used only F1 macro as the official scoring metric for sub-task 1, whereas sub-tasks 2 and 3 used only F1 micro. Therefore, where a more detailed language-specific analysis is carried out in this paper, only the respective official metric for each sub-task is provided.
Mean and standard deviation are computed over three different random seed initialisations.
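For reference, F1 micro and F1 macro over multilabel predictions can be computed as follows (a plain-Python sketch matching the standard definitions; in practice a library implementation such as scikit-learn's `f1_score` would be used):

```python
def f1_scores(y_true, y_pred):
    """F1 micro (computed from global TP/FP/FN counts) and F1 macro
    (unweighted mean of per-class F1) for multilabel 0/1 matrices
    of shape [examples][classes]."""
    n_classes = len(y_true[0])
    tp = [0] * n_classes
    fp = [0] * n_classes
    fn = [0] * n_classes
    for t_row, p_row in zip(y_true, y_pred):
        for c, (t, p) in enumerate(zip(t_row, p_row)):
            tp[c] += t * p
            fp[c] += (1 - t) * p
            fn[c] += t * (1 - p)

    def f1(tp_, fp_, fn_):
        denom = 2 * tp_ + fp_ + fn_
        return 2 * tp_ / denom if denom else 0.0

    macro = sum(f1(tp[c], fp[c], fn[c]) for c in range(n_classes)) / n_classes
    micro = f1(sum(tp), sum(fp), sum(fn))
    return micro, macro
```

The distinction matters for these imbalanced datasets: F1 micro is dominated by the frequent classes, while F1 macro weights every class equally, including rare ones such as appeal to time.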
Resource efficiency is measured through four metrics: (i) the peak amount of VRAM used during training; (ii) speedup relative to the fully fine-tuned method, defined as the number of training steps per second of the respective method (LoRA or Adapter), N_m/t_m, divided by the number of training steps per second of the fully fine-tuned method, N_FFT/t_FFT (Equation 3); (iii) the number of trainable parameters; and (iv) the number of non-trainable parameters.

speedup = (N_m / t_m) / (N_FFT / t_FFT) (3)
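The speedup metric of Equation 3 is simply a ratio of training throughputs, which can be computed as:

```python
def speedup(steps_method, time_method, steps_fft, time_fft):
    """Relative training speedup of a PEFT over full fine-tuning (Eq 3):
    (N_m / t_m) / (N_FFT / t_FFT), i.e. the ratio of training steps
    per second between the two methods."""
    return (steps_method / time_method) / (steps_fft / time_fft)
```

A value above 1 means the parameter-efficient method performs more training steps per second than FFT; the figures used here are illustrative, not the paper's measurements.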
Implementation details: All experiments were performed with the AdapterHub framework [34].In order to obtain the 'English + Translations' data, we translate all available training and development data into English using Google Cloud Translation API.

Results
The analysis of our results is structured around the following three main research questions:
RQ1: How do the classification performance and computational costs of each classification method differ for each sub-task?
RQ2: How do the training scenarios (determining the diversity of the languages in the training set and its size) affect the performance of each classification method?
RQ3: How do the classification methods compare with each other for each training scenario and language?

Comparison of the computational and performance properties of classification methods
To answer the first research question (RQ1), we select the best training scenario for each method and sub-task, and compare the performance of the FFT model against that of the LoRA and adapter methods for each sub-task. The results of this comparison are reported in Table 6. The best scores and performance metrics appear in bold. The main metric for a given sub-task is marked with an asterisk (*).
The results demonstrate the following: (1) FFT and adapters perform better in sub-tasks 1 and 2, while LoRA performs better in sub-task 3. We observe that for longer texts, such as the articles analysed in sub-tasks 1 and 2, FFT and adapter-based classification demonstrate better results on average than LoRA. At the same time, LoRA on average outperforms FFT and adapters in sub-task 3, which is trained on shorter texts.
(2) The 'Multilingual Joint' training scenario performs best, regardless of the sub-task and classification method. We observe a pattern of the 'Multilingual Joint' training scenario achieving the best results for all three sub-tasks as well as all three classification methods. This implies that, in general, training models on larger datasets with a variety of languages can be beneficial for both FFT and PEFT methods applied to tasks with various properties. This effect has not, however, been consistently observed for all combinations of training scenarios and classification methods (a more detailed analysis per individual training scenario is provided in the following section).
(3) LoRA and adapters can reduce computational costs significantly. By design, the PEFTs significantly reduce the number of trainable parameters (between 140 and 280 times fewer parameters). As a result, for sub-tasks 1 and 3, the utilisation of LoRA led to a significant decrease in memory consumption: from 39GB to 24GB (38%), and from 20GB to 13GB (35%), respectively. For sub-task 2, the adapter achieved the best memory efficiency, decreasing peak VRAM usage from 23GB to 14GB (39%). A similar pattern can be observed for total training time, which decreased to 56-71% of the FFT training time.
(4) Saving computational costs results in lower performance, although some exceptions exist. Reducing VRAM usage and shortening training time is naturally reflected in lower performance compared to that of FFT. Adapters are consistently outperformed by FFT in all three sub-tasks. However, in the case of sub-task 3, LoRA not only achieved highly comparable results, but also outperformed FFT for most languages (the difference is, however, not statistically significant; a more detailed analysis is provided in the next sections). For sub-tasks 2 and 3, we also observe a higher standard deviation of the results, implying higher instability of fine-tuning when PEFTs are applied.

Comparison of the effect of a training scenario on each classification method
To answer the second question (RQ2), we compare the FFT, LoRA and adapter classification methods across the three training scenarios introduced above, namely 'Multilingual Joint', 'English + Translations' and 'English Only'. It should be noted that in the latter two scenarios all languages in the test set are unseen (except English), as the model did not have access to training data in those languages. The results are measured in the official sub-task metrics (F1 macro for sub-task 1 and F1 micro for sub-tasks 2 and 3) and are shown in Tables 7, 8 and 9.
(1) The diversity of languages in the training set improves the average performance of the FFT classification method. For all three sub-tasks, we observe a significantly better average classification performance when the training data is provided in the original 6 languages as opposed to providing the same amount of data in English only. When looking at the individual languages, this effect holds for all languages in sub-task 1 except English (the only seen language in the 'English + Translations' training scenario) and Spanish (one of the unseen languages in the joint multilingual setting). For sub-task 2, the only exception is French, which benefits from a monolingual training setting. Notably, the performance on the English test set decreases in the 'English + Translations' scenario for sub-task 2, despite the model being trained on much more data in this language. Finally, for sub-task 3, French is the only language in the test set that benefits from being trained on monolingual English data, which is consistent with sub-task 2.
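For readers less familiar with the two official metrics, the toy sketch below (invented single-label examples, not the shared-task data) shows how F1 macro weights every class equally while F1 micro pools all decisions, which is why the two can rank systems differently on imbalanced label sets:

```python
# Toy illustration of F1 macro vs F1 micro (invented genre-like labels).
from collections import Counter

def f1_scores(y_true, y_pred, classes):
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    # Macro: average of per-class F1 scores, each class counted equally.
    per_class = []
    for c in classes:
        denom = 2 * tp[c] + fp[c] + fn[c]
        per_class.append(2 * tp[c] / denom if denom else 0.0)
    macro = sum(per_class) / len(classes)
    # Micro: a single F1 over all pooled decisions.
    total_tp = sum(tp.values())
    micro = 2 * total_tp / (2 * total_tp + sum(fp.values()) + sum(fn.values()))
    return macro, micro

y_true = ["opinion", "opinion", "satire", "reporting"]
y_pred = ["opinion", "reporting", "satire", "reporting"]
macro, micro = f1_scores(y_true, y_pred, ["opinion", "satire", "reporting"])
print(round(macro, 3), round(micro, 3))  # 0.778 0.75
```

In the multilabel sub-tasks the same definitions apply per label rather than per example, but the macro/micro distinction is identical.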
(2) The diversity of languages in the training set has an inconsistent effect on the performance of the LoRA classification method across the three sub-tasks. While we observe an average decrease in classification performance in the 'English + Translations' training scenario for sub-tasks 2 and 3, this setting improves the average results for sub-task 1. For sub-task 3, this effect holds for every language in the test set, while for sub-task 2, LoRA improves performance on EN and FR when trained using monolingual English data. For sub-task 1, LoRA benefits from the multilingual training data when making predictions on 5 out of the 9 languages (DE, IT, PL, RU, and KA) and favours the monolingual English training setting for the 4 remaining languages (EN, FR, EL and ES). The difference is particularly high for Spanish, resulting in a slightly better average performance in a monolingual training scenario for sub-task 1.
(3) The diversity of languages in the training set has an inconsistent effect on adapter classification performance across all tasks. While the 'English + Translations' training scenario only insignificantly improves the average performance in sub-task 1, it decreases the average performance in sub-tasks 2 and 3. Adapters benefit from a monolingual training scenario when making predictions on 6 out of the 9 languages (EN, FR, IT, RU, ES and EL) in sub-task 1. For sub-tasks 2 and 3, this is the case for only 3 languages (IT, EL and KA) and 2 languages (EN and ES), respectively.
(4) A decrease in the size of the training set decreases the performance of the FFT, LoRA and adapter methods on all seen and zero-shot languages. We observe a significant decrease in the performance of FFT and both PEFTs in the 'English Only' training scenario compared to the 'English + Translations' setting across all sub-tasks and for every language within each sub-task. This result is particularly noteworthy for the English test set, as it indicates that even potentially noisy translated data can improve the performance on a given language compared to using a smaller but higher-quality dataset.
In summary, the effect of removing language diversity from the training set is not consistent across the SemEval 2023 sub-tasks and classification methods. Sub-task 1, in particular, demonstrates a slight improvement in performance for the LoRA and adapter methods when trained on 'English + Translations' data. All classification methods show significantly decreased performance when trained on the original English-only data, demonstrating the importance of the size of the training data.
In the next section, we compare the results of the three classification methods within each of the three training scenarios.

Comparison of classification methods for each training scenario and language
To answer the third research question (RQ3), we analyse whether the performance of the FFT, LoRA and adapter methods is consistent across training scenarios, or whether certain methods are more sensitive to the lack of training data and/or language diversity within the training data. This analysis, on top of the results presented in Tables 7, 8 and 9 for sub-tasks 1, 2 and 3 respectively, complements the previous RQ2.
(1) FFT outperforms the LoRA and adapter methods for sub-tasks 1 and 2 in the 'Multilingual Joint' training scenario, while for sub-task 3, only zero-shot predictions on unseen languages benefit from the FFT method. We observe that FFT yields the best performance in a multilingual setting for most of the seen and unseen languages in sub-tasks 1 and 2. While the differences between FFT and LoRA are often less than 1% for sub-task 1, sub-task 2 demonstrates a clear preference for the FFT classification approach.
(2) In the 'English + Translations' training scenario, adapters outperform FFT across all sub-tasks and show better or on-par performance compared to LoRA. A particularly clear preference for adapters can be observed for sub-task 2, where the majority of seen and unseen languages benefit from this classification method. For sub-tasks 1 and 2, most of the zero-shot predictions on unseen languages favour FFT. The performance of the adapter method is particularly consistent on English, the only seen language in this training scenario, with all sub-tasks favouring the adapter classifier.
(3) In the 'English Only' training scenario with less data, the differences between FFT, LoRA and adapters become less obvious across all sub-tasks. In this setting, the differences in average classification performance between FFT, adapters and LoRA are often insignificant, with less than a 1% advantage of one method over another. For example, the difference between the adapter method and LoRA for sub-task 1 is 0.2%, and the average performance of FFT and LoRA is the same, differing only in the confidence intervals. Similarly, for sub-task 2, FFT is comparable to the adapter method, and for sub-task 3, FFT is very close in performance to LoRA. We observe that for sub-tasks 1 and 2, adapters perform better for EN (the only seen language), while for sub-task 3, LoRA yields better performance for the seen language (EN).
(4) The overall best performance across all training scenarios and classification methods is achieved in the 'Multilingual Joint' training scenario. While the FFT method works better for sub-tasks 1 and 2 in this setting, sub-task 3 shows a clear improvement when trained using the LoRA method. While sub-tasks 1 and 2 are consistent in favouring FFT for both seen and unseen (ES, EL, KA) languages, sub-task 3 favours LoRA when making predictions on seen languages only. For unseen languages, sub-task 3 agrees with sub-tasks 1 and 2 in favouring the FFT classification approach. Some languages demonstrate strong preferences towards certain classification methods and training scenarios:
(1) English consistently favours the adapter classification approach across all training scenarios in sub-task 1.
(2) The FFT method yields the best performance on Georgian zero-shot predictions across all settings in sub-tasks 1 and 3.
(3) In the 'Multilingual Joint' training scenario, German demonstrates a consistent preference for the adapter classification method across all three sub-tasks. In particular, German is the only language in sub-task 2 for which FFT does not produce the best performance in the 'Multilingual Joint' training scenario. It is also the only case where adapter models demonstrate better performance than LoRA on seen languages for sub-task 3 in the joint training scenario.
We also observe that LoRA, in the 'Multilingual Joint' training scenario, shows the best overall performance for sub-task 3. Since this method was not used by any of the teams participating in the shared task, we want to examine how this approach compares to the official leaderboard results after the competition. Table 10 shows the scores of the winning system for sub-task 3 along with the scores achieved for sub-task 3 in these experiments. As can be seen, the LoRA method for sub-task 3 outperforms most of the results of the winning systems. We achieve an increase in performance of up to 19.63% for all languages except Georgian (KA). 6 out of 9 languages (FR, IT, PL, RU, ES and EL) achieve the best result in the 'Multilingual Joint' training scenario when applying LoRA. Not surprisingly, the best score for English is achieved in the one-to-one 'English Only' scenario. This increase also results in first place in 8 out of 9 languages.
The adapter method may be more prone than FFT to favouring the most frequent class, since, as can be observed from Table 6, the F1 micro score for sub-task 1 is significantly higher for the adapter method compared to FFT, while FFT results in a higher F1 macro. However, a detailed error analysis to confirm or refute this assumption is currently not possible, since the gold-standard labels for the test set have not been released.
Low-resource languages may have a preference for FFT in all training scenarios when predictions are made in a cross-lingual zero-shot manner. This assumption is suggested by the fact that Georgian is the only language that consistently prefers the FFT approach across all training scenarios and for all sub-tasks.
Interestingly, the performance on Georgian in sub-tasks 1 and 2 is higher than that on seen languages, despite Georgian being low-resource and zero-shot. One potential reason for this observation could be the fact that, as was previously shown in Tables 2 and 3, the texts in the test set for Georgian are, on average, within the input limit of XLM-R and are much shorter than the inputs for the other 8 languages. This could explain why the same effect is not observed for sub-task 3, where all inputs are within the transformer token limit and are relatively short. However, in the absence of gold-standard labels for the surprise languages, it is not possible to rule out other reasons for this phenomenon, as it could also be explained by the absence of particularly difficult-to-predict classes in the Georgian test set for both sub-tasks.
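The input-length constraint discussed above can be made concrete with a minimal sketch. The integers below stand in for real subword token ids, and the lengths are illustrative, chosen to mirror the situation where average articles exceed the 512-token limit while short inputs pass through intact:

```python
# Illustrative only: integers stand in for subword token ids produced by a
# real tokenizer such as XLM-R's SentencePiece model.
MAX_LEN = 512  # XLM-R input limit

def truncate(token_ids, max_len=MAX_LEN):
    """Keep only the first max_len tokens; the rest never reaches the model."""
    return token_ids[:max_len]

long_article = list(range(1157))   # an article exceeding the model limit
short_article = list(range(120))   # e.g. a short Georgian test document

assert len(truncate(long_article)) == 512    # the article tail is lost
assert len(truncate(short_article)) == 120   # short inputs are unaffected
```

In practice this cut is typically applied by the tokenizer itself, e.g. Hugging Face's `tokenizer(text, truncation=True, max_length=512)`, which performs the same operation at the subword level.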

Conclusion
In this work, we performed the first (to our knowledge) analysis of the performance of the Low-Rank Adaptation (LoRA) technique and its comparison with the adapter and full fine-tuning (FFT) methods in a multilingual multiclass scenario.
We found that the parameter-efficient fine-tuning techniques (PEFTs), LoRA and the bottleneck adapter, provide significant computational efficiency compared to FFT in terms of training time, the number of trainable parameters and the amount of VRAM required. In particular, they reduce the number of trainable parameters by a factor of 140 to 280 and achieve between 32% and 44% shorter training time.
The comparison between LoRA and the adapter method in terms of parameter efficiency suggests that their relative performance depends on the specific sub-task and the hyperparameters used. This observation is aligned with the results of the previous study by He et al. [11], who found the benefit of the adapter approach to be task-dependent. While we observe LoRA to be more efficient than the adapter method for the news article genre and framing detection tasks, the adapter method takes less average time and uses fewer training parameters for the latter.
Moreover, we found the performance of the methods to be highly dependent on the training scenario. The adapter method performs better than LoRA and FFT across the sub-tasks in the scenario where language diversity is lacking in the training set.
The differences between all three methods become insignificant, often less than 1% on average, as the size of the training data decreases. This indicates that, in this setting, it is possible to achieve high computational efficiency by using PEFT methods without losing much in terms of classification performance. Further experiments involving a gradual decrease in the size of the training set would be beneficial in future work to find the threshold at which the performance of the methods converges or at which PEFT methods become more effective classifiers.
The performance on unseen languages is often highly dependent on the training scenario. We found that FFT performs better than the PEFT methods in zero-shot cross-lingual predictions when trained on a joint multilingual dataset, which differs from the results reported by Chalkidis et al. [12]. However, we observe the effect reported by these authors in a monolingual training scenario, where the adapter method performs better on zero-shot languages. Finally, the multilingual joint LoRA setting allowed us to significantly improve our official results on SemEval 2023 sub-task 3 (persuasion techniques detection) and to outperform most of the official leaderboard-best results, placing first in all languages except Georgian, for which we place second compared to the official leaderboard results.

1. The 'English + Translations' training scenario enables the evaluation of the effect of multilinguality. In particular, we want to compare the effect of having multilingual training data against the scenario where training data is available in only one language. By translating the other languages into English, we compose a dataset consisting of the same number of training examples but without language diversity. The reported size of the training data for sub-task 3 (10,927 examples) is based on the number of labelled examples. When unlabelled sentences are added, the training data for sub-task 3 grows to 20,704 instances.

S1 Appendix. Sub-task 3 Category Taxonomy
Descriptions of each category are provided in the original task paper [14].

Table 2. Data statistics per language for sub-task 1: Genre Detection.

Table 3. Data statistics per language for sub-task 2: Framing Detection.

Table 4. Data statistics per language for sub-task 3: Persuasion Techniques.

Table 5. Hyperparameters.
Text preprocessing: As shown in Table 1, sub-tasks 1 and 2 have an average of 1,157 tokens per article. Thus, in those sub-tasks, 80.0% of articles are truncated to a maximum of 512 tokens. In contrast, sub-task 3 presents an average

Table 6. Performance and computational costs for each sub-task and classification method.

Table 7. Sub-task 1: Genre Detection. Mean ± 1 STD F1 macro scores. Best scores by language are marked with an asterisk (*). Best scores by training method for each training scenario are in bold.