This paper presents a review of different text classification models, both traditional and state-of-the-art. The simple models under review were Logistic Regression, naïve Bayes, k-Nearest Neighbors, C-Support Vector Classifier, Linear Support Vector Machine Classifier, and Random Forest. The state-of-the-art models, on the other hand, were classifiers built on pretrained embedding layers, namely BERT and GPT-2. Results are compared among all of these classification models on two multiclass datasets, ‘Text_types’ and ‘Digital’, addressed later in the paper. While BERT was tested both as a multiclass and as a binary model, GPT-2 was used as a binary model on all the classes of a given dataset. In this paper we showcase the most interesting and relevant results. The results show that, for the datasets at hand, the BERT and GPT-2 models perform best, with BERT outperforming GPT-2 by one percentage point in accuracy. It should be borne in mind, though, that these two models were compared on a binary case, whereas the other models were tested on a multiclass case. The models that performed best on a multiclass case are the C-Support Vector Classifier and BERT. To establish the absolute best classifier in a multiclass case, further research is needed that would deploy GPT-2 on a multiclass case.
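As a rough illustration of how the "simple" models listed above can be compared on a multiclass text dataset, the sketch below fits each classifier in a TF-IDF pipeline with scikit-learn. This is not the paper's actual code: the toy corpus, labels, and hyperparameters are invented for demonstration, and the paper's ‘Text_types’ and ‘Digital’ datasets are not reproduced here.

```python
# Illustrative comparison of the traditional classifiers reviewed in the
# paper on a tiny, made-up multiclass text dataset. All data and settings
# below are placeholders, not the paper's experimental setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Toy three-class corpus (invented for the sketch).
texts = [
    "the team won the match in overtime",
    "a striker scored two goals today",
    "the coach praised the defense",
    "the new processor doubles performance",
    "the update patches a security flaw",
    "developers released a faster compiler",
    "the soup needs more salt and pepper",
    "bake the bread for forty minutes",
    "the recipe calls for fresh basil",
]
labels = ["sports", "sports", "sports",
          "tech", "tech", "tech",
          "food", "food", "food"]

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "naive Bayes": MultinomialNB(),
    "k-Nearest Neighbors": KNeighborsClassifier(n_neighbors=3),
    "C-Support Vector Classifier": SVC(),
    "Linear SVM Classifier": LinearSVC(),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

accuracies = {}
for name, clf in models.items():
    # Each model gets the same TF-IDF features via a shared pipeline shape.
    pipe = make_pipeline(TfidfVectorizer(), clf)
    pipe.fit(texts, labels)
    # Training-set accuracy only; a real comparison would use held-out data.
    accuracies[name] = pipe.score(texts, labels)
    print(f"{name}: train accuracy = {accuracies[name]:.2f}")
```

In a real evaluation, such as the one the paper describes, the score would of course be computed on a held-out test split rather than on the training texts.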