Sentiment Classification Using Text Embedding for Thai Teaching Evaluation

Abstract:


Sentiment classification has attracted considerable attention in recent years. For a university, the knowledge obtained by classifying the sentiment of student feedback on courses is highly valuable and can help teachers improve their teaching. In this research, sentiment classification based on text embedding is applied to improve the performance of sentiment classification for Thai teaching evaluation. Text embedding techniques consider both the syntactic and semantic elements of sentences, which can be exploited to improve classification performance. This research applies text embedding to classification in two ways. The first approach uses the fastText classifier; according to the results, fastText provides the best overall performance, with its highest F-measure at 0.8212. The second approach constructs text vectors from embeddings and feeds them to traditional classifiers. This approach outperforms TF-IDF for k-nearest neighbors and naïve Bayes; with naïve Bayes it achieves the best geometric mean, 0.8961. For the decision tree, however, TF-IDF remains the better-suited representation. The contribution of this research is a workflow for applying text embedding to Thai teaching evaluation in order to improve sentiment classification performance. By using embedding techniques, text similarity and analogy tasks can be carried out alongside the classification.
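The second approach described in the abstract — building one document vector from word embeddings and handing it to a traditional classifier — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual pipeline: the tiny three-dimensional embedding table and the nearest-centroid classifier below are hypothetical stand-ins (the paper uses learned embeddings such as word2vec/fastText with k-nearest neighbors, naïve Bayes, and decision trees, on Thai text segmented with KUCut).

```python
# Sketch of the embedding-averaging approach: a document vector is the mean
# of its word vectors, and a simple classifier operates on that vector.
# The embedding table and sample data are invented for illustration.
import math

# Hypothetical 3-dimensional word embeddings (real systems learn hundreds
# of dimensions from a large corpus).
EMBEDDINGS = {
    "good":    [0.9, 0.1, 0.2],
    "great":   [0.8, 0.2, 0.1],
    "clear":   [0.7, 0.3, 0.2],
    "bad":     [-0.8, 0.1, -0.3],
    "boring":  [-0.7, 0.2, -0.4],
    "unclear": [-0.6, 0.3, -0.2],
}

def doc_vector(tokens):
    """Average the embeddings of known tokens into one document vector."""
    known = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    if not known:
        return [0.0, 0.0, 0.0]
    return [sum(dim) / len(known) for dim in zip(*known)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class NearestCentroid:
    """Toy stand-in for a traditional classifier (kNN, naive Bayes, ...)."""

    def fit(self, docs, labels):
        # One centroid per class: the mean of that class's document vectors.
        self.centroids = {}
        for label in set(labels):
            vecs = [doc_vector(d) for d, l in zip(docs, labels) if l == label]
            self.centroids[label] = [sum(dim) / len(vecs) for dim in zip(*vecs)]
        return self

    def predict(self, tokens):
        # Assign the class whose centroid is most similar to the doc vector.
        v = doc_vector(tokens)
        return max(self.centroids, key=lambda lbl: cosine(v, self.centroids[lbl]))

# Invented training data standing in for tokenized evaluation comments.
train_docs = [["good", "clear"], ["great", "good"],
              ["bad", "boring"], ["unclear", "bad"]]
train_labels = ["positive", "positive", "negative", "negative"]

clf = NearestCentroid().fit(train_docs, train_labels)
print(clf.predict(["great", "clear"]))    # → positive
print(clf.predict(["boring", "unclear"]))  # → negative
```

Because every document, regardless of length, is reduced to a fixed-size dense vector, any off-the-shelf classifier can consume it — which is what lets the paper compare the same embedding features across k-nearest neighbors, naïve Bayes, and decision trees.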

Info:

Periodical: Applied Mechanics and Materials, Vol. 886

Edited by: Ruangdet Wongla

Pages: 221-226

Citation: K. Boonchuay, "Sentiment Classification Using Text Embedding for Thai Teaching Evaluation", Applied Mechanics and Materials, Vol. 886, pp. 221-226, 2019.

Online since: January 2019

Authors: K. Boonchuay
