A New Automated Essay Scoring: Teaching Resource Program


Abstract:

This study examines the automated essay scoring system Teaching Resource Program (TRP) from a cross-disciplinary perspective. Unlike previous research, it emphasizes TRP's evaluative functions and its capacity to enhance writing competence. In what follows, we describe TRP and its contributions. We begin with a brief introduction to TRP, then present the techniques, related theories, and computer programs it employs. We found a highly significant correlation between TRP scores and evaluations of e-writing. Based on these findings, one useful implication of this study is that TRP should be further integrated with the evaluation practices of other disciplines. We conclude that TRP contributes to the evaluation of e-writing pieces, thereby enhancing students' writing performance and advancing the development of writing curricula and other disciplines.
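The correlation reported above is typically quantified with an agreement statistic between automated and human scores. The following is a minimal sketch, not the authors' implementation, of computing a Pearson correlation coefficient; all score data are hypothetical.

```python
# Sketch (hypothetical data, not the study's): Pearson correlation between
# automated essay scores and human ratings, the kind of agreement statistic
# used to validate automated scoring systems.
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical scores for five essays: automated (TRP-style) vs. a human rater.
automated = [72, 85, 64, 90, 78]
human = [70, 88, 60, 92, 75]

r = pearson(automated, human)
print(f"r = {r:.3f}")
```

A value of r close to 1 would indicate the strong agreement between machine and human evaluation that the study describes.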


Info:

Pages: 608-611

Online since: January 2013
Copyright:

© 2013 Trans Tech Publications Ltd. All Rights Reserved
