Description

PETRA 1.0® is a computerized prototype that helps evaluate the grammatical quality of Spanish translations of “general-language” English texts – newspaper articles, essays, best-selling fiction, etc. It works on language-pair-bound data: for each language combination and direction, PETRA 1.0 activates a different set of language nodes.
PETRA 1.0 draws on empirical, corpus-based, statistically significant contrastive differences between English and Spanish. It offers two types of evaluation: the first is fully automated and uses quantitative information exclusively; the second, more advanced, requires some simple user intervention and combines quantitative and qualitative data. In both cases, the system i) offers an assessment on a scale from 0 to 5, and ii) identifies and details the elements that need improvement. The assessment report can be downloaded as a PDF file.
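The fully automated mode described above could be pictured along these lines. This is only an illustrative sketch, not PETRA's actual algorithm: the error categories, the per-100-words penalty, and all function and field names here are hypothetical assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Assessment:
    score: float                      # 5 (best) to 0 (worst)
    issues: list = field(default_factory=list)  # areas to improve

def automatic_assessment(error_counts: dict, word_count: int) -> Assessment:
    """Hypothetical fully automated mode: quantitative data only.
    Penalizes one point per error per 100 words (an assumed formula)."""
    errors_per_100 = sum(error_counts.values()) / word_count * 100
    score = max(0.0, 5.0 - errors_per_100)
    # Flag only the categories where problems were actually detected.
    issues = [category for category, n in error_counts.items() if n > 0]
    return Assessment(round(score, 1), issues)

report = automatic_assessment(
    {"agreement": 2, "word order": 1, "prepositions": 0},
    word_count=300,
)
print(report.score)   # 4.0
print(report.issues)  # ['agreement', 'word order']
```

The second, more advanced mode would then adjust such a quantitative score using qualitative judgments supplied by the user.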
The app offers a highly usable, platform-independent interface that is attractive to users thanks to its web-based technology. Testers have reported high reliability, and the straightforward marking of “areas to improve” has also been widely approved.
PETRA 1.0 is aimed at language service providers working in translation, revision, training, language competence evaluation, and related fields.
Period: 1 Nov 2014
Event title: Mediterranean Editors & Translators Meeting: Innovation and Tradition: Mining the Human Resource
Location: San Lorenzo de El Escorial, Spain
Degree of Recognition: International