J2450 Translation Quality Metric

The standard was developed with the aim of maintaining “a consistent standard against which the translation quality of automotive service information can be objectively measured:

  • regardless of the source language,
  • regardless of the target language,
  • regardless of how the translation is performed, i.e., human translation or machine translation.”

The method is based on a points-based system. The fewer points a translation scores, the higher its quality.

Errors are divided into 7 categories, each of which is defined in detail:

  • Wrong Term
  • Wrong Meaning
  • Omission
  • Structural Error
  • Misspelling
  • Punctuation Error
  • Miscellaneous Error

Each category is weighted: some error categories are considered to affect final quality more than others. For instance, a Misspelling may score fewer points than an error in the Wrong Term category.

Once an error is assigned to a category, the human reviewer decides whether it is serious or minor. A serious error is weighted to score more points than a minor one.
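The weighted scoring described above can be sketched as follows. The category weights and severity multipliers in this example are illustrative assumptions for the sketch, not the official values published in the SAE standard:

```python
# Illustrative J2450-style scorer. The point values below are
# assumptions chosen to show the mechanism, not SAE's official weights.
WEIGHTS = {
    "wrong_term":    {"serious": 5, "minor": 2},
    "wrong_meaning": {"serious": 5, "minor": 2},
    "omission":      {"serious": 4, "minor": 2},
    "structural":    {"serious": 4, "minor": 2},
    "misspelling":   {"serious": 3, "minor": 1},
    "punctuation":   {"serious": 2, "minor": 1},
    "miscellaneous": {"serious": 3, "minor": 1},
}

def score(errors):
    """Sum the weighted points for a list of (category, severity) pairs."""
    return sum(WEIGHTS[category][severity] for category, severity in errors)

# One serious Wrong Term error plus one minor Misspelling:
errors = [("wrong_term", "serious"), ("misspelling", "minor")]
print(score(errors))  # 5 + 1 = 6
```

Because the weights live in a single table, customising them for a particular application (as discussed below) is a one-line change.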

Pangeanic’s evaluation

The metric is both easy to follow and easy to implement. It is an excellent step towards creating an objective linguistic quality measurement. Furthermore, it is highly customisable: if you feel that in a particular application a spelling error can be more damaging to a translation than the use of an incorrect term, you can easily change the weighting.

Finally, the results of the metric can be used for benchmarking linguistic standards and serve as a basis for discussion with both clients and translation groups.

If your organisation is planning to use J2450, please remember that:

The metric is published by SAE and was designed specifically for the automotive service information industry. Consequently, it may not be the most suitable method for evaluating translations where style and/or voice are of paramount importance.

Results of the reviews have to be collated manually, and comparisons can be difficult if you are working directly from spreadsheets.
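If review results are exported from a spreadsheet as CSV, collation can be partly automated. A minimal sketch, assuming a hypothetical export with one row per logged error and column names of our own choosing:

```python
import csv
import io
from collections import defaultdict

# Hypothetical review export: one row per error. The column names
# (language, category, severity, points) are assumptions for this
# sketch; adapt them to your own spreadsheet layout.
data = """language,category,severity,points
de,wrong_term,serious,5
de,misspelling,minor,1
fr,punctuation,minor,1
"""

# Total the points per target language.
totals = defaultdict(int)
for row in csv.DictReader(io.StringIO(data)):
    totals[row["language"]] += int(row["points"])

print(dict(totals))  # {'de': 6, 'fr': 1}
```

Replacing the embedded string with `open("reviews.csv", newline="")` would read a real export file instead.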

Guidance is needed for reviewers using the metric. It needs to be very clear what constitutes a serious or a minor error, and evaluators need to be trained to ensure a clear and common understanding.

Once the points have been allocated, there is no benchmark to determine what constitutes a good or a bad score. We carried out a test of the metric at Pangeanic by submitting translations of the same text into 4 different languages to our in-house and external proof-readers. After evaluation, we found that the language with the second-lowest score was not approved as publishable by its proof-reader. However, proof-readers for languages that had scored more points were generally happy with the translation.

The J2450 Translation Quality Metric can be purchased through the SAE website.