An Overview of Machine Translation


Machine translation (MT) is translation from one language to another performed by software, without human involvement at the moment the actual “raw” translation is produced. MT systems draw on advanced computational linguistics to identify, study and attempt to solve the many problems involved in automated translation. The benefits of MT are numerous: greatly increased productivity through automation; instantaneous results that do not depend on human translators’ availability (whether due to time zone differences or content volumes too high for the number of available translators); and the ability to translate text of any type or length, provided the lexical information needed to support it has been loaded into the MT system in use.

MT can be divided into three distinct categories:

  • Rules-based MT (RBMT) relies upon systematized grammar rules used in conjunction with standard lexicons, which are augmented by specialist dictionaries according to the application in question.
  • Rather than focus on linguistic rules, statistical MT (SMT) models undergo a learning process during which they are fed a significant amount of linguistic data to analyze, mostly at the phrase level, and then apply it while rendering translations.
  • Neural MT (NMT), the newest type of MT, employs complex networks of processing units to perform translation based on word sequencing. These networks look for patterns and predict the most viable translation; their interconnections are modeled after the neural networks of the human brain.

Let’s take a more in-depth look at each type of MT, including their various strengths and weaknesses in the context of localization.

Rules-based MT

Rules-based MT systems rely on built-in linguistic rules and bilingual dictionaries to analyze input and generate output according to the grammatical regularities of both languages involved in the translation. Such models require broad data on linguistic rules (such as morphological, syntactic and semantic information) to be entered, as well as robust bilingual dictionaries for the language pair in question. Rules-based models do not require pre-existing bilingual texts to be fed in ahead of time. However, augmenting these systems with specialized lexicons relevant to the content to be translated has, not surprisingly, been shown to boost MT consistency and accuracy in terms of terminology choices.
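To make this concrete, here is a deliberately tiny sketch (in Python) of the rules-based idea: a bilingual dictionary supplies word-for-word translations, and a hand-written grammar rule handles reordering. The five-word English–Spanish lexicon and the single adjective-noun rule are invented for illustration; real systems encode many thousands of rules and dictionary entries.

```python
# A simplified sketch of rules-based MT: dictionary lookup plus one
# reordering rule. All vocabulary and rules here are illustrative only.

LEXICON = {  # toy English -> Spanish dictionary
    "the": "el", "red": "rojo", "car": "coche", "is": "es", "fast": "rápido",
}

ADJECTIVES = {"red", "fast"}
NOUNS = {"car"}

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    # Rule: English places adjectives before nouns; Spanish usually
    # places them after, so swap adjective-noun pairs before lookup.
    i = 0
    while i < len(words) - 1:
        if words[i] in ADJECTIVES and words[i + 1] in NOUNS:
            words[i], words[i + 1] = words[i + 1], words[i]
            i += 2
        else:
            i += 1
    return " ".join(LEXICON.get(w, f"<{w}?>") for w in words)

print(translate("the red car is fast"))  # -> "el coche rojo es rápido"
```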

One of the shortcomings of rules-based systems is that in many cases, truly adequate bilingual dictionaries do not exist, and creating new ones can be costly and time-consuming. Moreover, due to the manual input required, adapting such models, for example to add new grammar rules or lexical data, typically involves a large investment. Lastly, these systems tend to fall short when dealing with ambiguous language like metaphors or idiomatic expressions. The cost of the human maintenance these models require is among the key reasons this type of MT is not always the most popular choice.

Statistical MT

Statistical MT systems analyze both monolingual and bilingual content in order to generate sophisticated statistical models to draw on when performing translation. These systems are popular in no small part due to the ease and speed with which they can be set up, especially when compared to rules-based models. Data such as translation memories and bilingual dictionaries, including specialized glossaries for the application at hand, are uploaded to train the system to identify and replicate linguistic patterns at either the word or phrase level. This data is usually supplemented with further monolingual texts to enable greater fluency, as the system learns to stylistically mimic the data it is fed based on the frequency of a given linguistic pattern.
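The sketch below illustrates the statistical idea at the phrase level: phrase pairs extracted from bilingual data carry learned probabilities, and the system chooses the highest-scoring translation for each source phrase. The phrase table and its probabilities are invented for illustration; production systems learn millions of entries from parallel corpora and combine several scoring models.

```python
# A toy illustration of phrase-based statistical MT: greedy decoding
# over a (fictional) phrase table of learned translation probabilities.

PHRASE_TABLE = {  # English phrase -> [(Spanish phrase, P(es|en)), ...]
    "the red car": [("el coche rojo", 0.7), ("el auto rojo", 0.3)],
    "is fast":     [("es rápido", 0.8), ("va rápido", 0.2)],
}

def decode(sentence: str) -> tuple[str, float]:
    """Greedy left-to-right decoding over known phrases."""
    words = sentence.lower().split()
    out, score, i = [], 1.0, 0
    while i < len(words):
        # Try the longest matching source phrase first.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in PHRASE_TABLE:
                best, p = max(PHRASE_TABLE[phrase], key=lambda t: t[1])
                out.append(best)
                score *= p
                i = j
                break
        else:
            out.append(f"<{words[i]}?>")  # unknown word passes through
            i += 1
    return " ".join(out), score

translation, prob = decode("the red car is fast")
print(f"{translation}  (score {prob:.2f})")  # -> el coche rojo es rápido (score 0.56)
```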

Statistical MT models tend to achieve considerably greater fluency than rules-based systems, but at the cost of consistency. Other drawbacks are that they are generally quite CPU-intensive (as a result of the complex analyses and projections required of them) and demanding as far as storage is concerned (in order to have enough data for efficient modeling). In recent years, however, the advent of cloud technology has opened up new technical possibilities for statistical models. Local users can now set up and modify systems that draw on bilingual corpora stored remotely, which means greater ease of logistics and lower costs for the user.

Neural MT

Considered the cutting edge of MT technology, neural systems are inspired by the complex interconnected web of processing units found in the human brain. Neural MT models are essentially artificial neural networks of multiple processing units that determine the likelihood of word sequences. These units use so-called deep learning, meaning they can learn from unstructured or unlabeled data in an unsupervised environment to predict the probability of sequences one word at a time. In this sense, they represent a simplification from phrase-oriented statistical MT. Neural MT models are also simpler structurally in that they use a single sequencing component rather than separate language, translation and reordering models, as in the case of statistical MT.
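The following minimal sketch shows the neural mechanism in the loosest possible terms: a vector summary of the words generated so far is pushed through a small network that assigns a probability to every word in the target vocabulary, and decoding proceeds one most-likely word at a time. The tiny vocabulary, the network shape and the random weights are all assumptions for illustration; because nothing is trained, the output is meaningless, whereas a real NMT model learns these parameters from millions of sentence pairs.

```python
import numpy as np

# A minimal sketch of neural-style decoding: summarize the prefix as a
# vector, score every vocabulary word, pick the most probable next word.
# Weights are random here, so the result is illustrative, not meaningful.

rng = np.random.default_rng(0)
VOCAB = ["<end>", "el", "coche", "rojo", "es", "rápido"]
EMBED_DIM, HIDDEN_DIM = 8, 16

E = rng.normal(size=(len(VOCAB), EMBED_DIM))   # word embeddings
W = rng.normal(size=(EMBED_DIM, HIDDEN_DIM))   # hidden layer
V = rng.normal(size=(HIDDEN_DIM, len(VOCAB)))  # output projection

def next_word_probs(prefix: list[int]) -> np.ndarray:
    """P(next word | words generated so far), one step of decoding."""
    h = np.tanh(E[prefix].mean(axis=0) @ W)     # summarize the prefix
    logits = h @ V
    exp = np.exp(logits - logits.max())         # softmax
    return exp / exp.sum()

# Greedy decoding: repeatedly append the most probable word.
sequence = [VOCAB.index("el")]
while VOCAB[sequence[-1]] != "<end>" and len(sequence) < 6:
    sequence.append(int(next_word_probs(sequence).argmax()))
print([VOCAB[i] for i in sequence])
```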

However, neural MT systems are not without their own complications. While they do tend to produce more fluent-sounding translations, this often comes at the cost of accuracy. In fact, they have been known to produce idiomatic-sounding but unfaithful translations. These errors of accuracy can be challenging for the human eye to spot. Such complications mean greater time and resources may be required in the post-editing phase.

Furthermore, and of particular relevance to localization, neural models lack what we might call common sense, meaning they struggle with things like appropriate register and often balk in the face of humorous, metaphorical or sarcastic language. In addition, as their resources are chiefly dedicated to the correct prediction of the next likeliest word in a sequence, they tend to sacrifice consistency and even coherence in that they do not "learn" stylistically in the same way a statistical model does. In the worst cases, these models can render inappropriate or even unintelligible translations. That being said, such challenges can often be overcome through proper operationalization and integration with other MT and human-based tools in a company's localization workflow.

No "one size fits all" solution

To sum up, MT offers a range of approaches, each with its own pros and cons, for translating high-volume, quick-turnaround or additional content that would not otherwise be localized. Every language pair presents unique challenges, and every business will have its own particular needs and priorities. Therefore, your best bet is to work with a professional localization firm with broad knowledge and a long history of working with MT solutions. They will help you assess your needs, weigh your priorities of cost, time and quality, and custom-tailor an integrated solution to get you the results you want, whether it’s through a statistical, neural or even rules-based system.


At RWS Alpha, it is our passion to work with companies to identify and implement the right machine translation technology for their specific goals. Let us know how we can help you.
