MACHINE TRANSLATION: WHAT PROBLEMS DOES IT HAVE?
Lide Zubiaurre Arzanegi
This is a report about Machine Translation (MT), that is, fully automatic translation without the intervention of any human being. The report is divided into three parts. The first reviews what Machine Translation is and its different definitions. The second reviews a little of Machine Translation's history and how it has evolved from the first computer-based applications to how it is applied nowadays. Finally, I examine in more depth what Machine Translation's main problems are and why it is difficult. However, I am not going to cover the technical side of MT, only what it means for society and its advantages and problems.
Through this report we will see how important language has become in New Technologies. Machine Translation is important in the sense that it lets us bring other languages into our own, without the intervention of any human being, in order to help human understanding. There are three different classes of Machine Translation: Assimilation, Dissemination and Communication, and all three support written communication between people. Assimilation refers to the class of translation in which material written by others is translated into our own language. Dissemination refers to an individual or an organisation broadcasting its own material into different languages around the world. A third class has also become very evident: Communication, in which two or more individuals interact with an MT system mediating between them. These three types of MT have been a great advantage in technology and social communication.
Machine Translation has evolved very much in recent years. In this report I will portray the evolution that language technology, and specifically Machine Translation, has undergone over the last ten years, showing where it stood five years ago and where we are nowadays. The usage of this kind of language technology has changed and grown through the years and has become accessible to ordinary people. It is no longer accessible only to big stores, companies, engineers and specialists.
Summarising: when a word has more than one meaning it is lexically ambiguous; when a phrase has more than one meaning, it is structurally ambiguous. As a consequence, when some sentences or words are translated literally into other languages, for example "tomar el pelo", they may have different connotations and meanings, and they will cause structural and lexical mismatches. Words will have double senses.
I have not followed the order of the questionnaires presented in each class. Instead, I have chosen different questions from different questionnaires to complete this report.
1. DIFFERENT DEFINITIONS AND TYPES OF MACHINE TRANSLATION
The term machine translation (MT) is normally taken in its restricted and precise meaning of fully automatic translation. However, in this report we consider the whole range of tools that may support translation and document production in general, which is especially important when considering the integration of other language processing techniques and resources with MT. We therefore define Machine Translation to include any computer-based process that transforms (or helps a user to transform) written text from one human language into another. We define Fully Automated Machine Translation (FAMT) to be MT performed without the intervention of a human being during the process. Human-Assisted Machine Translation (HAMT) is the style of translation in which a computer system does most of the translation, appealing in case of difficulty to a (mono- or bilingual) human for help. Machine-Aided Translation (MAT) is the style of translation in which a human does most of the work but uses one or more computer systems, mainly as resources such as dictionaries and spelling checkers, as assistants.
Traditionally, two very different classes of MT have been identified. Assimilation refers to the class of translation in which an individual or organization wants to gather material written by others in a variety of languages and convert it all into his or her own language. Dissemination refers to the class in which an individual or organization wants to broadcast his or her own material, written in one language, in a variety of languages to the world. A third class of translation has also recently become evident. Communication refers to the class in which two or more individuals are in more or less immediate interaction, typically via email or otherwise online, with an MT system mediating between them. Each class of translation has very different features, is best supported by different underlying technology, and is to be evaluated according to somewhat different criteria.
2. EVOLUTION AND HISTORY OF MACHINE TRANSLATION
2.1 Where We Were Five Years Ago
Machine Translation was the first computer-based application related to natural language, starting after World War II, when Warren Weaver suggested using ideas from cryptography and information theory. The first large-scale project was funded by the US Government to translate Russian Air Force manuals into English. After a decade of initial optimism, funding for MT research became harder to obtain in the US. However, MT research continued to flourish in Europe and then, during the 1970s, in Japan. Today, over 50 companies worldwide produce and sell translations by computer, whether as translation services to outsiders, as in-house translation bureaux, or as providers of online multilingual chat rooms. By some estimates, MT expenditure in 1989 was over $20 million worldwide, involving 200—300 million pages per year (Wilks 92).
Ten years ago, the typical users of machine translation were large organizations such as the European Commission, the US Government, the Pan American Health Organization, Xerox, Fujitsu, etc. Fewer small companies or freelance translators used MT, although translation tools such as online dictionaries were becoming more popular. However, ongoing commercial successes in Europe, Asia, and North America continued to illustrate that, despite imperfect levels of achievement, the levels of quality being produced by FAMT and HAMT systems did address some users' real needs. Systems were being produced and sold by companies such as Fujitsu, NEC, Hitachi, and others in Japan, Siemens and others in Europe, and Systran, Globalink, and Logos in North America (not to mention the unprecedented growth of cheap, rather simple MT assistant tools such as PowerTranslator).
In response, the European Commission funded the Europe-wide MT research project Eurotra, which involved representatives from most of the European languages, to develop a large multilingual MT system (Johnson, et al., 1985). Eurotra, which ended in the early 1990s, had the important effect of establishing Computational Linguistics groups in several countries where none had existed before. Following this effort, and responding to the promise of statistics-based techniques (as introduced into Computational Linguistics by the IBM group with their MT system CANDIDE), the US Government funded a four-year effort, pitting three theoretical approaches against each other in a frequently evaluated research program. The CANDIDE system (Brown et al., 1990), taking a purely statistical approach, stood in contrast to the Pangloss system (Frederking et al., 1994), which initially was formulated as a HAMT system using a symbolic-linguistic approach involving an interlingua; complementing these two was the LingStat system (Yamron et al., 1994), which sought to combine statistical and symbolic/linguistic approaches. As we reach the end of the decade, the only large-scale multi-year research project on MT worldwide is Verbmobil in Germany (Niemann et al., 1997), which focuses on speech-to-speech translation of dialogues in the rather narrow domain of scheduling meetings.
2.2 Where We Are Today
Thanks to ongoing commercial growth and the influence of new research, the situation is different today from ten years ago. There has been a trend toward embedding MT as part of linguistic services, which may be as diverse as email across nations, foreign-language web searches, traditional document translation, and portable speech translators with very limited lexicons (for travelers, soldiers, etc.; see Chapter 7).
In organizations such as the European Commission, large integrated environments have been built around MT systems; cf. the European Commission Translation Service's Euramis (Theologitis, 1997).
The use of tools for translation by freelancers and smaller organizations is developing quickly. Cheap translation assistants, often little more than bilingual lexicons with rudimentary morphological analysis and some text processing capability, are making their way to market to help small companies and individuals write foreign letters, email, and business reports. Even the older, more established systems such as Globalink, Logos, and Systran offer pared-down PC-based systems for under $500 per language pair. The Machine Translation Compendium available from the International Association of MT (Hutchins, 1999) lists over 77 pages of commercial MT systems for over 30 languages, including Zulu, Ukrainian, Dutch, Swahili, and Norwegian.
MT services are offered via the Internet, often free for shorter texts; see the websites of Systran and Lernout and Hauspie. In addition, MT is increasingly being bundled with other web services; see the website of Altavista, which is linked to Systran.
3. MACHINE TRANSLATION’S MAIN PROBLEMS
In this chapter we will consider some particular problems which the task of translation poses for the builder of MT systems --- some of the reasons why MT is hard. It is useful to think of these problems under three headings: (i) problems of ambiguity, (ii) problems that arise from structural and lexical differences between languages, and (iii) multiword units like idioms and collocations. We will discuss typical problems of ambiguity.
Of course, these sorts of problem are not the only reasons why MT is hard. Other problems include the sheer size of the undertaking, as indicated by the number of rules and dictionary entries that a realistic system will need, and the fact that there are many constructions whose grammar is poorly understood, in the sense that it is not clear how they should be represented, or what rules should be used to describe them. This is the case even for English, which has been extensively studied, and for which there are detailed descriptions -- both traditional `descriptive' and theoretically sophisticated -- some of which are written with computational usability in mind. It is an even worse problem for other languages. Moreover, even where there is a reasonable description of a phenomenon or construction, producing a description which is sufficiently precise to be used by an automatic system raises non-trivial problems.
3.1 Why do computers not translate better?
This section outlines the reasons for the difficulties encountered by present computer systems which attempt to produce partial or complete translations of texts from one natural language into another. The emphasis will be on what can or cannot be achieved automatically at present.
I shall not be concerned with the relative merits of different approaches to translation problems, for example, whether systems which switch between languages through some kind of interlingual representation are better than those which do not, or whether systems which employ methods from artificial intelligence are better than those which use more familiar methods of computational linguistics, and I shall say virtually nothing about what developments may bring improvements in the future. Furthermore, I shall not be describing any particular system of whatever kind, past or present, or any methods of analysis or processing, or how dictionaries may be structured and compiled, whether monolingual or bilingual.
My aim is to give an introduction, for those unfamiliar with machine translation (MT), to the main areas which must be taken into consideration, even if designers of particular systems have opted deliberately to ignore some of them. The aim is to highlight in the broadest terms those areas of translation which are relatively easy for computerised handling and those areas which are relatively difficult; I describe the present situation and make no predictions about the future. The purpose, therefore, is not to describe the inherent limitations of machine translation, but to give a rather simplified explanation of what can be expected from any system at the present time, whatever its particular methodology. (For more details about the way MT systems work see Hutchins & Somers 1992).
Finally, it should be clear that I am not providing a methodology for evaluating systems, only giving at best a checklist of areas in which evaluation can take place. Evaluation involves much more than the quality of translation, although that is obviously a most important aspect. It involves also, for example, the integration of an MT system in the whole processing framework: transmission and receipt of texts, formatting, dictionary updating, editing, printing, and distribution. It involves examination of the compatibility of systems with other computer facilities, and in particular it embraces the integration of the system into the working patterns, practices and attitudes of existing staff, and the aims of the organisation as a whole.
In the best of all possible worlds (as far as most Natural Language Processing is concerned, anyway) every word would have one and only one meaning. But, as we all know, this is not the case. When a word has more than one meaning, it is said to be lexically ambiguous. When a phrase or sentence can have more than one structure it is said to be structurally ambiguous.
Ambiguity is a pervasive phenomenon in human languages. It is very hard to find words that are not at least two ways ambiguous, and sentences which are (out of context) several ways ambiguous are the rule, not the exception. This is not only problematic because some of the alternatives are unintended (i.e. represent wrong interpretations), but because ambiguities `multiply'. In the worst case, a sentence containing two words, each of which is two ways ambiguous, may be 2 × 2 = 4 ways ambiguous, one with three such words may be 2 × 2 × 2 = 8 ways ambiguous, etc. One can, in this way, get very large numbers indeed. For example, a sentence consisting of ten words, each two ways ambiguous, and with just two possible structural analyses could have 2^10 × 2 = 2048 different analyses. The number of analyses can be problematic, since one may have to consider all of them, rejecting all but one.
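The multiplication of readings described above is simple arithmetic, and can be checked with a few lines of Python (the sentence lengths and ambiguity counts are the ones from the text):

```python
# Each lexically ambiguous word multiplies the number of readings,
# and each alternative structural analysis multiplies them again.
def reading_count(word_ambiguities, structural_analyses=1):
    """Worst-case number of analyses for a sentence."""
    total = structural_analyses
    for senses in word_ambiguities:
        total *= senses
    return total

# Two words, each two ways ambiguous: 4 readings.
print(reading_count([2, 2]))                            # 4
# Three such words: 8 readings.
print(reading_count([2, 2, 2]))                         # 8
# Ten two-ways-ambiguous words, two structural analyses: 2048.
print(reading_count([2] * 10, structural_analyses=2))   # 2048
```

This is exactly why a realistic parser cannot simply enumerate every analysis of a long sentence: the count grows exponentially with the number of ambiguous words.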
Fortunately, however, things are not always so bad. In the rest of this section we will look at the problem in more detail, and consider some partial solutions.
Imagine that we are trying to translate into French two sentences containing the word use, one in which it appears as a verb and one in which, following the, it appears as a noun.
In the first sentence use is a verb, and in the second a noun; that is, we have a case of lexical ambiguity. An English-French dictionary will say that the verb can be translated by (inter alia) se servir de and employer, whereas the noun is translated as emploi or utilisation. One way a reader or an automatic parser can find out whether the noun or verb form of use is being employed in a sentence is by working out whether it is grammatically possible to have a noun or a verb in the place where it occurs. For example, in English, there is no grammatical sequence of words which consists of the + V + PP --- so of the two possible parts of speech to which use can belong, only the noun is possible in the second sentence.
As we have noted we can give translation engines such information about grammar, in the form of grammar rules. This is useful in that it allows them to filter out some wrong analyses. However, giving our system knowledge about syntax will not allow us to determine the meaning of all ambiguous words. This is because words can have several meanings even within the same part of speech. Take for example the word button. Like the word use, it can be either a verb or a noun. As a noun, it can mean both the familiar small round object used to fasten clothes, as well as a knob on a piece of apparatus. To get the machine to pick out the right interpretation we have to give it information about meaning.
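The grammatical filtering just described can be sketched with a toy lexicon and a single hypothetical constraint (that a verb cannot directly follow a determiner such as the); the lexicon entries here are invented for illustration:

```python
# Toy lexicon: each word maps to its possible parts of speech.
LEXICON = {
    "the": {"DET"},
    "use": {"NOUN", "VERB"},
    "of": {"PREP"},
    "tool": {"NOUN"},
}

def filter_pos(prev_word, word):
    """Return the parts of speech of `word` that the grammar allows
    given the previous word. The only rule here: no verb directly
    after a determiner."""
    allowed = set(LEXICON[word])
    if prev_word is not None and "DET" in LEXICON[prev_word]:
        allowed.discard("VERB")
    return allowed

print(filter_pos("the", "use"))   # only the noun reading survives
print(filter_pos(None, "use"))    # both readings remain possible
```

As the text goes on to note, this kind of filtering resolves only part-of-speech ambiguity; it says nothing about which sense of a noun like button is intended.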
In fact, arming a computer with knowledge about syntax, without at the same time telling it something about meaning can be a dangerous thing. This is because applying a grammar to a sentence can produce a number of different analyses, depending on how the rules have applied, and we may end up with a large number of alternative analyses for a single sentence. Now syntactic ambiguity may coincide with genuine meaning ambiguity, but very often it does not, and it is the cases where it does not that we want to eliminate by applying knowledge about meaning.
We can illustrate this with some examples. First, let us show how grammar rules, differently applied, can produce more than one syntactic analysis for a sentence. One way this can occur is where a word is assigned to more than one category in the grammar. For example, assume that the word cleaning is both an adjective and a verb in our grammar. This will allow us to assign two different analyses to a sentence such as Cleaning fluids can be dangerous.
One of these analyses will have cleaning as a verb, and one will have it as an adjective. In the former (less plausible) case the sense is `to clean a fluid may be dangerous', i.e. it is about an activity being dangerous. In the latter case the sense is that fluids used for cleaning can be dangerous. Choosing between these alternative syntactic analyses requires knowledge about meaning.
It may be worth noting, in passing, that this ambiguity disappears when can is replaced by a verb which shows number agreement by having different forms for third person singular and plural. For example, the following are not ambiguous in this way: Cleaning fluids is dangerous has only the sense that the action is dangerous, and Cleaning fluids are dangerous has only the sense that the fluids are dangerous.
Another source of syntactic ambiguity is where whole phrases, typically prepositional phrases, can attach to more than one position in a sentence. For example, in a sentence such as Connect the printer to the word processor package with a Postscript interface, the prepositional phrase with a Postscript interface can attach either to the NP the word processor package, meaning ``the word-processor which is fitted or supplied with a Postscript interface'', or to the verb connect, in which case the sense is that the Postscript interface is to be used to make the connection.
Notice, however, that this example is not genuinely ambiguous at all: knowledge of what a Postscript interface is (in particular, the fact that it is a piece of software, not a piece of hardware that could be used for making a physical connection between a printer and an office computer) serves to disambiguate. Similar problems arise with coordinated phrases, which could mean that the printer and the word processor both need Postscript interfaces, or that only the word processor needs one.
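One crude way to encode the world knowledge appealed to here is a type check on the object of the with-phrase: software cannot serve as a physical instrument, so it must attach to the noun phrase. This is only a sketch, and the semantic type labels are invented:

```python
# Hypothetical semantic types for a few nouns.
SEM_TYPE = {
    "postscript_interface": "software",
    "cable": "hardware",
}

def attach_with_pp(pp_object):
    """Decide whether a 'with X' phrase modifies the verb (instrument
    reading) or the preceding noun phrase (attribute reading).
    Only hardware can be a physical instrument."""
    if SEM_TYPE.get(pp_object) == "hardware":
        return "verb"  # instrument: connect ... with a cable
    return "np"        # attribute: package with a Postscript interface

print(attach_with_pp("postscript_interface"))  # np
print(attach_with_pp("cable"))                 # verb
```

Real systems need far richer type hierarchies, but the design point stands: syntactic analysis alone cannot make this choice.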
This kind of real world knowledge is also an essential component in disambiguating the pronoun it in examples such as Put the paper in the printer. Then switch it on.
In order to work out that it is the printer that is to be switched on, rather than the paper, one needs to use the knowledge of the world that printers (and not paper) are the sort of thing one is likely to switch on.
There are other cases where real world knowledge, though necessary, does not seem to be sufficient. One such case is a dialogue in which two people who are re-assembling a printer digress to discuss toner and new paper before one of them, in the final sentence, uses the pronoun it.
It is not clear that any kind of real world knowledge will be enough to work out that it in the last sentence refers to the cartridge, rather than the new paper, or toner. All are probably equally reasonable candidates for fixing. What strongly suggests that it should be interpreted as the cartridge is the structure of the conversation --- the discussion of the toner and new paper occurs in a digression, which has ended by the time it occurs. Here what one needs is knowledge of the way language is used. This is knowledge which is usually thought of as pragmatic in nature. Analysing the meaning of texts like the above example is important in dialogue translation, which is a long term goal for MT research, but similar problems occur in other sorts of text.
Another sort of pragmatic knowledge is involved in cases where the translation of a sentence depends on the communicative intention of the speaker --- on the sort of action (the speech act) that the speaker intends to perform with the sentence. For example, a sentence of the form Can you X? could be a request for action, or a request for information, and this might make a difference to the translation.
In some cases, working out which is intended will depend on the non-linguistic situation, but it could also depend on the kind of discourse that is going on --- for example, is it a discourse where requests for action are expected, and is the speaker in a position to make such a request of the hearer? In dialogues, such pragmatic information about the discourse can be important for translating the simplest expressions. For example, the right translation of Thank you into French depends on what sort of speech act it follows. Normally, one would expect the translation to be merci. However, if it is uttered in response to an offer, the right translation would be s'il vous plaît (`please').
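The Thank you example can be reduced to a lookup keyed on the preceding speech act. The act labels here are invented for illustration; only the two French translations come from the text:

```python
def translate_thank_you(preceding_act):
    """Translate English 'Thank you' into French, conditioned on the
    speech act it responds to (act labels are illustrative)."""
    if preceding_act == "offer":
        return "s'il vous plaît"  # accepting an offer
    return "merci"                # the default case

print(translate_thank_you("offer"))      # s'il vous plaît
print(translate_thank_you("statement"))  # merci
```

The hard part, of course, is not the lookup but recognising the preceding speech act in the first place, which is itself a pragmatic inference problem.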
At the start of the previous section we said that, in the best of all possible worlds for NLP, every word would have exactly one sense. While this is true for most NLP, it is an exaggeration as regards MT. It would be a better world, but not the best of all possible worlds, because we would still be faced with difficult translation problems. Some of these problems are to do with lexical differences between languages --- differences in the ways in which languages seem to classify the world, what concepts they choose to express by single words, and which they choose not to lexicalize. We will look at some of these directly. Other problems arise because different languages use different structures for the same purpose, and the same structure for different purposes. In either case, the result is that we have to complicate the translation process. In this section we will look at some representative examples.
Examples of lexical differences are familiar to translators, but colour terms and the Japanese verbs of wearing are particularly striking. The latter show how languages differ not only with respect to the fineness or `granularity' of the distinctions they make, but also with respect to the basis for the distinction: English chooses different verbs for the action/event of putting on, and the action/state of wearing. Japanese does not make this distinction, but differentiates according to the object that is worn. In the case of English to Japanese, a fairly simple test on the semantics of the NPs that accompany a verb may be sufficient to decide on the right translation. Some of the colour examples are similar, but more generally, investigation of colour vocabulary indicates that languages actually carve up the spectrum in rather different ways, and that deciding on the best translation may require knowledge that goes well beyond what is in the text, and may even be undecidable. In this sense, the translation of colour terminology begins to resemble the translation of terms for cultural artifacts (e.g. words like English cottage, Russian dacha, French château, etc. for which no adequate translation exists, and for which the human translator must decide between straight borrowing, neologism, and providing an explanation). In this area, translation is a genuinely creative act, which is well beyond the capacity of current computers.
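The `fairly simple test on the semantics of the NPs' might look like the sketch below. The verb choices reflect standard Japanese distinctions (kiru for garments on the torso, haku for the legs and feet, kaburu for the head), but the semantic class labels and the tiny noun list are invented:

```python
# Semantic class of the object noun (illustrative labels).
OBJECT_CLASS = {
    "shirt": "torso", "jacket": "torso",
    "trousers": "legs", "shoes": "feet",
    "hat": "head",
}

# English 'wear / put on' maps to different Japanese verbs
# depending on the class of the thing worn.
WEAR_VERB = {
    "torso": "kiru",
    "legs": "haku", "feet": "haku",
    "head": "kaburu",
}

def translate_wear(obj):
    """Choose the Japanese verb for 'wear OBJ' from the object's class."""
    return WEAR_VERB[OBJECT_CLASS[obj]]

print(translate_wear("shirt"))  # kiru
print(translate_wear("shoes"))  # haku
print(translate_wear("hat"))    # kaburu
```

The point of the example: here the disambiguating information is actually in the sentence, so a lexical test suffices, whereas for colour terms it often is not.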
Calling cases such as those above lexical mismatches is not controversial. However, when one turns to cases of structural mismatch, classification is not so easy. This is because one may often think that the reason one language uses one construction, where another uses another is because of the stock of lexical items the two languages have. Thus, the distinction is to some extent a matter of taste and convenience.
A particularly obvious example of this involves problems arising from what are sometimes called lexical holes --- that is, cases where one language has to use a phrase to express what another language expresses in a single word. Examples of this include the `hole' that exists in English with respect to French ignorer (`to not know', `to be ignorant of'), and se suicider (`to suicide', i.e. `to commit suicide', `to kill oneself'). The problems raised by such lexical holes have a certain similarity to those raised by idioms: in both cases, one has phrases translating as single words. We will therefore postpone discussion of these until the discussion of idioms below.
One kind of structural mismatch occurs where two languages use the same construction for different purposes, or use different constructions for what appears to be the same purpose.
Cases where the same structure is used for different purposes include the use of passive constructions in English and Japanese. In Japanese, the particle wa, glossed as `TOP', marks the `topic' of the sentence --- intuitively, what the sentence is about.
We can see different constructions used for the same effect in cases like the following:
The first example shows how English, German and French choose different methods for expressing `naming'. The other two examples show one language using an adverbial ADJUNCT (just, or Dutch graag, `likingly' or `with pleasure'), where another uses a verbal construction. This is actually one of the most discussed problems in current MT, and it is worth examining why it is problematic.
The representations of a sentence like Sam has just seen Kim and its French translation Sam vient de voir Kim are relatively abstract (e.g. the information about tense and aspect conveyed by the auxiliary verb have is expressed in a feature), but they are still rather different. In particular, notice that while the main verb of the English representation is see, the main verb of the French one is venir-de. Now notice what is involved in writing rules which relate these structures (we will look at the direction English to French).
Figure: Translating have-just into venir-de
Of course, given a complicated enough rule, all this can be stated. However, there will still be problems because writing a rule in isolation is not enough. One must also consider how the rule interacts with other rules. For example, there will be a rule somewhere that tells the system how see is to be translated, and what one should do with its SUBJECT and OBJECT. One must make sure that this rule still works (e.g. its application is not blocked by the fact that the SUBJECT is dealt with by the special rule above; or that it does not insert an extra SUBJECT into the translation, which would give * Sam vient de Sam voir Kim). One must also make sure that the rule works when there are other problematic phenomena around in the same sentence.
Figure: The Representation of venir-de
Of course, one could try to argue that the difference between English just and French venir de is only superficial. The argument could, for example, say that just should be treated as a verb at the semantic level. However, this is not very plausible, and there are other cases where this move does not seem possible. Where English uses a `manner' verb and a directional adverb/prepositional phrase, French (and other Romance languages) use a directional verb and a manner adverb: where English classifies the event described as `running' (as in ran in), French classifies it as an `entering' (entrer en courant, literally `enter running').
A slightly different sort of structural mismatch occurs where two languages have `the same' construction (more precisely, similar constructions, with equivalent interpretations), but where different restrictions on the constructions mean that it is not always possible to translate in the most obvious way. The following is a relatively simple example of this.
In general, relative clause constructions in English consist of a head noun (such as letters), a relative pronoun (such as which), and a sentence with a `gap' in it. The relative pronoun (and hence the head noun) is understood as if it filled the gap. In English, there are restrictions on where the `gap' can occur. In particular, it cannot occur inside an indirect question, or a `reason' ADJUNCT; sentences which attempt this are ungrammatical. However, these restrictions are not exactly paralleled in other languages: Italian, for example, allows a gap inside an indirect question, and Japanese allows one inside a `reason' ADJUNCT. These sorts of problem are beyond the scope of current MT systems --- in fact, they are difficult even for human translators.
Roughly speaking, idioms are expressions whose meaning cannot be completely understood from the meanings of the component parts. For example, whereas it is possible to work out the meaning of a sentence containing the non-idiomatic kick the table on the basis of knowledge of English grammar and the meaning of words, this would not be sufficient to work out that If Sam kicks the bucket, her children will be rich can mean something like `If Sam dies, her children will be rich'. This is because kick the bucket is an idiom.
In many cases, a natural translation for an idiom will be a single word --- for example, the French word mourir (`die') is a possible translation for kick the bucket. This brings out the similarity, which we noted above, with lexical holes of the kind discussed earlier.
In general, there are two approaches one can take to the treatment of idioms. The first is to try to represent them as single units in the monolingual dictionaries. What this means is that one will have lexical entries such as kick_the_bucket. One might try to construct special morphological rules to produce these representations before performing any syntactic analysis --- this would amount to treating idioms as a special kind of word, which just happens to have spaces in it. As will become clear, this is not a workable solution in general. A more reasonable idea is not to regard lexical lookup as a single process that occurs just once, before any syntactic or semantic processing, but to allow analysis rules to replace pieces of structure by information which is held in the lexicon at different stages of processing, just as they are allowed to change structures in other ways. This would mean that kick the bucket and the non-idiomatic kick the table would be represented alike (apart from the difference between bucket and table) at one level of analysis, but that at a later, more abstract representation kick the bucket would be replaced with a single node, with the information at this node coming from the lexical entry kick_the_bucket. This information would probably be similar to the information one would find in the entry for die.
In any event, this approach will lead to translation rules saying something like the following, in a transformer or transfer system (in an interlingual system , idioms will correspond to collections of concepts, or single concepts in the same way as normal words).
in_fact => en_fait
in_view_of => étant_donné
kick_the_bucket => mourir
kick_the_bucket => casser_sa_pipe
The final example shows that one might, in this way, be able to translate the idiom kick the bucket into the equivalent French idiom casser sa pipe --- literally `break his/her pipe'.
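In a transfer system, rules of this kind amount to a keyed lookup from source lexical entries to target ones; a minimal sketch using exactly the rules listed above (the pass-through behaviour for unknown entries is an assumption, not something the text specifies):

```python
# Transfer rules from the text; one idiom may have several targets.
TRANSFER = {
    "in_fact": ["en_fait"],
    "in_view_of": ["étant_donné"],
    "kick_the_bucket": ["mourir", "casser_sa_pipe"],
}

def transfer(entry):
    """Return the possible French entries for an English lexical entry.
    Entries with no rule are passed through unchanged."""
    return TRANSFER.get(entry, [entry])

print(transfer("kick_the_bucket"))  # ['mourir', 'casser_sa_pipe']
print(transfer("in_fact"))          # ['en_fait']
```

Note that the one-to-many entry for kick_the_bucket leaves a choice (plain mourir versus the idiomatic casser sa pipe) that some later component, or a human, must resolve.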
The second approach to idioms is to treat them with special rules that change the idiomatic source structure into an appropriate target structure. This would mean that kick the bucket and kick the table would have similar representations all through analysis . Clearly, this approach is only applicable in transfer or transformer systems, and even here, it is not very different from the first approach --- in the case where an idiom translates as a single word, it is simply a question of where one carries out the replacement of a structure by a single lexical item, and whether the item in question is an abstract source language word such as kick_the_bucket or a normal target language word (such as mourir).
Figure: Dealing with Idioms
One problem with sentences which contain idioms is that they are typically ambiguous, in the sense that either a literal or idiomatic interpretation is generally possible (i.e. the phrase kick the bucket can really be about buckets and kicking). However, the possibility of having a variety of interpretations does not really distinguish them from other sorts of expression. Another problem is that they need special rules (such as those above, perhaps), in addition to the normal rules for ordinary words and constructions. However, in this they are no different from ordinary words, for which one also needs special rules. The real problem with idioms is that they are not generally fixed in their form, and that the variation of forms is not limited to variations in inflection (as it is with ordinary words). Thus, there is a serious problem in recognising idioms.
This problem does not arise with all idioms. Some are completely frozen forms whose parts always appear in the same form and in the same order. Examples are phrases like in fact, or in view of. However, such idioms are by far the exception. A typical way in which idioms can vary is in the form of the verb, which changes according to tense, as well as person and number. For example, with bury the hatchet (`to cease hostilities and become reconciled'), one gets He buries/buried/will bury the hatchet, and They bury/buried/shall bury the hatchet. Notice that the variation in form one gets here is exactly what one would get if no idiomatic interpretation was involved --- i.e. by and large idioms are syntactically and morphologically regular --- it is only their interpretations that are surprising.
A second common form of variation is in the form of the possessive pronoun in expressions like to burn one's bridges (meaning `to proceed in such a way as to eliminate all alternative courses of action'). This varies in a regular way with the subject of the verb: He burned his bridges, She burned her bridges, They burned their bridges.
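Because idioms inflect and vary regularly, a recogniser cannot simply match surface strings; it must match at a more abstract level, e.g. on lemmas, with a slot for the varying possessive. The following sketch makes that point with a toy lemmatiser and a hand-written pattern, both of which are assumptions for illustration.

```python
# Recognising a variable idiom (burn one's bridges) by matching lemmas
# rather than surface forms, with POSS as a slot for any possessive
# pronoun agreeing with the subject.

LEMMAS = {
    "burns": "burn", "burned": "burn", "burnt": "burn",
    "his": "POSS", "her": "POSS", "their": "POSS", "my": "POSS",
}

def lemma(word):
    """Toy lemmatiser: fall back to the surface form if unknown."""
    return LEMMAS.get(word, word)

def matches(pattern, tokens):
    """True if the token sequence realises the idiom pattern lemma-by-lemma."""
    return len(pattern) == len(tokens) and all(
        p == lemma(t) for p, t in zip(pattern, tokens))

BURN_BRIDGES = ["burn", "POSS", "bridges"]
print(matches(BURN_BRIDGES, ["burned", "his", "bridges"]))    # True
print(matches(BURN_BRIDGES, ["burned", "the", "bridges"]))    # False
```

The second call fails because the determiner the does not fill the possessive slot, which captures the intuition that he burned the bridges is not (necessarily) an instance of the idiom.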
Rather different from idioms are expressions which are usually referred to as collocations. Here the meaning can be guessed from the meanings of the parts. What is not predictable is the particular words that are used.
In what we have called linguistic knowledge (LK) systems, at least, collocations can potentially be treated differently from idioms. This is because for collocations one can often think of one part of the expression as being dependent on, and predictable from, the other. For example, one may think that make, in make an attempt, has little meaning of its own, and serves merely to `support' the noun (such verbs are often called light verbs, or support verbs). This suggests one can simply ignore the verb in translation, and have the generation or synthesis component supply the appropriate verb. For example, in Dutch, this would be doen, since the Dutch for make an attempt is een poging doen (`do an attempt').
One way of doing this is to have analysis replace the lexical verb (e.g. make) with a `dummy verb' (e.g. VSUP). This can be treated as a sort of interlingual lexical item, and replaced by the appropriate verb in synthesis (the identity of the appropriate verb has to be included in the lexical entry of nouns, of course --- for example, the entry for poging might include the feature support_verb=doen). The advantage is that support verb constructions can be handled without recourse to the sort of rules required for idioms (one also avoids having rules that appear to translate make into poging `do').
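The VSUP treatment can be sketched as follows. The lexicon contents, the set of light verbs, and the naive Dutch word order are all illustrative assumptions; the point is only where the support verb is recovered, namely from the noun's lexical entry during synthesis.

```python
# Sketch of the VSUP treatment: analysis replaces a light verb with the
# dummy VSUP; synthesis supplies the verb recorded in the noun's entry.

NOUN_LEXICON = {
    "attempt": {"translation": "poging", "support_verb": "doen"},
}
LIGHT_VERBS = {"make"}

def analyse(verb, noun):
    """Replace a light verb with the interlingual dummy VSUP."""
    return ("VSUP" if verb in LIGHT_VERBS else verb, noun)

def synthesise_dutch(verb, noun):
    """Realise the phrase in Dutch, resolving VSUP from the noun's entry."""
    entry = NOUN_LEXICON[noun]
    v = entry["support_verb"] if verb == "VSUP" else verb
    return f"een {entry['translation']} {v}"

print(synthesise_dutch(*analyse("make", "attempt")))
# een poging doen
```

No rule ever translates make into anything: the English verb disappears during analysis, and doen is introduced purely from the entry for poging, which is the advantage noted above.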
Of course, what one is doing here is simply recording, in each lexical entry, the identity of the words that are associated with it, for various purposes --- e.g. the fact that the verb that goes with attempt is make (for some purposes, anyway). An interesting generalisation of this is found in the idea of lexical functions. Lexical functions express a relation between two words. Take the case of heavy smoker, for example. The relationship between heavy and smoker is that of intensification, which could be expressed by the lexical function Magn as follows, indicating that the appropriate adjective for English smoker is heavy, whereas that for the corresponding French word fumeur is grand (`large') and that for the German word Raucher is stark (`strong').
(English) Magn(smoker) = heavy
(French) Magn(fumeur) = grand
(German) Magn(Raucher) = stark
If one wants to translate heavy smoker into French, one needs to map smoker into fumeur, together with the information that fumeur has the lexical function Magn applied to it, as in English. It would be left to the French synthesis module to work out that the value Magn(fumeur) = grand, and insert this adjective appropriately. Translation into German is done in the same way.
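The Magn tables above translate directly into per-language dictionaries. In the sketch below, the Magn values come from the examples in the text; the noun translation table and the bare string concatenation (with no adjective inflection) are simplifying assumptions.

```python
# Lexical functions per language: Magn picks the intensifying modifier
# for a noun. Translation maps the noun, then re-applies Magn in the
# target language during synthesis.

MAGN = {
    "en": {"smoker": "heavy"},
    "fr": {"fumeur": "grand"},
    "de": {"Raucher": "stark"},
}
NOUN = {"fr": {"smoker": "fumeur"}, "de": {"smoker": "Raucher"}}

def translate_magn(noun, target):
    """Translate `Magn(noun) noun' from English into the target language."""
    n = NOUN[target][noun]
    return f"{MAGN[target][n]} {n}"

print(translate_magn("smoker", "fr"))  # grand fumeur
print(translate_magn("smoker", "de"))  # stark Raucher (synthesis would still inflect: starker)
```

Notice that English heavy is never consulted when translating: only the fact that Magn applies is carried across, exactly as described above.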