Chapter 5

POSTSCRIPT

by Thomas Wasow

The preceding chapters have presented an enormous amount of information about the assumptions, mechanisms, and results of three contemporary theories of syntax. To the newcomer, this may seem overwhelming. The purpose of this postscript is to provide a somewhat more global perspective, bringing out some of the important similarities and differences among the theories.1 Since one important similarity is that they share a common ancestry, I will begin with a few remarks on the recent history of theoretical syntax.

For this purpose, it is useful to note the unique position occupied by Noam Chomsky in the field of theoretical linguistics. Probably no other academic discipline has been so dominated by one individual in recent times. Virtually every innovation that has occurred in this field over the past quarter century has been either an elaboration of or a reaction to some suggestion of Chomsky's. On at least three occasions, Chomsky has revolutionized the way in which syntactic theorizing has been pursued. A brief summary of these changes will be helpful in contrasting the theories covered in this monograph.

THREE PHASES

Chomsky's early work, best exemplified by Syntactic Structures, was primarily concerned with establishing the need for generative grammars. At the time, linguists (in America, at least) focussed their attention on methods of data collection and analysis. The goal was to develop precise and objective procedures for classifying corpora of utterances. Chomsky argued that linguists should be concerned primarily with theory construction rather than methodology. He pointed out that languages are infinite and argued that this fact should be of central importance in linguistics. Rather than worrying about the taxonomy of finite corpora, Chomsky proposed that linguists should be writing grammars for infinite languages and testing them against the intuitions of native speakers.
This phase of Chomsky's work was also characterised by a high degree of formal explicitness and an interest in the mathematical properties of grammar formalisms. Chomsky (1959) developed a hierarchy of grammar types (now known as the Chomsky Hierarchy), and proved a number of theorems about what kinds of sets of strings each grammar type could and couldn't generate. These mathematical results, it was claimed, could be used to demonstrate conclusively the inadequacy of certain theories of syntax, on the grounds that natural languages exhibited constructions provably beyond the generative capacity of formalised versions of the theories in question. In the place of the discredited formalisms, Chomsky put forward his theory of transformational grammar. Advocating "the method of rigorously stating a proposed theory and applying it strictly to linguistic material with no attempt to avoid unacceptable conclusions by ad hoc adjustments or loose formulation" (Chomsky, 1957, 5), he presented his analyses in the form of explicit rules for generating a substantial fragment of English.

The emphasis, in short, was largely on what was later to be called "observational adequacy": generating the correct set of strings for a natural language. Meaning was deemed to be outside of the realm of linguistics, and psychological considerations played no role in syntactic theorizing. While some attention was paid to making analyses simple and elegant, top priority was given to developing a theory capable of generating all and only those strings that are well-formed sentences. The major result of this period was the claim that observational adequacy could be achieved with a transformational grammar, but not with a phrase structure grammar.

In the mid-1960's, Chomsky's focus changed dramatically.
The so-called `Standard Theory' of Chomsky (1965) (following earlier work by Fodor, Katz, and Postal) included a semantic component and was explicitly linked to questions of how knowledge of language was represented in the mind. Language was viewed as a system of connections between meanings and sounds, and the job of the linguist was to discover the rules that speakers employ in associating meanings with sounds.

The Standard Theory identified two distinguished levels of representation for sentences: deep and surface structures. Deep structures served as the basis for semantic interpretation, and surface structures as the basis for phonological interpretation. The two levels were related to one another by transformations. Hence, transformations played the central role in connecting meanings with sounds.2 And it was claimed that the same transformational rules needed to distinguish sentences from non-sentences would also serve this connecting function between sounds and meanings. Moreover, it was expected that they would turn out to be "psychologically real," in the sense that they could be shown to play a role in human sentence-processing.3

The focus of attention, then, was what Chomsky (1964) called "descriptive adequacy": modeling the ability of speakers to relate meanings and sounds.4 Interest in the mathematical properties of the theory was subordinated to concern with semantic and psychological questions; consequently, standards of explicitness and rigor were relaxed. The principal result claimed for this period was that descriptive adequacy could be achieved with a transformational grammar of a certain form.

The widespread acceptance of the Standard Theory lasted only a few years. Beginning with the bitter battle over Generative Semantics in the late sixties, generative grammar became fragmented and factionalised.
Until the late seventies, most syntactic research was carried on within some revised and/or extended form of the Standard Theory, though radically different approaches also began to gain visibility.

A common theme in the most important work of this period was the need to constrain the power of transformational grammar. The Standard Theory was so rich in descriptive devices that its ability to provide analyses of particular constructions began to seem rather unremarkable. This intuition was substantiated by Peters and Ritchie (1973), who proved that standard transformational grammars had the power of Turing machines: that is, that they could be used to formalize any procedure that was in principle formalizable. Further, linguists began to take seriously Chomsky's earlier claim that there was a higher level of adequacy than descriptive adequacy to aspire to. A theory would attain "explanatory adequacy" if it provided a means for inferring a grammar on the basis of the facts of the language. In other words, explanatory adequacy is concerned with learnability: it says that syntactic theory should only permit grammars that could be learned on the basis of the primary data available to real language learners. Such a theory could not have the descriptive power of existing versions of generative grammar, which all permitted infinitely many different analyses of any phenomenon.

The third phase of Chomsky's work is dominated by the quest for explanatory adequacy. His books and papers consistently assert that the fundamental question linguistics needs to answer is how language can be learned. The most striking fact about language, he says, is the gap between the small and arbitrary corpora children are exposed to and the unbounded ability people have to produce and understand utterances.
This "argument from the poverty of the stimulus" indicates that the innate human language faculty narrowly constrains the class of possible hypotheses available to the child about the structure of the language being learned. Then even a small amount of data about a language may provide the learner with enough information to identify the language uniquely. Thus, this line of reasoning leads to a picture in which as much as possible is factored out of the grammars of particular languages and put into the theory of grammar, or "universal grammar" as it is sometimes called.

It is difficult to pinpoint when this third phase began. As noted in Footnote 4, Chomsky advocated the goal of explanatory adequacy during his second phase. However, the concern with learnability actually did not become a serious motivating force in syntactic analyses until the mid-seventies. Government-Binding Theory (GB), introduced by Chomsky (1981), represents the culmination of this tendency. GB research seeks to reduce the grammars of particular languages to settings for a small number of parameters, leaving the remainder to a rich set of universal principles. Because of the focus on universals, the GB literature differs from earlier transformational work in devoting considerable attention to cross-language comparisons. Only in this way can the parameters of language variation be identified and tested.

With the concentration on learnability and universal grammar, many details in the analyses of particular constructions began to receive less attention. Likewise, concern for explicitness and formalization diminished. Indeed, there seems to be quite a general trade-off between theoretical elegance and attention to empirical detail. As Chomsky has sought to attain higher and higher levels of adequacy in his theories, he has concerned himself less and less with analyzing the specifics of particular constructions.
Thus, although the discussion of levels of adequacy in the literature claims that attainment of any level presupposes attainment of the lower levels, in actual practice there has been a cost attached each time the sights have been raised.

THE PLACE OF GPSG AND LFG

The preceding section described a monotonic course of development in syntactic theorizing over the past thirty years, with formal rigor and attention to grammatical details gradually giving way to an emphasis on universal grammar and larger theoretical questions. Generalized Phrase Structure Grammar (GPSG) and Lexical-Functional Grammar (LFG) can be viewed as attempts to preserve certain attractive features of the earlier phases of generative grammar. More specifically, I will argue that GPSG represents a return to a serious concern for observational adequacy, while LFG's emphasis is on descriptive adequacy. I hasten to add, however, that this is an oversimplification, as both GPSG and LFG do address the question of universals in substantive ways. Nevertheless, I think it is fair to characterize the emphases of the theories in this way, and that it is useful to do so, in trying to fit them into a larger picture.

Like Chomsky's early work, the GPSG literature exhibits a keen interest in the mathematical properties of grammar formalisms. More specifically, Gazdar and others (see especially Pullum and Gazdar (1982)) have revived interest among linguists in questions of "weak generative capacity," that is, of the sets of strings generable by various types of grammars. In order to investigate such questions, it is necessary that the grammars in question be formulated with considerable precision. Thus, work in GPSG resembles the transformational literature of twenty-five years earlier in its formal rigor. Likewise, GPSG papers typically present explicit grammar fragments, which are evaluated on the basis of the acceptability of the strings they generate.
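The notion of weak generative capacity lends itself to a small concrete illustration. The sketch below is my own (the function names are invented for the example, not drawn from the GPSG literature): the context-free grammar with rules S -> a S b and S -> a b generates the string set consisting of n a's followed by n b's, a set that provably lies beyond the capacity of any finite-state (regular) grammar.

```python
# A toy illustration of weak generative capacity: the context-free
# grammar S -> a S b | a b generates { a^n b^n : n >= 1 }, which no
# finite-state (regular) grammar can generate.

def generate(max_n):
    """Enumerate a^n b^n for 1 <= n <= max_n, mirroring repeated
    application of S -> a S b, terminated by S -> a b."""
    return ["a" * n + "b" * n for n in range(1, max_n + 1)]
    # generate(3) -> ["ab", "aabb", "aaabbb"]

def derives(string):
    """Recognizer for the same language: accept exactly the strings
    consisting of n a's followed by n b's, for some n >= 1."""
    n = len(string) // 2
    return (len(string) % 2 == 0 and n >= 1
            and string == "a" * n + "b" * n)
```

The point of the classic separation argument is that a finite-state device would have to keep count of an unbounded number of a's in order to check the matching b's, which no fixed number of states can do.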
One striking dissimilarity, however, between GPSG and Chomsky's early theory is the role of semantics. Whereas Syntactic Structures excluded the study of meaning from the domain of linguistics, GPSG's semantic analyses are such an integral part of the theory as to be inseparable from the syntactic proposals. More than anything else, this reflects the fact that the intervening decades saw the development of a rigorous formal approach to natural language semantics (Montague Grammar). Unlike the work on meaning by transformationalists in the sixties, no psychological significance has been attached to the semantics of GPSG. More generally, like the Syntactic Structures theory, GPSG has not been tied to any psychological claims, and its proponents have generally been agnostic on the question of the proper relationship between theories of grammar and models of language users.5 Moreover, though GPSG differs from early transformational grammar in that it makes far stronger claims about universal grammar, its proponents do not invoke the problem of language acquisition as the reason for putting forward such claims. Rather, the motivation appears to stem from general methodological considerations, viz., that universal claims are preferable to existential ones. Thus, even with regard to the quest for linguistic universals, GPSG has been largely free from psychological claims.

LFG, in contrast, has its origin in the concern for the role of grammatical theory in models of processing. Ronald Kaplan, one of the developers of LFG, was trained as a psychologist, and began his career doing experimental work on human sentence processing. In an important paper marking the transition from transformational grammar to LFG, Bresnan (1978) argued for the innovations she proposed on the grounds that they were psychologically "realistic." The focus on LFG as a basis for modeling how people process language has been maintained, especially in work by Marilyn Ford.
Further, Pinker (1984) has argued that LFG can provide natural explanations for many facts about language acquisition. In short, LFG resembles the work of the Standard Theory period in its emphasis on descriptive adequacy.

LFG and standard transformational grammar are also alike in positing two distinguished levels of grammatical representation, one which is used as the basis for semantic interpretation and one which is used as the basis for phonological interpretation. Of course, the f-structures of LFG do not look like the deep structures of the Standard Theory, but their roles in the two theories are quite similar. Specifically, deep structures and f-structures are the loci of grammatical information and constraints which depend on the predicate-argument relations in sentences. Especially important among these are subcategorization and control relations.

A crucial difference between LFG and transformational grammar is the place of grammatical functions (or "grammatical relations," as they are sometimes called) like `subject' and `object' in the two theories. Chomsky (1965) argued that insofar as grammatical functions played a role in linguistic descriptions, they could be defined in terms of configurations in phrase structure trees. For example, he proposed that a subject is an NP directly dominated by S. LFG, in contrast, takes grammatical functions to be primitives of the theory, in terms of which a great many rules and conditions are stated.

In this, LFG is like the theory of Relational Grammar developed by David Perlmutter, Paul Postal, and others over the past dozen years or so. During that period, a large body of literature has been produced, analyzing syntactic phenomena (especially those having to do with the internal structure of clauses) in an impressively wide variety of languages. Though it is rarely acknowledged, the influence of Relational Grammar on all contemporary work in syntactic theory would be hard to overestimate.
Of the theories discussed in this monograph, only LFG adopts the central tenet of Relational Grammar (namely, that grammatical functions are primitive), but proponents of all three have devoted considerable energy to describing phenomena and capturing generalizations first discovered by relational grammarians. Noteworthy examples are Burzio (1981), Dowty (1982), and Bresnan (1982, passim).

I return now to the main theme of this section. The three theories represented in this monograph correspond, at least in what they choose to emphasize, to the three stages of Chomsky's work described in the previous section. The correspondence is not, of course, perfect. However, the styles of doing linguistics, the kinds of questions asked, and the criteria for evaluating analyses do match fairly well. It is important to understand that I am not accusing GPSG or LFG of arrested development. As I pointed out above, each stage in Chomsky's work has had its strengths and weaknesses. As the focus changed from observational to descriptive to explanatory adequacy, standards of explicitness, rigor, and attention to empirical detail declined. This is natural, for the larger the questions addressed, the harder it is to give complete answers. Like Chomsky's three stages, the theories under consideration constitute different choices regarding this trade-off.

SOME POINTS OF CONVERGENCE

In spite of the considerable differences in emphasis, formalism, and substance, there are some respects in which the theories under discussion are surprisingly similar. In this section, I will describe some that have struck me; there may well be others. These points of convergence are of special interest because they indicate areas where linguists may have attained some real new insight transcending the more superficial differences among theories.

Perhaps the most obvious similarity is the reduced role of transformations in these theories.
Their common ancestor, standard transformational grammar, encoded most relationships among the elements of sentences by positing levels of representation at which the related elements were identical or adjacent; it then turned these abstract representations into the actual sentences by means of transformational rules. For example, the Standard Theory posited a transformation of `Equi-NP Deletion' to remove the subject of a subordinate clause when it was identical with an NP in the main clause; so, the fact that Pat is the `understood' subject of leave in Pat wants to leave would be encoded by deriving it from Pat wants Pat to leave by deletion under identity. None of the theories considered in this monograph adopts this kind of analysis.

More generally, alternative devices for encoding grammatical relationships have supplanted transformations almost everywhere. GPSG and LFG have no transformations, and GB has only one (Move-α). Moreover, most of the real work accomplished by Move-α is a function of the coindexing that is required between the moved element and the pre-movement position. Indeed, Chomsky (1982, 33) goes so far as to say, "It is immaterial ... whether Move-α is regarded as a rule forming s-structure from d-structure, or whether it is regarded as a property of s-structures that are `base-generated' ... It is in fact far from clear that there is a distinction apart from terminology between these two formulations."

The reduced status of transformations in contemporary linguistic theories can be traced back to the observation that a great many of the transformations in the Standard Theory produced outputs that were structurally identical to base-generated trees. For example, a passive sentence like The dog was chased by the cat appears to have the same constituent structure as an active sentence like The dog was racing by the house.
This fact led Emonds (1976) to develop a theory in which a large class of transformations was required to be `structure preserving.' This idea is manifested, in a more general form, in the Projection Principle of GB. It also served as an important motivation for eliminating the transformational component altogether in GPSG and LFG. If deep structures and surface structures were isomorphic, it was reasoned, then why relate them by means of rules with the power to alter structure?

There is considerable diversity in the mechanisms proposed in the different theories to do what had formerly been done with transformations. Even here, however, I think that there are significant commonalities. This is most evident in the treatment of unbounded dependencies. In standard transformational grammar, these were handled by means of rules that moved (or deleted) elements across arbitrarily large stretches of a sentence. Contemporary theories, on the other hand, adopt analyses in which the relationship between `fillers'6 and `gaps' is mediated by intervening elements. In GB, this is accomplished by means of `successive cyclic' movement: wh-elements are moved through the COMP nodes of intervening clauses, and the resulting coindexed traces form a chain connecting the surface position of the wh-element with its d-structure position. In GPSG, the SLASH feature is passed up the tree along a path of nodes connecting the gap with its filler. In the current LFG analysis, fillers and gaps are connected by means of a sequence of grammatical functions, summarized by the notation `(↑ ... )', though in this case the path through the f-structure is stated as one expression, not as a series of links.
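The locality behind a mechanism like SLASH passing can be sketched in a few lines of code. The sketch below is my own illustration, not an analysis from the GPSG literature; the Node class and its attributes are invented for the example.

```python
# A minimal sketch of the idea behind SLASH feature passing: an
# unbounded filler-gap dependency is reduced to a chain of strictly
# local parent-daughter dependencies.

class Node:
    def __init__(self, label, children=(), slash=None, gap=False):
        self.label = label              # category: "S", "VP", "NP", ...
        self.children = list(children)
        self.slash = slash              # category of a missing element, e.g. "NP"
        self.gap = gap                  # True for the empty element itself

def slash_path_ok(node):
    """Check, one tree level at a time, that every SLASH feature is
    either passed down to exactly one daughter or terminated by a gap
    daughter.  No check inspects more than adjacent nodes, yet jointly
    the local checks guarantee an unbroken path from filler to gap."""
    if node.slash is not None and not node.gap:
        carriers = [d for d in node.children if d.slash == node.slash]
        if len(carriers) != 1:
            return False
    return all(slash_path_ok(d) for d in node.children)

# Rough rendering of "who Kim likes __": the chain S/NP, VP/NP connects
# the filler "who" to the NP gap.
gap = Node("NP", slash="NP", gap=True)
tree = Node("S", [
    Node("NP"),                                   # who (the filler)
    Node("S", [Node("NP"),                        # Kim
               Node("VP", [Node("V"), gap],       # likes __
                    slash="NP")],
         slash="NP"),
])
```

The essential design point mirrors the text: `slash_path_ok` never relates two distant nodes directly; the filler-gap connection emerges from a sequence of purely local checks.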
While the formal mechanisms are different in these three theories, they share the property of effectively reducing unbounded dependencies to sequences of local dependencies, thereby taking the same position on what had been a controversial issue in the transformational literature of the early seventies. The differences among these analyses are slight in comparison with the difference between any of them and any treatment that posits no licensing relation between filler and gap. While clear empirical evidence has been found for the existence of elements sensitive to the presence of unbounded dependencies (see Zaenen (1983)), I know of no direct argument to choose among coindexing, feature passing, and sequences of functions.

Similarly, the three theories agree, roughly speaking, that the gap in these constructions must, in some sense, be licensed by a lexical element in its clause. In GB, this requirement is embodied in the Empty Category Principle.7 In GPSG, it follows from the Lexical Head Constraint that the Slash Termination Metarules can only introduce gaps as sisters to lexical heads. Finally, since unbounded dependencies in LFG are handled by means of sequences of grammatical functions, a gap must be identified in terms of the grammatical function it plays with respect to some lexical predicate.8 In short, not only must fillers and gaps be connected by some chain of intermediate elements, but the gap itself must stand in a special relationship with a lexical head close to it.

These somewhat technical similarities reflect what I believe is a more fundamental insight, namely that unbounded dependencies are permitted only under rather limited circumstances.
Standard transformational grammar treated movement or deletion over arbitrary stretches as the norm, specifying certain configurations as `islands' blocking such operations; contemporary theories, in contrast, treat dependencies between widely separated elements as the exception, requiring special mechanisms to license them.

Another basic idea embodied in these three theories is that clause structure is largely predictable from the semantics of predicates. That is, if you know what a verb (or a predicative adjective or noun) means, you can tell a great deal about what else will occur in a clause it heads. Grammar rules are needed only to state certain language-wide generalizations about how the pieces of sentences are put together and to deal with apparent exceptions to the normal patterns. Most of what was stipulated in the grammars of earlier theories is taken to be a function of lexical semantics.

This idea is clearest in GB. The θ-Criterion says (oversimplifying somewhat) that the meaning of a predicate determines what grammatical arguments it will have. The Projection Principle guarantees that the structure determined by the lexical head's meaning cannot be altered in essential ways. The problem of acquiring a language, then, reduces largely to learning the meanings of words. There is more to it, of course, such as discovering the basic order of constituents, determining what the bounding nodes are, learning which verbs trigger S'-deletion (such as seem and believe in English), and so on. In the canonical cases, however, sentence structure is a projection of the semantics of words.

In LFG, the Principle of Function-Argument Biuniqueness ensures that grammatical functions will be paired with thematic roles. The Completeness and Coherence conditions, in turn, see to it that every grammatical function is filled by exactly one constituent in the f-structure.
Hence, aside from exceptional predicates permitting non-thematic functions, clause structure is essentially determined by the thematic roles required by the predicate. Again, a certain amount of idiosyncratic information must be stipulated, but far less than in earlier theories.

The version of GPSG presented in Chapter 3 does not really embody this idea of clause structure as a projection of lexical semantics. The closest thing to it is "shake 'n' bake semantics": the lexical type of a predicate determines the number and type of arguments that it will combine with; any sentence in which a predicate occurred with the wrong number or types of arguments would be uninterpretable. But the alert reader will recall that subcategorization is handled in the syntax, so that a sentence whose verb has the wrong number or category of sister constituents will be ungrammatical, not just semantically anomalous. That is, the lexical meaning of a predicate determines how the semantics of the pieces of a clause should be composed, but it does not determine what those pieces will be.

In recent modifications of GPSG, however, this has been changed. In particular, Pollard's (1984, 1985) work on Head-driven Phrase Structure Grammar involves specifying subcategorization information in lexical entries, rather than in ID-rules. This permits the subcategorization to be linked rather directly to the lexical semantics, while at the same time allowing the ID-rules to be extremely general schemata. Thus, in its most recent incarnations, GPSG resembles GB and LFG in deriving canonical clause structure largely from lexical semantics.

CONCLUSION

It is interesting that contemporary syntactic theories seem to be converging on the idea that sentence structure is generally predictable from word meanings, for this seems to be close to the naive view of a great many non-linguists.
The layperson generally equates languages with collections of words, assuming, for example, that learning a new language consists of learning its vocabulary. It might seem unimpressive, then, that linguists are finally coming around to this common-sense view. I contend, quite the contrary, that this is rather remarkable, for the conventional wisdom appears, on closer inspection, to be hopelessly simplistic. Consider, for example, what happens when one takes a sentence of one language and translates each word into some unrelated language; in general, such word-for-word translations are not only not sentences of the second language, they are not even comprehensible to its speakers. Further, most linguists can produce numerous examples of synonyms or near synonyms that exhibit significant syntactic differences. For example, likely takes an infinitival complement (as in Pat is likely to win), but probable does not (hence, *Pat is probable to win); and have, in the sense of possession, cannot appear in the passive voice, unlike other verbs of possession (hence, Too many TV stations are owned/*had by fundamentalists). Thus, the naive view appears at first to be too naive to be taken very seriously; and it was not, as indicated by the emphasis on the study of rule systems in earlier stages of generative grammar.

What has happened to change this is that syntacticians have identified the ways in which languages deviate from the naive view. They have isolated certain kinds of grammatical information that are not predictable from lexical semantics, and have developed theories to permit them to be expressed compactly. The surprising thing (to linguists) has been how little needs to be stipulated beyond lexical meaning. Languages differ (within certain specifiable limits) in constituent and word ordering, in where unbounded dependencies will be permitted, in which constituents can be omitted, in which words have syntactic idiosyncrasies, and in a few other ways.
They do not, as an earlier generation of linguists maintained, differ without limit. Indeed, the naive view that word meanings determine sentence structure turns out not to be a bad first approximation, though it leaves the most challenging problems in the study of syntax still to be accounted for.

In short, there is evidence here of real progress. Current theories of syntax have focussed on a few key types of phenomena, namely those that aren't fully explainable in terms of what the words in the sentences mean. These are the loci both of cross-language variation and of the most interesting linguistic universals. There is significant disagreement about what the relevant generalizations are and how they should be formulated, but there is even more significant agreement about what the important phenomena are.

Finally, it should be emphasised that all of the theories presented here, as well as the relationships among them, are in the process of fairly rapid change. It is safe to say that most of what appears in this work will be rendered obsolete within a few years. However, if my assessment that genuine progress is taking place is correct, then some familiarity with the current state of these syntactic theories should be useful to the specialist and interested non-specialist alike.

1 It should be emphasised that, like much else in syntax, the issue of how much real difference there is among contemporary grammatical theories (and where the differences lie) is quite controversial. By and large, those with the strongest commitment to one theory tend to see greater differences between it and the others than those of us who are less committed. This postscript, then, must be taken as my own personal perspective on the current state of the field. A number of people (including some close colleagues) have taken exception to some of the claims made below.
2 The most extreme version of this view was what was called "Generative Semantics," which held that deep structures could be identified with representations of meaning (see Newmeyer (1980, Chaps. 4 and 5) for an overview and references). While Chomsky opposed Generative Semantics, he did at one time endorse the idea that transformations provided the primary link between meaning and sound.

3 For a survey of the literature on this question, see Fodor, Bever, and Garrett (1974).

4 Some clarification is in order here, for in introducing the distinctions among levels of adequacy for a linguistic theory, Chomsky argued that standard transformational grammar attained (at least on some points) the highest level, namely "explanatory adequacy". Thus, my association of the Standard Theory with concern for descriptive adequacy appears to be in direct conflict with what Chomsky himself asserted at the time. I contend, however, that the overwhelming bulk of the research of the time was concerned with generating the right strings and assigning to them structures which were semantically or psychologically plausible. Little more than lip service was paid to explanatory adequacy, that is, to the goal of establishing a highly constrained theory of universal grammar.

5 There are a few exceptions, e.g., Crain and Fodor (1985).

6 Referred to in the chapters above as `displaced phrases.'

7 The ECP also permits gaps to be licensed through coindexing, but, as Sells notes, these are in a sense not the "core" cases of the ECP.

8 An earlier LFG treatment of unbounded dependencies required that for each gap there be a `lexical signature' (Kaplan and Bresnan (1982, 246ff)). This requirement was even closer to the ECP and the effects of the Lexical Head Constraint than the current LFG treatment.

More information about certain points

1st phase: 'On gathering data he [Chomsky] says that "There are ...
very few reliable experimental or data-processing procedures for obtaining significant information concerning the linguistic intuition of the native speaker” [.....] The problem is not insufficient data but inadequate theories. The data required comes from introspective evidence and elicited native speaker intuition.’ “[Chomsky] says he will be concerned with the syntactic component of a GENERATIVE GRAMMAR that specifies the WELL-FORMED STRINGS of a natural language and also assigns a STRUCTURAL DESCRIPTION to these strings. He also points out that strings which are not well-formed will also be structurally characterized. Language will thus be that set of well-formed strings produced by a system of rules, a grammar. The strings which are not well-formed will be of interest (perhaps of even greater interest than the well-formed ones) because they help to establish the need for a particular rule. This is a similar kind of argument to the old saw that the exception proves the rule; by being able to recognize what the rule must not do, one gains knowledge about how to refine the rule toward avoiding strings which are not well-formed. Chomsky claims, moreover, that “a generative grammar must be a system of rules that can enumerate or generate an indefinitely large number of structures” [....] Sentences and syntax receive the greatest emphasis: all structural pieces [...] must begin as sentences.” (pp. 34-35) “The lowest level, observational adequacy, involves the accurate observation, collection and recording of data. This data may need to be arranged in some order of classification.” (p. 41) 2nd phase: “Chomsky sees a generative grammar as being compartmentalized into a number of subcomponents that are separated by level with no overlap. [....] There is, first and foremost, a division into a SYNTACTIC COMPONENT, a SEMANTIC COMPONENT, and a PHONOLOGICAL COMPONENT. The syntactic component sets itself off from the other two by being the only generative component.
Framed in terms of the definition of a formal grammar, it alone has a start symbol; it alone can initiate a derivation [=generate a structure]. The remaining two types have, according to the definition, only vocabulary and productions. [...] By virtue of lacking the capacity to initiate a derivation or to generate a structure, these components are not generative, only interpretive. Moreover, every syntactic string must be interpreted in these two ways: semantically (it must be assigned a meaning) and phonologically (it must be assigned a manifestation as a sound sequence).” (p. 37) “The Aspects model of Chomsky (1965): [diagram of the model's components: PS rules, lexicon, semantic interpretation, phonetic representation] Note that each of the components and all others as well are integral and homogeneous; there is no level mixing or mixing of rules [...] That a formal grammar intended to describe and ultimately explain human languages can be so organized is one of the two carrying hypotheses of the Aspects model. It is known as the DEEP STRUCTURE HYPOTHESIS. Notice, furthermore, that the semantic component, the only source sentences have for interpretation, has access to symbol strings at only the deep structure level. Since the transformational component will perform many changes of structure, it must be the case that these changes in form DO NOT change meaning, for the interpretation comes from only one source, the deep structure, and that level has already been passed. This second carrying hypothesis of the Aspects model, that transformations do not change meaning, is known as the KATZ-POSTAL HYPOTHESIS.” (p. 39) “The base component is composed of two subparts: the PHRASE STRUCTURE RULES and the LEXICON. [...] Chomsky cleaves the base component into two components, one for phrase structure which is hierarchical and one for lexical (formative) items which are not. These are known as the PHRASE STRUCTURE RULES and the LEXICON.
The first of the base subcomponents is made up of intrinsically ordered context-free rules. The lexicon is composed of context-sensitive rules. It resembles in many ways a kind of dictionary that contains details about a particular word’s context of use.” (pp. 39-40) “The base [component]: [...] (1) category information, (2) grammatical relations, and (3) lexical restrictions. [...] For Chomsky, they are, at this point in generative transformational linguistics, the only significant ones. [...] The categorial part will be handled by rules like the ones in: S → NP VP; NP → (Det) (Adj) N ({PP, S}); VP → V (NP) (PP). These are called PHRASE STRUCTURE RULES. [...] In order to deal with the lexical properties of words as they influence their ability to combine with other words, a second rule type is needed in the base. Each terminal symbol, which for a natural language can be called a LEXICAL ITEM, will have an entry in the lexicon. The entry will contain two kinds of distributional information: (a) those syntactic properties of the environments in which it fits are called STRICT SUBCATEGORIZATION RESTRICTIONS; and (b) those semantic properties with syntactic consequences for the environment are called SELECTIONAL RESTRICTIONS. [...] The claim Chomsky wishes to make is that with the combination of a generative categorial component (the set of PHRASE STRUCTURE RULES) and a lexical restriction component (the LEXICON), exactly the set of deep structure phrase markers of English or any other language for which the grammar is written will be produced. [...] The transformational component: The transformational component of a grammar will be a set of potentially context-sensitive rules that operate to map one phrase marker onto the next in a sequence until the syntactic surface structure is produced. [...] The semantic component: The semantic component will make use of SEMANTIC PROJECTION RULES.
These rules take deep structure phrase markers as input along with the dictionary meanings of individual words. The projection rules then produce interpretations of word complexes working hierarchically up the deep structure tree. From the lexical meanings of each word and from the phrase marker of the sentence the interpretation of the whole can be calculated.” (pp. 46-49) “The rules (base and transformational) are better than a list of the cases. They intend to capture the LINGUISTICALLY SIGNIFICANT GENERALIZATION. A list captures no generalization. [...] The rules [...] must be capable of producing ALL and ONLY the well-formed strings (sentences). [...] The ALL guarantees that the grammar is sufficiently strong to produce each well-formed sentence and the ONLY guarantees that the grammar is sufficiently weak to produce no ill-formed sentences as well.” (p. 57) “A descriptively adequate grammar of a natural language must [...] make available for each sentence of the language a structural description that corresponds with the linguistic competence (knowledge) of the native speaker.” (p. 42) 3rd phase: “Chomsky’s interests are in a historical position he calls RATIONALISM in which he includes such men as René Descartes and Wilhelm von Humboldt. He identifies himself with them and contrasts himself with EMPIRICISTS Willard Van Orman Quine and B. F. Skinner. The crucial point of disagreement is the notion of INNATE IDEAS. Children in acquiring their mother tongue are involved in drawing out what is innate in the mind.” The empiricists and modern linguists before 1965 maintained that children learn by conditioning, by drill and explicit explanation from their caregivers, or by elementary data-processing. In these passages, Chomsky pleads for a universalist (language universals play a significant role in language acquisition and language structure) and platonic view of knowledge.
(p. 34) [Relation between observational, descriptive and explanatory adequacy] “[Chomsky] points out that even if a proposal is capable of accurately predicting large amounts of observed phenomena [observational adequacy], this may not be relevant in evaluating its descriptive or explanatory adequacy. For example, it is possible on the basis of observation of the tide heights at some point (say, the berth of the Queen Mary in Long Beach harbor) to develop a predictive account of the water’s depth, and the cyclic changing of more or less water. It may not even be recognized that from time to time the predictions are slightly (sometimes not so slightly) incorrect. These could be seen as insignificant disturbances of unknown source; troublesome anomalies but usually not sufficient to ground a ship. Names could be assigned to the values of high and low water: MEAN LOW WATER, MEAN LOWER-LOW WATER, etc. [...] Yet, none of these observations and classifications would ever be capable of producing the explanatory account, for the ultimate explanation of the tides lies outside the data. In fact, nothing resembling the explanation (the gravitational influence of the earth’s only natural satellite) can even be observed in water height.” (p. 45) “Chomsky takes a position not only on the nature of grammars but also on the area to be covered by a linguistic theory [...]: Linguistic theory is concerned primarily with an ideal speaker-listener, in a completely homogeneous speech-community, who knows its language perfectly and is unaffected by such grammatically irrelevant conditions as memory limitations, distractions, shifts of attention and interest, and errors (random or characteristic) in applying his knowledge of the language in actual performance. [...] This interest in an ideal instead of real speaker-listener corresponds to what is called ESSENTIALISM. [...]
We can say that the Generative Transformational Grammar research tradition is oriented on language universals not language particulars, grammars not languages, ideal speakers and hearers, and the knowledge of a speaker not his or her use of that knowledge.” (pp. 35 and 65) 1st phase: “The model of grammar devised by Chomsky, regarded as a finite system of rules or a mechanism that generates an infinite quantity of grammatically correct sentences and only correct ones (without regard to the extent to which they are meaningful) has met with the greatest resonance” (877) 2nd phase: “Chomsky (…) asks after the multitude of rules that the speaker and the hearer must store unconsciously in their brains from the time they learned in childhood to understand and to speak, in order to produce and understand ever-new sentences.” (878) “<>” (884-885) “When it is said that a sentence of the type Alfred parlé contains only two elements, the surface is analyzed morphologically, and what constitutes its essence, the syntactic linkage, is left aside. There is nothing that expresses these connections or relations, yet they are apprehended by the mind; otherwise the sentence would be incomprehensible.” (p. 900) “<> attempts to pass beyond these limits). Hence it is capable of paying attention to constantly recurring (recursive) processes that underlie the formation of the sentence, and it seems to regard this as a matter of speech more than of language, of free and voluntary creation more than of systematic rules. [...] In its scheme there is no place for <> of the kind that takes place in the everyday, ordinary use of language. Modern Linguistics is subject to the strong influence of Saussure concerning language considered as an inventory of elements ... and to the preferential study of systems of elements instead of the system of rules that stood at the focus of attention of traditional Grammar and of the general Linguistics of Humboldt” (p. 758) 3rd phase:
“<>” (881-2) “<>” (879) E.g.: from active sentence to passive, from declarative to interrogative… “125 years after the last and most important work of Humboldt, there arises in American Linguistics an appeal and return to one of his most important ideas, and an attempt to apply it following the tendencies and orientations of our era. <>” (pp. 881/879) “<>” (pp. 899-900) MADDI ZUBIAURRE. ANA ÁLVAREZ. TATIANA FAJARDO. BELEN PIKABEA. Mª ANTONIA MORA.