Stardate
20040512.1306 (On Screen): Amritas (Marc Miyake) is a linguist, and a firm opponent of Noam Chomsky, both scientifically and politically. He has argued that Chomsky's linguistic theories are bunk, most recently with this post. (Previously also here and here; more examples linked from here.)
I find Marc's arguments very compelling, and I have little doubt that he's right. IANAL (I Am Not A Lawyer, though in this case it's also I Am Not A Linguist), so I can follow along with his analysis and dimly understand it, even though I don't have the technical background to fully comprehend it.
Chomsky makes a strong claim that there is a universal grammar in all human languages, a deep level in which all of them are the same. Amritas says:
In theory, the existence of universal grammar could be empirically demonstrable. There is no a priori reason that humans could not be born with the same hard-wiring for language. I would bet that most linguists, both Chomskyans and non-Chomskyans like myself, think that there is an innate component of language.
I, too, think there's an innate component of language. Modern human language is the result of a combination of biological/evolutionary change in human brains driven by and interacting with developing human cultural processes in support of the tribe-unit. The "purpose" of language is to permit humans to disseminate useful information, and the reason it was driven evolutionarily was because that kind of communication enhanced survival, especially at the level of tribes (since those within a tribe were usually more closely related genetically than members of different tribes). In the early part of that process (i.e. the first 95% of it up until maybe 20,000 years ago) most of the information which was most useful to communicate was about the real world and the mundane existence of humans.
Thus there are essential characteristics of the universe itself which are reflected in language. That's why I suspect that every human language contains nouns, verbs, and adjectives. It is less clear whether they all also contain adverbs as an essential category. Naturally, these categories are extremely broad and there will be a considerable degree of softness around the edges, as well as words and phrases which cannot easily be placed into only one of those categories, or into any of them at all.
Nouns are verbal references to things, and likely the earliest ones referred to simple and common things which were readily observable and essential parts of life: food, water, fire, fruit/vegetable (things to eat which didn't have to be chased), prey-animal (things to hunt), predator-animal (things which hunt us), poison (things which were not good to eat), manure. Others would have dealt with the internal structure and components of human tribes: man, woman, baby, child, adult, old-person, brother, sister, mother, father, family, tribe-leader, wife, husband, criminal/outcast, priest/shaman.
Verbs are references to processes and actions. Again, it seems likely that the earliest ones referred to common actions essential for life: eat, drink, run, kill, gather, cook, sleep, have sex, get sick, birth a baby, die.
It's informative to note that the first things babies learn in the process of developing language skills are nouns, and initially there are only three sentences they use: "Look, there's a <noun>!" and "I want <noun>!" (The third sentence is "No!", which is not relevant to this discussion.)
Both of those sentences are expressed by saying the noun alone. The context, delivery, and certain critical body-language cues make clear which sentence the baby intends.
The biggest leap for babies is acquisition of verbs and construction of two word sentences consisting of a noun and a verb. From there language development accelerates, and within a few more months language is an acquired skill.
Adjectives are the next thing that babies acquire and begin to use. Adjectives are verbal references to properties of things which are recognized as being the same even though the things themselves may not be similar. (E.g. both apples and blood have the property of being red. It's notable that they're not exactly the same shade of red, and that not all apples are red at all.)
Early adjectives would have referred to properties for which we have direct sensors: hot, cold, sweet, sour, loud, red, green, bright, dark, putrid. (It is by no means clear that such shared properties were recognized as "things" in themselves which were assigned nouns. That may have come later.) There were also indirect observed properties which would have been critical: fast (prey which was hard to catch because it could flee), slow (prey which was easy to chase down), heavy, edible, delicious, inedible, disgusting, rare, common. Within a tribe, you might also need: wise, foolish, crippled, strong, clever, reliable, untrustworthy.
These categories are part of the real world or how we perceive it, and the culture/evolution interaction would have reinforced the development of neural circuits to support manipulation and communication of them.
Until such time as we invent time travel, it is not possible for us to determine the details of how this happened. If we ever become able to analyze the biological structures in our brains which we use in language, we'll be able to learn something about it all, but that won't tell us anything about the cultures or actual languages which helped drive that evolutionary change. For instance, I suspect it isn't possible for us to know whether "friend" (a noun) or "friendly" (an adjective) developed first, or whether they developed simultaneously. The concept of "friend/enemy" is another which likely needed verbal tags very early, to refer to interactions within a tribe and between tribes, but how did they actually handle it: as a noun (a thing) or as an adjective (a property)? We probably will never know.
Adverbs are more sophisticated, and I'm not at all certain that they are biologically innate, or at least anything like as innate as the other three. "Adverb" is actually something of a garbage-can category anyway, encompassing all words, other than adjectives, which modify the meaning of other words or modify the impact of entire sentences, or even entire multi-sentence speeches. (That's why "very" and "slightly" are considered adverbs; they usually are used to modify adjectives or other adverbs, to enhance or diminish respectively.)
Ignoring the garbage-can, broadly speaking adverbs are to verbs as adjectives are to nouns: adverbs refer to properties shared by the actions referred to by verbs, properties which are common even when the actions themselves are disparate. But it may not have been essential to distinguish shared properties of actions and processes. It's not clear that the kinds of actions which got assigned the earliest verbs actually did share many such properties, or that there was any utility in being able to talk about such properties as they did share. It might be that humans only began to need adverbs once language had become quite sophisticated, after most of the evolutionary change.
About the only candidate adverbs I can think of which might have been needed very early would have been "quickly", "slowly", "quietly", "noisily", and "cautiously", because of the utility of those concepts in making plans for hunting. And it's possible to communicate most of those concepts using adjective constructions.
If the need for adverbs only becomes significant after the development of a considerable mental facility for manipulating nouns, adjectives, and verbs, then it may be that there was the beginning of biological development of mental mechanisms related to adverbs, but that it hadn't gotten very far before the development of technology eclipsed biological evolution as the engine of change in the human condition.
It's noteworthy that children begin to use and understand adverbs much later in their process of acquiring language, only when they begin to construct quite complex multi-word sentences about much more sophisticated subjects. It isn't necessarily the case that children acquire language in the same way within a few years that the human species did over a period of millennia, but this is still suggestive.
There are other properties of modern languages which I think are extremely unlikely to be innate, because they support discussion about far more sophisticated and esoteric issues which would not directly contribute to the survival prospects of a stone-age tribe. For instance, in modern English we have nouns, or can create nouns, which refer to the properties described by adjectives, and we use such a noun when we want to talk about the characteristics of the property represented by a given adjective independently of the characteristics of the objects which exhibit that property. Thus "fast"/"speed", "heavy"/"heaviness".
I don't believe that is innate. Some elements of language almost certainly are innate, but as language and human brains developed in tandem, part of what was created was a more general-purpose ability to manipulate symbols without biological assignment of categories to those symbols. In addition to developing dedicated hardware to deal with certain kinds of linguistic problems, we also developed considerable ability to create our own software to deal with new challenges.
That's why we didn't have to evolve new brain structures in order to talk about Quantum Mechanics.
It's hardly profound to say that we're wired to understand the distinction between nouns and verbs. Chomsky goes much further than that, however. He claims that grammar is also universal.
A grammar is a set of rules describing how words are combined to produce sentences which also explains how to interpret the resulting sentences. A grammar permits multiple words to be used together to convey information for which there is no single word, using order, context, construction, conjugation, and other mechanisms to encode information beyond that conveyed by the component words.
In many languages the same words in different orders convey different information. "John struck Jerry" is certainly not the same as "Jerry struck John" (at least, in English). But there's more involved than just order, since "John struck Jerry" means the same thing as "Jerry was struck by John". It is the grammar of the language which permits us to understand the information encoded in those three sentences, and to understand that two have the same meaning which is different from the third.
As I understand it, in some languages the indication of who struck whom doesn't come from word order. Rather, indicator words, prefixes, suffixes, or some other modifications to the names indicate who did the striking and who got struck. But that's the point where I would have to defer to an expert like Amritas.
Most of my knowledge of grammar comes from computer science, where we use formal, precise and rigorous descriptions of grammars in order to describe computer languages which permit programmers to convey instructions to computers. Computer languages are synthetic and highly restricted in subject matter, and are extremely constrained and limited by comparison to human languages. All computer languages speak only in the imperative mood, for example, and only in the present tense. Computer languages contain nouns, verbs, adjectives and adverbs, but each such category is sharply delimited and rigorously described, and interpretation of a "sentence" or "work" in a computer language is objectively unambiguous. Moreover, a rigorous computer language description not only describes the grammar, but also the entire lexicon (the "dictionary") of "words" permitted to be used within that language. (For instance, the description of the "C" computer language contains a rigorous description of the construction of variable names.)
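For instance, here's a sketch of C's own lexical rule for variable names (a letter or underscore, followed by any mixture of letters, digits, and underscores), written as a little C program. The function name is my own invention, and I'm ignoring complications like reserved keywords:

```c
#include <ctype.h>
#include <stdio.h>

/* Returns 1 if s is a well-formed C identifier, 0 otherwise.
   The basic rule from the C language description: the first
   character must be a letter or underscore; every following
   character must be a letter, digit, or underscore.
   (Keywords like "while" are excluded by a separate rule,
   ignored here for simplicity.) */
static int is_valid_identifier(const char *s)
{
    if (*s == '\0' || !(isalpha((unsigned char)*s) || *s == '_'))
        return 0;
    for (s++; *s != '\0'; s++)
        if (!(isalnum((unsigned char)*s) || *s == '_'))
            return 0;
    return 1;
}

int main(void)
{
    printf("%d\n", is_valid_identifier("word_count")); /* 1 */
    printf("%d\n", is_valid_identifier("2nd_word"));   /* 0: starts with a digit */
    return 0;
}
```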
We programmers have precise meta-language conventions for description of grammars, and one of the most common is called Backus-Naur Form, or BNF. BNF itself is thus a grammar, in a sense, but it is a very limited one which is entirely descriptive. BNF contains exactly one verb: "Is defined as" (which in BNF is spelled ::=).
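To make that concrete, here's a toy grammar written out in BNF (in the comments) together with a minimal recursive-descent recognizer for it. Both the grammar and the code are my own invented illustration; each ::= rule becomes one C function:

```c
#include <ctype.h>
#include <stdio.h>

/* A tiny grammar in BNF -- note that ::= ("is defined as") is
   the only verb it contains:
     <expr>  ::= <term> "+" <expr> | <term>
     <term>  ::= <digit> | "(" <expr> ")"
     <digit> ::= "0" | "1" | ... | "9"
   Each BNF rule is implemented below as one C function. */

static const char *p; /* cursor into the input string */

static int expr(void);

static int term(void)
{
    if (isdigit((unsigned char)*p)) { p++; return 1; }
    if (*p == '(') {
        p++;
        if (!expr() || *p != ')') return 0;
        p++;
        return 1;
    }
    return 0;
}

static int expr(void)
{
    if (!term()) return 0;
    if (*p == '+') { p++; return expr(); }
    return 1;
}

int main(void)
{
    p = "(1+2)+3";
    printf("%s\n", expr() && *p == '\0' ? "valid" : "invalid"); /* valid */
    return 0;
}
```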
Grammars in human languages are not remotely as clean, constrained, and consistent as the synthetic grammars we have created for computer languages. And the existence of puns demonstrates that natural languages are potentially ambiguous.
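The contrast cuts both ways: even synthetic grammars can harbor ambiguity, but when they do, the language description resolves it by decree rather than leaving it to context. The standard example from C is the "dangling else":

```c
#include <stdio.h>

int main(void)
{
    int a = 1, b = 0;

    /* The "dangling else": the grammar alone would permit this else
       to attach to either if. The C language description resolves
       the ambiguity by decree: an else binds to the nearest
       unmatched if. So this prints "no b", not nothing. */
    if (a)
        if (b)
            printf("a and b\n");
        else
            printf("no b\n"); /* belongs to "if (b)", despite the indentation */

    return 0;
}
```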
Grammars for computer languages are the result of a creative process, driven by the constraints of available representations and encodings and by the function the language is intended to support.
Study of grammar in natural languages is observational and descriptive, since the languages already exist. Thus it is (or should be) similar to biology, which studies living beings as they actually exist. But Chomsky has identified the fundamental grammar behind all natural languages, and doesn't feel any need to observe them.
As Marc describes the Chomskyian linguistic theory, Chomsky says that the innate biological structure of language is far more comprehensive and broad and intricate than a mere categorization of words as noun/adjective/verb/something-else. Chomsky says that grammar is also innate, driven by brain structure, and that all existing natural languages actually share the same grammar, since all humans share the same brain structure.
Unfortunately, it's empirically demonstrable that this isn't remotely true. Natural-language grammars are bewilderingly different from one another, in ways which suggest they're the result of Lamarckian cultural evolution rather than Darwinian biological evolution.
Chomsky says that's because there's a deep structure which is common, and which gets translated into a surface structure just before being encoded in words to be emitted via sound or writing. Therefore all the observed differences in grammars between languages are the result of how that translation takes place, and none of them really matter since they're just local details of implementation.
I suspect that there's an analogy to compilers. If two compilers for the same source language target different destination machines, those compilers will parse the program they are translating into some sort of internal representation, and will then translate that internal representation into machine code for the destination machine. Since computer architectures vary dramatically, the emitted code will also vary dramatically even in structure and organization.
Related compilers [e.g. GNU C targeting x86 and PPC] create the same internal representation when they parse the source program, even though the emitted code can be drastically different. You can't see that the internal representation is common, but it is.
However, it doesn't have to be. Two compilers for the same language which were developed independently will each probably produce an internal representation of the source program, but there's no reason to expect that the representations they use will be similar; there are a lot of ways to do it. And though most compilers do create such an internal representation, there have been compilers which did not. So it isn't at all easy to tell whether the internal representations used by two different compilers are the same or different, or even present at all, without dissecting them or analyzing their genealogy.
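To illustrate the analogy with an invented, absurdly simplified compiler: here one internal representation, an expression tree for a + b, is handed to two different back ends, and nothing in either output betrays the shared representation behind them:

```c
#include <stdio.h>

/* A toy "internal representation": an expression tree for a + b.
   (Entirely hypothetical -- no real compiler is this simple.)
   Two back ends walk the same tree but emit code for two imaginary,
   very different machines. */

typedef struct Node {
    char op;                  /* '+' for addition, 0 for a leaf */
    const char *name;         /* variable name, if a leaf */
    const struct Node *left, *right;
} Node;

/* Back end #1: a stack machine. */
static void emit_stack(const Node *n)
{
    if (n->op == 0) { printf("  PUSH %s\n", n->name); return; }
    emit_stack(n->left);
    emit_stack(n->right);
    printf("  ADD\n");
}

/* Back end #2: a load/store register machine.
   Returns the register holding the result. */
static int emit_regs(const Node *n, int *next)
{
    if (n->op == 0) {
        int r = (*next)++;
        printf("  load  r%d, %s\n", r, n->name);
        return r;
    }
    int l = emit_regs(n->left, next);
    int r = emit_regs(n->right, next);
    printf("  add   r%d, r%d, r%d\n", l, l, r);
    return l;
}

int main(void)
{
    const Node a = {0, "a", NULL, NULL};
    const Node b = {0, "b", NULL, NULL};
    const Node sum = {'+', NULL, &a, &b};

    printf("stack machine:\n");
    emit_stack(&sum);

    printf("register machine:\n");
    int next = 0;
    emit_regs(&sum, &next);
    return 0;
}
```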
With respect (or disrespect) to Chomsky's theory, Amritas asks how we can know whether all humans share a common deep grammar. How do we differentiate between a deep structure which really exists, and rationalization and apology for a figment of Chomsky's imagination?
How do we know the transformations between (1) and (4) (and presumably (4) and (3)) exist? You can't perceive them either.
The only answer to my last three questions is "Because Mark C. Baker said so."
Real scientific discoveries should be independently verifiable. Although we wouldn't use the same terminology as aliens from space, multiple species could have discovered, say, atoms and molecules without ever having contacted each other. They wouldn't call them 'atoms' and 'molecules', but the arbitrary labels aren't as important as their referents.
If Baker hadn't studied with Chomsky, would he have come up with the same analysis - with a D-structure in (1) that looks like English? Would a scholar with no knowledge of English or any similar European language have come up with (1)? Unless that scholar could wire up a Quechua speaker and somehow demonstrate with hard evidence that English-like structures are reshuffled into Quechua structures, that's practically impossible.
Actually, there is a way of testing this. But you don't test it with Quechua speakers; you test it with Chomskyian followers. And there's a beautiful demonstration of how one would perform such a test in the history of Marc's own field of linguistics.
David Kahn's The Codebreakers is a comprehensive history of codes, ciphers, and secret writing up to the age of the computer, at which point nearly all of the "technology" of cryptology was made obsolete. But he also includes chapters about related subjects, and to me one of the most fascinating chapters in the book is about the study of inscriptions in languages that humanity had forgotten how to read. Probably the most famous example of that is the story of Champollion, the Rosetta stone, and Egyptian hieroglyphics.
Kahn describes another case, the decipherment of the cuneiform representation of Akkadian, the ancient language of Assyria and Babylonia.
In terms of numbers of signs, this script was the most complicated of the three found at Behistun and on other trilingual inscriptions. Rawlinson and other scholars located the repeated schemata that included the names of the kings. Comparisons of these with their sounds and meanings – now known from the Old Persian solution – showed that the Akkadian script was partly syllabic, partly logographic. For example, it represented the word "king" by a single sign whereas it spelled out the names of the kings with several signs. The number of signs used in these names equalled the number of consonants in them. This led the Swede Isidor Löwenstern to conclude that the language was Semitic, a family whose later scripts, at least, write only the consonants as letters, representing the vowels by points and lines. However, he discovered an abnormal number of signs representing a single consonant. An Irish clergyman, Edward Hincks, showed that these actually stood for syllables based on that consonant, such as /ra/, /ri/, /ru/, /ar/, /er/, /ir/, /ur/. He also recognized that a single sign could serve as a word-sign, a syllable-sign, or a determinative much like the hieroglyphic determinatives.

Rawlinson, meanwhile, continued to substitute new-found phonetic values back into the cuneiform texts. At times the suggested values appeared jarringly out of place, and after many occurrences in which a single sign appeared to be wrong in a word that context compelled to be right, a regularity in these apparent errors impressed itself on him. He finally concluded that a single sign could possess several different sound values, much as the English c can sound like either an /s/ or a /k/. Thus, in the Akkadian, Rawlinson discovered, the sign that usually represents /ud/ can also stand for /tam/, /par/, /lah/, and /his/. The 246 polyphonous symbols that he established by 1851 proved in the long run to be almost entirely correct. They were confirmed by finding, among the 20,000 clay tablets of the library of Ashurbanipal, about 100 on which students learning the complicated language had correlated the various signs, syllabic polyphones, and logograms. Only then was it possible to understand why the name Nabu-kudurrī-uṣur ("Nebuchadnezzar"), meaning "O Nabu, protect my boundary mark," came out as AN-AG-ŠA-DU-ŠIŠ. It turned out that AN-AG was a logographic symbol for the god Na-bi-um, ŠA-DU represented the word kudurru ("boundary mark"), and ŠIŠ stood for naṣāru ("to protect"), the imperative form of which was uṣur.

In view of complexities like this, it was hardly surprising that many scholars jeered at the results as pure imagination. To settle the question of reliability, the Royal Asiatic Society in 1857 sent a newly discovered cuneiform inscription to four experts, Rawlinson, Hincks, William Henry Fox Talbot, and Jules Oppert, with the request that they work on it independently. The sealed envelopes containing the four solutions were opened at a formal meeting. In all essential points their translations agreed.
That was a very powerful test. If they had been building castles of smoke, seeing what they hoped to see and creating elaborate justifications along the way, there would have been some significant cases where they would have disagreed with one another, and quite possibly there would have been no resemblance at all amongst their results. Since they all saw and described the same thing, it can only be because what they saw was real.
Chomsky's theory could be tested the same way. Several supporters of Chomsky's theory (perhaps even including Chomsky himself, if he were willing to participate) would be given texts in several languages and would independently analyze them to derive the deep structure behind them. Their analyses would then be compared. If their analyses closely agreed, it would strongly support the validity of their theory. If they didn't agree at all, it would mean they were living in an intellectual home made of smoke and mirrors.
There are aspects of Chomsky's theories which would have to be taken into account in locating potential texts in various languages to be used for such tests. They would have to be potentially subject to multiple interpretations, or multiple English representations of the same interpretation. If I understand Chomsky's theory, it would mean that the analysis would have to distinguish between "John struck Jerry" and "Jerry was struck by John", because those sentences mean the same thing but are grammatically unrelated to one another, and in fact are distinct in Chomskyian terms.
If the test texts were chosen badly, there would be a risk that each Chomskyite in the test would independently arrive at the same analysis simply because there was a single obvious English rendering of the text, and each would therefore construct the same rationalization for it. There would be a shared "real" thing, but it would be the single obvious English translation, not any "deep structure".
So the test texts would have to be susceptible to more than one "deep structure" analysis, so that the test could determine empirically whether all the Chomskyites identified the same one, presumably a real one.
Given a reasonably extensive set of test texts from a variety of languages, and if several Chomskyite linguists did work independently (and not cheat by surreptitiously communicating with one another to explicitly choose a "consensus" answer), this could empirically settle the issue. I think that at least four Chomskyian linguists (or teams) would have to be involved, just as four were tested for the Akkadian translation.
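The scoring step of such a test would be the easy part. Here's a sketch of the comparison, with invented placeholder analyses standing in for the sealed submissions (a real comparison would judge structural equivalence, not literal string equality):

```c
#include <stdio.h>
#include <string.h>

/* A minimal sketch of the scoring step of the proposed test:
   N independent "deep structure" analyses of the same text are
   compared pairwise. The analyses here are invented placeholders;
   in a real test each would come from a sealed submission. */

#define N 4

int main(void)
{
    const char *analysis[N] = {
        "[S [NP John] [VP struck [NP Jerry]]]",        /* team 1 */
        "[S [NP John] [VP struck [NP Jerry]]]",        /* team 2 */
        "[S [NP John] [VP struck [NP Jerry]]]",        /* team 3 */
        "[S [NP Jerry] [VP was-struck [PP by John]]]", /* team 4 */
    };

    int pairs = 0, agreements = 0;
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++) {
            pairs++;
            if (strcmp(analysis[i], analysis[j]) == 0)
                agreements++;
        }

    /* Four teams yield six pairs; here three pairs agree (50%). */
    printf("agreement: %d of %d pairs\n", agreements, pairs);
    return 0;
}
```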
But I don't think that any top Chomskyian supporters would be willing to participate in such a test, and if lower-ranked supporters did and the result was strongly negative (with little or no agreement between the analyses), I think the top supporters would dismiss that result as being due to incompetence and lack of understanding of the theory on the part of the lower-ranked supporters, rather than as an indication that Chomsky's theory was wrong.
That's how supporters of pseudoscientific theories react to empirical disproof, or to any attempt to empirically evaluate their favored superstition.
Update: Amritas has more here.
Update: And now Amritas responds to this post.
Update 20040514: Those who can, do. Those who can't, sneer.
Update 20040515: Nelson Ascher, in an unrelated post, makes a good point: there's a difference between criticism of the material presented in an article and mocking disparagement of the author of the article. The former is welcome; the latter is not.
I do not need any refer traffic from those who cannot tell the difference between these. That is why blog authors who express their "disagreement" via ad hominem will be placed into another kind of "bozo bin", and why direct refers from their sites will be refused by this server. (Their readers can still visit my site directly, but only by going to a small amount of extra effort. Most will not.)