'How do we understand meaning?' is a question that has baffled psychologists, philosophers and linguists alike. We may not have a definitive answer to it yet, but all three disciplines have approached the question from their respective perspectives. They share the same aim, but differ in the methodologies they use and the approaches they take. The present paper is not concerned with how meanings are psychologically or cognitively processed in our brains, or with higher truths regarding the nature of the signs that relate to reality in the external world. It deals with the question from the linguistic perspective alone, using tools that belong to the realm of linguistics. It analyses language as an end in itself, assuming that language is inherently complex enough that its analysis will yield fruitful results, which may be put to the test at a later stage against the findings of the psychologist or the philosopher.
The present essay deals with the understanding of meaning, specifically the meaning of words. When asked the meaning of a certain word x, we often come up with answers like:
1. x means y (one word): synonymy
2. x means the opposite of z (one word): antonymy
3. x is a kind of p: hypernymy (p is the hypernym of x)
4. x is similar to y but differs in such and such respect: co-hyponymy
5. x means y or z, depending on its context: polysemy
6. x is this (pointing to a physical object in the real world)
7. x means w (a word in another language)
Of all these possible answers, 6 is mostly used for explaining words to children, and 7 assumes that the interlocutor knows the second language. In our daily conversation, however, we most commonly come across the first five. Now, one might notice that in answers 1-5 the meaning(s) of a word are explained in terms of its relations to other words in the same language: 1 is a relation of equality (all features shared), 2 one of oppositeness (most features negate each other), 3 a kind of inclusion (x contains all the features of p, and more besides), 4 one of similarity (some features shared), and 5 can be 1 or 4 but at multiple levels. Further, whatever the relation (1-5), the word has been explained with the help of other words from the same language. Ideally, most of the explaining words should be simpler than the ones being explained. It would not be wrong to say that, given a basic vocabulary[i], one can explain the meaning of unfamiliar words with the help of familiar ones; in other words, there are basic words which explain the complex words. Now one may ask questions like:
a) Why do we have complex words?
b) Why do we have so many of them?
c) Why do we have different complex words for the same concept?
The answer to a is very simple: brevity. One would not like to say 'the elder brother of my dad' several times in a conversation when one can refer to him as 'Taya[ii]'. Nor is b very complex: we live in a complex world and lead complex lives, so we keep developing new concepts, hence new words, and we also keep assigning new concepts or meanings to old words. c is slightly harder, and there could be many answers to it, none definite. One, history plays a role: we keep borrowing new words, hence new concepts, from other languages, and these eventually become part of our own, so much so that the difference between an original and a borrowed word becomes the concern of the etymologist alone. Second, perhaps we get bored with a frequently used word, so we acquire a new one. Third, a word exists only because it is used, and used frequently enough to be lexicalized. Thus in English linguistic culture there may be many words for breeds of dogs but very few for camels; in Arabic it is the opposite, with many words for camels but few for dogs. Our environment plays a vital role here. Fourth, we may attach certain nuances to an already existing word and then lexicalize the result. In this way we keep developing our lexicon: new words join, and old and obsolete ones leave.
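The idea of a basic vocabulary grounding the rest of the lexicon can be made concrete. Here is a minimal sketch in Python, where the tiny dictionary and the basic-word list are toy data invented purely for illustration, which checks whether a word's definition eventually bottoms out in basic words:

# Toy basic vocabulary and toy dictionary; real defining vocabularies
# (such as LDOCE's, see note [i]) play the same role at scale.
BASIC = {"male", "female", "adult", "human", "old", "more",
         "of", "parent", "child"}

DEFINITIONS = {
    "man":      ["adult", "male", "human"],
    "father":   ["male", "parent"],
    "brother":  ["male", "child", "of", "parent"],   # rough toy gloss
    "elder":    ["more", "old"],
    "taya":     ["elder", "brother", "of", "father"],
    "spinster": ["unmarried", "adult", "female"],    # 'unmarried' left undefined
}

def grounds_out(word, seen=None):
    """True if `word` can be traced back to the BASIC vocabulary."""
    seen = seen if seen is not None else set()
    if word in BASIC:
        return True
    if word in seen or word not in DEFINITIONS:
        return False          # circular definition, or no entry at all
    seen.add(word)
    return all(grounds_out(w, seen) for w in DEFINITIONS[word])

print(grounds_out("taya"))      # True: every defining word is basic in the end
print(grounds_out("spinster"))  # False: 'unmarried' never bottoms out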
Let us go back to the original question of relations between words. The relations of meaning that exist within a language require complex and rigorous analysis despite their apparent simplicity. Various theories have been put forward to explain the complex nature of the meanings of a particular word and its compound relations to other words in the lexicon. The difficulty of the task ahead becomes evident at the very first step. There is broad consensus that words are among the most fundamental building blocks of any language, without which there can be no language. Yet defining what a 'word' is turns out to be surprisingly tricky: no matter how widespread its use and how ordinary it may sound, its elusive and deceptive nature has puzzled many. It may be very easy to count the total number of words in this paragraph, but deciding which of them are occurrences of the same word can be quite confusing. Consider the following three sentences:
a. John plays cricket.
b. John is playing cricket.
c. John played cricket.
Are plays, playing, played three different words, or different realisations of the same word 'play'? We will not go into further detail, and will instead take the lexicologist's approach of lexemes, where one lexeme 'play' represents the whole class of its different realisations. In this essay, we use the word 'word' in the sense of a lexeme.
Further, many of us will agree that words have some meaning. Consider this sentence:
John went to play cricket.
Now, the meanings of words like John, went, play and cricket are easy to understand, but what does to mean? It may not have a lexical meaning like the other words in the sentence, but it does have a structural or grammatical function, a 'grammatical meaning'. We will not go into the details of that, and will instead adopt Henry Sweet's classification, which divided the lexicon of the English language into two very broad categories: full words and form words. In today's terms we may label them content words and structure words. It is only the content or full words that have the kind of meaning we would expect to find in a dictionary. (Palmer 1981)
Going back to the original question of defining word meaning: earlier we said that we define words in terms of their relationships with other words. Words appear to share domains of meaning among themselves. To explain this, let us ask two very simple questions:
a. What’s the meaning of ‘woman’?
b. What’s the opposite of ‘woman’?
Now, pointing to a certain lady in the physical world will not give us the right answer to a. That particular lady may, no doubt, be a woman, but 'woman' does not mean that particular lady. We will have to define it in terms of other words such as female, human, adult, etc. The Cambridge Advanced Learner's Dictionary defines woman as 'an adult female human being'. For b almost all of us will answer 'man', which the same dictionary defines as 'an adult male human being'. One may notice that words like adult, male, female and human are all features found in the lexemes 'man' and 'woman'. We have four features, in fact three, since 'male' and 'female' each entail the negation of the other: male = not female, and female = not male. For conventional reasons (although there is no intrinsic order of precedence[iii] here) we will take 'male' as the basic feature and represent 'female' as 'not male'. So here is our list of three basic features which define 'man' and 'woman': HUMAN, ADULT and MALE.
Although 'man' and 'woman' are opposites in conventional terms, out of the three features they share two and differ on one only. Their sharing of two features puts them in the same domain: they are hyponyms of the hypernym 'human being'. Further, imagine the difficulty of explaining one in the absence of the other. Can we explain 'woman' when there is nothing like 'man'? How can we explain 'child' if there is no concept of 'adult'? Just as with day and night, where the absence of one is responsible for the existence of the other, the existence of certain concepts (like man, child) is responsible for the existence of others (like woman, adult). Now, all these words cannot be understood as a 'bundle of separate items attached to one another in a fairly random way' (Aitchison 1994). Many scholars have instead taken the view that words, or lexemes, are 'built up from a common pool of meaning atoms, and that related words have some atoms in common' (Aitchison 1994).
Various scholars have labelled these meaning atoms differently: 'semantic primitives', 'atomic globules', 'semantic features', 'semantic components', etc., but the concept behind them is the same. An understanding of these primitives is very important for linguistics because:
they help us capture the exact nature of the lexical relations that exist between words (or lexemes),
they have linguistic import from outside semantics, and
such semantic primitives may form part of our psychological architecture: they may provide us a unique view of our (human) conceptual structure.
Many linguists use a binary format for semantic components and represent their presence or absence with '+' and '-' signs. Thus the notation for man and woman would be:

man: [+HUMAN] [+ADULT] [+MALE]
woman: [+HUMAN] [+ADULT] [-MALE]
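This binary format lends itself naturally to a computational treatment. Below is a minimal sketch in Python (the encoding is my own; the feature values simply transcribe the analysis above, with 'boy' added for illustration) in which each lexeme is a dictionary of binary features, and the lexical relations discussed earlier fall out as simple comparisons:

# Each lexeme is a bundle of binary semantic features.
man   = {"HUMAN": True,  "ADULT": True,  "MALE": True}
woman = {"HUMAN": True,  "ADULT": True,  "MALE": False}
boy   = {"HUMAN": True,  "ADULT": False, "MALE": True}
human = {"HUMAN": True}   # the hypernym carries fewer features

def shared(a, b):
    """Features on which two lexemes agree."""
    return {f for f in a if f in b and a[f] == b[f]}

def differing(a, b):
    """Features on which two lexemes disagree."""
    return {f for f in a if f in b and a[f] != b[f]}

def is_hyponym_of(specific, general):
    """The hyponym carries all of the hypernym's features, plus more."""
    return (all(f in specific and specific[f] == v for f, v in general.items())
            and len(specific) > len(general))

print(shared(man, woman))        # {'HUMAN', 'ADULT'} (order may vary): the common domain
print(differing(man, woman))     # {'MALE'}: conventional 'opposites'
print(is_hyponym_of(man, human)) # True: 'man' is a kind of 'human'

Note how 'opposites' like man and woman come out as the pair differing on exactly one feature while sharing all the rest, which is precisely the point made above.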
There is common agreement that semantic primitives are basic units which cannot be further divided.[iv] The primitive HUMAN, strictly speaking, does not fit this criterion. This is where redundancy rules come in: they predict the automatic relationships between components. HUMAN presupposes ANIMATE, and MARRIED presupposes HUMAN and ADULT. Thus, for the sake of brevity, HUMAN may be taken as a primitive wherever this does not lead to confusion.
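A redundancy rule of this kind is easy to state mechanically. The short sketch below (the rule set and the code are illustrative only, continuing the toy encoding of the previous sketch) stores only the most specific components and expands the presupposed ones on demand:

# Redundancy rules: a component automatically presupposes others.
REDUNDANCY = {
    "HUMAN":   ["ANIMATE"],
    "MARRIED": ["HUMAN", "ADULT"],
}

def expand(components):
    """Add every component presupposed, directly or indirectly."""
    result = set(components)
    queue = list(components)
    while queue:
        c = queue.pop()
        for implied in REDUNDANCY.get(c, []):
            if implied not in result:
                result.add(implied)
                queue.append(implied)
    return result

print(expand({"MARRIED"}))  # contains MARRIED, HUMAN, ADULT, ANIMATE

The lexicon can thus list HUMAN as if it were primitive, and the rules recover ANIMATE automatically whenever it is needed.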
Some scholars have also suggested that these semantic primitives are not confined to particular languages. Bierwisch (1970)[v] is of the view that 'semantic features cannot be different from language to language, but are rather part of the general human capacity for language, forming a universal inventory used in particular ways in individual languages.' Jean Aitchison (1994) traces this idea back to the seventeenth-century German philosopher Leibniz, who proposed that human beings are born with a basic 'alphabet of human thought', which is 'the catalogue of those concepts which can be understood by themselves, and by whose combination all our other ideas are formed' (Aitchison 1994).
The notion of semantic primitives now becomes a little more complex. It is fairly easy to find meaning atoms in words like man, woman, boy, bull and dog, but what about words like kill, die, consume, eat and digest? They require a different type of semantic primitive. Aitchison (1994) differentiates between things, typically nouns, and events, typically verbs. She accepts that these remain an unsolved matter, but holds that finding out more about them is nevertheless worthwhile. The search for semantic components is ongoing, with many scholars contributing to an ever-evolving body of knowledge. Below is a brief survey of some of these contributions.
We have shown above that, contrary to the popular belief that words are the basic building blocks of meaning, they are in fact not basic but composed of many semantic primitives. However, understanding the meanings of words in isolation will not serve any higher purpose, for how often in real communication is a single word used to express a proposition? The real motivation for understanding word meaning comes from the analysis of sentence meaning: a sentence derives its meaning from the combined meanings of its constituent words, each of which furnishes its individual meaning (in that particular context) in relation to the other constituents of the same sentence. One of the first people to take this approach was J. J. Katz. Some seminal points of his semantic theory are:
semantic rules have to be recursive, just like syntactic rules, to reflect the generative nature of language
syntactic structure and lexical content interact to determine meaning (e.g. John killed Fred has a different meaning from Fred killed John); in other words, meaning is compositional: the way words are combined into phrases, and phrases into sentences, determines the meaning of the sentences
Katz's theory takes input from both the syntactic component of the grammar and the semantic components of the lexicon. In this theory the aims of the semantic component, paralleling the aims of syntax, are:
to give specification of the meanings of lexical items;
to give rules showing how the meanings of lexical items build up into the meanings of phrases and so on up to sentences;
to do this in a universally applicable metalanguage. (Saeed 2003)
Katz approached the notion of semantic components from a generative grammar perspective. He said that, in constructing the true meaning of a phrase, only those meaning components of the individual words should be taken into account which are compatible with the required sense. For example, he gives the semantic components of the two words 'colourful' and 'ball' as[vi]:
colourful: {ADJ}
1a. (colour) [abounding in contrast or variety of bright colours] <(physical object) or (social activity)>
1b. (evaluative) [Having distinctive character, vividness, or picturesqueness] <(aesthetic object) or (social activity)>
ball: {N}
2a. (social activity) (large) (assembly) [for the purpose of social dancing]
2b. (physical object) [having globular shape]
2c. (physical object) [solid missile for projection by engine of war] (Saeed 2003)
Now Katz argues that, to understand the meaning of 'colourful ball' in the sentence 'The man hit the colourful ball', only certain components of the words 'colourful' and 'ball' should be taken into consideration. He introduced selection restrictions, which propose that, in the context of the above sentence, the only relevant components are 1a and 2b.
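The mechanics of this can be sketched computationally. In the toy Python below, the sense data are transcribed from the entries above, while the matching procedure is my own simplification of Katz's machinery: each noun sense carries a semantic marker, each adjective sense a set of markers it may combine with, and incompatible pairings are discarded:

# Adjective senses: (sense id, gloss, markers the head noun may carry).
colourful = [
    ("1a", "abounding in bright colours", {"physical object", "social activity"}),
    ("1b", "having distinctive character", {"aesthetic object", "social activity"}),
]

# Noun senses: (sense id, gloss, semantic marker).
ball = [
    ("2a", "assembly for social dancing",  "social activity"),
    ("2b", "object having globular shape", "physical object"),
    ("2c", "solid missile for projection", "physical object"),
]

def combine(adj_senses, noun_senses):
    """Keep only adjective-noun sense pairs whose markers are compatible."""
    return [(a_id, n_id)
            for a_id, _, allowed in adj_senses
            for n_id, _, marker in noun_senses
            if marker in allowed]

print(combine(colourful, ball))
# [('1a', '2a'), ('1a', '2b'), ('1a', '2c'), ('1b', '2a')]

The verb hit would then impose its own restriction (its object must be a physical object), pruning this list to the single reading (1a, 2b), exactly the pair identified above.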
Katz's work focused on showing how semantic components influence grammatical processes and grammatical structures. However, he also provided justification for semantic components by arguing that, apart from helping to build up the meanings of phrases and sentences, the internal structure of components can explain the relations of hyponymy, antonymy, synonymy, contradiction, entailment, etc. (Saeed 2003)
Shortly after Katz's semantic theory, Ray Jackendoff introduced his conceptual semantics. According to him, describing meaning involves describing mental representations. By asserting that the meaning of a sentence is a conceptual structure, Jackendoff endorsed the justification for semantic components as having an important role in describing rules of semantic inference. He asserted that semantic components could represent the underlying mental representations or associations of individual words. For example, he explains the notion of entailment with:
a. George killed the dragon.
b. The dragon died.
He explains the relationship between 'killed' and 'died' by giving 'killed' a semantic component of 'cause to die'. He asserts that the semantic component CAUSE can be found in many lexical items, such as sell, give, persuade, etc. Jackendoff also introduced the idea of universal semantic categories and said that at the level of conceptual structure a sentence is built up of such semantic categories as Event, State, Material Thing (or Object), Path, Place and Property. Below are examples of the two semantic categories Event and State:
> an EVENT describing movement: Bill went into the house.
syntactic structure: [S [NP Bill] [VP [V went] [PP [P into] [NP the house]]]]
conceptual structure: [Event GO ([Thing BILL], [Path TO ([Place IN ([Thing HOUSE])])])]
> a STATE: The car is in the garage.
syntactic structure: [S [NP The car] [VP [V is] [PP [P in] [NP the garage]]]]
conceptual structure: [State BE ([Thing CAR], [Place IN ([Thing GARAGE])])] (Saeed 2003)
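These bracketed structures are essentially nested function-argument expressions, and they can be mimicked directly in code. The sketch below (the tuple encoding and the printer are my own devising, not Jackendoff's own formalism) rebuilds the two examples above:

# A conceptual structure as a nested (CATEGORY, FUNCTION/NAME, args...) tuple.
bill_went = ("Event", "GO",
             ("Thing", "BILL"),
             ("Path", "TO", ("Place", "IN", ("Thing", "HOUSE"))))

car_in_garage = ("State", "BE",
                 ("Thing", "CAR"),
                 ("Place", "IN", ("Thing", "GARAGE")))

def render(cs):
    """Print a conceptual structure in the bracketed notation used above."""
    category, head, *args = cs
    inner = ", ".join(render(a) for a in args)
    return f"[{category} {head}" + (f" ({inner})" if args else "") + "]"

print(render(bill_went))
# [Event GO ([Thing BILL], [Path TO ([Place IN ([Thing HOUSE])])])]
print(render(car_in_garage))
# [State BE ([Thing CAR], [Place IN ([Thing GARAGE])])]

A decomposition such as 'kill = cause to die' would fit the same mould, e.g. ("Event", "CAUSE", ("Thing", "GEORGE"), ("Event", "DIE", ("Thing", "DRAGON"))).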
He then introduces four subcategories, Spatial Location (Loc), Property or Identity (Ident), Temporal Location (Temp) and Possession (Poss), which differentiate between the subtly different meanings of the verbs BE and GO.
Jackendoff’s approach uses lexical decomposition to investigate the semantics-grammar interface…(this)…presents a view of semantic primitives occurring in highly articulated semantic representation. In this theory these representations are proposed as conceptual structures underlying linguistic behaviour. (Saeed 2003)
Similar to Jackendoff's approach, George Miller and Philip Johnson-Laird (Aitchison 1994) also tried to relate semantic primitives to perceptual primitives. They attempted to anchor them to some kind of observable human experience, to things people can see, hear or feel (Aitchison 1994). This approach sounds interesting, as many have suggested that human beings understand the unknown and abstract with the help of the known and perceptible. For example, babies learn their first words by relating them to certain objects, experiences or phenomena in the physical world, and once they have developed a certain basic pool, they start abstracting, developing and understanding new and unknown concepts with the help of the known ones.
Although, in the words of Jean Aitchison, Miller and Johnson-Laird's work is based on somewhat shaky foundations, they have come up with quite a thorough and lengthy investigation of the subject. For physical objects alone they propose around 100 primitives, such as PLACE, SIZE, STRAIGHT, HORIZONTAL, VERTICAL, BOTTOM, TOP, etc. However, many psychologists do not agree with their proposal that such primitives represent underlying perceptual primitives. And when it comes to explaining intangible and abstract words such as promise, predict or disagree, further problems arise, for how can one 'tie down these to something one can perceive directly'? (Aitchison 1994)
Many others have also tried to explain the notion of semantic components, though without much success. Roger Schank, a researcher in artificial intelligence, suggested that there are around a dozen semantic primitives which form the basis of all the verbs we commonly use. However, he and his colleagues first had difficulty deciding how many such primitives there were. Further, although the analysis of a few words appeared useful, they could not find any empirical evidence for the primitives' existence.
In fact, nobody has so far come up with convincing evidence for the existence of semantic primitives. Why, then, are people interested in them? There appear to be several reasons. One, there is no conclusive evidence that they do not exist; the absence of evidence for their existence does not entail their non-existence, so the search continues. Second, many linguists believe, perhaps rightly, that they help in understanding why certain words overlap in meaning. Third, the concept of the meaning atom fits the way other realms of knowledge are explored: just as chemical compounds can be decomposed into basic elements, and atoms into protons, neutrons and so on, words too could be decomposed into more basic units of meaning, and their combination with other words would depend on their similarities or shared components, much as, in chemistry, the behaviour of atoms in a molecule is decided by their electrons. Jean Aitchison, who appears to be a strong opponent of semantic components, suggests that another reason for proposing them is wishful thinking: their existence would make life easier for anybody working on the problems of meaning, since they would solve the problem of where definitions stop (Aitchison 1994). However, this appears to be a superficial judgement. Would that be theoretically possible? Perhaps not. Otherwise, how would we explain the process of language change, where words keep acquiring new meanings (new semantic components) and losing old ones (shaking off certain semantic components)?
To conclude, it may not be a very good idea to assume that the way we analyse words in componential analysis reflects similar processes in our minds; that may not be true at all. The study of certain semantic components has suggested that they take longer to process than the original composite word: for example, 'He is a bachelor' may take less time to be understood than 'He is not married'.[vii] For that reason, the study of semantic components should be taken only as a means of understanding the relations among the words of a language. It may prove helpful, until we find a better solution, in constructing the meanings of sentences for the purposes of computer-generated language. One must accept, however, that an understanding of semantic components is very helpful for grasping the subtle differences between similar words. Further, they also appear to be crucial for the understanding of synonymy, antonymy, polysemy, hyponymy, hypernymy, contradiction and entailment.
Notes:
[i] The LDOCE (Longman Dictionary of Contemporary English) has come up with a basic vocabulary of around 1800 words, which is used to explain the meanings of over 100,000 vocabulary items.
[ii] The Urdu language has more family-related words than English, because its social and family structure demands frequent communication and interaction.
[iii] Unless you believe that woman was created from the rib of man.
[iv] This is a confusing point, for there may be nothing that is truly indivisible. We may represent HUMAN as +ANIMATE +BIPEDAL +LANGUAGE, yet all of these primitives can be further divided. For this reason, linguists agree that 'indivisible' or 'cannot be further subdivided' is to be understood within a particular frame of reference. Thus, while analysing man, woman and married, the notion HUMAN counts as primitive and indivisible, since subdividing it would add nothing to the analysis of man, woman and married.
[v] Quoted in Aitchison (1994).
[vi] One may find the subdivision of these words into such complex semantic components somewhat contrary to the presupposition that semantic components are basic, primitive atoms of meaning. However, as mentioned above, though these components may appear complex, they are basic as far as the contextual explanation is concerned.
[vii] Jean Aitchison mentions that processing 'not married' took longer than 'bachelor'. This is somewhat controversial, however. I personally think that processing 'not' is psychologically more complex: one has first to process the whole original concept and then add negation to it. So it is no surprise that 'not married' took longer to process.
Bibliography:
Aitchison, Jean (1994) Words in the Mind: An Introduction to the Mental Lexicon. Oxford: Blackwell.
Lyons, John (1977) Semantics. 2 vols. Cambridge: CUP.
Lyons, John (1995) Linguistic Semantics: An Introduction. Cambridge: CUP.
Palmer, Frank (1981) Semantics: A New Outline. Cambridge: CUP.
Saeed, John I (2003) Semantics. (2nd Edition) Oxford: Blackwell.