March 27, 2011


Content-involving states and actions are individuated in part by reference to the agent’s relations to things and properties in his environment. Wanting to see a particular movie and believing that the building over there is a cinema showing it make rational the action of walking in the direction of that building.
In the general philosophy of mind, desire has recently received new attention from those who understand mental states in terms of their causal or functional role in the determination of rational behaviour, and in particular from philosophers trying to understand the semantic content or intentional character of mental states in those terms. ‘Functionalism’ treats mental states and events as causally mediating between a subject’s sensory inputs and that subject’s ensuing behaviour. Functionalism proper is the stronger doctrine that what makes a mental state the type of state it is ~ a pain, a smell of violets, a belief that the koala (an arboreal Australian marsupial, Phascolarctos cinereus) is dangerous ~ is the functional relation it bears to the subject’s perceptual stimuli, behavioural responses, and other mental states.
Conceptual (sometimes computational, cognitive, causal or functional) role semantics (CRS) entered philosophy through the philosophy of language, not the philosophy of mind. The core idea behind conceptual role semantics in the philosophy of language is that the way linguistic expressions are related to one another determines what the expressions in the language mean. There is a considerable affinity between conceptual role semantics and the structuralist semiotics that has been influential in linguistics. According to the latter, languages are to be viewed as systems of differences: the basic idea is that the semantic force (or ‘value’) of an utterance is determined by its position in the space of possibilities that one’s language offers. Conceptual role semantics also has affinities with what artificial intelligence researchers call ‘procedural semantics’; the essential idea here is that providing a compiler for a language is equivalent to specifying a semantic theory for it ~ meanings are the procedures that a computer is instructed to execute by a program.
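The ‘procedural semantics’ idea lends itself to a small illustration. The sketch below (Python, with an invented mini-language of numerals, ADD and MUL) identifies the meaning of an expression with the procedure its ‘compiler’ produces for it; it is a toy instance of the idea under stated assumptions, not a claim about any actual AI system.

```python
# Toy "procedural semantics": the meaning of an expression is identified
# with the procedure a machine would execute for it. The mini-language
# (nested tuples with "ADD"/"MUL") is invented for illustration.

def compile_expr(expr):
    """Compile a nested tuple like ("ADD", 2, ("MUL", 3, 4)) into a procedure."""
    if isinstance(expr, int):
        return lambda: expr          # a numeral's meaning: the procedure returning it
    op, left, right = expr
    lproc, rproc = compile_expr(left), compile_expr(right)
    if op == "ADD":
        return lambda: lproc() + rproc()
    if op == "MUL":
        return lambda: lproc() * rproc()
    raise ValueError(f"unknown operator: {op}")

# On this view, specifying the compiler *is* specifying the semantics:
# the expression's meaning is exhausted by the procedure it compiles to.
meaning = compile_expr(("ADD", 2, ("MUL", 3, 4)))
print(meaning())  # executing the procedure yields 14
```

The design point is that nothing outside the compiler ~ no mapping to worldly referents ~ is invoked in saying what the expression means.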
Nevertheless, according to conceptual role semantics, the meaning of a thought is determined by the thought’s role in a system of states; to specify a thought is not to specify its truth or referential conditions, but to specify its role. Walter’s and twin-Walter’s thoughts, though differing in truth and referential conditions, share the same conceptual role, and it is by virtue of this commonality that they behave type-identically. If Walter and twin-Walter each has a belief that he would express by ‘water quenches thirst’, conceptual role semantics can explain and predict their dipping their cans into H2O and XYZ respectively. Thus conceptual role semantics would seem promising ~ though not to Jerry Fodor, who rejects it on both external and internal grounds.
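The Walter/twin-Walter point can be put schematically. In the sketch below (names and strings purely illustrative), both twins share one internal role ~ the same mapping from beliefs to behaviour types ~ while the environment alone fixes what the clear liquid actually is.

```python
# Toy Twin Earth: identical conceptual role, different referents.
# The belief/behaviour strings and environment names are illustrative only.

def conceptual_role(belief):
    """Shared by both twins: maps an internal belief state to a behaviour type."""
    if belief == "water quenches thirst":
        return "dip can into the clear liquid"
    return "do nothing"

def referent(environment):
    """Fixed by the environment, not by anything internal to the agent."""
    return {"Earth": "H2O", "Twin Earth": "XYZ"}[environment]

walter_action = conceptual_role("water quenches thirst")
twin_action = conceptual_role("water quenches thirst")
assert walter_action == twin_action                  # type-identical behaviour
assert referent("Earth") != referent("Twin Earth")   # different truth conditions
```

The commonality that predicts behaviour lives entirely in `conceptual_role`; the difference in truth conditions lives entirely in `referent`, which is conceptual role semantics’ way of carving the case.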
Nonetheless, if, as Fodor contends, thoughts have recombinable linguistic ingredients, then, of course, for the conceptual role semantics theorist, questions arise about the role of expressions in the language of thought as well as in the public language we speak and write. And, accordingly, conceptual role semantics theorists divide not only over their aims, but also over conceptual role semantics’ proper domain. Some hold that public meaning is somehow derivative from (or inherited from) an internal mental language (mentalese), and that a mentalese expression has autonomous meaning. So, for example, the inscriptions on this page require for their understanding translation, or at least transliteration, into the language of thought; representations in the brain require no such translation or transliteration. Others hold that the language of thought just is public language internalized, and that it is public expressions which have original (or primary) meaning in virtue of their conceptual role.
After one decides upon the aims and the proper province of conceptual role semantics, there remains the question of which relations among expressions ~ public or mental ~ constitute their conceptual roles. Because most conceptual role semantics theorists leave the notion of a conceptual role as a blank cheque, the options are open-ended. The conceptual role of a (mental) expression might be its causal associations: any disposition to token (for example, utter or think) the expression ‘ℯ’ when tokening another ‘ℯ’’, or an ordered n-tuple ⟨ℯ’, ℯ’’, . . . ⟩, or vice versa, can count as the conceptual role of ‘ℯ’. A more common option is to characterize conceptual roles not causally but inferentially (these need not be incompatible, contingent upon one’s attitude toward the naturalization of inference): the conceptual role of an expression ‘ℯ’ in a language L might consist of the set of actual and potential inferences to ‘ℯ’, or the set of actual and potential inferences from ‘ℯ’, or, more commonly, the ordered pair consisting of these two sets. But if it is sentences which have non-derived inferential roles, what would it mean to talk of the inferential role of words? Some have found it natural to think of the inferential role of a word as represented by the set of inferential roles of the sentences in which the word appears.
The expectation that one sort of thing could serve all these tasks went hand in hand with what has come to be called the ‘Classical View’ of concepts, according to which each concept has an ‘analysis’ consisting of conditions that are individually necessary and jointly sufficient for its satisfaction, and which are known to any competent user of it. The standard example is the especially simple one of [bachelor], which seems to be identical to [eligible unmarried male]. A more interesting, but more problematic, example is [knowledge], which was traditionally thought to be analysable as [justified true belief].
This Classical View seems to offer an illuminating answer to a certain form of metaphysical question ~ in virtue of what is something the kind of thing it is, i.e., in virtue of what is a bachelor a bachelor? ~ and it does so in a way that supports counterfactuals: it tells us what would satisfy the concept in situations other than the actual one (although all actual bachelors might turn out to be freckled, it is possible that there might be unfreckled ones, since the analysis does not exclude that). The view also seems to offer an answer to an epistemological question: how people seem to know a priori (or independently of experience) about the nature of many things, e.g., that bachelors are unmarried. It is constitutive of the competency (or possession) conditions of a concept that its users know its analysis, at least on reflection.
The Classical View, however, has always had to face the difficulty of primitive concepts: it is all well and good to claim that competence consists in some sort of mastery of a definition, but what about the primitive concepts in which a process of definition must ultimately end? Here the British Empiricism of the seventeenth century began to offer a solution: all the primitives were sensory. Indeed, the empiricists expanded the Classical View to include the claim, now often uncritically taken for granted in discussions of that view, that all concepts are ‘derived from experience’. ‘Every idea is derived from a corresponding impression’, in the work of John Locke (1632-1704), George Berkeley (1685-1753) and David Hume (1711-76), was often thought to mean that concepts were somehow composed of introspectible mental items ~ ‘images’, ‘impressions’ ~ that were ultimately decomposable into basic sensory parts. Thus, Hume analysed the concept of [material object] as involving certain regularities in our sensory experience, and [cause] as involving spatio-temporal contiguity and constant conjunction.
The Irish ‘idealist’ George Berkeley noticed a problem with this approach that every generation has had to rediscover: if a concept is a sensory impression, like an image, then how does one distinguish the general concept [triangle] from a more particular one ~ say, [isosceles triangle] ~ that would serve in imagining the general one? More recently, Wittgenstein (1953) called attention to the multiple ambiguity of images. And in any case, images seem quite hopeless for capturing the concepts associated with logical terms (what is the image for negation or possibility?). Whatever the role of such representations, full conceptual competence must involve something more.
Consequently, in addition to images and impressions and other sensory items, a full account of concepts needs to consider items of logical structure. This is precisely what the logical positivists did, focussing on logically structured sentences instead of sensations and images, and transforming the empiricist claim into the famous ‘Verifiability Theory of Meaning’: the meaning of a sentence is the means by which it is confirmed or refuted, ultimately by sensory experience; the meaning or concept associated with a predicate is the means by which people confirm or refute whether something satisfies it.
This once-popular position has come under much attack in philosophy in the last fifty years. In the first place, few, if any, successful ‘reductions’ of ordinary concepts (like [material object] or [cause]) to purely sensory concepts have ever been achieved. Our concepts of material object and causation seem to go far beyond mere sensory experience, just as our concepts in a highly theoretical science seem to go far beyond the often meagre evidence we can adduce for them.
The American philosopher of mind Jerry Alan Fodor and Ernest LePore (1992) have recently argued that the arguments for meaning holism are less than compelling, and that there are important theoretical reasons for holding out for an entirely atomistic account of concepts. On this view, concepts have no ‘analyses’ whatsoever: they are simply ways in which people are directly related to individual properties in the world, and such a relation might obtain for one concept but not for any other. In principle, someone might have the concept [bachelor] and no other concepts at all, much less any ‘analysis’ of it. Such a view goes hand in hand with Fodor’s rejection not only of verificationist but of any empiricist account of concept learning and construction: given the failure of empiricist constructions, Fodor (1975, 1979) notoriously argued that concepts are not constructed or ‘derived’ from experience at all, but are, nearly enough, all innate.
The debate about whether there are innate ideas is as old as philosophy itself; it dates from Plato (c. 429-347 BC), in whose ‘Meno’ the doctrine of ‘anamnesis’ is an answer to the problem: if we do not understand something, then we cannot set about learning it, since we do not know enough to know how to begin. Teachers also come across the problem in the shape of students who cannot understand why their work deserves lower marks than that of others. The worry is echoed in philosophies of language that see the infant as a ‘little linguist’, having to translate its environmental surroundings into, and get a grasp upon, the upcoming language. The language of thought hypothesis, especially associated with Fodor, holds that mental processing occurs in a language different from one’s ordinary native language, but underlying and explaining our competence with it. The idea is a development of the Chomskyan notion of an innate universal grammar. It is a way of drawing the analogy between the workings of the brain or mind and those of a standard computer, since computer programs are linguistically complex sets of instructions whose execution explains the surface behaviour of the computer. As an explanation of ordinary language-learning, however, it has not found universal favour. It apparently explains ordinary representational powers only by invoking innate representations of the same sort, and it invites the image of the learning infant translating into an inner language whose own powers are a mysterious biological given.
René Descartes (1596-1650) and Gottfried Wilhelm Leibniz (1646-1716) defended the view that the mind contains innate ideas; Berkeley, Hume and Locke attacked it. In fact, as we now conceive the great debate between European Rationalism and British Empiricism in the seventeenth and eighteenth centuries, the doctrine of innate ideas is a central bone of contention: rationalists typically claim that knowledge is impossible without a significant stock of general innate concepts or judgements; empiricists argued that all ideas are acquired from experience. This debate is replayed with more empirical content and with considerably greater conceptual complexity in contemporary cognitive science, most particularly within the domains of psycholinguistic theory and cognitive developmental theory.
Some philosophers may be cognitive scientists; others concern themselves with the philosophy of cognitive psychology and cognitive science. Since the inauguration of cognitive science these disciplines have attracted much attention from certain philosophers of mind. The attitudes of these philosophers, and their reception by psychologists, vary considerably. Many cognitive psychologists have little interest in philosophical issues. Cognitive scientists are, in general, more receptive.
Fodor, because of his early involvement in sentence-processing research, is taken seriously by many psycholinguists. His modularity thesis is directly relevant to questions about the interplay of different types of knowledge in language understanding. His innateness hypothesis, however, is generally regarded as unhelpful, and his prescription that cognitive psychology is primarily about propositional attitudes is widely ignored. The recent work of the American philosopher of mind Daniel Clement Dennett (1942- ) on consciousness treats a topic that is highly controversial, but his detailed discussion of psychological research findings has enhanced his credibility among psychologists. In general, however, psychologists are happy to get on with their work without philosophers telling them about their ‘mistakes’.
Connectionism has provoked a somewhat different reaction among philosophers. Some ~ mainly those who, for other reasons, were disenchanted with traditional artificial intelligence research ~ have welcomed this new approach to understanding brain and behaviour. They have used the successes, apparent or otherwise, of connectionist research to bolster their arguments for a particular approach to explaining behaviour. Whether this neuro-philosophy will eventually be widely accepted is a different question. One of its main dangers is succumbing to a form of reductionism that most cognitive scientists, and many philosophers of mind, find incoherent.
One must be careful not to caricature the debate. It is too easy to see it as pitting innatists, who argue that all concepts or all linguistic knowledge are innate (certain remarks of Fodor and of Chomsky lend themselves to this interpretation), against empiricists who argue that there is no innate cognitive structure to which one need appeal in explaining the acquisition of language or the facts of cognitive development (an extreme reading of the American philosopher Hilary Putnam, 1926- ). But this would be a silly and sterile debate indeed. For obviously, something is innate. Brains are innate. And the structure of the brain must constrain the nature of cognitive and linguistic development to some degree. Equally obviously, something is learned, and learned as opposed to merely grown, as limbs or hair grow. For not all of the world’s citizens end up speaking English, or knowing the theory of relativity. The interesting questions then all concern exactly what is innate, to what degree it counts as knowledge, and what is learned and to what degree its content and structure are determined by innately specified cognitive structure. And that is plenty to debate about.
The arena in which the innateness debate has been prosecuted with the greatest vigour is that of language acquisition, and it is appropriate to begin there. But the debate extends to the domain of general knowledge and reasoning abilities, through the investigation of the development of object constancy ~ the disposition to conceive of physical objects as persisting when unobserved, and to reason about their properties and locations when they are not perceptible.
The most prominent exponent of the innateness hypothesis in the domain of language acquisition is Chomsky (1966, 1975). His research, and that of his colleagues and students, is responsible for developing the influential and powerful framework of transformational grammar that dominates current linguistic and psycholinguistic theory. This body of research has amply demonstrated that the grammar of any human language is a highly systematic, abstract structure, and that there are certain basic structural features shared by the grammars of all human languages, collectively called ‘universal grammar’. Variations among the specific grammars of the world’s languages can be seen as reflecting different settings of a small number of parameters that can, within the constraints of universal grammar, take any of several different values. All of the principal arguments for the innateness hypothesis in linguistic theory turn on this central insight about grammars. The principal arguments are these: (1) the argument from the existence of linguistic universals; (2) the argument from patterns of grammatical errors in early language learners; (3) the poverty of the stimulus argument; (4) the argument from the ease of first-language learning; (5) the argument from the relative independence of language learning and general intelligence; and (6) the argument from the modularity of linguistic processing.
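The parameter-setting picture can be given a deliberately minimal sketch. The single binary ‘head-direction’ parameter below is an invented simplification: universal grammar fixes the rule schema (a phrase is a head plus its complement), and the parameter setting alone yields different surface orders.

```python
# Schematic "principles and parameters" toy: one invented binary parameter
# (head-initial vs head-final) over a fixed, universal phrase schema.
# Real parameter theories are far richer; this only illustrates the shape.

def build_phrase(head, complement, head_initial=True):
    """Universal schema: phrase = head + complement; the parameter fixes order."""
    return [head, complement] if head_initial else [complement, head]

# English-like setting (head-initial): the verb precedes its object.
print(build_phrase("eat", "apples", head_initial=True))   # ['eat', 'apples']
# Japanese-like setting (head-final): the object precedes the verb.
print(build_phrase("eat", "apples", head_initial=False))  # ['apples', 'eat']
```

The point of the sketch is the division of labour: the schema (the ‘universal’ part) is never itself learned; only the parameter value varies across languages.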
Innatists argue (Chomsky 1966, 1975) that the very presence of linguistic universals argues for the innateness of linguistic knowledge, but more important and more compelling is the fact that these universals are, from the standpoint of communicative efficiency, or from the standpoint of any plausible simplicity metric, adventitious. There are many conceivable grammars, and those determined by universal grammar are not ipso facto the most efficient or the simplest. Nonetheless, all human languages satisfy the constraints of universal grammar. Since neither the communicative environment nor the communicative tasks can explain this phenomenon, it is reasonable to suppose that it is explained by the structure of the mind ~ and therefore by the fact that the principles of universal grammar lie innate in the mind and constrain the languages that a human can acquire.
Empiricists respond in several ways. Hilary Putnam argues, by appeal to common sense, that linguistic universals might instead be explained by the inheritance of features of a common ancestral language by its descendants. Or it might turn out that, despite the lack of direct evidence at present, the features of universal grammar do in fact serve either the goals of communicative efficacy or simplicity according to a metric of psychological importance. Finally, empiricists point out, the very existence of universal grammar might be a trivial logical artefact: for one thing, any finite set of structures will have some features in common. Since there is a finite number of languages, it follows trivially that there are features they all share. Moreover, it is argued that many features of universal grammar are interdependent; in fact, the set of fundamental principles shared by the world’s languages may be rather small. Hence, even if these are innately determined, the amount of innate knowledge thereby required may be quite small as compared with the total corpus of general linguistic knowledge acquired by the first-language learner.
These replies are rendered less plausible, innatists argue, when one considers the fact that the errors language learners make in acquiring their first language seem to be driven far more by abstract features of grammar than by any available input data. So, despite receiving correct examples of irregular plurals or past-tense forms, and despite having correctly formed the irregular forms for those words, children will often incorrectly regularize irregular verbs once they acquire mastery of the rule governing regulars in their language. And in general, not only are the inductions of linguistic rules by young language learners correct given the absence of confirmatory data and the presence of refuting data, but, more importantly, children’s erroneous inductions are always consistent with universal grammar, oftentimes simply representing the incorrect setting of a parameter in the grammar. More generally, innatists argue (Chomsky 1966, 1977; Crain 1991), all grammatical rules that have ever been observed satisfy the structure-dependence constraint. That is, many linguists and psycholinguists argue that all known grammatical rules of all of the world’s languages, including the fragmentary languages of young children, must be stated as rules governing hierarchical sentence structure, and not as rules governing, say, sequences of words. Many of these constraints, such as the constituent-command constraint governing anaphora, are highly abstract indeed, and appear to be respected by even very young children. Such constraints may, innatists argue, be necessary conditions of learning natural language in the absence of specific instruction, modelling and correction ~ the conditions in which all first-language learners acquire their native language.
An important empiricist reply to these observations derives from recent studies of ‘connectionist’ models of first-language acquisition. Connectionist systems, not previously trained to represent any subset of universal grammar, that are trained on a corpus including a large set of regular forms and a few irregulars, also tend to over-regularize, exhibiting the same U-shaped learning curve seen in human language acquisition. Such learning systems also acquire ‘accidental’ rules on which they are not explicitly trained but which are consistent with those upon which they are trained, suggesting that as children acquire portions of their grammar, they may accidentally ‘learn’ consistent rules, which may be correct in other human languages, but which must then be ‘unlearned’ in their home language. On the other hand, such ‘empiricist’ language acquisition systems have yet to demonstrate the ability to induce a sufficiently wide range of the rules hypothesized to be comprised by universal grammar to constitute a definitive empirical argument for the possibility of natural language acquisition in the absence of a powerful set of innate constraints.
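The U-shaped over-regularization curve can be illustrated schematically. The sketch below is a hand-coded staged learner, not a connectionist network; the stages and verbs are invented solely to exhibit the pattern the paragraph describes.

```python
# Schematic (non-connectionist) sketch of the U-shaped curve: correct rote
# performance on irregulars, then over-application of the "+ed" rule, then
# relearned exceptions. A toy of the pattern, not a model of acquisition.

IRREGULARS = {"go": "went", "sing": "sang"}   # illustrative mini-lexicon

def past_tense(verb, stage):
    if stage == "rote":               # memorized forms only
        return IRREGULARS.get(verb)
    if stage == "over-regularizing":  # the induced rule swamps the exceptions
        return verb + "ed"
    if stage == "mature":             # rule plus relearned exceptions
        return IRREGULARS.get(verb, verb + "ed")

def accuracy_on_irregulars(stage):
    correct = sum(past_tense(v, stage) == t for v, t in IRREGULARS.items())
    return correct / len(IRREGULARS)

curve = [accuracy_on_irregulars(s) for s in ("rote", "over-regularizing", "mature")]
print(curve)  # [1.0, 0.0, 1.0] -- the U-shape on irregular verbs
```

Note that the mid-stage errors ("goed", "singed") are themselves rule-governed, which is the innatist’s and the connectionist’s shared datum; they differ over what produces the rule.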
The poverty of the stimulus argument has been of enormous influence in innateness debates, though its soundness is hotly contested. Chomsky notes that (1) the examples of their target language to which language learners are exposed are always jointly compatible with an infinite number of alternative grammars, and so vastly under-determine the grammar of the language; (2) the corpus always contains many examples of ungrammatical sentences, which should in fact serve as falsifiers of any empirically induced correct grammar of the language; and (3) there is, in general, no explicit reinforcement of correct utterances or correction of incorrect utterances, either by the learner or by those in the immediate training environment. Therefore, he argues, since it is impossible to explain the learning of the correct grammar ~ a task accomplished by all normal children within a very few years ~ on the basis of any available data or known learning algorithms, it must be that the grammar is innately specified, and is merely ‘triggered’ by relevant environmental cues.
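Premise (1) ~ that finite data under-determine the grammar ~ can be made concrete with a toy example: two invented grammars that agree on a small corpus but disagree outside it, so the data alone cannot decide between them.

```python
# Toy underdetermination: a finite corpus is compatible with distinct
# grammars. The two grammars here ("a^n b^n" vs "any a's then any b's")
# are illustrative stand-ins for rival hypotheses a learner might induce.

import re

corpus = ["ab", "aabb", "aaabbb"]

def anbn(s):
    """Grammar 1: equal runs, a^n b^n with n >= 1."""
    n = len(s) // 2
    return n > 0 and s == "a" * n + "b" * n

def astar_bstar(s):
    """Grammar 2: one or more a's followed by one or more b's."""
    return re.fullmatch(r"a+b+", s) is not None

# Both grammars accept every string the learner has ever heard:
assert all(anbn(s) for s in corpus)
assert all(astar_bstar(s) for s in corpus)
# Yet they disagree outside the sample, so the corpus cannot choose:
assert not anbn("aab") and astar_bstar("aab")
```

With infinitely many grammars in the hypothesis space, no finite corpus narrows the field to one; that is the formal core the innatist exploits.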
The American linguist, philosopher and political activist Noam Avram Chomsky (1928- ) believes that the speed with which children master their native language cannot be explained by learning theory, but requires acknowledging an innate disposition of the mind: an unlearned, innate and universal grammar, supplying the kinds of rule that the child will a priori understand to be embodied in the examples of speech with which it is confronted. In computational terms, unless the child came bundled with the right kind of software, it could not catch on to the grammar of its language as it in fact does.
As is well known from arguments due to the Scottish philosopher David Hume (1978), the Austrian philosopher Ludwig Wittgenstein (1953), the American philosopher Nelson Goodman (1972) and the American logician and philosopher Saul Aaron Kripke (1982), in all cases of empirical abduction, and of training in the use of a word, data under-determine theories. This moral is emphasized by the American philosopher Willard van Orman Quine (1954, 1960) as the principle of the underdetermination of theory by data. But we nonetheless do abduce adequate theories in science, and we do learn the meanings of words. And it would be bizarre to suggest that all correct scientific theories or the facts of lexical semantics are innate.
But, innatists reply, when the empiricist relies on the underdetermination of theory by data as a counter-example, a significant disanalogy with language acquisition is ignored: the abduction of scientific theories is a difficult, laborious process, taking a sophisticated theorist a great deal of time and deliberate effort. First-language acquisition, by contrast, is accomplished effortlessly and very quickly by a small child. The enormous relative ease with which such a complex and abstract domain is mastered by such a naïve ‘theorist’ is evidence for the innateness of the knowledge achieved.
Empiricists such as the American philosopher Hilary Putnam (1926- ) have rejoined that innatists under-estimate the amount of time that language learning actually takes, focussing only on the number of years from the apparent onset of acquisition to the achievement of relative mastery over the grammar. Instead of noting how short this interval is, they argue, one should count the total number of hours spent listening to language and speaking during this time. That number is in fact quite large, and is comparable to the number of hours of study and practice required for the acquisition of skills that are not argued to derive from innate structures, such as chess playing or musical composition. Hence, when the total hours are taken into consideration, language learning looks more like one more case of human skill acquisition than like a special unfolding of innate knowledge.
Innatists, however, note that while the ease with which most such skills are acquired depends on general intelligence, language is learned with roughly equal speed, and to roughly the same degree, across a wide range of general intelligence. In fact, even significantly retarded individuals, absent specific language deficits, acquire their native language on a time-scale and to a degree comparable to that of normally intelligent children. The language acquisition faculty hence appears to allow access to a sophisticated body of knowledge independent of the sophistication of the general knowledge of the language learner.
Empiricists reply that this argument ignores the centrality of language in a wide range of human activities, and consequently the enormous attention paid to language acquisition by retarded youngsters and their parents or caretakers. They argue as well that innatists overstate the parity in linguistic competence between retarded children and children of normal intelligence.
Innatists point out that the ‘modularity’ of language processing is a powerful argument for the innateness of the language faculty. There is a large body of evidence, innatists argue, for the claim that the processes that subserve the acquisition, understanding and production of language are quite distinct from, and independent of, those that subserve general cognition and learning. That is to say, language learning and language processing mechanisms, and the knowledge they embody, are domain specific ~ grammar and grammatical learning and utilization mechanisms are not used outside of language processing. They are informationally encapsulated ~ only linguistic information is relevant to language acquisition and processing. They are mandatory ~ language learning and language processing are automatic. Moreover, language is subserved by specific dedicated neural structures, damage to which predictably and systematically impairs linguistic functioning. All of this suggests a specific ‘mental organ’, to use Chomsky’s phrase, that has evolved in the human cognitive system specifically in order to make language possible. This specific structure or organ simultaneously constrains the range of possible human languages and guides the learning of a child’s target language, later making rapid on-line language processing possible. The principles represented in this organ constitute the innate linguistic knowledge of the human being. Additional evidence for the early operation of such an innate language acquisition module derives from the many infant studies showing that infants selectively attend to soundstreams that are prosodically appropriate, that have pauses at clausal boundaries, and that contain linguistically permissible phonological sequences.
It is fair to ask where we get the powerful inner code whose representational elements need only systematic construction to express, for example, the thought that cyclotrons are bigger than black holes. But on this matter the language of thought theorist has little to say. All that ‘concept’ learning could be (assuming it is some kind of rational process and not due to mere physical maturation or a bump on the head), according to the language of thought theorist, is the trying out of combinations of existing representational elements to see if a given combination captures the sense (as evinced in its use) of some new concept. The consequence is that concept learning, conceived as the expansion of our representational resources, simply does not happen. What happens instead is that we work with a fixed, innate repertoire of elements whose combination and construction must express any content we can ever learn to understand.
Representationalism is, by and large, the doctrine that the mind (or sometimes the brain) works on representations of the things and features of things that we perceive or think about. In the philosophy of perception the view is especially associated with the French Cartesian philosopher Nicolas Malebranche (1638-1715) and the English philosopher John Locke (1632-1704), who, holding that the mind is a container for ideas, held that of our real ideas, some are adequate and some are inadequate ~ the adequate ones being those the mind supposes to be taken from archetypes, which it intends them to stand for, and to which it refers them. The problems in this account were mercilessly exposed by the French theologian and philosopher Antoine Arnauld (1612-94) and the French critic of Cartesianism Simon Foucher (1644-96), writing against Malebranche, and by the idealist George Berkeley, writing against Locke. The fundamental problem is that the mind, in ‘supposing’ its ideas to represent something else, has no access to this something else except by forming another idea. The difficulty is to understand how the mind ever escapes from the world of representations, or how its ideas acquire genuine content pointing beyond themselves. In more recent philosophy, the analogy between the mind and a computer has suggested that the mind or brain manipulates signs and symbols, thought of as like the instructions in a machine’s program, that stand for aspects of the world. The point is sometimes put by saying that the mind, on this theory, becomes a syntactic engine rather than a semantic engine. Representation is also attacked, at least as a central concept in understanding the mind, by ‘pragmatists’, who emphasize instead the activities surrounding a use of language, rather than what they see as a mysterious link between mind and world.
 Representations, along with mental states, especially beliefs and thoughts, are said to exhibit 'intentionality' in that they refer to or stand for something other than themselves. The nature of this special property, however, has seemed puzzling. Not only is intentionality often assumed to be limited to humans, and possibly a few other species, but the property itself appears to resist characterization in physicalist terms. The problem is most obvious in the case of 'arbitrary' signs, like words, where it is clear that there is no connection between the physical properties of a word and what it denotes; yet the problem remains even for Iconic representation.
 Early attempts tried to establish the link between sign and object via the mental states of the symbol's user. A symbol # stands for ✺ for S if it triggers a ✺-idea in S. On one account, the reference of # is the ✺-idea itself. On the other major account, the denotation of # is whatever the ✺-idea denotes. The first account is problematic in that it fails to explain the link between symbols and the world. The second is problematic in that it just shifts the puzzle inward. For example, if the word 'table' triggers a mental image or the mental word 'TABLE', what gives this mental picture or word any reference at all, let alone the denotation normally associated with the word 'table'?
 An alternative to these mentalistic theories has been to adopt a behaviouristic analysis. On this account, that # denotes ✺ for S is explained along the lines of either (1) S is disposed to behave toward # as toward ✺, or (2) S is disposed to behave in ways appropriate to ✺ when presented with #. Both versions prove faulty in that the very notions of the behaviour associated with or appropriate to ✺ are obscure. In addition, there seem to be no reasonable correlations between behaviour toward signs and behaviour toward their objects that are capable of accounting for the referential relation.
 A currently influential attempt to 'naturalize' the representation relation takes its cue from indices. The crucial link between sign and object is established by some causal connection between ✺ and #, though it is allowed that such a causal relation is not sufficient for full-blown intentional representation. An increase in temperature causes the mercury to rise in the thermometer, but the mercury level is not a representation for the thermometer. In order for # to represent ✺ to S, the causal relation must play an appropriate role in the functional economy of S's activity. The notion of 'function', in turn, is to be spelled out along biological or other lines so as to remain within 'naturalistic' constraints. This approach runs into problems in specifying a suitable notion of 'function' and in accounting for the possibility of misrepresentation. Also, it is not obvious how to extend the analysis to encompass the semantic force of more abstract or theoretical symbols. These difficulties are further compounded when one takes into account the social factors that seem to play a role in determining the denotative properties of our symbols.
 The problems faced in providing a reductive naturalistic analysis of representation have led many to doubt that this task is achievable or necessary. Although a story can be told about how some words or signs were learned via association or other causal connections with their referents, there is no reason to believe that the 'stand-for' relation, or semantic notions in general, can be reduced to or eliminated in favour of non-semantic terms.
 Although linguistic and pictorial representations are undoubtedly the most prominent symbolic forms we employ, the range of representational systems humans understand and regularly use is surprisingly large. Sculptures, maps, diagrams, graphs, gestures, musical notation, traffic signs, gauges, scale models, and tailor's swatches are but a few of the representational systems that play a role in communication, thought, and the guidance of behaviour. Indeed, the importance and prevalence of our symbolic activities has been taken as a hallmark of the human.
 What is it that distinguishes items that serve as representations from other objects or events? And what distinguishes the various kinds of symbols from each other? As for the first question, there has been general agreement that the basic notion of a representation involves one thing's 'standing for', 'being about', 'referring to' or 'denoting' something else. The major debates have been over the nature of this connection between a representation and that which it represents. As for the second question, perhaps the most famous and extensive attempt to organize and differentiate among alternative forms of representation is found in the work of the American philosopher of science Charles Sanders Peirce (1839-1914), who graduated from Harvard in 1859 and, apart from lecturing at Johns Hopkins University from 1879 to 1884, held almost no teaching post. Peirce's theory of signs is complex, involving a number of concepts and distinctions that are no longer paid much heed. The aspect of his theory that remains influential and is widely cited is his division of signs into Icons, Indices and Symbols. Icons are signs that are said to be like or resemble the things they represent, e.g., portrait paintings. Indices are signs that are connected to their objects by some causal dependency, e.g., smoke as a sign of fire. Symbols are signs that are related to their objects by virtue of use or association: they are arbitrary labels, e.g., the word 'table'. This tripartite division among signs, or variants of it, is routinely put forth to explain differences in the way representational systems are thought to establish their links to the world.
Further, placing a representation in one of the three divisions has been used to account for the supposed differences between conventional and non-conventional representations, between representations that do and do not require learning to understand, and between representations, like language, that need to be read and those which do not require interpretation. Some theorists, moreover, have maintained that it is only the use of symbols that exhibits or indicates the presence of mind and mental states.
 Over the years, this tripartite division of signs, although often challenged, has retained its influence. More recently, an alternative approach to representational systems (or, as he calls them, 'symbol systems') has been put forth by the American philosopher Nelson Goodman (1906-98), whose treatment of the classical problem of 'induction' is often phrased in terms of finding some reason to expect that nature is uniform; in Fact, Fiction, and Forecast (1954) Goodman showed that we need in addition some reason for preferring some uniformities to others, for without such a selection the uniformity of nature is vacuous. Goodman (1976) has proposed a set of syntactic and semantic features for categorizing representational systems. His theory provides for a finer discrimination among types of systems than Peirce's categories allow. What also emerges clearly is that many rich and useful systems of representation lack a number of features taken to be essential to linguistic or sentential forms of representation, e.g., discrete alphabets and vocabularies, syntax, logical structure, inference rules, compositional semantics and recursive compounding devices.
As a consequence, although these representations can be appraised for accuracy or correctness, it does not seem possible to analyse such evaluative notions along the lines of standard truth theories, geared as they are to the structure found in sentential systems.
 In light of this newer work, serious questions have been raised about the soundness of the tripartite division and about whether various of the psychological and philosophical claims concerning conventionality, learning, interpretation, and so forth that have been based on this traditional analysis can be sustained. It is of special significance that Goodman has joined a number of theorists in rejecting accounts of Iconic representation in terms of resemblance. The rejection has been twofold. First, as Peirce himself recognized, resemblance is not sufficient to establish the appropriate referential relations. The numerous prints of a lithograph do not represent one another, any more than identical twins represent each other. Something more than resemblance is needed to establish the connection between an Icon or picture and what it represents. Second, since Iconic representations lack as many properties as they share with their referents, and certain non-Iconic symbols can be placed in correspondence with their referents, it is difficult to provide a non-circular account of the similarity that distinguishes Icons from other forms of representation. What is more, even if these two difficulties could be resolved, it would not show that the representational function of pictures can be understood independently of an associated system of interpretation. The design □ may be a picture of a mountain, a sign for the economy, or a character in a foreign language; or it may have no representational significance at all. Whether it is a representation, and what kind of representation it is, is relative to a system of interpretation.
 If so, then what is the explanatory role of providing reasons for our psychological states and intentional acts? Clearly part of this role comes from the justificatory nature of the reason-giving relation: 'Things are made intelligible by being revealed to be, or to approximate to being, as they rationally ought to be'. For some writers the justificatory and explanatory tasks of reason-giving simply coincide. The manifestation of rationality is seen as sufficient to explain states or acts quite independently of questions regarding causal origin. Within this model, the greater the degree of rationality we can detect, the more intelligible the sequence will be. Where there is a breakdown in rationality, as in cases of weakness of will or self-deception, there is a corresponding breakdown in our ability to make the action or belief intelligible.
 The equation of the justificatory and explanatory roles of rationality links can be found within two quite distinct pictures. One account views the attribution of rationality from a third-person perspective. Attributing intentional states to others, and by analogy to ourselves, is a matter of applying to them a certain pattern of interpretation. We ascribe whatever states enable us to make sense of their behaviour as conforming to a rational pattern. Such a mode of interpretation is commonly an ex post facto affair, although it can also aid prediction. Our interpretations are never definitive or closed; they are always open to revision and modification in the light of future behaviour, if such revisions enable the person as a whole to appear more rational. Where we fail to detect rationality, we give up the project of seeing the system as rational and instead seek explanations of a mechanistic kind.
 The other picture is resolutely first-personal, linked to the claimed perspectival character of rationalizing explanations: we make an action, for example, intelligible by adopting the agent's perspective on it. Understanding is a reconstruction of actual or possible decision-making. It is from such a first-personal perspective that goals are seen as desirable and courses of action as appropriate to the situation. The standpoint of an agent deciding how to act is not that of an observer predicting the next move. When I find something desirable and judge an act an appropriate route to achieving it, I conclude that a certain course of action should be taken. This is different from my reflecting on my past behaviour and concluding that I will do 'X' in the future.
 For many writers, nonetheless, the justificatory and explanatory roles of reasons cannot simply be equated. To do so fails to distinguish genuine cases in which I believe or act because of those reasons from mere rationalizations. I may have beliefs from which your innocence could be deduced, but nonetheless come to believe you are innocent because you have blue eyes. I may have intentional states that provide altruistic reasons for contributing to charity, but nonetheless contribute out of a desire to earn someone's good opinion. In both these cases, even though my belief could be shown to be rational in the light of other beliefs, and my action in the light of my desires, none of these rationalizing links would form part of a valid explanation of the phenomena concerned. Moreover, there are cases of weakness of will, as when I continue to smoke although I judge it would be better to abstain. This suggests that the mere availability of a pattern of reasoning cannot of itself be sufficient to explain why an action or belief occurred.
 If we resist the equation of the justificatory and explanatory work of reason-giving, we must look for a connection between reasons and action/belief that is present in cases where those reasons genuinely explain and absent in cases of mere rationalization (present when I act on my better judgement, and absent when I fail to). The classical suggestion in this context is causality. In cases of genuine explanation, the reason-providing intentional states cause the beliefs/actions for which they also provide reasons. This position seems to find support from the conditionals and counterfactuals that our reason-giving explanations support, which parallel those in other cases of causal explanation. Imagine that I am approaching the Sky Dome's executive suites looking for the cafeteria. If my walking in that direction is explained simply by my desire to find the café, then in the absence of such a desire I would not have walked in the direction that led toward the executive suites. In general terms, where my reasons explain my action, those reasons were, in the circumstances, necessary for the action and, at least, made its occurrence probable. These conditional links can be explained if we accept that the reason-giving link is also a causal one; any alternative account would need to accommodate them as well.
 The defence of the view that reasons are causes faces an objection that itself seems arbitrary: why does explanation require citing the cause of the cause of a phenomenon, but not the next link in the chain of causes? Perhaps what is not generally true of explanation is true only of mentalistic explanation: only in giving the latter type are we obliged to give the cause of a cause. However, this too seems arbitrary. What is the difference between mentalistic and non-mentalistic explanation that would justify imposing more stringent restrictions on the former? The same argument applies to non-cognitive mental states, such as sensations or emotions. Opponents of behaviourism sometimes reply that mental states can be observed: each of us, through 'introspection', can observe at least some mental states, namely our own ~ at least those of which we are conscious.
 To this point, the distinction between reasons and causes is motivated in good part by a desire to separate the rational from the natural order. Historically it traces back to Aristotle's similar (but not identical) distinction between final and efficient causes ~ that which proves responsible for an effect. Recently, the contrast has been drawn primarily in the domain of actions and, secondarily, elsewhere.
 Many who have insisted on distinguishing reasons from causes have failed to distinguish two kinds of reason. Consider my reason for sending a letter by express mail. Asked why I did so, I might say I wanted to get it there in a day, or simply, to get it there in a day. Strictly, the reason is expressed by 'to get it there in a day'. But this expresses my reason only because I am suitably motivated: I am in a reason state, wanting to get the letter there in a day. It is reason states ~ especially wants, beliefs and intentions ~ and not reasons strictly so called, that are candidates for causes. The latter are abstract contents of propositional attitudes; the former are psychological elements that play motivational roles.
 If reason states can motivate, however, why (apart from confusing them with reasons proper) deny that they are causes? For one thing, they are not events, at least in the usual sense entailing change; they are dispositional states (this contrasts them with occurrences, but does not imply that they admit of dispositional analysis). It has also seemed to those who deny that reasons are causes that the former justify as well as explain the actions for which they are reasons, whereas the role of causes is at most to explain. Another claim is that the relation between reasons (and here reason states are often cited explicitly) and the actions they explain is non-contingent, whereas the relation of causes to their effects is contingent. The 'logical connection argument' proceeds from this claim to the conclusion that reasons are not causes.
 These arguments are inconclusive. First, even if causes are events, sustaining causation may explain, as where the standing of a broken table is explained by the support of stacked boards replacing its missing legs. Second, the 'because' in 'I sent it by express because I wanted to get it there in a day' is in some sense causal ~ otherwise the explanation would at best rationalize, rather than explain, the action. And third, even if a non-contingent connection can be established between, say, my wanting something and the action it explains, there are close causal analogues, such as the connection between bringing a magnet to iron filings and their gravitating to it: this is, after all, a 'definitive' connection, expressing part of what it is to be magnetic, yet the magnet causes the filings to move.
 There is, then, a clear distinction between reasons proper and causes, and even between reason states and event causes: but the distinction cannot be used to show that the relation between reasons and the actions they justify is in no way causal. Precisely parallel points hold in the epistemic domain (and indeed for anything that similarly admits of justification and explanation by reasons). Suppose my reason for believing that you received my letter today is that I sent it by express yesterday. My reason, strictly speaking, is that I sent it by express yesterday; my reason state is my believing this. Arguably, my reason justifies the further proposition I believe for which it is my reason, and my reason state ~ my evidence belief ~ both explains and justifies my belief that you received the letter today. I can say that what justifies that belief is the fact that I sent the letter by express yesterday, but this statement expresses my believing that evidence proposition; if I do not believe it, my belief that you received the letter is not justified. It is not justified by the mere truth of the proposition (and can be justified even if that proposition is false).
 Similarly, there are, both for belief and for action, at least five main kinds of reason: (1) normative reasons, reasons (objective grounds) there are to believe (say, to believe that there is a greenhouse effect); (2) person-relative normative reasons, reasons for, say, me to believe; (3) subjective reasons, reasons I have to believe; (4) explanatory reasons, reasons why I believe; and (5) motivating reasons, reasons for which I believe. Reasons of kinds (1) and (2) are propositions and thus not serious candidates to be causal factors. The states corresponding to (3) may or may not be causal elements. Reasons why, kind (4), are always (sustaining) explainers, though not necessarily even prima facie justifiers, since a belief can be causally sustained by factors with no evidential value. Motivating reasons are both explanatory and possess whatever minimal prima facie justificatory power (if any) a reason must have to be a basis of belief.
 Current discussion of the reasons-causes issue has shifted from the question whether reason states can causally explain to the perhaps deeper questions whether they can justify without so explaining, and what kind of causal chain connects reason states with the actions and beliefs they explain. 'Reliabilists' tend to take a belief as justified by a reason only if it is held at least in part for that reason, in a sense implying, but not entailed by, being causally based on that reason. 'Internalists' often deny this, perhaps thinking we lack internal access to the relevant causal connections. But internalists need internal access only to what justifies ~ say, the reason state ~ and not to the (perhaps quite complex) causal relations it bears to the belief it justifies, in virtue of which it does so. Many questions also remain concerning the very nature of causation, reason-hood, explanation and justification.
 Nevertheless, for most causal theorists, the radical separation of the causal and rationalizing roles of reason-giving explanations is unsatisfactory. For such theorists, where we can legitimately point to an agent's reasons to explain a certain belief or action, those features of the agent's intentional states that render the belief or action reasonable must be causally relevant in explaining how the agent came to believe or act in the way they rationalize. One way of putting this requirement is that reason-giving states not only cause but also causally explain their explananda.
 The philosophy of language is the general attempt to understand the components of a working language, the relationship the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of 'semiotics' into 'syntax', 'semantics', and 'pragmatics'. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language. It also mingles with the metaphysics of truth and the relationship between sign and object. Much philosophy, especially in the 20th century, has been informed by the belief that the philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind, and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problem of 'logical form' and the basis of the division between 'syntax' and 'semantics', as well as problems of understanding the number and nature of specifically semantic relationships such as 'meaning', 'reference', 'predication', and 'quantification'. Pragmatics includes the theory of 'speech acts', while problems of 'rule following' and the 'indeterminacy of translation' infect the philosophies of both pragmatics and semantics.
 There is no denying that the language of thought hypothesis has a compelling neatness about it. A thought is depicted as a structure of internal representational elements, combined in a lawful way, and playing a certain functional role in an internal processing economy.
 In the philosophy of mind, an adequate conception of mind and its relationship to matter should explain how it is possible for mental events to interact with the rest of the world, and in particular to have a causal influence on the physical world. It is easy to think that this must be impossible: it takes a physical cause to have a physical effect. Yet everyday experience and theory alike show that mental causation is commonplace; consciousness could hardly have evolved if it had had no uses. In general, it is a measure of the success of any theory of mind and body that it should enable us to avoid 'epiphenomenalism'.
 In the same vein, the Scottish philosopher, historian and essayist David Hume (1711-76) said that the earlier of two causally related events is always the cause, and the later the effect. However, there are a number of objections to using the earlier-later 'arrow of time' to analyse the directional 'arrow of causation'. For one thing, it seems in principle possible for some causes and effects to be simultaneous. More importantly, the idea that time is directed from 'earlier' to 'later' itself stands in need of philosophical explanation ~ and one of the most popular explanations is that the idea of 'movement' from earlier to later depends on the fact that cause-effect pairs always have a given orientation in time. Even so, if we adopt such a 'causal theory of the arrow of time', and explain 'earlier' as the direction in which causes lie and 'later' as the direction of effects, then we will clearly need some account of the direction of causality which does not itself assume the direction of time.
 A number of such accounts have been proposed. The American philosopher David Lewis (1941-2001) argued that the asymmetry of causation derives from an 'asymmetry of over-determination'. The over-determination of present events by past events ~ consider a person who dies after simultaneously being shot and struck by lightning ~ is a very rare occurrence. By contrast, the multiple 'over-determination' of present events by future events is absolutely normal. This is because the future, unlike the past, will always contain multiple traces of any present event. To use Lewis's example, when the president presses the red button in the White House, the future effects include not only the dispatch of nuclear missiles, but also his fingerprint on the button, his trembling, the further depletion of his gin and tonic, the recording of the button's click on tape, the emission of light from the passage of the signal current, and so on, and on, and on.
 Lewis relates this asymmetry of over-determination to the asymmetry of causation as follows. If we suppose the cause of a given effect to have been absent, then the effect would have been absent too, since (apart from freaks like the lightning-shooting case) there would not be any other causes left to 'fix' the effect. By contrast, if we suppose a given effect of some cause to have been absent, this does not imply the cause would have been absent, for there are still all the other traces left to 'fix' the cause. Lewis argues that these counterfactual considerations suffice to show why causes are different from effects.
 Other philosophers appeal to a probabilistic variant of Lewis's asymmetry. Following Reichenbach (1956), they note that the different causes of any given type of effect are normally probabilistically independent of each other; by contrast, the different effects of any given type of cause are normally probabilistically correlated. For example, the fact that both lung cancer and nicotine-stained fingers can result from smoking implies that lung cancer is more likely among people with nicotine-stained fingers. So this account distinguishes effects from causes by the fact that the former, but not the latter, are probabilistically dependent on each other.
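 The probabilistic point can be illustrated with a minimal simulation. The sketch below is not from any of the authors discussed; the probabilities are invented purely for illustration. A common cause ("smoking") produces two effects ("lung cancer" and "stained fingers"), and the simulation shows that the two effects end up probabilistically correlated: conditioning on one effect raises the probability of the other.

```python
import random

random.seed(42)
N = 200_000  # number of simulated individuals

cancer_total = stained_total = both = 0
for _ in range(N):
    # Common cause: whether the person smokes (illustrative probability).
    smokes = random.random() < 0.3
    # Two effects of the common cause, with made-up conditional probabilities.
    cancer = random.random() < (0.20 if smokes else 0.02)
    stained = random.random() < (0.60 if smokes else 0.05)
    cancer_total += cancer
    stained_total += stained
    both += cancer and stained

p_cancer = cancer_total / N
p_cancer_given_stained = both / stained_total

print(f"P(cancer)           = {p_cancer:.3f}")
print(f"P(cancer | stained) = {p_cancer_given_stained:.3f}")

# Reichenbach's asymmetry: the effects of a common cause are correlated,
# so learning one effect (stained fingers) raises the probability of the other.
assert p_cancer_given_stained > p_cancer
```

Running the simulation, the conditional probability of cancer given stained fingers comes out well above the unconditional probability, exactly the dependence between joint effects that the text describes; the different causes of a single effect, by contrast, would show no such correlation.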
 Even so, there remains the 'directedness' or 'aboutness' of many, if not all, conscious states ~ their intentionality. The term was used by the 'scholastics', but revived in the 19th century by the German philosopher and psychologist Franz Clemens Brentano (1838-1917). Our beliefs, thoughts, wishes, dreams, and desires are about things. Equally, the words we use to express these beliefs and other mental states are about things. The problem of intentionality is that of understanding the relation obtaining between a mental state, or its expression, and the things it is about. A number of peculiarities attend this relation. First, if I stand in some relation to a chair, for instance by sitting on it, then both it and I must exist. But while mostly one thinks about things that exist, sometimes (although this way of putting it has its problems) one has beliefs, hopes, and fears about things that do not, as when the child expects Santa Claus or fears Zeus. Secondly, if I sit on the chair and the chair is the oldest antique in Toronto, then I sit on the oldest antique in Toronto. But if I plan to avoid the mad axeman, and the mad axeman is in fact my friendly postman, I do not therefore plan to avoid my friendly postman. Intentional relations seem to depend on how the object is specified, or, as Frege put it, on the mode of presentation of the object. This makes them quite unlike the relations whose logic we can understand by means of the predicate calculus, and this peculiarity has troubled many philosophers, notably the American philosopher Willard van Orman Quine (1908-2000), who declared them unfit for use in serious science. More widespread is the view that since the concept is indispensable, we must either declare serious science unable to deal with the central features of the mind, or explain how serious science may include intentionality.
On one approach, since the fears and beliefs we communicate have a two-fold aspect, involving both the objects referred to and the mode of presentation under which they are thought of, intentionality becomes a feature of language, rather than a metaphysical or ontological peculiarity of the mental world.
 The attitudes are philosophically puzzling because it is not easy to see how the intentionality of the attitudes fits with another conception of them, as local mental phenomena.
 Beliefs, desires, hopes, and fears seem to be located in the heads or minds of the people that have them. Our attitudes are accessible to us through 'introspection'. We think of attitudes as being caused at certain times by events that impinge on the subject's body, specifically by perceptual events, such as reading a newspaper or seeing a picture of an ice-cream cone. Still, the psychological level of description carries with it a mode of explanation which 'has no echo in physical theory'. This thought led Donald Davidson, a major influence on the philosophy of mind and language in the latter half of the 20th century, to introduce the position known as 'anomalous monism' in the philosophy of mind, instigating a vigorous debate over the relation between mental and physical descriptions of persons, and the possibility of genuine explanation of events in terms of psychological properties. Following but enlarging upon the work of Quine on language, Davidson concentrated upon the figure of the 'radical interpreter', arguing that the method of interpreting a language could be thought of as constructing a 'truth definition' in the style of Alfred Tarski (1901-83), in which the systematic contribution of elements of sentences to their overall meaning is laid bare. The construction takes place within a generally holistic theory of knowledge and meaning. A radical interpreter can tell when a subject holds a sentence true and, using the principle of charity, ends up making an assignment of truth conditions to individual sentences. Although Davidson is a defender of the doctrines of the 'indeterminacy of radical translation' and the 'inscrutability of reference', his approach has seemed to many to offer some hope of identifying meaning as a respectable notion, even within a broadly extensional approach to language.
Davidson is also known for his rejection of the idea of a conceptual scheme, thought of as something peculiar to one language or one way of looking at the world, arguing that where the possibility of translation stops, so does the coherence of the idea that there is anything to translate.
 These attitudes can in turn cause other mental phenomena, and eventually the observable behaviour of the subject. Seeing the picture of an ice-cream cone leads to a desire for one, which leads me to forget the meeting I am supposed to attend and to walk to the ice-cream shop instead. All of this seems to require that attitudes be states and activities that are localized in the subject.
 But the phenomenon of intentionality suggests that the attitudes are essentially relational in nature: they involve relations to the propositions at which they are directed and to the objects they are about. These objects may be quite remote from the minds of subjects. An attitude seems to be individuated by the agent, the type of attitude (belief, desire, and so forth), and the proposition at which it is directed. It seems essential to the attitude, as reported, that it is directed toward that particular proposition.
 Even so, the formulation ‘actions are doings that are intentional under some description’ reflects the minimizing view of the individuation of actions. The idea is that for what I did to count as an action, there must be a description ‘V-ing’ of what I did, such that I V-ed intentionally. Still, since (on the minimizing view) my causing the modification was the same event as my greeting you, and I greeted you intentionally, this event was an action. Or, suppose I did not know it was you on the phone, and thought it was my spouse. Still, I would have said ‘Good morning’ intentionally, and that suffices for this event, however described, to be an action. My snoring and involuntary coughing, by contrast, are not intentional under any description, and so are not actions.
 A standard confusion in the philosophical literature is to suppose that there is some special connection between intentionality-with-a-t and intensionality-with-an-s; some authors even allege that the two are identical. But, in fact, the notions are quite distinct. Intentionality-with-a-t is that property of the mind by which it is directed at, or about, objects and states of affairs in the world. Intensionality-with-an-s is that phenomenon by which sentences fail to satisfy certain tests for extensionality.
 There are many standard tests for extensionality, but the two most common in the literature are substitutability of identicals and existential inference. The principle of substitutability states that co-referring expressions can be substituted for one another without changing the truth value of the statement in which the substitution is made. The principle of existential inference states that any statement containing a referring expression implies the existence of the object referred to by that expression. But there are statements that do not satisfy these principles; such statements are said to be intensional with respect to these tests for extensionality. For example, from the statement:
  (1) The sheriff believes that Mr Howard is an honest man
And:
  (2) Mr Howard is identical with the notorious outlaw, Jesse James
It does not follow that:
  (3) The sheriff believes that the notorious outlaw, Jesse James, is an honest man.
This is a failure of the substitutability of identicals.
From the fact:
  (4) Billy believes that Santa Claus will come on Christmas Eve
It does not follow that:
  (5) There is some ‘x’ such that Billy believes ‘x’ will come on Christmas Eve.
This is a failure of existential inference. Thus, statements (1) and (4) fail tests for extensionality and hence are said to be intensional with respect to these tests.
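 The two failures can be made vivid with a small, purely illustrative sketch (all names and data structures here are hypothetical, not drawn from any actual theory): if de dicto belief is modelled as a relation to sentences, rather than to the objects the embedded names refer to, both extensionality tests fail exactly as in examples (1)-(5).

```python
# A toy model of belief contexts. Reference maps names to individuals
# (or to None when the name has no referent); beliefs are stored as
# sentence strings, not as relations to referents.
reference = {
    "Mr Howard": "person_42",
    "Jesse James": "person_42",   # co-referring: one and the same man
    "Santa Claus": None,          # no referent at all
}

sheriff_beliefs = {"Mr Howard is an honest man"}
billy_beliefs = {"Santa Claus will come on Christmas Eve"}

# Substitutability of identicals fails inside 'believes that ...':
# the names co-refer, yet only one sentence is in the belief set.
assert reference["Mr Howard"] == reference["Jesse James"]
assert "Mr Howard is an honest man" in sheriff_beliefs
assert "Jesse James is an honest man" not in sheriff_beliefs

# Existential inference fails too: Billy has the belief, yet there is
# no object x to which the embedded name refers.
assert "Santa Claus will come on Christmas Eve" in billy_beliefs
assert reference["Santa Claus"] is None
```

Storing beliefs as sentences rather than as pairs of referents is just one way of making the opacity of belief contexts concrete; the point is only that belief attributions track how things are represented, not what they refer to.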
 A proper understanding of intentionality is crucial to the study of a number of topics in cognitive science, including perception, imagery, and consciousness. The term itself, intentionality, can be misleading, in suggesting intentional action, doing something intentionally, with a certain aim or purpose. In cognitive science, the term is used in a different, more technical sense. Intentionality involves reference or aboutness or some similar relation to something having what the scholastics of the Middle Ages called intentional inexistence.
 When Ruth thinks of Wally K. as a cognitive scientist, the intentional object of her thought is Wally K., and the intentional content of her thought is that Wally K. is a cognitive scientist. She has a mental representation of him as a cognitive scientist. What Ruth thinks about has intentional inexistence in the sense that her thoughts may be wrong and she can have thoughts about things that do not even exist. She may think incorrectly that Wally K. is a computer scientist or even that Santa Claus is a computer scientist.
 If you treat intentionality as a relation to an intentional object, you must remember that it is not a real relation in the way that kissing or touching is. A real relation holds between two existing things independently of how they are conceived. When a woman kisses a man and the man she kisses is bald, the woman kisses a bald man. But Ruth can think about a man who happens to be bald without thinking of him as bald: she may represent him as hairy. Similarly, Ruth can think of someone who does not exist but cannot kiss or touch someone who does not exist.
 Looking for something is an example of an intentional activity in this technical sense of intentional, as well as in the more ordinary sense having to do with what you are aiming at. You sometimes look for things that turn out not to exist. Ponce de Leon searched in Florida for the fountain of youth. But there was no such thing to be found.
 There can be intentionality without representation. For example, needing something is an intentional phenomenon. The grass in my lawn can need water even though it is not going to get any and even if there is no water to give it. But the grass does not represent the water it needs.
 Other examples of intentional phenomena include spoken and written language, gestures, representational paintings, photographs, films, road maps, and traffic lights. It is controversial how these last instances of intentionality are related to the intentionality of thoughts and other cognitive states.
 Nonexistent intentional objects like Santa Claus and the fountain of youth raise difficult logical puzzles if taken seriously as objects. What properties do they have? What sorts of properties does Santa Claus have, as he is conceived by a certain child? Perhaps he is fat, lives at the North Pole, dresses in red, drives a sleigh, brings presents to children at Christmas time, and has at least eight reindeer. But intentional objects cannot always have all the properties which they are envisioned as having, because, as in the case of the child’s conception of Santa Claus, a nonexistent intentional object may be envisioned as existent, and it is inconsistent to suppose that something could be both existent and nonexistent.
 You must resist the temptation to try to resolve such problems by identifying intentional objects with mental objects such as ideas or mental representations. That identification does not work. The child does have an idea of Santa Claus, and Ponce de Leon had an idea of the fountain of youth. But the child does not believe that his idea of Santa Claus lives at the North Pole. Nor was Ponce de Leon looking for a mental representation of the fountain of youth. He already had a mental representation: He was looking for the [intentional] object of that representation.
 It is tempting to say that a nonexistent intentional object is a merely possible object, but this is not a completely general account, because some intentional objects are not even possible. Someone may try to find the greatest prime number without realizing that there is no such thing. The intentional object of the attempt, the greatest prime number, is not a possible object. There is no possible world in which it exists.
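 That the greatest prime is not even a possible object rests on Euclid’s classical argument, which the text presupposes and which can be set out as follows:

```latex
\textbf{Claim.} There is no greatest prime number.

\textbf{Proof.} Suppose $p_1, p_2, \dots, p_n$ were all the primes, with
$p_n$ the greatest. Let $N = p_1 p_2 \cdots p_n + 1$. For each $i$,
dividing $N$ by $p_i$ leaves remainder $1$, so no $p_i$ divides $N$.
But every integer $N > 1$ has some prime divisor, which must therefore
lie outside the list, a contradiction. Since the argument uses only
necessary truths of arithmetic, there is no possible world containing
a greatest prime. \qed
```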
 One controversy concerning intentionality concerns how to provide a logically adequate account of talk of intentional objects. That is a controversy in philosophical logic and may not be especially important to the rest of cognitive science.
 The moral is that you have to take talk of nonexistent intentional objects with a grain of salt, without being too serious about the notion that there really are such things. Yet you have to be careful not to conclude that the child pondering Santa Claus is not really thinking about anything, or that Ponce de Leon was not really looking for anything as he wandered through Florida.
 To what extent does cognition involve intentionality? In one view, everything cognitive is intentional: intentional inexistence is the mark of the mental, according to the German philosopher and psychologist Franz Clemens Brentano (1838-1917), who may be regarded as the founder of the phenomenological movement in philosophy. His major work was ‘Psychologie vom empirischen Standpunkt’ (1874, trans. as ‘Psychology from an Empirical Standpoint’, 1973), which rehabilitates the medieval conception of intentionality as the fundamental mark of the mental. He also wrote on theological matters and on moral philosophy, where the directedness of emotions allows a notion of their correct and incorrect objects, thus permitting him a notion of moral objectivity.
 Clearly, many feelings recognized in folk psychology have intentionality and are not simply raw feels. A child hopes that Santa Claus will bring a big red fire truck and fears that Santa Claus will bring a lump of coal instead. The child is happy that Christmas is tomorrow and unhappy that he hasn’t been a good little boy for the past few weeks. A child’s hopes, fears, happiness, and unhappiness have intentional objects and intentional content.
 It is unclear whether all feelings or emotions have intentional content in this way. Do feelings of ‘free-floating’ anxiety and depression have no intentional content, so that you are not anxious about anything or depressed about anything, but just depressed? Or do such states have a very general nonspecific content, so that you are anxious about things in general or depressed about things in general, just not anxious or depressed about something specific? It is hard to say what turns on the answer to these questions, however.
 Perceptual experience has intentionality insofar as it presents or represents a certain environment. How perceptual experience presents or represents things may be accurate or inaccurate. Things may or may not be as they seem to be. Sometimes what you see or seem to see does not really exist, as when Shakespeare’s Macbeth hallucinated a bloody dagger.
 The intentional content of perceptual experience is perspectival, representing how things are from here, or even representing how things are as perceived from this place. The content of the experience may even be in part about the experience itself: what is perceived is perhaps seen as causing that very experience.
 The dagger is an intentional object of Macbeth’s perceptual experience. That’s what he is or seems to be aware of. You may be tempted to think that Macbeth must be aware of a mental image of a dagger, but that is like thinking that Ponce de Leon must have been trying to find an idea of the fountain of youth.
 Mental imagery has intentionality. What you image or imagine is the intentional object of your imagining or imaging. When you picture Lucy smiling, Lucy’s smile is what you imagine. Theories of imagery offer accounts of the structure of the inner representation involved in one’s imagery and the processes that operate on that structure. But what you imagine is not that inner mental representation; you imagine Lucy’s smile.
 The term ‘mental image’ is ambiguous. Sometimes it refers to the imagining of a thing, picturing Lucy smiling. Sometimes it refers to the hypothetical inner representation formed when something is imagined, an inner mental picture or description of Lucy smiling. It is important not to confuse these things. Otherwise, the substantial claim that imagination involves the construction of inner pictures, or of mental representations with specific structures, will be conflated with the obvious fact that you are capable of imaging various things.
 Similarly, it is important to distinguish imaging something revolving from actually revolving a mental representation in your mind or head: It is important to distinguish imagining scanning a scene from scanning an inner mental representation.
 It is controversial what sort of introspective awareness you have of your inner mental representations. Matters are only confused through failure to distinguish the various senses of mental image. You have something that might be called ‘introspective’ awareness of mental images in the first sense: Namely, the intentional object of your thoughts. You often know what you are thinking about, imagining, perceiving, and so forth. It is unclear whether you have any corresponding access to the mental representations, if any, underlying your thinking, imagining, perceiving, and so forth.
 The ascendancy of cognitive approaches to mind has brought with it a renewed interest in imagery. Two problems concerning representation have held centre stage in these discussions. The first problem is of a piece with older ontological worries over the status of so-called ‘pictures in the mind’. Proponents of imagistic theories often talk in ways that seem to presuppose that images are objects, like physical objects, that can be rotated, scanned, approached, enlarged, and so forth. Yet it is hard to make sense of such reification, given that mental images have no mass, physical size, shape, or location. The second problem concerning imagery has close ties to debates over the adequacy of the (digital) computer model of mind. The reason for this is that images are typically identified with pictures and thus allied with analogue representation. So it is held that if we employ images in cognition, the claim that all mental representation is propositional or sentential, i.e., digital, is false. In turn, if mental processing involves the use of non-digital, pictorial representations, our minds and cognitive activities cannot be understood within the constraints of the standard computer model. Although seemingly separate matters, the issue of ontological reification and the issue of analogue representation merge for those who assume that analogue representations function via their sharing, or having features analogous to, those they represent. Most proponents of imagistic explanation allow that their theories would be unsustainable if they did require that there literally be items in the mind that possessed spatial dimensions and other physical properties. They have offered various proposals attempting to show how it is possible to cash in on talk of using or manipulating images without falling into the trap of reification.
In any case, it should be clear that questions of reification also pose a problem for proponents of sentential models of mind, who claim that we think in words. For the ontological quandary of giving a satisfactory account of how there can be pictures or maps in the head is at root no different from the problem of how there can be words and sentences in the head. And if a satisfactory answer is available to the latter, it should be adaptable to the former.
 A good deal of the debate over imagery has been obscured by problematic accounts of the basis of the ‘stands for’ relation and by unsupported assumptions about the nature of, function of, and distinctions among linguistic and non-linguistic forms of representation. For example, it is common for both proponents and critics of imagery to identify images with pictures or picture-like items, and then take it for granted that pictorial representation can be explained in terms of resemblance or some other notion of one-to-one correspondence, or to assume that since pictures are like their referents they require no interpretation. But it is highly questionable whether such accounts are adequate for dealing with our everyday use of pictures (maps, diagrams, and so forth) in cognition. The difficulties involved with this older understanding of iconic representation become more acute when applied to imagistic or mental pictures.
 There is something problematic in the very way the imagery controversy, along with other debates over mind and cognition, has been set up: as a choice between whether humans employ one or two kinds of representational systems. We know that humans make use of an enormous number of different types of [external] representational systems. These systems differ in form and structure along a variety of syntactic, semantic, and other dimensions. There appears to be no sense in which these various and diverse systems can be divided into two well-specified kinds. Nor does it seem possible to reduce, decode, or capture the cognitive content of all of these forms of representation in sentential symbols. Any adequate theory of mind is going to have to deal with the fact that many more than two types of representation are employed in our cognitive activities, rather than assume that yet-to-be-discovered modes of internal representation must fit neatly into one or two pre-ordained categories.
 Appeals to representations play a prominent role in contemporary work in the study of mind. With some justification, most attention has been focussed on language or language-like symbol systems. Even when some non-linguistic systems are countenanced, they tend to be given second-class status. This practice, however, has had a rather constricting effect on our understanding of human cognitive activities. It has, for example, resulted in a lack of serious examination of the function of the arts in organizing and reorganizing our world, and the cognitive uses of metaphor, expression, exemplification, and the like are typically ignored. Moreover, recognizing that a much broader range of representational systems plays a part in cognition calls a number of philosophical presuppositions and doctrines in the study of mind into question: (1) claims about the uniqueness of representation as the mark of the mental; (2) the identification of contentful or informational states with the sentential propositional attitudes; (3) the idea that all thought can be expressed in language; (4) the assumption that compositional accounts of the structure of language provide the only model we have for the systematic or productive nature of representational systems in general; and (5) the tendency to construe all cognitive transitions among representations as cases of inference (based on syntactic or logical form).
 Thoughts, in having contents, possess semantic properties. A central assumption in much current philosophy of mind is that propositional attitudes, like beliefs and desires, play a causal or explanatory role in mediating between perception and behaviour: in terms of reasons, we see ourselves and each other as rational, purposive creatures, fitting our beliefs to the world as we perceive it and seeking to obtain what we desire in the light of them. Reason-giving explanations can be offered not only for actions and beliefs, which will receive most attention here, but also for desires, intentions, hopes, fears, and angers; a network of rationalizing links is part of the individuating characteristics of this range of psychological states and the intentional acts they explain. The reason-giving relation is also normative: to give someone’s reason for believing or acting is to make clear how, given the person’s other psychological states, the belief or action is justified or appropriate. Paradigmatically, the psychological states that provide an agent with reasons are intentional states individuated in terms of their propositional content. This causal-explanatory conception of propositional attitudes, however, casts little light on their representational aspects. The causal-explanatory role of beliefs and desires depends on how they interact with each other and with subsequent actions. But the representational contents of such states can often involve referential relations to external entities with which thinkers are causally quite unconnected. These referential relations thus seem extraneous to the causal-explanatory roles of mental states. It follows that the causal-explanatory conception of mental states must somehow be amplified or supplemented if it is to account for representational content. Mental events, states, or processes with content include seeing that the door is shut, believing you are being followed, and calculating the square root of two. A mental state with content can fail to refer, but there always exists a specific condition for a state with content to refer to certain things. When the state has a correctness or fulfilment condition, its correctness is determined by whether its referents have the properties the content specifies for them.
 In general, we cannot understand a person’s reasons for acting as he does without knowing the array of emotions and sensations to which he is subject, what he remembers and what he has forgotten, and how these reach beyond the confines of minimal rationality. Even content-involving perceptual states, which play a fundamental role in individuating content, cannot be understood purely in terms relating to minimal rationality. Contents are normally specified by ‘that . . .’ clauses, and it is natural to suppose that a content has the same kind of sequential and hierarchical structure as the sentence that specifies it. This supposition would be widely accepted for conceptual content. It is, however, a substantive thesis that all content is conceptual. One way of treating one sort of perceptual content is to regard the content as determined by a spatial type, the type under which the region of space around the perceiver must fall if the experience with that content is to represent the environment correctly. Supporters of the thesis that all content is conceptual will say that the legitimacy of using these spatial types in giving the content of experience does not undermine the thesis: the spatial type is just a way of capturing what can equally be captured by conceptual components such as ‘that distance’ or ‘that direction’, where these demonstratives are made available by the perception in question. Defenders of non-conceptual content will respond that these demonstratives themselves cannot be elucidated without mentioning the spatial types, which lack sentence-like structure.
 Beliefs are true or false. If, as representationalists have it, beliefs are relations to mental representations, then beliefs must be relations to representations that have truth values among their semantic properties. Sentences, at least declaratives, are exactly the kind of representations that have truth values, in virtue of denoting and attributing. So, if mental representations are as sententialism says, we can readily account for the truth-valuedness of mental representations.
 Beliefs serve a function within the mental economy. They play a central part in reasoning and thereby contribute to the control of behaviour, a point that has been elaborated and defended by a number of philosophers and psychologists. A set of beliefs, desires, and actions, and also perceptions, intentions, and decisions, must fit together in various ways. If they do not, in the extreme case they fail to constitute a mind at all: no rationality, no agent. This core notion of rationality in the philosophy of mind thus concerns a cluster of personal identity conditions, that is, holistic coherence requirements upon the system of elements comprising a person’s mind. As such, functionalism about content and meaning appears to lead to holism. In general, transitions between mental states, and between mental states and behaviour, depend on the contents of the mental states themselves. If I believe that sharks are dangerous, I will infer from sharks being in the water to the conclusion that people should not be swimming. Suppose I first think that sharks are dangerous, but then change my mind, coming to think that sharks are not dangerous. On a holistic view, the content that the first belief affirms cannot be the same as the content that the second belief denies, because the transition relations (e.g., the inference from sharks being in the water to what people should do) that constitute the contents changed when I changed my mind. A natural functionalist reply is to say that some transitions are relevant to content individuation and others are not, but functionalists have not told us how to draw that distinction. Appeal to a traditional analytic/synthetic distinction clearly would not do. For example, ‘dog’ and ‘cat’ would have the same content on such a view. It could not be analytic that dogs bark or that cats meow, since we can imagine a non-barking breed of dog and a non-meowing breed of cat. If ‘Dogs are animals’ is analytic, so is ‘Cats are animals’. If ‘Cats are adult kittens’ is analytic, so is ‘Dogs are adult puppies’. Dogs are not cats, but then cats are not dogs. So a functionalist account will not find traditional analytic inference relations that will distinguish the meaning of ‘dog’ from the meaning of ‘cat’. Other functionalists accept holism for ‘narrow content’, attempting to accommodate intuitions about the stability of content by appealing to wide content.
 A person’s putative beliefs must mesh with the person’s desires and decisions, or else they cannot qualify as the individual’s beliefs; similarly for desires, decisions, and so forth. This is ‘agent-constitutive rationality’: that agents possess it is more than an empirical hypothesis. A related conception: to be rational (that is, reasonable, well-founded, not subject to epistemic criticism), a belief or decision must at least cohere with the rest of the person’s cognitive system (for instance, in terms of logical consistency and application of valid inference procedures). Rationality constraints, therefore, are key linkages among the cognitive, as distinct from qualitative, mental states.
 ‘Reason’ capitalizes on various semantic and evidential relations among antecedently held beliefs (and perhaps other attitudes) to generate new beliefs to which subsequent behaviour might be tuned. Apparently, reasoning is a process that attempts to secure new true beliefs by exploiting old [true] beliefs. By the lights of the representationalist, reasoning must be a process defined over mental representations. Sententialism tells us that the type of representation in play in reasoning is most likely sentential, even in the case of mental representation.
 The sentential theory also seems supported by the argument that the ability to think certain thoughts appears intrinsically connected with the ability to think certain others. For example, the ability to think that Walter hits Julie goes hand in hand with the ability to think that Julie hits Walter, but not with the ability to think that Toronto is overcrowded. Why is this? The ability to produce or understand certain sentences is intrinsically connected with the ability to produce or understand certain others. For example, there are no native speakers of English who know how to say ‘Walter hits Julie’ but who do not know how to say ‘Julie hits Walter’. Similarly, there are no native speakers who understand the former sentence but not the latter. These facts are easily explained if sentences have a syntactic and semantic structure. But if sentences are taken to be atomic, these facts are a complete mystery. What is true for sentences is true for thoughts, if thinking involves manipulating mental representations. If mental representations with a propositional content have a semantic and syntactic structure like that of sentences, it is no accident that one who is able to think that Walter hits Julie is thereby also able to think that Julie hits Walter. Furthermore, it is no accident that one who can think these thoughts need not thereby be able to think thoughts having different components, for example, the thought that Toronto is overcrowded. And what goes here for thought goes for belief and the other propositional attitudes.
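 The systematicity point can be sketched schematically (the representation and function here are hypothetical illustrations, not any theorist’s actual proposal): if thoughts are built compositionally from a predicate and arguments, any system able to form the thought hit(Walter, Julie) can form hit(Julie, Walter) by recombining the same components, but cannot thereby form overcrowded(Toronto).

```python
def thinkable(predicates, names, thought):
    """A thought (predicate, *arguments) is thinkable just in case
    every one of its components is possessed by the thinker."""
    pred, *args = thought
    return pred in predicates and all(a in names for a in args)

# A thinker possessing one predicate and two names.
predicates = {"hits"}
names = {"Walter", "Julie"}

assert thinkable(predicates, names, ("hits", "Walter", "Julie"))
# The permuted thought comes for free from the same components:
assert thinkable(predicates, names, ("hits", "Julie", "Walter"))
# But a thought built from different components is not thereby available:
assert not thinkable(predicates, names, ("overcrowded", "Toronto"))
```

On an atomic view, by contrast, each thought would be an unstructured unit, and nothing would predict that the first two go together while the third stands apart.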
 A traditional view of philosophical knowledge can be sketched by comparing and contrasting philosophical and scientific investigation, as follows. The two types of investigation differ both in their methods (the former is a priori, the latter a posteriori) and in the metaphysical status of their results (the former yields facts that are metaphysically necessary, the latter facts that are metaphysically contingent). Yet the two types of investigation resemble each other in that both, if successful, uncover new facts, and these facts, although expressed in language, are generally not about language (except for investigations in such specialized areas as the philosophy of language and empirical linguistics).
 This view of philosophical knowledge has considerable appeal. But it faces problems. First, the conclusions of some common philosophical arguments seem preposterous. Such positions as that it is no more reasonable to eat bread than arsenic (because it is only in the past that arsenic has poisoned people), or that one can never know one is not dreaming, may seem to go so far against common sense as to be for that reason unacceptable. Second, philosophical investigation does not lead to a consensus among philosophers. Philosophy, unlike the sciences, lacks an established body of generally-agreed-upon truths. Moreover, philosophy lacks an unequivocally applicable method of settling disagreements. (The qualifier ‘unequivocally applicable’ is to forestall the objection that philosophical disagreements are settled by the method of a priori argumentation: there is often unresolvable disagreement about which side has won a philosophical argument.)
 In the face of these and other considerations, various philosophical movements have repudiated the traditional view of philosophical knowledge. Thus, verificationism responds to the unresolvability of traditional philosophical disagreements by putting forth a criterion of literal meaningfulness: a statement is literally meaningful if and only if it is either analytic or empirically verifiable, where a statement is analytic if its truth is just a matter of definition. Traditional controversial philosophical views, such as that it is metaphysically impossible to have knowledge of the world outside one’s own mind, would count as neither analytic nor empirically verifiable, and so, by the lights of logical positivism, as meaningless, in the sense of being incapable of truth or falsity, and so not a possible object of cognition. This required a criterion of meaningfulness, and it was found in the idea of empirical verification. Verification or confirmation is not necessarily something that can be carried out by the person who entertains the sentence or hypothesis in question, or even by anyone at all at the stage of intellectual and technological development achieved at the time it is entertained. A sentence is cognitively meaningful if and only if it is in principle empirically verifiable or falsifiable.
 Anything which does not fulfil this criterion is declared literally meaningless. There is no significant 'cognitive' question as to its truth or falsity: It is not an appropriate object of enquiry. Moral, aesthetic, and other 'evaluative' sentences are held to be neither confirmable nor disconfirmable on empirical grounds, and so are cognitively meaningless. They are, at best, expressions of feeling or preference which are neither true nor false. But the positivists did not spend much time trying to show this in detail about the philosophy of the past. They were more concerned with developing a theory of meaning and of knowledge adequate to the understanding, and perhaps even the improvement, of science.
 The logical positivist conception of knowledge, in its original and purest form, sees human knowledge as a complex intellectual structure employed for the successful anticipation of future experience. It requires, on the one hand, a linguistic or conceptual framework in which to express what is to be categorized and predicted and, on the other, a factual element which provides that abstract form with content. This comes of holding that nothing anyone can understand, or intelligibly think to be so, could go beyond the possibility of human experience, and that the only reason anyone could have for believing anything must come, ultimately, from actual experience.
 The general project of the positivist theory of knowledge is to exhibit the structure, content, and basis of human knowledge in accordance with these empiricist principles. Since science is regarded as the repository of all genuine human knowledge, this becomes the task of exhibiting the structure, or as it was called, the 'logic', of science. The theory of knowledge thus becomes the philosophy of science. It has three major tasks: (1) to analyse the meaning of the statements of science exclusively in terms of observations or experiences in principle available to human beings; (2) to show how certain observations or experiences serve to confirm a given statement in the sense of making it more warranted or reasonable; (3) to show how non-empirical or a priori knowledge of the necessary truths of logic and mathematics is possible even though every matter of fact which can be intelligibly thought or known is empirically verifiable or falsifiable.
 1. The slogan 'the meaning of a statement is its method of verification' expresses the empirical verification theory of meaningfulness, according to which a sentence is cognitively meaningful if and only if it is empirically verifiable. It says in addition what the meaning of each sentence is: It is all those observations which would confirm or disconfirm the sentence. Sentences which would be verified or falsified by all the same observations are empirically equivalent, or have the same meaning.
 A sentence recording the result of a single observation is an observation or 'protocol' sentence. It can be conclusively verified or falsified on a single occasion. Every other meaningful statement is a 'hypothesis' which implies an indefinitely large number of observation sentences which together exhaust its meaning, though at no time will all of them have been verified or falsified. To give an 'analysis' of a statement of science is to show how it can be reduced in this way to nothing more than a complex combination of directly verifiable 'protocol' sentences.
 Verificationism is any view according to which the conditions of a sentence's or a thought's being meaningful or intelligible are equated with the conditions of its being verifiable or falsifiable. An explicit defence of the position would be a defence of the verifiability principle of meaningfulness. Implicit verificationism is often present in positions or arguments which do not defend that principle in general, but which reject suggestions to the effect that a certain sort of claim is unknowable or unconfirmable on the sole ground that it would therefore be meaningless or unintelligible. Only if meaningfulness or intelligibility is indeed a guarantee of knowability or confirmability is the position sound; if it is, nothing we understand could be unknowable or unconfirmable.
 2. The observations recorded in particular 'protocol' sentences are said to confirm those 'hypotheses' of which they are instances. The task of confirmation theory is therefore to define the notion of a confirming instance of a hypothesis, and to show how the occurrence of more and more such instances adds credibility or warrant to the hypothesis in question. A complete answer would involve a solution of the problem of induction: To explain how any past or present experience makes it reasonable to believe in something that has not yet been experienced.
 3. Logical and mathematical propositions, and other necessary truths, do not predict the course of future sense experience. They cannot be empirically confirmed or disconfirmed. But they are essential to science, and so must be accounted for. They are one and all 'analytic' in something like Kant's sense: True solely in virtue of the meaning of their constituent terms. They serve only to make explicit the contents of, and the logical relations among, the terms or concepts which make up the conceptual framework through which we interpret and predict experience. Our knowledge of such truths is simply knowledge of what is and what is not contained in the concepts we use.
 Experience can perhaps show that a given concept has no instances, or that it is not a useful concept for us to employ. But that would not show that what we understand to be included in that concept is not really included in it, or that it is not the concept we take it to be. Our knowledge of the constituents of, and the relations among, our concepts is therefore not dependent on experience: It is a priori. It is knowledge of what holds necessarily, and all necessary truths are 'analytic'; there is no synthetic a priori knowledge.
 The anti-metaphysical empiricism of logical positivism requires that there be no access to any facts beyond sense experience. The appeal to analyticity succeeds in accounting for knowledge of necessary truths only if analytic truths state no facts, and our knowledge of them does not require non-sensory awareness of matters of fact. The reduction of all the concepts of arithmetic, for example, to those of logic alone, as was taken to have been achieved in Whitehead and Russell's 'Principia Mathematica', showed that the truths of arithmetic could be derived from nothing more than definitions of their constituent terms and general logical laws. Frege would have called them 'analytic' for that reason alone. But for a complete account positivism would also have to show that general logical laws state no facts.
 Under the influence of their reading of Wittgenstein's 'Tractatus Logico-Philosophicus', the positivists regarded all necessary, and therefore all analytic, truths as 'tautologies'. They do not state relations holding independently of us within an objective domain of concepts. Their truth is 'purely formal': They are completely 'empty' and 'devoid of factual content'. They are to be understood as made true solely by our decisions to think and speak in one way rather than another, as somehow true 'by convention'. A priori knowledge of them is in this way held to be compatible with there being no non-sensory access to a world of things beyond sense experience.
 The full criterion of meaningfulness therefore says that a sentence is cognitively meaningful if and only if either it is analytic or it is in principle empirically verifiable or falsifiable.
 The interest in logic, however, goes beyond the ability to use it to produce detailed proofs. There are interesting properties that can be proven of logical systems themselves. Many of these proofs of what are called 'metatheorems' were developed as part of an endeavour to use logic to provide a foundation for arithmetic. The first important work of the German mathematician and philosopher of mathematics Gottlob Frege (1848-1925), the Begriffsschrift ('concept-writing', 1879), is also the first example of a formal system in the sense of modern logic. In it Frege undertakes to develop a formal system within which mathematical proofs may be given. It was his discovery of the correct representation of generality, the notation of 'quantifier' and 'variable', that opened the possibility of successfully achieving this aim. With that notation Frege could represent sentences involving multiple generality (such as the form 'for every small number e there is a number n such that . . . ') on which the validity of much mathematical reasoning depends. In 1884, Frege published the Grundlagen der Arithmetik (translated as The Foundations of Arithmetic by the British linguistic philosopher J.L. Austin, 1959). The first volume of the Grundgesetze der Arithmetik (1893, translated as The Basic Laws of Arithmetic, 1964) formalized the mathematical approach of the Grundlagen, a task that necessitated giving the first formal theory of classes; it was this theory that was later shown inconsistent by Russell's paradox.
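 The point about multiple generality can be made concrete. The schematic form mentioned above ('for every small number e there is a number n such that . . . ') is plausibly read as the definition of the limit of a sequence; in quantifier-variable notation (modern symbols, not Frege's own two-dimensional script) it might be rendered:

```latex
% Limit of a sequence (a_m) with limit a, in quantifier-variable notation:
\forall \varepsilon \,\bigl(\varepsilon > 0 \rightarrow
  \exists n\, \forall m\, (m > n \rightarrow |a_m - a| < \varepsilon)\bigr)
```

 Here the chosen n may depend on the earlier quantified ε; reversing the order of ∀ε and ∃n would assert a single n working for every ε, a quite different and stronger claim. It is precisely this dependence of one generality upon another that pre-Fregean term logic could not express.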
 Frege's distinction as a logician is matched by his deep concern with the basic semantic concepts involved in the logical foundations of his work. In a succession of papers he forges the basic concepts and distinctions that have dominated subsequent philosophical investigation of logic and language. The topics of these writings include sense (Sinn) and reference, negation, assertion, truth and falsity, and the nature of thought. Although Frege's relation to the philosophical surroundings of his time is debatable, these concerns and his approach to them stamp Frege as the founding figure of 'analytic philosophy'. Nonetheless, his concern to protect a timeless objectivity for thought and its contents has led to accusations of Platonism, and his own views of the objects of mathematics troubled him until the end of his life.
 The program of reducing arithmetic to logic turned out to be impossible, but pursuit of this program resulted in a number of important findings. For example, in addition to consistency, another important property of a logical system is completeness. A complete system is one in which the axiom structure is sufficient to allow derivation of all true statements within the particular domain. The German-speaking mathematical logician Kurt Gödel (1906-78) proved the completeness of the first-order predicate calculus, and went on to the ground-breaking results commonly referred to as 'Gödel's theorems'. His proof of 1931 that no sufficiently strong system can establish its own consistency effectively put an end to Hilbert's programme, by showing that any system strong enough to provide a consistency proof of arithmetic would need to make logical and mathematical assumptions at least as strong as arithmetic itself, and hence be just as much prey to hidden inconsistencies. Gödel established that quantificational logic is complete ~ any statement that must be true whenever the premises are true can, in principle, be derived using the standard inference rules of quantificational logic. But the fact that a system is complete does not mean that a procedure exists to generate a proof of any given logical consequence of the premises. If such a procedure exists, the system is decidable. Sentential logic is decidable, and so are some restricted versions of quantificational logic. But Church proved that general quantificational logic is not decidable. In general quantificational logic, the mere fact that we have failed to derive a result from the postulates does not mean that it could not be derived: It may be that we simply have not yet constructed the right proof.
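 The decidability of sentential logic can be made vivid with a small sketch (the function names below are my own, not drawn from any source): the truth-table method simply enumerates every assignment of truth-values to the sentence letters, so a yes-or-no verdict on validity is always reached after finitely many steps.

```python
from itertools import product

def atoms(f):
    """Collect the sentence letters occurring in a formula.
    Formulas are strings (atoms) or tuples like ('if', p, q)."""
    if isinstance(f, str):
        return {f}
    _op, *args = f
    return set().union(*(atoms(a) for a in args))

def holds(f, v):
    """Evaluate formula f under the truth assignment v (a dict)."""
    if isinstance(f, str):
        return v[f]
    op, *args = f
    if op == 'not':
        return not holds(args[0], v)
    if op == 'and':
        return holds(args[0], v) and holds(args[1], v)
    if op == 'or':
        return holds(args[0], v) or holds(args[1], v)
    if op == 'if':
        return (not holds(args[0], v)) or holds(args[1], v)
    raise ValueError('unknown connective: %s' % op)

def entails(premises, conclusion):
    """Decide whether the premises entail the conclusion by checking
    every truth assignment ~ a finite search, hence a decision procedure."""
    letters = sorted(set().union(atoms(conclusion),
                                 *(atoms(p) for p in premises)))
    for values in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, values))
        if all(holds(p, v) for p in premises) and not holds(conclusion, v):
            return False  # counterexample: premises true, conclusion false
    return True

# Modus ponens is valid; 'affirming the consequent' is not.
print(entails([('if', 'p', 'q'), 'p'], 'q'))  # True
print(entails([('if', 'p', 'q'), 'q'], 'p'))  # False
```

 No such exhaustive search is available for general quantificational logic, whose quantifiers range over domains that may be infinite; that is where Church's undecidability result bites.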
Even more significant for the program of grounding mathematics in logic was Gödel's proof that, unlike quantificational logic, there is no consistent axiomatization of arithmetic that is complete. This is referred to as the 'incompleteness of arithmetic', and is commonly presented as the claim that for any axiomatization of arithmetic there will be a true statement that cannot be proven within the system.
 Some of these theorems about logic have played important roles in the development of computer science. Other claims of logic, which are commonly accepted as true but which are not or cannot be proven, have figured prominently in motivating the use of computers to study cognition. An example is Church's thesis, which holds that any effectively decidable process is computable, which is to say that it can be automated. If this thesis is true, then it follows that it is possible to implement a formal system on a computer that will generate the proof of any particular theorem that follows from the postulates. The assumption that this thesis is true has buttressed the use of computers in studies of cognition. Assuming that cognition relies on decidable procedures, the thesis tells us that these procedures can be implemented on a digital computer as well as in the brain. Many have assumed that the procedures of symbolic logic characterize much of human reasoning, and because these procedures can readily be implemented on a computer, many investigators have tried to develop simulations of human reasoning using computers equipped with these inference procedures. A further interest in logic is that numerous philosophers have tried to explicate scientific theories as logical structures, and the structure of scientific explanation in terms of formal logical derivation.
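 The idea of a machine that generates the proofs of theorems following from its postulates can be sketched in miniature (a toy system of my own devising, not any historical one): starting from a stock of axioms, the program applies a single inference rule, modus ponens, until no new sentences appear.

```python
def derive(axioms, limit=1000):
    """Mechanically generate theorems by forward chaining.
    Sentences are strings; conditionals are tuples ('if', a, b).
    The single inference rule is modus ponens:
    from ('if', a, b) together with a, infer b."""
    known = set(axioms)
    changed = True
    while changed and len(known) < limit:
        changed = False
        for s in list(known):
            if isinstance(s, tuple) and s[0] == 'if' and s[1] in known:
                if s[2] not in known:
                    known.add(s[2])  # apply modus ponens
                    changed = True
    return known

axioms = {'p', ('if', 'p', 'q'), ('if', 'q', 'r')}
print('r' in derive(axioms))  # True: r is reached purely mechanically
```

 Each derived sentence could be traced back to the axioms to yield an explicit proof; the philosophical point is only that the whole process is automatic, with no appeal to insight at any step.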
 According to Francis Herbert Bradley (1846-1924), the metaphysical picture to which this leads is one that celebrates unity and wholeness as attributes of the real, while anything partial and dependent upon division, in the way that thought formulated in language always is, rests on categories themselves inadequate to the harmonious whole. Nevertheless, these self-contradictory elements somehow contribute to the harmonious whole, or Absolute, lying beyond categorization. Although absolute idealism maintains few adherents today, Bradley's dissent from empiricism, his 'holism', and the brilliance and style of his writing continue to make him the most interesting of the late 19th-century writers influenced by the German philosopher Georg Wilhelm Friedrich Hegel (1770-1831). Hegel's first major work was the 'Phänomenologie des Geistes' (1807, translated as 'The Phenomenology of Mind', 1977). In 1816 he became professor of philosophy at Heidelberg, where he produced the Enzyklopädie der philosophischen Wissenschaften im Grundrisse ('Encyclopaedia of the Philosophical Sciences in Outline'). The cornerstone of Hegel's system, or world view, is the notion of freedom, conceived not as simple licence to fulfil preferences but as the rare condition of living self-consciously in a fully rationally organized community or state (and not, as was charged for example by Karl Raimund Popper (1902-1994), an apology for the authoritarian state). Popper rejected the traditional attempt to found scientific method in the support that experience gives to suitably formed generalizations and theories. Stressing the difficulty the problem of 'induction' puts in front of any such method, Popper substitutes an epistemology that starts with the bold, imaginative formation of hypotheses, which then face the tribunal of experience, which has the power to falsify them, but not to confirm them.
A theory is genuinely scientific only if it is capable of being refuted by experience; in Popper's philosophy of science, falsifiability is the great merit of genuine scientific theory, as opposed to unfalsifiable pseudo-science, notably psychoanalysis and historical materialism. Popper's idea was that it could be a positive virtue in a scientific theory that it is bold, conjectural, and goes beyond the evidence, but that it had to be capable of facing possible refutation. If each and every way things turn out is compatible with the theory, then it is no longer a scientific theory but, for instance, an ideology or article of faith.
 The complex relationship Bradley had with pragmatism marks a major crux in the history of philosophy. Pragmatism is, in brief, the philosophy of meaning and truth especially associated with Charles Sanders Peirce (1839-1914) and William James (1842-1910). It is given various formulations by both writers, but the core is the belief that the meaning of a doctrine is the same as the practical effects of adopting it. Peirce interpreted a theoretical sentence as a confused form of thought whose meaning is only that of a corresponding practical maxim (telling us what to do in some circumstances). In James the position issues in a theory of truth, notoriously allowing that beliefs, including for example belief in God, are true if the belief 'works' satisfactorily in the widest sense of the word. On James's view almost any belief might be respectable, and even true, provided it works (but working is not a simple matter for James). The apparently subjectivist consequences of this were widely assailed by Russell, Moore, and others in the early years of the 20th century. This led to a division within pragmatism between those such as John Dewey (1859-1952), whose humanistic conception of practice remains inspired by science, and the more 'idealistic' route taken especially by the English writer F.C.S. Schiller (1864-1937), embracing the doctrine that our cognitive efforts and human needs actually transform the reality that we seek to describe. James often writes as if he sympathizes with this development. For instance, in The Meaning of Truth (1909), he considers the hypothesis that other people have no minds, and remarks that the hypothesis would not work because it would not satisfy our (i.e., men's) egoistic cravings for the recognition and admiration of others. The disturbing complication is that this is supposed to be what makes it true that other persons have minds.
 Peirce's own approach to truth is that it is what [suitable] processes of enquiry would tend to accept if pursued to an ideal limit. Modern pragmatists such as Richard Rorty (1931- ) and, in some writings, Hilary Putnam (1926- ) have usually tried to dispense with an account of truth in favour of a minimal theory. A minimal theory of truth, for example, holds that there is no general problem about what makes sentences or propositions true; a minimal theory of value holds that there is nothing useful to say in general about values and valuing. Minimalist approaches arise when the prospects for a substantial meta-theory about some term seem dim. They are thus consonant with suspicion of 'first philosophy', or the possibility of a standpoint over and above involvement in some aspect of our activities, from which those activities can be surveyed and described. Minimalism is frequently associated with the anti-theoretical aspects of the later work of Ludwig Wittgenstein (1889-1951), and has also been charged with being a fig-leaf for philosophical bankruptcy or anorexia.
 'Metaphysics' was originally a title for those books of Aristotle that came after the 'Physics'; the term is now applied to any enquiry that raises questions about reality that lie beyond or behind those capable of being tackled by the methods of science. Naturally, an immediately contested issue is whether there are any such questions, or whether any text of metaphysics should, in Hume's words, be 'committed to the flames, for it can contain nothing but sophistry and illusion' (Enquiry Concerning Human Understanding). The traditional examples include questions of 'mind' and 'body', substance and accident, events, causation, and the categories of things that exist. 'Ontology', however, is a 17th-century coinage for the branch of metaphysics that concerns itself with what exists. Apart from the 'ontological' argument itself, there have been many deductive arguments that the world must contain things of one kind or another: simple things, unextended things, eternal substances, necessary beings, and so forth. Such arguments often depend upon some version of the principle of 'sufficient reason'; Kant is the greatest opponent of the view that unaided reason can tell us in detail what kinds of things must exist, and therefore do exist. The things a theory says exist are the things the variables range over in a properly regimented formal presentation of the theory. Philosophers characteristically charge each other with 'reifying' things improperly, and in the history of philosophy every kind of thing will at one time or another have been thought to be the fictitious result of an ontological mistake.
 Metaphysics seeks to determine what are the basic or fundamental kinds of things that exist, and to specify the nature of these entities. Historically, interest in metaphysics centred on such issues as whether a supreme being or creator god exists, whether there are mental or spiritual phenomena that are different from physical phenomena, or whether there is such a thing as free will. In more recent times it has addressed the question of the kinds of entities that we can include in scientific theories. For example, are mental events the kinds of things that should be posited in a theory of human action? The set of entities posited is in general said to specify the ontology to which the theory is committed.
 It is important to note that the character of metaphysical questions is generally taken to be different from the character of ordinary empirical questions, such as whether there are any living dinosaurs. With such empirical questions we rely on techniques such as ordinary observation to settle the issue. Ontological questions are thought to be more fundamental and not resolvable by ordinary empirical investigation. It was thought that to address the classical questions of the existence of God, or of minds separate from bodies, required a kind of inquiry that went beyond ordinary empirical investigation. Sometimes it was claimed that such issues could be addressed simply through the tools of logic. For example, the ontological argument for God's existence tried to argue from the idea of God as a perfect being to the actual existence of God: if God did not exist, there would be a being more perfect than God ~ a being just like God but who actually existed. Thus, the assumption that God does not exist is claimed to be contradictory, so God must exist. The modern ontological questions concern how we should set up the categories through which we conduct our empirical inquiry. The question of the appropriate categories arises prior to empirical observation, and so cannot easily be settled by means of such observation.
 To many non-philosophers both classical and contemporary questions of ontology seem peculiarly remote and unproductive. Of what value would it be to have an answer to an ontological question? The very character of ontological questions suggests that they lack practical significance. If ontological differences do not entail physical differences, it would seem that one could hold whatever ontology one wanted and still deal with the physical world in much the same way. When the challenge is put in this way, philosophers often find themselves hard put to provide a satisfactory answer. A number of philosophers, in fact, have tried to divert attention away from metaphysical issues. The logical positivists claimed that most classical questions of ontology were meaningless, whereas Ludwig Wittgenstein (1953) tried to convince readers that when philosophers raised such issues they were letting their language go on holiday, not raising real questions at all.
 Other philosophers have sought to reduce the distance between ontological inquiries and empirical ones. The most influential American philosopher of the latter half of the 20th century, Willard Van Orman Quine (1908-2000), began with work on mathematical logic, issuing in 'A System of Logistic' (1934), 'Mathematical Logic' (1940), and 'Methods of Logic' (1950). It was with the collection of papers 'From a Logical Point of View' (1953) that his philosophical importance became widely recognized. His celebrated attack on the analytic/synthetic distinction heralded a major shift away from the view of language descended from logical positivism, and a new appreciation of the difficulty of providing a sound empirical basis for theses concerning 'convention', 'meaning', and 'synonymy'.
 His reputation was cemented by 'Word and Object' (1960), in which the indeterminacy of radical translation first takes centre stage. In this and many subsequent writings Quine took a bleak view of the nature of the language with which we ascribe thoughts and beliefs to ourselves and others. The languages that are properly behaved and suitable for literal and true description of the world are those of mathematics and science. The entities to which our best theories refer must be taken with full seriousness in our ontologies; although an empiricist, Quine thus supposed that the abstract objects of set theory are required by science, and therefore exist. Quine, for example, proposed that when we settle on a scientific theory we thereby settle the question of what ontological scheme we accept. Invoking the framework of quantificational logic, where all the terms referring to objects can be represented as variables in quantified expressions, Quine offers the maxim 'to be is to be the value of a bound variable': The objects to which we attribute properties in our theories are the ones whose existence we accept. Although this attempt to place ontological questions in the context of scientific inquiry may seem particularly attractive when we consider how perplexing the issues are otherwise, we should not think that we thereby really avoid them. What this proposal overlooks is that many of the debates over the adequacy of scientific theories have focussed on the ontology assumed by the theory. This has been particularly true in recent psychology, where there have been active disputes over whether to count mental events as causal factors in an explanatory theory. But such questions are not peculiar to psychology. In physics and biology as well, disputes between theories have often turned on ontological issues as much as on empirical ones.
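 Quine's maxim can be illustrated with a simple regimentation (the example is mine, not Quine's own):

```latex
% 'There are prime numbers greater than a hundred', regimented so that
% its ontological commitment is carried by the bound variable x:
\exists x\,\bigl(\mathrm{Prime}(x) \wedge x > 100\bigr)
```

 Anyone who accepts this sentence as true is thereby committed to numbers among the values of its bound variable; the commitment is read off the quantifier, not off the surface grammar of the English sentence.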
For example, there was a long controversy between Cartesians and Newtonians during the 17th and 18th centuries over the legitimacy of appeals to action at a distance, and embryology at the end of the 19th century was torn by a prolonged battle between 'vitalists' and 'mechanists' over the appropriate kind of explanation for developmental phenomena.
 However, consider once again the argument: 'If anyone knows some p, then he or she can be certain that p. But no one can be certain of anything, and therefore no one knows anything.' This argument, advanced in this form by Unger, is instructive. It repeats Descartes' mistake of thinking that the psychological state of feeling certain ~ which someone can be in with respect to falsehoods, as I can feel certain that Northern Dancer will win the Derby next week, and be wrong ~ is what we are seeking in epistemology. But it also exemplifies the tendency in discussions of knowledge to make the definition of knowledge so restrictive that little or nothing passes scrutiny. Should one care if a suggested definition of knowledge is such that, as the argument just quoted tells us, no one can know anything? So long as one has many well-justified beliefs which work well in practice, can one not be quite content to know nothing? For my part, it seems no bad thing that the overall interest lies with the justification of beliefs and not with the definition of knowledge. Justification is an important matter, not least because in the area of application in epistemology where the really serious interest should lie ~ in questions about the 'philosophy of science' ~ justification is the crucial problem. That is where epistemologists should be getting down to work. By comparison, efforts to define 'knowledge' are trivial, yet they occupy too much effort in epistemology: witness the prolonged debate generated by Gettier counter-examples. The American philosopher Edmund Gettier provided a range of counter-examples to the justified-true-belief formula: in each case a belief is true, and the agent is justified in believing it, but the justification does not relate to the truth of the belief in the right way, so that it is relatively accidental, or a matter of luck, that the belief is true. For example, I see what I reasonably and justifiably take to be an event of your receiving a bottle of whiskey, and on this basis I believe you drink whiskey. The truth is that you do not drink whiskey, but on this occasion you were in fact taking delivery of a medical specimen. In such a case my belief is true and justified, but I do not thereby know that you drink whiskey, since this truth is only accidental relative to my evidence. The counter-examples sparked a prolonged debate over the kinds of conditions that might be substituted to give a better account of knowledge, or whether all suggestions would meet similar problems.
 The overall problem with justification is that the procedures we adopt, across all walks of epistemic life, appear highly permeable to difficulties posed by scepticism. The problem of justification is therefore in large part the problem of scepticism, which is precisely why discussion of scepticism is most central.
 Nonetheless, Russell developed a method of philosophical analysis, the beginnings of which are clear in the work of his idealist phase. This method was central to his revolt against idealism and was employed throughout his subsequent career. Its main distinctive feature is that it has two parts. First, it proceeds backwards from a given body of knowledge (the 'results') to its premises, and second, it proceeds forwards from the premises to a reconstruction of the original body of knowledge. Russell often referred to the first stage of philosophical analysis simply as 'analysis', in contrast to the second stage, which he called 'synthesis'. While the first stage was seen as the most philosophical, both were nonetheless essential to philosophical analysis. Russell consistently adhered to this two-directional view of analysis throughout his career.
 Analytic philosophy has never been fixed or stable, because it is intrinsically self-critical and its practitioners are always challenging their own presuppositions and conclusions. However, it is possible to locate a central period in analytic philosophy ~ the period comprising, roughly speaking, logical positivism immediately prior to the 1939-45 war and the postwar phase of linguistic analysis. Both the prehistory and the subsequent history of analytic philosophy can be defined by the main doctrines of that central period.
 In the central period, analytic philosophy was defined by a belief in two linguistic distinctions, combined with a research programme. The two distinctions are, first, that between analytic and synthetic propositions and, secondly, that between descriptive and evaluative utterances. The research programme is the traditional philosophical research programme concerning language, knowledge, meaning, truth, mathematics, and so forth. One way to see the development of analytic philosophy over the past thirty years is to regard it as the gradual rejection of these distinctions, and a corresponding rejection of foundationalism as the crucial enterprise of philosophy. However, in the central period, these two distinctions served not only to identify the main beliefs of analytic philosophy but, for those who accepted them and the research programme, to define the nature of philosophy itself.
 The distinction between analytic and synthetic propositions was supposed to be the distinction between those propositions that are true or false as a matter of definition, or of the meaning of the terms contained in them (the analytic propositions), and those that are true or false as a matter of fact in the world and not solely in virtue of the meaning of the words (the synthetic propositions). Examples of analytic truths would be such propositions as ‘Triangles are three-sided plane figures’, ‘All bachelors are unmarried’, ‘Women are female’, ‘2 + 2 = 4’, and so forth. In each of these, the truth of the proposition is entirely determined by its meaning: they are true by the definitions of the words that they contain. Such propositions can be known to be true or false a priori, and in each case they express necessary truths. Indeed, it was a characteristic feature of the analytic philosophy of this central period that terms such as ‘analytic’, ‘necessary’ and ‘tautological’ were taken to be co-extensive. Contrasted with these were synthetic propositions, which, if they were true, were true as a matter of empirical fact and not as a matter of definition alone. Thus, propositions such as ‘There are more women than men’, ‘Bachelors tend to die earlier than married men’ and ‘Bodies attract each other according to the inverse square law’ are all said to be synthetic propositions, and, if they are true, they express a posteriori empirical truths about the real world that are independent of language. Such empirical truths, according to this view, are never necessary; rather, they are contingent. For philosophers holding these views, the terms ‘a posteriori’, ‘synthetic’, ‘contingent’ and ‘empirical’ were taken to be more or less co-extensive.
 It was a basic assumption behind the logical positivist movement that all meaningful propositions were either analytic or synthetic, in the senses just defined. The positivists wished to draw a sharp boundary between the meaningful propositions of science and everyday life, on the one hand, and the nonsensical propositions of metaphysics and theology on the other. They claimed that all meaningful propositions are either analytic or synthetic: disciplines such as logic and mathematics fall within the analytic camp; the empirical sciences, and much of common sense, fall within the synthetic camp. Propositions that were neither analytic nor empirical were meaningless. The slogan of the positivists was the verification principle, which in a simple form can be stated as follows: all meaningful propositions are either analytic or synthetic, and those which are synthetic are empirically verifiable. This was sometimes shortened to an even simpler slogan: the meaning of a proposition is just its method of verification.
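 The positivists’ slogan can be put schematically. The symbolization below is our paraphrase for clarity, not the positivists’ own notation:

```latex
% Verification principle (schematic paraphrase):
% a proposition p is meaningful iff p is analytic
% or p is empirically verifiable.
\forall p\;\bigl(\mathrm{Meaningful}(p)\;\leftrightarrow\;
  \mathrm{Analytic}(p)\;\lor\;\mathrm{Verifiable}(p)\bigr)
```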
 Nevertheless, how can analysis be informative? This is the question that gives rise to what philosophers have traditionally called the paradox of analysis. Thus consider the following proposition:
   (1) To be an instance of knowledge is to be an instance
   of justified true belief not essentially grounded in any
   falsehood.
(1), if true, illustrates an important type of philosophical analysis. For convenience of exposition, assume that (1) is a correct analysis. The paradox arises from the fact that if the concept of justified true belief not essentially grounded in any falsehood is the analysans of the concept of knowledge, it would seem that they are the same concept and hence that:
   (2) To be an instance of knowledge is to be an instance
   of knowledge.
would have to be the same proposition as (1). But then how can (1) be informative when (2) is not? This is what might be called the first paradox of analysis.
 Classical writings on analysis suggest a second paradox of analysis (Moore, 1942). Consider this:
   (3) An analysis of the concept of being a brother is that
   to be a brother is to be a male sibling.
If (3) is true, it would seem that the concept of being a brother would have to be the same concept as the concept of being a male sibling and that
   (4) An analysis of the concept of being a brother is that
   to be a brother is to be a brother.
would also have to be true, and in fact would have to be the same proposition as (3). Yet (3) is true and (4) is false.
 Both these paradoxes rest on the assumptions that analysis is a relation between concepts, rather than one involving entities of other sorts, such as linguistic expressions, and that in a true analysis, analysans and analysandum are the same concept. Both these assumptions are explicit in Moore (1942). Moore’s remarks hint at a solution: that a statement of an analysis is a statement partly about the concept involved and partly about the verbal expressions used to express it. He says he thinks a solution of this sort is bound to be right, but fails to suggest one, because he cannot see a way in which the analysis can be even partly about the expressions (Moore, 1942).
 One suggestion of such a way, offered as a solution to the second paradox, is to explicate (3) as:
   (5) An analysis is given by saying that the verbal expression
   ‘x is a brother’ expresses the same concept as is expressed
   by the conjunction of the verbal expressions ‘x is a male’
   when used to express the concept of being male and
   ‘x is a sibling’ when used to express the concept of being
   a sibling. (Ackerman, 1990)
 An important point about (5) is that, for all its philosophical jargon (‘analysis’, ‘concept’, ‘x is a . . . ’), (5) seems to state the sort of information generally stated in a definition of the verbal expression ‘brother’ in terms of the verbal expressions ‘male’ and ‘sibling’, where this definition is designed to draw upon listeners’ antecedent understanding of ‘male’ and ‘sibling’, and thus to tell listeners what the verbal expression ‘brother’ really means, instead of merely providing the information that two verbal expressions are synonymous without specifying the meaning of either one. Thus, this solution to the second paradox makes the sort of analysis that gives rise to it a matter of specifying the meaning of a verbal expression in terms of separate, already-understood verbal expressions, and of saying how the meanings of these expressions are combined: such an analysis should both specify the constituent concepts of the analysandum and tell how they are combined. But is this all there is to philosophical analysis?
 To answer this question, we must note that, in addition to there being two paradoxes of analysis, there are two types of analysis that are relevant here. (There are also other types of analysis, such as reformatory analysis, where the analysans is intended to improve on and replace the analysandum. But since reformatory analysis involves no commitment to conceptual identity between analysans and analysandum, it does not generate a paradox of analysis and so will not concern us here.) One way to recognize the difference between the two relevant types of analysis is to focus on the difference between the two paradoxes. This can be done by means of the Frege-inspired sense-individuation condition, which is the condition that two expressions have the same sense if and only if they can be interchanged whenever used in propositional attitude contexts. If the expressions for the analysans and the analysandum in (1) met this condition, (1) and (2) would not raise the first paradox; but the second paradox arises regardless of whether the expressions for the analysans and the analysandum meet this condition. The second paradox is a matter of the failure of such expressions to be interchangeable in sentences involving such contexts as ‘an analysis is given by’. Thus, a solution (such as the one just offered) that is aimed only at such contexts can solve the second paradox. This is not so for the first paradox, however, which applies to all pairs of propositions expressed by sentences in which expressions for pairs of analysanda and analysantia raising the first paradox are interchanged. For example, consider the following proposition:
   (6) Julie knows that some cats lack tails.
It is possible for Walter to believe (6) without believing
   (7) Julie has justified true belief, not essentially grounded
   in any falsehood, that some cats lack tails.
Yet this possibility clearly does not mean that the proposition that Julie knows that some cats lack tails is partly about language.
 One approach to the first paradox is to argue that, despite the apparent epistemic inequivalence of (1) and (2), the concept of justified true belief not essentially grounded in any falsehood is identical with the concept of knowledge. Another approach is to argue that, in the sort of analysis raising the first paradox, the analysans and analysandum are concepts that are different but that bear a special epistemic relation to each other. It has been suggested that this analysans-analysandum relation has the following facets:
  (i) The analysans and analysandum are necessarily
  coextensive, i.e., necessarily every instance of one is an
  instance of the other.
  (ii) The analysans and analysandum are knowable
  a priori to be coextensive.
  (iii) The analysandum is simpler than the analysans
  (a condition whose necessity is recognized in classical writings on analysis, such as Langford, 1942).
  (iv) The analysans does not have the analysandum
  as a constituent.
Condition (iv) rules out circularity; but since many valuable quasi-analyses are partly circular (e.g., knowledge is justified true belief supported by known reasons not essentially grounded in any falsehood), it seems best to distinguish between full analysis, for which (iv) is a necessary condition, and partial analysis, for which it is not.
 These conditions, while necessary, are clearly insufficient. The basic problem is that they apply to many pairs of concepts that do not seem closely enough related epistemologically to count as analysans and analysandum, such as the concept of being six and the concept of being the fourth root of 1296. Accordingly, a fifth condition can be found by drawing upon what actually seems epistemologically distinctive about analysis of the sort under consideration, which is a certain way such analyses can be justified. This is the philosophical example-and-counterexample method, which in general terms goes as follows. ‘J’ investigates the analysis of ‘K’s concept ‘Q’ (where ‘K’ can but need not be identical to ‘J’) by setting ‘K’ a series of armchair thought experiments, i.e., presenting ‘K’ with a series of simple described hypothetical test cases and asking ‘K’ questions of the form ‘If such-and-such were the case, would this count as a case of Q?’ ‘J’ then contrasts the descriptions of the cases to which ‘K’ answers affirmatively with the descriptions of the cases to which ‘K’ does not, and ‘J’ generalizes upon these descriptions to arrive at the concepts (if possible not including the analysandum) and their mode of combination that constitute the analysans of ‘K’s concept ‘Q’. Since ‘J’ need not be identical with ‘K’, there is no requirement that ‘K’ himself be able to perform this generalization, to recognize its result as correct, or even to understand the analysans that results. This is reminiscent of Walton’s observation that one can simply recognize a bird as a blue-jay without realizing just what features of the bird (beak, wing configuration, and so forth) form the basis of this recognition. (The philosophical significance of this way of recognizing is that ‘K’ answers the questions based solely on whether the described hypothetical cases strike him as cases of ‘Q’.)
 ‘J’ observes certain strictures in formulating the cases and questions. He makes the cases as simple as possible, to minimize the possibility of confusion and also to minimize the likelihood that ‘K’ will draw upon his philosophical theories (or quasi-philosophical, rudimentary notions, if he is philosophically unsophisticated) in answering the questions. For this reason, if two hypothetical test cases yield conflicting results, the conflict should be resolved in favour of the simpler case. ‘J’ makes the series of described cases wide-ranging and varied, with the aim of having it be a complete series, where a series is complete if and only if no omitted case is such that, if included, it would change the analysis arrived at. ‘J’ does not, of course, use as a test-case description anything complicated and general enough to express the analysans. There is no requirement that the described hypothetical test cases be formulated only in terms of what can be observed. Moreover, using described hypothetical situations as test cases enables ‘J’ to frame the questions in such a way as to rule out extraneous background assumptions to a degree. Thus, even if ‘K’ correctly believes that all and only P’s are R’s, the question of whether the concepts of ‘P’, ‘R’, or both enter into the analysans of his concept ‘Q’ can be investigated by asking him such questions as ‘Suppose (even if it seems preposterous to you) that you were to find out that there was a P that was not an R. Would you still consider it a case of Q?’
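 Purely as an illustration, the generalization step of this method can be caricatured in code. The sketch below treats ‘J’s generalization as naive conjunctive concept learning over ‘K’s verdicts; the function name and feature labels are invented for the example and correspond to nothing in the literature:

```python
def conjectured_analysans(cases):
    """Toy conjunctive generalization over K's verdicts.

    cases: list of (features, counts_as_q) pairs, where features is a
    frozenset of feature labels describing a hypothetical test case and
    counts_as_q is K's yes/no verdict on whether it counts as a case of Q.
    Returns the set of features shared by every positive case, provided
    no negative case also has all of them; otherwise returns None.
    """
    positives = [f for f, yes in cases if yes]
    negatives = [f for f, yes in cases if not yes]
    if not positives:
        return None
    candidate = frozenset.intersection(*positives)
    if any(candidate <= neg for neg in negatives):
        return None  # the conjunction fails to exclude a non-Q case
    return candidate


# 'Brother' example: four described hypothetical cases and K's verdicts.
cases = [
    (frozenset({"male", "sibling", "adult"}), True),
    (frozenset({"male", "sibling", "child"}), True),
    (frozenset({"female", "sibling"}), False),
    (frozenset({"male", "only-child"}), False),
]
print(sorted(conjectured_analysans(cases)))  # ['male', 'sibling']
```

Run on the ‘brother’ example, the conjectured analysans comes out as the conjunction of being male and being a sibling. Real philosophical generalization is of course far subtler; the point is only the contrast between the positive and negative verdicts.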
 Taking all this into account, the fifth necessary condition for this sort of analysans-analysandum relation is as follows:
  (v) If ‘S’ is the analysans of ‘Q’, the proposition that necessarily
  all and only instances of ‘S’ are instances of ‘Q’ can be
  justified by generalizing from intuitions about the correct
  answers to questions about a varied and wide-ranging series
  of simple described hypothetical situations.
Are these five necessary conditions jointly sufficient?
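 For readers who prefer notation, the five conditions might be paraphrased schematically, writing ‘S’ for the analysans and ‘Q’ for the analysandum. The symbolization is our gloss, not the classical authors’:

```latex
\begin{align*}
\text{(i)}   &\quad \Box\,\forall x\,(Sx \leftrightarrow Qx)\\
\text{(ii)}  &\quad \text{`}\forall x\,(Sx \leftrightarrow Qx)\text{' is knowable a priori}\\
\text{(iii)} &\quad Q \text{ is simpler than } S\\
\text{(iv)}  &\quad Q \text{ is not a constituent of } S\\
\text{(v)}   &\quad \Box\,\forall x\,(Sx \leftrightarrow Qx) \text{ is justifiable by generalizing}\\
             &\quad \text{from intuitions about described hypothetical cases}
\end{align*}
```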
 The coherence theory of truth is the view that the truth of a proposition consists in its being a member of some suitably defined body of other propositions: a body that is consistent, coherent and possibly endowed with other virtues, provided these are not defined in terms of truth. The theory, though surprising at first sight, has two strengths: first, we do in fact test beliefs for truth in the light of other beliefs; second, we cannot step outside our own best system of belief to see how well it is doing in terms of correspondence with the world. To many thinkers the weak point of pure coherence theories is that they fail to include a proper sense of the way in which actual systems of belief are sustained by persons with perceptual experience. For a pure coherence theory, experience can bear on a system of belief only by generating further beliefs, which then form part of a coherent or incoherent set. This seems not to do justice to our sense that experience plays a special role in controlling our systems of belief, but coherentists have contested the claim in various ways.
 Aristotle said that a statement is true if it says of what is that it is, and of what is not that it is not (Metaphysics Γ. iv. 1011). But a correspondence theory is not simply the view that truth consists in correspondence with the ‘facts’, but rather the view that it is theoretically interesting to realize this. Aristotle’s claim is in itself a harmless platitude, common to all views of truth. A correspondence theory is distinctive in holding that the notions of correspondence and fact can be sufficiently developed to make the platitude into an interesting theory of truth. Opponents charge that this is not so, primarily because we have no access to facts independently of the statements and beliefs that we hold: we cannot compare our beliefs with a reality apprehended by means other than those beliefs, or perhaps further beliefs. Hence we have no fix on ‘facts’ as something like structures to which our beliefs may or may not correspond.
 Coherence is a major player in the arena of knowledge. There are coherence theories of belief, truth and justification. These combine in various ways to yield theories of knowledge. It seems fitting to proceed from theories of belief, through justification, to truth. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have, the belief that you are reading a page in this book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book rather than some quite different belief, say, the belief that you are merely imagining that you are doing so?
 One answer is that the belief has a coherent place or role in a system of beliefs. Perception has an influence on belief: you respond to sensory stimuli by believing that you are reading a page in a book rather than by believing something quite different. Belief, in turn, has an influence on action: you will act differently if you believe that you are reading a page than if you believe something else entirely. Perception and action underdetermine the content of belief, however. The same stimuli may produce various beliefs, and various beliefs may produce the same action. The role that gives the belief the content it has is the role it plays in a network of relations to other beliefs, the role in inference and implication, for example. I infer different things from believing that I am reading a page in a book than from any other belief, just as I infer that belief from different things than I infer other beliefs from.
 The input of perception and the output of action supplement the central role of the systematic relations the belief has to other beliefs, but it is the systematic relations that give the belief the specific content it has. They are the fundamental source of the content of belief. That is how coherence comes in. A belief has the content that it does because of the way in which it coheres within a system of beliefs. We might distinguish weak coherence theories of the content of beliefs from strong coherence theories. Weak coherence theories affirm that coherence is one determinant of the content of belief; strong coherence theories affirm that coherence is the sole determinant of the content of belief.
 When we turn from belief to justification, we confront a similar group of coherence theories. What makes one belief justified and another not? The answer is the way it coheres with the background system of beliefs. Again there is a distinction between weak and strong theories of coherence. Weak theories tell us that the way in which a belief coheres with a background system of beliefs is one determinant of justification, other typical determinants being perception, memory and intuition. Strong theories, by contrast, tell us that justification is solely a matter of how a belief coheres with a system of beliefs. There is, however, another distinction that cuts across the distinction between weak and strong coherence theories of justification. It is the distinction between positive and negative coherence theories. A positive coherence theory tells us that if a belief coheres with a background system of beliefs, then the belief is justified. A negative coherence theory tells us that if a belief fails to cohere with a background system of beliefs, then the belief is not justified. We might put this by saying that, according to a positive coherence theory, coherence has the power to produce justification, while according to a negative coherence theory, coherence has only the power to nullify justification.
 A strong coherence theory of justification is a combination of a positive and a negative theory which tells us that a belief is justified if and only if it coheres with a background system of beliefs.
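 The combinations just described can be summarized schematically. Writing C(b, B) for ‘belief b coheres with background system B’ and J(b) for ‘b is justified’ (our notation, introduced only for this summary):

```latex
\text{Positive theory:}\quad C(b,B) \rightarrow J(b)\\
\text{Negative theory:}\quad \lnot C(b,B) \rightarrow \lnot J(b)\\
\text{Strong theory:}\quad J(b) \leftrightarrow C(b,B)
```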
 Coherence theories of justification and knowledge have most often been rejected as being unable to deal with perceptual knowledge, and, therefore, it will be most appropriate to consider a perceptual example which will serve as a kind of crucial test. Suppose that a person, call her Julie, works with a scientific instrument that has a gauge for measuring the temperature of liquid in a container. The gauge is marked in degrees. She looks at the gauge and sees that the reading is 105 degrees. What is she justified in believing, and why? Is she, for example, justified in believing that the liquid in the container is 105 degrees? Clearly, that depends on her background beliefs. A weak coherence theorist might argue that though her belief that she sees the shape 105 is immediately justified as direct sensory evidence without appeal to a background system, the belief that the liquid in the container is 105 degrees results from coherence with a background system of beliefs affirming that the shape 105 is a reading of 105 degrees on the gauge that measures the temperature of the liquid in the container. This sort of weak coherence theory combines coherence with direct perceptual evidence, the foundation of justification, to account for the justification of our beliefs.
 A strong coherence theory would go beyond the claim of the weak coherence theory to affirm that the justification of all beliefs, including the belief that one sees the shape 105, or even the more cautious belief that one sees a shape, results from coherence with a background system. One may argue for this strong coherence theory in a number of different ways. One line of argument would be to appeal to the coherence theory of the content of belief: if the content of the perceptual belief results from the relations of the belief to other beliefs in a system of beliefs, then one may argue that the justification of the perceptual belief also results from its relations to other beliefs in the system. One may, however, argue for the strong coherence theory without assuming the coherence theory of the content of beliefs. It may be that some beliefs have the content that they do atomistically, but that our justification for believing them is the result of coherence. Consider the very cautious belief that I see a shape. How could the justification for that belief be the result of coherence with a background system of beliefs? What might the background system tell us that would justify that belief? Our background system contains a simple and primary theory about our relationship to the world. To come to the specific point at issue, we believe that we can tell a shape when we see one, that we are trustworthy about such simple matters as whether we see a shape before us or not. We may, with experience, come to believe that sometimes we think we see a shape before us when there is nothing there at all, when we see an after-image, for example; so we are not perfect, not beyond deception, yet we are trustworthy for the most part. Moreover, when Julie sees the shape 105, she believes that the circumstances are not those that are deceptive about whether she sees that shape. The light is good, the numeral shapes are large, readily discernible, and so forth.
These are beliefs that Julie has that tell her that her belief that she sees a shape is justified. Her belief that she sees a shape is justified because of the way it is supported by the other beliefs. It coheres with those beliefs, and so she is justified.
 There are various ways of understanding the nature of this support or coherence. One way is to view Julie as inferring that her belief is true from the other beliefs. The inference might be construed as an inference to the best explanation: given her background beliefs, the best explanation Julie has for the existence of her belief that she sees a shape is that she does see a shape. Thus, we might think of coherence as inference to the best explanation based on a background system of beliefs. Since we are not aware of such inferences for the most part, one might object to such an account on the grounds that not all justifying inference is explanatory, and consequently be led to a more general account of coherence as successful competition based on a background system. The belief that one sees a shape competes with the claim that one is deceived and with other sceptical objections. The background system of beliefs informs one that one is trustworthy, enabling one to meet the objection. A belief coheres with a background system just in case the system enables one to meet the sceptical objections, and in that way justifies one in the belief. This is a standard strong coherence theory of justification.
 It is easy to illustrate the relationship between positive and negative coherence theories in terms of the standard coherence theory. If some objection to a belief cannot be met in terms of the background system of beliefs of a person, then the person is not justified in that belief. So, to return to Julie, suppose that she has been told that a warning light has been installed on her gauge to tell her when it is not functioning properly, and that when the red light is on, the gauge is malfunctioning. Suppose that when she sees the reading of 105, she also sees that the red light is on. Imagine, finally, that this is the first time the red light has been on after years of working with the gauge. Julie, who has always placed her trust in the gauge, believes what the gauge tells her: that the liquid in the container is at 105 degrees. Though she believes what she reads, her belief that the liquid in the container is at 105 degrees is not a justified belief, because it fails to cohere with her background belief that the gauge is malfunctioning. Thus, the negative coherence theory tells us that she is not justified in her belief about the temperature of the contents of the container. By contrast, when the red light is not illuminated and Julie’s background system tells her that under such conditions the gauge is a trustworthy indicator of the temperature of the liquid in the container, then she is justified. The positive coherence theory tells us that she is justified in her belief because her belief coheres with her background system.
 The coherence theories of justification considered so far have a common feature: they are what are called internalistic theories of justification. A coherentist view is internalist if both the beliefs or other states with which a justificandum belief is required to cohere and the coherence relations themselves are reflectively accessible. A view according to which some of the factors required for justification must be cognitively accessible while others need not be, and in general will not be, would count as an externalist view. Perhaps the clearest example of an internalist position would be a foundationalist view according to which foundational beliefs pertain to immediately experienced states of mind, and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements or only the capacity to become aware of them is required; the same distinction applies to a coherentist view.
 Internalist coherence theories affirm that coherence is a matter of internal relations between beliefs and that justification is a matter of coherence. If justification is solely a matter of internal relations between beliefs, however, we are left with the possibility that those internal relations might fail to correspond with any external reality. How, one might object, can a completely internal, subjective notion of justification bridge the gap between mere true belief, which might be no more than a lucky guess, and knowledge, which must be grounded in some connection between internal subjective conditions and external objective realities?
 The answer is that it cannot, and that something more than justified true belief is required for knowledge. This result has, however, been established quite apart from consideration of coherence theories of justification. What is required may be put by saying that the justification one has must be undefeated by errors in the background system of beliefs. A justification is undefeated by errors just in case any correction of such errors sustains the justification of the belief on the basis of the corrected system. So knowledge, on this sort of positive coherence theory, is true belief that coheres with the background belief system and with corrected versions of that system. In short, knowledge is true belief plus justification resulting from coherence and undefeated by error. The connection between the internal subjective conditions of belief and external objective realities results from the required correctness of our beliefs about the relations between those conditions and realities. In the example of Julie, she believes that her internal subjective conditions of sensory experience and perceptual belief are connected with the external objective reality that the temperature of the liquid in the container is 105 degrees, and the correctness of that background belief is essential to the justification remaining undefeated. So our background system of beliefs contains a simple theory about our relation to the external world which justifies certain of our beliefs that cohere with that system. For such justification to convert to knowledge, that theory must be sufficiently free from error so that coherence is sustained in corrected versions of our background system, including corrected versions of the simple background theory that provides the connection between internal conditions and external realities.
 The coherence theory of truth arises naturally out of a problem raised by the coherence theory of justification. The problem is that anyone seeking to determine whether she has knowledge is confined to the search for coherence among her beliefs. The sensory experiences she has are mute until they are represented in the form of some perceptual belief. Beliefs are the engine that pulls the train of justification. But what assurance do we have that our justification is based on true beliefs? What justification do we have that any of our justifications are undefeated? The fear that we might have none, that our beliefs might be the artifact of some deceptive demon or scientist, leads to the quest to reduce truth to some form, perhaps an idealized form, of justification that would close the threatening sceptical gap between justification and truth. Suppose that a belief is true if and only if it is ideally justified for some person. For such a person there would be no gap between justification and truth, or between justification and undefeated justification. Truth would be coherence with some ideal background system of beliefs, perhaps one expressing a consensus among belief systems, or some convergence toward consensus. Such a view is theoretically attractive for the reduction it promises, but it appears open to a profound objection: there is a consensus that we can all be wrong, at least about some matters, for example, about the origins of the universe. If there is a consensus that we can all be wrong about something, then the consensual belief system rejects the equation of truth with consensus. Consequently, the equation of truth with coherence with a consensual belief system is itself incoherent.
 Coherence theories of the content of our beliefs and of the justification of our beliefs themselves cohere with our background systems, but coherence theories of truth do not. A defender of coherentism must accept the logical gap between justified belief and truth, but she may believe that her capacities suffice to close the gap and to yield knowledge. That view is, at any rate, a coherent one.
 Mental states have contents: A belief may have the content that I will catch that train; a hope or a fear may have that same content. A concept is something which is capable of being a constituent of such contents. More specifically, a concept is a way of thinking of something: a particular object, or property, or relation, or some other entity.
 A concept is that which is understood by a term, particularly a predicate. To possess a concept is to be able to deploy a term expressing it in making judgements: The ability connects with such things as recognizing when the term applies, and being able to understand the consequences of its application. The term ‘idea’ was formerly used in the same way, but is avoided because of its associations with subjective mental imagery, which may be irrelevant to the possession of a concept. In the semantics of Frege, a concept is the reference of a predicate, and cannot be referred to by a subject term. The distinction is explored in Frege’s ‘On Concept and Object’ (1892). Frege regarded predicates as incomplete expressions, in the same way as a mathematical expression for a function, such as ‘sine . . .’ or ‘log . . .’, is incomplete. Predicates refer to concepts, which are themselves ‘unsaturated’, and cannot be referred to by subject expressions (we thus get the paradox that the concept of a horse is not a concept). Although Frege recognized the metaphorical nature of the notion of a concept being unsaturated, he was rightly convinced that some such notion is needed to explain the unity of a sentence, and to prevent sentences from being thought of as mere lists of names.
 Even so, several different concepts may each be ways of thinking of the same object. A person may think of himself in the first-person way, or think of himself as the spouse of Jane Doe, or as the person located in a certain room now. More generally, a concept ‘c’ is distinct from a concept ‘d’ if it is possible for a person rationally to believe that ‘c’ is such-and-such without believing that ‘d’ is such-and-such. As words can be combined to form structured sentences, concepts have also been conceived as combinable into structured complex contents. When these complex contents are expressed in English by ‘that . . .’ clauses, as in our opening examples, they will be capable of being true or false, depending on the way the world is.
 Concepts are to be distinguished from stereotypes and from conceptions. The stereotypical spy may be a middle-level official down on his luck and in need of money. Nonetheless, since we can come to learn that Anthony Blunt, art historian and Surveyor of the Queen’s Pictures, was a spy, we can come to believe that something falls under a concept while positively disbelieving that the same thing falls under the stereotype associated with that concept. Similarly, a person’s conception of a just arrangement for resolving disputes may involve something like contemporary Western legal systems. But whether or not this is correct, it is quite intelligible for someone to reject this conception by arguing that it does not adequately provide for the elements of fairness and respect which are required by the concept of justice.
 A theory of a particular concept must be distinguished from a theory of the object or objects it picks out. The theory of the concept is part of the theory of thought and epistemology; a theory of the object or objects is part of metaphysics and ontology. Some figures in the history of philosophy, and perhaps even some of our contemporaries, are open to the accusation of not having fully respected the distinction between the two kinds of theory. Descartes appears to have moved from facts about the indubitability of the thought ‘I think’, containing the first-person way of thinking, to conclusions about the non-material nature of the object he himself was. But though the goals of a theory of concepts and a theory of objects are distinct, each theory is required to have an adequate account of its relation to the other. A theory of concepts is unacceptable if it gives no account of how the concept is capable of picking out the objects it evidently does pick out. A theory of objects is unacceptable if it makes it impossible to understand how we could have concepts of those objects.
 A fundamental question for philosophy is: ‘What individuates a given concept?’, that is, what makes it the concept it is, rather than any other concept? One answer, which has been developed in great detail, is that it is impossible to give a non-trivial answer to this question. An alternative approach addresses the question by starting from the idea that a concept is individuated by the condition which must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other attitudes whose contents contain it as a constituent. So, to take a simple case, one could propose that the logical concept ‘and’ is individuated by this condition: It is the unique concept ‘C’ to possess which a thinker has to find these forms of inference compelling, without basing them on any further inference or information: from any two premises ‘A’ and ‘B’, ‘A C B’ can be inferred; and from any premise ‘A C B’, each of ‘A’ and ‘B’ can be inferred. Again, a relatively observational concept such as ‘round’ can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept which are not based on perception to those that are. A statement which individuates a concept by saying what is required for the thinker to possess it can be described as giving the ‘possession conditions’ for the concept.
 A possession condition for a particular concept may actually make use of that concept; the possession condition for ‘and’ does not. We can also expect to use relatively observational concepts in specifying the kinds of experience which have to be mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question, as such, within the content of the attitudes attributed to the thinker in the possession condition. Otherwise we would be presupposing possession of the concept in an account which was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession condition can also respect an insight of the Austrian philosopher Ludwig Wittgenstein (1889-1951): that a thinker’s mastery of a concept is inextricably tied to how he finds it natural to go on in new cases in applying the concept.
 Sometimes a family of concepts has this property: It is not possible to master any one of the members of the family without mastering the others. Two families which plausibly have this status are these: the family consisting of the concepts ‘0’, ‘1’, ‘2’, . . . of the natural numbers and the corresponding concepts of the numerical quantifiers ‘there are 0 so-and-so’s’, ‘there is 1 so-and-so’, . . .; and the family consisting of the concepts belief and desire. Such families have come to be known as ‘local holisms’. A local holism does not prevent the individuation of a concept by its possession condition. Rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: Belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such condition involving the thinker, C1 and C2. For these and other possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated. The possession conditions for concepts higher in the ranking must presuppose only the possession of concepts at the same or lower levels in the ranking.
 A possession condition may in various ways make a thinker’s possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker’s perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject’s environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. Burge (1979) has also argued from intuitions about particular examples that, even though a thinker’s non-environmental properties and relations remain constant, the conceptual content of his mental states can vary if the thinker’s social environment is varied. A possession condition which properly individuates such a concept must take into account the thinker’s social relations, in particular his linguistic relations.
 Concepts have a normative dimension, a fact strongly emphasized by the American logician and philosopher Saul Aaron Kripke (1940- ). For any judgement whose content involves a given concept, there is a ‘correctness condition’ for that judgement, a condition which is dependent in part upon the identity of the concept. The normative character of concepts also extends into the territory of a thinker’s reasons for making judgements. A thinker’s visual perception can give him good reason for judging ‘That man is bald’; it does not by itself give him good reason for judging ‘Rostropovich is bald’, even if the man he sees is Rostropovich. All these normative connections must be explained by a theory of concepts. One approach to these matters is to look to the possession condition for a concept, and consider how the referent of the concept is fixed from it, together with the world. One proposal is that the referent of the concept is that object (or property, or function, . . .) which makes the practices of judgement and inference mentioned in the possession condition always lead to true judgements and truth-preserving inferences. This proposal would explain why certain reasons are necessarily good reasons for judging given contents. Provided the possession condition permits us to say what it is about a thinker’s previous judgements that makes it the case that he is employing one concept rather than another, this proposal would also have another virtue. It would allow us to say how the correctness condition is determined for a judgement in which the concept is applied to a newly encountered object. The judgement is correct if the new object has the property which in fact makes the judgemental practice mentioned in the possession condition yield true judgements, or truth-preserving inferences.
 An ostensive definition proceeds by ostension, or in other words by simply showing what is intended, as one might ostensively define a shade such as blue, or the taste of a pineapple, by actually exhibiting an example. It relies on the hearer’s uptake in understanding which feature is intended, and how broadly the example may be taken. A direct ostension is a showing of the object or feature intended, while in deferred ostension one shows one thing in order to direct attention to another, e.g., when showing a photograph to indicate a person, or a thermometer to indicate the temperature.
 An ostensive definition is an explanation of the meaning of a word typically involving three elements: (1) an ostensive gesture, (2) an object pointed at which functions as a sample, and (3) the utterance ‘This is (a) W’. Like other forms of explanation of word-meaning, an ostensive definition functions as a rule or standard of correctness for the application of a word. The utterance ‘This is W’, when employed in giving an ostensive definition, does not describe an object (i.e., the thing pointed at) as having the property ‘W’, but defines a word. It is most illuminatingly viewed as providing a kind of substitution-rule in accord with which one symbol, e.g., ‘red’, is replaced by a complex symbol consisting of utterance (‘This’ or ‘This colour’), gesture, and sample. Hence instead of ‘The curtains are red’ one can say ‘The curtains are this ↗ colour’, pointing at the sample; whatever is this ↗ colour is correctly characterized as being ‘W’.
 Like all definitions, ostensive definitions can be misinterpreted. One way of warding off misunderstanding is to specify the ‘grammatical signpost’ at which the definiendum is stationed, i.e., to give the logico-grammatical category to which it belongs, viz. ‘This C is W’, where ‘C’ is a place-holder for, e.g., ‘colour’, ‘length’, ‘shape’, ‘weight’. Like all rules, an ostensive definition does not provide its own method of application. Understanding an ostensive definition involves grasping the ‘method of projection’ from the sample to what it represents, or from the ostensive gesture accompanying the definition to the application of the word. Thus, in the case of defining a length by reference to a measuring rod, one must grasp the method of laying the measuring rod alongside objects to determine their length before one can be said to grasp the use of the definiendum. Ostensive definitions fulfil a crucial role both in explaining word meaning and in justifying or criticizing the application of that word (e.g., ‘Those curtains are not ultramarine; this ↗ colour is ultramarine [pointing at a colour chart] and the curtains are not this colour’). An ostensive definition does not give evidential grounds for the application of a word ‘W’, but rather specifies what counts as being ‘W’.
 The boundaries of the notion of ostensive definition are vague. A definition of a smell, taste or sound by reference to a sample typically involves no deictic gesture but a presentation of a sample (by striking a keyboard, for example). Conversely, defining directions (for example, ‘North’) by a deictic gesture involves no sample. Nor is the form of words ‘This is (a) W’ essential: ‘This is called W’ or ‘W is this C’ can fulfil the same role.
 Whether something functions as a sample (or paradigm) for the correct application of a word is not a matter of its essential nature, but of human choice and convention. Being a sample is a role conferred upon an object momentarily, temporarily or relatively permanently by us; it is a use to which we put the object. Thus, we can use the curtains here and now to explain what ‘ultramarine’ means, but perhaps never again, although we may often characterize (describe) them as being ultramarine. Or we can use a standard colour chart to explain what ‘ultramarine’ means, although if it is left in the sun and fades, it will no longer be so used. Or we may establish relatively permanent canonical samples, as was the case with the Standard Metre bar. A sample represents that of which it is a sample, and hence must be typical of its kind. It can characteristically be copied or reproduced, and has associated with it a method of comparison. It is noteworthy that one and the same object may function now as a sample in an explanation of meaning or evaluation of correct application, and now as an item described as having the defined property. But these roles are exclusive inasmuch as what functions as a norm for description cannot simultaneously be described as falling under the norm. Qua sample, the object belongs to the means of representation, and is properly conceived as belonging to grammar in an extended sense of the term. Therefore, the Standard Metre bar cannot be said to be (or not to be) one metre long. Furthermore, one and the same object may serve as a sample for more than one expression. Thus, a black patch on a colour chart may serve both to explain what ‘black’ means and as part of an explanation of what ‘darker than’ means.
 Although the expression ‘ostensive definition’ is modern philosophical jargon (W. E. Johnson, Logic, 1921), the idea of ostensive definition is venerable. It is a fundamental constituent of what Wittgenstein called ‘Augustine’s picture of language’, in which it is conceived as the fundamental mechanism whereby language is ‘connected with reality’. The mainstream philosophical tradition has represented language as having a hierarchical structure, its expressions being either ‘definables’ or ‘indefinables’, the former constituting a network of lexically definable terms, the latter of simple, unanalysable expressions that link language with reality and that inject ‘content’ into the network. Ostensive definitions thus constitute the ‘foundations’ of language and the terminal points of philosophical analysis, correlating primitive terms with entities which are their meanings. On this conception, ostensive definition is privileged: It is final and unambiguous, settling all aspects of word use, the grammar of the definiendum being conceived to flow from the nature of the entity with which the indefinable expression is associated. In classical empiricism, definables stand for complex ideas, indefinables for simple ideas that are ‘given’; the linking mechanism is private ‘mental’ ostensive definition, and the basic samples, stored in the mind, are ideas which are essentially mental in nature, epistemically private and unshareable.
 Wittgenstein, who wrote more extensively on ostensive definition than any other philosopher, held this picture of language to be profoundly misleading. Far from samples being ‘entities in reality’ to which indefinables are linked by ostensive definition, they themselves belong to the means of representation. In that sense, there is no ‘link between language and reality’, for explanations of meaning, including ostensive definitions, are not privileged but are as open to misinterpretation as any other form of explanation. The objects pointed at are not ‘simples’ which constitute the ultimate metaphysical constituents of reality, but samples with a distinctive use in our language-games. They are not the meanings of words, but instruments of our means of representation. The grammar of a word ostensively defined does not flow from the essential nature of the object pointed at, but is constituted by all the rules for the use of the word, of which the ostensive definition is but one. It is a confusion to suppose that expressions must be explained exclusively either by analytic definition (definables) or by ostension (indefinables), for many expressions can be explained in both ways, and there are many other licit forms of explanation of meaning. The idea of ‘private’ or ‘mental’ ostensive definition is wholly misconceived, for there can be no such thing as a rule for the use of a word which cannot logically be understood or followed by more than one person; there can be no such thing as a logically private sample, nor any such thing as a mental sample.
 Apart from these negative lessons, a correct conception of ostensive definition by reference to samples resolves the venerable puzzles of the alleged synthetic a priori truths of colour exclusion (e.g., that nothing can be simultaneously red and green all over) and of such apparently metaphysical propositions as ‘black is darker than white’. Such ‘necessary truths’ are indeed not derivable from explicit definitions and the laws of logic alone (i.e., are not analytic), but nor are they descriptions of the essential natures of objects in reality. They are rules for the use of colour words, exhibited in our practices of explaining and applying words defined by reference to samples. What we employ as a sample of red we do not also employ as a sample of green; and a sample of black can, in conjunction with a sample of white, also be used to explain what ‘darker than’ means. What appear to be metaphysical propositions about essential natures are but the shadows cast by grammar.
 A definite description is a description of a (putative) object as the single, unique bearer of a property: ‘the smallest positive number’, ‘the first dog born at sea’, ‘the richest person in the world’. In the theory of definite descriptions, unveiled in the paper ‘On Denoting’ (Mind, 1905), Russell analysed sentences of the form ‘the F is G’ as asserting that there is an ‘F’, that there are no two distinct ‘F’s, and that if anything is ‘F’ then it is ‘G’. A legitimate definition of something as the ‘F’ will therefore depend on there being one and not more than one ‘F’. To say that the ‘F’ does not exist is not to say, paradoxically, of something that exists that it does not, but to say that either nothing is ‘F’, or more than one thing is. Russell found the theory of enormous importance, since it shows how we can understand propositions involving the use of empty terms (terms that do not refer to anything or describe anything) without supposing that there is a mysterious or surrogate object that they have as their reference. So, for example, it becomes no argument for the existence of God that we understand claims in which the term occurs. Analysing the term as a description, we may interpret the claim that God exists as something like ‘there is a unique omnipotent, personal creator of the universe’, and this is intelligible whether or not it is true.
 Formally the theory of descriptions can be couched in the two definitions:
  The F is G = (∃x)(Fx & (∀y)(Fy ➞ y = x) & Gx)
  The F exists = (∃x)(Fx & (∀y)(Fy ➞ y = x))
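As an illustrative sketch (not from the source), Russell's two truth conditions can be checked mechanically over a small finite domain; the domain and the predicates used here are invented purely for the example.

```python
# Russell's analysis of "The F is G": there is an F, there are no two
# distinct F's, and that F is G. Over a finite domain this is decidable.

def the_F_exists(domain, F):
    """True iff exactly one member of the domain satisfies F."""
    return len([x for x in domain if F(x)]) == 1

def the_F_is_G(domain, F, G):
    """True iff exactly one member of the domain satisfies F, and it is G."""
    Fs = [x for x in domain if F(x)]
    return len(Fs) == 1 and G(Fs[0])

domain = [1, 2, 3, 4]
is_even = lambda x: x % 2 == 0            # two evens, so uniqueness fails
is_largest = lambda x: x == max(domain)   # exactly one largest element

print(the_F_exists(domain, is_even))            # False: no unique even number
print(the_F_is_G(domain, is_largest, is_even))  # True: the largest, 4, is even
```

Note how an empty term raises no puzzle on this treatment: when nothing (or more than one thing) in the domain is F, ‘the F is G’ simply comes out false rather than requiring a surrogate referent.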
In the most fundamental scientific sense, to define is to delimit. Thus, definitions serve to fix the boundaries of phenomena or the range of applicability of terms or concepts. That whose range is to be delimited is called the ‘definiendum’, and that which delimits it the ‘definiens’. In practice the hard sciences tend to be more concerned with delimiting phenomena, and definitions are frequently informal, given on the fly, as in ‘Therefore, a layer of high rock strength, called the “lithosphere”, exists near the surface of planets’. Social science practice tends to focus on specifying the application of concepts through formal operational definitions. Philosophical discussions have concentrated almost exclusively on articulating definitional forms for terms.
 Definitions are full if the definiens completely delimits the definiendum, and partial if it only brackets or circumscribes it. Explicit definitions are full definitions in which the definiendum and the definiens are asserted to be equivalent. Examples are coined terms and stipulative definitions such as ‘For the purposes of this study the lithosphere will be taken as the upper 100 km of hard rock in the Earth’s crust’. Theories or models which are so rich in structure that sub-portions are functionally equivalent to explicit definitions are said to provide implicit definitions. In formal contexts our basic understanding of full definitions, including the relations between explicit and implicit definitions, is provided by the Beth definability theorem. Partial definitions are illustrated by reduction sentences such as:
 When in circumstances ‘C’, definiendum ‘D’ applies if situation ‘S’ obtains, which says nothing about the applicability of ‘D’ outside ‘C’.
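A reduction sentence of this form can be pictured as a partial function. The following minimal sketch (not from the source; the function name and encoding are invented for illustration) makes the partiality explicit by returning `None` for every case the sentence fails to settle.

```python
# A unilateral reduction sentence as a partial definition: in circumstances
# C, D applies if S obtains; the sentence is silent about all other cases.

def reduce_D(in_C, S_obtains):
    """Return True where the reduction sentence settles that D applies,
    and None where it leaves the applicability of D undetermined."""
    if in_C and S_obtains:
        return True
    return None  # undetermined: the partial definition does not decide

print(reduce_D(True, True))    # True: C holds and S obtains, so D applies
print(reduce_D(True, False))   # None: inside C but S fails; left open
print(reduce_D(False, True))   # None: outside C the sentence says nothing
```

Further reduction sentences for other circumstances could then be added, each narrowing the undetermined region, which mirrors the strategy for evolving species discussed below.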
It is commonly supposed that definitions are analytic specifications of meaning. In some cases, such as stipulative definitions, this may be so, but some philosophers have denied it. E.g., the German logical positivist Rudolf Carnap (1891-1970) combined a basic empiricism with the logical tools provided by Frege and Russell, and it is in his works that the main achievements (and difficulties) of logical positivism are best exhibited. His first major work was Der logische Aufbau der Welt (1928, trs. as The Logical Structure of the World, 1967). This takes a solipsistic basis for the construction of the external world, although Carnap later resisted the apparent metaphysical priority here given to experience. Carnap pursued the enterprise of clarifying the structures of mathematics and scientific language (the only legitimate task for scientific philosophy) in Logische Syntax der Sprache (1934, trs. as The Logical Syntax of Language, 1937). Refinements to his syntactic and semantic views continued with Meaning and Necessity (1947), while a general loosening of the original ideal of reduction culminated in the great Logical Foundations of Probability, the most important single work of confirmation theory, in 1950. Other works concern the structure of physics and the concept of entropy.
 Reduction sentences are often descriptions of measurement apparatus, specifying empirical correlations involving detector output readings rather than specifications of meaning. The larger point here is that specification of meaning is only one of many possible means for delimiting the definiendum. Specification of meaning seems tangential to the bulk of scientific definitional practice.
 Definitions are said to be creative if their addition to a theory expands its content, and non-creative if they do not. More generally, we can say that definitions are creative whenever the definiens asserts contingent relations involving the definiendum. Thus, definitions providing analytic specifications of meaning are non-creative. Most explicit definitions are non-creative, and hence eliminable from theories without loss of empirical content. One could relativize the distinction so that definitions redundant given accepted theory or background belief in the scientific context are counted as non-creative. Either way, most other scientific definitions are creative, expressing empirical correlations. Thus, for purposes of philosophical analysis, suppositions that definitions are either non-creative or meaning specifications demand explicit justification. Much of the literature concerning incommensurability and meaning change in science turns on uncritical acceptance of such suppositions.
 Many philosophers have been concerned with admissible definitional forms. Some require real definitions, a form of explicit definition in which the definiens equates the definiendum with an essence specified as a conjunction A1 ∧ . . . ∧ An of attributes. (By contrast, nominal definitions use non-essential attributes.) The Aristotelian definitional form further requires that real definitions be hierarchical, where the species of a genus share A1 . . . An-1, being differentiated only by the remaining essential attribute An. Such definitional forms are inadequate for evolving biological species whose essences may vary. Disjunctive polytypic definitions allow changing essences by equating the definiendum with a finite number of conjunctive essences. But future evolution may produce further new essences, so partially specified, potentially infinite disjunctive polytypic definitions were proposed. Such ‘explicit definitions’ fail to delimit the species, since they are incomplete. A superior alternative is to formulate reduction sentences for each essence encountered, which partially define the species but allow the addition of new reduction sentences for subsequently evolved essences.
 Ludwig Wittgenstein (1953) claimed that many natural kinds lack conjunctive essences; rather, their members stand only in a family resemblance to each other. Philosophers of science have developed the idea in two ways. Achinstein (1968) resorted to cluster analyses, arguing that most scientific definitions (e.g., of gold) specify non-essential attributes of which a ‘goodly number’ must be present for the definiendum to apply. Suppe (1989) argued that natural kinds are constituted by a single kind-making attribute (e.g., being gold), and that which patterns of correlation obtain between the kind-making attribute and other diagnostic characteristics is a factual matter. Thus, issues of appropriate definitional form (e.g., explicit, polytypic, or cluster) are empirical, not philosophical, questions.
 Definitions of concepts are closely related to explications, in which imprecise concepts (explicanda) are replaced by more precise ones (explicata). The explicandum and explicatum are never equivalent. In an adequate explication the explicatum will accommodate all clear-cut instances of the explicandum and exclude all clear-cut non-instances; the explicatum decides what to do with cases where application of the explicandum is problematic. Explications are neither real nor nominal definitions, and are generally creative. In many scientific cases, definitions function more as explications than as meaning specifications or real definitions.
 Imagination most directly is the faculty of reviving, or especially creating, images in the mind’s eye. But more generally, it is the ability to create and rehearse possible situations, to combine knowledge in unusual ways, or to invent thought experiments. The English poet Samuel Taylor Coleridge (1772-1834) was the first aesthetic theorist to distinguish the disciplined, creative use of the imagination from the idle play of fancy. Imagination is involved in any flexible rehearsal of different approaches to a problem, and is wrongly thought of as opposed to reasoning. It also bears an interesting relation to the process of deciding whether a projected scenario is genuinely possible. We seem able to imagine ourselves having been someone other than we are, or being elsewhere than we are, and unable to imagine space being spherical; yet further reflection may lead us to think that the first supposition is impossible and the second entirely possible.
 It is probably true that philosophers have shown much less interest in the subject of the imagination during the last fifteen years or so than in the period just before that. It is certainly true that more books about the imagination have been written by those concerned with literature and the arts than by philosophers in general, and by those concerned with the philosophy of mind in particular. This is understandable in that the imagination and imaginativeness figure prominently in artistic processes, especially in romantic art. Indeed, those two high priests of romanticism, Wordsworth and Coleridge, made large claims for the role played by the imagination in views of reality, although Coleridge’s thinking on this was influenced by his reading of the German philosophy of the late eighteenth and early nineteenth centuries, particularly Kant and Schelling. Coleridge distinguished between primary and secondary imagination, both of them in some sense productive, as opposed to merely reproductive. Primary imagination is involved in all perception of the world, in accordance with a theory Coleridge derived from Kant, while secondary imagination, the poetic imagination, is creative, working with the materials that perception provides. It is this poetic imagination which exemplifies imaginativeness in the most obvious way.
 Being imaginative is a function of thought, but to use one’s imagination in this way is not just a matter of thinking in novel ways. Someone who, like Einstein for example, presents a new way of thinking about the world need not by reason of this be supremely imaginative (though of course he may be). The use of new concepts, or a new way of using already existing concepts, is not in itself an exemplification of the imagination. What seems crucial to the imagination is that it involves new perspectives, new ways of seeing things, in a sense of ‘seeing’ that need not be literal. It thus involves, whether directly or indirectly, some connection with perception, but in different ways, some of which will become evident later. The aim of the subsequent discussion here will indeed be to make clear the similarities and differences between seeing proper and seeing with the mind’s eye, as it is sometimes put. This will involve some consideration of the nature and role of images.
