an epistemically responsible, spare ontology

21 April 2006

rethinking the scope of the project + the notes from 18 April 2006

  Last Tuesday's (18 April 2006) meeting was productive, if a bit confrontational. I have some suggestions that were scribbled down that I'll try to transcribe in this entry after I first spend some time on structural and organizational issues.
  The first thing I realized was how much work it was to communicate the first, fundamental idea I had about how to use (an adaptation of) Fine's observation to clarify what one who endorses conventionalism is committed to. The idea is that Fine outlines two ways in which sentences in which a modal operator falls in the scope of a quantifier might be understood to be intelligible. The first way to understand such a sentence as intelligible is to find that it has at least one proper instance, that is, minimally, a substitution instance in which the substituend is purely referential (if quantification is taken to be referential). I believe that this proposal for intelligibility is important, and that we can start with its basic idea and adapt it so that we are left with a requirement which is a reasonable one for a reasonable version of conventionalism (which I'll try to flesh out in chapter 1). Briefly, the requirement is that for a conventionalist thesis to be workable, we must have semantic uniformity between a quantified sentence and a substitution instance. Why would we want this? Because the subject of this entry is not a reworking of old material, I'll advert to the last paragraph of this entry. Admittedly, I need to work on this a bit more, but not here.
  Fine's second proposal for the intelligibility of quantified sentences with a modal operator in their scope is that the substitution instances of such sentences contain, in the positions substituted into, names which are "special" in that they are associated with conceptual content such that the substitution instances can be seen to be true in virtue of meanings alone or in virtue of conceptual relationships alone. The original quantified sentence can be understood to be intelligible because (1) a specific instance can be immediately recognized as true and (2) quantification is understood to be autonomous -- we're not quantifying over objects as referents of the variables; rather the string with metalinguistic variables '[(∃x)φ(x)]' is understood as a sort of generalization of the string '[φ(t)]' (true in virtue of conceptual content); the generalization happens because we can substitute the string 'x' in the quantified sentence for the string 't' of the substitution instance. I want to argue that this proposal for a way to understand quantified sentences is a requirement for the conventionalist thesis I'll defend. I think this argument is a bit easier, but still sort of tricky. Briefly, a conventionalist wants to analyze 'it is necessary that S' as 'it is analytic that S'. If we're to do this for a sentence like '(∃x)F(x)', then we want to say that there is something that is F, and that some singular term for this individual, say 'n', is such that 'F(n)' is true, i.e. 'F(n)' is analytic. Naïvely, we think that if a sentence is analytic then we can recognize it as such, and we can do so because we have the right conceptual contents associated with the singular terms and the contexts in which those terms occur. This, in a nutshell, is the reason that we must satisfy the second of Fine's two proposals for understanding quantified sentences which have a modal operator in the scope of the quantifier.
  There are two immediate problems with this suggestion. (1) Not just any old name will do -- we need a special class of names. It's possible that both 'a' and 'b' are names for the individual α and that both 'Fa' and 'Fb' are true, but that 'a' has conceptual content associated with it such that a cognizer can understand, just by possessing the name 'a', that 'Fa' is true as a matter of meaning alone. That is, the cognizer would see that it's a matter of meaning alone that the denotatum of 'a' is in the extension of the concept expressed by 'is F'. So a story needs to be told about what sort of names are appropriate and whether, in this example, 'Fb' is analytic or necessary after all (this actually seems to be a pretty important question -- and we might note Carnap's method of extension and intension to see how he deals with it). (2) The relationship of meanings to concepts, i.e., do predicates express concepts, and if not, what exactly is their relationship? It seems that we do want to require that meanings are, in principle, always knowable, since the community of cognizers is responsible for the association between sign and signified...
  Well... there's a lot there. In light of that, I've decided that the whole project will be plenty big and comprehensive if I go with maybe five chapters instead of the eight that I had planned. The fifth might be something like a bit about how this might fit in with Kaplan's "Opacity", because after developing some technology, he develops a characterization of '' and so too of necessity, and how the thing stacks up against Chalmers's 2D semantics and phenomenal concepts.
  And next, I'll try to transcribe what in the world Ludwig was harping on about.

19 April 2006

chapter 4: ensuring that the right conceptual content is associated with the predicate terms of quantified modal sentences

  The issues of semantic uniformity and right conceptual content canvassed by Fine concern only individual variables and the actual singular referring terms which are substituted for them. But if we're aiming to make sense of a sentence in which a modal operator is in the scope of a quantifier by understanding its substitution instances (in general, and the singular terms that occur in them in particular) as having the right sort of conceptual content, and then by seeing that this conceptual content allows us to make sense of the instances and so allows us to understand the quantified sentences themselves, it seems we must also spend some time addressing the conceptual content associated with the 'contexts' of these sentences. For example, if we want to take up Fine's second proposal for understanding a quantified sentence like '(∃x)φ(x)', we'd understand it by understanding an instance 'φ(t)', and we'd do this by recognizing the conceptual content had by the special name 't'. To make things even more explicit, we could make sense of a sentence like '(∃x)(x > 7)' by seeing that there is an instance, say, '(9 > 7)', which is true in virtue of the conceptual content associated with '9'. And that's as far as Fine's presentation of the proposal carries us.
  Of course, if we want to claim that '9 > 7' is true in virtue of conceptual connections alone (and so are '(9 > 7)' and '(∃x)(x > 7)'), it seems that at a minimum we need to claim there are conceptual contents had by both the singular referring term '9' and the rest of the sentence -- what Fine calls the context -- '( ... > 7)'. In this specific case, the conceptual contents of the context might be spelled out in terms of the constituents '>' and '7' and their mode of combination (we'll return to this example over and over again later), but in order to consider the issue in more generality, let's consider a case in which the context is a monadic predicate such as 'is F'. In general, for a substitution instance sentence like 'φ(t)', we'll be concerned with the conceptual contents had by both the singular referring term and the context 'φ(_)', which in our most general case is just a predicate like 'is F'. Briefly, we can assess whether the sentence 't is F' is true or false by determining whether the conceptual content had by the term 't' "matches" the conceptual content had by 'is F'. How to understand "matches"? An obvious response is to claim that the predicate term 'is F' expresses a concept, F-ness, say, and that the conceptual content had by 't' is such that it allows us to determine, on the basis of conceptual knowledge alone, whether that which is named 't' is such that the concept F-ness rightly applies to it, or, in linguistic terms, whether that which is named 't' falls under the concept expressed by the predicate 'is F'.

So we have to say something more about how concept possession works to tell a story about how such things might go. Two options: we don't possess a concept unless we know exactly those things (even counterfactual things) to which the concept applies, or the more reasonable view on which we may have a concept without knowing every dimension, so to speak, of those things to which the concept applies.

This invites another complication for conventionalism and a requirement for the right core content: it might be that we could analyze necessity in terms of analyticity, such that any sentence that held of necessity was true in virtue of meanings alone, and that a speaker might be competent with the predicates and terms, and that a sentence constructed from these predicates and singular terms was true in virtue of meanings alone, but that the speaker didn't know this, not possessing the concepts expressed by the predicates he competently used. In this case it seems that the right conceptual content requirement is given up for a requirement about the meanings of the predicates that express the concepts in question and the meanings of the singular terms involved. Needs more work, but it's a start.

17 April 2006

chapter 3: how to account for Fine's semantic uniformity between a quantified sentence and an instance

  Fine's first proposal for taking as intelligible quantification into opaque contexts is semantic uniformity between a quantified sentence and its instances. I've discussed this before, but just to review and to try to get things ever clearer in my own head, let's say what it comes to again.
  Semantic uniformity is maintained from a quantified sentence (which we can represent as '(∃x)φ(x)', where 'φ' is a metalinguistic variable) to a substitution instance of such a sentence, 'φ(t)', where 't' is the substituend for the variable 'x' in the original, if each constituent of '(∃x)φ(x)' that occurs in the instance 'φ(t)' plays the same semantic role in each. For example, if quantification in the original is to be purely referential, that is, the context 'φ(_)' merely serves to make a claim of some individual or other regardless of how that individual is picked out, then we say that the role of the variable in '(∃x)φ(x)' is only to pick out a value (an object) for the sentence to say something about. Since, under this interpretation of quantification, 'x' serves only to pick out an object about which a claim (determined from the context 'φ(_)') is made, and the value that 'x' picks out does not in any way depend upon any feature of the variable 'x', in order to maintain semantic uniformity in an instance 'φ(t)', it must be that the 't' of the instance is also purely referential. That is, that which is picked out by 't' in no way depends upon any feature of 't', and that which is picked out by 't' is all that matters for the truth of 'φ(t)'.
  This situation seems intuitively right. When we say, "There is something that is F," what we want to claim is that there is some thing or other such that that thing is F; it doesn't seem that we want to claim that there is something such that if we managed to refer to it in some way or another then it will be such that it is F. So, naïvely, we have a way to understand quantification that is purely referential. And Fine's assertion -- that to understand quantified sentences like '(∃x)φ(x)' as intelligible, if quantification is referential, we must have a proper instance (that is, an instance in which semantic uniformity is maintained) -- is intuitively right also. The sentence "There is something that is F" can be understood as making an intelligible claim only if there is some thing which is F independent of how that thing is picked out. We need this much to guarantee that our utterances are really about the world in a meaningful way.

[One question might be whether this is really the right response. If we claim, "There is something that is F," does it really matter whether that thing is F independent of how we refer to it? If one were really concerned over whether there was something that was F and not concerned with other side issues, such as how the language worked or other semantic issues, then it seems that he wouldn't care about the other dependencies, such as whether our reference to the thing affected whether it was F or not.]

  How can or should we develop the requirement of semantic uniformity? One immediate observation comes from Fine's presentation of the situation and the example sentence scheme ('(∃x)φ(x)') he uses during his discussion: if we're concerned with proper instances and uniformity in general, we should be concerned, at least, not only with the semantic role of the variable '(∃x) ... x' in relation to an instance ' ... t ...', but also with the semantic role of the context 'φ(_)' of each. It seems reasonable to hold that if the semantic role of the variable of the quantified sentence (that which ranges over singular-term values) is different from that of the singular term which occupies its place in the substitution instance, then the respective contexts of the quantified sentence and the corresponding instance might play different semantic roles.

[Need an example of how this might happen. Maybe "The Smith Family Leap Frogs." How about 'is rigid' also? Compare 'that name is rigid' and 'that plant is rigid'. It might be that contexts are just ambiguous between different uses. Also, if the semantic role of the context changes, it seems that some tenet of compositionality might be violated -- or perhaps this would generate what is termed a linguistic "monster". Perhaps, if we're dealing with an imperfect natural language, it may be that semantic shifts are possible, unlike the case in which a regimented language is under consideration. It does seem like a failure of intelligibility of a quantified sentence is possible given Fine's extensive examples in "The Problem".]


Of course, it might also be the case that if a failure of semantic uniformity occurred between a quantified sentence and its instances, the context 'φ(_)' would be flexible enough to "withstand" such a shift. All the concerns over the shift in semantic role of a context have implicitly been about shifts in the semantics of simple predicates like 'is F'. Somewhat less pressing, in the same vein, is the problem of whether contexts involving logical connectives are subject to the same sorts of worries. For example, could the semantic role of '... φ(_)∧ψ(_) ...' change from a quantified sentence '(∃x)(φ(x)∧ψ(x))' to its instance 'φ(t)∧ψ(t)' because of something the '∧' did? It certainly doesn't seem so, but we should, in the spirit of investigation, try to determine if such a thing is possible.
  And it seems like the only way we can really engage in a systematic investigation of semantic uniformity is if we have a systematic approach to semantics in the form of a theory of meaning. I don't see too many options out there besides a compositional meaning theory a la Davidson's interpretive truth theory.

[perhaps compositionality is much more complicated than we thought -- perhaps not just something like mode of combination, but mode of combination + semantical "gist" and context + further combination]

It seems that we should address only the very simplest cases first, given that our concern is over semantic uniformity rather than the recursive machinery needed to understand meanings generally. In terms of Ludwig's "What is the role of truth theory in a meaning theory?", we should be concerned (at first) only with 'reference axioms' and 'predicate axioms'. After all, we're concerned with the semantic roles of the variables, the singular terms which take their places, and (in the simplest, beginning case) the predicates that form contexts in which the former occur.

  It looks like there must be some sort of direct reference theory at work for semantic uniformity to get off the ground; otherwise it doesn't seem that both variables and proper names could serve only to pick out a referent. According to a mediated reference theory, it's difficult to see how a name could serve only to pick out a referent rather than providing some mode of presentation or cluster of descriptions by which the referent could be determined. Even if we could hold, in two-step fashion, that a singular referring term picked out a referent by way of an associated description, and the denotatum could then be "fed" directly to a context to form a sentence, it seems that uniformity will not be preserved, given that quantification is referential. Only the values taken on by variables are supposed to matter in the determination of the truth of the sentence '(∃x)φ(x)', but on a mediated reference theory it seems that the truth of 'φ(t)' must depend somehow on the cluster of descriptions associated with 't'. But it seems that we should at least allow that quantified sentences can be understood as intelligible on both a direct and a mediated theory of reference, so some work must be done to see how a mediated theory of reference could satisfy the first desideratum.

  On the predicate front, it's hard to see how predicates could be understood in any way other than set-theoretically, given that that's really the only way to determine whether the object which is the value taken on by the variable satisfies the predicate that creates the context. After all, we'd have to check the extension of the predicate to see if that individual was in it, given that we couldn't make use of any conceptual (intensional?) material to perform this check.

16 April 2006

chapter 2: Fine's assertion that the two desiderata aren't compatible and response

  I've written much on this blog, and I think it's time for a bit of recycling. Here's a working out of Fine's assertion that the two proposals won't quite mesh.
  As far as the second part of the chapter is concerned, our assignment is to show that with at least one semantic theory we can satisfy both desiderata. It's instructive to note that, as I think I'll have demonstrated in chapter one, we can satisfy both proposals if we're considering Carnap's system S2. The present concern is to show that we can use an interpretive truth theory to do the same.
  The basic idea is laid out in Ludwig's "A Conservative Modal Semantics" (perhaps in the service of another goal), but it merits attention here as well. The straightforward case is for numerals and the numbers they name. First off, in the compositional meaning theory Ludwig is working in, a thesis of direct reference is maintained. He holds this because, in the theory, meanings are given by systematically interpreting object language sentences in a metalanguage. This is done with the help of a recursive method provided by the grammar of the object language (recursive interpretation procedures like those for 'and', 'or', 'there exists' and 'for all'), reference axioms (such as: the referent of 'bob' is bob) and predicate axioms ('x je crveno' is true iff x is red, for instance). Essentially, we can say that reference is unmediated (direct) iff, in our particular semantic theory, we can give reference axioms for the object language terms. I guess this means that if there are singular referring terms in the object language and we want to use an interpretive truth theory as a compositional meaning theory, we must hold that reference is direct.
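Just to fix ideas for myself, here's a toy sketch of how reference axioms and predicate axioms compose to evaluate an atomic sentence. All the vocabulary ('bob', 'alice', the object-language predicate) is invented for illustration; this isn't Ludwig's machinery, just the shape of it:

```python
# Reference axioms: map object-language names to referents
# (referents modeled as strings for the sketch).
REF = {"bob": "Bob", "alice": "Alice"}

# Predicate axiom: 'x je crveno' is true iff x is red; "is red" is
# modeled extensionally here.
RED_THINGS = {"Bob"}
PRED = {"je crveno": lambda x: x in RED_THINGS}

def true_atomic(name: str, predicate: str) -> bool:
    """Evaluate an atomic sentence 'name predicate' by composing a
    reference axiom with a predicate axiom."""
    return PRED[predicate](REF[name])

print(true_atomic("bob", "je crveno"))    # True under the toy axioms
print(true_atomic("alice", "je crveno"))  # False under the toy axioms
```

The point of the sketch is only that the name contributes nothing but its referent -- which is exactly the direct-reference feature at issue.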

[It seems that there's no problem so long as the referents of the singular referring terms are abstract, but if we're trying to pick out concrete individuals, what happens then? It seems there are puzzles for this sort (any sort?) of compositional meaning theory when referents are concrete. Perhaps a challenge to this sort of view might come from one who held the view that reference to concreta is always mediated. How might the interpretive truth theory accommodate this? In the end it might not be quite so bad, because once we acknowledge that reference to concreta must be mediated, then we're in the business of dealing with predicates when we pick things out. If we're in the business of dealing with predicates, it seems that (at least on the most promising view of concepts) we can handle any sort of modal claims about those things that we refer to -- or at least the ones that have some sort of bearing on the predicates that are involved in the act of reference. In any case, the meaning theory that we're using can give us answers, presuming all the modal properties had by each thing in the extension of the predicate are part of the meaning of the predicate. Now when is unmediated reference most plausible? It seems it's in the case of demonstratives or indexicals. Ultimately, there may be a need to worry only about the reference axioms needed to refer to physical objects qua physical objects, the modally relevant properties being dictated in the act of reference to a physical object qua kind, as in 'the table is red' making use of the predicate terms 'is a table' and 'is red'. This seems to explain why Ludwig was at such pains to account only for abstracta like numbers, for which direct reference really seems to be the only live option, and physical objects qua physical objects.]

With direct reference on board, we're guaranteed that we can satisfy Fine's first proposal. First off, we can give an account of semantic uniformity in the context of a theory of meaning. So semantic uniformity needn't be quite so mysterious, since we can give an axiomatic treatment of how meanings of sentences are achieved at the root from recursive axioms and reference and predicate base axioms. It seems reasonable to require semantic uniformity with both singular and predicate terms.
  Also, it seems that we can satisfy the proposal that singular terms occurring in the instances of quantified sentences have the right core content on the interpretive truth theory as a compositional meaning theory. Recall that the numerals are singular terms which refer to the numbers directly because of the reference axioms. It also seems that the reference axioms are such that they "induce" certain relations among those things that are referred to. For example, the first few axioms of the theory are 'ref('0') = {x: x ≠ x}', 'ref('1') = successor(0)', 'ref('2') = successor(successor(0))'. We define reference just in terms of the referent of '0' and successor. Suppose we define the relation '>', appropriate in sentences like 'a > b' where 'a' and 'b' are stand-ins for numerals, so that 'a > b' is true iff (1) the number named by 'a' is the successor of the number named by 'b', or (2) there is a numeral 'c' such that 'a > c' and 'c > b'. We see that sentences with numerals and '>' are true just in virtue of meaning. If we assume that to know the meanings of the terms in sentences like 'a > b' is to have the concepts associated with the constituent terms of those sentences, then it seems like these sentences will have the right conceptual content. This content is had by the relations borne by those things referred to in virtue of the reference axioms that serve to pick them out.
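The recursive clause for '>' can be sketched directly. The encoding is mine for illustration: a numeral is modeled by the number of applications of successor to '0', i.e. a non-negative integer:

```python
def greater(a: int, b: int) -> bool:
    """'a > b' is true iff (1) the number named by 'a' is the successor
    of the number named by 'b', or (2) there is a numeral 'c' such that
    'a > c' and 'c > b' -- mirroring the clauses in the entry."""
    if a == b + 1:  # clause (1): a names the successor of b
        return True
    # clause (2): look for an intermediate numeral c
    return any(greater(a, c) and greater(c, b) for c in range(b + 1, a))

print(greater(9, 7))  # True: '9 > 7' comes out true by the two clauses
print(greater(7, 9))  # False
```

Nothing beyond the two clauses is consulted, which is the point: the truth of '9 > 7' falls out of the reference axioms and the definition of '>' alone.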

13 April 2006

chapter 1: conventionalism in the service of analyzing necessity in terms of analyticity

I'm not sure what will survive from outline 2.2.2, but I've decided that I should probably just go ahead and work as if the outline had been accepted. So with that brief introduction, I jump into hypothetical Chapter the First.

Chapter 1. Conventionalism as presented in Carnap's Meaning and Necessity and explicated and defended in Ludwig, Sidelle and Thomasson.

 So what's Carnap's main thesis? I'll give it in the brief and ready and omit much of the argument he gives for why we should prefer his theories to competitors. For a semantical system (essentially a formal language with formal semantics), we can understand each term of the system as having an extension and an intension. Specifically, the extension of the singular referring term 'Walter Scott' is the individual person Walter Scott; the extension of the predicate 'blue' is all and only actual blue things; the extensions over which a variable ranges are the individuals in the domain of discourse. It's a bit unclear what exactly the intensions of such terms are, but it seems that Carnap comes to the answer in roundabout fashion, by talking about what's true as a matter of how the world has turned out to be and what's true independently of how things just happened to be. If the statement s, 'Pa' (where 'P' is a predicate term and 'a' is a singular referring term), is true, it may be so as a result of happenstance; on the other hand, if s1, 'Pa1', is L-true, then the truth of this statement is independent of how the world happens to be. Carnap claims that the truth of s1 is a matter of meaning alone, or has to do only with the semantical relations of the system under consideration. Perhaps in the modern idiom, we could rephrase Carnap's assessment model-theoretically: if s1 is true "come what may" then we could say that it's true in all models -- another way of saying that its truth doesn't depend on the particular admissible assignment of properties to the individuals of the world. So, for instance, two singular referring terms ('a' and 'b') have the same intension iff the statement 'a = b' is L-true. Two predicate terms ('is-human' and 'is-a-rational-animal' is Carnap's well worn example) have the same intension iff the statement '(∀x)(is-human(x) ↔ is-a-rational-animal(x))' is L-true.
If we can make sense of this implicit definition of intensions, then the intensional values over which variables are said to range can be contrasted to the extensions over which variables range.
 There are intensional and extensional contexts. Extensional contexts occur in statements in which only the extensions of those terms which occur are relevant to the truth value of the statement. In extensional contexts, coreferring terms may be substituted salva veritate. On the other hand, in intensional contexts substitution salva veritate is guaranteed only for L-interchangeable terms (that is, terms whose interchange preserves L-truth). (§§ 11-12) On Carnap's view, there are contexts which are neither extensional nor intensional. An example of this is a belief sentence like 'John believes that D'; we're not necessarily guaranteed that we can substitute salva veritate an L-equivalent term 'D1' for 'D' in this sentence. (§ 13)
 A crucial chapter looks to be V (pp. 173-204). The extremely short version is summarized in 39-1: "For any sentence '. . .', 'N(. . .)' is to be true iff '. . .' is L-true." This seems to be the crux of the matter. Because the rules for 'N' are simply semantical rules of the system S2, we see, as an easy consequence, that if 'N(A)' is a true sentence of S2 then so is 'N(N(A))': since 'N(A)' is true, it must be true in virtue of semantical rules alone, and so 'N(N(A))' is true also.
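A toy model of rule 39-1 makes the iteration argument vivid. The state descriptions and atomic sentences below are invented stand-ins, not Carnap's S2; the only rule being modeled is "'N(A)' is true iff 'A' is L-true", where L-truth is truth in every state description:

```python
# Two made-up state descriptions: 'Pa' holds in every state, 'Qa' does not.
STATE_DESCRIPTIONS = [
    {"Pa": True, "Qa": True},
    {"Pa": True, "Qa": False},
]

def true_in(sentence: str, state: dict) -> bool:
    if sentence.startswith("N(") and sentence.endswith(")"):
        # Rule 39-1: 'N(A)' holds (in any state) iff 'A' is L-true.
        return l_true(sentence[2:-1])
    return state[sentence]

def l_true(sentence: str) -> bool:
    """L-truth: truth in every state description."""
    return all(true_in(sentence, s) for s in STATE_DESCRIPTIONS)

print(l_true("N(Pa)"))     # True: 'Pa' holds in all state descriptions
print(l_true("N(N(Pa))"))  # True: the iteration comes for free
print(l_true("N(Qa)"))     # False: 'Qa' fails in one state description
```

Because 'N(A)' gets the same truth value in every state, 'N(A)' is itself L-true whenever it's true, and 'N(N(A))' follows immediately, just as in the chapter V argument.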
 Carnap asserts (180) that since a modal system involves intensional as well as extensional contexts, it might be easier, when dealing with the domains over which variables range, to think of the intensional values of variables. That is, we should think of variables as ranging over the individual concepts that are the intensions of singular terms. But to make progress we must take all individual constants in S2 to be L-determinate -- this means that they can be paired 1-to-1 with individuals in a well-ordered matrix.

 What's happening on pages 180-1? We have a definite description Ui and a state description Rn. (The state description is just a denumerable class of sentences which describe the distribution of properties across individuals.) The question of whether any individual is picked out by Ui is "simply a logical question" given our assignments. "Thus the description Ui assigns to every state description exactly one individual constant; any individual constant may be assigned to several state descriptions." If there is no individual that is so picked out by Ui, then 'a*' will be the individual constant that is picked out. If another definite description, Uj, is L-equivalent to Ui, then Uj will assign the same individual constant under the same state description, and these two descriptions will express the same individual concept, given that they are L-equivalent. So, the crucial conclusion is that "we might say that an individual concept with respect to S2 is an assignment of exactly one individual to every state (which is a proposition expressed by a state description)". Because of the stipulation that individual constants are L-determinate (unambiguous), we can claim that there is a function from world-states to individual constants (I'm not really sure what form these would take; the idea seems to be that we can assign (some disparate) individuals which satisfy a definite description to each state description, so it seems like there will be an individual concept for 'the tallest mountain in the world').
 There is a treatment of variables of higher order, too. Propositions can be reasonably represented by world-states. For example, the proposition expressed by the sentence 'the book is on the table' could be understood to be the set of all world-states in which the book is on the table. Since the attribution of a property to an individual is essentially what makes a proposition, and propositions are represented by world-states, we can think of properties as maps from classes of world-states to individual constants. If we clumsily represent a certain proposition as 'Pc' (something like 'c is P'), then the proposition is just a class of world-states, and this class of world-states is determined by seeing in which world-states c is P. So it makes sense to say that properties can be thought of as the pairing of classes of world-states with individual constants such that the proposition 'Pc' is true. Perhaps we can think of the property P as a map from classes of world-states to individual constants, such that for a class of world-states c in which the individual constant ic has the property P, P(c) = ic. Two-place relations can be represented by assignments of ranges to ordered pairs of individual constants.
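One way to render the first part of this -- the proposition 'Pc' as the class of world-states in which c is P -- is a tiny sketch. The world-states and individual constants here are made up:

```python
# Three made-up world-states.
WORLD_STATES = {"w1", "w2", "w3"}

# The property P, rendered as: for each individual constant, the class of
# world-states in which that individual is P.
P = {"c": {"w1", "w2"}, "d": {"w3"}}

def proposition(prop: dict, const: str) -> set:
    """The proposition expressed by, e.g., 'Pc': the class of world-states
    in which the individual constant has the property."""
    return prop[const]

def holds_at(prop: dict, const: str, world: str) -> bool:
    """'Pc' is true at a world-state iff that state is in the class."""
    return world in prop[const]

print(proposition(P, "c"))        # the class of world-states where 'Pc' holds
print(holds_at(P, "c", "w1"))     # True
```

This is only the set-of-states picture; the pairing of classes of world-states with individual constants that Carnap actually runs with would be the inverse of this map.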

On pages 181-3 it seems that Carnap takes individual concepts to be "paired" with definite descriptions, and so for different world-states the individual concept is associated with possibly different individual constants. It's easy to see how a conventionalist view could get going in this sort of semantic setup, in which the things picked out by rigid designators don't even feature in the sentences of a semantic system.

How plausible are rigid designators for natural kind terms? We might hold that they're not even names at all and so there's no purchase for rigid designation.

Where with Chalmers? Chalmers has appropriated Carnap and tried to adapt Kripke to that...

 [more from Carnap here]

 [Sidelle here]

 [Thomasson here]

 [Ludwig here]

We can't even secure reference without some sort of relations holding between those things which we pick out in the act of reference. No differentiation would be possible if there weren't differential relations borne among those things we want to indicate. So the idea of reference without some sort of differential relational properties is misbegotten.

As far as the first desideratum is concerned -- how do we even determine the truth of a statement like 'φ(s)' without (1) having a good idea of differentiated reference, and (2) using the differential relational properties had by those things we manage to pick out to determine this truth?

<-- part B here -->

[Fine's desiderata here]

May want to add a bit about why, on the sort of semantical system Carnap has in mind, both desiderata are satisfied. Semantic uniformity is satisfied because the systems S, S1 and S2 have formal syntax and semantics -- they're syntactically and semantically perspicuous in Fine's terms. And the right core content is satisfied because semantic values are assigned to designators in S, S1 and S2 by semantical rules which, it seems reasonable to suppose, we have epistemic access to. In other words, it seems that both of Fine's desiderata are satisfied by the system Carnap has set up for us. So, I guess part of the project can be seen as the effort to make natural language seem more like the semantical systems of Carnap.
 Given Carnap's presentation, why is Fine worried that semantic uniformity for referential quantification and the right conceptual content are at odds? It seems to rest on the Quinean idea that it's incoherent for variables to range over intensions.

  Here is chapter one as of 27 April 2006.

11 April 2006

dissertation proposal outline 2.2

Here's the next proposed outline. Let's see how much survives the 18 April 2006 meeting...
Here's the outline expanded a bit and chopped into chapters.

05 April 2006

outline 2.1

  1. One consequence of conventionalism (about modality) is the view that necessity can be understood or analyzed in terms of analyticity. The view is advocated by Carnap and the empiricists, and more recently by Alan Sidelle, Amie Thomasson and Kirk Ludwig. Challenges to the view come from the Kripke-Putnam assertions of necessary a posteriori and contingent a priori truths. I'd like to get clearer on how a species of the conventionalist view might be spelled out, and what kind of commitments one would have to make in order to hold a conventionalist view. So the specific question is how 'it is necessary that S' can be analyzed in terms of 'it is analytic that S'. Kit Fine gives us a good place to start in his "The Problem of De Re Modality" with his assertion that there are two separate proposals for making sense of quantified sentences in which quantification is "into" a modal context. First is the semantic uniformity thesis -- a quantified sentence can be understood (is coherent) if it has a proper instance, that is, an instance in which the singular referring substituend plays the same semantic role as was to be played by the referential variable (if quantification was to be purely referential). Second is the "right core content" or "special names" requirement -- a quantified sentence can be understood (is coherent) if a substituend is named in a special way such that the name has the right conceptual content so that the resultant sentence is comprehensible. Gory details aside, it seems that these two proposals for when such quantified sentences make sense serve as two desiderata.
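Schematically, the analysis and the two proposals might be put like this (my own rough formalization, not Fine's notation):

```latex
% Conventionalist analysis of necessity:
%   "it is necessary that S" is analyzed as "it is analytic that S"
\Box S \;\leadsto\; \mathrm{Analytic}(S)

% Proposal 1 (semantic uniformity): a quantified modal sentence is
% intelligible if it has a proper instance, i.e. one whose substituend
% is purely referential.
(\exists x)\,\Box\varphi(x) \text{ is intelligible if, for some purely referential } t,\ \Box\varphi(t) \text{ is a proper instance}

% Proposal 2 (special names): intelligible if some name with the right
% core content yields an instance true by conceptual content alone.
(\exists x)\,\Box\varphi(x) \text{ is intelligible if, for some special name } n,\ \Box\varphi(n) \text{ is true in virtue of meaning}
```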

  2. Explain how this is so: we need the first to make sure that our statements are about something, and the second to make sure we have the right sort of conceptual link to that which we speak about. But more importantly, it seems that straightening out the semantic uniformity thesis might go some way toward saying what problems we might face in the attempt to defend conventionalism. In other words, we need a semantical system in which to establish uniformity. First we need to consider how semantic uniformity might go. We should settle on a compositional theory of meaning (like a worked-out version of Davidson's).

  3. Then we must consider the conceptual side of things. How is it we know something modally relevant to a claim if we know certain special names of things? There are two issues here, one concerning predicates, the other concerning singular terms. It seems that the predicate issue deals in a question of concepts. On a certain theory, we can say that to possess a concept is to know every counterfactual situation in which it's appropriate to deploy that concept. And so knowing the modally relevant features of predicates (which express these concepts) comes along for free on this view. This sort of view allows one to say when a sentence of the form 'φ(t)' is true, given that one has the correct conceptual content associated with the singular term 't'. Of course, that leads us to the question of having the right sort of conceptual content associated with singular terms. And here we have some options: Gareth Evans's descriptive names, Kaplan's standard names and Ludwig's description names. There are problems with each of these, of course, and on each we don't name as many things as we'd like. But on each sort of view, we see why a conventionalist thesis has trouble, and it turns out that these trouble spots are exactly the places where our intuitions about modal claims in general seem to break down, so there's some evidence for the conventionalist thesis.

  4. If one agrees to Fine's semantic uniformity requirement for quantified sentences, then it seems that other semantic systems (e.g., 2D semantics) may needlessly conflate conceptual and metaphysical requirements. An interpretive truth theory can make sense of semantics without positing anything like truth makers. Once we've separated the analytic (or meaning) requirements for making sense of a quantified sentence from the conceptual requirements, we can see how a quantified sentence might be intelligible without being known to be so (or at least without being known to be true). On the 2D picture, we become confused because we think that the unintelligibility (or untruth) of a certain sentence has some bearing on a certain metaphysical status (what some possible world is actually like). This seems to be a confusion that we can make sense of with the Fine / Ludwig view of semantic uniformity and conventionalism.
Maybe we could approach this in another way.

  1. Claim that we must be able to make sense of quantified sentences which contain modal contexts if we're to be able to start forming a conventionalist thesis. Of course, we need to understand quantification into opaque contexts for other reasons, but we need to be able to make sense of '(∃x)φ(x)' to even begin an explanation of modality in conventionalist terms.

  2. Claim that both of Fine's desiderata must be satisfied if we're to get the conventionalist thesis off the ground. The first was that in order for us to claim that a quantified sentence is intelligible, there must be a proper instance of such a sentence. If quantification is referential, then a proper instance must be such that the singular referring term which is the substituend is purely referential. The second was that there be a special class of names such that one can tell by meanings (or play of concepts) alone that an instance of the quantified sentence is true.

  3. Argument for the first desideratum: there must be semantic uniformity between a quantified sentence and its instances if we're to make sense of the quantified sentence. Why? Assuming quantification is referential, the variable serves only as a pointer to its values, so for the quantified sentence to be true there must be a value which is in the extension of the predicate that is picked out by the context in the sentence. But it seems like this will be the case only if there's an instance of the sentence in which the singular referring term in the same context is purely referential. We need semantic uniformity to make certain we're actually talking about the things we assume we're speaking about. If one makes the claim '(∃x)φ(x)', we want to be guaranteed that there is in fact something that is referred to by some 't' such that 'φ(t)' is true. Without semantic uniformity, it doesn't seem like we'd be sure that we were expressing what we wanted to express in the quantified sentence.
     There must be a semantic theory in which we make these assessments. We have options. I have in mind either an interpretive truth theory style of meaning theory or some sort of possible world semantics. Perhaps we can say a bit about how to ensure semantic uniformity on each of these views. It seems like this would come down to giving the appropriate reference and predicate clauses on an interpretive truth theory style of meaning theory, and finding the thing actually referred to on a possible world semantics picture.

  4.  Argument for the second desideratum: conceptual content. First off, we need to meet a challenge that is posed in Fine. Fine asserts that on an interpretation of quantified sentences on which their truth is determined by whether there exists a special name that could be used to create a substitution instance whose truth is determined by the conceptual content associated with that name and the remaining context of the instance, it is unlikely that quantification is referential. The reason is that the truth of substitution instances is determined by the conceptual content of the substituend rather than by what the substituend refers to. Recall that semantic uniformity with referential quantification was possible only if the singular terms of the resultant were purely referential. The substituends of the quantified sentence made sense of in terms of special names are guaranteed to have associated conceptual content, and so it seems that they can't be purely referential. If we want to hold on to both desiderata, we need to argue that the two requirements can be reconciled. It looks like one way to do this is in the context of the reference axioms of an interpretive truth theory. Reference is direct in light of the reference axioms, yet the names of the referents bear certain relations to each other based on how the referents are related by these axioms (I'm thinking here of ref('0') = 0 and ref('1') = successor(0), etc.). This could be spelled out in more detail. The important thing to note is that direct reference can be maintained, and there is some conceptual content to be had in virtue of the relations borne to each other by the things which are referred to systematically by the reference axioms.
     Can we make sense of the conceptual "special names" requirement (I'll call it the "conceptual requirement" from now on) in terms of a possible world semantics? I'm not sure -- I'll have to think about this one.
     We can (and need to) say more about the conceptual requirement on the interpretive truth theory choice. I have in mind saying more about the conceptual content associated with both singular referring terms and predicate terms. We've had the example of the numerals as singular referring terms which have associated with them the conceptual content needed to determine the truth of relevant modal claims involving them. Kirk Ludwig proposes "description names" as names which are directly referring with the appropriate conceptual content. There are other theories about how this might go with regard to names: Kaplan, Evans, Føllesdal, Kripke, etc.
     There is also a story to be told about the conceptual content associated with "contexts". Predicate terms are given their own sort of application axioms in the interpretive truth theory. We might also hold the view that to possess a concept is to know all the modally relevant situations in which to deploy that concept. This sort of view would provide the right sort of conceptual contents to be associated with the predicates that create the contexts of the sentences we're interested in.
     Finally, we might offer a refinement: we need only a theory of meaning based on a theory of truth with definitive reference axioms. The conceptual content is really just an add-on; we could do all the work we needed with only singular term axioms and predicate axioms. We could take a different view of concepts and still be left with this picture. What's the possible world semantics analogue?
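The numerals example above can be sketched as a toy model. This is my own illustrative sketch, not Fine's or Ludwig's machinery; the predicate 'even' and all the names (REF, applies_even, etc.) are hypothetical stand-ins:

```python
# Toy fragment of an interpretive truth theory, illustrating the
# reference-axiom idea in the notes: ref('0') = 0, ref('1') = successor(0), etc.
# The predicate 'even' and every name here are illustrative stand-ins.

def successor(n: int) -> int:
    return n + 1

# Reference axioms: each numeral refers directly to a number, but the axioms
# relate the referents systematically via the successor function.
REF = {'0': 0}
for i in range(1, 10):
    REF[str(i)] = successor(REF[str(i - 1)])

# A predicate (application) axiom: 'Even( )' applies to x iff x is divisible by 2.
def applies_even(x: int) -> bool:
    return x % 2 == 0

def true_instance(t: str) -> bool:
    """Truth of the instance 'Even(t)' is settled purely referentially:
    look up the referent of 't' and check the predicate's application condition."""
    return applies_even(REF[t])

def true_quantified() -> bool:
    """'(Ex)Even(x)' is true iff some instance 'Even(t)' is true --
    semantic uniformity between the sentence and its instances."""
    return any(true_instance(t) for t in REF)
```

On this sketch direct reference is maintained (REF maps numerals straight to numbers, and the truth of 'Even(2)' never consults conceptual content), yet conceptual content is recoverable: '2' refers to successor(successor(0)), a relation fixed by the reference axioms themselves.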

  5. Adjudication between my proposal for conventionalism and 2D semantics. It seems that 2D semantics muddies the waters a bit and makes more commitments than we need.

Two related desiderata for a conventionalist thesis

If we want to analyze 'it is necessary that S' somehow using 'it is analytic that S', that is, to defend a version of conventionalism about modal claims, we must first understand quantification into modal contexts as intelligible. The reason is that we wish to have a semantic story, on the conventionalist view, for sentences like '(∃x)φ(x)' where the context 'φ( )' might be "opaque". Kit Fine asserts that there are two proposals for understanding quantification in. The first is that a quantified sentence like '(∃x)φ(x)' can be understood as intelligible if (1) the quantifier is taken to be referential, and (2) there is a proper substitution instance 'φ(t)' of the quantified sentence. A proper substitution instance is one in which semantic uniformity with the quantified sentence is preserved. Since we've assumed that quantification is purely referential, a proper instance must be such that 't' plays the semantic role of pure reference. In other words, 't' can play no other semantic role, else we fail to have a proper substitution instance. On this proposal, the original quantified sentence will be true just in case there is a true proper substitution instance. For the case of a purely referential quantifier, the sentence will be true just in case there is a true substitution instance in which the substituend is purely referential. It might be the case that for different types of quantification (and I'm not sure what those would come to be, but there might be some) the requirements on a proper instance would differ.
  On the other hand, we can make sense of '(∃x)φ(x)' if we hold that there is a privileged class of names all of which have the right (as Fine calls it) "core content" (I take this to mean conceptual content), such that if 'n' is one of this class then the substitution instance 'φ(n)' is intelligible. Moreover, the quantified sentence is true just in case there is a special name 'n' such that 'φ(n)' is true. But the truth of this sentence is determined in a way different from that of 'φ(t)' on the first proposal. The truth of 'φ(n)' is determined by comparison of the relevant conceptual contents of the name 'n' and the context 'φ( )'. I'm guessing that this comes down to "checking" whether that which is named by 'n' is such that by meanings (or conceptual considerations) alone we can tell whether 'n' satisfies 'φ( )'. Fine makes the astute observation that on this "special name" proposal for understanding, it seems unlikely that quantification is purely referential. In purely referential quantification, the truth of 'φ(t)' is determined exclusively by whether that which is named by 't' is in the extension of the predicate created by the context 'φ( )'. There need be no conceptual apparatus deployed at all to determine the intelligibility or truth of the quantified sentence. On the other hand, it doesn't seem that whether the actual referent picked out by the special name 'n' is in the extension of the predicate indicated by the context 'φ( )' has anything to do with the assessment of the intelligibility or truth of 'φ(n)', and so of the quantified sentence '(∃x)φ(x)'. (It does seem like the referent of 'n' will be in this extension, but almost incidentally so.)
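The contrast between how the two proposals settle the truth of an instance might be put this way (my paraphrase, with ref and ext as the obvious reference and extension functions):

```latex
% Proposal 1 (pure reference): truth is a matter of the referent alone.
\varphi(t) \text{ is true} \iff \mathrm{ref}(t) \in \mathrm{ext}(\varphi)

% Proposal 2 (special names): truth is settled by conceptual content,
% with the referent entering only incidentally.
\varphi(n) \text{ is true} \iff \text{the core content of } n \text{ and the content of } \varphi(\,\cdot\,) \text{ settle it by meaning alone}
```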
 I think that to make a conventionalist thesis work, we need both. And to understand what the commitments of conventionalism are, it's helpful to see what sort of position one would take to be "in the intersection". Perhaps working exclusively in the context of an interpretive truth theory would make things easier (or even possible -- I'm not sure how we could do it with a different system of semantics). Since the only kinds of base clauses are for singular terms and predicates, it seems natural to consider what it would take to accommodate both proposals: semantic uniformity and a sort of special name treatment. It seems that dealing with predicates is easier. First off, we're not really quantifying over predicates (unless we've got names for them), so semantic uniformity seems to be guaranteed. In a sentence like '(∃x)φ(x)', we assume that 'φ( )' expresses a predicate which the quantified sentence claims has a nonempty extension; so for a substitution instance like 'φ(t)' to preserve uniformity, the context must also determine a predicate which has the thing named by 't' in its extension. I can't think of any cases in which semantic uniformity wouldn't be preserved from a quantified sentence to its instances. Could, for instance, 'is a schnurg' play a different semantic role in 'There is something such that it is a schnurg' and 'Bob is a schnurg'? It doesn't seem so, but it warrants thinking about.
 In terms of the second desideratum, it seems that our view on concepts provides an answer to whether the predicate expressed by the context 'φ( )' has the right sort of associated conceptual content. For example, we might hold that to possess a concept is just to know the precise application conditions of that concept, in all the gory detail that's required. To have the concept WATER, for instance, one must know that if x is water then x is H2O. This sort of view of concepts guarantees that we have the right conceptual content to know when a sentence like 'φ(s)' is true (provided, of course, that we have the right conceptual content associated with the name 's'). It's interesting to note that we might be able to competently use a predicate that expresses a concept without possessing the concept.
  So we're primarily concerned with singular terms.