an epistemically responsible, spare ontology

27 March 2006

puzzles

In the 21 March 2006 bi-weekly meeting, after I confessed that I felt as if I hadn't really developed a positive view on anything in the vicinity of the problem of de re modality or conventionalism, Ludwig suggested that what was important at this stage was not the beginning of an articulation of a view on the subject but rather the collection of various puzzles relating to the topic. So, with this suggestion in mind, I'm starting a puzzle archive in this entry. I believe there are already some puzzles set down in previous posts, efforts at solving which may ultimately prove fruitful, but this entry is the formal repository for new puzzles.

  1. One puzzle of marginal interest has to do with Carnap's assertion (§38 of Meaning and Necessity) that for the semantical system S2, which includes intensional operators, there can be a metalanguage Me in which terms such as 'Human' are neutral between extension and intension, but which is itself purely extensional. It's easy to see that the object language term 'N' won't occur in Me, only a name for 'N' which is given extensionally. But what about occurrences of terms beginning with 'L-'? It seems these terms will be intensional. Carnap claims that of the four requirements for a complete semantic description of a system S -- (i) formation rules for formulas, (ii) rules of designation for individual constants and predicates, (iii) rules of truth, and (iv) rules of ranges -- (i), (iii) and (iv) could be formulated in the purely extensional Me. (A detailed explanation of the availability of extensional treatment is given on p. 170.) As far as (ii) goes, it seems that simple designation is extensional and could be so included in Me, but L-designation is not. (Why? I assume because L-designation applies to the intensions had by terms of M', which are ex hypothesi not available in Me.) But we're not in jeopardy because, by knowing the semantical rules of S2, we are able to characterize extensionally the L-equivalence of sentences of S2. For instance, the sentences of Me ''H' designates Human' and ''H' designates Featherless Biped' are both true, but so is the statement (of Me) ''H' and 'F • B' are not L-equivalent', since we can show (having all the semantical tools of S2 in Me) that ''H' designates Human' holds according to those rules alone and ''H' designates Featherless Biped' does not (and both facts were determined extensionally).
I guess the idea is that since we can determine extensionally whether a sentence is true in virtue of semantics alone, we can use 'L-equivalent' in Me purely extensionally, and so it looks like we can have a metalanguage for S2 that is extensional. What does this mean for the interpretive truth theory and an analysis of 'it is necessary that ...' in terms of 'it is analytic that ...'?
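
To see how the extensional characterization might go, here's a toy reconstruction (my own sketch, not Carnap's actual formalism; the individuals and predicate letters are invented) in which L-equivalence of two predicates is tested by quantifying over state descriptions:

```python
# Toy model: a state description assigns an extension to each predicate.
# Two predicates are L-equivalent iff they agree in EVERY state
# description -- a test stated entirely in extensional terms.

from itertools import chain, combinations

INDIVIDUALS = ["a", "b", "c"]

def all_extensions(individuals):
    """All possible extensions (subsets) for a one-place predicate."""
    s = list(individuals)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def state_descriptions(predicates, individuals):
    """Every way of assigning an extension to each predicate."""
    if not predicates:
        yield {}
        return
    head, *rest = predicates
    for e in all_extensions(individuals):
        for partial in state_descriptions(rest, individuals):
            yield {head: e, **partial}

def l_equivalent(p, q, predicates, individuals):
    """Extensional test: same extension in every state description."""
    return all(sd[p] == sd[q]
               for sd in state_descriptions(predicates, individuals))

# 'H' (Human) and 'F' (Featherless Biped) are distinct predicates, so
# some state description separates them: not L-equivalent, even if they
# happen to be co-extensional in the actual state description.
print(l_equivalent("H", "F", ["H", "F"], INDIVIDUALS))  # False
print(l_equivalent("H", "H", ["H", "F"], INDIVIDUALS))  # True
```

The point of the sketch is only that 'L-equivalent' can be given a truth condition by quantification over extensional facts, which is what lets Me stay extensional.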

  2. Could Kaplan's proposal for how '□' might be understood in terms of 'logical necessity' have any effect on how a conventionalist thesis might go? That is, we can understand what sentences are in the extension of '□' by assigning a truth value to $entences given certain sequences, using the technological innovations that Kaplan develops ("arc quotation" and "shifty operators"). Once we can make sense of quantifying into opaque (modal) contexts, there are standard model theoretic ways to assess the truth of these sentences, and to assess their truth in every model. There's also an analogous proof theoretic procedure to determine which sentences with a '□' hold in each model. So it seems that we can, with his innovations, assess the truth of sentences of this sort.
     Now, more generally, if we understand '□' as a predicate of sentences, then we have a method for assigning an arbitrary sentence beginning with a single '□' a truth value -- that is, we simply pick out the sentences which we want to be in the extension of '□'. Since '□' forms an opaque context, it doesn't have to be the case that the usual entailment relations hold between sentences "inside" this context. For example, if '□S' is true (i.e. 'S' is in the extension of '□') and 'S' entails 'T', then we're not guaranteed that '□T' is true (i.e. that 'T' is in the extension of '□').
     On the other hand, according to the method of $entences, we treat intensional operators as if they were predicates of $entences and then take the sentence in the scope of the intensional operator as if it were contained in arc quotes. So, I guess both S = '(∃x)φ(x)' and Sf = 'φ(x)' may be such that both '□S' and '(∃x)□Sf' can be understood as intelligible and assigned a truth value. We make sense of the latter by assuming that since '□' is a predicate of $entences (i.e. applies to both sentences and valuated formulas), we can determine whether Sf is in the extension of '□' based on our assignment of a value to 'x'. (Finally! I understand why Kaplan says that phrases like 'φ(x)' in arc-quotes are only syncategorematic! It's still the case that only formulas in which all variables are quantified are sentences; it's just that with the method of $entences, we determine whether '(∃x)□Sf' is true by checking to see if Sf is in the extension of '□' on a particular valuation. We need the prefix '(∃x)' to ensure that the sentence contains no free variables.)
     Now where does this fit with Fine? First off, it seems that Kaplan is considering only situations in which quantification is referential, because he's assuming that there's an assignment of values to variables such that we can determine whether the $entence 'Sf' is in the extension or not. A presupposition here is that the 'x' in this formula is such that it can be assigned a value, and that the sentence 'Sf(t)' which results from substituting the name 't' for 'x' would have the same truth value as the sentence 'Sf(s)' if s and t are co-referential. In other words, on Kaplan's picture, it seems that the only way we can understand the method of $entences is if we assume that formulas that receive a valuation on this method are referential in the sense of referential quantification.
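
To convince myself the method of $entences is at least mechanically coherent, here's a minimal toy model (my own sketch, not Kaplan's apparatus; the extension of '□' is simply stipulated, and the formulas are plain strings):

```python
# Treat '□' as a predicate of "$entences": formulas paired with
# valuations of their free variables. '(∃x)□φ(x)' is then true iff some
# assignment to 'x' puts the valuated formula in the box's extension.

DOMAIN = [7, 8, 9]

# Stipulated extension of '□': pairs of a formula (as a string) and a
# valuation of its free variables, frozen so the pairs are hashable.
BOX_EXTENSION = {("x > 7", frozenset({("x", 9)})),
                 ("x > 7", frozenset({("x", 8)}))}

def box_holds(formula, valuation):
    """Is this valuated formula (a $entence) in the extension of '□'?"""
    return (formula, frozenset(valuation.items())) in BOX_EXTENSION

def exists_box(formula, var, domain):
    """Evaluate '(∃var)□formula' by ranging over the domain."""
    return any(box_holds(formula, {var: d}) for d in domain)

print(exists_box("x > 7", "x", DOMAIN))  # True: e.g. x := 9
```

Note that because the extension of '□' is stipulated directly, nothing here guarantees closure under entailment -- which is just the opacity point made above.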

  3. Ernie LePore presented a view on which quotation is a semantic phenomenon which can be understood with the help of the Strong Disquotational Scheme (SDS): only '''Quine''' quotes 'Quine'; a corollary being that ''Quine'' literally contains 'Quine'. We can explain various data that suggest quotation is context sensitive by noticing that we can quote either expressions or signs (which are used, in the context of some language or other, to articulate expressions). I'm curious where the conceptual separation of expression and sign (which seems to be a good one) leaves us if we adopt Fine's suggestion of a Universal Abstract Syntax (UAS) (in "The Problem of De Re Modality"). The suggestion was that, contra (for instance) Montague, we think of syntax as fundamentally about instances, replacement and substitution in expressions, rather than as based on a theory of concatenation. It seems that Fine's suggestion is that we think of syntax as a feature of abstracta, differences in which could be spotted via the instances written down with the help of a concatenation system.
     The puzzle is whether we can understand quotation on the SDS model in terms of a Finean UAS. On the face of things, it seems that quotation, treated as a semantically substantial phenomenon, depends upon features of sign systems which are based on a theory of concatenation.

23 March 2006

comments on Michael De's "Essentialism, Reference and Quantified Modal Logic"

There were lots of typos in the comments that were presented in html here, and there were problems rendering '□' with some browsers on some platforms. I've created a .pdf document that contains a corrected, updated version of the comments on Michael De's "Essentialism, Reference and Quantified Modal Logic" and placed it here.

20 March 2006

outline 2.0

Here's a sketch of a revised outline (quite different from, but still holding on to a few of the things that were in outline 1.0):
  1. If we wanted to defend a conventionalist thesis (in particular a view on which the locution 'it is necessary that ...' can be analyzed in terms of, among other theoretical devices, the locution 'it is analytic that ...'), then it seems that an obvious starting position to take is one of mediated reference. If, contrary to a direct reference thesis, we claim that the senses (or modes of presentation, clusters of descriptions, etc.) associated with names are sufficient to determine the referents of those names, then we'd be in a much better position to claim that necessity could be analyzed in terms of the relations of concepts (and this position combined with a view about concepts on which predicates express concepts -- the predicate 'is water' expresses the concept WATER -- is a view on which we could analyze necessity in terms of analyticity). In Meaning and Necessity, Carnap asserts that there are individual concepts which are the intensional counterparts of the names for individuals in a domain of discourse. Could these individual concepts be the mediators which give us the conceptual content that makes modal conventionalism possible, or are the individual concepts secondary to how reference is secured (that is, is reference secured directly in Carnap's system)?
      On another approach to the conventionalist position, a special class of names is taken to be directly referring, but these names have conceptual content associated with them. It seems that we need direct reference for names if we're to hold onto the notion that semantics is to be given in terms of an interpretive truth theory. These special description names are said to be directly referring because they are given (base) reference axioms in the meaning theory; that's why we must hold on to the direct reference of these names in this sort of framework. And it seems also as if we need something like an interpretive truth theory to get a grip on analyticity -- or at least if we want to explain what analyticity comes to in a tractable way.
      One might also wonder why we couldn't try to assimilate names to predicates on the singulary predicate model: for everything that's named, couldn't we simply form a predicate that is satisfied by only the thing so named? This seems to be a much easier background assumption for a deflationary story about 'it is necessary that ...' -- we could simply handle everything as Koslicki does in the case of 'water is H2O'. The semantics of this sentence is given by the sentence '(∀x)(is-water(x) → is-H2O(x))'. On the assimilation-of-names-to-predicates view, we could give the semantics of the sentence '9 is greater than 7' as '(∀xy)((is-9(x) & is-7(y)) → x > y)' (in this case the predicate that "plays the role", or at least a similar role, of the name '9' is 'is-9', and of '7' is 'is-7'). How would the semantics for the sentence '(∃x)(x > 7)' go on this account? Maybe '(∃x)((∀y)(is-7(y) → x > y))'? I'm not sure what the disadvantage to this approach would be. Perhaps we should look at how things play out in the context of the theory of meaning that we favor. On this view, there wouldn't be any reference axioms for names. Does this mean that we have to go with a more traditional explanation of truth in terms of satisfaction and sequences, rather than a Matesian truth-all-the-way-down approach? That may be the issue with the predicates approach...
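
The predicatizing proposal can at least be checked mechanically. Here's a sketch (my own illustration; the finite domain and the helper names 'is_named', 'greater_than_sentence' are invented for the example) of the quantified-conditional truth conditions proposed above:

```python
# Assimilation of names to predicates: each name n becomes a singulary
# predicate is-n, satisfied by exactly the thing named. '9 > 7' then
# gets the truth conditions of '(∀xy)((is-9(x) & is-7(y)) → x > y)'.

DOMAIN = range(0, 20)  # a finite stand-in for the numbers

def is_named(n):
    """The singulary predicate 'is-n': satisfied only by n itself."""
    return lambda x: x == n

def greater_than_sentence(name_a, name_b):
    """Truth of 'a > b' via the universally quantified translation."""
    is_a, is_b = is_named(name_a), is_named(name_b)
    return all((not (is_a(x) and is_b(y))) or x > y
               for x in DOMAIN for y in DOMAIN)

def exists_greater_than(name_b):
    """'(∃x)(x > b)' as '(∃x)((∀y)(is-b(y) → x > y))'."""
    is_b = is_named(name_b)
    return any(all((not is_b(y)) or x > y for y in DOMAIN)
               for x in DOMAIN)

print(greater_than_sentence(9, 7))   # True
print(greater_than_sentence(7, 9))   # False
print(exists_greater_than(7))        # True
```

At least extensionally, then, the translation delivers the right verdicts; the open question raised above -- what happens to reference axioms and the truth theory -- is untouched by this.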

  2. There are proposals for (directly referring) names with a sort of special core-content: Kaplan's standard names ("Quantifying In"), Føllesdal's genuine names, Kripke's rigid designators, Gareth Evans's description names, Ludwig's description names. How each of these proposals gets at the problem. The theoretical possibility of "c-names": names like Ludwig's description names but which don't have quite all of the conceptual content that's required to determine every modally relevant property.
      Let's see if we can come up with some (admittedly shamelessly cooked-up) examples of c-names which are not description names. If we admit that street names are description names, then, for instance, 'South West 43rd Street' is directly referring and it has associated with it enough conceptual content for one who's competent with the name to know every modally relevant property had by the thing so named. In particular, a competent user would know that it's necessary that if South West 43rd Street runs north-to-south then it is west of South West 40th Street. So the modally relevant properties in this case (other than the modally relevant properties had in virtue of the street's being a physical object, etc.) are mostly those involving the location of the street relative to other streets in the same system of streets. Since streets are numbered with an eye toward ease of navigation, it's obvious that certain claims (like the one above) involving street locations (relative to one another) will be true and necessarily so (analytically so).
      Now, imagine a diabolical mayor who's taken to a perverse scheme for re-naming streets. The mayor changes the names of the streets such that South West 40th, 41st, 42nd, ... , 49th are all "permuted". For example, what had been 43rd may be 49th after the renaming; South West 44th Street may become 45th or may remain 44th. The permutations are such that only the "ones place" of the number of a street name is changed. For instance, if the streets run north-south, then, after the diabolical permutation, we know that South West 54th is west of South West 49th Street, and that this holds of necessity, but we're unsure whether South West 49th is west of South West 40th. If it is the case that South West 49th is west of South West 40th, this certainly isn't necessarily the case -- the diabolical mayor may have executed a different permutation.

  3. Conventionalism and Fine's failure to keep apart the two proposals for the intelligibility of quantified sentences. What about uniformity and autonomous quantification? Fine claims that there are two proposals for determining whether a quantified sentence is intelligible. The first is understood in terms of proper instances: the quantified sentence '(∃x)φ(x)' is intelligible if there's a proper instance 'φ(t)' of it. If we're taking quantification to be referential, a proper instance is one which is uniform with respect to the original quantified sentence (that is, the context 'φ( )' of each sentence functions syntactically and semantically to "say something about the referent", and the singular referring term 't' functions to pick out a referent about which the context says something -- as of course does 'x' in the original sentence on an assumption of referential quantification) and one in which the substituend 't' is purely referential, as the variable 'x' in '(∃x)φ(x)' is purely referential on a referential interpretation of the quantifiers. Once we have a proper instance, we can understand the quantified sentence by informally saying "There is something such that what the context 'φ( )' (on its uniform interpretation in 'φ(t)') says about it is true of it" -- which is, of course, an informal paraphrase of the truth condition of '(∃x)φ(x)'.
      The second proposal was that there be some class of standard names with the right core content, such that quantification into the relevant intensional contexts makes sense via substitution of those names. Let's try to spell it out with an example: the standard names are the numerals, the intensional context into which it makes sense to quantify is the modal context represented by '□', and for simplicity let the context 'φ( )' be '□(( ) > 7)'. Fine's assertion is that since we have a standard name for each of the numbers, we can understand a quantified sentence like '(∃x)(φ(x))' because there is a standard name, '9', such that the sentence resulting from substituting it for 'x' in 'φ(x)' is true: φ('9') (this is just the sentence '□(9 > 7)') is true, and true in virtue of the conceptual content that's so represented. Fine claims that the quantification is autonomous. I guess the reason is that to make sense of the quantified sentence, we look to instances which can only be constructed with the use of standard names. We assess the truth of the sentences which include standard names in the relevant contexts. The truth of these sentences can then be thought of as strictly "de dicto", because the names themselves tell us enough about the content expressed to let us know whether the sentence is true.
So there's an important difference between this sort of quantification and purely referential quantification. The truth of a quantified sentence in which quantification is referential is determined (on an assumption of semantic uniformity with the sentence's instances) by whether some member of the domain over which the quantifier ranges is such that the claim made of it by the context (that is, by everything in the sentence besides the singular referring term) is true. On the other hand, the truth of a quantified sentence in which the quantifier is autonomous (in the standard name sense) depends upon whether there are instances, characterizable orthographically as those in which a standard name appears (such as '... α ...', where 'α' is a standard name), that are true. The truth of the (autonomously) quantified sentence depends upon the truth of (substitution) instances in which standard names are written into the context presented in the quantified sentence. For this sort of quantification, since we characterize truth in terms of the truth of instances with standard names instead of merely the assignment of values to variables, we say the quantification is autonomous.
      Fine makes the prima facie mysterious claim that it's unlikely that, for a sentence 'φ(n)' in which 'n' is a standard name, 'n' is purely referential. I'm not sure why this should be so. Does the fact that the right core-content is associated with these names do something to make it the case that they aren't purely referential? More generally, is Fine saying that names with rich conceptual contents associated with them cannot be purely referential? Let's consider numerals as "right core-content" names for the numbers to see if Fine's point is borne out.
      Well, from the high-on-the-mountaintop view that Fine is so fond of, it seems that having the right core-content does stand in the way of pure referentiality. For a purely referential quantified sentence '(∃x)φ(x)', proper instances of which are 'φ(t)' and 'φ(s)', truth is understood in terms of whether the objects referred to by the singular referring terms 't' and 's' are in the extension of the context indicated by 'φ( )'. The conceptual content associated with either 't' or 's' is completely irrelevant to the truth of the proper instances (and hence to the truth of the quantified sentence of which they're instances). Compare this to the autonomously quantified case, where truth depends upon standard names and the contexts they appear in. The truth of such a sentence depends upon the truth of its instances, in which standard names take the positions occupied by variables in the quantified sentence. How is the truth of an instance determined? A natural response is that the truth of these instances is determined by whether the conceptual content associated with the context 'φ( )' and the referring term 'n' is such that 'φ(n)' is conceptually true -- or true as a matter of meaning alone (depending on how we understand the relationship of concept and predicate). And so it seems that truth is determined differently for the two kinds of quantified sentences. There are important questions about whether an autonomously quantified sentence (in the standard name sense) could be made intelligible in terms of referential quantification. Naïvely, it seems that to make a conventionalist thesis work, we need to have both things -- see the previous post on this topic.
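
The contrast can be made vivid with a toy pair of evaluators (my own illustration, not Fine's formalism; the domain, the standard names, and the "true in virtue of meaning" test are all invented for the example):

```python
# Two evaluators for '(∃x)□(x > 7)': a referential one ranging over
# domain OBJECTS, and an autonomous one ranging over substitution
# INSTANCES formed with standard names.

DOMAIN = [7, 8, 9]
STANDARD_NAMES = {"7": 7, "8": 8, "9": 9}

def nec_instance_true(name):
    """'□(name > 7)': stipulated true when conceptually settled by the
    core-content of the standard name (here, toy arithmetic)."""
    return STANDARD_NAMES[name] > 7

def exists_referential(domain):
    # Referential: some object in the domain satisfies the context. To
    # even evaluate the opaque context we must route the object through
    # its standard name -- which is Fine's point about the conflation.
    names_of = {v: k for k, v in STANDARD_NAMES.items()}
    return any(nec_instance_true(names_of[d]) for d in domain)

def exists_autonomous(names):
    # Autonomous: some instance formed with a standard name is true; no
    # appeal to an assignment of objects to variables at all.
    return any(nec_instance_true(n) for n in names)

def every_object_named(domain, names):
    """Fine's requirement: each member of the domain bears a standard
    name; otherwise the two evaluators can come apart."""
    return all(d in names.values() for d in domain)

print(exists_referential(DOMAIN))         # True (e.g. the object 9)
print(exists_autonomous(STANDARD_NAMES))  # True (the instance '□(9 > 7)')
print(every_object_named(DOMAIN, STANDARD_NAMES))  # True
```

The two evaluators agree here only because every object is named; with an unnamed object in the domain, the autonomous evaluator would silently ignore it while the referential one couldn't even be stated -- which is one way of seeing why the proposals are hard to keep apart.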

  4. A view on description names, c-names, contexts, concept possession and some taxonomizing.
     Does a particular view on concepts and concept possession make any one interpretation of quantification seem more likely? In particular, if I possess the concept WATER and know that the predicate 'is water' expresses the concept, then, on a certain view, for any individual x named by 'α', I know whether the sentence 'α is water' is true. Now, if we take the direct reference view of proper names, then if 'β' is a standard name and the sentence 'ψ(β)' is true, the sentence '(∃x)ψ(x)' is true under both the referential and the autonomous interpretation of quantifiers. If 'ψ( )' is '□φ( )', then, on the autonomous interpretation of quantifiers, we see why the quantified sentence is true -- it's true in virtue of the conceptual content had by possessing the concept(s) expressed in the predicate(s) of which the context 'φ( )' is comprised, together with the conceptual content associated with the standard name 'β'. If 'ψ( )' is '□φ( )', then, on the referential interpretation of quantifiers, we see that the quantified sentence is true only if there are necessary existents which fall under the predicate(s) laid out in the context 'φ( )'. It shouldn't be surprising that those things which bear standard names (at least in Kaplan's sense of "standard name") are necessary existents. Perhaps there are other contexts in which claims about things which are not necessary existents hold in virtue of meaning alone. As always, we must investigate further...

  5. Conventionalism versus 2D semantics and the problem of reductive stories about modality and meaning.

17 March 2006

synopsis + new directions

Perhaps it's time to review and see what some of the themes to emerge from the last month and a half are.

  • I've focused primarily on papers to do with sententialism (Higginbotham and Ludwig & Ray) and opacity / the difficulties surrounding de re modality (Fine, Kaplan and Ludwig).

  • The "dual use / mention" explanation of semantics for opaque contexts seems to be a promising one (although Higginbotham's position seems to have major difficulties), preferable to a Schiffer-style approach.

  • Fine and Kaplan (in "Opacity") offer sustained treatments of some of the issues arising from Quine's assertion that modal and belief contexts are opaque. It seems that there are two sorts of approaches to the difficulties for quantifying into these contexts. The first of these is more properly "logical" or technical. Kaplan's formal and technological innovations show how opacity and quantification can be made to work together. Fine offers a clarification about the underlying notion of logical satisfaction and a careful treatment of the linguistic issues at stake in Quine's argument that the openness to substitution test fails in general for opaque contexts.

  • In "Quantifying In", Kaplan offers a suggestion for when quantification into opaque contexts is permissible (even unproblematic) that makes use of epistemic material. If we have "standard names" then it seems that those can be used in (at least) modal contexts such that they can be replaced with variables in the construction of quantified sentences and quantification in is possible. Fine is careful to notice the difference between the epistemic solution (standard names) and the more properly logical (Kaplan's version -- he uses model theoretic ideas to show how we might understand intensional operators and the possibility for quantifying into some of these contexts) or semantic (Fine's version -- he uses uniformity to explain when a quantified sentence can be made sense of. Perhaps, ultimately this comes down, with a bit of work to the same sort of modal theoretic idea that Kaplan has. After all, uniformity has to do with references made by instances of a quantified sentence. Assessing the truth of those sentences comes to seeing if what's referred to is in a certain set or extension of a predicate).

  • It's seemed to me lately that the conventionalist position needs to sit astraddle both of these positions: to explain how conventionalism might be plausible, we need the properly logical and semantic story to render quantification into intensional contexts intelligible, and we also need the properly epistemological story to show how we might be said to know the truth of modal claims on the basis of conceptual content alone.

  • If this is right, then we may want to consider how the conventionalist thesis might play out for various views: one on which we make use of description names only, one on which there are c-names, and maybe one on which we try to "predicatize" everything (that is, in our semantic analyses, we take there to be no singular referring terms like '1', but only "singulary" predicates like 'is the number 1').

16 March 2006

Fine on the failure to separate two proposals regarding quantifying-in

  From the beginning of the second complete paragraph on p. 97 to the end of the second complete paragraph on p. 98, Fine reviews what he sees as a conflation of two proposals concerning the difficulty of quantifying into an opaque context. To get clear on the two, he suggests a distinction in the term 'instance'. A mere substitution instance for the quantified sentence '(∃x)φ(x)' (call it 'S') is the sentence 'φ(t)' which results from the meaningful substitution of a singular referential term 't' for 'x' -- there's no concern over the uniformity of 't' with respect to its substitution for 'x'. Contrast this with a proper (substitution) instance. A proper instance is an instance which is uniform with regard to S. If we're guaranteed that there is a proper instance of S, then we're able to say that it's intelligible. If the quantifier is interpreted referentially, then to show that S is unintelligible we must show that there is no proper substitution instance, not (as Quine leads us to believe in "Notes on Existence and Necessity", for instance) that for some term t, t is irreferential in 'φ(t)'.
  One might suggest that we can circumvent Quinean difficulties on the basis of our choice of singular referring terms. That is, if we choose a term t which is referential, then we can make sense of S because we have an instance that's uniform with respect to S. So, in Fine's terms, all we really need is a proper instance of S. That will give us enough to make sense of S. The choice of a term which provides a proper instance of S is based on its linguistic function; that's what guarantees uniformity.
  On the other hand, there's a proposed remedy to the difficulty of quantifying into opaque contexts on which we should select a class of "standard names" that have the right core-content for the intensional context in question (perhaps without regard to whether these standard names play any sort of uniform linguistic role). (It seems that this is what is happening in Kaplan's "Quantifying In".) Substitutivity will succeed because the standard names are the things which replace variables in instances, and are defined such that it makes sense to carry out this replacement. Fine claims that for this proposal to succeed, it's essential that each member of the domain be named by a standard name. But there's no guarantee that these standard names will be referential in the contexts in question. A bit strangely, I think, he claims that "given that they [the standard names] are selected on the basis of their content, it is unlikely that they will be so." After all, standard names are names, and chosen specifically to name; how could they fail to be referential? Perhaps Fine is asserting that it's unlikely that they will be purely referential -- they will pick out their referents, but will somehow be more than simple place-holding pointers to their referents. The result of dealing with the difficulties of quantification in this manner is that we wind up with a form of autonomous quantification into the chosen contexts: "satisfaction is given in terms of the truth of the instances formed with terms from the class; quantification is explained in terms of satisfaction" (p. 98).
  The reason the proposals are so hard to keep apart is that "the standard terms behave, in regard to their substitutivity properties, as if they were referential in a uniform context." (p. 98).
  It seems that a conventionalist analysis of necessity is really tenable only when these two conditions ("uniformity" and "names-with-the-right-core-content") coincide. That is, only if we have names with the right core-content can we be certain that a certain sentence is true in virtue of meanings alone. For instance, suppose 'd' is a standard name, and that part of the core-content one grasps in being competent with 'd' is that d is P (for some predicate 'P'). Then we have warrant to claim that 'P(d)' is analytically true, and so 'P(d)' and '(∃x)P(x)' are both true.
  Now, I also need to be able to argue that only if we have uniformity of a quantified sentence with respect to its instances are we able to hold the conventionalist view of necessity. Uniformity guarantees that the semantics of a quantified sentence and its substitution instances are given similarly, and this seems to be something we want if the quantified sentences we use are to have meanings similar to those of their instances. We can put the requirement for uniformity in focus if we consider what happens without it. If there isn't uniformity between a quantified sentence and its instances, then the term 't' in 'φ(t)', for example, doesn't play the same semantic role as does the 'x' in '(∃x)φ(x)'. If quantification is taken to be referential, this must mean that 't' in 'φ(t)' is not (purely) referential. So while the quantified sentence makes a claim (most generally) about whether some individual (or other) is in the extension of some predicate (or other) -- in virtue of the fact that variables are used (in this case) solely to pick out a referent, and the semantic structure of the sentence is to say something about that referent -- instances of the sentence don't do this, as the term 't' is not used (solely) to pick something out. This situation runs contrary to the desire we have, in light of holding the conventionalist view, for instances of a quantified sentence to be about something in the same way that the quantified sentence itself is about something.
So, viewed this way, a necessary condition on the defense of a conventionalist position is that we have names with the right sort of core-content (in order that the necessity of claims can be explained in terms of meaning) and that semantic uniformity between a quantified sentence and its instances be maintained (in order for the instances to have the same semantics as the quantified sentence, and so be given the same sort of treatment with regard to an interpretive truth theory of meaning).

15 March 2006

outline 1.0

Much of what I'd proposed in outline 0.2 was discarded based on the meeting of 10 March 2006. I've been forming another outline based on what remained and what was discussed. One thing that wasn't discussed (which is no guarantee that it won't be on 21 March 2006 -- Ludwig said we'll talk about the outline again and my exposition of Kit Fine's second chapter) was the possibility of 'c-names'.

Ludwig proposed that any description name d is directly referring, in the context of a theory of meaning based on an interpretive truth theory, and has enough associated conceptual content that a competent user of the name knows all the modally relevant properties had by the individual so named. So, if I'm competent with d, then for any context ψ in which the sentence ψ(d) is true, I understand ψ(d) and I can tell whether ψ(d) is analytic. It's interesting to note that there are true sentences containing description names which are not analytic. For instance, '32 is the value of the air pressure in psi in my left rear tire' is true, yet not analytic. Description names include numerals, street names ('SW 43rd Street' has all the descriptive content we need to determine where it is, relative to other streets, yet it doesn't seem to be a definite description because it is directly referring) and names for individuals based on complicated kinship relations (I must track down the original Gareth Evans paper where this came from). Maybe there are more. (A brief aside: it seems that if street names are description names, the "wrong" conceptual content might be associated with them -- it certainly seems possible that a city could have been laid out so that NW 1st Street is north of NW 2nd Street, for instance. The street names would still be directly referring, but they wouldn't "tell" us how to find their bearers.)

Given this story about description names, it seems possible that there could be singular terms that are directly referring with associated conceptual content, but which lack enough associated conceptual content to be called description names. I'll call these things 'c-names'. For instance, say 'c' picks out a chemical compound, and unbeknownst to a person who is competent with c, that which is picked out by c is exactly the product of chemical reaction process P (the result is normally called 'c*'). It's necessary (analytic) that the result of process P is c, but one competent with the names 'c', 'c*' and 'P' wouldn't know that. I think what the example gets at is that there may be a statement 'c = c*' that is analytic, but empirical investigation may be required to know that it is analytic. Could we say that the conceptual content associated with a c-name increases as we learn (a posteriori) analytic sentences in which the name appears? Interestingly, the conceptual content increases but the name doesn't change. Is one who is competent with the name before the empirical discovery of the relevant property really competent? In other words, do empirical discoveries about that which the name denotes change what's required for competence with the name? There's definitely something to investigate in the difference between c-names for entities for which the type/token distinction doesn't apply and those for which it does (cf. numerals versus chemical compounds). On the view put forward by Koslicki, the proper semantic analysis of sentences containing mass terms, like 'water is H2O', involves no names. It's unclear whether description names could ever name anything for which the type/token distinction made sense.

One might think that we could use this sort of semantic analysis for any term which picks out an individual which could properly be said to be a token of a particular type, and so claim that for those things which are tokens only predicates are appropriate, reserving c-names for those individuals for which the type/token distinction doesn't apply. I'm not sure if there's anything here, but it might be worth pursuing. In Ludwig's paper, he hints that the modally relevant properties may result from the ways we must think about individuals, and it seems that this result should apply only to types, since the way we must think about a certain individual results from the type of thing that individual is. I'm not really sure what to say about this at present. I hope to get clear on this issue in subsequent posts.

Another thing of note: it seems that we need names for properties, too. If we wish to make claims about necessary relations between properties and then provide a deflationary explanation of the locution 'it is necessary', as we did with individuals and properties, then it seems that we must have names for properties. What would these names be like? To maintain the analogy with description names for individuals, we'd have to claim that the names were directly referring, yet involved enough conceptual content to provide one competent in the use of the name with knowledge of all the modally relevant properties associated with the predicate so picked out. Being competent with a description name for a property is a bit like possessing a concept and being competent with the predicate that expresses that concept, except that being competent with a description name for a property requires much more than simply possessing a concept. Being competent with a description name for a property 'dp' involves understanding φ(dp), for an arbitrary context φ() for which φ(dp) is true, and being able to say whether φ(dp) is analytic. This seems to be a much higher burden than simply possessing concepts; indeed it seems unmeetable outside very basic sentences like 'the property of being red has the property of being the property of being red'. In light of this, perhaps we should eschew leaning on the existence of description names for properties.

14 March 2006

the impossibility of the reduction of the modal to the non-modal

Scott Shalkowski's "The Ontological Ground of Alethic Modality" presents an interesting thesis, part of which can be summarized by the claim that a Lewisian possible worlds style reduction of modal truths to non-modal truths cannot be non-circular without being arbitrary. The reason is that in order for this sort of reduction to go through, the class of possible worlds (which is supposed to underwrite our claim that, for example, 'it's possible that an oak tree is 1000m tall' iff in some accessible, "nearby" possible world there is a 1000m tall oak tree) must be of exactly the right size. That is, first, the class of possible worlds (C) must not include impossible worlds or entities (like a round square), because including such entities spoils our getting at what's possible when we quantify over C. And, second, C must not exclude any genuinely possible world.

I'm assuming the idea is that if a world in C included a round square as one of its inhabitants, then the program of reducing "it's possible that..." to a question of quantification over C is thwarted -- it's not possible that there is a round square, and so we don't want to affirm the truth of 'There is c ε C such that there is a round square that's included in c', but this sentence may very well hold on an impossible worlds view. Of course, one might hold that we shouldn't restrict C to just the set of possible worlds, but should include impossibilia as well, on the basis that the restriction smacks of circularity. But then, as the "possibility" of a round square illustrates, we've lost the intuitive attraction that the possible worlds theory held.

On the other hand, C must include every possible world; no such world can be left out. If one were left out, then an entity or certain configuration (call it 'e') would be possible, but the sentence 'It's possible that e' would come out false on the possible worlds analysis of modality. We see that if the possible worlds analysis is to get off the ground, then the class C is bounded above by what's impossible and below by what's possible.

The observation is that modal concerns (what's possible and impossible) factor into the setting of both bounds, and so a possible worlds account as a reductive story about modality is either circular or arbitrary.
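Schematically, the reductive biconditional and the two bounds on C can be set out as follows (this is my own formalization for bookkeeping purposes, not Shalkowski's notation):

```latex
% The reductive biconditional: 'possibly phi' holds iff
% phi is true at some world in the class C.
\Diamond\varphi \;\leftrightarrow\; \exists w \in C : \varphi \text{ is true at } w

% Upper bound: C includes no impossible worlds, else the
% right-hand side comes out true for some impossible phi.
\forall w \,(w \in C \rightarrow w \text{ is possible})

% Lower bound: C omits no possible world, else the
% right-hand side comes out false for some possible phi.
\forall w \,(w \text{ is possible} \rightarrow w \in C)
```

Both bounds are stated using the very predicate 'is possible' that quantification over C was supposed to analyze away, which is just the circularity (or, if the bounds are fixed some other way, the arbitrariness) being alleged.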

Being interested in a deflationary account of modality in general (and more specifically semantics for the locution 'it is necessary that S'), I was particularly struck by something in the concluding remarks. Shalkowski claims on page 686:

That an expression means what it does involves not merely the fact that the expression has been or is being used in certain ways, but also the fact that it is permissible to use it in novel circumstances in some limited ways. That meaning is projectible, but restricted, is just the fact that it is possible to use the expression in certain ways but not others and still accord with the conventions of a given language. Expressions with the same previous usage but with different projections onto novel cases differ in meaning. Thus the story with meaning is, in the final analysis, a modal story and not the proper basis for the foundation of modality.

Where does this leave us with regard to the deflationary account in which 'it is necessary that S' is analyzed in terms of 'it is analytic that S'? Well, if modal notions are involved generally in meaning (in that 'it's possible that α is F' can be explained in terms of when it's appropriate to make the predication 'is F' of a certain individual 'α', and these appropriateness or assertability conditions are grounded in the meanings or conceptual content of the predicates and names involved) and specifically in claims of analyticity, then it seems that we haven't deflated modal claims; rather, we've claimed that modal claims involving a certain locution should be analyzed in terms of the application conditions of predicates and reference axioms for names. If we hold the view that one grasps concept A iff one can rightly apply the predicate 'is A' to all and only things that fall under A, then we've characterized modal claims involving a certain locution in terms of concepts and conceivability.

12 March 2006

preparation for outline 1.0 (from the ashes of outline 0.2)

Just as I suspected, not much of outline 0.2 survived the meeting of 10 March 2006. Briefly, gone are:

(1) Any concerns over whether the fact that properties (even considered extensionally) are uncountable bears on whether there are names for them. There are, after all, uncountably many names for the real numbers.

(2) The concerns over the fact that first order formulations of arithmetic admit of more than one interpretation. I had thought that there was a troubling analogy between the formal language case and the natural language case: a first order formulation of arithmetic admits of more than one model; if natural languages are also indeterminate with respect to their interpretation, then it seems like the deflation of (apparent) de re necessities to statements that are simply analytically true might be, after all, "many-to-one". That is, any analytically true sentence of the right form might be the reduction of very many (apparent) de re necessities. However, this might be something to pursue later.

(3) Finally (and this one smarts a bit), the notion that natural kind terms (or any sort of kind terms) might be names, with some associated conceptual content, that are directly referring, and so might be description names after all. Before I get to the reason for the challenge, let me say why I was keen on trying to present natural kind terms as names. An example of a motivating sentence for the notion of metaphysical necessity is 'water is H2O'. The sentence is supposed to be necessarily true and a posteriori, yet not analytic. I had thought that if we could come up with description names for kind terms then we'd be able to start a story about sentences like this that would let us see that for some kind names (which were description names) a deflation of de re necessities in terms of analyticity was available. And so some of the motivation for the positing of metaphysical necessities would be removed.

But, on the other hand, there's a golden wedding band...

Ludwig (following Koslicki) suggested that the semantics for sentences like 'water is H2O' are given by universally quantified sentences like '(x)(x is water → x is H2O)', and so names for kind terms need not even come up in these sorts of claims. And it seems that for most if not all similar sentences, this sort of predicate explanation will be available. So, on this view, it doesn't look like there's a need to investigate how description names (or, more generally, c-names) might be used to deflate de re necessities.

I do hold out hope for the usefulness of c-names (or even description names) for naming kinds, by way of naming properties. I haven't worked it out yet, but if we wish to make second order modal claims -- claims like 'it's necessary that property X has (second order) property P', then we will have to nominalize properties to deflate these claims with the use of analyticity. Hopefully, more to come on this issue...

10 March 2006

outline 0.2

I shaped up outline 0.1 in preparation for today's meeting. Here it is. The gist is that I propose to fill out the view that conceptual necessity is the primary form of necessity (I call this position CNF) by considering some objections to the view and possible responses to those objections. The objections are:

(1) Technical: (a) Are there enough names for properties? (b) Formal first order languages of arithmetic admit of more than one model, and so something may be analytic (following from the axioms of these languages and the rules of inference) while being about more than one thing; but we think of necessity as holding for relations between the individuals so named -- so there's a gulf between "analyticity" and necessity. Does the same phenomenon occur in natural languages?

(2) To do with "description names": (a) Description names are directly referring but involve enough conceptual content to make every modally relevant property apparent to one who's competent with them. It seems there must be names which are directly referring and involve conceptual content, but do not involve enough conceptual content to make available every modally relevant property to one who's competent with them. I call them 'c-names'. It seems like we can investigate CNF by probing the differences between c-names and proper description names. We can also learn more about how both might work, and how CNF could be more fully developed and applied to philosophical problems, if we use Fine's notion of a context (in terms of uniformity of semantical roles for referring terms and contexts).
I wonder how much will survive in outline 1.0?

07 March 2006

outline 0.1

One approach to the presentation of the view that the fundamental notion of necessity is conceptual necessity (I call the thesis 'CNF') is, first, a spelling out of the basic thesis, followed by an initial list of some of the commitments incurred by the basic thesis. After that, we could take much more care in a detailed work-up of each of those commitments and responses to the difficulties raised.
Also, it seems that there should be some sort of fairly general discussion of how CNF can account for the intuitions that would lead one to hold a two-dimensional semantics (2DS) view. And also some general discussion of the epistemological and methodological virtues of CNF vis-a-vis 2DS. I've made a first outline reflecting these desiderata.