
Sunday, May 6, 2018

Ockham vs Hickam in the Epistodome

Bill Idsardi and Eric Raimy

"Parsimony arguments are pretty weak." Norbert, in the comments last week

OK, after a longer hiatus than I [wji] intended, back to more discussion of Substance-Free Phonology (SFP). Reiss 2016:4 makes the following observation:

“[OT] advocates building into Universal Grammar, as constraints in CON, phenomena that have independent explanations via phonetic, physiological and physical factors. As pointed out by John Ohala in various contexts (e.g. 1990) it is not better in science to have two explanations (phonetics and phonology) rather than one (just phonetics) for a given observation.”
This viewpoint ignores the modular and model nature of the phonology component. Take a current favorite animal in cognitive science, the dead-reckoning ant from Gallistel and King 2009 (which is certainly also dead from all its dead reckoning by now). It moves about in the world, and in doing so changes its location. We have a geophysical account of its location (which might include us diabolically transporting it to another location). But that doesn’t in any way preclude or supplant an account of what is in the ant’s brain, which will correspond in some ways to the geophysical model (in step-wise updating r and θ in its mental map) but not in others, as when we carry it elsewhere, or attach stilts to its legs, throwing off the updating routine or what distance a step is worth. It’s sensible at that point to say that the ant doesn’t know where it is (but also that it doesn’t know that it doesn’t know, thanks Donald Rumsfeld).

Likewise, as we have been saying in this series of posts, the information in the phonology module’s model of speech has at least some correspondences to “phonetics” (i.e. articulation and audition) in its data structures and relations (Reiss 2016:27 “universality of the interface of the substantive primitives with the human transduction systems”). As a direct consequence, some regularities in phonetics will then be recapitulated as regularities in the phonology model. The question (and it is very much an empirical one) is which of these regularities are identified inside the phonology model, i.e. which reflected regularities are instantiated as phonological generalizations (rules, processes, constraints, laws)?
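Returning to the ant for a moment, here is a minimal sketch of the contrast (the class names and numbers are ours, purely illustrative; a toy, not a model of real ant navigation): the geophysical account tracks the ant's true position however it gets moved, while the ant's internal model integrates only its own sensed steps, so transporting the ant changes the former but not the latter.

    import math

    class AntModel:
        # The ant's internal path integrator: a home vector built up
        # only from its own sensed steps (toy illustration, invented numbers).
        def __init__(self):
            self.x, self.y = 0.0, 0.0

        def step(self, heading, distance):
            # step-wise updating of the mental map from self-motion cues
            self.x += distance * math.cos(heading)
            self.y += distance * math.sin(heading)

        def r_theta(self):
            # read the internal estimate off as (r, theta)
            return math.hypot(self.x, self.y), math.atan2(self.y, self.x)

    class GeophysicalAccount:
        # The ant's true position in the world, however it got there.
        def __init__(self):
            self.x, self.y = 0.0, 0.0

        def ant_steps(self, heading, distance):
            self.x += distance * math.cos(heading)
            self.y += distance * math.sin(heading)

        def transport(self, dx, dy):
            # we diabolically carry the ant somewhere else
            self.x += dx
            self.y += dy

    ant, world = AntModel(), GeophysicalAccount()
    for heading, distance in [(0.0, 3.0), (math.pi / 2, 4.0)]:
        ant.step(heading, distance)          # internal model updates
        world.ant_steps(heading, distance)   # true position updates in parallel
    world.transport(10.0, 0.0)               # true position changes; the internal model does not

    print("ant thinks (r, theta):", ant.r_theta())        # (5.0, ~0.93 rad)
    print("ant actually at (x, y):", (world.x, world.y))  # (13.0, 4.0)

Neither account precludes or supplants the other; they are two models of the same episode, corresponding in some places and coming apart in others.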

Moreover, it’s not clear that Ockham’s razor applies in cases like the ones we are considering. The most trenchant overall reply might be Hickam’s dictum (https://en.wikipedia.org/wiki/Hickam%27s_dictum):
“While Occam's razor suggests that the simplest explanation is the most likely, implying in medicine that [a] diagnostician should assume a single cause for multiple symptoms, one form of Hickam's dictum states: "A man can have as many diseases as he damn well pleases."”
While we are definitely in favor of “explaining away” (Pearl 1988) as a mode of causal inference, the application of explaining away or Ockham's razor in modular systems is less than clear to us.

And Elliott Sober (p.c. to er) reminds us that there is a long philosophical tradition of pluralism. So, for homework, the philosophers of science should dig out that copy of Against Method again. And the computationally inclined should think about model averaging (https://en.wikipedia.org/wiki/Ensemble_averaging_(machine_learning)).
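For the model-averaging analogy, here is an equally minimal sketch (the toy 'accounts' and weights are ours, purely illustrative): instead of letting Ockham crown a single winner, an ensemble keeps several partial models around and blends their predictions.

    # Toy ensemble averaging: keep several models and combine their predictions
    # rather than selecting a single "simplest" one (numbers are invented).
    def ensemble_predict(models, weights, x):
        total = sum(weights)
        return sum(w * m(x) for m, w in zip(models, weights)) / total

    # Hypothetical predictors of, say, the rate of final devoicing in some corpus:
    phonetic_account = lambda x: 0.80      # from aerodynamic difficulty
    phonological_account = lambda x: 0.60  # from a learned rule in the grammar

    print(ensemble_predict([phonetic_account, phonological_account], [0.7, 0.3], x=None))
    # -> 0.74, a weighted blend of the two partial explanations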


Coming soon: The lore of the excluded middle

5 comments:

  1. I think Occam's razor is being completely subverted here. Putting everything into the phonological component is the simplest (and probably wrong) way to go about the problem. Looking at external factors qua external factors and understanding how they influence phonology is way, way messier (and probably closer to the truth).

    Unless, of course, the goal is to get rid of things that have independent explanations, and not study them.

  2. The SFP position is not that the existence of non-linguistic factors supplants an account of what is going on in the speaker's head. I think two types of facts are getting mixed up here: the content of mature I-languages vs. the innate limits of possible I-languages.

    Take something like final (de)voicing. The typological generalization is that final voicing is unattested. It plausibly has a phonetic explanation due to the aerodynamics of voicing an obstruent, running out of breath by the end of an utterance, yadayada. And the SFP position is that the phonetic fact is enough of an explanation of the typological fact.

    This is not to be confused with replacing an account of what happens in a speaker's head: the learner exposed to German will have to learn a final devoicing process; the learner exposed to French won't. The content of the I-phonology has to recapitulate generalizations of the data (that's arguably part of the definition of learning). The learner's mental phonology will "correspond in some way" to the E-language pattern. This is all SFP-compliant. The SFP position is only that the learner is not ALSO born knowing that final voicing is impossible; she just learns whatever her language input has.

    This contrasts with a position where someone posits that since final voicing is unattested, it must be outright impossible, and that phonology ALSO has some innate formal restriction against final voicing, e.g. in the form of an innate *VoicedCoda without a counterpart. This idea is not incoherent, but it explains a typological fact that already has an explanation, so it's not clear what explanatory work positing it does.

    So there are two types of facts: there is the fact that language X has final devoicing, which a learner inevitably has to recapitulate through learning, and any complete account of language has to explain how she did that. Then there is the typological fact that no language has final voicing, and the SFP position is that this fact does not also need to be encoded within learners.

    The SFP position seems to me entirely in line with Hickam's dictum here. If a patient comes in with multiple symptoms and you find a very plausible cause for one of them, then it follows that you don't also need to cram an explanation of that same symptom into the cause of the others. In phonology there are multiple types of generalizations that an observer of sound patterns will find, things like (a) "German has final devoicing", or (b) "no language has final voicing", or (c) "no language stresses prime-numbered syllables". The SFP position is that (i) not all these facts have to be symptoms of innate constraints on phonology, and (ii) if one symptom is readily explained one way, e.g. (b) is a symptom of phonetic biases on diachrony, then we don't have to also look for a second cause that explains it. SFP follows Hickam's dictum since it opposes cramming all phonological explanations into the innate knowledge/structure of the phonology. Sound patterns can have as many causes as they damn well please.

    Of course, phonology inevitably does have SOME innate restrictions. The strong SFP position is that those will turn out to all be formal restrictions on the power of data structures and rules, the inventory of symbols, and possible mappings to interfaces, unrelated to phonetic difficulty.

  3. Thanks for the comments, Max.

    I'd like to be a bit wonky about the 'final devoicing' example you are talking about, because I think it's more complicated than you are presenting, with some shades of substance in there... Also note that Laryngeal Realism is assumed here...

    First, German is not an example of 'final devoicing'; it is an example of 'Final Fortition'. One may ask what the difference is: German inserts a [spread] gesture to produce this effect rather than deleting a [slack] gesture. This is very important when you ask why phonology seems to like inserting [spread] on a consonant at the end of a word but not so much inserting [slack] in this same context.

    Another twist on this: what do we think about a 'low tone boundary element'? Inserting a low tone on a boundary-final vowel doesn't strike me as an unusual phonological process (even if I can't come up with an example off the top of my head). This would be fairly close to the 'final voicing' example you want, because 'voicing' on a consonant is [slack] and 'L tone' on a vowel is also [slack].

    This now kicks the question over to why 'word-final voicing' occurs on vowels but not consonants. I agree with you that an acquisition story on this is the way to go, but I just can't seem to jettison substance here (I can be very dense). The bias towards vowels and against consonants here seems to be exactly the substantive content of [slack] interacting with the substantive content of [vowel] vs. [obstruent]?

    I think we're all 'minimalists' here, so no one wants to be adding in stuff anywhere it's not needed, but it does appear to me that the acquisition model having an idea (substantive knowledge) about how much 'voicing [periodicity]' to 'expect' based on whether something is an approximant or obstruent would be very very helpful. If the learner knows that approximants are going to have periodicity then they 'know' they don't have to posit any sort of phonological feature to represent that. Deviance from the expected periodicity can then be a diagnostic for a contrastive tone or laryngeal feature (e.g. [spread]/voiceless or [constricted]/glottalized, etc.) on the approximant. The learner needs to have a different set of expectations for obstruents though. Obstruent systems that do not have a laryngeal contrast (e.g. Hawaiian, Menominee, etc.) do have variable periodicity based on context, in that obstruents at the beginnings and ends of words are mostly 'voiceless' while word-internally 'voiced', but the learner knows/expects this and is not fooled into positing a feature distinction. It is only when there is too much periodicity (voicing language) or not enough (aspirating language) that the learner will posit/learn a featural contrast (see the toy sketch at the end of this comment)...

    I like the above acquisition story, and Max, you are right: there is no need to posit any sort of constraint like 'don't consider final voicing as a possible rule'. But the variable expectations that the learner appears to need in order to figure out what the larynx is doing in approximants vs. obstruents strike me as extremely 'substantive', and I actually don't think that this kind of substance knowledge in acquisition is particularly redundant. YMMV.
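    To make the expectation story concrete, here is a toy sketch (all ranges and numbers are invented for illustration, not drawn from real data): the learner compares observed periodicity against a class-specific expectation and only posits a laryngeal (or tonal) feature when the observation falls outside that expected range.

        # Toy expectation-based learner (all numbers invented for illustration).
        # Expected proportion of voicing (periodicity) by segment class:
        EXPECTED_PERIODICITY = {
            "approximant": (0.8, 1.0),  # approximants are expected to be voiced
            "obstruent":   (0.2, 0.7),  # obstruents vary by position; no contrast assumed yet
        }

        def posit_laryngeal_feature(segment_class, observed_periodicity):
            # Posit a feature only when the observation deviates from expectations.
            lo, hi = EXPECTED_PERIODICITY[segment_class]
            if observed_periodicity < lo:
                return "[spread] (voiceless/aspirated)"   # too little periodicity
            if observed_periodicity > hi:
                return "[slack] (voiced/L-toned)"         # too much periodicity
            return None  # within expectations: no featural contrast posited

        # A Hawaiian-type obstruent with contextually variable voicing -> no feature:
        print(posit_laryngeal_feature("obstruent", 0.5))     # None
        # An approximant with suppressed periodicity -> posit a laryngeal feature:
        print(posit_laryngeal_feature("approximant", 0.3))   # [spread] (voiceless/aspirated)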

  4. I am definitely happy with adding all the required phonetic complexity to this example.

    "The bias towards vowels and against consonants here seems to be exactly the substantive content of [slack] interacting with the substantive content of [vowel] vs. [obstruent]?"

    I agree, but I don't see why this substantive interaction needs to happen in anyone's mind. It's substance: the interaction is already happening out there. That is, it is already a fact about articulation that voicing a sonorant is aerodynamically easier than voicing an obstruent. The Evolutionary Linguistics gamble is that this difference in difficulty is enough to bias language change toward the languages that are actually attested, without any innate knowledge of substance.

    "having an idea (substantive knowledge) about how much 'voicing [periodicity]' to 'expect' based on whether something is an approximant or obstruent would be very very helpful"

    This is more of an argument in favor of some innate phonology-phonetics mapping of the kind in Hale & Reiss, as opposed to Emergent features. That is, it's an argument for one SFP position over another. I agree with you and Hale & Reiss that the learning story for bare-bones emergent-feature SFP is unimaginably complicated, and innate knowledge of the realization of features definitely helps.

  5. I recall talking with Paul Kiparsky at the Phonology 2k conference at MIT about Ohala's point against duplicating phonetic explanations in the phonological UG. I was basically parroting Mark Hale, who had introduced me to Ohala's work. According to my memory, Paul's response was "That's too reductionist---you should read Derrida on the philosophy of science". I never followed up on this, never made it to Derrida. Does anyone have a graduate student they don't like to whom this task could be assigned?
