Wednesday, July 8, 2015

his concept of infinity is disappointing. no human can construct infinitely many combinations, but that's just the point - we use a finite number of word combinations in our lives, not an infinite number. his conceptualization is impossible as anything beyond a conceptualization; it's important to understand it in the abstract, but there's no such thing as an infinite collection of words. i don't know how he can be confused on that point. and, peano had no problem deriving the set of natural numbers from 0 and a successor operation - it's in fact the standard way to do it. i don't really want to argue about induction; it works fine if you use it right, and is less convincing if you don't.

if you're curious, i think his theory is kind of unavoidable in a broad sense, but it's not wise to jump to too many conclusions about how important it really is. the dna gives us a stomach. but we decide what to eat.

http://www.youtube.com/watch?v=3VyteV_7sxI



also: your brain is a quantum computer. not in some quasi-mystical sense, but in terms of the reactions it's utilizing. i think he has the right idea about minimizing resources. but, the idea that it's a turing machine is a little outdated. it's solving np problems on a second-by-second basis - fueled by nothing but sugar.

i think maybe a more interesting question is whether other animals have this "i language", or, really, just exactly how something like a dog actually thinks. 'cause it's pretty clearly thinking. as a human, i couldn't understand any other way to think. that doesn't mean there aren't other ways to think. i mean, babies are obviously thinking; i suppose none of us remember our thought processes from before we could speak. you'd have to guess it's at least the beginning of such a thing, even if it's not fully formed.

i remember reading arguments that this leap had something to do with a mutation regarding vocal cords. it's not just that a monkey can't talk to you. it's that a monkey is physically incapable of forming words. we can't know what's going on in there. and, if there's some truth to that, a broad idea of internal language may be more widespread than we realize. such a leap would be less of a mental one and more of a physical one; in a sense, almost a technological one.

but there's no jump to infinity. infinity is unattainable. it's just a theoretical abstraction. the math he's using is very widely used in computer science (the chomsky hierarchy is an unfortunate use of language....); i've studied quite a bit of it, and it's really just a mathematical formality. linguistics would lose nothing from modelling itself in the finite realm, it may even gain a little levity, but the math that exists is all infinite, so that hubris moved into linguistics with it. that's just getting lost in the model.

===

this is a long video, and i watch lectures like this when i eat, meaning i'm on the third (and probably last) segment of it. i see somebody drew attention to the larynx issue.

i don't think the question is really in the realm of questions that have answers. but, we have a lot of neanderthal dna in our genome, and that is a pretty good reason to suggest that they were at least able to learn languages that we taught them. i guess that goes back to the question of how advanced an "i language" really is - or whether it is broadly present in various degrees across the mammalian spectrum (at least).

either option is kind of disastrous for this genetic basis of the great leap. either it's ancestral, or it could be taught. i'd lean towards the latter.

it's maybe easy to jump to "language was selected". but, i have a hard time believing the idea that a fluent sapiens would intermingle with neanderthals that could not speak. a few, maybe. outcasts. rape, even. but not at the level that the interbreeding happened.

i guess that the other option - if we wish to maintain the dogmatic position that language is innately human, because it's convenient for us - is that language must be a lot younger than we've hypothesized. i get the great leap hypothesis, that all this sophistication happened and it must have been connected to language. but, given the interbreeding with neanderthals, that kind of logic requires pushing the development back to well before all of those things happened.

i think the closest thing to actual evidence that we have for the origin of language is in constructing proto-languages and tying it to archaeological evidence, and that suggests that the language that most of the world speaks (indo-european) is barely 10,000 years old in origin. how or if the other major language groups are related to this is just speculation, but the time frame for proposed nostratic theories is barely 20,000 years ago. it seems like a fairly great leap in itself to argue that language is 100,000-75,000 years old.

if language is only 25-30,000 years old, then you can maintain this separation wall and maybe even argue it was evolutionarily relevant. but it means it happened after the great leap, and after the interbreeding.

except that it seems convenient that you can connect click, tonal and "nostratic" languages to the l1/2/3 genetic split, meaning the common basis had to be pre-migration, and that you're left with almost no option but to conclude that neanderthals must have at least had the capability to learn language.

===

i spend a lot of time walking. it's good exercise. saves gas money, saves gym membership costs; it's a more holistic way to live. it's also good for thinking.

if you look at a map, something curious jumps out - l3 is non-tonal in precisely the same places that neanderthals lived, and areas that hybrids moved into during the interglacial. there is certainly a geographic correlation. l3 is tonal in areas outside of the neanderthal range.

might neanderthals have actually played a role in the development of non-tonal languages - that is, "nostratic" languages? the hypothesis that instantly suggests itself from the map is that l3 would have been dispersed out of africa by speakers of tonal languages, who continue to exist in a continuum on the eastern side of a line that slices across eastern asia from india to korea (and includes sino-tibetan and austronesian languages). it also includes some native american languages. in the places where interaction with neanderthals occurred, however, that tonality would have been lost. whether this created a pidgin, or was even primarily neanderthal in structure, is very hard for me to speculate on, as a non-expert in the topic. but, if neanderthal language developed without clicks and tones, some cultural assimilation may have resulted in losing them.

the timeline is also consistent.

listen. if we cohabited and interbred with neanderthals, then it is obvious that we are culturally indebted to them in some capacity or another. we could not have integrated with them culturally without adopting some of their culture.

it's at least a curiosity.

could the neanderthals have been unable to understand tonal language? might we have adjusted for their benefit?

if you have an alphabet with n letters and a maximum word size of m, then there are on the order of n^m possibilities for words (strictly, the sum of n^k over lengths k up to m) - and in practice fewer than that, as many will be unpronounceable. you can then sum that over a maximum sentence size, and again over a maximum "book size", where a book is meant to represent a life's worth of thought. there are finite restrictions because we have finite lifespans. if you drop the grammatical formalities, you could set a maximum word size by enunciating one syllable per second over a hundred and fifty year lifespan. such a limit would be well beyond the realm of possibility, and remain finite in scope.
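
the counting argument above is easy to sketch directly. everything here is an illustrative assumption (alphabet size, word length, the 150-year lifespan from the paragraph above), not a measured value:

```python
# a rough sketch of the finite-counting argument above.
# all parameters are illustrative assumptions, not measured values.

def strings_up_to(n, m):
    """number of nonempty strings over an n-letter alphabet
    with length at most m: n + n^2 + ... + n^m."""
    return sum(n ** k for k in range(1, m + 1))

alphabet = 26          # letters
max_word = 29          # longest attested english word, per the text
possible_words = strings_up_to(alphabet, max_word)

# an upper bound on "a life's worth of speech": one syllable per
# second over a 150-year lifespan, as in the paragraph above.
syllables_per_life = 150 * 365 * 24 * 60 * 60

print(possible_words)        # astronomically large, but finite
print(syllables_per_life)    # 4,730,400,000 - also finite
```

the point of the sketch is just that every one of these bounds is a concrete, computable number - nothing in the chain ever becomes infinite.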

the limits are perhaps not arbitrary. if we could calculate a maximum word and sentence size, we could understand the complexity our brain actually works at.

===

i looked into this a little further, and i can't say if "infinite expression possibilities" was meant to be poetic rather than literal. but, it has to be poetic. the idea that we have infinite abilities to express ourselves is mathematically false. and, that's the answer to this conundrum about not being able to jump from finite expression to infinite expression.

peano provided a construction of the natural numbers, and it remains true that infinity means counting forever - which is of course impossible. that means infinite thought is also impossible.
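
the construction being referred to can be written down directly; here's a sketch in lean (the name is mine, not anyone's formalization):

```lean
-- a peano-style construction of the naturals: zero, plus a
-- successor operation. every value of this type is built by
-- finitely many applications of succ - "counting forever" is
-- never required, and no infinite object is ever produced.
inductive N where
  | zero : N
  | succ : N → N
```

each individual number is finite; it's only the collection of all of them that gets called infinite, and that collection is an abstraction, not something anyone constructs.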

there's no paradox. it's just a misunderstanding of the infinite. there is a difference between arbitrarily large (which is still finite) and infinite. but, we can't even say our possibilities are arbitrarily large, because our lives are finite.

in some cases, it may be convenient to model certain things as infinite. but, that does not translate to any kind of reality.

you could then even calculate the probability that two people would express the exact same sequence of words - which could be used to prosecute students for plagiarism.
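
under a toy model - each word drawn independently and uniformly from a vocabulary of v words - the chance of two people producing the same passage is easy to bound. real language is nowhere near uniform, so this only illustrates the shape of the argument, not a number you could take to a tribunal:

```python
# toy model: words drawn independently and uniformly from a
# vocabulary of size v. real language is far from uniform, so this
# only illustrates the shape of the argument, not a usable number.

def match_probability(v, length):
    """probability that two independent uniform word sequences
    of the given length are identical: (1/v) ** length."""
    return (1.0 / v) ** length

# even a modest vocabulary makes long verbatim matches implausible:
p = match_probability(5000, 20)   # a 20-word sentence, 5000-word vocabulary
print(p)   # on the order of 10**-74
```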

so, if you're talking about an actual language - like english - you should be utilizing a finite subset of the kleene star. that is, instead of taking the infinite union, you should be taking a union up to m, where m is the maximum word size. for mathematical purposes, you might get a stronger result if you take the infinite union, and mathematicians would like that - and not care what m is. but, a language like english doesn't have 5000 letter words; the longest (non-imaginary) word is a mere 29 letters, so all those extra strings (while harmless) are just hubris, and introduce theoretical confusion.
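
a truncated kleene star is straightforward to write down; a minimal sketch, enumerating the union of sigma^k for k up to m instead of the infinite union (names here are illustrative):

```python
from itertools import product

# a truncated kleene star: the union of sigma^k for k = 1..m,
# instead of the infinite union. names here are illustrative.

def kleene_up_to(sigma, m):
    """all nonempty strings over alphabet sigma with length <= m."""
    out = []
    for k in range(1, m + 1):
        out.extend("".join(t) for t in product(sigma, repeat=k))
    return out

words = kleene_up_to("ab", 3)
print(len(words))   # 2 + 4 + 8 = 14 strings, and no more
```

unlike the infinite star, this is an object you can actually build and exhaust - which is the whole point.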

all the operations will work as they would otherwise. it doesn't affect how any of the theories actually work.

a computer can work forever or halt. we're not going to do that. if somebody starts talking to us in 5000 letter words, we're going to shrug and walk away - and we probably don't have enough usable "ram" to remember the syllables at the beginning of the word, anyways. we can choose to terminate the operation as a result of a buffer overflow, or something. so, it's a meaningful distinction when you're talking about how our brains actually work.

again: i think these are probably meaningful limits. and they may be experimentally determined, even. i'm not sure how. non-expert, again. but if they could be, that limit will be useful in determining what kind of language we can actually process and what kind of language we can't.

the way you want to think of the brain is probably as an absurdly fast processor, with basically no ram. so, we're working with registers. machine code. we just didn't evolve ram. we seem to have a hard drive, but no ram. so, we're restricted by the size of the register - and because we can't dump everything to disk (at least not consciously), we just toss stuff out of the register when we get a stream of information like a 5000 letter word. that's beyond what we can process.

so, again, from a formal mathematical perspective, you can speak of language as this infinite thing. but that makes no sense to us, as humans. it's a model of theoretical language, but it's not a valid description of actual language, or a way to understand how our brains work.

fwiw, i wouldn't be opposed to the idea of punctuated equilibrium, in principle. i just tend to resolve the gould-dawkins debate with hybridization. and, conveniently, that appears as though it might be consistent, here.