Good catch
Nope, can’t claim a mistype on that one. I’ve been using my HTC Desire to answer *all* comments on my own articles, but only a very few on articles that aren’t mine. (It’s my way of comparing the usefulness of the device’s onboard keyboard.) The error rate is significantly higher on the Desire, but that alone doesn’t account for the fact that I am one of those individuals who primarily utilise non-linguistic cognition. “Werdz r hard.”
I have to admit to being dependent on spell check to catch most of the obvious spelling errors, but we all know it’s mostly useless for grammar and homonyms. That’s where proofreading comes in, and where I generally fail. I can proofread my own posts or articles a dozen times, but if I’m proofreading them within an hour or so of writing them, I seem to miss a lot of grammatical errors. Most especially, I miss the homonyms.
It’s something I’ve struggled with as far back as I can remember. Give me two pictures and I can point out almost pixel-level differences between them. Give me two audio recordings to compare and I can find minute differences. Language, tactile information and the chemo-senses, though, are all areas that never remotely developed to their full potential in me.
I’ve been keeping a log of all my mistypes over the past five months. I go back and re-read my comments to find them, and commenters are always helpful in pointing them out. Apart from the obvious causes (no sleep, no coffee, typing a comment whilst distracted by other things), I haven’t really found a commonality in what causes me to make these little linguistic faux pas.
Indeed, longer comments such as this one I “sign off on,” having reread and rechecked the post several times before submitting. (I don’t reread any post shorter than about 2 paragraphs.) It’s interesting because at the time, my brain honestly can’t spot the errors. As I read the text, it seems to translate “what I actually wrote” into “the concepts that underlie what I meant to say” without fully conscious access to the underlying text itself.
The truth is that I just don’t ‘think’ in words. I think in objects. A word is nothing more than a property associated with that “object concept.” Unfortunately, my mind seems to index words by their phonetic rather than their typographical representation: what is stored with a concept is a pointer to the sound of a word, not its spelling. To get to the textual representation of a concept I need to go concept -> phonetic linguistic representation of that concept -> textual linguistic representation -> textual linguistic representation in $language.
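If it helps to picture that chain of indirection, here’s a toy Python sketch. Everything in it is invented for illustration (brains obviously don’t run dicts), but it shows why every extra hop is a place for an error to creep in:

```python
# Toy model of the indirection described above: concepts index
# phonetic forms, which in turn point at spellings, which finally
# get localised into a target language. All data is made up.
phonetic_of = {"dog-concept": "/dɒg/"}   # concept -> phonetic form
spelling_of = {"/dɒg/": "dog"}           # phonetic form -> spelling
localised = {("dog", "en"): "dog",       # spelling -> spelling in $language
             ("dog", "fr"): "chien"}

def write_word(concept, language):
    """Follow the whole pointer chain; any hop can introduce an error."""
    phonetic = phonetic_of[concept]
    text = spelling_of[phonetic]
    return localised[(text, language)]

print(write_word("dog-concept", "fr"))  # chien
```

A purely word-based thinker would, in this metaphor, skip the first hop entirely, which is one fewer place to trip.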
It’s a fascinating study, to me at least, though I recognise it would probably bore normal people. I think that’s because my entire family are shrinks: psychologists, psychiatrists, psychiatric nurses, the odd social worker…and the one or two black sheep of the family, like me, who work in IT. Whilst I have no desire to have my brain picked at by anyone I’m actually related to, I do admit that I find the entire study of “how we think,” especially as it relates to the conceptualisation and manipulation of thought, fascinating enough to wish I could be part of a larger study of exactly these processes. One performed by a professional in the field, of course.
I can’t help but look at my brain from an IT perspective; information storage, processing, etc. As near as I can tell, I truly have a completely non-linguistic, object oriented thought process.
I think I have also determined why we need sleep. For lack of a better way to explain it…our brains suck at indexing. We write a raw copy of the day’s experiences to a buffer, and that buffer has a finite capacity. Once we fill it up, we get tired. When we sleep, our brains perform a /massive/ deduplication of the information in the buffer. This is how we can store so many memories in so little space: the human brain, IIRC, is thought to store only a few tens of terabytes of information in total, whilst our eyes alone provide visual input at something like 10Mbit/sec.
That deduplication is the key. It’s why certain sensory input dominates a given memory; it was probably unique enough not to have been de-duplicatable. There’s a certain amount of fuzzing involved too; the smell of a particular batch of cookies is stored in our memory as identical to similar batches, despite that being essentially impossible. Our brains have a built-in “close enough” algorithm when deduplicating the day’s information.
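In IT terms, that “close enough” algorithm might look something like this quick Python sketch. The threshold and the feature sets are entirely invented; the point is just that near-duplicates get collapsed onto the first stored copy rather than kept separately:

```python
# Hypothetical sketch of fuzzy deduplication: memories are feature
# sets, and anything within a similarity threshold of an already
# stored memory is treated as "close enough" and discarded.
def similarity(a, b):
    # Crude overlap measure (Jaccard) between two feature sets.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def dedupe(buffer, threshold=0.6):
    stored = []
    for memory in buffer:
        for kept in stored:
            if similarity(memory, kept) >= threshold:
                break  # close enough: merge into the existing memory
        else:
            stored.append(memory)  # genuinely novel, keep it
    return stored

day = [{"smell:cookies", "place:kitchen"},
       {"smell:cookies", "place:kitchen", "sound:timer"},
       {"face:stranger", "place:street"}]
print(len(dedupe(day)))  # 2 -- the two cookie memories collapse into one
```

The two batches of cookies end up as one stored memory; the genuinely unique experience survives on its own, which matches why unique sensory input dominates a given memory.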
I believe this is also why, when you start to get really tired, you can remember everything before a given magical point of tiredness, but everything after becomes a blank (or a blur). We become “tired” at a given point before the buffer is truly full. Once the buffer is completely full, the brain simply won’t write anything more to it. The exception is anything that triggers fight-or-flight: that information seems to overwrite other information already in the buffer, probably being deemed more critical by the brain than anything representing the trivial aspects of your day. It’s why, when we are truly exhausted, we can forget even critical details of what we were working on the next day…but if someone were to hit us in the face whilst in that state, we would most certainly remember it.
Again, this is all conjecture, but it’s a theory I have been working on for quite some time. Yes, it does relate (as per the beginning of this long post) to spelling, grammar, and why certain people miss these things more than others. I believe that /how/ we think is truly different from person to person. Some people seem to think in words (linguistic cognition); in fact MOST people seem to think in this manner. A smaller percentage think in objects, some in pictures, some in sound. Those who don’t think in words (non-linguistic cognition) often have a more miserable time translating their thoughts into a given language.
Add to this the fact that my default mental language is actually French, and I think you have at least part of the reason why I miss so many of these errors, even when I’m putting the effort in to proofread. It’s not all of the reason, nor is it a cure…but I’m working on it.
The experiment with the Desire is interesting mostly because it shows the difference between my raw, uncorrected error rate and what I manage with a notebook, where I have the opportunity to type my comments up in Word, spell check them, and review them on a real screen. Comments typed on my own articles over the past month are essentially my brain’s raw output…compounded by the shitty keyboard. Mistakes made in these threads are my brain’s corrected output: something I have no excuse for, but find fascinating nonetheless.