7 posts • joined 30 Oct 2008
I am with the zealots, on the whole. Already thinking how to work this tech into my projects.
One concern is that the federation model is not yet stable. From the draft protocol spec:
"2.5. Wave ownership and server authority
The operational transform used for concurrency control in wave is [currently] based on mutation of a shared object owned by a central master. "
Reading here and elsewhere in Wave's docs, it appears that (elements of) Google would like Wave to be more federated than it currently is. Because this uncertainty lies at the root of the system and because 'architecture is politics', we might expect a lengthy period of wrangling before a standard becomes stable.
Either way, I hope the notion that Google must sit astride all Wave data has been put to bed. That is simply incorrect. Wave is truly an open system in principle. In practice, system designers and application owners will need to step up and compete with Google if it is not to remain a de facto Google platform.
Props to TheRegister.co.uk for having promoted this issue in its pages and to its readers for keeping the flames hot. Elsewhere we are told that proper journalism and the net are considered inimical. Yet this issue may eventually be resolved so that the interests of democracy prevail over those of private parties. In no small part thanks to persistent lobbying online from citizens and professionals.
last post for me
Agreed (to disagree). It is also time for me personally to switch gears in a more significant way. For the past couple of years, I have been thinking and working a great deal on Information Management in the abstract, and I am currently in the process of finishing a prototype to demonstrate a key principle that I want to proselytise. From here, I need to be focusing more on how to apply it to subject matter domain(s).
You are right that I do have some experience, though latterly more as a generalist IT manager than as a specialist KM consultant type. I have found that there are inherent tensions that a designer needs to acknowledge and work with. The weight of any factor may change through a system's adoption life-cycle. As I reflect though, I find I cannot adequately summarise these. Here's an incomplete list:
- predominant user types (early adopters; heavy SOP users; business managers)
- user status (internal / partner / external)
- homogeneity of users' language / business culture / social graphs
- scale of data / number of discrete sources / granularity of content / language
- known unknowns / unknown unknowns
- business criticality of content
- homogeneity of data sources and content
- organisation's culture w.r.t. information (willingness to adopt new practices; functional/regional/business line demarcation; command and control / bottom up; acceptance of external data sources)
Although I do agree with the adage about SOP, I also find it parochial. It applies to me and to many people I know. On the other hand, it seems to be culturally specific. I have worked in Japan and there I found the adage may hold but only in exceptional cases. It feels more like noise than signal in Japan, whereas in the UK the converse seems true. This is just anecdotal and subjective, but you asked for my opinion, so there you have it.
We seem to be at cross-purposes, but enjoying ourselves nonetheless. Probably nobody else is reading this thread by now.
I may be missing something, but I keep coming back to the (mis?) understanding that all three of your exclusive and complete alternatives neglect to reference anything of the sender's intent or meaning, which is what I meant by signal in the data. Your three possibilities only seem to reference the receiver's understanding of what they think they have received in the data. If this is a correct characterisation of your argument, I cannot agree with your definition of understanding. Indeed it is so patently wrong (to me) that I really did think you were just joking around. I do not believe that now.
Perhaps you are not familiar with the work of Shannon, dubbed Information Theory. Entropy is not fuzzy logic, but it does describe a kind of grey. Entropy is actually uncertainty: the lower its value, the less uncertainty the receiver should in truth hold about the data being signal rather than noise, or vice versa. It was originally described in terms of a communications pipe (coming from Bell Labs), but I believe it can apply to any application of information transfer. We agree on most of the other things you've written here, so if we can get over my second paragraph in this post then you'll probably also see why entropy is applicable to this thread. Of course, you might see dealing with uncertainty as dancing with the devil. BTW don't read more into my allusions than common English idiom.
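To make the point concrete, here is a toy sketch of Shannon's measure (my own illustration, not from any spec we've been discussing): entropy over a symbol stream, in bits. A stream the receiver can largely predict scores low; a stream where every symbol is equally likely scores the maximum.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy H = -sum(p * log2 p) over the observed symbol distribution."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A mostly predictable stream carries little uncertainty for the receiver...
print(shannon_entropy("aaaaaaab"))  # ~0.54 bits: safe to bet on 'a'
# ...while a uniform stream over 8 symbols carries the maximum.
print(shannon_entropy("abcdefgh"))  # exactly 3.0 bits
```

The same arithmetic applies whether the "symbols" are bits down a pipe or terms in a document, which is why I keep bringing it back to this thread.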
I am slightly less cynical about the intention of organisations to want to do information / data / knowledge management. This may be just down to our different workplace experiences.
On the whole it is human nature to accentuate differences. One thing that you and I and Polanyi seem to have in common though is the importance of the personal or subjective perspective. Though for you, this seems to be all-important, whereas I do believe communication is a two person (or machine) activity in which something real and valuable is transferred. Information is an artifact of both communication and (as you rightly said earlier) of interpretation.
A simple but useful place to establish agreement might be over the term 'exformation'.
- Andrew AKA Claude Shannon
There is a qualitative difference between reasonable understanding and perfect understanding. If the latter implies a data source containing no signal, semantic content or whatever, then it is pointless (unfalsifiable) to say the data has been understood. Perhaps your belief / knowledge fig-leaf did cover up the original sin, but I believe you were just messing around with all that stuff.
People, like the grey-beards, usually have little trouble interpreting data to a reasonable degree of understanding, or fidelity to the intent of the originator. And some do better in general or on specific tasks than others. This could be tested empirically, but it is surely intuitive anyhow.
What we need are systems that can do as good a job as those folks, but on the larger quantities of disparate data now prevalent in the workplace, which make it unfeasible to manage everything using dedicated or even time-sliced expert staff. Getting a good schema is the tricky part. I propose that a schema must emerge out of ordinary use of data and its interpretation to information. Entropy, not logic, will be at the core of such a system.
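A toy sketch of the kind of thing I mean (this is my own illustration, not my prototype): score each term by the entropy of its occurrence distribution across documents. Terms spread uniformly everywhere (high entropy) behave like noise, while terms concentrated in a few documents (low entropy) discriminate, and so are candidates for emerging schema fields.

```python
import math
from collections import Counter, defaultdict

def term_entropy(docs):
    """For each term, the entropy of its distribution across documents.
    High entropy = spread evenly (noise-like); low entropy = concentrated
    in few documents (discriminative, a candidate schema element)."""
    per_term = defaultdict(Counter)
    for i, doc in enumerate(docs):
        for term in doc.lower().split():
            per_term[term][i] += 1
    scores = {}
    for term, counts in per_term.items():
        total = sum(counts.values())
        scores[term] = -sum((c / total) * math.log2(c / total)
                            for c in counts.values())
    return scores

docs = ["the invoice total", "the delivery date", "the invoice number"]
scores = term_entropy(docs)
# 'the' is everywhere (highest entropy); 'invoice' in two docs; 'delivery'
# appears in exactly one doc, so its entropy is 0 bits.
```

Obviously a real system would need weighting, smoothing and feedback from actual use, but the principle — let the statistics of interpretation surface the schema — is the one I'm arguing for.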
IT? Stands for Information Theory?
what they do best
Imagine there existed a search engine that was used in 100% of all web transactions involving payment.
What share of the whole search market would it need?
What volume of business would pass close to its gaze?
How would this affect the business model of its competitors?
Having achieved 100% share, what small tweaks to its operating model could generate significant income without jeopardising the monopoly?
If MS has only ever done one thing for its investors it has been to monetise a monopoly.
rare metals for instance
"Tim Worstall knows more about rare metals than most might think wise"
To address the theoretical / empirical distinction highlighted by several previous commenters, perhaps Tim could tell us how commodities markets generally price in expectations of future economic growth.
Are there circumstances in which it is possible to zero out supply variation in one's analysis, to be left with roughly the impact of demand for (consumption of) resources on price, under changing expectations of future GDP growth?
In other words, does he agree with those analysts who believe that the recent across-the-board commodity price slump is largely caused by the markets' revised expectation of economic slowdown?