Good points, but I would like to add more detail.
You're quite right that Apple have rarely truly innovated. Most of their "innovations" were done by someone else first.
Apple's true advantage lay in two areas. The first was attention to detail and a strong sense of direction. Steve Jobs, once involved in a project, dominated it. Don't want Steve's idea of an email client? Tough. That's all Apple will give you. But it was, admittedly, above average. And that was enough.
Their second advantage lies in two distinct but related areas - marketing and journalists. Apple have great marketing, and that helps. But many journalists aren't actually technical - they're writers who are merely ten per cent more technical than the average person. So they use Macs, because Macs are prevalent in their industry (thanks to an early GUI and the availability of DTP software), and they use them as personal machines too. Apple benefits from a very friendly press.
So you're right that Apple's "innovations" were, in every case, not actually innovations. From MP3 players to capacitive screens to multi-touch, Apple has been about as innovative as a block of granite. But they know how to do a demo, and their three-ring circus with Jobs as ringmaster was certainly good copy.
Apple are not dumb. They knew the iPod would be cannibalised by phones - remember the iTunes phone they did with Motorola before the iPhone? They have a good grasp of the market, and have proven themselves to be adept at pre-empting their competitors by entering the market early - witness phones and tablets.
Their weakness is in their uncritical belief in their own bullshit. There is no reason to have a tablet smaller than 10" - until eBook readers primed the market, and Google/Asus stormed it with the 7" Nexus. And Apple had to ship a smaller iPad.
There is no reason to have a larger phone, as your thumb can't reach that far. Until the Samsung Galaxy S3 and Note 2 sold like hot cakes, and Apple had to put a larger screen on their next iPhone.
Journalists are very happy today to forget that we had apps on Palm/Handspring Treo and Windows Mobile devices almost a decade ago, or that the original iPhone was web apps only - despite not even supporting 3G... But Apple's packaging and marketing make up for both history and shortcomings.
No one thing Apple has done has been truly innovative or ground-breaking. But they can more than make up for that with positive press coverage from poor "technology" journalists.
Their big problem now is that they charge more as a luxury brand, yet deliver less due to their obsessive control. Their competitors aren't necessarily better, but are at least both cheaper and more flexible.
Except even the most technologically illiterate journalists can't help but notice Apple losing their lead now.
Apple are in a corner - give up control and risk loss of profits and bad publicity, or carry on and hope to keep working on high margins as a luxury brand. I suspect they will continue as a luxury brand for as long as they can, before sliding slowly back into obscurity. They certainly show few signs of wanting to give up control, being fanatical about both what their devices can do and what they will allow themselves to take 30% of in their stores...
Re: Remote Control
Yes, PowerShell can connect to a remote machine and yes you can administer it.
But have you tried it for anything beyond core Windows administration - Exchange Server, for example? It's not very smooth. If I SSH to a Linux/BSD-based MTA, I'm fine - every tool that's installed on that box is now available to me. And because I'm actually running on the remote machine, there are no issues when I run them. It all just happens over port 22, which is basically an encrypted text pipe. (I simplify, but not by much.)
If I connect to an Exchange Server via PowerShell, I need a port 80 connection over which to fetch the special PowerShell cmdlets that Exchange needs. I then have to create a new session on that machine, and import that session into my current session.
(See http://technet.microsoft.com/en-us/library/dd297932.aspx for details.)
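To make the difference concrete, the Exchange dance goes something like this (the server name here is invented for the example):

# create a session against the Exchange server
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://exchange01.example.com/PowerShell/ -Authentication Kerberos
# pull the Exchange cmdlets into my local session
Import-PSSession $session
# only now do Exchange cmdlets work, e.g.:
Get-Mailbox bob
# tidy up afterwards
Remove-PSSession $session

Versus the *NIX version: ssh admin@mta01, run what you need, log out.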
It's the same for SQL Server and other Microsoft server software.
You could view this as a security boon, albeit security by interface obfuscation. Personally, I view it as a bloody stupid way to work. The *NIX method is both more seamless and more intuitive. PowerShell still has some way to go before it really rivals it.
I fall more towards the CLI camp
GUIs are nice, but they change. Because CLIs are often scripted, they become an API, and therefore don't change much over the years.
A CLI is also, perversely, easier to document - documenting a process for a GUI often results in 20 pages of screenshots, with no real detail about what you're actually DOING. Whereas a CLI document is often much shorter, so people feel they should pad out the document with a little explanation...
Sometimes, a GUI is superior - as others have pointed out, some selections can be hard to do with CLIs. Although if we had tools for the CLI like 4DOS/4NT/Take Command's SELECT, we'd be laughing there too.
What I really wish is that each played to their strengths more. I've seen too many GUIs that should have had some decent reporting or status monitoring, but were instead just bunches of buttons and checkboxes.
Conversely, I've seen CLI-administered programs that were pretty poor, with minimal scriptability (they demanded interactive input mid-run) and more of an "edit the config file" attitude than an actual tool to make the change.
There is no panacea. Which is lucky, as it keeps us all employed...
Novell's allegations are well documented - basically, MS had a bunch of APIs for Chicago (what shipped as Windows 95) and was encouraging their use by third party companies. They then pulled them in a late beta.
So effectively Microsoft suckered Novell into developing for something that they never intended to ship.
See http://www.groklaw.net/article.php?story=20111121214458515 for all the gory details. But beware - it'll take a long while to read!
Fax isn't a good indicator
Disclaimer: I work as a messaging systems engineer - email, fax, IM.
Fax isn't as great an indicator of "backwards" as you'd like to think.
The plain fact is that a fax is a legally understood medium in every jurisdiction that matters. Email isn't.
So if you send an offer, a contract, a complaint or other documents by fax internationally then it's usually as legally binding as the real thing via the post. Those court battles were fought years ago, and won in favour of the fax.
The same cannot be said of email.
So when doing business, especially internationally, fax is often preferred. (Just yesterday, I spoke to someone whose small business only has a fax machine for dealing with suppliers in Europe.)
In some cases, they may have to send documents by fax because it's required by law, or because lawyers (on either side) have advised it be used as a "belt and braces" measure.
This doesn't mean that Intel's paper isn't valid - just that it may have picked the wrong metrics.
Interesting to see that people don't know what the cloud is even if they're using it, though - the buzzword is overused, so I shouldn't be as surprised as I was.
It's not an IT issue.
This is a cultural issue, not a technical one.
Basically, if you're doing things properly, you shouldn't need the emails.
Most of the documents people are looking for should be stored somewhere other than email. A document management system, a network share - whatever works for that team/group/organisation.
But email is attached to a person. Just because Bob closed the Acme sale, should you have to keep Bob's email forever? Even after he's left? No, the documentation for that sale - the terms, the contract, etc. - should be somewhere that ISN'T BOB'S MAILBOX.
However, people are lazy feckless ****holes who just don't get this, so we end up having to rummage through their crazy personal filing system to find a document that should have been stored somewhere properly.
The best, easiest way to reduce your email storage costs - for both compliance purposes and otherwise - is to follow three simple steps:
1. Make it easy to get stuff out of email and to somewhere secure, shared and useful.
2. Have low quotas.
3. Single-instance on commit to the compliance archive, to reduce storage costs.
If, for legal purposes, you have to keep every message for n years, your storage costs are always going to be high - because you'll be grabbing everything at the router, rather than relying upon backups from the user mailbox.
From-the-router style compliance archiving, even with single instancing, is ruinously expensive - and you should probably look at systems like Centera for the belt and braces you'll need.
For anything less, the solution is simple - get the stuff out of email. Make it gross misconduct (a sackable offence) for employees not to be doing that.
Your problem will then be document management / information management, but that's a nicer problem to have than "there's an email we need, might be one of these eight people, we think it was sent in 2008, why would you need an exact date?".
Make the cultural change, it'll save you loads of money.
Exchange did work that way (single-instanced storage), with a couple of caveats that I won't bother going into here.
Microsoft moved away from it, mostly due to performance issues. Single instancing means you have to "garbage collect" your mail stores, to remove the mails that have now been deleted by all owners - a 5MB attachment sent to 100 users is stored once, but that copy can only be reclaimed after the last of the 100 deletes it. It also means transparently (from the user's POV) creating new copies when a user edits the email somehow, which they often manage to do accidentally.
Basically, it costs in performance terms and makes the DB messy, and fast large disks are now cheap. So Microsoft decided to ditch shared storage and go all out for speed - Exchange 2007 scaled SIS back to attachments only, and Exchange 2010 dropped it entirely, so any recent Exchange implementation/upgrade will lack SIS.
What they need to do is one of the tracks from the Total Annihilation soundtrack. That was a superb bit of work, which stands out to this day.
Consider yourself corrected, or at least challenged to prove your point!
I've seen this argument time and time again, and I always ask for the same thing: Proof.
There are specialised h.264 decoding parts. They're usually found in TVs and the like, where you don't want to have to put in too complex a software system.
But when people say "hardware acceleration", they usually think something along the lines of "the processor coordinates data transfer via DMA or some other bus to a special chip which decodes the video and puts it directly onto the screen".
Yep, those special chips in dumb devices like a TV do that, and do it at very low power and heat output.
In a phone or on a laptop? There is no block of hardware dedicated to h.264 in that manner. That would be nuts, because it restricts you.
Instead, there are blocks of specialised computation that aren't much different to MMX, SSE, and so forth. That's what people are talking about when they talk hardware acceleration on a more complex device.
Think about it - otherwise, the iPhone/Android "h.264 chip" would need to be connected directly to the orientation sensor, and would be doing the animation AND resizing when you turn the device from one orientation to another. That's one heck of a complex bit of hardware when compared to the original vision of "chip which does video".
Basically, if the h.264 decoder can use those blocks, then so can WebM. It's just a matter of doing so - and it has already been done, for the most part. Some of the first patches to the decoder that I heard of were ARM assembler versions to improve speed, for example.
Hardware acceleration isn't an issue unless you have a device you can't get a software decoder update for. And the device manufacturers & developers have pretty much sorted that. (Although I wouldn't hold my breath waiting for Apple to join in!)
There are still dedicated "dumb" hardware decoders out there, in camcorders and TVs. But for the use cases you mentioned (desktops, laptops, phones) WebM can be accelerated, and without much hassle.
It's down to the willingness of the vendor, and most seem willing. Check the WebM wikipedia article for a nice list...
Of course, I could be wrong here. If you know otherwise, then I want proof. I want a spec sheet that shows a common GPU has a dedicated in-silicon decoder of a dumb nature, one which could not be reprogrammed to handle a new size/orientation/output destination or be partially reused for WebM decoding.
Without wishing to sound snotty, that places the ball in your court. I've put forth my understanding, and you now have to prove me wrong. Which I would welcome, by the way.
I've been looking for that magic spec sheet since WebM was first announced, and haven't found it yet. Nobody has presented it to me, despite numerous challenges to do so. I'm not yet tempted to call the hardware acceleration argument total balls, but I'm pretty close to it!
Is the voicemail stored on a computer system?
If the voicemails are stored on a computer system, then access by anyone without authorisation is illegal.
The Computer Misuse Act 1990 clearly states that unauthorised access is itself a crime, even if you modify no data.
At no point does the law state how the computer system is to be accessed. Keyboard and mouse, Kinect, phone and number pad, phone and voice recognition, direct neural command - it's irrelevant.
If you're not the owner of the computer or don't have permission from the owner to access it, then accessing it is breaking the law.
Backup? Um... No.
I'm not sure it's in any way accurate to call Ubuntu One usable for backup.
Ubuntu One synchronises multiple PCs, and synchronises your content into the cloud for access anywhere. But if you delete or overwrite something accidentally on your machine whilst it's connected to the internet, then a minute or so later it's toast on Ubuntu One as well. That's hardly a good backup solution...
It seems to me that you've fallen into what I like to call the "RAID trap" - people buying RAID arrays often mistake RAID 1 or RAID 5 for backup, when in fact it's merely providing redundancy against hardware failure.
The bottom line is that if your laptop died, Ubuntu One would keep your data for you. But it can't protect you against your own stupidity...
(And a clarification - I'm not railing against Ubuntu One. I use it myself for syncing machines - just not for backup! It's a good sync service, and it's looking promising in terms of projects using it - bookmark sync, data sync, configuration sync, etc. - which is why I'm a paying customer.)
Dropbox is slightly better for backup - it holds previous versions for about 7 or 14 days (I don't recall which, exactly) and they offer an option for subscribers called "Packrat" which keeps previous versions of files indefinitely.
In the end, I picked SpiderOak for backup. They seem to be the most secure, and give me 100GB for $100 a year, which is a pretty good price. It keeps multiple versions, and is cross-platform with Win, Mac and Linux clients. I was surprised not to see it mentioned here.
For local or local-network backups, check out the rdiff-backup package. Rsync and diff, combining to give space-efficient backups - even to SSH hosts across the network. I've had no problems with it.
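If you're curious, basic usage goes along these lines (hostnames and paths invented for the example):

# back up a directory to a remote host over SSH
rdiff-backup /home/me me@backuphost::/backups/me
# restore a file as it was three days ago
rdiff-backup -r 3D me@backuphost::/backups/me/notes.txt notes.txt

Each run stores only reverse diffs against the latest mirror, so keeping months of history costs surprisingly little space.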
I've had a completely different experience.
Personally, I prefer the UI. I generally find it cleaner and less cluttered than the Windows UI, and often a lot more logical.
Of course, I'm really talking about Gnome rather than Ubuntu. Even Kubuntu can't make me like KDE, which seems to be aping Windows a little too much, and clutters itself up accordingly.
As for installing non-repo software... No idea what you're on about. I've never had trouble with non-repo software. Wanted Opera. Downloaded it and double-clicked it. Up pops a nice box saying what the software is. Click on the Install button, type in my password, and a few seconds later I'm done.
Same story with Bibble 4 Pro and the upgrade to Bibble 5.
Extensive repositories mean I've not needed to install much software from outside them, but if there's a .deb file for it then it's incredibly easy.
The only times it's not easy are when, like VMWare Professional, the software comes as a bundle file. Then I have to drop to the CLI, as you describe. This is not a failing of Ubuntu. Two much smaller companies got it right, VMWare haven't yet.
As for not running existing software - I've found there's very little I need to run in Windows. Linux in general has usable equivalents for all of my needs. Not all of them are free - some I've bought. But they're out there, and seem to be increasing in number.
These days on Windows it's games and that's it. And if Steam comes to Linux and brings Team Fortress 2 with it, then that'll remove 75% of my Windows needs...
Christo - stand up, your voice was muffled by your chair, dear lad.
I've never seen an environment where administrator privileges are required to run Notes. It'll run quite happily with user privileges, unless you've mucked about with the Windows security.
That's based on 11 years' experience with the product, running on every version of Windows except ME (did anyone use that?) and Vista, by the way.
Now, with Vista, you may have a point. The large number of security changes, and the fact that Vista shipped late in Notes 6.5's lifecycle, mean that it's not going to be supported by IBM. So it's hardly a fair complaint, though a valid one, I suppose.
Vista is supported by Notes 7.0.2 and higher, including the newly released 8.0.
What I suspect you've done is installed Notes in single-user mode, where it will require administrative access to the data directory you selected when you installed it. (NOT to the whole machine.)
Users tend not to have those rights, and therefore you get errors.
That's very easy to fix, and frankly not exactly hard for a networking and security wizard like yourself to have determined with some simple testing. Or a search of the IBM knowledgebase. Both of which I'm sure you were too busy for.
Personally, I'd instead look at installing Notes in multi-user mode. Then it'll put a subset of the data directory into their Windows profile, bypassing the problem entirely.
Either way, good luck fixing it.