66 posts • joined Monday 9th April 2007 21:35 GMT
Did you actually think ...
.. that a marketing droid from Dell would actually know anything about technology?
On the other hand, dealing with Apple's VAR or enterprise programmes is indeed excruciating. Been there, tried that, gave up. The pitch is all wrong and it ends up looking hard to deploy, hard to package, hard to integrate, hard to manage, with terrible support to boot.
Storage isn't a barrier to virtualisation and hasn't been for some years. Where did you get that notion? Perhaps from some PR officer.
No, actually, for most enterprises it's application and OS vendors who are the problem, usually through bogus/archaic licensing or support certification constraints. Not technology problems at all.
The real victims
Anyone who self-supports Red Hat Linux and patches their own kernel.
Okay, they were never RHEL's target market but could often be found being productive in otherwise hostile enterprise environments.
My rural reception with Optus has massively improved since updating to an iPhone 4, a 900MHz-capable device. I still retain a Telstra handset for alpine cycling but on the cheapest plan; it's just for emergencies.
Bicycle shops are in the same boat. They were part of the recent whinge brigade along with Gerry Harvey et al. Yet the typical local bike shop experience remains one of surly service, terrible advice, and ridiculously high prices.
Go online instead and receive instant, high-quality advice from a friendly website selling top-notch goods at rock-bottom prices.
These guys, it's as though they *want* to go bust through lack of imagination.
Java is now a product
And like any marketing machine, Oracle requires a new release every X months to sell more product. Whether those features have merit or not.
I'm quite sure that Oracle's product management regards the JCP as a huge irritation and can't wait to be rid of it.
It's a sorry end for what was once a promising language. But as the new COBOL it will linger for many decades to come, dolled up and whored out to whoever needs her.
Just another crappy php website
That's all Facebook is and will ever be. A big crappy php website perhaps, but a crappy php website nonetheless. So this is a big crappy webmail with chat? Woohoo, no-one ever did that before.
Target market: American teenagers.
Email killer it is not
Note on doing business with Microsoft.
If you get into bed with Microsoft, expect to be the one biting the pillow.
That is all.
overlong article missing the point
Given that the underlying protocol was a real-time federated social publish/subscribe service, they'd have been better off making the front end a Facebook killer. Instead they faffed about trying to make it a multimedia Instant Messenger, only one you have to visit a website to use and that didn't work in IE. Oops.
Well engineered but poorly productised and didn't work for nongeeks. Or even geeks at nongeek workplaces.
Oddly reminds me of Sun Microsystems.
The idea is to make a hash of their mission?
If you're going to parody Fry you need to be articulate and funny.
So Firefox told me to update flash. I duly followed the link.
What a disaster. Adobe are the new RealNetworks when it comes to awful product update mechanisms.
First strike against Adobe: not using the native plugin update mechanism.
Second strike against Adobe: trying to include some other obnoxious download that I don't have already, don't want, and had to untick.
Third strike against Adobe: sending not the update, but some Download Manager. Yeah, like I need another product-specific Download Manager. F*** off.
Fourth strike against Adobe: the download manager failed to pick up our corporate proxy settings correctly and the download failed, citing an integrity check failure.
Fifth strike against Adobe: their lousy download manager then hung and I had to bring up Task Manager to kill it.
Stupid chumps. Bad product design. Bad engineering. And they wonder why product elegance snobs like Steve Jobs have the knives out for Flash?
Perhaps Google could attach that "wireless hoover" the Senator also hilariously claims is sucking up the Internet Banking transactions? It'll help clean the portals of these 20,000 scams.
If it was the Minister for anything else I wouldn't care about technical competence. Unfortunately he is our Minister for Broadband. What a chump.
It may be Larry, but ...
... he's right on this one. Sun had terrific engineering but horrible productisation. No idea about bringing technical products to market. It's astonishing they stumbled along for so long at all, and the ponytailed one did little to improve matters. From Java to the SPARC - it almost always fell to others to make products built on the core engineering that enterprises actually bought (e.g. IBM, Oracle, BEA for Java. Cray and Fujitsu for the SPARC). The biggest squandered opportunity of all must surely be the Sun Ray end-user computing architecture, eclipsed by VMware and Citrix by sheer force of vastly better marketing.
The only glimmer was the 7000-series Unified Storage platform, which could've/should've been a NetApp killer. But it was too little, too late. Perhaps Oracle can still channel that potential better.
Maybe Schwartz saw himself as a Jobsian tech-visionary figure, only with enterprise rather than consumer products. But I think he lacked the charisma and his company certainly wanted for execution. Larry makes me nauseous but surely does both better.
It takes big cojones to come clean about your security blunders. The vast majority go unreported - how many of *your* passwords have leaked without your knowledge?
The whole notion of site-specific usernames/passwords is a horrible anachronism. I feel resentment every time yet another crappy site asks me to sign up for a useless account.
PCoIP: not as good as I'd hoped
Looks like a rebadged/OEM'd Teradici device. Which, given the PCoIP support, would hardly be a surprise. The IBM PCoIP device is very similar again in form factor and specification. They're very likely all based on the same Teradici reference design.
We studied VMware View as a wide-area VDI solution.
A pity that PCoIP seems so bandwidth hungry, because I was greatly looking forward to VMware View supplying a better protocol than RDP. In lab tests, PCoIP was only slicker than RDP or ICA over high latency links where bandwidth was not an issue, but borderline unusable even for web browsing over commonly encountered lower-bandwidth connections e.g. 512/512 DSL. Visible tearing, painfully slow scrolling & updates etc.
Even after tuning in conjunction with a VMware consultant it required at least a megabit to provide satisfactory performance for just one remote Windows XP desktop doing browsing & word processing on a typical business DSL circuit.
Trying to stuff a call center or entire retail site's traffic down affordable business-grade connectivity looked infeasible as a result. Especially since we then have to pay for new thin clients, virtual desktop backend servers & licensing, and of course a DR site for same. It worked out cheaper to stay with the conventional desktop model.
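The back-of-envelope arithmetic behind that conclusion can be sketched as below, using the ~1 Mbit/s-per-desktop figure from our lab tests (an observed value for our workload, not a vendor specification — your mileage will vary):

```python
# Rough PCoIP sizing: how many remote desktops fit on a branch link,
# at ~1 Mbit/s per desktop, reserving 20% headroom for everything
# else the site needs (VoIP, email, printing).

def desktops_per_link(link_kbps: int, per_desktop_kbps: int = 1000,
                      headroom: float = 0.8) -> int:
    """Desktops a link can carry with headroom reserved."""
    usable = link_kbps * headroom
    return int(usable // per_desktop_kbps)

# A 512/512 DSL circuit can't carry even one desktop with headroom:
print(desktops_per_link(512))    # 0
# Even a 10 Mbit/s business service only carries a handful:
print(desktops_per_link(10000))  # 8
```

Run the numbers for a call centre of fifty seats and the connectivity bill speaks for itself.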
It was suggested that we could/would be able to deploy accelerators, but I'm primarily interested in protocol innovations that eliminate this additional cost and significant complexity - otherwise what's the point? I could just deploy RDP with WAN acceleration.
Our conclusion was that the much lower-impact ICA is still the leading national-scale desktop protocol for those who can't justify the cost of reliable very-high-bandwidth connectivity. I think most enterprises will fall into that category.
PCoIP only starts to look interesting for high latency global-scale remote access e.g. offshore developers, remote trading desks.
A thousand times no
As with any technology that turns a stateless service behaviour into a client-specific proxy, this is a friend to censorship and an enemy of end-to-end transparency.
In this case, it's yet another attempt to subvert the primacy of routing protocols in making routing decisions. Hence:
"This gives the authoritative provider a better idea of where users are located, which means it's more likely to send users to a nearby data center when resolving a net address"
We already have two solutions of that: BGP and Anycast, and they already work for everybody rather than just secretive colossal-scale providers. And without compromising the key Internet design principle of end-to-end transparent layering.
This is an "I-Know-Better-Than-You" kludge that serves nobody well.
Confirming that mine is exhibiting occasional five-to-ten-second pauses and has twice required a hard reboot since updating. I wouldn't call it "bricked", but clearly there are stability problems with this release.
I do wonder why people seem so keen to scream "nyah nyah told-you-so" like children at incredible length. Don't you people have lives? I don't particularly care for Mr Jobs, but the iPhone is the first phone or PDA that I've actually enjoyed using.
Debian project overrun by nutters, film at eleven
The loud, hard-headed, intransigent propellerhead technocrats at the Debian project will just perceive this as an attempt to undermine their authority, credibility and integrity.
Debian is the place where free software goes to be tied up and beaten because it likes it.
"Question for people: (maybe a rhetorical one): how can a company own a technology that is open source?"
For one thing, open source isn't the same as public domain. But that's actually beside the point.
Ownership of Java isn't about the rights to a source code repository. It's about owning the standardisation process, especially the JCP. It's also about perception amongst enterprise customers (i.e. the ones with money, and who have to plan their technology choices years ahead) and product developers (e.g. those with embedded Java, which is practically every device these days short of the iPhone) of who the pre-eminent entity is.
You may be a coward, but your mate must be an idiot.
It took me under thirty seconds to work out how to cut and paste, working simply off trial and error. And it's working between apps.
I suppose you /were/ in the pub, with all that implies.
Paris, because she's smarter than you.
In this context "socialize" is Enterprise-speak for "tell people".
Thus creating synergetic leverage amongst key stakeholders, building core competencies across a matrix of functional silos! *cough*
A matter of perspective.
I don't use the word backup any more; when people do they get my lecture on the Four Rs of data copy usage. Because there isn't ONE reason why we take copies, or replicas, or backups, or whatever you wish to call them, there are four!
As all the smart people have recognised already, what matters isn't that a copy is made. What matters is what you want to do with it afterwards. And it ain't so simple once you get beyond protecting your email and photo albums.
So the Four Rs of data copy usage are:
* Restore. Pull back one or more historical files, tables, volumes or what have you because the current working set was deleted or corrupted.
* Recover. Restart data processing using an alternate facility due to loss of a primary.
* Repurpose. Use a primary data set for production support, report generation, system test, or system development.
* Retain. Meet regulatory and legal requirements - and broader expectations - for data archival and discovery.
The interest in each outcome should guide you as to which storage technology capabilities to buy or build. Some are surprised to learn that traditional tape backup is not the best path to achieving all four at efficient cost in one package.
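To make that concrete, here's a sketch of letting the Four Rs drive the shopping list. The capability on the right of each R is an illustrative example of my own, not a recommendation:

```python
# Map the Four Rs to example storage capabilities worth evaluating.
# The right-hand choices are illustrative, not prescriptive.
FOUR_RS = {
    "Restore":   "snapshots + file-level backup catalogue",
    "Recover":   "asynchronous replication to an alternate facility",
    "Repurpose": "writable clones of production data sets",
    "Retain":    "WORM archive with indexed discovery",
}

def shopping_list(requirements: set) -> list:
    """Turn the outcomes you actually need into capabilities to assess."""
    return [FOUR_RS[r] for r in sorted(requirements)]

# Tape alone scores poorly: it serves Restore and Retain after a
# fashion, but Recover and Repurpose need different machinery.
print(shopping_list({"Recover", "Repurpose"}))
```
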
I've heard the phrase "Information Lifecycle Management" (ILM) used when introducing this topic, but I avoid doing so because it's an industry term and everyone else glazes over. Even your IT colleagues will look at you funny, suspecting a dose of Gartner kool-aid. To be fair, ILM "done right" is a bigger topic, encompassing people and process as well as technology: it is comparable to ITSM, PM and the SDLC as a complete subdiscipline of ICT. It is well worth treating as a separate competency once you have some scale.
VM failover is not clustering.
Much as I agree that *all* vendors should support hypervisor-based deployment of their software, this is a case of comparing apples with onions.
RAC provides horizontal scalability and fault tolerance in the event of a database crash or a hardware failure. VMware HA is a restart mechanism. These articles and blogs are full of verbal gymnastics, trying to equate two very different technologies that have very different use cases, mostly for the sake of playing childish vendor one-upmanship games.
Those of us who actually design, build and operate the stuff aren't fooled. Alas, non-technical users might be.
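A toy model makes the distinction obvious. With RAC-style active/active clustering, surviving nodes keep serving through a node failure; with a VM-restart mechanism like VMware HA, service stops until the guest has rebooted elsewhere. The timings below are invented purely to illustrate the shape of the outage:

```python
# Toy model: minutes of total service outage after losing one node.
# Active/active clustering (RAC-like) vs. restart-based HA.

def downtime_after_node_failure(active_nodes: int,
                                restart_minutes: float) -> float:
    """Outage duration following the loss of a single node."""
    if active_nodes > 1:       # active/active: survivors carry on
        return 0.0
    return restart_minutes     # single instance: wait for the restart

print(downtime_after_node_failure(active_nodes=4, restart_minutes=10))  # 0.0
print(downtime_after_node_failure(active_nodes=1, restart_minutes=10))  # 10
```

One is fault tolerance; the other is automated recovery. Both useful, not interchangeable.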
@All the SPARC apologists
Sorry, chaps. Oracle isn't going to keep the SPARC alive just because it threads well. Large swathes of their technology strategy - from their Cloud Computing model to the hilariously named Unbreakable Linux - scream "We Love Intel". Heck, they even said exactly that in late 2008, mostly to annoy Sun and point up how market-trailing the UltraSPARC-based server range had become and how overdue Rock was.
Any predictions based on technology fundamentally misunderstand Oracle's business model and reflect Sun's own mindset, inside and out. Sun were always great at engineering but lousy at productization (examples: the game-changing E10K was a Cray product design, and the current M-series are Fujitsu boxes). The expectation that great engineering leads naturally to commercial success is/was a pervasive fallacy. There's a hint that someone got a clue with the Amber Road product direction, but that was very late in the game to start competing with NetApp.
Best path for the SPARC's survival now is a sale to Fuj. Expect Oracle to focus on Sol x86. Because Nehalem blades perform well enough to deliver the vast majority of enterprise workloads, and they readily repurpose for Linux and Windows workloads. So the SPARC's legacy of exceptional multiprocess/multithread performance just ain't that interesting anymore.
It's great to see someone resurrecting and productising distributed memory, a concept that's been around for donkey's years.
@Matt B: DEC Memory Channel would be more akin to, say, RDMA over Infiniband, which if I read correctly, is just one piece of this stack.
At that price point, the use cases for this will remain slim and are basically identical with those of main-memory databases, i.e. transactional systems with large working data sets. Although I can also see possible uses by the star-join OLAP crowd.
Ultimately this stuff just forms part of a hypervisor and permits what amounts to main-memory lending between VM hosts. The cloud computing people will love being able to have a memory rack next to their CPU rack.
And thus we reinvent the mainframe ...
Get it right.
NetApp did not kill off SnapMirror, which is the core replication software.
They killed off the ReplicatorX software line, which came with the Topio acquisition, was branded "SnapMirror for Open Systems", never really found a market, and is more or less superseded by Open Systems SnapVault albeit at a file level rather than a block level.
The value of VOIP
VOIP is not about cheaper calls, no matter what the marketing claims.
Historically, there were only voice bearers. Then we started running data circuits over those voice bearers; the PRIs and the OC-series. Then voice itself became packetized and suddenly we're running voice over data - but still over those same voice bearers. The technology ramps up, but underneath it's still a circuit, and you're still using roughly the same bandwidth to make a call.
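The arithmetic behind "roughly the same bandwidth" is easy to show. A G.711 VoIP call carries the same 64 kbit/s payload as a TDM DS0, plus packet overhead; the standard figures are 20 ms packetization and 40 bytes of IP+UDP+RTP headers per packet (link-layer overhead excluded here):

```python
# Per-call bandwidth for a G.711 VoIP call at the IP layer.
# 64 kbit/s codec payload + IP/UDP/RTP header overhead.

def g711_call_kbps(packet_ms: int = 20, header_bytes: int = 40) -> float:
    codec_bps = 64_000                      # G.711 payload rate, same as a DS0
    packets_per_sec = 1000 / packet_ms      # 50 pps at 20 ms packetization
    overhead_bps = header_bytes * 8 * packets_per_sec
    return (codec_bps + overhead_bps) / 1000

print(g711_call_kbps())  # 80.0 kbit/s -- a DS0 and change
```

So per call, packetization buys you nothing on the wire; the win has to come from somewhere else.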
The true value of VOIP is in rich voice functionality and the lower cost & complexity of operating a single data network with voice as "just a service".
As more services converge onto data networks (storage is next with FCoE) we'll see increasing utility from that notion. Right now, we're still building infrastructure. So this is the expensive stage.
But in the meantime, you still need a reliable bearer at the bottom of it all. If you're serious about business-grade branch-office VOIP, you'll go BDSL, with SRST handing off to local PSTN, and a very highly available core switching capability in your DC.
And don't expect an ROI under five years.
If you look at the subtext of the engineering direction for 2009, they're gearing up for a crack at the general (rather than media) mid-sized enterprise market.
They have successfully productized and monetized an open-source Unix and could easily absorb the best of Sun's R&D people. They're already cherry-picking bits of Solaris (DTrace, ZFS &c). Apple could give Solaris the makeover it deserves.
They could do with a decent x86 server series. But they've never been afraid of using non-Intel processors, either.
What would Apple do with Java or with StorageTek, though?
I'm no Sun fanboy (we just ditched Sun storage and went with NetApp), but Matt, you're blowing way too hard.
You started out with a bunch of wild claims that were obviously wrong to anyone like me (a common-or-garden infrastructure architect) who'd actually stopped to read the technical briefs first, and then transitioned majestically into flamebait and TLDR territory.
It's the fast track to the "not credible" category.
NetBook & ARM
Contradicting the article: there isn't any particularly good reason why the iPhone OS can't be ported to another chip.
If iPhone OS is, as claimed, Darwin under the bonnet, then Apple already have 99.9% of the porting job to x86 done - it's OS X. No surprise that the iPhone emulator runs so well. It would merely remain to merge the platform differences and recompile, or (more likely) maintain something like an integration tree (what we used to call a Vendor Branch) for the hybrid offspring. Or whatever Apple's SCM equivalent is.
Maybe not a trivial job - but well within the capabilities of even a small UNIX kernel hacking group. The rest, then, is UI development - also well within Apple's capabilities.