70 posts • joined 31 Jan 2012
10 Years Late
The municipality of Munich, Germany, went through this kind of consideration ten years ago. There is also MigOS, the project to migrate the Federal Parliament in Berlin from Windows to Linux, running since March 2002.
So, is the UK Government at the forefront of vendor-agnostic IT systems? Strategically supporting technology companies to build a more balanced economy against future market turmoil?
Hardly. Just some hot air to negotiate lower licensing fees, or to offshore more sensitive data and more jobs.
Looks to me like a viable option
The salaries for most range between £25K and £50K; that's similar to what they pay here in the UK. Looks to me like an option: all they do here in the UK is offshore your job, cut your salary and your rate, throw one voluntary or involuntary redundancy wave after another at you, reduce IT budgets... it never ends.
An engineer in the UK is treated like a suspect, as someone who almost has to apologize for doing his work. All the ideas you have are talked down and thrown out of the window. In China, India and the US the engineer is supported in his patent applications and tech startups.
What keeps me here...
Re: Not sure what UEFI is actually for...and why Windows is preinstalled
I'm complaining about Windows being preinstalled on laptops and UEFI on motherboards, even if you buy the motherboard separately as a component... I think ASRock, ASUSTeK, Gigabyte and MSI started that in 2011.
Re: Not sure what UEFI is actually for...and why Windows is preinstalled
Thanks, mmeier, but don't get me wrong, I try not to buy the PC in one piece. For the last PC (which became a server) I bought the components and slapped Debian on it, only to replace the disks, then add memory, then replace processor and motherboard, then SSD disks are faster, and RAID6 is a good thing, and KVM virtualization is really cool, and I always wanted to have my own webmail server....
It's different with laptops, you can't really buy them in components, unless you pretend that you have a broken model and go on eBay to get spare parts, etc.
You have more freedom in burning new firmware on your router (dd-wrt) than in replacing the BIOS on your motherboard.
Not sure what UEFI is actually for...and why Windows is preinstalled
When I buy a new car, I have the choice between petrol, diesel and hybrid engines, and nobody is forcing me to buy the fuel from this or that oil company.
When I buy a new laptop, I usually have the choice between a 13, 15 or 17-inch screen and between a 320GB or a 2TB disk. But Windows comes preinstalled, and I have to pay for it even though I don't want it.
Once Windows has booted up, I'm reminded incessantly that my laptop is suddenly very much at risk unless I pay for the full license of this pre-installed anti-malware software. Except I don't want that particular anti-malware software, and my firewall is on my router already. I am also reminded to take out this Small Business Advantage (whatever that is) and that online backup service (dunno what is backed up to where), and to pay for the full MS Office license.
Depending on the browser I use I am reminded to use Google or Bing as my default search engine.
I then try to relax a bit by browsing YouTube, where I am bombarded with advertisements for beer, some must-have video game, and the latest bloke flick. During those videos, half the screen is suddenly replaced by an advertisement rectangle because YouTube has helpfully worked out that I'm looking for love and should join Mature Dating, because Russian women are waiting for me already.
Wow....this really beats any diesel engine in a car....
IT has come a long way
Obviously, taking all IT and telecoms operations in house is impossible. But IT has come a long way. Did you notice that, in order to increase reliability, car manufacturers have
. reduced the number of parts required to build a car
. standardized these parts across their range and subsequently
. reduced the number of suppliers?
The same is possible in your datacentre: Reduce the number of different operating systems, applications, middleware and databases, and I guarantee the IT operation will become more transparent, affordable and flexible.
Oh, and while we're at it, the number of datacentres can be reduced, too. Instead of datacentres in the UK, Lithuania, Bangalore, Hong Kong, Manila and Chicago, we only need the UK and Hong Kong. And you don't need a layered approach of multiple suppliers and outsourced service providers for all the bits and bobs of your operation; you can also train up staff and/or simplify processes. I've even seen the black swan where a company saved money (!) by insourcing (!) and became more flexible, as it liberated itself from a lock-in situation with a certain supplier.
The time of CFOs overruling the CIO on service delivery is coming to an end.
A comparatively small article, actually. It does not mention service level requirements, agreements and actuals, service continuity management and business continuity processes.
We must not wait until the horse has bolted: the service (read: income) is already lost the moment there is a service degradation.
What we have to realise is that there is no "outsource and forget" anymore: since IT operations are equivalent to the business operations, business continuity planning needs to be in place to cater for service degradation. If the business capability lies in the combination of IT with the business, then IT cannot be a cost centre anymore. It is a profit centre.
Would you outsource a profit centre? The time of CFOs overruling the CIO on service delivery is coming to an end.
Compare it with Calligra Suite first
It is of course reassuring to announce, in effect, that Libre Office is now where Open Office should have been a few years ago. Work has been done to make dialog boxes look nicer and more consistent, and the CMIS integration is good. Most probably there have been a lot of tidy-ups and recoding under the hood.
Calligra has already tidied up its attributes and dialog boxes and really allows highly productive work. I cannot say each component has the same polish as LO, but it is catching up fast, and some components are already eligible for production use, e.g. Krita.
This is the future - maybe the beginning of the end for steel and aluminium
For me, solar electricity and solar thermal power is the future: what's wrong with taking advantage of an energy source which is available in abundance? There are efforts already to create roof tiles that are in fact solar modules. What's holding us back in creating windows and pavements that are in fact electricity modules or thermal collectors?
Is this the end for coal? Not at all: you see, coal is almost too valuable to be burned off. There are much better uses for it, like carbon fibre and graphene. Of course, we won't need to mine as much coal as we do today, but higher-value goods will be made out of it. I'm sure carbon will replace steel and aluminium as building and manufacturing materials.
You want the cooler phone - buy a Nokia
. an IPS LCD WXGA screen in their Lumia 920 that is better than Apple's Retina display
. a touchscreen that can also be used while wearing gloves
. pervasive NFC technology
. wireless charging
. optical image stabilization
. phase change memory in their Asha phones
... and now graphene
. Nokia was recognised as the greenest technology equipment manufacturer in Newsweek’s 2012 Green Rankings.
Looks to me like Nokia is really aggressively pursuing innovation - where is Apple? Where is Samsung?
http://www.thedailybeast.com/newsweek/galleries/2012/10/22/newsweek-green-rankings-2012-world-s-greenest-companies-photos.html#74a6cfe8-c9c4-480b-a6ce-b7f7d5744ada ; http://www.theregister.co.uk/2012/12/17/micron_pcm_asha/ ; http://en.wikipedia.org/wiki/Nokia_Lumia_920
Re: Garbage arch
I thought that as well some time ago, but it does not matter if x86 is "garbage" or not. What matters is the commodification, which results in RISC architectures losing their unique selling point.
I had to learn it the hard way myself, too, but the time of RISC and proprietary hardware architectures is over. I've seen CIOs making strategic decisions to move away from RISC to x86 - not because they love x86, but because they had no continued business justification for SPARC and POWER. To give you a few examples, NYSE Euronext and the London Stock Exchange moved to Linux on x86, and petaflop systems are being built using x86 (http://www.theregister.co.uk/2013/01/30/atipa_pnnl_hpcs_4a_supercomputer/)
Inward investment still welcome, tho
The bank and credit card services, health services, and IT operations of telcos and infrastructure companies that the West is offshoring to India are welcome; however, countertrade, i.e. India offshoring bank services to the UK and buying hardware designed by the West, is forbidden.... Interesting.
Now, in the long run, will young aspiring engineers and other university graduates only find work as shelf stackers and plaster mixers? Well, holding that shovel gets them off the unemployment statistics. No prizes for guessing where the statistics application is hosted, who built it using inter-company transfers, who wrote the software and who supports it.
Who is the loser in the long run?
I mean, as they said in the noughties: "we no can find qualified people 'ere".
It seems the mobile OS becomes a commodity, if Canonical can port their Ubuntu onto mobiles (http://www.ubuntu.com/devices/phone) then other Linux distros can and will do this, too.
Which means the USP in mobiles will be in the hardware, such as who has the better sat-nav, photo sensor, audio chip, you name it.
Can someone enlighten me?
I really don't know what the advantage of MS Office is, in comparison to Libre Office and Calligra Suite.
Can someone enlighten me here?
Whenever I discuss why a company is spending thousands if not millions in licensing money every year on Office products, even die-hard Linux implementers don't want to move away from it. Don't count on support from the ACCA-trained CFO either; he cannot combine technological processes with financial decisions. All I hear are subjective arguments which can be summarized as "'cos we've always done it this way". When pushed, imagined or rarely occurring examples are mentioned, for example:
. "There is no support for Libre Office or Calligra Suite" - not true
. "Visio files won't work" - not true, they can be imported with Libre Office Draw
. "The files our customers send to us cannot be opened with Libre Office or Calligra Suite" - not true, I've seen mostly data being sent in pdf format, also for legal reasons, as it is not that easy to change a pdf file. And, it is easy to export files in docx, xlsx, and pdf format with Libre Office or Calligra Suite. Libre Office even has a pdf import extension.
. "There is no project management application" - not true, there is Calligra Plan and not every employee needs PM software.
By comparison, open source or free products like rsync, Squid Proxy and Apache Web Server are happily installed on mission critical, revenue generating systems, but when it comes to desktop software large expenditures are signed off without accepting questions.
Thus, can someone please let me know MS Office's unique advantage?
...let's buy a couple, slap Debian on it and set up grid computing
If you are concerned about your intellectual property
....you might as well create a private cloud, apply to be a CA, include good x509 attributes, set up a well-encrypted VPN, etc. It's easier than you think, gives you more control over your security architecture and, in the long term - from what I have seen - is cheaper.
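Becoming your own CA really is only a handful of commands. A minimal sketch with openssl - every name, subject and filename below is a made-up placeholder, and a real deployment would add key protection, CRL/OCSP and proper x509 extensions:

```shell
# 1. Create the CA key and a self-signed root certificate (placeholder subject)
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout ca.key -out ca.crt -days 3650 \
    -subj "/O=Example Ltd/CN=Example Internal CA"

# 2. Create a server key plus a certificate signing request for the VPN endpoint
openssl req -newkey rsa:2048 -nodes \
    -keyout vpn.key -out vpn.csr \
    -subj "/O=Example Ltd/CN=vpn.example.internal"

# 3. Sign the CSR with your CA (extra x509 attributes would go in via -extfile)
openssl x509 -req -in vpn.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out vpn.crt -days 825

# 4. Sanity-check the chain
openssl verify -CAfile ca.crt vpn.crt    # prints: vpn.crt: OK
```

The ca.crt then gets distributed to the clients, and the signed key/certificate pair goes onto the VPN endpoint (e.g. OpenVPN or strongSwan).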
The same functionality was already provided by Fring in 2007, which could be installed for free on the N95 running Symbian: http://www.theregister.co.uk/2007/01/31/fring_launched/
...which brings me to my usual reminder about how lean and advanced Symbian was at a time when Apple's iOS could not even do copy and paste. But seriously, I don't think we will need a SIM for making phone calls in the future, maybe only to establish an internet connection over which we then do our browsing, bookings, payments, messaging, and voice calls.
Many thanks for this article, El Reg, and indeed many thanks Trevor Pott
Now, please, for comparison, an article about building your private cloud using
. Solaris Zones
. IBM WPARs
. IBM PowerVM
. Oracle VM
. VMware ESXi
. KVM on Ubuntu
. or a mixture of the above
Please include the licensing and maintenance factors and costs, hardware constraints, how many OSes can be hosted, vertical scalability and high availability aspects, the follow-up licensing and maintenance costs resulting from that, security and privacy aspects, etc.
This is the future
Due to ARM's licensing model I predict that there will be more fabless chip makers creating processors for a myriad of special applications. It may even be that corporations will be able to design and acquire processors and motherboards customized for their needs, for which their own operating system and applications can be created and compiled. Linux is like Lego, it's mix and match - why shouldn't this model be extended to the hardware? All we would then need is a 3D printer or similar to create this motherboard or that PCI card, metal sheets and a punch press for the server case, fans, power supply and a bunch of cables.... done.
But why stop at the CPU? Why not extend the chip licensing model to RAM, SSD storage, etc.?
Re: Doesn't add up
"your assertion that the western nations have followed a keynesian model in recenty decades." - maybe I should have mentioned the worldwide resurgence of interest in Keynesian economics after the 2008 recession (http://en.wikipedia.org/wiki/Keynesian_economics_in_United_States_2008#In_the_United_States_and_Great_Britain), but all I'm saying is that any public spending, Keynesian or not, does not help a national economy if the work, the tax returns, the profits, or a combination of these go offshore.
"even dafter notion...that you could have constant economic growth by monetising more and more activities" - I can only agree on this.
Doesn't add up
We here in the "west" have seen the outsourcing and offshoring of manufacturing, services, research and development to other nations around the world over the past three decades, whereas the profits from these experiments were and are pipelined to low tax havens.
The western governments, desperate to kickstart their economies and keep growth going, followed the wrong advice from Keynesian economics: borrow money for big projects on one side, and ever further deregulate whole industries and markets on the other:
Keynesianism never works in the long term if tax receipts don't materialise and/or are diverted to other nations, or indeed if the work itself is diverted to another country. Milton Friedman's monetarism does not work in the long term either; it leads to rapid rises in commodity prices and excessive liquidity, which caused the housing bubble of 2004-2006 in the US and the inflated prices for food, oil and metal, which in my opinion is the real reason people feel poor again! So it's not a bunch of nurses, teachers and fire fighters.
So, what's the way out of this? Growth. Simple as that.
We have to look to a country which has been in stagflation for more than two decades and is burdened with a public debt of 238% of GDP: Japan. The new prime minister, Shinzō Abe, now wants to realign the priorities of the national bank of Japan and put economic growth before the containment of inflation. Fair enough, the theory behind this policy is from Michael Dean Woodford, but I think it's quite revolutionary to effectively depart from John Maynard Keynes and Milton Friedman.
We looked at outsourcing part of our IT infrastructure to a cloud, but none of the providers could give us the vendor independence, flexibility, speed onto and off the market, long-term cost savings, security, and up and down scalability for our needs, and we could not maintain a medium-term continued business justification. The advertising of all the cloud providers was very well polished, but on close examination we found it was more a lock-in to spend piles of cash over a long period of time for a solution which could not satisfy our business requirements.
We subsequently looked at building our own private cloud; the result was a private application, storage and server cloud in one, with a flexibility that outgunned everything on the market at that time. Cheaper as well: 1GB of high-availability storage space did not cost $1 per day, but 0.013 cents!
There still will be tiered storage, just SATA will phase out
Companies have an obligation to keep records for at least 7 years; in many cases in the financial sector you have a 14-year retention period. You don't want to keep this data too easily accessible (read: deletable), so I would think a tape in a fireproof vault is still the most appropriate.
I would rather see the spinning disk disappear and be replaced with PCM or other chips. Imagine you have 365 generations of backup, each of which you can access almost as immediately as the live data - wouldn't this help other trends like true 24/7 computing?
Continuing this thought, if we now have solid state chips of, let's say, 2PiB, why not just put them directly on the PCI bus, omitting the need for a storage controller, or better, put the SSD directly on an adapted memory bus?
"Yet...these critical infrastructure elements are usually procured, managed and run separately, resulting in a fragmented infrastructure..."
In many companies there is no integrated thinking in the first place, even after implementing ITIL. Andrew Buss is talking about changing silos to a layered service delivery. Sure enough, there are challenges here, e.g. the compartmentalization due to PCI-DSS req 6. But Andrew is coming bottom-up from an infrastructure viewpoint; many companies start top-down. For example: "we need to provide this trading functionality using that application because the sales rep bought us a steak lunch. And, uh, let's throw some money at some tin. Er, we need servers, storage and networks." All the items I mentioned here are products, or solutions to requirements. Even Andrew still makes the distinction between these segments. Unfortunately this is 1980s thinking and simply wrong, and it is not remedied by deploying management software like Oracle Enterprise Manager or a bunch of tools from BMC or VMware.
The infrastructure today is still defined by the sales reps that are selling us the servers, the routers and the storage. The first step to a more flexible IT model is to liberate ourselves from them. This means "mail service" instead of "MS Exchange" and "data, backup and archiving service" instead of "SAN".
After that, we talk about continued business justification, roles and responsibilities, tailoring to suit the service/the customer, etc.
And, boy, do you get results once you get external parties out of the equation!
actually it is great stabilization....and Nokia is innovative
There are videos of stabilization tests; it appears the mechanical stabilization of the Nokia 920 is better than the iPhone's digital stabilization.
Nokia was and continues to be innovative by deploying cutting edge technologies...other phones are more about hype. A few examples
. a Tessar lens (already in the N95 in 2007)
. first to deploy GPS in the N95, well before the iPhone
. inductive charging of Lumias
. NFC - a Nokia technology
. the capacitive touchscreen on the 920 can also be used with gloves - the first smartphone to do so
Good article, but it does not mention that smartphones, indeed mobile phones in general, have become a commodity. The USP used to be functionality (in software); now that every mobile OS does more or less the same as the rest, the USP is again hardware.
Soon there will be more, other Linux distros available for mobile phones, Canonical is developing in this direction at this point in time. It wouldn't surprise me if there is or will be a community effort for Debian. Which brings us back to the business adaptivity of mobile OSes: Wouldn't it be better, from a security perspective, for companies to create their own Linux distros for desktops, tablets and smartphones?
It's easy enough.
The support for VMDK-based storage eases the migration from VMware ESX to KVM, but interestingly, RHEL 6.4 also introduces the haproxy package as a Technology Preview.
Is RedHat about to take on F5 BigIP?
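For plain HTTP load balancing, the core of what an F5 box does fits in a short haproxy.cfg. A minimal sketch - the backend names and addresses below are invented, and timeouts are arbitrary:

```
# haproxy.cfg - hypothetical two-node HTTP load balancer
global
    maxconn 4096

defaults
    mode    http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend web_in
    bind *:80
    default_backend web_pool

backend web_pool
    balance roundrobin
    server web1 10.0.0.11:8080 check
    server web2 10.0.0.12:8080 check
```

Of course the F5 also offers SSL offload, iRules and hardware acceleration, so this only covers the basic balancing tier.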
Re: @AC: Obsolete product?
Who is distorting the facts? I paste in links and proof; you're just making subjective statements. Here we see it again:
"Clustering only solves part of the equation unless you're willing and can afford to create separate failure domains by deploying multiple redundant copies of data" - what equation? Resilience and business continuity planning? Now that starts with business process optimization (RTO, RPO etc), and fallback to analogue, if possible. Then via load balancing clustered web, application, RDBMS systems we eventually arrive at fault tolerant storage (mentioned above), backup and archive, where we close the circle with RTA. In this equation there is a lot more than just storage, btw.
"and most small enterprises are not in that position" - well, if there is an SME with limited funds, why not use commodity hardware and open software solutions rather than throwing money at a proprietary vendor? (http://www.redhat.com/products/storage-server/on-premise/; https://help.ubuntu.com/12.04/serverguide/drbd.html)
Whatever, man, at least I Dare to Think out of the box!
@AC: Re: Obsolete product?
Why are you talking so much about overhead? Are you scaremongering? Let's assume you have an x86 system with 48 Xeon cores, each core having two threads. With 96 virtual processors available, where do you see the bottleneck? With this commodity computing, why do you need a separate storage controller? It's rather the opposite: software RAID may employ more sophisticated algorithms than hardware RAID implementations and may thus be capable of better performance (http://en.wikipedia.org/wiki/Software_RAID#Software-based_RAID). If Hadoop has taught us anything, it is that getting compute and storage on the same physical devices can substantially boost performance.
Now that RedHat is combining KVM virtualization with the Gluster distributed filesystem and promoting freedom from proprietary storage, I can see DAS providing much more functionality than the cash-gulping SAN solutions of any vendor can offer for the same amount of money.
As said before, look at Google, RHSS, IBM FlexSystem, and HPC at Cambridge (http://www.hpc.cam.ac.uk/services/darwin.html)
If you want to take a look at modern storage architecture, go on a Linux course, and learn about GlusterFS, Ceph, FhGFS, et al. (http://en.wikipedia.org/wiki/List_of_file_systems#Distributed_parallel_fault-tolerant_file_systems)
For me, SAN is dead.
Take that picture subtitled "The HP 3PAR StoreServ 7400 array" and compare it with the Dell c6220 (http://www.dell.com/us/enterprise/p/poweredge-c6220/pd). At least Dell had the idea of adding a few 2 socket Xeon servers behind that disk array.
Why do we then still need fibre channel fabric, fc switches etc., when the storage can be right on the PCI bus?
But don't take my word for it, have a look at GoogleFS, IBM FlexSystems and RedHat Storage Server.
"I love Dropbox, and have used it in both the personal and corporate contexts."
It took me just 3 hours to create my own webmail server. It has spam removal, virus protection, etc.
Oh, my mailbox is 1TB in size.
It took me just 1 hour to create my own dropbox. I created my own certificates and encryption keys. 2TB in size.
Beat that, Google, Matt Asay et al.
The correlation to SaaS is wrong
SaaS or IaaS is usually bought to alleviate short-term capability and capacity bottlenecks, which are caused by budget cuts and freezes, which in turn are caused by challenging market conditions. Any external cloud used for the medium term is already horrible from a security, cost, intellectual property and integrity point of view.
But Matt would bend it around the subtle advertising for the industry he works in, wouldn't he?
So if companies now liberate themselves from the clutches of IT vendors and find they can run the business vendor-agnostic, the real question is why not go the whole hog: establish an IT architecture roadmap, replace Windows and RHEL with Debian, Oracle with Postgres, etc., and even go so far as to hire electricians and software developers to eventually translate business processes 100% into IT? This, however, is too evil even for Matt, as insourcing and developing wisdom goes against the very principles of a business; the customer needs to be kept insecure of his abilities, in doubt, open to marketing waffle, acronyms and buzzwords. For example SaaS, cloud computing...
Why do we actually still need a SAN?
"And now the company is mashing them up so they can run side-by-side on the same clusters, uniting compute and storage on commodity boxes."
"If Hadoop has taught us anything, it is that getting compute and storage on the same physical devices can substantially boost performance."
SAN is just technology theatre where one component of an IT infrastructure is rewrapped and made more expensive and complex (extra expenses for fibre channel fabric, fibre switches, storage controllers, specialized staff, floorspace, cooling, additional maintenance contracts...). SAN does not create a 3-tiered storage strategy on its own. SAN does not give us RAS - RAID and replication do, independent of the SAN.
The days of the SAN are numbered. But don't take my word for it, have a look at IBM Flex Systems, Google and now RedHat.
Also, now that we see that OS virtualization provides much more flexibility and feature richness, where is the future for hypervisors, hardware partitioning and those expensive network switches? Technically, all you need for a router is two NICs and iptables. Maybe in a few years' time, when we have 16-core commodity processors, we will see the reduction if not disappearance of expensive, specialized routers/Catalysts and the lot?
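The two-NICs-and-iptables router above can be sketched as a rules file for iptables-restore. The interface roles are assumptions (eth0 = WAN, eth1 = LAN), and you also need net.ipv4.ip_forward=1 in sysctl for the box to forward at all:

```
# /etc/iptables/rules.v4 - hypothetical minimal NAT router,
# loaded with: iptables-restore < /etc/iptables/rules.v4
*nat
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT

*filter
:INPUT   DROP   [0:0]
:FORWARD DROP   [0:0]
:OUTPUT  ACCEPT [0:0]
# LAN clients may initiate connections out; only replies come back in
-A FORWARD -i eth1 -o eth0 -j ACCEPT
-A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
COMMIT
```

That is the whole of basic NAT routing; what the expensive boxes add is line-rate ASIC forwarding, routing protocols and management.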
Nothing new, actually...and where is the USP?
Looks in concept very similar to the RS/6000SPs back in the 1990s. Interesting, though, that the network and the SAN switch are moving into the chassis.
Similarities can also be drawn to the nodes of the Dell PowerEdge C Servers.
IBM usually does not sell server hardware on its own; rather, one has to buy a service and maintenance contract with IBM which comes with a bit of hardware, so I would put a question mark behind the prices quoted here. If we take these additional costs into consideration, let's assume that a POWER7+ core costs at least 100 times more than an Intel Xeon Sandy Bridge-EP 8-core. Does the POWER7+ perform 100 times better than the Intel Xeon?
Where is the USP for the POWER platform today?
I doubt it
There is China where a high tech enterprise can only set up shop by creating a joint venture with a local company. In India by law bank transactions have to be processed inside India.
Because there is now a growing requirement to convert highly manual legacy business processes to IT operations, you think the US and the EU will benefit from it? Maybe only for 3 - 6 months, and only to consult on the laws and industry regulations they will need to observe when doing business in the West.
We're in a world where even medical X-ray pictures are taken here but analyzed in India, accounting is in the Philippines, the hardware is made in China, and HR and the help centre are again in India. R&D is again in China and software development is in India.
We will not win much by a growth in IT spending in APAC.
We will win by creating Galileo.
We will win by abandoning the dreadful, outdated, old technologies, such as the TCP/IP protocol (http://www.theregister.co.uk/2009/02/17/ip_security_review/).
We will win by developing new technologies (http://en.wikipedia.org/wiki/Seventh_Framework_Programme).
We will win by introducing a law which forces multinational high-tech enterprises to create a joint venture with a local company, and a law which mandates that bank transactions are processed within the US and the EU respectively.
Re: Why do we actually still need a SAN?
"most storage gurus still hate sharing networks with the IP traffic" - Well, they will have to sooner or later, with the increased deployment of converged HBA/Ethernet cards and 10GEth networks.
"putting in extra networking just for the FC-IP traffic" - you don't need that, as the virtualized servers are using DAS storage. Thus, storage traffic stays within the server. Unless you want to replicate block devices or use Gluster etc., for which you can use EtherChannel (cheap) or a 10GEth network (expensive).
"rapidly run out of bandwidth" - Matt, the highest network spikes are usually on the proxy tier, and the largest data movements are usually on the database tier. You put them on different networks anyway to satisfy IT security, and you don't do a weekly offline database backup at 10am on a Tuesday morning; rather, you do the incremental online backup after 11pm every day. If you don't want to do that, it is better to leave the storage data stream within DAS. Strangely enough, you haven't mentioned Dataguard or SRDF over IP, which put load on the network already.
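That scheduling point is just cron hygiene. A sketch of the split - the script path and its flags are invented stand-ins for whatever backup tool is in use:

```
# crontab fragment: nightly incremental after 23:00 Monday-Saturday,
# weekly full in the small hours of Sunday - never at 10am on a Tuesday
30 23 * * 1-6  /usr/local/bin/db_backup.sh --incremental
0  2  * * 0    /usr/local/bin/db_backup.sh --full
```

The effect is that the big data movement happens when the proxy and database tiers are quiet, which is the whole bandwidth argument in two lines.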
"".....SAN is never fast...." Sorry, but yes it is. " - Matt, compare the IOPS and throughput of a direct-attached SSD or PCIe SSDs with what you get from a SAN. A PCI bus is simply faster than the SAN fabric.
"Every direct-attach disk introduce a single point of failure for your data, and also massive inefficiencies in storage utilisation." - Firstly, storage inefficiencies never went away because of the SAN; they were transferred to the storage arrays, as I pointed out in my original post. Thin provisioning is done on OS-virtualized servers too, such as RHEV. Spindles still pop, whether the disk array is in the server or in the SAN. A storage strategy removes inefficiencies. And RAID 0, 1, 5, 6, 50, 60, etc. can also be done on the server itself, if you have a sufficient number of drives. Have a look at the Dell c6220 or the many 4U servers out there. In addition, DRBD can help in HA implementations.
"You don't know how to design a SAN" - Matt, one storage controller serving, let's say, 100 servers, is a SPOF, even if you have two Cisco Nexus switches. The way around this is....another storage controller and array, preferably in another data centre. Let's compare the expense of two 4U x86 servers with two IBM XIVs, 4 Brocade switches, etc.
Someone said "You never managed a few hundred servers, did you?" - Yes, I did. And I've seen a whole datacentre going down and weeks of restores/roll forwards because the SAN went pop. Strange, when it comes to security everyone is saying compartmentalization. Not so when it comes to resilience. Have a look at vSphere, RHEV, Canonical Landscape or Oracle Enterprise Manager for managing distributed systems.
Why do we actually still need a SAN?
Another low end storage array from this or that vendor does not matter. Why do we actually still need a SAN?
Let's face it: it's first and foremost a cumbersome, expensive method of providing storage to a server, as you need an expensive, dedicated network infrastructure (the SAN fabric) in addition to the IP network you already have.
SAN is never fast - it cannot be, as the signal still needs to travel through the fabric from the server to the SAN array. Every direct-attached SAS/SATA disk beats the SAN.
A SAN is a single point of failure, and it introduces more points of failure, despite what the sales rep says. In HA implementations you have a redundant number of servers, NICs, HBAs etc., Oracle RAC, Dataguard, Golden Gate, Solaris Cluster, PowerHA and whatnot. What do you have with the SAN? A bunch of arrays, fragmented over time into RAID 1, 10, 5, 50, 60, 0+1, 0+6 configs, getting mains from one UPS. Now, what if that UPS fails, or someone pulls all the ports on the switch that was in use while the Oracle DB was doing its quarter-end batch processing?
For me a SAN is just another example of 'cos-we've-always-done-it-this-way. You're much better off with a few dozen HDDs, or better, PCIe SSDs in your server and OS virtualisation inside.
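As a sketch of what the DRBD alternative looks like: a minimal two-node resource replicating a server-local array over a dedicated link. The hostnames, device names and addresses are placeholders for illustration, assuming a local md RAID array at /dev/md0:

```
# Minimal DRBD (8.x syntax) resource: synchronously replicate a
# server-local RAID array between two nodes over a dedicated link.
# node1/node2, /dev/md0 and the 10.0.0.x addresses are examples.
resource r0 {
  protocol C;              # synchronous replication: write returns
                           # only after both nodes have the data
  device    /dev/drbd0;    # the replicated block device the OS uses
  disk      /dev/md0;      # the underlying server-local RAID array
  meta-disk internal;
  on node1 {
    address 10.0.0.1:7789;
  }
  on node2 {
    address 10.0.0.2:7789;
  }
}
```

Two commodity boxes with a config like this give you mirrored storage in two places, with no fabric, no storage controller and no array in between to fail.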
Revenge of Symbian
Symbian may be dead, but we can pass the shovel to iOS, Android, BBOS and the like, too! You see, it is about fragmentation: of course it was very easy back in 2007 to create smartphone apps for the iPhone or the many Android devices. But over time your customer base grows, the existing customers want additional features in that mobile app, and every year or so there is a new version of the mobile OS platform - and you cannot expect your entire user base to diligently upgrade their mobile phones two weeks after the release of the new OS. In addition, your sales department finds that targeting only high earners (= smartphone owners) doesn't bring in the revenue in this recession, so you are forced to create mobile apps and a support path for dumb phones and feature phones, too! In the meantime there may be paradigm changes, such as one mobile platform moving away from a Java SDK to C++. This all very soon leads to spread-out, parallel regression tests and support paths, horribly expensive on the development and service delivery side.
This is unsustainable - but then, no other consumer technology is developing so aggressively, and very soon it will not matter whether your phone or tablet runs Symbian, Android, Bada, Debian or some Chinese OS. As long as you can browse the web on it, you will be fine.
This is the future of premium shopping
Ask the customer to hold up his NFC phone or Oyster Card to the TV screen or poster and order and pay while the hype is still working.
There may be some difficulties with fresh food, and with the 14-day cooling-off period for non-food items, but generally I would see this as the future of premium retail.
Interestingly, the other future of retailing is... the good old market stall. Yes, those small family-run businesses pay a minimal fee to the council, need no air conditioning, lighting, company pension scheme etc., are quick to erect, quick to go, and accept only cash. Rock-bottom cheap, with the same or better quality than the goods in the supermarket.
You can always donate to Fedora, Debian, Wikipedia or others...
Not only routers: hardware and software, too.
By all means. I've been arguing for years that we have a
. higher ROI
. complete security audit trail
. complete system state and patch governance
if companies build their hardware, and at least the OS, themselves. It's easy to put together a highly available, fully supported solution which, over the course of two years and with the salaries of two system engineers included, costs LESS than the equivalent COTS solution with its expensive support and monitoring contracts, which still requires two system engineers. I've got the numbers; there are eye-watering savings possible.
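The shape of the calculation is simple enough to sketch. Every figure below is hypothetical, chosen purely to illustrate the structure of the comparison, not taken from any real quote:

```python
# Back-of-envelope two-year TCO comparison, build-it-yourself vs COTS.
# All figures are hypothetical placeholders, not real prices or salaries.
def tco_diy(years=2):
    hardware = 2 * 8_000             # two commodity 4U x86 servers (assumed)
    engineers = 2 * 60_000 * years   # two system engineers, salary p.a. (assumed)
    return hardware + engineers

def tco_cots(years=2):
    hardware = 150_000               # vendor appliance / COTS solution (assumed)
    support = 40_000 * years         # support and monitoring contracts (assumed)
    engineers = 2 * 60_000 * years   # you still need the same two engineers
    return hardware + support + engineers

if __name__ == "__main__":
    print(tco_diy(), tco_cots())
```

The point the sketch makes: the engineers appear on both sides of the ledger, so the COTS hardware premium and the support contracts are pure additional cost.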
This can include building the OS (our own Linux-based distro), proxy, web and web application servers, database servers, the SAN, network routers, PCI-DSS compliance etc.
Fair enough, there are limits, such as building an HSM, but overall I don't understand why companies today still throw money at some established vendor for expensive support contracts with a bit of hardware and software attached.
But don't take my word for it: look at what one of the most successful companies on this planet does; it makes sure it keeps its wisdom in-house (e.g. Google).
What a waste of time....
It appears El Reg needs to fill the website. I mean, BYOD is a real threat to CIO activities, such as service strategy and design, IT architectural frameworks, strategic platform consolidation et cetera. I mean, with BYOD those employees will bring their own Oracle RAC with Data Guard replication on a bunch of really cheapo M5000s, correctly partitioned and PA-DSS compliant. BYOD will also solve data architecture, strong encryption, HSM farms and LMK key creation - the employees do this in the morning between two sips of coffee! Never mind that Citrix and VNC cluster which enables... BYOD. Maybe BYOD will implode into oblivion because of BYOD?
Re: Obviously A Fanboi ...
How can this phone get 90%???
The faster chip doesn't matter - hard transaction times do. Faster chips only gloss over bloatware or badly written code. Where is the comparison to other smartphones as ElReg does it with laptops?
There is still no OLED screen, only a small screen in comparison to the Samsung Galaxy and Nokia Lumia models, no NFC and no expandable storage.
Still no Xenon flash, no camera oversampling, no lossless zoom.
The Retina display has become obsolete: the Samsung Galaxy S III and the Nokia Lumia 920 have higher resolutions and larger screens than the iPhone 5.
4G LTE is not a unique selling point, everyone else has it, too.
And the paying starts again: first for the phone itself, then for the overpriced 24- or 36-month contract, £25 for the adaptor, apps, you name it.
This is the future
As with everything in IT, the price of PCIe SSDs will come down and they will scale vertically - very soon a 20TB PCIe SSD will be available for £300.
In parallel, motherboard manufacturers will solder an SSD chip onto their boards as a giveaway, and after that the SSD chip will be part and parcel of the CPU - by that time we may have 10nm or 5nm lithography.
Which means a dedicated SAN will be dead. I mean, seriously, if we have 20, 50 or 100TB on a PCIe card right next to the CPU, and 10GbE networks for the four-way DRBD, why waste more money on a separate SAN system?
Did I say will? SAN IS dead, if you have enough budget.
Long live the free market
First Samsung made sure they got the hardware leadership, with this came a significant market share, and now they add service provisioning to their portfolio - ahem - ecosystem.
It will be interesting to see what fanbois and the tech press make of this, and whether there will be a witch hunt like the one against Nokia.
Is Matt Asay's column ghost written?
Nice words about already-known facts, gently flowing around open source. Spot on when it comes to trends. And very much the quality of an essay from a reporter. Discuss in 500 words.
But nothing of CIO level.
Real hitters would be subjects like this:
TCO is just a marketing ploy, an aspirational estimate, literally fluff in your mind, stretchable as you see fit.
A CIO should actually be the CFO with technical knowledge, and ask his suppliers to commit to financial targets and penalties.
External clouds are just cool, but very expensive - building the cloud in house reduces the cost of CPU, memory and storage units to pennies - here is the proof, etc.
What shareholders want to hear
Paul Otellini is technically just saying what Intel shareholders want to hear. What he does address is the growing frustration of CIOs with proprietary platforms, like SPARC and POWER, which are, when equipped with enterprise-grade memory, SAS and SSD hard drives, HBAs, etc., 10-15 times more expensive, but not 10-15 times more performant. The advantage of proprietary systems in feature richness to ensure availability is diminishing by the day.
What he does not address is the solution: vendor independence, which also means independence from x86 architecture, and the fact that CPUs have become a commodity. Sure enough, there should be a standard, but the standard can be ARM.
I predict that in the not-too-distant future, big corporations will send a wish list to Intel, ordering a customised 256-core, 2048-thread ARM processor, or the like, maybe already soldered onto a motherboard, next to the 4TB SSD chip. If Intel can produce 100K units of these, fine. If not, they will order from Samsung. Or TSMC. Or someone else.