* Posts by CheesyTheClown

606 posts • joined 3 Jul 2009


iPhone X 'slump' is real, whisper supply chain moles

Bronze badge

Have one and don’t use it

I bought the big model... used it a week and now it’s my spare phone for travel.

It was my first attempt at an iPhone without a headphone jack. What a f-ing joke. Constantly charging the phone and headphones when I went wireless. Went back to wired and could charge the phone while listening. Of course I could buy a splitter or a wireless charger. But what a frigging horrible experience it’s been.

The usability of the iPhone X is a disaster as well. It’s as if they put absolutely no thought into the phone. They even made the frigging power switch more than just a power switch now. So every time you try to turn the phone off it does other crap instead.

I would return it, but I needed a travel phone anyway. So why bother? Back to my iPhone 6S plus. I’m in the states now and will stop by the Apple store and get a battery replacement today.


Capita data centres hit by buttload of outages


Entertaining read

Mainframe tech is specifically designed for handling these problems. Using “everything is an object” and “share nothing” tech, it should be possible to run these systems for decades without an outage. Using big-ass custom cloud platforms is a quick way to end up screwed.

This is why companies like Google, AWS and Microsoft are dumping IaaS and containers in favor of more reliable FaaS architectures. Yes, there were problems from the 60s to the 90s, but mainframes have generally always been far more reliable than the crap most vendors are passing off as PaaS today.

ACID-compliant, share-nothing record storage systems, FaaS, well-designed load balancers, and non-SAN object storage can offer almost-zero-downtime (way better than five nines) platforms.

A vendor like this has absolutely no possible excuse for service outages. If they can’t do it themselves, they should call IBM and get it done right. It’ll cost a lot, but nowhere near as much as losing business due to letting IT people be involved in information systems.


UK.gov: Psst. Belgium. Buy these Typhoon fighter jets from us, will you?


Spend the money on drones

Seriously, what’s the real benefit of wasting money on one shit plane vs. another? We should by now be able to make drones that can be manufactured far faster and deployed far more easily than either fighter. What’s lost in having a pilot in the cockpit can be made up for by having 10 times as many aircraft in action.

The F-35 for example basically places the pilot in a virtual reality environment anyway. It’s not like having eyes in the cockpit really benefits anything. Latency might be an issue, but having a nearby land based or flying control center should compensate for that.

Just quit with the human pilot shit, and the human military shit, as much as possible. The only good reason for human militaries is population control. The more children we send to their deaths, the fewer babies they’ll make and the less burden they’ll place on the job market. It’s 2018 and it’s about fucking time we stop doing the war bullshit by shipping children off to die. If politicians really want to play bullshit games, let them do it with their own lives.

Make a crap load of land, sea, sub-sea, and air drones and control them remotely. As always, he with the most gold makes the rules. Then whoever has a bunch of gold can take over whatever country they want and we can send in the construction crews after to clean up the mess.


Hyperscale oligarchs to rule the cloud as the big get bigger, and the small ... you won't care


My predictions

All public services will be cloud based. Public email, conferencing and collaboration systems, etc. generally cross the public Internet anyway; there is no point claiming that you can secure them better at home. In addition, spam and virus protection doesn’t work unless the mail server is global, so individual organizations running things like Cisco Email Security Appliance are screwed. Global providers can also work better together to secure the backbone. New providers will not be able to enter this market in the future due to locked-down peering.

Private networks will become far harder to manage. Military and other government networks will lack access to proper solutions in the future since enterprise scale software will become a niche market for collaboration and messaging. This will probably result in a lot more open source solutions being deployed for private servers. Microsoft will probably make an Azure Stack solution for offline networks. NATO governments will adopt it and other governments will reject it.

Software defined is already universal in cloud providers. AWS, Azure and Google Cloud are already nearly 100% SDN, and all their solutions are completely software based as well. Cisco doesn’t make a software-defined solution, so they missed out. They have policy-based networking, which is similar, but not scalable.

Storage will go back to non-enterprise. There is no reason in a software-defined world to consider enterprise storage. Cloud storage doesn’t need it and doesn’t benefit from it at all. Systems like NetApp, EMC, 3Par and others are already relics of the bad old days. As more systems are cloud based, we’ll focus on NewSQL, NoSQL, and object storage. These perform fantastically badly on scale-up systems like SANs and NAS, which also regularly present single points of failure, and which cost 20-30 times more per gigabyte than the alternatives. NVMe fabrics, FC fabrics, etc... are almost the worst possible ideas in cloud storage. All-flash is a total waste and all these systems just don’t work well anyway.

As a matter of fact, Hyperflex from Cisco is among the worst solutions for cloud storage as it almost guarantees your data will suffer from high latency and poor performance. It’s great for VMware, but terrible otherwise.

Business systems will come back home. Using FaaS in a box solutions, we’ll see more systems come home. They will be much smaller and simpler. Expect to see entire enterprises running on clusters of computers that cost less than $1000. Now that we’re shipping our crap to the cloud, we’re learning that we don’t need VMs and containers and other wasteful tech to run our enterprises. We simply need better systems. The average Raspberry Pi 3 has substantially more capacity than most enterprises need for their systems.

Zero-trust networking (thank you Google for the name, been working on ZTN for 6 years and didn’t have a name for it) will eliminate the need for most corporate enterprise networking. By centralizing most services, we’ll no longer need east-west network traffic. As such, we can eliminate nearly all network equipment we use today.

In addition, whether you’re 5 users or 50,000 users, the cost of LTE and SIM cards is now cheaper than buying a Cisco or Aruba wireless network. Almost universally, it’s more cost effective to eliminate your enterprise network completely and move entirely to mobile services. With zero-trust, it makes perfect sense to simply dump your Cisco enterprise network. After all, your company probably already pays for at least some of your cell phone bill today. The cost to cover the additional bandwidth needed would be $10-$20 a month per user. That is A LOT cheaper than Cisco networking, and that’s not even counting the consultants or the endless hours of business lost to silly meetings about things like Cisco DNA.

SDN in the cloud is accomplished mostly through open source virtual switches connected via IP. The cloud management platforms then integrate with things like Kubernetes or Azure Resource Manager to handle the cloud networking. Because of how clouds, containers and FaaS work, single-homed, gigabit, layer-3 connectivity is generally all these systems need. As such, Cisco data center networking and ACI fabrics just don’t belong there.

CPU capacity will decrease a great deal as well. The average FaaS function (Lambda, Azure Functions, Google Cloud Functions) requires 1/10,000th the system capacity of containers or VMs. As such, more systems will be launched using less capacity. The clouds will shrink, not grow.

Cisco has nothing to sell in the cloud server category. The servers in my cloud are Raspberry Pis with 12TB spinning disks and LattePandas with 64GB eMMC. We have 9 of each (18 devices) on which we’ll run a million active users. We’ll add more for geography, not capacity. We are using 8-port Cisco 3560-CX switches for now.

So, let’s see what the problems are in 2021 :

- Enterprise networks are going to fizzle. Cisco never managed to gain presence in LTE; it basically resells the ASR-5000 and lacks a product of its own.

- ACI was a non-starter and is basically on its way down.

- UCS has no real need to be upgraded or replaced. I have racks full of them turned off. We now use two chassis, 4 FIs, no switches, and we’ll shut those down too. We’re moving to cloud and Raspberry Pi.

- Zero-trust networking eliminates the need for about 85% of all Cisco security products.

- We just laid off 50 Cisco UC (phone and video conferencing) “experts” and shut down that division because of Skype and FaceTime. That business is dead.

- SD-WAN from Cisco is way too expensive and it’s cheaper (and better) to use LTE, Citrix, or Microsoft DirectAccess. If you’re going to waste money on SD-WAN, get VPLS instead.

- SD-ACCESS costs way too much and even if you have the $1.5 million to spend for the minimal safe implementation, it’s far too expensive to maintain and Cisco will probably lose interest before getting much further with it.

I think this research was basically “Please don’t forget us!!! Really... we can do stuff too!!!”

But in all fairness, I’m wearing a Cisco hoodie over a Cisco T-shirt while sitting next to a rack of $200,000 of Cisco equipment I personally own... logged into a VM on a Cisco UCS blade server in a Cisco UCS data center I personally own. And next week I’ll train a national security agency on using Cisco. Oh, and I work for a massive multinational Cisco partner.

And what I’m telling you is, I have no plans to use Cisco in my upcoming designs, which, surprisingly, are to automate Cisco customers. Cisco just doesn’t really fit in the next-generation enterprise.


Ex-Chipzilla exec Arms biz to SoC it to Intel in the data centre


AMD, Qualcomm, etc...

This seemed like a fairly boring ARM offering. Whether due to the unknown brand, underwhelming specs or whatever else, it looks like little more than repackaged generic ARM cores.

AMD, had they followed through on ARM, might have been interesting. Qualcomm was interesting and still may be, but they don’t seem to care about getting their stuff out there.

My company is already dumping big iron in favor of highly distributed SoCs for “serverless computing”.

By the time these guys ship and find a distribution channel, the world will be either x86 big iron, public cloud, or distributed smaller systems. 32-core ARM CPUs don’t really fit anywhere. It’s far better to have 3x 4- or 8-core CPUs.

I suppose they think there’s a place where people want to rewrite and/or recompile their systems without any consideration for the quality of their code.


No Windows 10, no Office 2019, says Microsoft


Re: And the MacOS Platform?

Yeh... I went there too. I assume that the author of the headline was a little too excited to take a stab at Microsoft to bother checking whether what he/she wrote made sense.

I'm pretty sure that the article is only referencing users running the Windows platform. Mac, Android, iOS and whatever else will likely remain as is.


What's a

Bombasic Bob?


Cisco gives intent-based networking a third leg to stand on


And finally Cisco accomplished Software-less Defined Networking

Ok, so in the old days, you would configure a switch. Then you would run netflow to watch the traffic. Then you would use Cisco Prime Infrastructure to roll out changes and monitor compliance.

Now, you roll out a fabric, run Tetration to monitor the traffic, then you run new analytics software to make sure you actually changed stuff.

So at which point in time does the software running on the infrastructure talk to the controller and inform the controller what it needs and then have it verified against the controller which then implements the changes as needed?

How in the world can what Cisco is offering ever be called Software Defined anything... unless you mean that the network administrator can use an external application from the network to upload a configuration to the network?

This is another great example of what happens when Cisco makes something awesome like ACI. Cisco manages to get beaten out by something free and/or open source like Microsoft Network Controller, OpenStack Neutron or Kubernetes. Then, instead of dumping their mistake and making something that integrates as a first-class citizen with those other products, they start shipping more crap which doesn't work with the other stuff.

Ok Cisco.

1) Abandon ACI... it will never actually work. Your customers are simply making VLANs on them and applications aren't happening.

2) Work harder at integrating NxOS with the other products.

3) Quit the EVPN crap already.

4) Make a good solid data center switch that actually works with the other platforms.


Google takes $1.1bn chomp out of HTC, smacks lips, burps


Re: I think G is shooting its own foot here

And Google will still make the repeat revenue from maps, search, advertisement, film sales, music sales, etc...

Google can make their own phones... work with HTC to design them and then pay Samsung to manufacture them. We'll see a real alternative from China soon. RedFlag Linux was a bit lame... but I expect that China is well situated now to take up the mantle of making a competitive telephone OS. 10 years ago... maybe not, but now China has tech talent and western business knowledge pouring from their ears.


What do you mean competitive portfolio?

HTC and Samsung are not in the same business and never have been.

Samsung is an electronics manufacturer who has optimized the hell out of the supply chain by buying companies such as Sharp LCDs, building their own storage company, and running one of the most advanced semiconductor fabs. They have one of the most successful industrial engineering teams, 100% automated fabrication lines in most segments, etc... Samsung has no peer in the industry other than possibly the Chinese government, which owns things like Foxconn and many other companies.

Samsung produces nearly every single component of every phone it ships. They also probably have investing interests in many raw material suppliers such as oil companies, mining companies, recycling firms, etc... they are a conglomerate capable of producing a telephone for barely more than the human costs.

In addition, nearly every other vendor of phones in the world has to buy at least several parts from Samsung or a Samsung owned company just to make their own phones. Or at least they probably have to buy from companies who pay Samsung to manufacture their parts for them.

Then there's HTC... who makes pretty much nothing but the circuit board and the case. They code some software too, I suppose. They have absolutely no revenue stream after the moment the phone is shipped and paid for. The only possible way for HTC to make a profit is to negotiate great manufacturing and supply chain deals. They don't own anything once the phone ships, and all that's left is liabilities. They have to pray they can remain price competitive with companies like Samsung, who probably pay 1/10th as much as they do to make a phone. They have to pray that, on their pathetic profit margins, the user doesn't need support covered under warranty.

Most people don't upgrade phones anymore. I'll get an iPhone X next week which is given to me as part of my new job. I don't really plan on using it much since I prefer my iPhone 6S Plus. Apple still makes money from me by selling me movies. I don't buy music anymore since I have like 1200 songs in my library and I listen mostly to audio books. I pay for my kids to buy apps once in a while.

If Apple gets it working out here in Norway, I'll experiment with Apple Pay.

Now... for the next killer feature for phones


Apple and Google should work together to standardize a secure method of legally identifying yourself. For example, an app which is also your passport and driver's license: a QR code pops up on the screen and directs people checking ID to a site which verifies whether you are who you say you are.
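For the sake of argument, a minimal Python sketch of how such a token could be issued and checked. Everything here is hypothetical (the key name, the field names); a real passport-grade scheme would use public-key signatures from the issuing authority rather than a shared secret, but the shape is the same: the QR code encodes a payload plus a tag, and the verifier recomputes the tag.

```python
import base64, hashlib, hmac, json

# Hypothetical issuer key; a real scheme would use asymmetric signatures.
ISSUER_KEY = b"issuing-authority-secret"

def issue_token(identity):
    """Serialize the identity record and append an HMAC tag.
    The resulting string is what the QR code would encode."""
    payload = json.dumps(identity, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + tag

def verify_token(token):
    """Return the identity dict if the tag checks out, else None."""
    encoded, tag = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded.encode())
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if hmac.compare_digest(tag, expected):
        return json.loads(payload)
    return None
```

Any tampering with the payload or tag makes verification fail, which is the whole point of the exercise.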


Twilight of the idols: The only philosophy HPE and IBM do these days is with an axe



HPE is never a good idea to buy from. They are a mergers-and-acquisitions shop only. With the exception of servers, which for the most part are just PCs with a half-assed and unloved remote KVM (borderline unusable) and an iLO system as reliable as a politician, they don't produce anything themselves. They simply find a company with a lot of sales or specific inroads into the government, buy the company, and run it into the ground.

They get most of their sales from things like Aruba, for example. The market has two players in enterprise wireless; companies chose Aruba and invested heavily. They don't want to replace their entire wireless network to switch to Cisco, so they just keep buying Aruba. Of course, HPE killed off most of Aruba's best products, dumped all their aging and shitty ProCurve A and S stuff on them, lost most of their developers by killing all the fun in the organization, and outsourced most of the rest to India. Buying from HPE is generally never a good idea, except maybe for the stock price.

IBM... well ... let's talk Softlayer.

1) Most of what they do isn't even cloud. They simply let you rent servers or VMs. This is great for the loser companies out there who actually think they can save money by going cloud... and think IaaS is cloud. It's not. It's basically colocating servers. You still have to do almost all the management. You don't get anything useful from them. You still need to run updates, run your own security, etc...

2) PaaS... their PaaS platform doesn't even seem to have a NewSQL offering. Microsoft has Azure Tables, Google has Spanner, and Amazon has Aurora; IBM is peddling DB2 in what only appears to be a containerized version where you still need to build and maintain a DB2 environment. The other platforms are simple: you just say "I want a database" and you have an always-available database. With IBM, you have to build and manage a database platform. Even Microsoft, who has SQL Server, knew that you couldn't make something like DB2 cloud scale.

They don't really offer a platform either. What they offer seems more like a bunch of containers. Of course you can do kubernetes... yippie!!! but it's not a platform. You still need to build and maintain your own infrastructure.

3) No IBM technologies other than DB2. Where's the CICS? Where's the WebSphere? Where's the RPG? Where are all the things which make IBM worth using in the first place? IBM has more than 50 years in the PaaS business. They have more than 50 years in the FaaS/serverless business. And they don't even have a single worthwhile IBM technology to build on.

In the past 10 years, Microsoft transformed themselves from a product company to a platform company and are now shipping Azure Stack which is a mainframe in a box. You can install and operate Azure Stack as a fault tolerant mainframe. They are committing to platforms and APIs. With Azure Functions, Microsoft is shipping their own CICS system with all the underlying technology to make it happen. They have developed the best development tools ever seen in a mainframe environment to work with them as well.

And IBM is delivering what appears to be little more than Ubuntu or OpenShift with Kubernetes.

The biggest problem with companies in general is that everyone seems to implement features based on what they read in the news. Or they hire a manager of a product from another company. What's worse is that SoftLayer looks like they learned what cloud means from VMworld... TOTAL FAIL!!!!

They need to take a team of developers and have them make a demo product on Azure, Google Cloud and AWS and learn what cloud is. No frigging containers. No frigging virtual machines. Build the real deal. That means use the platform. Then they need to talk to the guys at IBM who have 50 years experience in PaaS and learn how to make something special.

When they do that, they need to throw a billion dollars at it and make it happen.


UK infrastructure firms to face £17m fine if their cybersecurity sucks


But doesn’t it apply to VPN?

So, a British firm wants to secure their infrastructure and implements a VPN (not Cisco, of course) to control access to management. The company then requires that all keys be properly secured on encrypted devices... so the consultants (located everywhere) will be forced by UK government policy to have phones and PCs with encryption with no back doors.

If the phones and PCs with no back doors don’t exist, then how would this work?

Would the back door be British only? I know from reading the occasional FHM that British men are completely obsessed with back doors. What happens when a foreign consultant travels to their home country where by law, their phone would have to be accessible via a back door there? Is it ok if for example a Russian contractor’s phone is accessible to the Russian government while they are in Russia?

It would of course only ever be used for altruistic reasons like crime prevention and would never be exploited by anyone other than truly trustworthy people.

I guess there could be a policy that only people who don’t travel internationally can work on the infrastructure.

It seems there could be a conundrum here.


Here we go again... UK Prime Minister urges nerds to come up with magic crypto backdoors


I’ll do it!!!

Writing insecure crypto is easy. I have a great derivation of ROT-13... I call it ROT-29. I wrote it with my friend John Veiler... we were considering calling it Rot-Veiler... but it sounded silly.
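To prove how easy it is, here's a minimal Python sketch of ROT-29 (entirely illustrative; since 29 mod 26 = 3, it's just ROT-3 with delusions of grandeur):

```python
def rot(text, n):
    """Rotate letters by n positions; 29 mod 26 == 3, so ROT-29 is ROT-3."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + n) % 26 + base))
        else:
            out.append(ch)  # leave digits, spaces, and punctuation alone
    return ''.join(out)

# "Decryption" is just rotating the other way -- which is exactly the problem.
assert rot(rot("attack at dawn", 29), -29) == "attack at dawn"
```

Anyone who can run the function backwards has the "back door", which is why this is a joke and not a product.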

Now, if we're ever to have secure crypto with a back door, we first need military intelligence, open secrets, jumbo shrimp, and a few dozen more oxymorons.

Encryption by its very definition cannot contain back doors. It is mathematically impossible. Not "theory" as in "I have a theory about Brangelina’s breakup", but as in: the mathematical theorems that would allow something known to be breakable to be called secure have not been discovered.

I suppose that in the U.K. the people have never had to be concerned with government corruption, corrupt policemen, etc... but in the rest of the world, we use encryption to protect the innocent... quite possibly from their governments.

Unfortunately, that places a greater burden on the government when protecting the innocent from the dangerous, but what’s the point of protecting people from the bad guys if your only goal is to remove their liberty?

In addition, there is no possible way to block people from using encryption. So, if you keep the good people from using it, it won’t help with the bad people.


Trebles all round! Intel celebrates record sales of insecure processors


Not really with you on this one

Spectre and Meltdown are generally exploitation of poorly coded operating system kernels.

Speculative execution is a critical CPU design feature. Compare a Raspberry Pi vs a similar board running an ARM core with the feature. The performance difference is phenomenal. It also is bloody insecure if the operating system doesn't flush the pipeline on system calls.

System calls have always been and always will be expensive on general purpose operating systems. Consider that it requires a great deal of setup, serialization, etc of each call. It also requires processing of a software interrupt or an exception to break into the kernel. Transferring data of any consequence back is ridiculously expensive as it requires traversing the differences between the LDT and the GDT or maintaining multiple LDTs for the same data.

We as operating system designers made a conscious choice to ignore the state of the speculative execution pipeline a long time ago. This was done because the cost of flushing it was too high and we simply did not see it as a real security risk. Most JavaScript engines exploit the hell out of the state of the pipeline to avoid cache coherence issues between threads on different cores, avoiding the negotiation of locks on memory, which are far more expensive to process than normal system calls.

The solution to the problem is 100% operating system. VMware and other virtualization vendors need to perform access control during task swaps to identify whether to flush the pipeline between threads. This makes a lot of sense because in circumstances where virtual machines reserve entire cores, there isn't much benefit to flushing the pipeline on system calls. Of course, VMware writes some of the most horrifying code with regards to security, so I figure they should probably just flush the pipeline and take the performance hit. There's no chance they can possibly get access control right.

Web browser vendors need to update their JavaScript JITs to explicitly avoid producing code that can exploit this. This is very doable, but every browser vendor will take a pretty serious performance hit. Stack on top of that the issues regarding WebGL and WebCL, and it could be a difficult challenge. Either way, there's no possible reason we should have a problem ensuring that attacks can't be launched from websites.
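The shape of the fix those JITs emit can be modeled in a few lines of Python (illustrative only; real engines do this in generated machine code, along the lines of the Linux kernel's array_index_nospec helper):

```python
def index_nospec(idx, size):
    """Branchless clamp: returns idx when idx < size, else 0.
    Because the result comes from arithmetic rather than a
    (mispredictable) branch, a speculatively out-of-bounds index
    can never reach the dependent array load."""
    mask = -(idx < size)  # True -> -1 (all bits set), False -> 0
    return idx & mask

def safe_load(arr, idx):
    """Architectural bounds check, plus the speculation-safe index."""
    if idx >= len(arr):
        return None
    return arr[index_nospec(idx, len(arr))]
```

The bounds check still exists; the masking just guarantees that even if the branch is mispredicted, the index fed to the load stays in range.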

Server managers need to turn on Windows SmartScreen or similar to ensure that they don't run stuff with the exploit present. As with other naughty software, a well-placed time bomb should trick security labs everywhere. Sandboxes have tried moving the clock forward to trigger time bombs for ages, but the naughty software only needs to explode during a window of time to get around that.

Anti-virus needs to be up to date.

I in no way blame Intel, TI, ARM or any other hardware vendor for this cock-up. This is 1000% Microsoft, Linus, etc... and even then I don't blame them. I had to update 4 operating systems I've written to flush the pipelines between threads following this exploit. It was my choice in the first place to skip cleaning up my shit between syscalls.

Now, AMD-style memory encryption IS NOT!!!! read... IS NOT!!! a solution to this. I have over 100,000 lines of code in the project I'm working on right now. It's 100% multi-tenant, it's all in a single process, and it has no separation or possibility of separation on AMD processors via memory encryption. In fact, if I tried using that feature, it would be an absolute cluster-fuck.

I have gone back to update my code to handle role based permissions a lot better.

So... in the end... these are not processor-based security vulnerabilities. We simply got a bonus performance boost by coding operating systems badly for a long time. We've now lost part of that boost... but there's absolutely no reason operating system developers can't design solutions to identify when to selectively flush the pipelines. Then we'll get the performance back.

P.S. - I don't think QNX is having any problems because of this.


Laggard Cisco stumbles over, puffing: 'HyperFlex now supports Hyper-V'



So, Microsoft already ships Project Honolulu with RDMA-based hyperconverged storage as part of Windows Server 2016. And Storage Spaces Direct is non-proprietary, works on every vendor's servers, etc...

Hyperflex is a hyper resource hog. It can do 40Gbit, but so can WSSD with about 1/100th the overhead due to the RDMA support. Oh... let’s not forget large scale storage tiering, massive data scaling, etc...

Add to that support for Microsoft network controller and you can do everything Hyperflex does better.

So buy some C220 M4 or better servers for compute and hot storage. Then buy some C3260s for cold, colder and damn near frozen storage. Then install Windows Server 2016 and project Honolulu and skip using storage products that offer absolutely nothing useful at a massive additional cost.

Oh... Hyperflex is compatible with precisely nothing when it comes to backup. You have to run sector by sector backups of hard drive snapshots.

Cisco does make the absolute best servers for enterprise data centers. But Hyperflex, UCS Director, UCS central, UCS Manager, Cloud Manager, etc... they’re absolute shit.

Look at the latest version of UCS Manager. Cisco can’t even get the right box highlighted when navigating servers. No progress bars on software uploads... no status report at all during firmware updates that cause the fabric to reboot.

The only way UCS Manager is usable anymore is through the command line or the APIs, and the APIs are frigging horrifying messes of “it’s kinda like SOAP”.

Thankfully, they support Redfish on rack servers which completely eliminates the need for horrors like UCS Manager and UCS Central.

Honolulu is close to being fully integrated with Redfish, so bare metal will be 100% managed by a company who actually writes software.

To be fair, UCS is 100000% better than most competitors, if for no other reason than its ability to manage all devices from all vendors through a single API. So, creating a RAID is done through Cisco’s code and doesn’t require trying to hack your way into the RAID controller. Ethernet settings are part of Cisco’s code and don’t require booting into a 10-year-old version of Windows to run an 8-year-old Java tool to maybe perform a network BIOS update (hello HPE).

Hyperflex is just an amazing waste of money if Hyper-V is there. It’s great for platforms like VMware, which actually thinks storage, networking, and management are optional.


What do people want? If we're talking mainstream enterprise SATA SSDs, reliability, chirps Micron


Reliability and endurance aren't really necessary in an enterprise

As soon as you stop using legacy storage systems like SAN and DAS, the fact is, I'd much rather have cheap and maybe fast.

Using modern file systems, you can easily scale out. So, having many small servers managed by a system like Ubuntu MAAS delivering scale-out is far better than having a few massive servers with lots of storage where one failure kills everything.

If you're running SQL servers, then scale out. Drive, system, and data center failures don't really matter. There are always at least 3 copies of every piece of data. If there are ever only 2, the system makes a 3rd automatically.
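The repair loop behind that claim is simple enough to sketch in Python (a toy model; shard and node names are made up, and "copying" is just bookkeeping here):

```python
REPLICATION_FACTOR = 3

def repair(placements, live_nodes):
    """For each shard, drop replicas that lived on failed nodes, then
    copy the shard to spare live nodes until 3 replicas exist again.
    `placements` maps shard name -> set of node names (toy model)."""
    for shard, nodes in placements.items():
        nodes &= live_nodes                  # forget replicas on dead nodes
        spares = sorted(live_nodes - nodes)  # deterministic candidate order
        while len(nodes) < REPLICATION_FACTOR and spares:
            nodes.add(spares.pop(0))         # "copy" the shard to a new node
    return placements
```

Run it after every membership change and the cluster converges back to three copies of everything, which is why individual drive failures stop mattering.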

If you're running NoSQL... you would never ever ever run a SAN or NAS. It is possibly the worst idea in history to do so.

If you're running Blob... you would scale out and use a file system which shards copies and guarantees at least 3 copies at all times.

If you're running log storage, you'd use a system like FluentD which would scale out using sharding.

If you absolutely have to use something like NFS or SMB, you'd use scale-out servers via pNFS or Windows Scale-Out File Server.

If you absolutely have to run some block storage system, you can use scale-out iSCSI such as StarWind or Datera. In fact, StarWind is great because it can give you scale-out NFS on Windows Server.

It's far smarter these days to use strictly scale-out systems, since technologies such as NVMe and Fibre Channel fabrics can no longer deliver the performance. When they do, it comes at a ridiculous cost which makes no sense.


big, is nice but no really important.

fast is nice, but once you get away from keeping everything you ever owned on a single SAN, it's not that important

reliable.... not really important since hard drive failures in modern storage don't matter

cheap.... this is the thing. $50-$300 of storage per storage node is pretty good.

I'm standardizing now on 120GB mSATA drives for the enterprise. The prototype has nine Banana Pi nodes with one drive each and gigabit Ethernet in between. I'm hoping to manage 100,000+ users on the platform. Since we're moving to 100% transactional, we're considering a few more nodes with 12TB spinning disks as cold storage. If we need more performance, we might add a few more nodes, but I have no idea how we'll ever use that much capacity. We've already over-provisioned by at least three-fold.


Samba 4.8 to squish scaling bug that Tridge himself coded in 2009

Bronze badge

Re: Samba is still relevant? Yea!

I'm not sure whether I want to agree with you because you're right and pragmatic or disagree with you because you simply shouldn't be right :/

There is no particular reason why object storage systems have to be all-or-nothing solutions. By employing virtual file systems (basically how OneDrive and Dropbox integrate with Windows), it should be possible to support random access within reason. The S3 API has grown into something of an unmanageable beast, but it does have random access abilities. There is no particular reason why a virtual file system couldn't be implemented which supports mapping remote files.

An example would be connecting to a shared OneDrive folder marked as "Online use only", with requests then passed over the API. SMB is substantially more efficient for this purpose, but at least in my experience... the most common use for large files these days is ISO files and software installations.

ISO files can be easily mapped by the systems that use them as iSCSI, which is actually still quite a bit more efficient for this form of media than SMB. Of course, security becomes a concern, as iSCSI pretty much tops out at CHAP. However, iSCSI over IPv6 can be a big improvement when using IPv6 security. A better RBAC solution could of course be warranted. iSCSI also has pretty good directory services if iSNS is configured appropriately.

As for installation media... I can safely say that I've found myself far too often using USB drives in recent history for lack of a good remote file system solution. Again, this could likely be resolved using S3 random access and virtual file system drivers. I know there are a few commercial ones for Windows out there now, and a quick search on Google found some "work in progress" open source ones as well. I don't know whether they support random access, especially since S3 generally isn't used on premises, but it would be great if they did.
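As a rough sketch of the concept, here's how a virtual file system driver could translate seeks and reads into S3-style range requests. The object store below is an in-memory stand-in I've invented for illustration, but real S3 honors the same inclusive `Range: bytes=M-N` semantics on GetObject:

```python
class FakeObjectStore:
    """Stand-in for an S3-style store; real S3 accepts the same Range header."""

    def __init__(self):
        self.objects = {}

    def put(self, key, data: bytes):
        self.objects[key] = data

    def get(self, key, byte_range=None):
        data = self.objects[key]
        if byte_range:  # e.g. "bytes=4-8", inclusive like HTTP Range
            start, end = byte_range.split("=")[1].split("-")
            return data[int(start):int(end) + 1]
        return data


class RemoteFile:
    """File-like object a virtual file system driver could expose."""

    def __init__(self, store, key):
        self.store, self.key, self.pos = store, key, 0

    def seek(self, pos):
        self.pos = pos

    def read(self, n):
        # Fetch only the requested window, never the whole object.
        chunk = self.store.get(self.key, f"bytes={self.pos}-{self.pos + n - 1}")
        self.pos += len(chunk)
        return chunk


store = FakeObjectStore()
store.put("iso/install.iso", b"HELLO WORLD")
f = RemoteFile(store, "iso/install.iso")
f.seek(6)
f.read(5)  # random access without downloading the whole "ISO"
```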

HTTPS overhead would probably have a pretty severe effect on performance, but it would be a pretty good option from a security perspective. Unlike the security in most other protocols, TLS tends to be hardware accelerated at both client and server. It also receives constant updates when the client or server uses the OS libraries.

As for logs... yeh... Samba is amazing for that. I use it as a model in my own software development. Actually had to remove a pile of logs from my current development project recently since 99.4% of my CPU usage was actually due to excessive logging. But to be fair, all protocols should be implemented with a LOT of logging as an option. :)

Thanks for the comment... as I said... I believe you're right but wish you were wrong :)

Bronze badge

Re: Samba is still relevant?

What exactly are you talking about?

I said nothing about storing your files in the public cloud. You even quoted where I specifically said "Open Window Server, add the Sharepoint feature".

This means that instead of using the public cloud, you would host it in-house in the private cloud.

And public identity servers do make sense. You need to be identified from the outside reliably when you're using VPNs, Citrix, etc., and using a company which devotes massive resources to identity is logical. This allows you to always be up to date on security patches and whatnot. OpenID Connect, SAML v2.0, and a few others are extremely secure by nature. Then you can run federation services in-house, whether through Windows or a plethora of alternative options.

As someone with experience coding SAML and OpenID Connect identity providers as well as RADIUS and TACACS+ servers... I can safely say that I've almost never encountered anyone (with or without certification from respected vendors) who actually understands secure login. I've never met a network engineer with the first clue of how EAP actually works. I've never met a Windows Server "expert" with the first clue of how Kerberos works.

That said, I will gladly use a company such as Microsoft, Google, IBM or Amazon who have entire internal organizations of people with actual educations in these topics to provide and maintain identity.

As for NodeJS, NoSQL, etc... yes... these are great tools. I highly recommend against coding against proprietary systems like AWS Lambda, but I am fond of Microsoft Azure Functions since they are open source and can easily be hosted in house on Azure Stack.

As for SCADA on the network, control systems should definitely never be in the public cloud and should actually be 100% disconnected from any IP network that can be accessed from the Internet.

Oh... and when I finally settle my butt down and start coding today, I'll be working on a network management system for an offline network for a government organization. My normal customer list is primarily companies which are 100% offline. US DHS, DoD, several NATO militaries, national banks, etc. I live and die by FIPS140-2. And I am extremely security focused. And this is why I generally look for alternatives to file sharing protocols. They are generally designed for performance, not security. They are nasty gateways into networks since most often the only actual security enforcement in these protocols is within the operating system kernels themselves. Implementations of SMB like the one found in the Darwin kernel make me cringe in fear.

On these closed networks, we're investigating using Azure Stack as an option for identity. This will allow us to stay up to date on offline networks. Azure is among the most actively secured identity providers out there. As such, when Microsoft eventually makes Azure Stack capable of operating 100% offline, it will be an excellent option for identity. This is because Amazon, Google and Facebook are not likely to start shipping their IdP servers as a product any time soon. But by using Azure Stack in-house, it should be possible to have a department in charge of downloading and applying patches daily from Microsoft.

I only fear that "security experts" will start selectively choosing which patches to apply, and I hope Microsoft takes an "all or none" approach to it. More security problems have been associated with "super intelligent IT guys" selectively patching.

Bronze badge

Samba is still relevant?

So, the year is 2018. We have 10,000 solutions for "cloud storage" which make use of HTTPS based APIs for identity services as well as file and print services.

SMB has evolved at Microsoft as a protocol for providing back end storage sharding/replication as well as VM migration on closed networks with Azure/Hyper-V environments.

IPP is the preferred local printing protocol. mDNS is the preferred method of printer/service discovery.

Active Directory is practically dead as more and more corporate PCs are not even registered in the Active Directory. For example, I am waiting for my new PC at my new company to arrive; it will ship in April (a top-model Surface Book 2 15"). I specifically asked for it not to be domain joined. I don't need it. If I ever need domain joining, I'll use a virtual machine. I can access my mail through webmail. Besides, Outlook contacts suck, I much prefer the webmail contacts. I will however run Outlook on my iPhone.

So, file services. Every single company out there has Microsoft OneDrive for Business. Open Windows Server, add the SharePoint feature and let the users log in using OneDrive for Business to access their files.

File shares are a BAD IDEA!!!! They let viruses run rampant and they don't have transaction history. They're not based on blob storage. They're just an all-around bad idea. In fact, SMB should be disabled throughout the organization in favor of a locally hosted OneDrive or Dropbox or any other virtual file system with proper history and backup, as well as secured access through more advanced user authentication provided by proper secure identity providers...

So.. SMB is dead... ditch it, kill it, burn it.

The rare exception would be in video production. But even then, they're using SMB because they have nothing better to work with at the moment. The BBC created a system called Ingex a long while back which was basically the start of an amazing object storage system for production assets. It provided a Samba module that would allow the server to provide virtual access to different resolutions and qualities even though the media itself was stored raw. Someone should pick that up, standardize it, and replace the SMB with a virtual file system driver over HTTPS/UDP.

Anyway... I loved Samba and used to teach it as a course in the late 90's. I used it heavily for years in the early 2000s. But this is 2018... what in the world would you ever use SMB for anymore?


I thought there'd be more Instagram: ICT apprenticeships down 20% in five years

Bronze badge

Apprenticeships are ruining corporate IT

If you think an 18-19 year old is talented and smart enough to work for you... pay him/her to go to university.

The only reason that apprentice is sitting in front of you is that they knew they could skip getting a real education and take a shortcut to money and freedom. Pay them more than they would have made as an apprentice to study at a proper school: either a real university (preferred if they have true potential) or an ICT school.

There is a 99% chance you'll lose them. Because once they grow up a little and start actually learning, they'll realize they don't want to be a loser IT guy.

On the other hand... make them work for you over the summer and possibly an extra semester at some point. They'll be worth far more to you than they would have been as an apprentice.

If they aren't interested in going to further education, it means they lack commitment to themselves and their future. It means they want an easy way out.

IT is and always has been a bad idea. It was a replacement for using skilled and educated IS professionals. IT people click things together like Legos and always need more bricks. IS professionals design systems that, once they work, can continue working for decades to come.

You simply DO NOT WANT TO HIRE IT PEOPLE. If you need someone to work on your systems, hire educated Information Systems people. They take longer to get things done, but that's because they'll solve the problems instead of the symptoms.


Job ad for designer proves its point with MS Paint shocker

Bronze badge

Re: Meh...

I started with BASIC, debug.com and edlin way back when. I did demo coding in the glory days of counting clock cycles to blit in Mode X. I can implement Bresenham lines and circles from memory in pretty much any language.

Last night I was at work until midnight, for fun, writing hashing code to implement EAP for my homegrown RADIUS server.

I am more than happy to cut and paste from stack overflow. Especially in the rare cases I have to slop some shit together in Python because I’m on a Remote Desktop to a machine which has nothing but Python and a policy against installing software.

To be honest though, thanks to Stack Overflow, when I needed a quick and dirty fix for editing 48 Cisco configurations embedded as base64 in XML tags across 20 files, it took me 25 minutes to learn enough Python to write a short but functional program to do it. I did that last week and I hope to wait another year before writing more Python. But let's be honest, Stack Overflow is one of the greatest programming resources EVER!
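For the curious, that kind of one-off job looks roughly like this (the element name, config contents and substitution below are invented for illustration, not my actual files):

```python
import base64
import xml.etree.ElementTree as ET


def rewrite_configs(xml_text: str, old: str, new: str) -> str:
    """Decode each base64 <config> payload, apply a text substitution,
    and re-encode it in place."""
    root = ET.fromstring(xml_text)
    for node in root.iter("config"):  # element name is hypothetical
        cfg = base64.b64decode(node.text).decode()
        node.text = base64.b64encode(cfg.replace(old, new).encode()).decode()
    return ET.tostring(root, encoding="unicode")


# One file with one embedded config, for demonstration:
doc = "<devices><config>%s</config></devices>" % (
    base64.b64encode(b"hostname old-rtr-01").decode())
rewrite_configs(doc, "old-rtr", "new-rtr")
```

Wrap the same call in a loop over `glob` results and you have the whole 20-file job.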

I just wish there was a suitable replacement for Dr. Dobbs. We lost way too much when they went bust.


SPEC SFS 2014 benchmark smashed by storage newbie

Bronze badge

Re: Eh?

I actually switched recently to USB 2.0 from NVMe and FC. It turns out NVMe was horrifyingly slow, with maximum throughput of 1.98GB/sec on a PCIe x2 interconnect. Also, dedup became a massive bottleneck as the number of devices increased.

This is OK for things like 8K video editing and it's lovely for low latency. I would highly recommend a system like this for movie production where people are working in 8K 4:4:4 uncompressed. But no one would ever be stupid enough to edit in 8K uncompressed. You'd want to use a lossless codec with scalable and possibly temporal compression such as H.265 or J2K encapsulated in MXF. This means the machine you're editing on can make UUID requests for frames at a given resolution and color depth that can actually be edited in real time without hardware DSPs. Then the project can be sent to render and real time no longer matters.

You would never ever ever want to use a file server as a frame server as it doesn’t understand video. You want a frame server that actually speaks video. It saves MILLIONS!!!

In any case, when managing business data, images, etc., using map/reduce with Banana Pi and 256GB commodity SSD drives, I have cut my transaction processing times compared to massive storage systems like this by a lot. The reason is that I now have massively scalable storage. All my requests are intelligently routed and hot data is processed in-memory. With 2GB per node and 128 nodes, that means about 96GB of hot data. The hottest data is replicated across all nodes and all locations. For all other data, there's a minimum of three copies maintained at all times on at least two sites. We can also process stripes for large blobs, and we have several 12TB spinning-disk nodes per site.

So, we have performance that would make this system weep at how slow it is. But if you need high-performance, unstructured, random-access data, this system is almost certainly king. It's great for legacy technologies like virtual machines and raw video editing. It can also be useful for storage of scientific data sets for supercomputers. A great example would be storing data sets from the LHC.

If you're a business running anything other than high-frequency trading systems and a storage system like this sounds attractive, you're not managing your data; you're just over-provisioning in hopes that if you spend enough millions, you can avoid actually managing your data.


Destroying the city to save the robocar

Bronze badge

Bullshit... if you don’t mind

Autonomous cars are the best idea out there. As soon as Uber, Lyft or someone else can allow me to pay a monthly fee (within reason) for the ability to click “bring me here” and then send an autonomous vehicle with individual lounge pods of which there should be 6-16 per vehicle, I will leave my car parked in the garage and use it for the monthly grocery store trip to Sweden.

I and many others have no interest in manually operated vehicles. I believe that highways should be autonomous-car only. I believe streets like those in Singapore, where the street is off limits to humans and underground passages are available, are the solution.

I want rapid charger vehicles on the road so autonomous vehicles can be charged by magnetically linking to a vehicle in front of them while driving.

I believe cities must evolve. I also believe that the rubbish written in this article is written by some pissy “I want to drive my own motorized weapon” asshole.

It’s time for autonomous vehicles. It’ll take 10-20 more years to get something good.

Also... quit the bullshit about asking companies like Volvo, Ford, BMW, etc... about autonomous vehicles. They will all be gone before long. Their business model depends on 1.5 cars per western household. With autonomous vehicles, we should see more like one pod bus per 3-4 households instead. The pod bus will have few moving parts and due to lack of individuality, they will no longer be replaceable but instead repairable and upgradable.

There is no room left for traditional car companies in the future. We're done with that. It's time to move on. I am driving a BMW i3 and this car is proof that BMW will never survive the technology future. They can't even make the front hatch not open while the driver is sitting down with the key in their pocket. Let's not even talk about their software updating system. The f-ing car is internet connected and you have to bring it to a garage for security patches.

We’ll see real autonomous vehicle companies in the future. Probably one of the companies that will spin off of Tesla after Musk over-extends and bankrupts the company.


Another day, another Spectre fix slowdown: What to expect if you heart ZFS

Bronze badge

Re: True risk profile

This is the misunderstanding about the problem. NetApp, EMC and others don't understand it either. And AMD is affected as well.

NFS, SMB and web management systems are based on remote code execution. A well-crafted RPC call with a code injection is all that is needed to exploit all threads within a process on any platform of this type.

iSCSI and FC should be relatively safe, but they have their own problems for which you should avoid them.

Blob stores should be ok.


Hey Europe, your apathetic IT spending is ruining it for everyone

Bronze badge


My organization is moving away from large servers in favor of better code.

I believe Europe is generally far ahead of the US on IT spending. We adopt sooner and we learn sooner.

Let’s consider Cisco, VMware and Windows Server as a platform. This is because these are the prices I know best.

The goal: build a small corporate data center based on minimalistic best practices, able to ensure one server is operational and able to handle business at all times.

We need two data centers each configured as follows :

- 3 rack servers

- 2 spine switches

- 2 leaf switches

- 2 data center bridging switches

- VMware licenses (vCloud Foundation)

- Windows Server enterprise licenses.

Also, there needs to be at least two MPLS or dark fiber connections between data centers.

This design is barebones. It contains no applications, SQL Storage, blob storage, NoSQL. This is just the absolute base configuration of Windows Server, Active Directory, file Server, etc...

The cost of this in retail pricing (which will be about 40% less in reality) is about $1.6 million U.S.

If you build anything smaller than this, you should be in the cloud using an IaaS platform which provides this for you.

Let's also add that if you buy said system, it's not plug and play. You need a slew of IT consultants to build, configure and run it. It's a long-term project. Consider that TCO is measured in the millions over the years.

It’s far better to use smaller more classic platforms. In fact, I’ve spent a great deal of time evaluating IBM i for new projects. Eventually, I settled on using Raspberry Pi with Apache OpenWhisk. We’ll build all new systems using old-school CICS methodologies on OpenWhisk. We’ll start by adding proper C# support and building a role based access control system that meets our security needs.

So, we’ll go from about €10 million a year for new projects to about €2 million a year since we’ll stop spending money on physical hardware like servers.


Remember those holy tech wars we used to have? Heh, good times

Bronze badge

Re: NetWare or NT

They both had their strengths. NetWare was more stable in the beginning, but by NT 3.51 when Windows came into its own, people already chose their religion.

NDS was really great, but when Active Directory came along, most of us didn’t need all the features. When group policy happened, NDS was more of a burden than a help.

Novell was way too slow to adopt IP as well. IPX was great for LAN but wouldn’t scale for WAN. Multiprotocol router was too expensive too.

Eventually Novell forgot who they were and their file server was mediocre. Their print server was no longer needed. Their identity services were obsolete. They lacked group policy support. They didn't do the Internet.

Strangely, I was in an NDS training course for a new deployment two weeks ago. It’s still there and it’s still great. It actually can be used to make Linux manageable.

Bronze badge

Re: Seen loads

NetWare, NT, Lan Manager and others were all pretty good if the people on staff understood them.

Clipper and FoxPro and dBase IV were all quite good too.

DrDOS was great, but MS-DOS was good too. Once you installed PC Tools Deluxe they were all pretty good.

DESQview/X on one computer, Windows for Workgroups 3.11 on the other.

My big one was Microsoft C 5.1 vs. Turbo C++ v1.0

I used both, but couldn't do C++ on MS without Glockenspiel, and CodeView was a whore.

Bronze badge

Re: "something that isn't backed by anything of value can have value?"

By that, you would believe that Tether which is SEC approved and based in California and locksteps their currency to the USD and requires an actual holding of 1USD per circulating Tether would count?

I’m just chiming in to make noise.

Cryptocurrency can be legitimized. In fact, it could be the replacement for plastic and paper sooner or later. It is very likely a good solution in the long term. For me though, things like Bitcoin, Monero and others are a bit of a disaster.

Bronze badge

Re: Um....

My childhood 24/7 ballgags, brownie mix and clown porn.

(Pretty much the best movie quote of the 21st century)

Bronze badge

Re: No mention of systemd?

I like it :) But I have a use case for it.

I honestly don’t mind either way. I prefer using and developing for systemd. But that makes me special it seems.

Bronze badge

Re: "Religion gave way to pragmatism"?

Is it bad that I like systemd?

I actually left Linux a long time ago because of the impressive amount of stupid involved with /etc and others. It was a flaming shithole and still is.

systemd is a massive improvement and I am slowly moving back to Linux. In fact, I'm replacing a few dozen Cisco servers running VMware and Windows with a few hundred custom Raspberry Pis. I feel very strongly the move would have been practically impossible without systemd.

But I guess I’m not part of the masses. Oh well.


Security hole in AMD CPUs' hidden secure processor code revealed ahead of patches

Bronze badge

Re: BIOS updates? What BIOS updates?

And the reason for UEFI was good. BIOS required 16-bit code. BIOS was not a beautiful or secure thing. In fact, BIOS was a disaster.

Consider that x86 BIOS implemented a software interrupt interface which required chaining to support adding additional device support. Booting from anything other than ATA was limited to emulating a hard drive protocol dating back to the late 70s.

The total space available to implement boot support for a new block device was a few kilobytes and was a nightmare for updating.

UEFI is a glorious update, but it certainly could have been better. It is, however, hundreds of times better than BIOS ever could be. The question is whether hardening it is an option; there is no reason why hardening UEFI isn't possible. In fact, the main problem with UEFI is that system administrators are deprived of a suitable set of books, videos, etc., to make them competent on the platform.

Keep in mind that UEFI is based on platforms which date back to the 70s as well. We lived in the dark ages in the PC world for way too long. If you ever used a SPARC or a MIPS machine, you would know that the UEFI design is brilliant.


Here come the lawyers! Intel slapped with three Meltdown bug lawsuits

Bronze badge

Re: OK, I'll bite

Not only are the fixes coming through software, hardware fixes wouldn't work anyway.

So, here are the choices:

1) Get security at the cost of performance by properly flushing the pipelines between task switches.

2) Disable predictive branch execution slowing stuff down MUCH more... as in make the cores as slow as the ARM cores in the Raspberry Pi (which is awesome, but SLOW)

3) Implement something similar to an IPS in software to keep malicious code from running on the device. This is more than antivirus or anti-malware. It would need to be an integral component of web browsers, operating systems, etc. Compiled code can be a struggle, because finding patterns that exploit the pipeline would require something similar to recompiling the code to perform full analysis on it before it runs. Things like Windows SmartScreen do this by blocking unknown or unverified code from running without explicit permission. JIT developers for web browsers can protect against these attacks by refusing to generate code which makes these types of attacks possible.
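The index-masking pattern JITs can emit for that third option is worth illustrating. Conceptually (shown here in Python purely as an illustration; a JIT would generate the equivalent machine code), clamping the index with a bitwise AND means even a speculatively executed out-of-bounds access stays inside the table:

```python
def masked_load(table, index):
    """Clamp the index with a bitwise AND so that even a speculatively
    executed out-of-bounds access stays inside the table. Requires the
    table length to be a power of two."""
    assert len(table) & (len(table) - 1) == 0, "length must be a power of two"
    return table[index & (len(table) - 1)]


table = [10, 20, 30, 40]
masked_load(table, 2)   # in range: returns 30
masked_load(table, 6)   # "out of range" wraps to index 2 instead of leaking
```

There's no branch for the speculator to mispredict, which is the whole point.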

The second option is a stupid idea and should be ignored. AMD's solution, which is to encrypt memory between processes, is useless in a modern environment where threads are replacing processes in multi-tenancy. Hardware patches are not a reasonable option. Intel has actually not done anything wrong here.

The first solution is necessary. But it will take time before OS developers do their jobs properly and maybe even implement ring 1 or ring 2 to finally provide proper multi-level memory and process protection as they should have 25 years ago. On the other hand, the system call interface is long overdue for modernization. Real-time operating systems (and microkernels in general) have always been slower than Windows or Linux... but they have all optimized the task switch for these purposes far better than other systems. It's a hit in performance we should have taken in the late 90s, before expectations became unrealistic.

The third option is the best solution. All OS and browser vendors have gods of counting clock cycles on staff. I know a few of them and even named my son after one as I spent so much time with him and grew to like his name. These guys will alter their JITs to handle this properly. It will almost certainly actually improve their code as well.

I’m pretty sure Microsoft and Apple will also do an admirable job updating their prescreening systems. As for Linux... their lack of decent anti-malware will be an issue. And VMware is doomed as their kernel will not support proper fixes for these problems... they’ll simply have to flush the pipeline. Of course, if they ever implement paravirtualization like a company with a clue would do, they could probably mitigate the problems and also save their customers billions on RAM and CPU.

Bronze badge

Re: OK, I'll bite

I agree.

The patches which have been released thus far are temporary solutions, and in reality the need for them exists because the OS developers decided from the beginning that it was worth the risk to gain extra performance by not flushing the pipeline. Of course, I haven't read the specific design documents from Intel describing the task switch mechanism for the affected CPUs, but after reading the reports, it was insanely obvious in hindsight that this would be a problem.

I also see some excellent opportunities to exploit AMD processors using similar techniques in real-world applications. AMD claims that their processors are not affected because within a process the memory is shielded, but this doesn't consider multiple threads within a multi-tenant application running in the same process... which would definitely be affected. I can easily see the opportunity to hijack, for example, WordPress sites using this exploit on AMD systems.

This is a problem in OS design in general. It is clear mechanisms exist in the CPU to harden against this exploit. And it is clear that operating systems will have to be redesigned, possibly on a somewhat fundamental level to properly operate on predictive out of order architectures. This is called evolution. Sometimes we have to take a step back to make a bigger step forward.

I think Intel is handling this quite well. I believe Linux will see some much needed architectural changes that will make it a little more similar to a microkernel (long overdue) and so will other OSes.

I’ll be digging this week in hopes of exploiting the VMXNET3 driver on Linux to gain root access to the Linux kernel. VMware has done such an impressively bad job designing that driver that I managed to identify over a dozen possible attack vectors within a few hours of research. I believe very strongly that over 90% of that specific driver should be moved to user mode which will have devastating performance impact on all Linux systems running on VMware. The goal is hopefully to demonstrate at a security conference how to hijack a Linux based firewall running in transparent mode so that logging will be impossible. I don’t expect it to be a challenge.


Nvidia: Using cheap GeForce, Titan GPUs in servers? Haha, nope!

Bronze badge

No one mentioned cloud providers

Seems strange to me that no one here noticed that this is primarily directed at forcing Microsoft, Google and Amazon to buy server parts instead of consumer parts.

I’m pretty sure this is an effort by NVidia to

A) sell more data center GPUs

B) give Cisco, Dell and HP a business case to continue building NVidia mezzanines for their servers

C) force companies to pay for ridiculously overpriced technologies like GRID on VMware as opposed to simply using regular desktop drivers on Hyper-V, which is A LOT less expensive. And by a lot less, think in terms of about $100k for a small 200-user VDI environment... just for the driver licensing.

This isn’t targeted at small companies or users. This is targeted at companies like Amazon who are “cheating” NVidia out of probably a hundred million dollars a year by using consumer grade cards.


Kernel-memory-leaking Intel processor design flaw forces Linux, Windows redesign

Bronze badge

Counting chickens?

First, this is news, and while I don't buy into the whole fake news thing, I do buy into fantastic headlines without proper information to back them up.

There are some oddities here I’m not comfortable with. The information in this article appears to make a point of it being of greatest impact to cloud virtualization, though the writing is so convoluted, I can’t be positive about this.

I can't tell whether this is an issue that will actually impact consumer-level usage. I also can't tell whether there would actually be a 30% performance hit, or something more like 1% except in special circumstances. The headline is a little too fantastic, and it reminds me of people talking about how much weight they lost... when they include taking off their shoes and wet jacket.

Everyone is jumping to conclusions that AMD or Intel is better at whatever. Bugs happen.

Someone claims that the Linux and Windows kernels are being rewritten to execute all syscalls in user space. This is generally crap. This sounds like one of Linus’s rants about to go haywire. Something about screwing things up for the sake of security as opposed to making a real fix.

Keep in mind, syscalls have to go through the kernel. If a malformed syscall is responsible for the memory corruption, making a syscall in another user thread will probably not help anything as the damage will be done when crossing threads via the syscall interface.

Very little software is so heavily dependent on syscalls. Yes, there are big I/O things, but we're not discussing the cost of running syscalls, we're talking about the call cost itself. Most developers don't spend time in dtrace or similar tools profiling syscalls, since we don't pound the syscall interface that heavily to begin with.

Until we have details, we’re counting chickens before they’ve hatched. And honestly, I’d guess that outside of multi-tenant environments, this is a non-issue otherwise Apple would be rushing to rewrite as well.

In multi-tenant environments, there are three generations Intel needs to be concerned with.

Xeon E5 - v1 and v2

Xeon E5 - v3 and v4

Xeon configurable

If necessary, Intel could produce three models of high end parts with fixes en masse and insurance will cover the cost.

Companies like Amazon, Microsoft and Google, which may have a million systems each running this stuff, could experience issues, but in reality, in PaaS, automated code review can catch exploits before they become a problem. In FaaS, this is not an issue. In SaaS, this is not an issue. Only IaaS is a problem, and while Amazon, Google and Microsoft have large numbers of IaaS systems, they can drop performance without the customer noticing, scale out, then upgrade servers and consolidate. Swapping CPUs doesn’t require rocket scientists and, in the case of OpenCompute or Google cookie-sheet servers, shouldn’t take more than 5 minutes per server. And to be fair, probably 25% of the servers are generally due for upgrades each year anyway.

I think Intel is handling this well so far. They have insurance plans in place to handle these issues and although general operating practice is to wait for a class action suit and settle it in a fashion that pays a lawyer $100 million and gives $5 coupons to anyone who fills out a 30 page form, Amazon, Google and Microsoft have deals in place with Intel which say “Treat us nice or we’ll build our next batch of servers on AMD or Qualcomm”.

I’d say I’m more likely to be affected by the lunar eclipse in New Zealand than by this... and I’m in Norway.

Let’s wait for details before making a big deal of it. For people who remember the Intel floating point bug, it was a huge deal!!! So huge that after some software patches came out, there must have been at least 50 people worldwide who actually suffered from it.


Storage startup WekaIO punts latency-slashing parallel file system tech

Bronze badge

Re: Can't be an RTOS either

I remember as a kid, a school teacher challenged me and some friends to cut a hole in a business card large enough to walk through. After 10 minutes we were walking up to the teacher to explain it couldn’t be done. Then a friend turned heel, went back and made a series of cuts, and we walked through the hole.

This puzzle allowed me to be the kid with the scissors.

A real-time OS needs to handle “interrupts” as they come in. Code in an RTOS should never block and if disk access is required, the program should send a read request as a queued event to the disk I/O subsystem and receive the result back once it is finished. Compared to when I was coding for QNX in the 90s, this type of programming is extremely easy thanks to improved language support with extensions like lambda functions.

Now, the “time” aspect of RTOS generally would mean that as soon as a timer interrupt fires at a specific interval on the system’s programmable hardware timer, a deterministic number of cycles should pass before the entry point to the handler. This allowed “Real Time”.

If we eliminate the time aspect of the processes involved... which a storage system wouldn’t be concerned with anyway, it is possible to develop an operating system that behaves as an RTOS if “interrupt” handling can be made deterministic and all code is written as non-blocking, handling all protocols using lambdas and “async programming patterns”. So, as opposed to a general purpose OS which task switches without reason, an RTOS would schedule everything based on prioritized event queues.
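The scheduling model described here (non-blocking handlers posting lambdas to prioritized event queues) can be sketched in a few lines. This is a toy model of the pattern, obviously not an RTOS:

```python
import heapq

class EventLoop:
    """Minimal prioritized event queue: handlers never block; I/O results
    come back as newly posted events rather than blocking the caller."""
    def __init__(self):
        self._queue = []   # (priority, seq, callback)
        self._seq = 0      # tie-breaker keeps FIFO order within a priority

    def post(self, priority, callback):
        heapq.heappush(self._queue, (priority, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._queue:
            _, _, cb = heapq.heappop(self._queue)
            cb()   # handlers must return quickly and never block

loop = EventLoop()
order = []
# A "disk read" completion is queued at low priority; its result arrives
# as a lambda posted back to the loop, never as a blocking call.
loop.post(5, lambda: order.append("disk-read-complete"))
loop.post(0, lambda: order.append("interrupt"))   # high priority runs first
loop.run()
print(order)  # ['interrupt', 'disk-read-complete']
```

The point is that the scheduler only ever dispatches from the queue, so priority inversion from arbitrary task switching never enters the picture.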

So, the main issue is how to process interrupts in real time within a user-space process.

1) Implement interrupt priority for waking the scheduler of the process on incoming interrupt. This can be done in a few creative ways, a kernel module is one method, alternatively setting processor affinity, to reserve 100% of a CPU core and running a spin-lock sleep could work. There are dozens of ways to do this.

2) Expose a second level MMU to the app allowing the process to directly handle protection faults. This is the obvious method of providing deterministic interrupt handling.

3) Expose the virtual NIC as a PCIe device to the app. Then when the app is managing its own MMU with its own GDT and IDT, the PCIe adapter can trigger faults for MMIO interrupts within application-managed memory space. So as the RTOS app sets virtual protection on an Ethernet memory address, it should signal the app in a reasonably “real-time” fashion.

So, while I was with you all the way up to writing a full page of text... I got to just in front of the teacher’s desk before turning back to cut a hole in a business card to walk through. As long as the events aren’t actually timer triggered, I believe an RTOS in user space via hardware virtualization is entirely reasonable. :)


The hounds of storage track converged and hyperconverged beasts

Bronze badge

Re: Integrated Systems

Umm... what do you mean high end?

Have you seen the performance numbers on HyperFlex, FlexPod, VxRail, etc.?

My goodness, SQL query times to be ashamed of. MongoDB performance to make a grown man cry. Hadoop performance which looks like someone is taking downers. Object storage numbers of a pathetic nature.

These are low end systems for companies who attempt to compensate for unskilled staff by throwing millions at Dell, Cisco and HP.

I’ll give you a good means of knowing your IT department is incapable of doing anything useful. They actually buy storage systems instead of database systems.

Another clue, they think in terms of VMs and containers. This is a pretty good sign they don’t know what they’re doing.

If you have 10Gbe or faster networking to the servers, you probably have no clue what you’re doing.

If you have servers dual homed to network switches, your system probably is designed to fail and outage windows are scheduled all the time for no apparent reason.

No... these are the low end systems for low performance through brute force. Unless you are performing oil discovery, mapping genomes, etc... they are about as low end as you can get. Of course, hyperconverged storage is scarily slow compared to specialized storage.

Look at scale out database solutions. They cost far less, require far less hardware and perform far better than what you’re used to. And no... you don’t need VM storage except for your legacy crap which you shouldn’t deploy more of anyway.

That said, VDI is a solution for super big servers... but even then, you shouldn’t have high storage requirements. The base VM should be replicated to every server in the pool and all user storage should be centralized (OneDrive for Business, for example). And for that, a simple Windows Server 2016 Core install with an Enterprise license and Kubernetes should handle it. Though Project Honolulu may automate it as well.

Again, no need for storage subsystems, SANs or anything stupid like that. It’s all about the databases.


Missed opportunity bingo: IBM's wasted years and the $92bn cash splurge

Bronze badge

Failure to attract new business

IBM is seen by guys like me in the outside world as unapproachable. I generally am responsible for $10-$25 million a year in solution purchase decisions. I have been very interested in IBM as parts of those solutions but regularly lean towards “build your own” solutions.

IBM should be working their asses off on making access to their technologies interesting to the Github world. For example, right now the world is moving towards FaaS services which has been IBM’s core business model for nearly 50 years and AWS and Microsoft will take that all from them. I have a small $1.8 million budget for a proof of concept over the next 18 months. If it pans out, it will become the core system for the next 10 years at a 2 billion euro multi-national company which is a subsidiary of a top-5 global telecom company.

I’m designing the system by using AS/400 as the architecture of our platform. Since IBM is something which feels unapproachable to me, I’ll invest in a small group of people to rebuild all the key components of the AS/400 platform instead of simply buying them from IBM. This could easily be worth $40-$50 million for IBM, but I wouldn’t even know who to call to open a conversation with them. It’s easier to buy some out-of-print manuals on the platform architecture and build it ourselves and open source it.


Microsoft Surface Book 2: Electric Boogaloo. Bigger, badder, better

Bronze badge

Re: Middle Income vs Middle Class

Haha... it all depends whether you’re poking fun or intentionally taking offense to the letter as opposed to the intent of what was written.

I sadly wish I could say that I didn’t have to sit in a room full of teenage boys comparing specifications of Jordans. I would gladly be classless if that is the cost. :)

To be fair, one pair was his big birthday present, and for the other pair he was allowed to spend a few hundred dollars of his confirmation money before putting the rest in savings. I don’t believe there is anything more than placebo with regards to “Tech” in sneakers. But he doesn’t ask for much, so as long as he brings home good grades and doesn’t go rotten, we don’t mind spoiling him occasionally.

Honestly, I’d buy him another pair if he brought me 10 A+ grades in Math, Science and Norwegian in a row. I figure if I buy his grades now, I won’t have to pay as much to support him later.

Bronze badge

Re: Buy the US 15” from Amazon or similar

Well played!

P.S. - You made my day :)

Bronze badge

Buy the US 15” from Amazon or similar

$3,200 gets you a GTX 1060 and a 15” screen. Then after shipping and VAT, it’s about £3,200.

I’ve had my Surface Book for two years now, top model first generation and I believe it was pretty cheap. I paid about £3500 for it after overnight shipping and taxes. It has been the best laptop I’ve ever owned and I still use it 8-18 hours a day. It’s my development and gaming PC and I’ve experienced some sleep issues with it, but never had a problem otherwise.

I owned a Samsung Series 7 Slate (the machine Windows 8 was designed for), a Surface Pro, Surface Pro 2 and Surface Pro 3. When I switched from Apple to Microsoft, my life has only gotten better with each generation. Altogether, I’ve owned about 50 laptops over the years and other than my Wacom Cintiq Companion, I have never been so happy.

Consider a machine which costs about $150 a month to own over a two-year life span. I spend more than that on cigarettes or coffee. The fact is, $150 a month isn’t even a rounding error. Add to that the tax deduction associated with it, which makes it closer to $135 a month.

I know there are people out there who think in terms of what it costs on the first day, but as a typical middle class household that takes home $10,000 a month after tax, even if we had to pay for it (as opposed to the boss), $135 of $10,000 for a tool you use for 90% or more of your work doesn’t matter at all.

P.S. I said middle class, not middle income. Middle income is politician talk for making the less poor of the poor feel like they’re not being screwed by a classist society. Middle class means white-collar home owners with college degrees. If you need help telling the difference, middle class teenagers own one or two pairs of Jordans. Middle income teenagers own 10-30 because they lack the class to manage their money.


Storage Wars: Not very long ago, a hop, skip and a jump away...

Bronze badge

My predictions for 2018-2020

The storage market will plummet because companies will stop building storage systems when they should be building database systems.

1) AzureStack, OpenStack, AWS, Google Cloud will start being seen at home for enterprises. These systems have been saving massive amounts of space by dumping traditional storage systems.

2) Storage will no longer be blocks or files or strictly objects. Systems will store data strictly as structured, unstructured or blob. All three of these systems scale out beautifully by simply adding more inexpensive nodes and eliminate the complexity and cost of fault tolerant designs. Records are sharded across nodes and, based on heuristics, stored a minimum of three times; data is archived to spinning disks as it grows colder or is marked deleted. As such, there is far less waste. Performance increases almost infinitely as more inexpensive nodes are added and map/reduce technologies give incredible performance. As a result, file and block storage will start being shunned as they are generally a REALLY BAD IDEA.

3) Stateless applications will make great progress taking over. All new applications will be developed towards cloud platforms and on functions or lambdas (FaaS). The cost difference is so incredibly drastic that companies will even start rewriting systems for FaaS. The reason is that FaaS uses approximately 10,000 to 25,000 times fewer computing resources and is far easier to operate. Applications running on crappy networks with crappy nodes repeatedly outperform the best Cisco or Oracle have to offer. As for storage, what's better? Two super fast 32Gb SAN fabrics shared across massive servers accessing gigabytes per second, or a well designed system running on 200 Raspberry Pis with map/reduce technology for distributing the load and gathering information as it reaches where it is needed? That's right, option #2 will destroy the absolute best that NetApp or EMC have to offer every single time.

4) Virtual machines won't go away and companies will still throw good money after bad to maintain and grow their virtual machine platforms without even knowing why. They'll look at VM and storage statistics and not bother talking with the DBA to identify whether there is something which can be done to cut CPU and storage requirements. Companies will learn that no matter how much more advanced data centers get, they always cost more... not less. The minimum entry price to a virtualized data center today (assuming standard industry discounts) is about $1.9 million. If you cut corners and spend less, you really really really should be in the cloud instead. It costs $1.9 million to buy 6 servers, 4 switches and all the applicable software licenses to run a VMware data center. A 6 blade data center is the minimum configuration for almost guaranteeing one machine is always up. Dumping VMware and using Hyper-V will cut the cost under a million, and using Huawei instead of Cisco can save maybe another hundred thousand... oddly, as a hard core Cisco guy, I'm considering Dell as a replacement for Cisco servers at the moment as Cisco desperately needs to rewrite UCS Manager now that data centers have completely changed.

I can go on... but storage as we know and use it today is going to be nearly dead because scale-out has come to SQL, NoSQL and blob. There just isn't any reason to invest in SAN or file storage anymore.
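To make the sharding point above concrete, a toy sketch of how a record's key can deterministically pick three replica nodes. Real systems use consistent hashing with far more care; this only shows the shape of the idea:

```python
import hashlib

def replica_nodes(key: str, nodes: list, copies: int = 3):
    """Pick `copies` distinct nodes for a record by hashing its key.
    Deterministic: the same key always lands on the same nodes."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    start = h % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(copies)]

nodes = [f"node{i}" for i in range(8)]
placement = replica_nodes("customer:42", nodes)
print(placement)
assert len(set(placement)) == 3  # three distinct copies, no coordinator needed
```

Because placement is a pure function of the key, any client can locate all copies of a record without asking a central metadata server.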


Guess who's developing storage class memory kit? And cooking the chips on the side...

Bronze badge

Perpetuating the problem?

Most enterprise storage today is devoted to virtualization waste. Using virtual machines to solve problems in a SAN oriented environment has been an absolute planetary disaster. Many gigawatts of power are wasted 24/7 hosting systems which run at about 1/10000th the efficiency they should operate at. NVMe shouldn't even have a business case in today's storage world.

This announcement was interesting because it appears that the solution presented has a focus on database and object storage. VMs and containers have their own section, but VMs and containers are yesterday's news. Companies deploying on VMs or containers obviously have absolutely no clue what they're doing. They're letting IT people build platforms for systems without the slightest understanding of what they're actually deploying. They're just focusing all their time on building VM infrastructures which are simply crap for business systems in general.

Let's make things simple. Businesses need the following :

- Identity

- Collaboration (e-mail, voice, video, Microsoft Teams/Slack, etc...)

- Accounting


- Business logic and reporting

Identity can be hosted anywhere but for the purpose of accounting (a key component of identity) it should be cloud hosted. In fact, there should be laws requiring that identity is cloud hosted as it is a means of eliminating questions of authenticity of submitted logs within the court systems.

Collaboration is generally something which should work over the Internet between colleagues and b2b... but again, for the ability to provide records to courts upon subpoena, cloud hosted is best for data authenticity sake. In addition, given the insane security issues related to collaboration technologies like e-mail servers, using a service like Google Mail, Microsoft Azure, etc... is far more sensible than hosting at home. No group of 10-20 people working at a bank or government agency will ever be able to harden their collaboration solutions as well as a team of people working at Google or Microsoft.

Accounting... accounting should never ever ever be hosted on a SAN or NAS to begin with. There are 10,000 reasons why this is just plain stupid. It should only ever be hosted on a proper database infrastructure employing sharded and transactional storage with proper transactional backup systems in place. Large banks can manage this in-house, but most companies run software designed to meet the needs of their national financial accounting requirements. Those systems need to be constantly updated to stay in sync with all the latest financial regulations. To do this, SaaS solutions from the vendors of those systems are the only reliable means of supporting accounting systems today. Consider that if the new U.S. tax code makes it through Congress, there will probably be millions of accounting systems being patched soon. If this is done in the cloud and there's a glitch, it will be corrected by the vendor. If there are glitches doing so in house (and there often are), data loss as well as many other problems will occur. Using systems which log data transactionally in the cloud as well as logging the individual REST calls allows data loss or corruption to be completely mitigated. This can't be said of on-site solutions.
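To show what "logging the individual REST calls" buys you, a toy sketch (all names invented for illustration) of rebuilding state by replaying an append-only call log after corruption:

```python
import json

# Append-only call log: every mutating REST call is recorded as JSON,
# so current state can be rebuilt from scratch by replaying the log.
log = []

def record_call(method, path, body):
    log.append(json.dumps({"method": method, "path": path, "body": body}))

def replay(entries):
    state = {}
    for raw in entries:
        call = json.loads(raw)
        if call["method"] == "PUT":
            state[call["path"]] = call["body"]
        elif call["method"] == "DELETE":
            state.pop(call["path"], None)
    return state

record_call("PUT", "/invoice/1", {"total": 100})
record_call("PUT", "/invoice/2", {"total": 250})
record_call("DELETE", "/invoice/1", None)
print(replay(log))  # {'/invoice/2': {'total': 250}}
```

The log, not the live database, is the source of truth; corrupt the state and you replay, which is exactly why transactional logging mitigates in-house patching glitches.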

CRM is a database. Every single piece of data stored in a CRM is either database records or objects associated with database records. There is absolutely no intelligent reason why anyone would ever run a SAN to store this information. Databases and object storage are far more reliable. Using systems like those offered by NetApp, EMC, etc... is insanely stupid as they don't store data logically for this type of media. They've added APIs with absolutely no regard for application requirements. Consider that databases and object storage employ sharding, which inherently has highly granular storage tiering and data redundancy. The average company could probably invest less than $2000 and have a stable all-flash system with 3-10 fold resiliency and performance able to shake the earth that an EMC, NetApp or 3Par stands on. We are doing this now with 240GB drives mounted to Raspberry Pis. Our database performance is many times faster than the fastest NetApp on the market today. We have far more resiliency and a far more intelligent backup strategy as all of our data is entirely transactional.

Then there's business systems. If you need to understand how these should work, then I highly recommend you read the Wikipedia entry on AS/400. Modern FaaS platforms operate on the exact same premises as System/36, System/38 and AS/400. You can run the absolute biggest enterprises on a few thousand bucks of hardware these days with massive redundancy without the need for expensive networks or heavy CPUs. The cost is in the platform and maintaining the platform. Pick one and settle on it. Once you do, then you build a team of people who learn the ins and outs of it and keep it running for 30+ years.

As for Big Data: the only reason you need "storage class" anything here is that companies put too much on a single node. If you use smaller and lower powered nodes, you can build an in-house Google-style big data solution that far outstrips most systems available today in performance using purely consumer or IoT grade equipment. If you need this kind of storage, you have an IT team who hasn't the slightest idea how things like map/reduce work. Map/reduce doesn't need 100GbE or NVMe. It works pretty well over 100Mb and mSATA. Just add more nodes.
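A toy illustration of the map/reduce pattern mentioned above: local counting per "node" (the map step) and pairwise merging of partial results (the reduce step). A real deployment distributes this over the network, but the shape is the same:

```python
from collections import Counter
from functools import reduce

# Each "node" holds one shard of the text and counts words locally.
shards = [
    "faas scales out",
    "faas beats big iron",
    "scale out beats scale up",
]
mapped = [Counter(s.split()) for s in shards]   # map: runs per node, in parallel
totals = reduce(lambda a, b: a + b, mapped)     # reduce: merge partial counts
print(totals["faas"], totals["beats"])  # 2 2
```

Because the map step never leaves the node holding the data and only small partial counts cross the wire, the network never needs to be fast, which is the whole argument for cheap nodes.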


No one saw it coming: Rubin's Essential phone considered anything but

Bronze badge

Re: Google Play reports a mere 50,000 download of Essential's Camera app

And out of curiosity, how many of those downloads were to telephones on display in stores?

Bronze badge

Essential is missing something essential

A support infrastructure.

If you break the screen... where will it be replaced?

If you need support... which store will you visit?

Bronze badge

Re: Saw it coming, just didn't care.

I am using an iPhone 6s plus and have no intention of changing phones so long as there is no headphone jack. The headphone jack is a standard which allows me to use the same headphones on dozens of devices. And dongles don't work... at least not more than a few days before you have to buy a new one.

As for SD card... I can't go there with you. It feels too much like saying you want a floppy drive on an internet connected device. 256GB on phones these days... Do you really need more?


Pickaxe chops cable, KOs UKFast data centre

Bronze badge

Not entirely true

I worked as an engineer developing the circuitry and switching components for UPS systems running the safety systems at two nuclear facilities in the U.S. These systems delivered 360V at 375 amps, uninterrupted.

Rule #1 : Four independent UPS systems

Rule #2 : Two UPS off grid powering the safety systems. One at 100% drain, one at 50% drain

Rule #3 : One UPS being discharged in a controlled manner to level battery life and identify cell defects

Rule #4 : Recharge the drained battery

Rule #5 : Fourth UPS drain and recharge separately

Rule #6 : Two diesel generators off grid

This system may not guarantee 100%, but it is far better than five-9’s. There can be absolute catastrophic failure on the supplying grid and it does not impact the systems even one bit. This is because the systems are never actually connected to the grid. And before you come back with issues or waste related to transference, the cost benefits far outweigh the losses because the life span of 90% of the cells is extended from four years by an additional 3-5 years through properly managing them in this fashion. And the power lost at this level is far less expensive than replacing the cells twice as often.
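The rotation in those rules can be modeled as a simple cycle, each unit stepping through every role in turn. A toy model of the scheduling idea, not a controller:

```python
from collections import deque

# Roles from the rules above: full load, half load, controlled discharge, recharge.
ROLES = ["online-100%", "online-50%", "controlled-discharge", "recharge"]

def rotate(units, steps):
    """Step the role assignment forward; every unit moves to the next role."""
    ring = deque(units)
    history = []
    for _ in range(steps):
        history.append(dict(zip(ROLES, ring)))
        ring.rotate(1)
    return history

hist = rotate(["UPS-A", "UPS-B", "UPS-C", "UPS-D"], 4)
# After a full cycle, every unit has held every role exactly once,
# so each battery bank gets leveled and tested on the same schedule.
for unit in ["UPS-A", "UPS-B", "UPS-C", "UPS-D"]:
    roles_held = {role for snap in hist for role, u in snap.items() if u == unit}
    assert roles_held == set(ROLES)
```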

P.S. Before you call bullshit, there was extensive (corroborated) research at the University of South Florida over a period of 15 years on this one topic.


It's a decade since DevOps became a 'thing' – and people still don't know what it means

Bronze badge

Re: Nope.

haha What do you classify as a practitioner?

I know a lot of practitioners as well, and DevOps is the current name of the evolution of business software development, operations and process management. It's not new. We learn as we go and we improve. DevOps has nothing to do with writing software to perform the IT department's job. It has nothing to do with automating things like virtual machines and containers. It has nothing to do with any of that. It has to do with operations and development working together to ensure there is a stable and maintainable platform on which to build and maintain stable software through proven test driven development techniques.

Software defined and automation and all that crap has nothing to do with DevOps. You could of course use those things as part of the DevOps process.

The goal is to not need all the IT stuff to keep things running. You should be 100% focused on information systems instead. Start with a stack and maintain the stack. A stack is not about VMs and containers. It's about a few simple things :

- Input (web server)

- Broker (what code should be run based on the input)

- Processes (the code which is run)

- Storage (SQL, NoSQL, Logging...)

There are many different ways to provide this. One solution would be, for example, AWS; another is Azure and Azure Stack. You can also build your own. But in reality, there are many stacks already out there and there's no value in building new ones all the time. As such, while the stack vendor may employ things like automation and Kubernetes and Docker and such, they're irrelevant.
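That input/broker/process/storage split can be sketched in a few lines. All names here are invented for illustration; the point is only the shape of the dispatch:

```python
# Minimal sketch of the stack above: input -> broker -> process -> storage.
storage = []  # stand-in for the storage tier (SQL, NoSQL, logging...)

def handle_order(payload):
    # "Process": the code which is run for this input.
    storage.append(("order", payload))
    return {"status": "accepted"}

# "Broker": decides which code should run based on the input.
ROUTES = {("POST", "/orders"): handle_order}

def broker(method, path, payload):
    handler = ROUTES.get((method, path))
    if handler is None:
        return {"status": "no handler"}
    return handler(payload)

# "Input": what the web server would hand to the broker.
print(broker("POST", "/orders", {"sku": "abc"}))  # {'status': 'accepted'}
```

Everything a FaaS vendor sells is elaboration on those four boxes; the handlers stay stateless and all state lives behind the storage line.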

What we want is :

- The ability to build code

- The ability to test code

- The ability to monitor code

- The ability to work entirely transactionally

Modern DevOps environments are just a logical progression from classic mainframe development, adding things like build servers, collaborative code management, revision control, etc... They also add a role which used to be entirely owned by the DBA, going further to ensure that as the platform progresses, operations, the DBA and development work as a group so that we reduce the number of surprises.

Of course, you may know more about this than me.

Bronze badge

Re: Yawn...

Isn't it nice?

It's awesome to be in a world where you can have developers on staff who... if the code is slow will work with the DBA to make it better.

Imagine a world where the operators would see 90% CPU usage and be able to discuss with the developers what was going on and work together to identify whether it was bad code, an anomaly, etc... and then correct the problem? This generally happens because code which was originally intended for batch processing is reused without optimization for transaction processing. So, instead of being run once a night or once an hour, it is run a million times a minute. So, we then optimize queries and cache data if necessary and make sure the database is sharding the hot data where needed.
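A toy sketch of that kind of fix: a small TTL cache in front of an "expensive" query so a hot transactional path stops hammering the database. All names are invented for illustration:

```python
import time

_cache = {}  # query text -> (result, timestamp)

def cached_query(sql, ttl=60, _now=time.monotonic):
    """Serve a recent result from cache; only hit the database on a miss."""
    hit = _cache.get(sql)
    if hit and _now() - hit[1] < ttl:
        return hit[0]
    result = run_query(sql)
    _cache[sql] = (result, _now())
    return result

calls = []
def run_query(sql):
    calls.append(sql)   # stands in for the real (slow) database call
    return ["row1", "row2"]

print(cached_query("SELECT 1"), cached_query("SELECT 1"))
print(len(calls))  # 1 -- the second call never reached the database
```

Whether 60 seconds of staleness is acceptable is exactly the conversation the operators, developers and DBA need to have together.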

If you're not pissing away $1.9 million for :

- 6x Rack servers with 20 cores each, 256GB RAM each and 16 enterprise SSD drives

- 4 leaf switches, 4 spine switches, 4 data center interconnect switches plus all the 40Gb cable

- 2 dark fibers with CWDM for DCI

- VMware Cloud Foundation with NSX, vSAN, ESX/vSphere

- Windows Server Enterprise 2016

Which is pretty much the lowest end configuration you would ever want to run for a DIY data center...

You can instead spend the money on things like making software which really doesn't need systems like that. Run your generic crap in the cloud. Build your custom stuff to run locally. And keep in mind that the developers are able to run their systems on 10 year old laptops while they're coding. But the good news is that by dumping the data center and the staff to run it, they can now have new laptops and a better coffee machine. :)




Biting the hand that feeds IT © 1998–2018