* Posts by CheesyTheClown

402 posts • joined 3 Jul 2009


Is VMware the power it once was?

CheesyTheClown

I think loyalty was key

If you have a system which appears to work for you... why not stick with it? I personally recommend against wasting the time, resources, money, etc... on upgrading to VMware 6 (or the soon-to-be-beta 7), since it's really a whole lot of nothing particularly good. It's also just not as stable as the old versions.

Customer loyalty or "lock-in" for others is VMware's biggest advantage. They were there first and people have grown comfortable with it. It works for most people and there's no benefit to building something new if what you have works.

For new deployments, it's become very cost effective to use alternatives. Azure Stack, which most companies are already licensed for and which is therefore basically free, simply requires a few more machines.

Ubuntu has accomplished more than any other company with regard to making OpenStack an out-of-the-box experience. Unlike RedHat's usual "everyone wants Python and the command line" way of doing things, Ubuntu has delivered an application-oriented experience which feels similar to how we once felt about VMware for servers. Ubuntu might be a good solution to try for new systems, with VMware 5.5 good enough for legacy systems.

It's up to you... it takes a lot of resources to identify new technology and learn it well enough to comfortably use it. If the number of hours needed to learn the new product exceeds the number of hours you would save by using it, it obviously makes sense to stick with what you have.

I would recommend you consider taking a look at Azure Stack when it comes out though. If you can install and manage Windows, you can easily install and manage that.

P.S. Personally, I have never seen anyone do anything but lose money by using public cloud unless they were a service provider who would benefit from a global CDN.


Quick as a flash: A quick look at IBM FlashSystem

CheesyTheClown

Hmmm... But why?

SAN is something with a relatively short shelf life.

Consider first why SAN is interesting to begin with.

1) Physical machines (VM hosts) need to boot, preferably without local storage, so a block-based storage system is attractive. This means a single reliable 4-60GB image, mirrored for every blade/server, is desired. With thin provisioning, this works out to about 1.2GB of storage altogether.

2) VMware and storage guys seem to think that Fibre Channel is king. There is a lot of legacy here and almost never any sensible reasoning behind that thought process. Fibre Channel is generally pretty slow, but it lets people do things the way they always have. Most data centers based on FC almost never have NPV configured properly. Consider also that vHBAs are NEVER hardware accelerated; as such, block storage places a lot of silly, wasteful overhead on host CPUs.

Ok, so what's the alternative?

With VMware, you're typically screwed because VMware behaves like a prima donna regarding their driver development tools. Without paying $5000 and signing a ridiculous NDA which blocks open source, you can't use NFS on any home-built system. Also, it's become kind of a religious thing for VMware users that the system should always be SAN boot. This is wasteful and basically stupid.

In modern times, whether for cache or other reasons, servers should always contain local storage. The performance difference is huge. Simply adding a few local SSDs makes the servers scream compared to clogging a Fibre Channel pipe with swapping and nonsense reads. Therefore, there is no reason to boot servers from SAN. Instead, simply push an image of the latest boot to the blades via PXE. In fact, this makes management REALLY EASY. So bye bye block storage over a wire here. If you really need remote block storage, consider StarWind (much, much less expensive than DataCore, quite usable and more manageable). Or from Linux, just create a GlusterFS volume and use LIO to share it via iSCSI. I've also used LIO for FC, but SCST seems more reliable for that.
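For anyone curious what the GlusterFS-plus-LIO route looks like, here's a minimal sketch. The volume name, image path and IQN are invented for illustration, and the targetcli sub-commands are written from memory, so check them against your distribution's targetcli before trusting any of it:

    # Rough sketch: export a disk image living on a GlusterFS mount as an
    # iSCSI LUN through LIO's targetcli. Names, paths and the IQN are examples.
    import subprocess

    def targetcli(command: str) -> None:
        # One-shot targetcli invocation, e.g. "targetcli /backstores ... create ..."
        subprocess.run(["targetcli"] + command.split(), check=True)

    # Backing store: a file on the replicated GlusterFS mount
    targetcli("/backstores/fileio create name=vmboot "
              "file_or_dev=/mnt/gluster/vmboot.img size=40G")

    # iSCSI target plus a LUN pointing at the backing store (IQN is a placeholder)
    targetcli("/iscsi create iqn.2016-03.example.lab:vmboot")
    targetcli("/iscsi/iqn.2016-03.example.lab:vmboot/tpg1/luns "
              "create /backstores/fileio/vmboot")

    # Persist across reboots
    targetcli("saveconfig")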

Second, scale-out file servers have none of the limits on performance or scale that SANs do. I've been seeing a million+ IOPS on generic hardware for a while now. With 8 servers running 20 cores and 384GB of RAM each, plus hybrid storage on Windows Storage Spaces, you can fully saturate 8 40GbE interfaces per blade. Unlike this cute little IBM SAN, that's 320GbE x 8 servers, for closer to 100GB/s. Add enough SSD to each server and it will fly.

What's more, unlike IBM's tech, which requires hopes and prayers that Veeam will be nice to them or that customers will put up with crummy generic backup systems, Linux GlusterFS and Windows Storage Spaces are first-class citizens for backup.

The bigger selling point of DIY storage is that IBM now has something like 12 different storage systems. I have dozens of customers who constantly spend tens of millions of dollars just to upgrade storage... Why? Because vendors make something new and the old stuff doesn't matter to them anymore. A SAN loses 80% of its value the moment you place the order.


Mud sticks: Microsoft, Windows 10 and reputational damage

CheesyTheClown

Re: I'm a bit confused

I would agree with you, but these days he can't even do his homework without a PC. So he's been using his grandparents' computer and my parents have been paying for way too many repairs (we're in different time zones and frankly I hate remoting into a PC that's so loaded with malware that clicking start takes over a minute). So, it was just cheaper to get him a PC that was little more than functional.

CheesyTheClown

Re: I'm a bit confused

I honestly don't see this as an issue of stealing from the OEMs. This is more like what Cisco did when they made UCS. They tried to tell their partners what was needed to make data centers really work. The partners (IBM, HP, Dell) all told Cisco they shouldn't dabble in things they didn't understand and should just be nice and make network equipment. Then Cisco reinvented the entire server market by simply not selling servers. They instead worked towards selling data centers. Even now, Dell, HP and Lenovo have no idea what hit them. It's not that Cisco's sales team is so good, it's that Dell, HP and Lenovo just really suck at making data centers.

Now enter Microsoft, who year on year was losing market share to Apple, who clearly understood that power users were not important. Power users account for a very small percentage of the market. What was more important was fashion and function. Apple spent a decade making one high-volume device after the next. Apple never once released a powerful device; they instead turned Jony Ive into a brand and turned on the sex appeal. They hired marketing from Tag Heuer, Burberry and more to master the art of designing a top-selling fashion brand, so that simply owning one of these devices would make you special. They added features and presented them as if having access to them and using them would make girls or guys go weak at the knees to be near you.

And then they focused on building a market where the initial purchase wasn't even the sex appeal; the accessorizing was. There are 3-year-olds asking mommy for a new app or an in-game add-on. Apple mastered the art of making it irrelevant whether you buy the cheapest thing they sell or the most expensive; they would get $0.30 from this sale and $1.30 from that sale, and then stand on stage bragging to everyone that they made hundreds of billions of $$$ by charging what is basically a 30% credit card processing fee... and then they cut out the middleman by starting their own payment processing system, so now even VISA and the others won't see their 3%.

So... here comes Microsoft, who realized that Windows could never have sex appeal if you leave it up to vendors like HP. The problem with the PC world is that the PC vendors need to sell PCs. As such, if they deliver someone else's platform like Microsoft Windows or Android, their only path to profitability is through repeat hardware sales.

This isn't like the Android phone world, where Samsung can make a profit by selling Google searches after the phone ships. HP, Lenovo and Dell have absolutely no way to make a profit following the sale of a PC to a customer. Microsoft has the option to insert music and video stores and such, but the PC vendor is screwed. If they can't sell you another PC, how will they make a buck?

So, the big PC vendors are terrible at sales. There was a time when there was a Gateway 2000 store in nearly every town and you could pay premium prices to buy a computer directly from the vendor. This meant the vendor could actually earn some money for supporting the computer, or at least not have to split the warranty cost with a store like Best Buy. It didn't work because it just wasn't the right time, and frankly Gateway 2000 overextended; they were basically selling other people's stuff and the margins weren't good enough, etc...

So Microsoft comes along, opens stores and sells their own computers, and while they do show off computers from their OEMs, no one visits a Microsoft store to buy a Dell or HP. You go there to buy a Surface because it's clean and sexy. It's also supported: even now, I have no problem getting support for Surface Pro (version 1) devices from Microsoft, and I still get firmware updates for it. So instead of some crap vendor like Dell, HP or Lenovo selling me a computer and basically telling me to buy a new one the second I need a new BIOS or driver, Microsoft is supporting their devices long term.

Microsoft is still missing one major component... marketing. They can't keep sending Panos Panay onto the stage like that. This guy is "soooo local" in the sense that he is REALLY REALLY Seattle. Watching him makes me want to gore myself with an over-sized corkscrew. Steve Jobs was said to practice his speeches dozens, maybe even hundreds, of times before getting on stage in front of people. He would get every word just right. Panos Panay is as big a disaster on stage as Steve Ballmer. You need someone up there who can sell sex. Panos Panay should be selling pizza. I'm sure he's great at what he does... it's just that some people should never be broadcast around the world. I can only assume that his creepy Seattle behavior lost Microsoft millions of sales because people were creeped out by him.

Microsoft should dig up some people who are great on camera and stage and make them co-VPs of their divisions and make their jobs nothing more than presenting the next products. Give them 18 months from release to release to do nothing more than stand in front of mirrors and perfect the art of selling Microsoft's next big thing. These people should be pretty but not too pretty. They should look like the person everyone wants to be or be with.

Basically, Microsoft needs to learn how to market fashion. They've now entered the fashion business; it's time to learn from the best and compete. Jobs is dead, but there are thousands of hours of video of him out there.

CheesyTheClown

Re: I'm a bit confused

Oldcoder,

I'm not sure how that would save me any money. Software licenses amount to peanuts in the cost of a DC these days. Using retail pricing, it's something like $12,000 per server for all Windows Server 2012 R2 licenses, including the guests running on them. If you buy servers with 72 cores and 6TB each, and you put 3 into DC A and 3 into DC B, that makes a total of roughly $96,000 for operating systems. And nobody pays retail, so it's more like $40,000 instead.

The reason you use Hyper-V instead of KVM is that Windows is paravirtualized on Hyper-V, and RAM consumption is more than halved on Windows guests. That saves nearly $2 million on RAM. Add the fact that Server 2016 appears to paravirtualize the scheduler as well, which saves maybe $500,000 on CPU. Then add that, for interconnect, Hyper-V has full RDMA support over Ethernet or InfiniBand, and you can save a ton of time migrating virtual machines.
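To make the back-of-the-envelope math explicit, here's a tiny sketch using the rough figures above (none of them are quotes, and the implied RAM budget is my own inference):

    # The rough numbers from above; nothing here is a vendor quote.
    servers = 6                  # 3 in DC A, 3 in DC B
    retail_os_total = 96_000     # all Windows Server 2012 R2 licensing at retail
    street_os_total = 40_000     # "nobody pays retail"

    # Hyper-V paravirtualization more than halves RAM use in Windows guests.
    # If halving RAM saves ~$2M, the implied RAM budget was ~$4M (my inference).
    ram_saved = 2_000_000
    implied_ram_budget = ram_saved * 2

    print(f"OS licensing: ${retail_os_total:,} retail, ~${street_os_total:,} street")
    print(f"RAM: ~${ram_saved:,} saved from an implied ~${implied_ram_budget:,} budget")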

I love KVM and OpenStack, but I have a business to run. OpenStack and Azure have grown into amazing platforms. The main difference is that I simply don't want to hire 50 people to sit around dicking with making my systems run on OpenStack. I'd prefer to just download prebuilt, supported and maintained apps from a vendor who has ten thousand customers depending on that app, so when something goes wrong, the vendor works out not just a quick and dirty hack but a solution that has had some quality control applied to it.

I suppose the comparison is the C programmer versus the C#/Java programmer. The C programmers want to reinvent the wheel every time they start a project. They want absolute control over every function, and they're going to write a 5-million-line system using all kinds of archaic methods, ending up with things like GObject from GLib because, instead of using an object-oriented language, they'll reinvent the entire concept of an object and even emulate the C++ vtable using macros which are damn near un-debuggable. Their refusal to depend on a garbage collector requires them to invent a memory management system which is slow as hell, because clean-up always has to run immediately instead of during idle cycles. Their code will take 3 years and 25 developers working full time to maintain, and they'll constantly suffer from memory leaks, performance issues, lock-ins blocking refactoring, etc... They will claim that their method of programming is pure and better, and the guy paying them to write it will think he's getting state-of-the-art awesomeness but in reality is just getting a headache.

The C#/Java developer will focus on the job which needs to be accomplished and make use of pre-built libraries. A great C#/Java developer will make use of asynchronous tasks, manage the garbage collector and limit wasteful tree walking when it's not needed. They'll use proven and optimized classes and interfaces which are used by a million other developers and are hardened like a rock. While primitive operations in the C code run much faster, overall the C#/Java code accomplishes every business task faster, as optimized C# and Java code tends to use far less complex algorithms, and thanks to threading and delayed memory cleanup, the work can spread across cores cleanly. Most importantly, they'll deliver 50,000 lines of clearly written, maintainable code. Oh, and they were in production in a month or two.

There are simply no platforms currently on the market which actually facilitate delivering business on OpenStack with KVM and/or Docker. The tools themselves exist, but the packages simply don't. Azure is a hair better, since Microsoft has worked REALLY REALLY hard on making "applications" or packages easy to move and deploy in multiple places. The biggest shortcoming in my eyes currently is that I can't see how to legally (within the license) distribute Windows Server as a guest as part of a package unless you're Microsoft.

Oh... and Hyper-V runs RedHat (which is obscenely expensive) as a guest. Add to that that Windows has full Docker support as well, and I don't really care what the guest OS is; Hyper-V and Azure have me covered. Unlike RedHat, KVM, etc., who will probably keep focusing on 10,000 new ways to deploy new apps, I'm pretty sure the Azure platform is pretty stable now. I'm comfortable knowing that Azure Stack, once officially released, will have excellent app support and that most vendors who offer their apps in the Azure cloud library will support Azure Stack as well.

It would of course be nice if Amazon or Google released their platforms for private clouds as well, but I don't see that happening. Since neither I nor any of my customers can legally use public cloud, or any cloud not on their physical premises or connected to any Internet connection at all, those systems are simply not in the cards for us.

CheesyTheClown

I'm a bit confused

I do experience some hiccoughs, but I haven't noticed anything fundamentally wrong with Windows 10. I really miss the Windows 8 start menu, but other than that, it's just another OS. It runs, it's fast, it's mostly easy to use. It's generally, for the time being, a much more user-friendly experience than Mac OS X, but that's mainly because Apple treats OS X like a bastard stepchild these days.

I was under the impression that Windows 7, 8, 8.1 and 10 were killing the PC industry since there hasn't been a real need to buy a new computer anymore. Once you have an SSD, it works on almost everything nicely.

I just bought a Lenovo IdeaPad ($150) for my nephew, who felt that smashing the gaming laptop I bought him during a hissy fit was a good idea. It's an Atom with 2GB of RAM. While it will never run Crysis, Windows 10, Chrome and Office ran like champions on it. If anything, PC downgrading has made a lot of sense since Windows 7 and SSD drives came out.

So, PC companies are whinging because Microsoft has reached the point where optimizing and tuning is their focus, meaning today's computers will likely run just fine for another 5-10 years. The vendors are all upset because their glorious market was based on new features in software needing new hardware. Well... that time is over. Not only that, but now Microsoft is making their own PCs, which is great because it means there is finally a reference platform. If the other guys can't keep up, screw 'em.

Now I'm waiting for Microsoft to finally make servers. I would drop $5 million on a new data center if I could get Azure Stack in a box.


Dodgy software will bork America's F-35 fighters until at least 2019

CheesyTheClown

Re: Can someone please...

How about Norway? We bought 50 of these and were told "It's the cost of membership to NATO... We don't need them or want them and we can make remote controlled drones for 1/100th the price but America says we have to buy them or build our own damn military".

I personally would prefer to see a small fleet of remote-controlled drones piloted by some gamers who can each control 10 at a time. It's not like we actually need pilots in the cockpits. The pilots in the F-35 don't really see what they're fighting or aiming at; we simply moved the game console into the cockpit and spent trillions to do so. Every aspect of the F-35 is electronic. There isn't a single direct connection between the pilot and the plane. So far as I can tell, most of the flaws related to the F-35 come from the human actually being in the plane. So why not retrofit some C-130s with consoles where the drone pilots sit, to keep the latency relatively low? Then drop drones out of the back of the cargo planes and field 100 smaller planes for each F-35.

I guess it's the bravado factor. The rednecks running the militaries think that you have to feel the G-forces to be able to fight. It's pretty funny to think that a few guys in a maker space and some talented video gamers could probably outdo the biggest defense contractors and the fanciest pilots.

A guy at my local maker space has been doing some great projects with small scale jet engines. It might be fun to see what would be born if he made a drone :)


Hackers crack OS X, Windows, web browsers' security to net $460,000

CheesyTheClown

Re: MS edge

It's theoretically impossible to secure a web browser.

In all honesty, Edge (and I'm no fan of Edge) is holding up pretty well for software which is so massive and so new. Having worked for years as a browser developer, I can't possibly imagine many good ways to both implement functionality as well as harden a browser except through a reactive method of closing holes once they are found.


Microsoft SQL Server for Linux is a brilliant and logical idea

CheesyTheClown

Re: Why

I wondered the same. I don't think it's that simple. SQL servers are made up of three distinct components (whose proper names I may be clueless about): the front end, the query engine and the storage backend. This is common to most systems so far as I know. To achieve redundancy and scalability, it makes sense to have three or more of each type of node. This allows one node to be in maintenance while the two remaining nodes provide high availability. That's 9 nodes for a base configuration. It also means that platform-related performance issues are probably less relevant than the core pipeline structure of the SQL server itself. So I would speculate that performance should in theory be equally optimized for each platform, given how the SQL server must have been built to distribute workloads in such a farm.

There are likely many reasons to choose different SQL servers. This is similar to how I commonly use SQLite for local RAM oriented tasks vs. SQL Server for enterprise tasks.

Microsoft has a great strength in their SQL Server platform because it's impressively well documented, and a great deal of thought has gone into manageability as well as security and scalability. It's actually possible in an MSSQL environment to clearly calculate Big-O for different queries and stored functions; other engines are commonly a black box. SQL Server is also a bit of a beast that goes far beyond simple query processing and ISAM. It's more comparable to Oracle than to MariaDB. It has excellent blob storage and is actually well suited to object storage.

Does this mean that SQL Server is a clear winner over the open source alternatives? Probably not in many normal cases, but it is a system that could simply add up to a more agile platform overall if employed properly. Microsoft also offers superb, structured training for nearly all facets of SQL Server, which makes it very attractive to corporations. You can probably achieve the same things on other platforms, but operators, users and developers alike can learn nearly every component of SQL Server without just hacking around and googling it to death. That's worth A LOT.

As for a Linux version, I believe Microsoft must intend to deploy additional management tools and services on Linux. While SQL Server is probably best hosted on Windows, many products will benefit from having an MSSQL Express environment native to Linux, and Microsoft probably, and justifiably, believes a partial port doesn't make sense if they can productize and maintain on Linux as well. So I'm pretty sure the goal of this exercise isn't to build a mass-market product, but instead to provide the storage solution for some other cross-platform products.


No Kinetic energy at DataDirect Networks: Ethernet drives snubbed

CheesyTheClown

Poor design

The protocol was designed by developers for developers. Developers don't buy hardware; IT guys do. As a result, the protocol is scary and cryptic to anyone other than DevOps, and even DevOps have so many other things to work with that object/key storage is a less-than-optimal solution.

Seagate and others have done a very poor job of seeding drives to the DevOps guys, and as a result there is simply no interest. This is an awesome example of absolutely terrible marketing.


Cisco to partners: We're all doing services now – resistance is futile

CheesyTheClown

Cool!!!! Christmas early!!!

In two weeks I'll be delivering a full DNA workshop on Prime 3, ISE 2 and APIC-EM as a single product. Full network deployment with Plug and Play, Compliance Manager, Network Automation and Dot1x etc... It's gonna be awesome ;)


Cisco CTO: Containers will ride to private cloud's rescue. Oh yes!

CheesyTheClown

It's about cost

Public cloud makes sense because it's damn near impossible to get all the features and stability of a public cloud service like Amazon or Azure in house. Expertise is expensive, and IT staffing is a bloody nightmare because everyone has specialties; finding a data center expert is nearly impossible. I know I have never met one, and I train 300 "data center experts" a year.

Private clouds make sense when it becomes possible to buy a single finished solution which mostly updates and manages itself. Three solutions nearing this have been released this past year. Cisco's OpenStack solution is pretty close, but more or less useless unless you're only deploying containers and have no real-world needs. Dell and Microsoft's solution is excellent, but it's only a quarter finished and lacks support from a good organization (Dell sells servers; they suck at cloud).

If Microsoft ships servers (could happen), game over. Private cloud rocks. Of course, Cisco and Microsoft could make it happen too.


HTTPS DROWN flaw: Security bods' hearts sink as tatty protocols wash away web crypto

CheesyTheClown

Kyle Lady says...

Did we need a "Security Expert" to tell us that "We need to implement security"?

Ok... so step 1: if you want to secure your network, learn how to secure your network. Some numbnuts from a security firm can charge you $10,000 a day to run free tools off a Kali Linux download, print a report for you, tell you 10,000 places you need to fix... and, btw, sell you this tool.

Or you can do your jobs: run Kali yourself, fix the obvious, and mitigate the problem children, like appliances which have HTTP but not HTTPS because they run on 1KB of RAM (think about your PDUs).

Remember security experts don't know how your network works and they don't really care. They just run scripts, print reports and sell stuff. You can skip them and move on.


VMware licence changes put users on upgrade treadmill

CheesyTheClown

Disagree

With Microsoft we blame Satya or Bill

With Oracle we blame Crazy Larry

With VMware we have no idea who to blame. I'm pretty sure it's a team of guys wearing ties and one fairly attractive girl wearing shoulder pads who don't actually know what a VMware thingy is but think by saying words like synergize and pointing to nifty Gartner graphs they can run a tech company.

I can't really say for sure when VMware took the nose dive, but I think it was around the time EMC bought them. VMware propagated through the entire market like wildfire because it was the only functional choice. More and more people used it, and the people running the company treated it like oil or water... after all, if you make a bad business decision with oil, water or VMware, your customers will still be forced to use you, and you can always pump more, right?

So, for the last several years, while Microsoft and others have been working towards the goal of operating some of the biggest and most reliable data centers in the world, the VMware guys have operated with no leadership or direction. As a result, they make 10 competing and incompatible products whenever a new buzzword arises. They keep increasing their prices for products people already have and insist that bug fixes be paid for by buying entirely new infrastructure. They don't really innovate anymore; I can't think of anything they've done which is even moderately more than **snore** since VMware 4.

The #1 reason we still use VMware is that we can pretend we're running a 20-year-old server in a window, and that's easy. The other tools require us to plan and read something to make them work. VMware can be installed really badly in 30 minutes... and it will let you run it like that for years.

CheesyTheClown

AWESOME!!!

I LOVE IT!!! I just called my wife and let her know, "Baby, we're going to Disney!!!"

Thanks to the anti-upgrade called VMware 6, I moved away from VMware because of this specific nonsense.

1) Any version less than Enterprise Plus is a REALLY BAD IDEA!!!!

2) Networking isn't an add-on in the data center; it's a selling point. Microsoft and OpenStack both have networking, and they both did it REALLY REALLY REALLY well. NSX did it kinda OK, and VMware tells all my customers that in order to run kinda-OK networking, they have to pay almost 10 times as much for the license. NSX should be part of Standard.

3) Nearly half the new features in VMware 6 cause unfixable errors unless you intentionally deploy those features half assed.

So... I moved completely to Hyper-V, and while it hurt really badly at first, it's been by far the smartest move I've ever made. Now I go from class to class, company to company, government to government, and convince them to stop blowing their budgets on VMware and instead actually use the Microsoft Enterprise licenses they're already paying for. Like... "Why are you paying $7500 per CPU socket for VMware when your enterprise license from MS already gives you everything you need?"

Oh... you wanna know the best thing about switching to Microsoft? I was able to get updated drivers for all my hardware (a big problem on VMware lately) and build, test and troubleshoot my network in a clear, orderly fashion. When I domain-joined all my Hyper-V servers and Azure Stack, the CA simply pushed certs to all the servers, making certificate installation easy. Oh, and the management tools for Hyper-V are just plain better. As long as you avoid ever trying to deploy applications from within SCVMM, it's 1000 kinds of awesome.


Nearly a million retail jobs will be destroyed by the march of tech, warns trade body

CheesyTheClown

Why so long?

Using current tech, there is no reason you couldn't require people to scan their payment method (phone, card, etc.), weigh the shopper on the way in, require that they weigh the same on the way out, and have them scan the items necessary to make up the difference. Alternatively, RF-tag everything and just require payment for the items scanned.

It's really not hard to do with modern tech. The cost per RFID tag might well be less than the cost of a human to process the checkout.
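As a toy sketch of the weigh-in/weigh-out idea (all numbers are invented, including the scale tolerance):

    # Toy sketch of the weigh-in/weigh-out checkout. All numbers are invented.
    TOLERANCE_KG = 0.05  # allowable scale noise

    def checkout_ok(entry_weight_kg: float,
                    exit_weight_kg: float,
                    scanned_item_weights_kg: list) -> bool:
        """True if the scanned items account for the shopper's weight gain."""
        gained = exit_weight_kg - entry_weight_kg
        scanned = sum(scanned_item_weights_kg)
        return abs(gained - scanned) <= TOLERANCE_KG

    # Shopper gained 1.18kg and scanned items totalling 1.18kg -> passes
    print(checkout_ok(70.0, 71.18, [0.5, 0.4, 0.28]))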


FBI v Apple spat latest: Bill Gates is really upset that you all thought he was on the Feds' side

CheesyTheClown

Re: Wasn't Gates...

I use a term which I don't know if it's mine or someone else's. I call it "journalizing".

Journalizing is when a journalist performs an interview or "research" and digs up just enough information to create an article. For the purpose of "integrity", decent journalists will always be able to identify references and provide proof that they aren't actually lying about someone having said something. But they don't need to quote the whole thing.

As proof that my daughter has an excellent future in journalism: when she was three, she told the nannies at the day care that "Pappa drinks a lot". She was quoting her mother, who had told her that "Pappa drinks a lot of coffee". This meant we had to spend an hour in a meeting/counseling session because I, a person who drinks approximately 5 liters of beer annually (a little more than 1 bottle a month), was being accused of alcoholism. And of course, when a 3-year-old accuses you of drinking a lot, you can't ever argue against it, because then it's just denial.

I've worked with journalists over the years, and I've learned that you should pretty much never take anything they say at face value, because 99% of what you find interesting about it is journalizing. You always have to ask yourself, "Was this the whole quote, or a partial quote which sounds more fantastic when presented this way?"


Prison butt dialler finally off-hold after 12-day anal retention marathon

CheesyTheClown

Hmm... strange... no butt plug phones

I googled and searched on eBay for butt plug phones and came up with nothing.

It seems like prisoners would probably pay out large amounts of cigarettes to get their... hands (maybe other parts) on a phone shaped to be "safely stored" in their asses.

Maybe HTC should work on one of these... it seems to be an optimal fit for their business model... I think it might be best to refer to it as a disposable model... preferably not for recycling.


NASA's Orion: 100,000 parts riding 8 million pounds of thrust

CheesyTheClown

Re: The march of technology...

I have to admit, I was at one point in my life obsessed with the technology behind space flight. I join you in your sentiments and agree with you completely. While the technology has progressed substantially in new rocket designs, as a (self-proclaimed) technologist I am thoroughly disappointed with the rate of progress.

I believe the last 15 years of space travel have been a massive success for no other reason than the attempt to move past the nonsense of the archaic model of space-travel development. I believe strongly that we have made far more progress since the privatization of space flight became a reality. Companies like SpaceX and Scaled Composites, or even just John Carmack's endeavours into vertical takeoff and landing, have been a huge improvement.

SpaceX is likely to begin losing their agility before long. They are slowly letting NASA and the government in general have too much say in their development. Orion is a scary project because there is too much of the old model involved in their business.

I think that SpaceX and Bigelow combined could be wonderful. They could in theory open the path to making far better spacecraft... in space. Maybe within ten years we'll see companies like Virgin shuttling people to and from a space station where large-scale space transports can be constructed without first needing heavy lifters like Orion. It would be optimal to launch large spacecraft as pieces on top of smaller vehicles, assemble them in space and then launch them. The next logical step wouldn't be going to the moon, but instead going to lunar orbit and establishing an orbital station that could be used to ferry people to and from the moon using lightweight vehicles well suited to the task.

The idea of Orion has always been scary because it suggests that we need to be able to reach our chosen destinations directly from the Earth's surface. Building a rocket that can go directly to the moon always sounded stupid to me. A space station in Earth orbit and another in lunar orbit could make this far more efficient. Then there's Mars and beyond. Just the cost of launching directly from Earth to the moon is outrageous. The massive amount of fuel required is unacceptable.

How about the additional benefit of being able to keep rescue vehicles at the ready at the stations we build? That would pave a path to truly limiting the dangers of being stranded, whereas today every rescue mission would have to be launched from the Earth's surface. I don't think we'll see personal spaceships like those in TV shows like Firefly any time real soon, but I do think we can see vessels making regular trips from Earth to the moon to Mars and back within 20 years.

Things are really improving and it's certainly a good thing to have another player in the game to reach space. But a massive vessel like Orion just seems like the wrong way to do it :(


Feds spank Asus with 20-year audit probe for router security blunder

CheesyTheClown

Asus is Asus... it's not Cisco, Aruba, Meru, etc...

Asus makes cheap home wireless routers that you plug in, turn on, and you're done. If you're concerned about security, it really doesn't matter what brand you use; if you don't properly monitor and configure patches and updates, you're screwed.

These days, the best solution would be a Windows-based router with automatic updates turned on. At least then, every now and then, there's a chance a security patch will come in. So far as I know, Asus isn't doing weekly or monthly updates of their firmware. They aren't doing daily updates of their firewall rules. They aren't running a security management center or even contracting someone else to. They simply sell a wireless router and occasionally offer a feature patch which next to nobody installs.

There's just no point to this. So far as I know, Asus has never claimed to make a secure device. I was pretty sure their selling point was "any idiot can plug one in".


German mayor's browser tabs catch him with trousers down

CheesyTheClown

Is there a website for this?

How about a web site which offers convincing lies for people who get busted like this? I've always been a fan of "I clicked some strange thing a while back and ever since, my browser keeps automatically opening these links. I don't even notice them anymore".


Feds look left and right for support – and see everyone backing Apple

CheesyTheClown

Re: That will only access certain data

hmm... interesting. I went through the Apple security document as you mentioned. In addition, I read the system programmer's manual for the ARM TrustZone/SecureCore.

First, from my experience hacking on the platform, including the secure core doesn't mean using it effectively. There's a huge amount that's out in the open, since it was probably too difficult to have security and usability in the same device. It would kinda suck if every time you received a push message or e-mail, you had to type a password to let the software act on it. So to speak, while the lock itself is quite secure, they leave the keys in the door most of the time.

Of course, I depended a great deal on unlocking a phone where the keypad was locked but the keys were already provided. I've speculated a great deal about dealing with a phone that has been power cycled. I don't have any more spare phones right now, but I'm pretty sure I have some good plans for getting such a phone open anyway.

The keys used for encryption are too long to type and are fixed length, so they have to be stored in a locker somewhere. The locker may be the secure core, but that would require additional non-volatile memory for key storage as part of the secure core, and I don't see that being present. This means the keys themselves have to be stored somewhere out in the open where anyone can play with them. I'm quite sure those keys are also encrypted, but using a 4- or 6-digit PIN to release them shouldn't be overly challenging, as the algorithm must be present in the OS code... single-stepping that to identify the cipher generally isn't too bad. IDA Pro would do most of the work for you anyway.

If the phone is off and that doesn't work, there are more than a few other goodies in there.

To begin with, it looks like the system is designed to use relatively run-of-the-mill symmetric block ciphers. There should be a few hundred thousand blocks with known signatures at the block headers that can be used to identify the counters. If you're lucky enough to have a bunch of files with highly predictable and relatively long headers, like JPEGs or PNGs, then factoring the encryption key should be pretty easy. AES keys, for example, can usually be factored down to 40 or 50 bits of effective strength when you have a large number of known headers from files. This is why things like PGP exist. Using key-exchange asymmetric ciphers is always better, but even they get really weak when you have enough known/predictable data to work on. This is why most secure protocols don't encrypt headers, or if they do, they use something special for the headers.
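To illustrate just the predictable-header part of that idea (not the key-factoring claim), here's a toy known-plaintext check: given a candidate key, decrypt the first block and see whether a JPEG magic number falls out. It uses pycryptodome, and the cipher mode and data layout are invented purely for illustration:

    # Toy known-plaintext verifier: does this candidate key decrypt the first
    # block of a file to a JPEG header? Mode and layout invented for illustration.
    from Crypto.Cipher import AES  # pycryptodome

    JPEG_MAGIC = b"\xff\xd8\xff"   # predictable file header = known plaintext

    def key_looks_right(candidate_key: bytes, first_block: bytes) -> bool:
        cipher = AES.new(candidate_key, AES.MODE_ECB)  # placeholder mode choice
        return cipher.decrypt(first_block[:16]).startswith(JPEG_MAGIC)

    # A search over candidate keys would call key_looks_right() on each one;
    # the predictable header is what turns "random garbage?" into a yes/no test.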

I've fallen asleep three times while writing this, so I'm hardly at the top of my game. But honestly, I'm tempted to go buy a bunch of iPhone 5s's today and see how many I'd have to fry before I could reliably recover the data. Too bad I have a business trip this week and can't spend evenings at the local maker space.

CheesyTheClown

Re: That will only access certain data

You're right, but that depends on using apps that properly implement security. Very few people read that document, and as a result, most of the data they are looking for is in the wide open. The same code which allows e-mail messages to be received while the phone is locked is exploitable for mail store database access.

What is possible and what generally actually happens are two different things. Unless the criminals really went all out to make sure they only used super-secure apps for storing this information, and also paid particularly close attention to following all the security recommendations, most of the data is easily accessible.

Also, as mentioned earlier, recovering enough information to generate hash collisions should solve the rest of the problems. Fingerprints and PIN codes are not a huge challenge.

CheesyTheClown

FBI mishandled evidence again

Here's the deal,

1) Confiscate the telephone while it's still powered on and the PIN code has been used at least once. In this state, all data can be decrypted through the normal operating system read and write commands. Also, simply dropping or tossing the phone should leave it in a stable state for reading this data.

2) Attach an external charger to the phone immediately and leave it powered up the entire time until it has reached the forensics lab for data extraction.

3) Open the phone carefully, avoiding removing the power and battery cables at the same time.

4) Ensure the power cord is securely inserted

5) Remove the iPhone battery and the main screws supporting the system board. It is OK to remove the screen as well; this won't stop the phone from operating.

6) Reattach the battery (better yet, attach a battery that you're 100000% sure is charged). Hot-glue the battery connector in place to make sure it doesn't come loose.

7) Disconnect the power cord from the base of the phone

8) Lift the system board from the phone (gently, of course)

9) Depending on the model, there are a minimum of 5 individual exposed vias or test points for each of the 4 relevant JTAG pins on the Apple CPUs.

10) Using the ARM ICE debuggers, connect to the CPU and switch to single step mode.

11) Door is open... from here you can

a) Extract the hash for the PIN code to unlock the device properly. Run the hash through John the Ripper to identify a 4-digit collision (see the sketch after these lists).

b) Extract the fingerprint points used for user verification so they can be fed into the device electronically to unlock sensitive data, including bank accounts.

c) Image the flash after it's been decrypted by calling block access functions through the OS, decoding the data in the process to get an unencrypted copy (takes as much as 3-4 days due to JTAG performance limitations)

d) Upload a new program to perform the same copy but bypassing app restrictions and perform it over wireless... takes about an hour.

e) Call system file I/O functions to read individual files... surprisingly difficult given the object-oriented nature of the iOS file store.

There are endless methods for extracting data from an iPhone.
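For step 11a, the 4-digit search space is tiny; here's a minimal sketch of finding a collision once you have the hash. I'm assuming a plain unsalted SHA-256 digest purely for illustration; a real device uses a slower, hardware-entangled derivation:

    # Minimal 4-digit PIN brute force against an extracted digest.
    # Assumes unsalted SHA-256 purely for illustration.
    import hashlib

    def crack_pin(target_digest: bytes) -> str:
        for n in range(10_000):
            pin = f"{n:04d}"
            if hashlib.sha256(pin.encode()).digest() == target_digest:
                return pin
        return ""

    # Example: recover "0420" from its digest
    target = hashlib.sha256(b"0420").digest()
    print(crack_pin(target))  # -> 0420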

Alternatives for powered-off devices:

1) Image the flash via flash JTAG pins (unfortunately slow but effective).

2) Remove and copy all NVRAM (haven't done this yet... so I would test on disposable devices first)

3) Solder an FPGA in place of the NVRAM devices and use Altera/Xilinx logic probe functions to capture and decode write operations to the NVRAM

4) Follow steps similar to hijacking the kernel via the ARM debugger, call the phone's PIN-unlock functions and brute force, resetting the phone after every 3 tries.

5) Recopy the flash and compare the input (original and changed) as well as the NVRAM changes. Change the modified blocks back to the original values.

6) Repeat step 4 and reset only changed blocks after 3 tries. Brute force the 4 digit PIN.

I can probably come up with 20 other ways if I needed to. The first crack on iPhone 6S Plus I did took 23 hours and 5 Red Bulls. I wasn't really even trying very hard... probably spent 2/3 of the time reading and watching TV shows.

I really just can't see why this is such a big deal. If the phone can decrypt the data to begin with, it's going to be relatively simple to get it back. It doesn't even require someone particularly educated; I'm pretty sure more than half the guys from my high school electronics class back in 1989 could do this.

Maybe the FBI (and others) should spend less time screwing around with court orders, quit listening to idiots in suits and instead, swing by a local maker space and look for a guy with Aspergers who really likes puzzles.


Hey virtual SANs – say hello to a virtual filer

CheesyTheClown

How did you get that?

I couldn't get past the blah blah blah

Honestly, I prefer storage to be all REST API driven so I can use System Center Orchestrator or UCS Director to do all the configuration and such. After all, who the hell wants to manage yet another stinking system through another stinking system?

On the other hand, it seems that Windows Storage Server 2016 has more or less all the crap I need for large-scale deployments now. It looks like if you use StarWind on top of that, you can also have a pretty decent iSCSI solution for booting and a scale-out NFS solution for legacy systems based on VMware.
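As a sketch of what REST-driven storage buys you: an orchestrator runbook step boils down to a single HTTP call. The endpoint, payload fields and token below are entirely hypothetical, not any vendor's actual API:

    # Hypothetical REST call an orchestrator step might make to carve out a volume.
    # The endpoint, payload fields and token are invented for illustration.
    import requests

    API = "https://storage.example.local/api/v1"
    HEADERS = {"Authorization": "Bearer <token>"}

    def create_volume(name: str, size_gb: int, pool: str) -> dict:
        resp = requests.post(f"{API}/volumes",
                             json={"name": name, "sizeGB": size_gb, "pool": pool},
                             headers=HEADERS)
        resp.raise_for_status()
        return resp.json()

    # System Center Orchestrator or UCS Director would wrap exactly this kind
    # of call in a runbook step instead of clicking through yet another GUI.
    print(create_volume("vm-datastore-01", 512, "ssd-pool"))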


Patch ASAP: Tons of Linux apps can be hijacked by evil DNS servers, man-in-the-middle miscreants

CheesyTheClown

Re: I'll bet...

Oh... ummm I forgot...

RedHat generates absolutely massive amounts of "it kinda works, it must be done" code.

Google does pretty well when they're focused. I'm actually often amazed at how much good code comes from them. That said, there's a good bit of slop as well. But would you seriously believe you can employ that many programmers and have nothing but good code?

If RedHat were out of the game, there would be far less new bad code in Linux.... that said... there would be far fewer bug fixes as well. So I'm not sure if it would be a good or a bad thing.

I'm hoping there will be a new small and simple OS which could make a run for being the new "Let's try it" platform.

CheesyTheClown

Re: I'll bet...

I'll bite... I know I'm stupid for doing so, but I'll bite.

Look at the description of the bug. This is something which should never be able to happen in a proper code review environment. So far as I know, there's no company or operating system which has a large number of highly skilled developers actively watching their repositories for this kind of stupid.

Linux, FreeBSD, Windows and Mac OS X all suffer differing levels of stupid. This particular flavor of stupid is, as the troll suggests, actually a special kind of Linux stupid. Let me explain.

While the Linux kernel developers and to some extent the glibc developers have embraced within some constraints the use of data structures, their means of embracing them has always been weird and highly inconvenient.

See, where object-oriented languages make implementing data structures a breeze, and can therefore centralize major fixes to where the failure exists, structured languages like C tend to rely on some interesting creative tricks to accomplish the same. The GNOME community, for example, implemented GObject, the most obscenely inconvenient mechanism ever for reproducing the entire C++ language in C... well, next to Microsoft's COM. They go so far as to manually implement vtables, which in a single-inheritance environment doesn't cause much harm, but with multiple inheritance can be a disaster. On top of that, they implemented some of the weirdest RTTI methods I've ever seen.

glibc doesn't use GObject. Instead it tends to borrow the Linux kernel kind of stupid, which makes weird use of over-inflated monster structures that are REALLY REALLY REALLY efficient but bonkers in their complexity. I've seen so many poor uses of rbtree.h and rbtree.c that I shake in my boots whenever a header file includes rbtree.h. I also know that all it would take is one bad line of code in rbtree.c to completely destroy the security of the entire Linux kernel... and it has barely any unit tests at all.

Well... at least if the glibc guys had used a Linux-style data structure, this wouldn't be a problem... but they didn't. Instead they decided it was too much work to use one of the simulated classes, reinvented the wheel (with 4 sides on it), made an array and chose to manage it themselves. This means any security holes or bugs found in that code stay localized. So, while this class of bug has been fixed in 5634543 different places in the kernel and glibc already, it was probably too much work to fix here, so they just left it there. Funny thing is, I probably saw it a long time ago (1999) when I was writing a DNS resolver and peeking at glibc to see how it's done.

Let's be honest, though... all operating systems have these problems. Only Lintards and Wintards and so forth are stupid enough to think it's unique to the other guy. If you're actually smarter than an amoeba, you realize that all code is insecure, and that Windows and Linux are both pretty decent for what they do but should never be trusted for security. That said, neither should any other code.

I regularly teach how to hack through Checkpoint, Cisco, Palo Alto, etc... firewalls. I show that finding a nifty problem in a kernel driver or better yet in the syscall interface of the kernel can give you a golden ticket without the firewall software ever seeing the malicious code. I've got a few in my toolbox at the moment for Linux if I need them. Darwin is a goldmine of them. Windows is a little trickier since you have to actually dig a bit because it's closed. But, pretty much all operating systems are written like shit.

If you want a personal opinion on which I think is cleanest at the moment, I actually have to give Microsoft the crown. Ever since the introduction of the Windows 8 kernel, it's been such a massive improvement that I like them best. They have some of the best coding practices at the moment and they seem to be taking process really seriously. There were a few shortcomings in retaining legacy driver support in Windows 10 which bit them, but at this time, they're quite good. Mac is pretty close to the bottom. Apple releases more half-finished code than even GNU does these days. Their unit testing is pathetic, and I expect there are massive amounts of "fixed it... broke it again... fixed it... broke it again" in the Darwin kernel.

LLVM is maybe the most important project ever in open source, but its quality has been decreasing far too rapidly. The errors and warnings generated by the compiler are generally terrible for identifying root cause or even general error location. As such, the quality of the Mac kernel is only as high as it is because of duct tape and crazy glue... possibly some bubble gum as well.


It's 2015 and VMware tools break VMs if you open two browser tabs

CheesyTheClown

VMware? Really?

So, I'm using trial versions of vSphere this month to spin up the Azure cloud which I'll migrate to afterwards. I don't deploy VMware in production anymore because it's just too damn unreliable. It's like they've spread themselves too thin and don't even know which product does what, or even what it's called, anymore.

With KVM and Hyper-V both being "feature complete", why would anyone depend on VMware anymore... except for hack-and-slash vSphere Client crap IT work?


Marvell, Carnegie Mellon agree to slash disk drive chip patent check in half – to a mere $750m

CheesyTheClown

Wow!!!!

Just read the patents... I almost never consider patents to be novel and non-obvious, but I honestly wish I'd seen these 8 years ago and licensed the tech... it could have saved me 500+ hours of development. Why haven't I seen this in any mainstream signal processing books?

It's just so rare the patent system works... I'm not sure what this means for the world... I wonder if the copyright system might work next... then hell will truly freeze over.

Maybe Kanye West can ask CMU for help.


Flash array biz Tegile swings axe on staff

CheesyTheClown

SSD and Masturbators?

Isn't Tegile the Japanese company who claims to make male masturbation devices that are so good they decreased the Japanese population?


Lenovo: China biz down, PC and mobile down

CheesyTheClown

How could Windows 10 help?

So far as I can tell, each new release of Windows improves efficiency and decreases the system footprint and hardware requirements. I would imagine that Lenovo needs to make money from being innovative or trying to be sexy.

The market is completely saturated today. Faster CPU, RAM, SSD, etc... don't make a difference. When everything is already faster than any of us can complain about, making it faster doesn't make us run out to buy more. Also, computers now last 5 or more years before becoming out of style, and nearly 10 before they're too slow. I still use a 2011 Mac Mini for most of my iPhone development, and all I did was add an SSD to make it fast enough. My desktop PC is a Core i7-2600 from years ago; it's still REALLY fast. We don't need better PCs.

Thinner, smaller, faster isn't really practical anymore. My laptop is a Surface Book i7 with 16GB of RAM and a 512GB SSD. It's bigger and bulkier than the Surface Pro 3 I used for a year or two, but it was worth it for the bigger screen and higher resolution. I wouldn't pay for another laptop just for higher resolution, though.

Microsoft and Apple are selling computers because they're cool... Lenovo doesn't go for that. They should buy VAIO and fix that.


Xitore slings 4 million IOPS box under its arm, strides out into the light

CheesyTheClown

Kinda cool

It seems pretty nice, but is there a market for it? High-priced flash solutions are optimal for non-pooled storage, where it's important to keep the storage in as few devices as possible. These performance numbers are extremely easy to reach and exceed in scale-out environments, at a fraction of the cost and with higher guaranteed uptimes, through a combination of sharding and distribution.

SAN solutions are on their way out. Filers aren't dependent on the same centralized performance as Fibre Channel and iSCSI systems are. Even FC and iSCSI are becoming a little more intelligent these days (not enough to be worth wasting money on them). Scale-out and multipathing are the keys to performance and reliability.

There is nothing terribly interesting about their technology other than performance, either. I don't see any scalability information, and there's no information regarding deduplication acceleration to show the software/hardware can actually keep up.

I'd really like to evaluate this technology and see whether there's more to it; I would love to eat my words. But I managed to build a far faster system recently, and I'm quite sure I paid far less.


Bill for half a billion quid lands on Apple's desk in Facetime patent scrap

CheesyTheClown

Obvious and prior art

Agile network protocol for secure communications using secure domain names

US 7921211 B2

As a former protocol engineer for Tandberg AS (now Cisco) who implemented time synchronization and encryption as well as NAT/firewall traversal code, I would dispute this patent as:

a) unoriginal... the described invention is not an invention so much as a bundle of other people's inventions combined in an obvious manner. The description of the invention even spells that out.

b) impractical... even in a QoS-enabled environment with cut-through low-latency switches, the application of this combination is, as Sarah Palin would say, non-sensical. I highly doubt Apple implemented this "invention" as suggested.

c) obvious... DNS proxying has been a hiding technique we all used for decades before this patent.

d) unoriginal again... obfuscation by "shaking the box" has been around since the 70's... even in film references.

US 7418504 B2

I haven't read it all the way through, since it's so silly (almost a duplicate of the other) that I'm amazed it was granted. I feel they chose their specific wording so the patent reviewer wouldn't understand it well enough to say no to it. From the skimming I've done so far, I could probably find 1000 holes in this one without much effort.

These patents feel like they were written because the author thought they were so smart and no one else would think of these things. In reality, they appear pretty simple and I'm almost 100% convinced that I could easily dispute most of the claims made. Of course, I would be shocked if there aren't earlier patents which would be applicable as prior art but would open things up for another case. People patent this stuff all the time. They pick some REALLY obvious stuff, write it down somewhat cryptically, pay a fee and then sue.

I don't care much for Apple, but I dislike patent trolls more.


Cisco slings speedier SAN switches

CheesyTheClown

I'm wearing a Cisco shirt... but

Fibre Channel is a SCSI-based protocol that was at one time fantastic. It was the only option we had because virtualization solutions were just developing, and bare-metal servers were highly dependent on booting from block-based storage. FC was the absolute ultimate transitional solution for centralizing storage. Since that time, far more advanced protocols have been implemented in VMware, Hyper-V and of course Linux-based solutions like KVM and Xen.

When you add it all together, FC is now antiquated, slow and carries an extremely high cost overhead. This press release is proof that companies are spending millions more than needed on hardware for absolutely no apparent reason.

40GbE for FCoE is also the dumbest idea to hit planet earth in decades. Even with SCSI multipathing (an actually dangerous hack to the SCSI spec), since 40GbE is made up of 4x10GbE lanes in what amounts to a port-channel, its efficiency is so incredibly low that it is just an absolutely massive waste of money.

Instead of pissing away further money on this old and useless tech, companies would be far better off building up a new FlexPod based on SD-card booting, auto-deployed host-profile-based stateless configurations and, of course, NFS or SMB3 storage networking, which both natively multipath (SMB3 via Multichannel over multiple TCP or RDMA connections, NFS via pNFS/session trunking) and scale to terabits per second instead of tens of gigabits. In addition, they don't require additional infrastructure, and overpriced (and insanely inefficient) SAN storage solutions like EMC or NetApp become optional.
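If you want to see that multipathing for yourself, any Windows box already talking SMB3 will show it with a couple of stock cmdlets (nothing vendor-specific here):

  # Each row is a separate TCP or RDMA connection SMB3 spreads I/O across
  Get-SmbMultichannelConnection
  # And the interfaces the server side is advertising for it
  Get-SmbServerNetworkInterface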

In all my research (considerable, as it's about 30 hours a week of my job), FC and FCoE yield approximately 1/80th the performance and gigabytes per dollar compared to NFS and SMBv3. It comes closer to 1/200th when considering the additional overhead of operations.

That said, the new Nexus switches sound amazing (without the 16Gb FC) in the sense that they have reliable Ethernet (the cornerstone of FCoE), which can be effectively used to deliver Infiniband-grade RDMA. That improves SMBv3 performance considerably (widening the gap further) and provides amazing (3-4x) performance gains for virtual machine migration on modern hypervisors like Hyper-V and KVM.
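For the curious, checking whether your adapters will actually do the RDMA (SMB Direct) part is also trivial from PowerShell:

  # Lists RDMA capability and state per NIC
  Get-NetAdapterRdma
  # Shows which client interfaces SMB can use RDMA on
  Get-SmbClientNetworkInterface | Where-Object RdmaCapable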

All that being said, for over 40 virtualization hosts, Cisco's ACI combined with Hyper-V or OpenStack can cut data center costs another 50% over either of these solutions. With VMware, it's closer to about 10%, due to VMware's nearly 300% higher cost of implementation and operation compared to the other two.

1
2

NetApp hits back at Wikibon in cluster fluster bunfight

CheesyTheClown

Re: Some extra detail

Dimitris,

It's great to see someone from a vendor here, thanks for being part of the discussion... it makes this a little more human for many of us.

I have recently moved some customers away from NetApp, and I more than likely won't be using NetApp in the future either. This isn't because you have a bad product; it's an excellent SAN product. The problem is that it is in fact SAN. As for NAS, I simply don't see any logical reason to depend on SAN products when NAS has always been handled better by servers than by storage devices. My customers range from single-shelf-unit shops through US government three-letter organizations with purchasing budgets of $500 million per project. I just re-architected one of those deployments to use SAN for boot drives only, and that part stayed NetApp, but we moved from what was likely to be a 4 petabyte SAN to a 16 terabyte SAN plus a 4 petabyte server cluster instead.

Here's the deal. 7-mode was amazing; C-mode, not so much. NetApp's documentation is far too messy, and C-mode operations are far too complex for what customers need. This is 2016, and the idea of having a team of full-time storage admins seems utterly ridiculous. In my experience, NetApp C-mode is utterly unusable without advanced training, which is very expensive and generally not particularly good.

C-mode documentation is pretty awful. The web GUI is unusable in most cases, and NetApp support keeps saying "Oh... I only use the command line...". The PowerShell API for C-mode works... barely... which means it's not manageable through either Azure Pack or System Center Orchestrator. The network management is extremely monolithic, and although there's support for things like virtual interfaces, they don't scale very well past a certain point.

Redundancy is a problem too. NetApp's drive prices are so outrageously high that I often wonder why you don't just give away the racks and controllers and charge a fortune for the disks. When installing a drive requires installing four, since you need redundancy within a single array as well as redundancy in the other data center, using disks that generally cost twice the industry average isn't going to work.

Due to licensing issues with OnTap, the resale value of NetApp devices is virtually zero. You've locked users so tightly into your cloud management system that as a company expands, it knows it'll simply have to eat the loss on your SAN, because it can't realistically expect to resell it. Therefore, once you buy a NetApp, you scale it as best you can until you throw it away and buy a bigger one, with almost complete loss of the initial investment.

Performance is a real problem as well. C-mode has a hard limit of 8 controllers, which isn't too bad, but the expandability of each controller is really limited; there isn't much room for upgrading RAM. The big solution I just specced out contained 8 servers per data center (across 5 data centers). Each server internally contains 52 8TB drives. In addition, they contain 6TB of PCIe SSD and four two-port 40-gig Ethernet adapters for a theoretical 240Gb/sec of bandwidth per server, scaled out. We have a great deal of room to scale up as well: each server contains 6TB of RAM too. The entire solution runs on Windows Storage Server and, by the time we deploy, will have a fully documented RESTful API as well as extremely extensive management tools. Using scale-out in the lab, we're keeping 240Gb/sec per server pretty solidly saturated at all times.

On the small end, using systems like Dell's cloud servers or Cisco's M-series servers, we can easily have insanely high performance storage for a minor fraction of a similar solution from NetApp.

So, if we're trying to move from 7-mode to C-mode, it's not really worth the effort. We can't simply switch, because we can't risk our data; converting isn't an option, so it's simply a matter of buying a second NetApp solution and migrating progressively. Windows Storage Spaces just performs better, and it's also well understood by the server guys, which makes it much less expensive since you don't have to train and employ an entirely separate team to manage storage. It's much easier to manage thanks to excellent documentation and integration with PowerShell and System Center.
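As a rough illustration of how little ceremony Storage Spaces needs (the pool and disk names below are placeholders I made up, not a recipe):

  # Grab every disk eligible for pooling and build a mirrored space from it
  $disks = Get-PhysicalDisk -CanPool $true
  New-StoragePool -FriendlyName 'Pool1' -StorageSubSystemFriendlyName '*Storage*' -PhysicalDisks $disks
  New-VirtualDisk -StoragePoolFriendlyName 'Pool1' -FriendlyName 'VDisk1' -ResiliencySettingName Mirror -UseMaximumSize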

We don't really need iSCSI except to boot blades and, frankly, StarWind is almost ready for the big time and DataCore has that covered pretty well. They both have reputations and experience exceeding NetApp's own.

Don't feel bad; compared to EMC, you're doing great... but scale-out file servers on commodity hardware are the long-term solution. I'm sure you're not going anywhere though... people will keep buying your stuff for years because "that's the way we always did it", but for new data centers and new deployments, I don't see the value.

4
1

Hidden password-stealing malware lurking in your GPU card? Intel Security thinks not

CheesyTheClown

Re: The CPU isn't the only bus master

Just to nitpick... the wording you're using isn't entirely reliable. In all modern architectures, the PCIe bus is directly connected to the CPU, which also hosts the MMU. The line between CPU and MMU has been blurred a great deal, and as a result it would be highly inaccurate to suggest the CPU is not the bus master anymore. In a modern PC platform, I can't imagine any data which passes card-to-card or device-to-memory that isn't passed through the CPU... chip.

It's probably important that we find wording, for the future, which differentiates between the logical CPU and the physical package.

0
0
CheesyTheClown

Don't forget the capacitors.

It has been proven beyond any plausible doubt that :

a) All viruses are at least temporarily stored in capacitors

b) No virus company has taken this threat seriously

Both older electrolytic and more modern tantalum capacitors have been used for reliable short term storage of nearly every virus during their time in short term memory systems.

In addition, since capacitors are highly sensitive to audible sounds (consider what happens when you hold a telephone close to an amplifier which also contains these capacitors), it is obvious that there is an endless number of methods which can be used to disrupt data flow or even act as triggers. Consider a device no more complicated than a walkman radio from the 1980's feeding electromagnetic pulses into the air and with the right microphone being used to record back the signal to be deciphered later.

I believe companies such as Symantec, McAfee, ... all the security experts should contact companies like FoxConn and Asus and make it clear that by including capacitors on motherboards, they are leaving nearly every computer on the planet wide open to any virus which could exploit such method.

As a consumer, you should NEVER purchase ANY device containing capacitors as they are such a high security risk and viruses stored in capacitors are 100% undetectable.

0
0

Hortonworks shares plunge 22% after secondary IPO news

CheesyTheClown

Hadoop still relevant?

Are there any Hadoop scale problems that can't be solved cheaper and easier these days?

0
0

NYSE fed up of Violin's bum notes, threatens stock market ejection

CheesyTheClown

vSANs and 10GigE?

vSANs have been around for a decade, and 10GigE equally long. Most servers these days ship with 2-16 40GbE ports... anything less is called a PC. As for vSANs, let's be fair: they're SCSI-based block storage, which on the very best of days might scale to 25Gb/sec, because MPIO is an afterthought in SCSI that only kinda works. Therefore, unless you can get a single wavelength with greater capacity, you're going to have a bottleneck. SMBv3 or NFS (on anything other than VMware) scale beautifully past a single link and can easily reach terabits per second.

So... then we have Violin... I came to the comments to ask what it is they have that makes them special. SSD isn't really particularly interesting anymore. I just built a fairly cheap data center storage solution for my personal lab. It contains 8 servers with 12 cores and 96 gigs of RAM each... for storage, each server has a PCIe riser that holds 4x M.2 PCIe cards, each with an average throughput of about 2 GB/sec. I'm only using little 256-gig modules for a yield of 256GB of raw storage per server. By employing Windows Server 2012 R2 with Replica, Dedup, etc... as well as StarWind vSAN to boot older systems, I can get a sustained transfer rate measured in hundreds of gigabits per second while surviving two full node outages, and it functions beautifully until 4 nodes fail. Each system has an additional 4x 6TB hard drives for near-line storage, a yield of approximately 6TB per server of additional raw capacity.
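The Dedup part, for reference, is about this much work (the volume letter is obviously mine, and the HyperV usage type is the VDI-oriented one in 2012 R2):

  # Turn on dedup for a volume holding VM storage and kick off a first pass
  Enable-DedupVolume -Volume 'D:' -UsageType HyperV
  Start-DedupJob -Volume 'D:' -Type Optimization
  Get-DedupStatus -Volume 'D:'   # SavedSpace shows what you got back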

The total cost of this solution is measured in a few thousand pounds per node, and I didn't even build it for price; I built it for ease of reliability, management, scalability and performance. Compared to NetApp, EMC, Hitachi and a few others (I'd mention HP, but why bother), performance was magnitudes faster and scalability massively higher. When I go into production, I'll use decent servers, not these junkers I picked up on eBay for 500 pounds each and added storage and networking to.

So... where the heck does Violin fit? They seem to have an SDS option, but they provide so little information on it that it's as if they're a company hell-bent on being the experts in a dying market.

1
1

Eight-billion-dollar Irish tax bill looms over Apple

CheesyTheClown

Re: Apple would love Siesta

Apologies... I meant prime minister. The Italian president is more of an internal lawyer as opposed to the prime minister who should handle international relations and business and such.

1
0
CheesyTheClown

Apple would love Siesta

There is no point manufacturing in countries with siesta... that means Spain, Italy and Greece. Madrid and Milan are ok, but I wouldn't ever consider anywhere else. Athens is precisely the wrong place. Oh... and Italy has that little problem of a new president every few hours.

3
2
CheesyTheClown

I am a socialist... but

Multinationals provide jobs by the tens of thousands, and their stability allows their employees to pay more taxes. In addition, a single 10-story office owned by one of these companies holds a thousand employees and requires 50-100 more part-time workers (lawn, electric, windows, etc.). It provides a booming economy for small business owners nearby... creating probably 1000 more full-time jobs via the ripple effect. In addition, the real estate market surrounding the area booms.

Or you can give that money to the government to manage... to make it fair to the smaller company... and then, let those smaller companies completely foot the burden of supporting a staggering economy.

Need proof? Visit Vilnius and see how just having DanskeBank, Swebank, and DnB owning 3 buildings is causing a massive ripple effect in the area and attracting more big and small business.

Then visit any European city without those multinationals.

6
3

You've heard of Rollercoaster Tycoon – but we can't wait for Server Tycoon

CheesyTheClown

Re: Microsoft and UNIX admins coding?

I would never hire anyone with 7-10 years' experience as an engineer who would be willing to work for less. I generally haven't been approached for less than that in a few years, except by these silly recruiters in the UK who spam everyone on LinkedIn with things like "Looking for a 4xCCIE, 3xMCSE to be in charge of 6 computers at a law firm in a two-horse town... great pay and benefits... up to 40,000 pounds a year plus a carton of cigarettes".

Fact is, these days I've been working pretty hard at automating IT positions, as it helps people earning $100,000+ a year keep their jobs instead of the work being outsourced to India for lack of manpower at that price. The goal is fewer people but better people.

If it makes you feel better, we can say $10,000 a month for 3 months each, since it's Italy.

0
2
CheesyTheClown

Microsoft and UNIX admins coding?

OH HELL NO!

Just.... no.

They may have a concept, but IT guys are IT guys because they lack structure and always take the easy way out.... which means they avoid reading and research and just hack until they give up.

The project description itself reads like an IT plan... three guys, IT "experts", will hack together an MMO for about $30,000 each, which, if they were actually any good at IT, is two months' salary... and they'll publish it... and maintain it.

Anyone see a problem?

2
8

Intel Skylake delays, Win10 and stock glut blamed for Q4 PC sales shrinkage

CheesyTheClown

Features not performance

I just bought a Surface Book... I would have bought the Surface Pro 4, but the Surface Book looked so much cooler. I was leaving my old job and didn't know if I'd manage to keep my Surface Pro 3, so I ordered a Surface Book for $3200 after all the taxes and shipping, and I absolutely love it.

It's basically just a laptop and I can virtualize a small data center on it while presenting. I've had 20 virtual machines and 20 routers and switches up and running on it. This is too much machine for almost any real purpose.

These days, thinner and lighter is no longer a feature... we're thin enough and light enough. Longer battery? They're up to 12 hours now. More powerful? 99% of everyone is probably more than happy with a Core i3 and 4 gigs of RAM... maybe 8. Better graphics? Maybe... I did buy Final Fantasy XIII just to have something to test my new laptop with... then I moved on... no real point. Better screen? 3000x2000 on a 13" screen already lets me make the text smaller than I can comfortably read, and I have better eyesight than most... probably the only thing of mine that works well.

I wouldn't mind :

A better pen, I carry a Wacom Cintiq with me because the Surface Pen (and iPad Pro Stylus) suck so bad they're useless.

Better support for iPhone headset volume control buttons.

Thunderbolt would be nice, though I'm not sure why.

An SD card port that is flush with the card so it doesn't dangle.

A second screen has always been on my wish list... but I can't see this being a major selling point.

Better wireless display support

A better power brick adapter... can't really say I like the Surface Pro one and USB-C tends to suck up one of my precious USB ports requiring me to carry a hub with me.

Make the entire PC detachable as a phone... in other words, find a way to fit the Core i7, 16gigs of RAM and 512GB SSD into a phone and make the Surface Book a screen, video card, battery and keyboard attachment.

Better touch pad... this one feels too much like the Apple one and I abandoned Apple because of their absolutely awful keyboard and touchpad.

0
3

DataCore’s benchmarks for SANsymphony-V hit a record high note

CheesyTheClown

Too little too late

I've been testing DataCore and NetApp and quite a few others for storage. The first and most important thing I learned is that the best way to decrease storage costs is to just stop using SCSI, which means there's a real need to get away from VMware (even as I struggle to get a VMware data center up right now).

SCSI IS NOT a network protocol, and it's really, really, really bad as one. There are 9-12 full protocol decodes/encodes and translations between an IO request from a virtual machine and the storage, which adds a ridiculous amount of latency. There is also an insane amount of overhead in block-based SCSI for handling small reads/writes, which are a fact of life since developers generally use the language-provided streaming classes/functions for file I/O.

So, that brings us to NFS and SMB. NFS is OK-ish... it's a protocol with far too much legacy and way too much gunk in it, as it tries to be everything to everyone. All these years later, there's still no standard for handling operations like VAAI NAS as part of NFS, which is just plain silly since NFS is an RPC-based protocol, and things like remote procedure calls should be first nature to it. As a result, using NFS for daily virtualization on VMware is just out of the question, since those guys make it impossible for anyone unwilling to spend $5000 and sign contracts to get hold of their API for VAAI NAS, which is just stupid. So, for VAAI NAS with Linux storage servers, I had to install the Oracle VAAI NAS driver, override their certificates, decode their REST API and reimplement it on Node.JS to make it tolerable.

Then there's SMB v3, which is a near rewrite from the ground up aimed squarely at virtualization storage. To use it, you need Hyper-V, which won't have nested hypervisor support until the next release... something I'm personally extremely dependent on.

So, performance-wise... DataCore is SCSI, and their management system has all kinds of odd bugs and quirks and is damn near impossible to implement properly in an application-centric data center. There just really isn't much value in their products other than acting as an FC boot server for blades which don't like iSCSI (think HP, Cisco, etc...).

NetApp has terrible performance. Because of the overwhelming, sheer stupidity of using block protocols, the NetApp has no idea what it's filing and dies a slow, painful death in hash-calculation hell. Heaven forbid you have two frequently accessed blocks with a hash collision; that will suck up nearly the entire system. Let's talk controllers and ports: NetApp controllers and ports cost so much there's no point even talking about them. Then there's the half-baked API for PowerShell, the barely functional REST API, and the disastrous support for System Center and/or vCloud Orchestrator. Add the OnTap web GUI, which is so bad there's no point even trying to run it... which generally you can't anyway, because their installer can't even set up the proper icon, and it's generally blocked by the JVM.

I have a nifty saying about NetApp and DataCore... if I wrote code that bad, I'd be unemployed.

These days, there are a lot of options for storage... too bad most of them aren't that good. I'm moving almost entirely to Windows Storage Server with SoFS and Replica, because I'm able to get a fairly stable 2-3 MIOPS per storage cluster, and I've been building that on $500 used servers with an additional 8 ten-gig ports, consumer SSD drives and NAS drives.
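The SoFS build-out itself is refreshingly boring... roughly this shape, with every name below being a placeholder of mine rather than a recipe:

  # Cluster the boxes, add the Scale-Out File Server role, publish a CA share
  New-Cluster -Name 'StorCl1' -Node 'node1','node2','node3','node4'
  Add-ClusterScaleOutFileServerRole -Name 'SoFS1'
  New-SmbShare -Name 'VMs' -Path 'C:\ClusterStorage\Volume1\VMs' `
      -ContinuouslyAvailable $true -FullAccess 'DOMAIN\HyperV-Hosts'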

1
3

Trustworthy x86 laptops? There is a way, says system-level security ace

CheesyTheClown

Lots of whining, no real solutions

Let's start with the external device for storing all state... the answer is simply no. You're coming at this from completely the wrong angle. Your heart is in the right place, but it's still a huge, resounding no.

Let's consider yet another stinking external device. To make it practical, it would have to be something small enough to meaningfully be carried with your "sexy laptop" or tablet. My computers range from 4" pocket PCs through 13" Surface Book PCs, which means any external device would need to be the size of an SD card or MicroSD card... and I lose about one of those a month unless I leave them permanently docked inside the PC, at which point they're no longer external state.

Hacking the computer externally will simply happen. It always has and always will. So long as the networking stack of each operating system is constantly changing and growing, and as long as there are 10-20 million new lines of code running on a system every 18 months, there will be security holes, and the ones you're talking about aren't even the nice low-hanging fruit that hackers love. You're talking about hacks which require real work. I spent 23 hours decrypting an iPhone 6S that was much lower-hanging fruit than what you're talking about, and to do it, I had to destroy 4 of them, one by placing it in the oven to get the chips off.

What you really want is some more reliable method of protecting users....

1) We need something like ME. It should be universal; being open would be a bonus, but universal is the baseline. It will be a major hacking target for 15 years... it'll be like Linux or OpenBSD or Windows... too complex to ever really secure.

2) We need a means of locking the system down. That means signed code on the ME processors, with the signing verified in hardware only, so every single patch and update will need to be signed by the ME vendor, i.e. Intel.

3) We need a way of wiping a system... JTAG/serial is best. Let's have a set of tools which requires simply applying power to the system, and require that every chip which holds state must be able to recognize a full wipe command and respond with progress and completion. Every device would then have a standard connector... something extremely simple, like a 0.5mm x 0.1mm x 1mm-deep slot with four wires, allowing connection to a USB dongle that can fully wipe a device. That means even smart watches can carry the connector. The benefit is that flash chips, etc... can all be connected to this bus, and all devices on it will enumerate and respond. If a device is not able to respond, the computer should be considered compromised and/or possibly dead. As part of the ME processor (for example), there should be a small memory region describing the devices which should be reachable over this bus. A toy sketch of that enumerate/wipe/verify loop follows below.
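Something like this, in PowerShell-flavoured pseudocode... every command here is pure fantasy, since the bus doesn't exist; it's only meant to show the enumerate/wipe/verify contract I'm describing:

  # Hypothetical wipe-bus tooling -- none of these commands exist anywhere
  $manifest = Get-WipeBusManifest            # the ME's list of stateful chips (hypothetical)
  $devices  = Invoke-WipeBusEnumerate        # what actually answers on the bus (hypothetical)
  foreach ($d in $devices) {
      Send-WipeBusCommand -Device $d -Command FullWipe     # hypothetical
      do { $p = Get-WipeBusProgress -Device $d } while ($p -lt 100)
  }
  # Anything in the manifest that never answered means compromised or dead
  if ($devices.Count -lt $manifest.Count) { Write-Warning 'Device missing: treat as compromised' }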

Let's :

a) recognize that ME is here to stay because if you take my ME/AMT away I'll cry... try managing thousands of computers for a few days without the ability to ... well manage them and you'll kill yourself. ME is a mandatory human safety system.

b) recognize there's no possible way you'll ever have "a simple FPGA" that does what you want... mainly because what you mentioned isn't an FPGA thing. It would certainly contain an FPGA, but it would really be some sort of hybrid instead.

c) recognize that you'll never, ever have a stateless machine. It's just not going to happen. Every possible way of doing this is flawed because it requires carrying more crap. Wired crap will have to be some sort of card. Wireless crap would require stored keys on both the computer and the external device to secure them... meaning the device is no longer stateless, so you might as well say screw that.

d) recognize that there simply will never be a secure computer. You're correct that ARM vs. x86 is irrelevant. For example, Qualcomm, nVidia and TI all make ARM processors where ARM is little more than "the standard part" of the CPU; the vendor-specific stuff from any of those three is extensive. Consider radio controllers for GSM and LTE, which generally don't run in software on the ARM chip, as that would require all operating systems to be power-sucking RTOSes. In reality, there's no point even pondering a secure computer.

4
1

Storage in 2016: Could abandonware claims come to the fore?

CheesyTheClown

Storage will bottom out

Storage hardware vendors are boring, expensive, messy and obsolete.

SMB/NFS file servers on Linux or Windows with scale out are faster, cheaper and more reliable.

Object storage runs as a service on top of an OS; integrating it into a SAN is just plain stupid. Programmers just need a REST API. Object storage dates to around 1984 with Apple's nasty, horrible forked file system. It currently has a similar problem... lack of support for archiving objects (think zip or StuffIt)... so it will likely not go mainstream outside of "Applications" in the OpenStack sense for a while.
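Which is to say, the programmer's entire contract with object storage looks about like this (the endpoint and bucket are invented; any S3/Swift-style store has the same shape):

  # PUT an object, GET it back -- that's the whole API surface that matters
  Invoke-RestMethod -Method Put -Uri 'http://objstore.local/bucket1/report.pdf' -InFile '.\report.pdf'
  Invoke-RestMethod -Method Get -Uri 'http://objstore.local/bucket1/report.pdf' -OutFile '.\copy.pdf'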

We're heading back to a time when storage will just be a standard part of any server OS. Hardware vendors like NetApp and Dell/EMC will be old news.

That said, the "storage experts" I train insist nothing can ever be as good as Fibre Channel... they have no idea how anything else (or even FC) works, but they know FC is faster and better.

1
4

2015 was VMware's Year of Living Dangerously

CheesyTheClown

Where's the solution?

Let's be honest... all of VMware's competitors are fighting like banshees to provide data center solutions instead of bits and pieces.

So long as vSphere Client and Web Client are the primary tools for managing VMware, and those tools constantly lag behind, there will be no success for VMware.

Hyper-V, OpenStack and Nutanix all offer end-to-end solutions. Hyper-V is no longer the bastard buggy step-child; it's a solid, tested product that, integrated with Azure Pack, is amazing to say the least. VMware is a massive pile of tools which are often incompatible with each other, and their virtual SAN solution is pretty old school.

I was a VCP and have the course requirement to cert up for version 6. I was also offered a spot to become a VCI, and in short, I just didn't bother wasting my time; I don't see a future in it. If you need my reason: the latest VCP requirements don't include any application design, or even scripting or orchestration. VCP means you can click your way around the vSphere Client. Why would I waste my time on such ancient nonsense?

5
4

Getting metal hunks into orbit used to cost a bomb. Then SpaceX's Falcon 9 landed

CheesyTheClown

FFS!!!

There are now three companies... that's right... companies... privately owned enterprises which have been establishing launch capability and rocket reuse. These companies have all managed to accomplish MASSIVE tasks. Bezos, Musk and Rutan have all made history by not only launching rockets into space but doing it with very little money compared to the old dogs (who are due to be put out to pasture).

This doesn't need to be a competition. I'd imagine there is more than enough business to keep them all going. Whether it's satellites, space travel, exploration, etc... we need all of them, and it doesn't matter who did what first. The game has barely started. We'll soon start building space stations and colonies and more. There will be some enterprising people who decide to mine the Moon or Mars and build more ships out there. What I'm sure of is that, unlike NASA, which lacked agility, the technology behind space travel will evolve rapidly now. Imagine a day when Virgin Galactic can fly you to space to be dropped off at a port where large rockets not even dreamed of yet will take you to the Moon or beyond.

Quit the competitive bullshit already and congratulate three great companies run by true visionaries.

1
1
