* Posts by CheesyTheClown

513 posts • joined 3 Jul 2009


UK.gov embraces Oracle's cloud: Pragmatism or defeatism?

CheesyTheClown

Re: Cluebat required

Doesn't matter; under the terms of national security, Oracle will be required to provide access to all data stored on systems owned and/or operated by US companies without informing the owners of the data of the request. It's not supposed to be happening yet, but sooner or later the FBI, NSA, etc. will find a legal loophole that will make it happen.

2
0

Man facing $17.5m HPE fraud case has contempt sentence cut by Court of Appeal

CheesyTheClown

Re: This used to be how commerce worked isn't it?

Sounds to me like the guy was a hell of a salesperson if he was selling servers at retail pricing. HP didn't have to cold call all the customers and probably saved millions on staff and red tape. Unless HPE actually lost money on the sales, it sounds like they screwed themselves.

0
1

Electric driverless cars could make petrol and diesel motors 'socially unacceptable'

CheesyTheClown

Re: Trolley problem.

Consider connected autonomous vehicles.

Either special utility vehicles, nearby delivery vehicles or, worst case, nearby consumer vehicles could be algorithmically redirected to a runaway vehicle, match speeds and forcefully decelerate the out-of-control vehicle.

This would be wildly dangerous with human drivers, especially if they are not properly trained for such maneuvers. But by employing computer controlled cars, it could be possible to achieve this 99 out of 100 times with little more than paint damage.

This doesn't solve a kid chasing a ball into the street without looking, but it can mitigate many issues related to system failures.

I can already picture sitting in a taxi and hearing: "Please brace yourself; this vehicle has been commandeered for an EV collision-avoidance operation. Your insurance company has been notified that the owner of the vehicle in need will cover the cost of any collision damage to this vehicle. Time to impact: 21.4 seconds. Have a nice day."

1
0

New Azure servers to pack Intel FPGAs as Microsoft ARM-lessly embraces Xeon

CheesyTheClown

Not entirely true but mostly

Altera has been hard at work on reconfigurable FPGAs, which is exciting. Consider this: calculating hashes for storage is simply faster in electronics, as fast as the gate depth will allow. Regular expressions are faster, SSL is faster, etc.

The problem is that, classically, an FPGA had to be programmed all in one go. If Microsoft has optimized the workflow of uploading firmware and allocating MMIO space to FPGAs, Altera has optimized synthesizing cores, and Intel has optimized data routing, then a web server could offload massive workloads to the FPGA. Software-defined storage can streamline dedup, compression, etc.

This made perfect sense.
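As a sketch of the offload idea, here is a tiny dispatcher in Python. The `FpgaHasher` class and its API are entirely invented for illustration; a real deployment would talk to vendor firmware over MMIO/DMA. The point is simply the shape of the pattern: try the accelerator, fall back to the CPU.

```python
import hashlib

class FpgaHasher:
    """Stand-in for a hypothetical FPGA hashing accelerator. The name,
    constructor flag and method are invented for this sketch; a real
    device would be driven through a vendor library."""
    def __init__(self, available=False):
        self.available = available

    def sha256(self, data: bytes) -> bytes:
        if not self.available:
            raise RuntimeError("no hashing bitstream loaded")
        # A real device would DMA `data` in and read the digest back over
        # MMIO; we compute on the CPU to keep the sketch runnable.
        return hashlib.sha256(data).digest()

def digest(data: bytes, fpga: FpgaHasher) -> bytes:
    """Offload to the accelerator when present, fall back to the CPU."""
    try:
        return fpga.sha256(data)
    except RuntimeError:
        return hashlib.sha256(data).digest()
```

Either path yields the same digest; only the latency differs.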

0
0

Azure Stack's debut ends the easy ride for AWS, VMware and hyperconverged boxen

CheesyTheClown

Re: A different battle

I have customers spread out across government, finance, medical and even publishing that cannot use public cloud because of safe harbour. They all want cloud: not virtual machines, but end-to-end PaaS and SaaS. But they couldn't have it because, one way or another, you're violating data safe harbour laws or simply handing your data to China or India. This is a huge thing.

3
0
CheesyTheClown

Re: Game Changer

Do you understand what this is? I'd guess not, but congrats on getting first post before even performing the slightest research.

This is a cloud platform. It is not about launching 1980s-era tech on the latest and greatest systems to manage. It's about giving developers a platform to write applications for, which can be hosted in multiple places without the interference of IT people. It's about having an app-store-type environment for delivering systems that scale from a few users to a few million users. It's about delivering a standard platform with standard installers so there is no need for building virtualized '80s-style PCs that require IT people running around like idiots to maintain.

You can keep your VMs and SANs and switches. There are a lot of us who are already coding for this platform and can't wait for this to fly. Whether you like it or not, we're going to write software for this. Your boss will buy the software and either you can run it or you can find a new job :)

1
8

Trump's CNN tantrum could delay $85bn AT&T-Time Warner merger

CheesyTheClown

Please clarify the original claim!

I read the whole article, which appears to be a tantrum about Trump. OK, fine, he's an idiot. The article makes lots of different points. What it doesn't do is connect Trump's idiocy with why the merger would be delayed. Is AT&T considering pulling out? Will there be a conflict of interest that would allow Trump to influence the parties to the merger and make them stop CNN from being mean to him and hurting his feelings?

I found the article entertaining and I certainly have no love for Trump, but...

What is the connection?

3
0

Analyst: DRAM crisis looms after screwup at Micron fab

CheesyTheClown

At least they didn't burn another one down

Don't the DRAM and HDD price hikes almost universally come from burning stuff down?

Wouldn't it be better just to say "We're cutting capacity to produce a shortage to force you to pay more"? Or is there an insurance scandal involved as well? After all, how likely is it that their insurance company has someone qualified to assess damages to a semiconductor manufacturing clean room on staff?

"Here, look in this microscope. Do you see that blue speck? They're everywhere now, and we have to throw away all our obsolete equipment and replace it with the next-generation stuff that we need for the next die reduction."

0
0

PCs will get pricier and you're gonna like it, say Gartner market shamans

CheesyTheClown

Re: Value for money?

I don't understand. Are you suggesting it's better to stick with spinning disks on laptops and desktops because you can't find a company who makes an SSD based on a stable technology?

Was there ever a moment in history when the hard drive business wasn't like that? Have you ever complained that the method of suspending magnetic heads over the drive surface differed between hard drive vendors? Have you ever complained that the boot code on a Western Digital drive wasn't the same as on a Seagate?

Are you worried that there is something magically different about a SATA cable when using an SSD as opposed to a spinning disk?

Are you worried you have to use mSATA, M.2 or PCIe? You don't.

2
0
CheesyTheClown

Re: Value for money?

Semi-decent PCs? Without SSD and without decent screen resolution?

This is 2017. Semi-decent is a Core i5, 16GB RAM, a 500GB SSD and a screen at least 2,500 pixels wide. Decent is an i7, 16GB, 1TB, 3000x2000 and Nvidia graphics.

I bought that two years ago and have absolutely no inclination to upgrade. Microsoft can get a few extra bucks from me if they sell an upgraded performance base, but at about $750 a year per employee for their PC, it's a cheap option. I'll consider a new one in two more years if they come with at least double the specs and a minimum 2GB/sec SSD read speed. Otherwise, it will be a $600-a-year PC, then $500.

Employers obviously have to consider the cost of buying dozens, hundreds or thousands of PCs, and leasing with an option to buy makes sense if the CapEx is scary. But spending less than $2,000 on a PC can be very expensive: there are very few good machines on the market that cheap.

1
0

Cisco automation code needs manual patch

CheesyTheClown

This is very common in Cisco products

Cisco is great at making products on top of Linux and Apache tools, but utterly useless at securing them. Currently dozens (maybe more) of Linux kernel exploits work against ISE, since Cisco doesn't enable configuration of RHEL updates on those boxes. As a result, they are very often very vulnerable. They are also wide open to Tomcat attack vectors, because the version of Tomcat running on ISE is ancient and unpatched.

As for root passwords... install ISE on KVM twice (or more) and mount the qcow images on a Linux box afterwards. You'll find that the root password is the same on all those images. While ssh access as root appears to be disabled, there are a few other accounts with the same issue.

I don't even want to talk about Prime. It's a disaster in this regard.

Surprisingly enough, APIC-EM seems OK for now, but that's because about 90% of the APIC-EM platform is a Linux container host called Grapevine. I think the people who worked on that were somewhat more competent (I believe they're mostly European, not the usual 50 programmers/indentured servants for the price of 1 that Cisco typically uses).

I haven't started hacking on IOS-XE... I actually don't look for these bugs. I just write a lot of code against Cisco systems, and it seems like every five minutes there's another security disaster waiting to happen. They have asked me to help resolve them, but it would require hundreds of hours of my time to file bug reports, and I can't waste work hours solving their problems for them.

Oh... if you're thinking "oh my god, I have to dump Cisco", don't bother. The only boxes I would currently trust for security are white box, and unless you know how to assemble a team of system-level software and hardware engineers (no, that really smart guy you know from college doesn't count), you should steer clear of those. The companies who use those successfully are the same ones who designed them.

Cisco, you need a bug bounty program. Even if I could make $100 for each bug I stumble on, I would invest the half hour or hour it takes to write a meaningful bug report. Then you could fix this stuff before it ends up a headline.

5
0

If 282-page doc on new NVMe drive spec is tl;dr, you're in luck

CheesyTheClown

Re: It's a standard for disk drives using Non Volatile Memory.

Intelligent protocols for multi-client access? If only there were some standard method of providing storage access in a flexible manner, with encryption support, variable-length reads and writes, prioritized queuing, random access and error handling, all in a high-performance package for non-uniform access over ranged media. Oh, and let's not forget vendor-specific and de facto standard enhancements as well as feature negotiation. Now imagine the ability to scale out, scale up, work with newer physical transports, support memory-mapped I/O access and even nifty features like snapshots, checkpoints, transaction processing, deduplication, compression, multicast and more. Imagine such a technology with no practical limitations on bandwidth, support for multiple methods of addressing and full industry support from absolutely everyone except VMware. Then consider hardware acceleration support and routability over WAN without any special reliability requirements.

Oh wait... there is. It's called SMB and SMB Direct depending on whether MMIO matters.

This is 2017. No one wants direct drive access over fabric unless they are simply stupid. Block storage over network/fabric in 2017 is so impressively stupid: it requires too many translations, inflexible file systems like VMFS and specialized arbiters in the path, and it is extremely inefficient, an order of magnitude worse when introducing replication, compression and deduplication.

The only selling point for block based storage over distance is unintelligent and unskilled staff. The only place where physical device connectivity protocols (SCSI and NVMe) should be used is when you want to connect drives to computers that will then handle file protocols.

BTW, GlusterFS and pNFS are good too.

0
0

One-third of Brit IT projects on track to fail

CheesyTheClown

So 60%+ are expected to succeed?

That's really not bad.

Consider that most IT projects are specced on a scale too large to achieve.

Consider that most IT projects are approved by people without the knowledge to select contractors based on criteria other than promised schedules and lowest bids.

Consider that most IT people lack enough business knowledge to prioritize business needs and that most business people lack the IT experience to specify meaningful requirements.

To be fair, 60% success is an amazing number.

Now also consider that most IT projects would do better if companies stopped running IT projects and instead made use of turn-key solutions.

How much better are the odds of success when IT solutions are delivered by firms with a specific focus on the vertical markets they are delivering to?

0
0

Heaps of Windows 10 internal builds, private source code leak online

CheesyTheClown

Re: I'm done with Windows.

Windows 10 serial driver (C code, based on the same code you've seen... still works): https://github.com/Microsoft/Windows-driver-samples/tree/master/serial/serial

Windows 10 virtual serial driver (C++ code, based on the new SDK with memory safety considered): https://github.com/Microsoft/Windows-driver-samples/tree/master/serial/VirtualSerial

Mac OS X Serial Driver (C++ code... runs in user mode) : https://opensource.apple.com/source/IOSerialFamily/IOSerialFamily-91/IOSerialFamily.kmodproj/

Imagine using a domain-specific language for a kernel: implement the core kernel code in an "unsafe mode", then implement the drivers, file systems, etc. in a "safe mode" that uses memory references instead of pointers (C11 makes moves in this direction, but refuses to break with tradition, doing it as library changes instead of a language feature).

In reality, this is 2017, and if your OS kernel still has a strict language dependence for things like file systems and device drivers, you probably aren't doing it right. These days most of that code should be user mode anyway. And no, user/kernel mode discussions stopped making sense when we started using containers and Intel and AMD started shipping 12+ core consumer CPUs.

0
0
CheesyTheClown

Re: I'm done with Windows.

Ohhh... I'm glad I came back here.

C is a great language and it's extremely diverse. It's absolutely horrifying for something like the Linux kernel, though. Consider this: it has no meaningful standard set of libraries, which means that support for things like collections and passing collections is a nightmare. Sure, you have things like rbtree.[hc] in the kernel, but as anyone who has studied algorithms knows, there is no single algorithm which suits everything.

Let's talk about bounds, stacks, etc... there's absolutely no reason you can't enhance the C compiler to support more memory protection as well. C itself is a very primitive language, and it's great for writing boot code and code which does not need to alter data structures. But there are severe shortcomings in C. Yes, it's 100% possible to add millions of additional lines of repetitive and uninteresting code to implement all those protection checks. But a simple language extension could do a lot more.

Let's talk about where I find nearly all of the exploits in the kernel: error handling and return values. It's amazing how you can cause problems with most code written at two different times by the same person, or by two different people. The reason for this is that there's no meaningful way to handle complex error conditions. Almost all code depends on just returning a negative value which is supposed to mean everything. The solution is to return a data structure which is basically a stack of results and error information, and then handle it properly. The reason this isn't done is that people get really upset when anything resembling exceptions is implemented in C. And yet, nearly every exploit I've found wouldn't have been there if someone had implemented try/catch/finally.
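The "stack of results and error information" idea can be sketched in a few lines. This is Python purely for brevity (the comment is about C), and the `Result`/`read_config`/`load_app` names are invented for illustration: each layer appends its own context instead of collapsing everything into a single overloaded negative integer.

```python
from dataclasses import dataclass, field

@dataclass
class Result:
    """Return value carrying both a payload and a stack of errors,
    instead of a single -1 that is supposed to mean everything."""
    value: object = None
    errors: list = field(default_factory=list)   # stack of (where, what)

    @property
    def ok(self):
        return not self.errors

def read_config(path):
    r = Result()
    try:
        with open(path) as f:
            r.value = f.read()
    except OSError as e:
        r.errors.append(("read_config", str(e)))
    return r

def load_app(path):
    # The caller adds its own context on failure rather than swallowing
    # the error or reducing it to an errno-style code.
    r = read_config(path)
    if not r.ok:
        r.errors.append(("load_app", "could not load configuration"))
    return r
```

The caller ends up with the whole failure trail, which is roughly what try/catch/finally buys you in languages that have it.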

Let's talk about data structure leaking and cleanup related to the above. Better yet, let's not... pretty sure that one sentence was enough to cover it all.

This is 2017, not 1969. In 2017, we have language development tools and technologies that allow us to make compilers in a day. This isn't K&R sitting around inventing the table-based lexical analyzer. Sticking with the C language instead of creating a proper compiler designed specifically for implementing the Linux kernel is just plain stupid.

More importantly, there's absolutely no reason you have to use a standardized programming language for writing anything anymore. If your code, for example an operating system kernel, would profit from a new programming language written for it, do it. You can base it on anything you want. It's actually quite easy... unless you write the language itself in C. Use a language suited for language development instead. Get the point yet?

The next big operating system to follow the Linux kernel will be the one which leaves 95% of the C language intact and implements a compiler which:

a) Eliminates remaining dependencies on assembler by implementing a contextual mode for fixed memory position development.

b) Provides a standard implementation of data structures as the foundation of the language

c) Implements a standard method of handling complex returns... or exceptions (possibly <result,errorstack>)

d) Implements safe vs. non-safe modes of coding. 90% of the Linux kernel could easily have been done in safe mode.

e) Offers references instead of pointers as an option. This is REALLY important. Probably the greatest weakness of C for security is the fixed-memory-location bits. Relocatable memory is really, really useful. If you read the kernel and see how many ugly hacks have been made because it isn't present, you'd be shocked. The Linux kernel is completely slammed full of shit code for handling out-of-memory conditions which exists purely to support architectures lacking MMUs. References can be implemented in C using A LOT of bad and generally inconsistent code. They can be added to a compiler with a bit of work, but when combined with the kernel code, they could enable a memory defragmenter that would fix A LOT of the kernel.
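Point (e) can be illustrated with a toy handle-based allocator (Python, entirely invented for this sketch): because callers hold opaque handles rather than raw addresses, the allocator is free to slide live objects together — the defragmentation that raw C pointers rule out.

```python
class Heap:
    """Toy handle-based allocator: callers hold opaque handles, never raw
    addresses, so live objects can be moved (compacted) at any time."""
    def __init__(self):
        self._cells = []         # backing store; freed slots become holes
        self._index = {}         # handle -> index into _cells
        self._next_handle = 0

    def alloc(self, obj):
        handle = self._next_handle
        self._next_handle += 1
        self._index[handle] = len(self._cells)
        self._cells.append(obj)
        return handle            # a reference, not a pointer

    def deref(self, handle):
        return self._cells[self._index[handle]]

    def free(self, handle):
        self._cells[self._index.pop(handle)] = None   # leave a hole

    def compact(self):
        # Slide live objects together and patch the handle table;
        # impossible if callers held raw addresses into _cells.
        live = [(h, self._cells[i]) for h, i in self._index.items()]
        self._cells = [obj for _, obj in live]
        self._index = {h: i for i, (h, _) in enumerate(live)}
```

Every outstanding handle survives `compact()` unchanged; only the hidden index moves.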

And since you're kind enough to respond aggressively, allow me to respond somewhat in kind. You're an absolute idiot... though maybe you're only a fool. C# and .NET are actually very good. So are C, Java, C++ and many others. Heck, I write several good languages a year when a domain would profit from one. If you don't know why C# and .NET, or even better JavaScript, are often better than plain C, you probably shouldn't pretend you know computers.

Did you know that JavaScript generally produces far faster and better code in most contexts than C and assembler today? If you understood how microcode and memory access work, you'd realize there's a huge benefit to recompiling code on the fly. Consider that JavaScript spends most of its time recompiling code as it's being run. This is because the first time you compiled it, it was optimal for the current state of the CPU, but as the state of the system changed (that's what happens in multitasking systems), the cache has changed and the CPU core being used may have changed (power state, etc.), and the JavaScript compiler will reoptimize the code. It's even possible with JavaScript that if you're on a hybrid system containing multiple CPU architectures or generations, the code can be relocated to a CPU which is better suited for the process.

Of course C could be compiled into JavaScript or WebAssembly and have the same benefits. The main issue is that you lose support for relocatable memory, as WebAssembly's support for C/C++ uses flat memory. But at least for execution, it's very likely your C code will run faster on WebAssembly than on bare metal. If you then start making use of JavaScript/WebAssembly libraries for things like string processing, it will be even faster. If you move all threading to JavaScript threading, it will be better still.

This does not mean you should write an operating system kernel in JavaScript. Just as C is no longer suitable for OS development, JavaScript never will be.

0
1
CheesyTheClown

Re: I'm done with Windows.

If you don't mind me asking, what do you mean by "this" when stating "But this is completely different."?

And which threats has MS not addressed lately?

And, the lack of mitigation of threats? Is this only when you avoid forced upgrades? Did you want more secure software or to stay with older and less maintained software which might not be patched? Did you not want the Windows update which blocked wannacry?

You are very excited about Linux. Do you keep it up to date? Do you run antivirus? Do you allow network applications access via SELinux and later close the holes when you no longer use the app? Have you configured different network profiles for home and public? Do you continue using apps with dependencies on libraries with known vulnerabilities? How do you manage your private keys?

Linux is fun. I spend most of my Linux time reading driver and network stack source looking for rootkits for fun. I love finding nifty things like code injection opportunities in the forwarding tables. Or better, methods of replacing openssl.so with a copy that backdoors the private keys.

Linux's greatest weakness is its dependency on C for everything. It's like placing a welcome mat on the floor and leaving the key beneath it. As such, Linux, GTK, Gnome... not even a challenge.

So... back to "This"?

13
12

How to avoid getting hoodwinked by a DevOps hustler

CheesyTheClown

Re: If they’re a 'DevOps Expert', they probably aren’t

I'm a programmer with some pretty nice notches on my belt, and I'm into data center automation now. I spent five solid years establishing proficiency in IT after leaving product development, and now I'm actively working on PowerShell DSC and C#. I regularly write and open-source modules and tools, and work with customers ranging from a few dozen users to the largest government entities in the world. I employ test-driven development, code review, unit and integration testing and revision control. I focus 100% on deploying systems that not only work but repair themselves when things go wrong, allowing operations and development to work on implementing new systems instead of fixing things that shouldn't have broken to begin with.

I have absolutely no idea what DevOps is even though I'm a developer and my coworkers are operations.

I am certainly not an expert either.

So no... people don't know what it is and the people who are probably best at it are still just learning.

Let's summarize it as this.

If you're scripting... it's not DevOps

If you're doing it by hand... it's not DevOps

If you're describing what you want and then a system implements it and makes sure it stays implemented... it may be DevOps, but since there are no reliable systems for that yet, it probably isn't.
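That "describe what you want and a system keeps it that way" loop can be sketched in a few lines. Python here, purely illustrative; the dict-based state model is invented, and real desired-state tools (DSC's Test/Set cycle, for instance) are far more involved.

```python
def reconcile(desired, actual, apply_change):
    """Declarative loop: compare desired state against actual state,
    apply only the differences, and report what changed."""
    changed = []
    for key, want in desired.items():
        if actual.get(key) != want:
            apply_change(key, want)   # e.g. start a service, open a port
            actual[key] = want
            changed.append(key)
    return changed
```

Run it twice and the second pass is a no-op: the system has converged, which is exactly the property a script run by hand does not give you.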

6
2

America throws down gauntlet: Accept extra security checks or don't carry laptops on flights

CheesyTheClown

Re: Anon

Following the Brexit decision, I have re-sourced my suppliers outside of the UK because of potential difficulties related to red tape similar to the US's.

In order to get paid money I'm owed by US companies, I have to hire a US accountant who specializes in international trade to fill out paperwork, or simply forfeit 30% of my earnings. I'm told this is because the US simply assumes all money moved out of the US is probably for laundering or tax evasion. That's OK; I've decided that working with governments that see their friends as potential criminals (this was work I did for the DHS) isn't worth the effort.

So, I've stopped travelling to the States... and spending money there, often A LOT, because it's become too difficult to do business in the States to be bothered with it. I can't be bothered much with London anymore either; I'd rather make a phone call than fly there. I went through Heathrow 20-30 times last year, and up until the week before the Brexit vote, customs in Terminal 5 was quite quick (at least when you flew business). The few times since, it was horrifying. And no, I'll be damned before I spend a few hundred pounds to preapprove. I'll just stop my weekend trips to bring the kids for milkshakes at Hamleys. I don't need to spend my money in a country where you're guilty until proven innocent.

Worst of all, the overly opportunistic nature of the US and the UK breeds their paranoia. They think that since Americans and Brits would be more than willing to take whatever you have the second you're not looking, the rest of the world must be like that too. And I'm sure there are some people who are like that. But I refuse to spend my life in fear of those people, and I refuse to be treated as if I am potentially one of them.

Of course, what most Brits and Americans don't realize is that most of their own people, who they think would take what they have without a second thought... wouldn't.

58
8

VMware's security product to emerge in Q3 as 'App Defence'

CheesyTheClown

Kudos!!! And WTF!?!?!

I'm a huge fan of advances like this. But this is something I've been doing for years with other systems like ACI and Hyper-V. I know this has its own little competitive advantage, but it's basically the same stuff as the other guys, a few years late.

So, if there's actually a focus on this... why isn't VMware working with their Linux and Windows drivers to dig deeper into the system and provide mechanisms, through the standardized firewall APIs on those hosts, to give meaningful app-level feedback to the SDN solution? I mean... really... come on now. I want a method for my web server to say "Drupal needs to update on port X" and then have a policy system which decides whether the Drupal update app should have access or not.

Hasn't anyone told VMware that we've moved on from virtual machines in the software-defined datacenter? We're working on containers, and container automation doesn't stop just because you've installed it. Containers request resources from the host, and policies on the host grant or deny access to those resources.

Also, VMware and Cisco need to learn that we don't want to do software-defined using another stinking controller. We want to define networking from the software. Installers and automation systems are not software; they're installation scripts. If you want an example of what software-defined is, notice how, when a program on Windows asks for access to the network, Windows asks whether you'd like to grant that access... and it's not asking for port numbers; it's asking whether that program can have certain access to certain resources. That panel should pop up on the security/network admin's telephone instead, and when he/she clicks OK, it should install policies in Windows, NSX, the IPS and the firewall all in one go.

So, really, VMware: kudos for catching up with 2012. It's really quite cute. But can you please start working on software-defined networking?
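The "approve once, enforce everywhere" flow described above can be sketched as a toy policy engine (Python; every name here is invented for illustration, and a real system would involve authentication, asynchronous approval and vendor APIs): an application requests a capability, the admin's decision is recorded, and an approval pushes the same rule to every enforcement point at once.

```python
class PolicyEngine:
    """Toy model of software-defined access control: one decision fans
    out to all enforcement points (host firewall, SDN, IPS, ...)."""
    def __init__(self, enforcement_points):
        self.enforcement_points = enforcement_points   # name -> rule list
        self.decisions = {}

    def request(self, app, capability, approve):
        # `approve` stands in for the admin tapping OK on their phone.
        self.decisions[(app, capability)] = approve
        if approve:
            rule = f"allow {app}: {capability}"
            for rules in self.enforcement_points.values():
                rules.append(rule)     # pushed everywhere in one go
        return approve
```

A denied request leaves every enforcement point untouched, which is the point: policy lives in one place, not in four separately configured boxes.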

1
2

AES-256 keys sniffed in seconds using €200 of kit a few inches away

CheesyTheClown

Re: I'm not even surprised.

Real-time memory encryption in servers is a generally bad idea for a multitude of reasons.

1) It's a false sense of security. People will believe it offers some level of protection... it doesn't.

2) The memory controller would have to be issued keys from within each session. These keys are theoretically shielded from the host system. If the guest operating system implements this technology, kudos to them; it means that direct VM-to-VM attacks are taken care of.

3) Drivers loaded on the guest VM will have access to the encrypted memory, as they run in kernel mode on the guest VM. This means virtual network, disk and graphics adapters will be able to access memory unencrypted, or issue memory requests to the MMU to get access to whatever they want. So a compromised driver can be an issue. If you read the source code for the e1000, VirtIO Ethernet and VMXNET3 drivers in the Linux kernel, you'll see that they aren't exactly hardened for security. They're good device drivers, but VMXNET3, for example, looks very pretty in code format mostly because it's not particularly bogged down with silly things like bounds-checking code.

4) "Bridges" used for performing remote execution on guest VMs will generally have to be available, since this is how automation systems work. So PowerShell Remoting (WMI/OMI), the QEMU Monitor Protocol, KVM network bridges, PowerCLI, etc. all offer methods of performing RPC calls on guest VMs from the host, and in many circumstances directly at the kernel level.

5) Hardware hairpinning is an option as well. PCIe (unlike PCI and older buses) operates entirely on memory-mapped I/O (MMIO), which means that all communications with the system and with system memory are performed using memory reads and writes. On bare-metal hypervisors with proper hardware such as the Cisco VIC, nVidia GPUs, etc., the hardware is programmable, partitionable and can execute code. An example would be to log into the Cisco VIC adapter via out-of-band management and run show commands for troubleshooting. The iSCSI troubleshooting commands in particular are quite powerful and would easily allow issuing memory reads and writes on the fly from a command line interface. In order to honor them, the MMU in the CPU would have to decrypt the requested memory. Of course, the MMU and OS driver could mark pages appropriately to allow access lists on individually protected pages. But that's moot when we get to point 6).

6) RDMA provides a means of extending system memory from server to server. This works by mapping regions of physical memory in each server to be accessed by hardware from other systems over devices like RDMA-over-Ethernet NICs or InfiniBand HCAs. High-performance systems like HPC clusters, high-performance file servers (like Windows with SMB Direct) and high-performance hypervisors like KVM and Hyper-V (ESXi is notably absent here, as it sacrifices high performance for high compatibility) perform live migration over RDMA where possible. While it is theoretically possible to move the guest machines in encrypted states, it would be necessary to carry enough information from one server to the other during a migration to provide a decryption key on the new host to access the VM memory as it is moved. That means the private key would have to either be transferred in clear text or be renegotiated through a hypervisor-hosted API, providing a new key in clear text to the hypervisor... if only briefly.

The intent of encrypted memory was really, really awesome, but extremely poorly thought out. It could have some benefits in places like containers, where individual containers could be shielded from the host OS and they don't migrate. But there would still be critical issues with regard to where the decryption keys reside. Also, as containers generally ARE NOT bare metal, the keys would have to reside on the container host instead.

Thanks for bringing this topic up, though. Make sure you tell everyone who intends to depend on encrypted memory that it's at least 10 years and several Windows, Linux, Docker and hardware generations away from being meaningful. But make sure to tell them they should bitch to their vendors to make them support it ASAP. It will require an entire ecosystem (security in layers) approach to make this happen.

3
0

Hey blockheads, is an NVMe over fabrics array a SAN?

CheesyTheClown

Who cares?

NVMe is simply a standard method of accessing block devices over a PCIe fabric. As with SCSI, it's thoroughly unsuited for transmission over distance; it adopted many of the worst features of SCSI, at least with regard to fabric. There is nothing particularly wrong with using memory-mapped I/O as a computer interconnect. In fact, it's amazing, right up until you try to run it across multiple data centers. At that point, NUMA-style technologies, no matter how good they are, basically fall apart. There's also the issue that access control simply is not suited for ASIC implementation, so employing NVMe and then adding zoning breaks absolutely everything in NVMe. Shared-memory systems are about logical mapping of shared memory; they are horrifying for storage.

So, in 2017, when we know that any VM will have to perform software-level block translation and encapsulation at some point, why the hell are we trying to simulate NVMe, which lacks any form of intelligence, when we should be focused on reducing our dependence on simulated block storage by more accurately mapping file I/O operations across the network?

BTW, the instant we added deduplication and compression, block storage over fabric became truly stupid.
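A minimal sketch of why dedup fights dumb block protocols: once two logical blocks point at one physical extent, "rewrite block N" is no longer a simple in-place write — the array has to hash, look up, and do copy-on-write bookkeeping, none of which the block protocol expresses. The `DedupStore` class and its structures are purely illustrative.

```python
import hashlib

class DedupStore:
    def __init__(self):
        self.extents = {}    # content hash -> physical data
        self.lba_map = {}    # logical block address -> content hash
        self.refcount = {}   # content hash -> number of LBAs sharing it

    def write(self, lba: int, data: bytes):
        h = hashlib.sha256(data).hexdigest()
        old = self.lba_map.get(lba)
        if old is not None:
            # Overwrite: drop the old reference, free the extent if unshared.
            self.refcount[old] -= 1
            if self.refcount[old] == 0:
                del self.extents[old]
        self.extents.setdefault(h, data)   # store payload only once
        self.refcount[h] = self.refcount.get(h, 0) + 1
        self.lba_map[lba] = h

store = DedupStore()
store.write(0, b"same payload")
store.write(1, b"same payload")   # second write stores no new data
print(len(store.extents))          # one physical extent backs two logical blocks
```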

2
6

Another FalconStor CEO out as storage software firm hunts for growth

CheesyTheClown

Info on FalconStor?

I've just scrubbed their website and there is no meaningful technical documentation that can be easily found. I found a half-ass feature list and almost no user guides or configuration guides. All I found was endless junk for investors. I almost can't tell if they actually have a product to sell.

From what little I could find, it looked almost like a web front end to Linux LVM2, ZFS and LIO. Now, front ends are great, but there is no information regarding whether their product offers anything special besides a web page on Linux. Heck, it could just as easily be a front end on ZFS and COMSTAR.

How does a company that doesn't even document basics like whether it supports VAAI-NAS sell anything? They brag about having a presence in 20% of the Fortune 500. Does that mean that 20% are paying customers, or do they have a VM running the demo version?

I have tried solutions from dozens of vendors but never FalconStor, because I could never figure out why I should. But I guess FalconStor prefers to skip the tech guys.

0
0

Uh-Koh! Apple-Samsung judge to oversee buggy Intel modem chip fight

CheesyTheClown

Re: And Virgin in uk?

That's under the assumption that I had access to such hardware. You are also under the false impression that what you just posted would provide meaningful information. I can see the results of that on some of the links I've encountered and it simply didn't provide much information.

Let's run with this though.

First of all, I'd imagine that if Intel has not released a patch for this problem, it would require alterations to the ASIC to correct the issue.

I could probably with some effort borrow a CMTS from a local cable company, I see that I can find a relatively old Cisco 7200 based CMTS for about $2000 from eBay or maybe piece one together from a chassis and a line card for a few hundred dollars. The problem with this is that I wouldn't be able to get DOCSIS 3.0 support operational which may be required.

I see that Puma 6 modems don't cost much either.

So, let's assume I could build a test rig for about $1000 (which I really wouldn't spend unless I had a business case).

I would need to figure out how to get root OS access to the Puma modem which likely is not difficult, though if the modem is running anything other than Linux, it may have just one of those stupid text based management programs. So I would need to connect JTAG cables to run in-circuit-emulation based debuggers. For Intel chips this isn't particularly difficult as they are extremely well known and thoroughly documented.

A much less expensive alternative is to get a boot image for the device and open it in something like IDA Pro with a decompiler plugin. This could require much effort, since I'd have to guess my way through the file system and operating system code. And if the operating system image is compiled monolithically (instead of using kernel modules), which is common on embedded systems like this, I would have little or no hope of reverse engineering the applicable drivers.

Even if I somehow managed to reverse engineer the drivers (not really that difficult from kernel modules) then I would only have the control APIs to the ASIC, it wouldn't give me insight to the ASIC itself.

As the problem does not appear to be able to be fixed by software, even if I managed to reverse engineer microcode pushed to the chip (disassemble to VHDL or similar), it would likely not cover the areas of the chip which are plagued.

If I had the VHDL code for the chip, it might still be difficult to work with. Generally, even without comments, it requires good engineering documentation with block diagrams of each core... but with that, I more than likely could accurately diagnose the problem and come to a similar conclusion to the one Intel more than likely has: that there's a hardware limitation somewhere that can't be fixed without replacement.

So we're back to speculation... and more than likely meaningless speculation.

So I stand by my comment that Intel can replace the modems which people complain about with newer models. And for everyone else, suggest that cable companies implement IPS filters to protect their users from attacks.

0
0
CheesyTheClown

Re: And Virgin in uk?

I'd say that the real issue here isn't whether Intel is good or bad. Remember that Intel generally makes pretty great stuff. It's a matter of how this bug is being handled.

I haven't been able to find a great deal of information on this bug other than the artificial stuff like "my modem is experiencing jitter" or your comment about flow tables. I'm going to assume that before using terms like flow tables, you've done some research on this and know what phase of the forwarding pipeline this is and whether population of the flow tables is the problem or not.

I'm curious about the forwarding process of the modem since whether in bridged or routed mode, I'm under the assumption at this point that Intel has implemented a hardware based packet processor that manages most packet buffering and forwarding in the hardware forwarding engine.

Now, the act of forwarding a packet should be deterministic at all times. However the decision making process of how and where to forward the packet can introduce difficulties. This engine almost certainly performs header parsing in hardware. This should not be an issue either since headers per service type should be consistent.

The length of the packet seems to be what is choking the system. Length matters when packets are classified as runts or otherwise exceed the MTU. What seems to matter here is how the device handles what are likely padded frames. That means that when the hardware processes a frame which needs to be transmitted with additional padding in order not to be classified as a runt frame (less than 64 bytes including the CRC; the preamble used for clock recovery is separate), there is apparently an issue.
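The 802.3 minimum-frame rule being described fits in a few lines of Python. The 64-byte minimum (including the 4-byte FCS) is per the Ethernet spec; the function name is my own.

```python
ETH_MIN_FRAME = 64   # bytes on the wire, including the 4-byte FCS
FCS_LEN = 4

def pad_frame(frame_without_fcs: bytes) -> bytes:
    """Zero-pad a frame so that, with the FCS appended, it reaches 64 bytes."""
    shortfall = ETH_MIN_FRAME - FCS_LEN - len(frame_without_fcs)
    if shortfall > 0:
        frame_without_fcs += b"\x00" * shortfall
    return frame_without_fcs

tiny = b"\xff" * 14 + b"hi"   # 14-byte Ethernet header + 2-byte payload
padded = pad_frame(tiny)
print(len(padded))            # 60: with the FCS that makes 64 on the wire
```

Trivial in software, but if the hardware forwarding engine has no path for it, every frame like `tiny` becomes an exception punted to the CPU — which is exactly the theory below.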

So, following the logic above, it seems to me that jitter and latency are introduced when padding is required while bridging from DOCSIS to 802.3. If DOCSIS transmits padding (not sure, I'm not that familiar with DOCSIS), then upon receiving the packet, the packet engine seems to strip the padding (which is healthy, I imagine) and verifies packet authenticity (CRC or equivalent). Then, when re-encapsulating as Ethernet, a new frame is constructed. When the frame meets all the main requirements, the process is handled in hardware, as observed from the fact that larger packets don't appear to cause issues.

When a runt frame requiring padding is encountered, the modem will generate a protection fault within the CPU which is handled by the operating system. The operating system then signals the device driver for the hardware. The driver then either copies the frame from the buffer into system cache or performs bunches of IO operations on the memory in place in the buffer. The entire frame is likely parsed by CPU at this point, then is placed back into the buffer to forward it again and the driver signals the hardware via MMIO to continue.

The earlier bug we saw, before any patches, was that packets were simply dropped. I don't know if 100% of the runt frames are dropped and this accounts for 6% of the data, or if 6% of the runt frames are dropped. This makes a big difference.

So my theory is that the hardware logic is completely missing a runt frame handler and is entirely dependent on software to process runt frames. This sounds crazy, but Cisco... THE NETWORKING COMPANY has had a known (but quietly hidden) bug in their 6800IA hardware for over 2 years unpatched that drops runt frames when they retag VLANs, and they're hoping no one notices and complains.

Given the forwarding engine that is likely provided in the PUMA 6 (speculating) is designed to work like a normal forwarding engine, that means 99.99% of the forwarding work will be done in hardware. If there is an exception encountered meaning that the forwarding table (I assume this is what you mean by flow table) is not populated or there is a packet exception such as a runt frame requiring padding, the CPU will need to process the packet.

It is extremely common that in established conversations, the flow table should not need to be altered. Since you are in bridge mode there are probably 2-3 known flows on the DOCSIS side (being the routers processing at the CMTS) and there is probably at most 1 flow on the Ethernet side which is your network router. However in Layer-3 there could be a great deal more involved when encountering NAT.

That said, since the device likely can handle NAT, it probably has some amazing processing capabilities for handling exceptions with the NAT tables.

But runt frame processing is not handled quickly, it seems. This could be because NAT doesn't actually require reading and parsing a full frame, then generating a whole new frame to process. Instead, it probably simply requires using a hardware-optimized mechanism to push a new NAT entry to the table, after which translation is handled in hardware.

So, then comes latency and jitter. If the packet has to be processed in software, and the software itself is not designed for packet processing (meaning plain old Linux, Wind River, whatever), then there will be non-deterministic latency when processing these packets, as operating systems can often use between 1ms and 150ms just to respond to an interrupt. This is not an issue for the occasional unknown flow. Chances are, the hardware is using an alternate buffer to forward with during this time. But if there are a lot of frames queued for forwarding, the buffers could be full and block the pipeline while the unknown frame is being processed... at which time 150ms can be deadly.
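As a back-of-envelope model of that jitter mechanism: hardware-forwarded frames see a fixed tiny delay, while frames punted to the OS also pay the interrupt-response time. The numbers are the rough figures from above, not measurements, and the function is illustrative.

```python
HW_LATENCY_MS = 0.05   # assumed fast-path delay through the forwarding ASIC

def forward_latency(slow_path: bool, irq_response_ms: float) -> float:
    """Total forwarding latency for one frame, in milliseconds."""
    return HW_LATENCY_MS + (irq_response_ms if slow_path else 0.0)

fast = forward_latency(False, 0.0)
worst = forward_latency(True, 150.0)   # OS takes 150 ms to service the IRQ
print(f"jitter between best and worst case: about {worst - fast:.0f} ms")
```

For a gamer sending a stream of tiny frames, the spread between `fast` and `worst` is the jitter people are measuring.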

So, the next thing that comes up is that there were earlier articles on this topic I believe which blames CPU speed throttling for the problem. This is common. Since the CPU in question spends most of its time asleep as it only needs to handle management and exceptions, it can be REALLY slow most of the time. When a new exception comes in, it will need to throttle the CPU up quickly. This adds more delays... maybe another 50ms ... who knows, I can't find the programmers guide for the chip.

So, now we're seeing lots of delays.

One option is that the ISP simply block runt frames which will kill any games using very small frames. Then beg the game guys to intentionally pad their packets. Of course, chat programs that transmit every character as they're typed will fail as well.

Another option is to optimize the OS kernel for processing runt frames... if they can be processed at all. There's a chance the packet forwarding microcode doesn't have a proper mechanism for this. It may demand that each packet be handled independently. If this is the case, then without replacing the chip, there may be no answer. Of course, recoding the OS and writing a split-core kernel, which would allow one core to run the management OS and the other core to run a packet processor, could improve performance and provide deterministic forwarding, but it would still have high latency. At least it would be reliable.

Finally, the real solution, recall the products. The issue with this is that the cost of fixing a device that is this cheap with such a small margin is more expensive than just making a new one. That said, if Intel has to sponsor a recall of every single device shipped with these chips, it could mean billions lost.

So the best option may be, help the vendors make runt frame friendly devices. Then if a customer complains, send them a replacement free of charge. Then pay whatever class action suit comes up for $5-$50 million and be done with it. It might even be cheaper just to pay the class action and make you buy your own replacement.

I think Intel unfortunately is handling this the best they can. Bugs happen. And there has never been a cable modem chipset that didn't suffer one problem or another.

I think you'll find that it shouldn't be long before your service provider is in a position to offer a new modem with a newer chip that doesn't have the problem. I'd imagine the delay now is quality control.

3
1

HPE hatches HPE Next – a radical overhaul plan so it won't be HPE Last

CheesyTheClown

Re: Paraphrase "More job cuts"

That's not true. She has turned HPE into a company that buys profitable companies that have customers that can't leave them for at least a few years, kills off their engineering teams, kills their products. Then when all their customers leave because they bought products from smaller more agile companies and now are being treated like hell, HPE either spins off or kills off the business units.

Let's take an example like Aruba. Aruba customers understood exactly what they were buying. They could buy wireless access points and controllers embedded in switches, which provided an excellent solution with predictable pricing and fantastic support. Then HPE, which had a mostly shit product line because they bought a buggy-as-shit, half-finished product in 3Com, sucked them up and, without considering the impact on customers, killed off the Aruba switching products as redundant, leaving customers without integrated controllers. They also started moving support to underqualified support centers in India. They killed off proper Aruba-specific sales. They merged HPE networking with Aruba as if they are getting ready to spin off enterprise networking as well. The Aruba documentation and communities got buried in HPE networking hell. Now Aruba survives by selling more equipment to companies who already had Aruba and can't justify dumping it because they haven't had a return on investment yet. Besides, the only alternative is Cisco, and doing business with Cisco can be very difficult at times.

Simplivity... haha oh dear... they died the moment HPE bought them.

Nimble customers are already being beaten to death by HPE.

Ever since HP was taken over by people who wouldn't know what an oscilloscope was if they had one smashed over their heads, HPE has been strictly an acquisitions and mergers company. They have not been a reliable source of technology for a long time.

8
0

Farewell, slumping 40Gbps Ethernet, we hardly knew ye

CheesyTheClown

It's about wavelength as opposed to transceivers.

40Gb/s is accomplished with 4 bonded (think port channel, kinda) 10Gb/s links. That means we need 4 wavelengths to accomplish 40Gb/s, or 10 for 100Gb/s. Using WDM equipment, a 40Gb/s transceiver can deliver 10, 20, 30 or 40Gb/s depending on which wavelengths are optically multiplexed.

100Gb/s using 25Gb/s transceivers can provide 25, 50, 75 or 100Gb/s over the same wavelengths.
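The lane arithmetic works out like this (a trivial sketch; the function name is illustrative): the same four leased wavelengths jump from 40Gb/s to 100Gb/s just by swapping the transceiver generation.

```python
def link_capacity_gbps(lanes: int, gbps_per_lane: int) -> int:
    """Aggregate capacity of a bonded multi-lane optical link."""
    return lanes * gbps_per_lane

print(link_capacity_gbps(4, 10))   # 40  -- 40G style: 4 x 10G lanes
print(link_capacity_gbps(4, 25))   # 100 -- 100G style: 4 x 25G on the same wavelengths
print(link_capacity_gbps(2, 25))   # 50  -- partial upgrade with only 2 lanes lit
```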

Long range transceivers capable of service-provider-scale runs are very expensive. But compared to the rental of wavelengths, they cost nothing. I've seen invoices for 4 wavelengths along the Trans-Siberian Railway where short term leases (less than 30 years) were measured in millions of dollars per year. Simply replacing a switch and transceiver would boost bandwidth from 20Gb/s to 50Gb/s without any alterations to the fiber or passive optical components.

So, 40Gb/s makes a lot of sense in data centers where there are no recurring costs for fiber. But when working with service providers, an extra million dollars spent on hardware at each end of a fiber is little more than a financial glitch.

5
0
CheesyTheClown

Re: Moore's Law on Acid

We'll move on to terabit, but as it stands, quantum tunneling is a major problem with modern semiconductors preventing us from going there. If I recall correctly, Intel posted a while back that their research says we will need a 7nm die process to create 1Tb/s transceivers. So for now we'll focus on 400Gb/s.

1
0

Two leading ladies of Europe warn that internet regulation is coming

CheesyTheClown

Re: But Angela has a working brain...

A PhD in chemistry, while not likely to make her a candidate for the Fields Medal any time in the near future, should confer a high enough level of mathematical understanding to grasp concepts such as factoring and coefficients. She might not understand the relationship between Mersenne primes and polynomial-based encryption mechanisms... but I'm pretty sure she has friends she respects who do.

I don't like Merkel... and with regard to politics, I don't particularly respect her. I suppose this is very likely because the leadership positions in Germany have demands which make people into assholes (where in most other countries, asshole is a prerequisite). But I do think she's competent. Theresa May scares the shit out of me and gives me nightmares. She's basically Donald Trump with an accent which sounds benign. I really think she should change her name to Umbridge and get herself some special quill pens.

16
0

Intel to Qualcomm and Microsoft: Nice x86 emulation you've got there, shame if it got sued into oblivion

CheesyTheClown

Instruction stayed the same, the core changes

You have a lot of great points. I always considered the 64KB segment to be a smart decision considering backwards compatibility with the 8085. It also worked really well for paging magic on EMS, which was not much more difficult to manage than normal x86 segment paging. XMS was tricky as heck, and DOS extenders were really only a problem because compiler tech seemed locked into Phar Lap and other $500+ solutions at the time.

I don't know if you recall Intel's iRMX which was a pretty cool (though insanely expensive) 32-bit DOS for lack of a better term. It even provided real-time extensions which were really useful until we learned that real-time absolutely sucks for anything other than machine control.

Also, DOS was powerful because it was a standardized 16-bit API extension to the IBM BIOS. A 32-bit DOS would have been amazingly difficult, as it would have required all software to be rewritten, since nearly everything was already designed around segmented memory. In addition, since most DOS software avoided using Int 21h for display functions (favoring Int 10h or direct memory access) and many DOS programs used Int 13h directly, it would have been very difficult to implement a full replacement for DOS in 32-bit.

Remember: on the 286, and sometimes on the 386, entering protected mode was easy, but switching back out was extremely difficult, as it generally required a simulated bootstrap. That means accessing 16-bit APIs from 32-bit code may not have been possible. They would have had to be rewritten. For most basic I/O functions that wouldn't be problematic, but specifically in the case of non-ATA (or MFM/RLL) storage devices, the API was provided by vendor BIOSes that reimplemented Int 13h. So, in order to make them work, device drivers would not have been optional.

In truth, the expensive 32-bit windowed operating systems with a clear differentiation between processes and system-call oriented cross process communication APIs based on C structures made more sense. In addition, RAM was still expensive with most systems still having 2MB of RAM or less, page exception handling and virtual memory made A LOT of sense as developers had access to as much memory as they needed (even if it was page swapped virtual memory).

I think, in truth, most problems we encountered were related to a >$100 price tag. Microsoft always pushed technology by making their tech more accessible to us. There were MANY superior technologies, but Microsoft always delivered at price points we could afford.

Oh... and don't forget to blame Borland. They probably were the biggest driving factor behind the success of DOS and Windows. By shipping full IDEs with project file support and integrated debuggers (don't forget second CRT support) and integration with assembler (inline or TASM) and affordable profilers (I lived inside of Turbo Profiler for ages). Operating system success has almost always been directly tied to accessibility of cheap, good and easy to use development tools. OS/2 totally failed because even though Warp 3.0 was cheap, no one could afford a compiler and SDK.

1
0
CheesyTheClown

Re: x86 bloated!!!

Some would also suggest that RISC suffers similar problems when optimized for transistor depth where highly retiring operations are concerned. Modern CISC processors have relatively streamlined execution units, which is what consumes most of their transistors... as with RISC. However, RISC designs, which have to increase instruction word size regularly to expand functionality, suffer the burden of either requiring more instructions than CISC for the same operation, or a higher cost of data fetching, resulting in longer pipelines with a greater probability of cache misses. Since 2nd level cache and above, as well as DDR, generally depend on bursts for fetches, RISC with narrow instruction words can be a problem. Also consider that pipeline optimization of RISC instruction sets, which may allow branch conditions on every instruction, can be highly problematic for memory operations.

Almost all modern CPUs implement legacy instructions (such as 16-bit operations) in microcode, which executes much like a JIT compiler compiling instructions in software.

Most modern transistors on CPUs are spent on operations such as virtual memory, fast cache access and cache coherency.

0
0
CheesyTheClown

Re: At this point...

I believe this is the right direction to think in.

Intel isn't trying to guarantee security in the mobile device market. That ship sailed. In fact, with the advent of WebAssembly, it is likely x86 or ARM will have little or no real impact now. Intel's real problem with mobile platforms like Android was the massive amount of native code written for ARM that wouldn't run unmodified on x86. With WebAssembly that will change.

Intel is more concerned that with Microsoft actively implementing the subsystem required to thunk between emulated x86 and ARM system libraries, it will be possible now to run Windows x86 code unmodified on ARM... or anything else really.

That means that there is nothing stopping desktop and server shipping with the same code as well. This does concern Intel. If Microsoft wants to do this, they will have to license the x86 (more specifically modern SIMD) IP. Intel will almost certainly agree to do this, but it will be expensive since it could theoretically have very widespread impact on Intel chip sales.

Of course, Apple, who proved with Rosetta that this technology works, could have moved to ARM years ago. They probably didn't because they decided instead to focus on cross-architecture binaries via LLVM to avoid emulating x86. Apple will eventually be able to transition with almost no effort because all code on Mac is compiled cross-platform... except hand-coded assembly. Microsoft hasn't been as successful with .NET, but recent C++ compilers from Microsoft are going that way as well. The difference is that Microsoft never had the control over how software is made for Windows that Apple has had for Mac or iOS.

5
0

Windows 10 Creators Update preview: Lovin' for Edge and pen users, nowt much else

CheesyTheClown

Hadn't thought much on it.

I just press the Windows key and type. It's clear that the search engine comes from the makers of Bing... but unlike Bing, it often actually comes close. So, I don't think I've actually seen the settings interface. I just search for what I need and generally try to use Powershell for most everything.

I will admit that display resolution should not be categorized as advanced settings.

1
8
CheesyTheClown

Re: Fall Creators Update

The quote "only the strong will survive" is actually a misquote regarding natural selection. Natural selection will select out those of a species least capable of adapting to the changes in their environment. One might suggest that as the world evolves, people who can't figure out how to become comfortable with a new version of Windows after 10 years might be headed down the same road as the dodo.

5
30

Science megablast: Comets may have brought xenon to Earth

CheesyTheClown

Re: Comets? Why bloody comets?

I don't get it.

Was that a legitimate query, a rhetorical question, or maybe an example of British humour (not to be confused with humor. Where humour employs the word funny as in "does that smell funny to you", humor employs the word funny as in "that was so funny I'll need to visit the hospital for stitches after rupturing my spleen from laughing so hard")?

Earth is tiny. Comets are generally small; Hale-Bopp, at more than 60km in diameter, is insignificant compared to Earth at 12,742km, or even Pluto (2,374km), but its tail extends 500 million km. As Earth orbits the sun, it comes into constant contact with all kinds of debris, such as randomly floating junk and stuff left behind by meteors and comets. Gravity sucks them in and they become part of Earth.

Earth doesn't need to be provided with anything any more than your car needs to squash a bug on its windshield. But sometimes it appears we get lucky and manage to gather a few useful items, such as something we believe may have helped spark life... which of course originally happened in England... where Jesus was born and lived... and blessed the Queen... and which was also the geometric center of the universe... and as such must be protected by aliens of all forms... which is why Theresa May will now secure cameras in your bedrooms and bathrooms for your own protection. After all, this article says that someone or something in space is trying to invade your country with particles and possibly life-forming gasses that are believed to be a primordial source of dangerous terrorists.

I guess it would of course be better if we had only British produced Xenon 67P in our atmosphere in the future :) Of course, the British version would be 69P because it's just worth more and it's damn sexy too!

3
0

Tech can do a lot, Prime Minister, but it can't save the NHS

CheesyTheClown

Quality healthcare = Civilization

If people are sick, they don't work.

If people fear getting sick when they're old, they hoard savings instead of circulating them in the economy.

If people take the burden of healthcare costs on themselves directly, they avoid doctors

Quality healthcare and quality education provided by the government increases tax revenue and at 12% of the GDP is a bargain.

What does it cost the government and the tax payers when hospitals and care givers have to compensate for people defaulting on their medical costs?

If anyone suggests that they will save money by not making healthcare and education an integral component of their civilization, they are short-sighted fools and should move to the U.S. and vote Trump. See where that gets you.

If England hopes to be a leader of anything following Brexit, they will find a way to spend more on healthcare and education, not less. Otherwise, they might as well reopen the workhouses and rent out spaces on floors where children can sleep at a price affordable on a street beggar's pay.

4
1

You may now kiss the server-side: Dell EMC marries storage software to PowerEdge 14Gs

CheesyTheClown

Management API?

Sounds like some nifty features, but what about a management API?

Can I configure and upgrade the UEFI, firmware, BIOSes, etc... all from a single API? Can we configure the SCSI controller and RAID settings via the standard OOB management interface? Can we configure PXE boot via the OOB management? Can we configure UEFI, SCSI, NVMe, NICs via a single OOB management system while the system is operating? Is there a standard API for configuring VNICs on different NICs from different vendors (as Dell still doesn't make their own... UGH) so that VLAN and VXLAN is supported? Is there support for 802.1ae (MACsec/LinkSec) during boot? Is there support for LLDP during boot? Can I use these APIs for configuring and managing HBAs and vHBAs?

Does the system have an API for configuring certificates? Can we configure IPv6 security (IPSEC) and SNMPv3 views?

Does the ordering and fulfillment systems for the servers provide the MAC addresses for automated provisioning of DHCP, 802.1x MAB and AD accounts? Is the OOB MAC available as a scannable barcode on the shipping box, palette BOM and physical machine itself?

Is there one or two OOB management Ethernet ports? Is there a plug and play API so that no configuration data is stored on device and can be centralized instead? Can I push appropriate configs for the system based on DHCP option 82?

Do any of the other features matter if we still have to manage these servers like it's 1999?

This is 2017, Cisco did a lot of this stuff with UCS Manager. That system is old and clunky, but it works (99% of the time). Why the hell is it you can't after nearly 10 years get a second vendor?

0
0

BA CEO blames messaging and networks for grounding

CheesyTheClown

How could this even happen?

I'm developing a system now that is small and not even mission critical, and it has redundancy and failover. Does anyone alive actually make anything anymore that doesn't?

1
0

Is it the beginning of the end for Visual Basic? Microsoft to focus on 'core scenarios'

CheesyTheClown

Re: Fickle Microsoft

haha I remember being the C programming king of high school, and then Windows came out, and even with all the help I could get from the Charles Petzold book (which I spent two weeks of grocery store wages on), I couldn't for the life of me figure out the API.

Of course, X11, Windows and Mac all have horrible low level APIs... but now I just code; language and environment don't really matter anymore. It's more about simply sitting down to type.

I nearly died laughing at the guy who said that simply changing the language made him go from senior developer to junior developer. I never met a senior-level developer who was senior because of how well he could use one particular tool or paradigm. I always considered the most versatile person to be senior, and people who speak like he did to be ready for promotion to janitorial staff.

0
0

SAP Anywhere goes nowhere, reaches commercial cul-de-sac

CheesyTheClown

Probably a weak pound issue

If they pen agreements with U.K. companies when the pound is weak, they will have to take what they can get since most U.K. companies don't want to pay prices that make sense on the sales sheet in dollars. US companies are forced to charge quite a bit more because of the very high cost of VAT in Europe and they'd probably have to charge U.K. companies less than their US counterparts.

In addition, I suspect that U.K. regulation is about to make a lot of rapid changes requiring a lot of coding to support it. The cost will be high. It's probably more cost effective to wait until the U.K. market stabilizes following Brexit to bother investing in that.

Of course it could simply be that Trump induced stupid has all their programmers and lawyers so busy that trying to keep up with American issues leaves no time to waste on U.K. stupid as well.

0
0

Hypervisor kid Jeff Ready: Converged to the core, and NO VMware

CheesyTheClown

Re: After what these guys did to their storage customers...

Dedupe on HCI is easy if you're not using VMware, as they don't properly support pNFS; they do a bastardized form of it called multipathing.

The solution to this is to sell your soul to VMware, get access to the NDDK and implement a native storage driver which can implement pNFS on its own. There's absolutely no value in doing this and no one should ever bother trying.

There's the alternative, which is to attempt to get iSCSI up and running in a scale-out environment. Due to limitations in vSwitch, this isn't an option, since multicast iSCSI isn't supported in VMware's initiator and anycast isn't viable in this case.

FC is out, if for no other reason than that FC is storage for people who still need mommy to wipe their bottoms for them. FC is so simple-stupid that a monkey can run an FC SAN (until they can't... but consultants are cheap, right?), and what makes it so simple-stupid is that FC doesn't support scale-out AT ALL, though MPIO could scale all the way to two controllers.

So, then there's the question of value. Where's the value in dedup on a VMware HCI platform? That's a tricky one: due to the nature of VMware's broken connectivity options for storage, you can't scale out the system connectivity to begin with. You also can't extend the VMware kernel to support it, because even with access to the NDDK, no one actually knows it well enough to program with it, and if you look at VMware's open source code for their NVMe driver on GitHub, you'll see that you probably don't want to use their code as a starting point. It's pretty good... kinda... but I'm tempted to write a few exploits for the rewards now.

Oh, then there's the insane cost and license problem behind the VAAI NAS SDK from VMware. I almost choked when they sent me a mail saying "$5000 and we basically can tell you what to do with your code"... for a 13-page document (guessing the size). So you can't even properly support NFS to begin with. And no, I would never ever ever agree to the terms of that contract they sent me, and there's even less chance I would consider paying $5000 for a document that should not even be required.

So, back to dedup... you can dedup... in HCI... no problem! The problem is, how can you possibly get VMware to actually use the dedup and replicated data?

Then there's Windows Server 2016 which ships with scale-out storage, networking and compute all on one disc and all designed from the ground up for.... scale-out.

There's OpenStack which works absolutely awesome on Gluster with scaleout and networking.

So, what you're saying, that "dedup on HCI is hard and slow", is absolutely not true. Dedup and scale-out on VMware is damn near impossible, but it's a stock component of all the other systems, and see a post I made earlier about slow. Slow is not a requirement. It just takes companies with real storage programmers, not just hacks that slap shit together using config files.

0
0
CheesyTheClown

Re: Seriously? Did he really said that? With a straight face?

Ok... because there are bad implementations of dedupe out there (lots of them... NetApp being among the worst I've seen), there will always be comments like this.

Let's talk a little about block storage. There are many different levels of lookup for blocks in a storage subsystem. If you look at a traditional VMware case, there are at least 6 translations, possibly up to 20, for each block access across a network. Adding FibreChannel in between aggravates the issue quite badly. It adds a lot of latency based on its 1978-era design (this is not an exaggeration; the SCSI protocol dates from 1978). There are many more problems which come into play as well.

Every block-oriented storage system which supports any form of resiliency through replication (which is no longer optional) has to perform hashing on every single block received. Those hashes must be stored in a database for data protection. For 512-4096 byte blocks, chances are a CRC-32 is suitable for data protection, and for deduplication with a "lazy write cache" it is also suitable. However, in the case of NetApp, for example, which is severely broken by design, everything is immediate and there's no special storage for lazy or scheduled dedup.

In a proper dedup system, a write to a block which has two or more references (even if the hash matches) will decrement the reference count and write a new block to high-performance storage (NVMe, for example) with a single reference. If there was only one reference, then the block is altered in place and the hash is updated.

Then dedup runs "off-peak", meaning (for example) that if the CPU is under 70% load, the new blocks stored on disk are compared 1:1 with other blocks with matching hashes, references are updated, and only a single copy of the data itself is maintained. In addition, during this phase it is possible to lazily compress blocks and to migrate blocks which are going stale to cold storage (even off-site) or, heaven forbid, FC SAN storage.

Dedup should have absolutely ZERO impact on performance when implemented by engineers who actually have half a brain.

The disadvantage to the system described above is that dedup won't be sexy at trade shows since it might take minutes, hours or more to see the return from the dedup operation.
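The write path and off-peak pass described above can be sketched in a few lines of Python. This is a hypothetical toy (the class and names are mine, not any vendor's): a real array keeps these maps persistent and hashes in hardware, but the reference-counting logic is the same.

```python
import zlib

class BlockStore:
    """Toy dedup store: copy-on-write for shared blocks, lazy merge pass."""

    def __init__(self):
        self.blocks = {}   # physical id -> block data
        self.refs = {}     # physical id -> reference count
        self.lmap = {}     # logical address -> physical id
        self.next_id = 0

    def write(self, addr, data):
        phys = self.lmap.get(addr)
        if phys is not None and self.refs[phys] > 1:
            # Shared block: drop our reference and copy out a fresh block,
            # even if the hash would have matched.
            self.refs[phys] -= 1
            phys = None
        if phys is None:
            phys = self.next_id
            self.next_id += 1
            self.refs[phys] = 1
        self.blocks[phys] = data         # single reference: update in place
        self.lmap[addr] = phys

    def dedup_pass(self):
        """Off-peak pass: group by CRC-32, confirm byte-for-byte, merge refs."""
        by_crc = {}
        for pid, data in self.blocks.items():
            by_crc.setdefault(zlib.crc32(data), []).append(pid)
        for pids in by_crc.values():
            keep = pids[0]
            for pid in pids[1:]:
                # CRC match only nominates a candidate; the 1:1 compare decides.
                if self.blocks[pid] == self.blocks[keep]:
                    for addr, p in self.lmap.items():
                        if p == pid:
                            self.lmap[addr] = keep
                    self.refs[keep] += self.refs.pop(pid)
                    del self.blocks[pid]
```

Because the foreground write path only ever touches a counter and a fresh block, the expensive comparison work all lands in the deferred pass, which is the whole point of the argument above.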

As for databases, if you're running mainstream SAN (EMC, Hitachi, 3Par, NetApp), you're absolutely right. You should avoid dedup as much as possible. None of those companies still employ the real brains behind their storage, and they haven't had decent algorithm designers on staff in years. They take a system which works and layer shit upon shit upon shit to sell it. There will be problems using any GOOD storage technology on those systems.

For databases and most modern instances, you should move away from block-storage-oriented systems and focus instead on file servers with proper networking involved. In this case, I would recommend a Gluster cluster (even if you have to run it as VMs) with pNFS, or Hyper-V with Windows Storage Spaces Direct. These days, most of the problems with latency and performance come from forcing too many translations between the guest VM and the physical disk. There's also the disgusting SCSI command queuing illness, which orders file read and write operations impressively stupidly, since NCQ at each point it's processed has no idea what the block structure of the actual disk is. pNFS and SMBv3 are far better suited to modern VM storage than FC and iSCSI can ever be.

That said, there are some scale-out iSCSI solutions which aren't absolutely awful. But scale-out is technically impossible to achieve over FC or NVMe.

P.S. - Dedup in my experience (I write file systems and design hard drive controllers for personal entertainment) shows consistently higher performance and lower latency than the alternative because of the simplicity involved in caching.

P.P.S. - I've been experimenting with technology which is better than dedup: it instruments guest VMs with a block cache that eliminates all zero-block reads and writes at the guest. It improves storage performance more than most other methods... sadly, VMware closes their APIs for storage development, so I have to depend on VMware thin volumes or FC in between to implement that technology.
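The zero-block idea in the P.P.S. is easy to sketch. This is a hypothetical shim of my own naming (in reality the metadata would have to be persistent and would live in the guest's block layer): reads of blocks with no real data return zeros without any backend I/O, and all-zero writes are just metadata updates.

```python
BLOCK_SIZE = 4096
ZERO = b"\x00" * BLOCK_SIZE

class ZeroAwareCache:
    """Hypothetical guest-side shim: zero blocks never reach the backend."""

    def __init__(self, backend_read, backend_write):
        self.backend_read = backend_read
        self.backend_write = backend_write
        self.nonzero = set()   # addresses known to hold real (non-zero) data

    def write(self, addr, data):
        if data == ZERO:
            # A zero write is pure metadata: no backend I/O at all.
            self.nonzero.discard(addr)
        else:
            self.nonzero.add(addr)
            self.backend_write(addr, data)

    def read(self, addr):
        if addr not in self.nonzero:
            return ZERO        # serviced entirely from the guest cache
        return self.backend_read(addr)
```

Every zeroed or never-touched block becomes a cache hit that costs no network round trip, which is where the performance win comes from.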

P.P.P.S. - I simply don't see this company doing anything special other than trying to define a new buzz term which is nothing new. Implementing code into the KVM kernel is the same as Microsoft implementing SMB3 into Hyper-V, it's just old hat.

0
0

Cisco goes 32 gigging with Fibre Channel and NVMe

CheesyTheClown

Ugh!

Let's all say this together

Fibrechannel doesn't scale!

MDS is an amazing product and I have used them many times in the past. But let's be honest, it doesn't scale. All-flash systems from NetApp, for example, have a maximum of 64 FC ports per HA pair (which is so antiquated it's not worth picking on here), and that means the total system bandwidth is about 8Tb/sec. Of course, HA pairs mean you have to design for the total failure of a single controller, which cuts that in half. Then consider that half of that bandwidth is upstream and the other half down: half connects drives to the system, the other half delivers bandwidth to the servers. So we're down to 16 reliable links per cluster. There also has to be synchronization between the two controllers in an HA pair, so let's cut that in half again if we don't want contention related to array coherency.

An NVMe drive consumes about 20Gb/sec of bandwidth, so that's a maximum capacity of 25 online drives in the array. Of course there can be many more drives, but you will never reach the bandwidth of more than 25 of them. Using scale-out it is possible to scale wider, but FC doesn't do scale-out and MPIO will crash and burn if you try. iSCSI can, though.

Now consider general performance. FC controllers are REALLY expensive. Dual-ported SAS drives are ridiculously expensive. Scaling out performance in a cluster of HA pairs would require millions in controllers and drives. And then, because of how limited you are on controllers (whether by cost or hard limits), the processing required for SAN operations would be insane. See, the best controllers from the best companies are still limited by processing for operations like hashing, deduplication, compression, etc. Let's assume you're using a single state-of-the-art FPGA from Intel or Xilinx. The internal memory performance and/or crossbar performance will bottleneck the system further, and using multiple chips will actually slow it down, since it would consume all the SerDes controllers just for chip interconnect at a speed 1/50th (or worse) of the internal macro ring-bus interconnects. If you do this in software instead, even the fastest CPUs couldn't hold a candle to the processing needed for a terabit of block data per second. The block-lookup database alone would kill Intel's best modern CPUs.

FC is wonderful and it's easy. Using tools like the Cisco MDS even makes it a true pleasure to work with. But as soon as you need performance, FC is a dog with fleas.

Does it really matter? Yes. When you can buy a 44-real-core, 88-vCPU blade with 1TB of RAM on weekly deals from server vendors, a rack with 16 blades will devastate any SAN and SAN fabric, making the blades completely wasted investments. Blades need local storage with 48-128 internal PCIe lanes dedicated to storage to be cost effective today. That means the average blade should have a minimum of 6x M.2 PCIe NVMe internally (NVMe IS NOT A NETWORK!!!!!!); then, for mass storage, additional internal SATA SSDs make sense. A blade should have AT LEAST 320Gb/sec of storage and RDMA bandwidth, and 960Gb/sec is more reasonable. As for mass storage, using an old crappy SAN is perfectly OK for cold storage.

Almost all poor data center performance today is because of SAN. 32Gb FC will drag these problems out for 5 more years. Even with vHBAs offloading VM storage, the cost of FC computationally is absolutely stupid expensive.

Let's add one final point which is that FC and SAN are the definition of stupid regarding container storage.

FC had its day and I loved it. Hell I made a fortune off of it. I dumped it because it is just a really really bad idea in 2017.

If you absolutely must have SAN, consider using iSCSI instead. It is theoretically far more scalable than FC because iSCSI uses TCP with sequence counters instead of "reliable paths" to deliver packets. By doing iSCSI over multicast (which works shockingly well), real scale-out can be achieved. Add storage replication over RDMA and you'll really rock it!

3
3

Microsoft's new hardware: eight x86 cores, 40 GPU cores

CheesyTheClown

Re: 4K? Meh

I had the orange... I was told it was called Amber. And it was supposed to be better than green but Eddie Murphy told me that his grandmother suckered him worse with burgers that were better than McDonalds.

What sucks is that SIMCGA almost never worked for me. But to be fair, Sierra was generally good about supporting HGC.

0
0
CheesyTheClown

Re: Project Scorpio?

$700 is excluding VAT. With VAT at 17%, that would come to about £819. Then consider the "you're in Europe" tax, which Apple is the worst about but Microsoft tries to suck at too. I'd guess £850-900.
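The arithmetic above, using the post's own assumed 17% rate (actual UK VAT differs; this just mirrors the comment's numbers):

```python
# Sticker price excluding VAT, per the post's figures.
list_price = 700
vat_rate = 0.17                          # the post's assumed VAT rate

with_vat = list_price * (1 + vat_rate)   # price after adding VAT
print(round(with_vat))                   # 819
```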

0
0

Elastifile delivers stretchy file software

CheesyTheClown

Built into Windows Server and Linux?

Why would you pay money for something already built into the operating system?

0
3

Google Cloud to offer support as a service: Is accidental IT provider the new Microsoft?

CheesyTheClown

Don't use google for the same reason you don't use AWS or IBM

If you choose to go cloud, you want a single solution that works in the cloud or out. Google, Amazon and others don't make the platform available to take back home. Sure, you can go IaaS, but do you really want IaaS anymore?

Never use a platform which has PaaS or SaaS lock-in; Google and AWS are permanent commitments. Once you're in, you can't get out again.

3
2

After 20 years of Visual Studio, Microsoft unfurls its 2017 edition

CheesyTheClown

Re: Getting better all the time

Maintaining projects other than your own is always a problem. But updating to a new IDE and tool chain is just a matter of course and is rarely a challenge. I've moved millions of lines of code from Turbo C++ to Microsoft C++ 7.0 to Visual C++ 1.2 through Visual Studio 2017. Code may require modifications, but with proper build management, it is quite easy to write code to run on 30 platforms without a single #ifdef.

I've been programming for Linux and Mac using Visual C++ since 1998. I used to write my own build tools, then I used qmake from Qt. Never really liked cmake since it was always hackish.

Now I code mostly C#, since I've learned to write code which can generate better machine code after JIT than C++ generally can; the JIT targets the local processor instead of a general class or generation of CPUs. Since MS open-sourced C# and .NET, it's truly amazing how tight you can write C#. It's not as optimized as JavaScript, but garbage-collected languages are typically substantially better at handling complex data structures than C or C++, unless you spend all your time coding deferred memory cleanup yourself.

3
0

Why did Nimble sell and why did HPE buy: We drill into $1.2bn biz deal

CheesyTheClown

Re: Cisco: Be Bold!

Cisco is dumping SAN, so why would they buy another one? Cisco is the only company that seems to be taking hyperconverged seriously... now if only they'd figure out that hyperconverged isn't a software SAN.

0
3
CheesyTheClown

And there goes Nimble

To be fair, over the past several software releases, Nimble has been dropping rapidly in quality. But still, they were probably the best option for SAN storage available.

No one shed a tear when Dell bought EMC, since EMC was already yesterday's crap and VMware was already falling apart.

But HPE buying Nimble is a disaster. They're probably already deciding how many people to lay off to "reduce redundancies", since there's a storage nerd here and a storage nerd there. They'll outsource support to India to a bunch of guys with a support script. As for marketing knowledge, no one at HPE will sell Nimble, since they barely understand 3Par and they only just figured that out.

I predict that Nimble will perform about as well under HPE as Aruba did... and frankly Aruba is pretty much dead now.

5
8

Sir Tim Berners-Lee refuses to be King Canute, approves DRM as Web standard

CheesyTheClown

Standard DRM = crack once use forever

This is a good thing. Imagine you buy a phone or a tablet and it reaches end of support. A device sold and marketed as capable of playing standard DRM content might end up blacklisted because someone else found a method of cracking DRM using that device. Since updates are no longer available, whoever blacklisted that device can be held liable and sued for their actions.

Consider that browser based DRM is simply not possible.

A pluggable module is code which requires standardization of an API. The API will be well understood and will not be restricted. So you write a small loader app and then, based on the entry points, issue your own keys, decode some of your own streams, and find out where the keys are held.

The DRM must be extremely lightweight, otherwise batteries will drain too quickly. One could write the DRM in JavaScript, which would be smartest, and with instruction-level vectorization part of WebAssembly, it could be quick. But it would consume far more power than a hardware solution. So DRM in code would have to be limited to rights management and providing decryption keys for AES or EC. And if the keys can be transferred at all, they can be cracked.

The media player pipelines in Mozilla and Chrome are well understood. The media player pipeline in Windows is designed to be hooked and debugged. There is absolutely no possible way to DRM video on Windows, Linux or Mac that can't be intercepted after decryption. As for Android, unless the DRM blacklists pretty much every Android device ever made, it can't work.

So... good luck trying... I actually buy all my films, but I decrypt them so I can still watch them even if the DRM dies. I lost tons of money buying audiobooks on iTunes which could only be downloaded once. I won't ever again buy media I can't decrypt. I'll join the race to see who can permanently crack the DRM fastest.

6
0



Biting the hand that feeds IT © 1998–2017