* Posts by CheesyTheClown

536 posts • joined 3 Jul 2009


Bill Gates says he'd do CTRL-ALT-DEL with one key if given the chance to go back through time

CheesyTheClown
Bronze badge

Antivaxxers?

Bill Gates is a brilliant man, but sometimes he pisses away time in the wrong way.

Consider ratios.

What's easier, his way or the antivaxxer way? Let's evaluate both.

Bill says that an African child is 100 times more likely to die from preventable diseases than an American.

Logistically, vaccinating and healing Africans is very difficult and nothing but an uphill battle.

The antivaxxers have already been increasing deaths in America related to mumps, measles and rubella. This is much easier, as all it takes is a former porn actress with the education correlating to said career choice campaigning on morning TV about how MMR vaccines can be dangerous and cause autism.

So instead of fighting like hell to vaccinate Africans... isn't it easier and cheaper just to let porn actresses talk on morning TV?

The results should in theory be the same... the ratio Bill mentioned will clearly shrink either way.

Of course, if his goal is to actually save lives as opposed to flipping a statistic, we might do better his way.

1
0

China reveals home-grown supercomputer chips after Intel x86 ban

CheesyTheClown
Bronze badge

Re: Interesting side effects of this development..

Let me toss in some ideas/facts :)

Windows NT was never x86/x64 only. It wasn't even originally developed on x86. Windows has been available for multiple architectures for the past 25 years. In fact, it supported multiple architectures long before any other single operating system did. In the old days, when BSD or System V were ported to a new architecture, they were renamed as something else, and generally there was a lot of drift between code bases due to hardware differences. The result was that UNIX programs were riddled silly with #ifdef statements.

The reason the other Windows architectures never really took off was that we couldn't afford them. DEC Alpha AXP, the closest to succeeding, cost thousands of dollars more than a PC... of course it was 10 times faster in some cases, but we simply couldn't afford it. Once Intel eventually conquered the challenge of running RAM and system buses at frequencies different from the internal CPU frequency, they were able to ship DEC Alpha-speed processors at x86 prices.

There was another big problem. There was no real Internet at the time. There was no remote desktop for Windows either. The result was that developers didn't have access to DEC Alpha machines to write code on. As such, we wrote code on x86 and said, "I wish I had an Alpha. If I had an Alpha, I'd make my program run on it." So instead of making a much cheaper DEC Alpha which could be used to seed small companies and independent developers, DEC decided to make an x86 emulator for Windows on AXP.

The emulator they made was too little, too late. The performance was surprisingly good, though, because they employed technology similar in design to Apple's later Rosetta. Dynamic recompilation is not terribly difficult if you think about it. Every modern program has fairly clear boundaries. It calls functions either in the kernel via system calls, which are easy to translate, or in other libraries which are loaded and linked via 2-5 functions (depending on how they are loaded). When the libraries are from Microsoft, they know exactly what the APIs are... and if there are compatibility problems between the system-level ABIs, they can easily be corrected. Some libraries can be instrumented with an API definition interface, though C programmers will generally reject the extra work involved and just port their code instead. And then there's the option that, if an API is unknown, the system can simply recompile the library as well... and keep doing this until the boundaries between the two architectures are known.
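
To make that concrete, here's a toy sketch of the translate-once-and-cache idea (Python, with invented names; a real recompiler emits host machine code rather than wrapping callables):

```python
# Toy sketch of dynamic recompilation's core trick: translate a guest code
# block on first use, cache the result, and reuse it on every later call.
translation_cache = {}  # guest address -> "native" code (here: a callable)

def translate_block(guest_addr, guest_code):
    # Stand-in for translation: a real recompiler emits host machine code.
    def native_block():
        return f"ran translated block {guest_addr:#x}: {guest_code}"
    return native_block

def execute(guest_addr, guest_code):
    # Hot code pays the translation cost exactly once.
    if guest_addr not in translation_cache:
        translation_cache[guest_addr] = translate_block(guest_addr, guest_code)
    return translation_cache[guest_addr]()

print(execute(0x401000, "mov eax, 1; ret"))  # miss: translate, then run
print(execute(0x401000, "mov eax, 1; ret"))  # hit: run the cached version
```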

Here's the problem. In 1996, everyone coded in C, and even if you were programming in C++, you were basically writing C in C++. It wasn't until around 1999, when Qt became popular, that C++ started being used properly. This was a problem because we were also making use of things like inline assembler, and we were bypassing normal system call interfaces to hack hardware access. There were tons of problems.

Oh... let's not forget that before Windows XP, about 95% of the Windows world ran Windows 3.1, 95, 98 or ME. As such, about 95% of all code was written on something other than Windows NT and used system interfaces which weren't compatible with it. This meant that programmers would have to at least install Windows NT or 2000 to port their code. That would have been great, but before Windows 2000 there weren't device drivers for... well, anything. Most of the time, you had to buy special hardware just to run Windows NT. Then consider that Microsoft Visual Studio didn't work nearly as well on Windows 2000 as it did on Windows ME, because most developers were targeting Windows ME and Microsoft therefore focused debugger development on ME instead.

So... running code emulated on Alpha did work AWESOME!!!!... if the code worked on Windows NT or Windows 2000 on x86 first. Sadly, there was no real infrastructure around Windows NT for a few more years.

That brings us to the point of this rant. Microsoft has quite publicly stated its intent to make an x86/x64 emulator for ARM, and has demoed it on stage as well. The technology is well known and well understood. I expect x86/x64 code to regularly run faster on the emulator than as native code, because most code is optimized for a generic architecture, while dynamic recompilers can optimize for the specific chip they are executing on and constantly improve the way the code is compiled as it's running. This is how things like JavaScript can be faster than hand-coded assembly: it adapts to the running system appropriately. In fact, Microsoft should require native code on x64 to run the same way... it would be amazing.

So, the emulator should handle about 90% software compatibility. Not more. For example, I've written code regularly which makes use of special "half-documented" APIs from Microsoft listed as "use at your own risk" since I needed to run code in the kernel space instead of user space as I needed better control over the system scheduler to achieve more real-time results. That code will never run in an emulator. Though nearly everything else will.

Then there's the major programming paradigm shift which has occurred. The number of people coding in system languages like C, C++ and assembler has dropped considerably. On Linux, people code in languages like Python where possible. It's slow as shit, but works well enough. With advances in Python compiler technology, it's actually not even too pathetically slow anymore. On Windows, people program in .NET. You'd be pretty stupid not to in most cases. We don't really care about the portability. What's important is that the .NET libraries are frigging beautiful compared to legacy coding techniques. We don't need things like Qt, and we don't have to diddle with horrible things like the standard C++ library, which was designed by blind monkeys more excited about using every feature of the language than actually writing software.

The benefit of this is that .NET code runs unchanged on other architectures such as ARM or MIPS. Code optimized on x86 will remain optimized on ARM. It also gets the benefits of JavaScript-like dynamic compiler technology, since they are basically the same thing.

Linux really never had much in the way of hardware-independent applications. Linux still has a stupidly large amount of code being written in C when it's simply the wrong tool for the job. Linux has the biggest toolbox on the planet, and the Linux world still treats C as if it's a hammer and every single problem looks like a nail. Application development should never ever ever be done in system-level languages anymore. It's slower... really, it is... C and C++ make slower code for applications than JavaScript or C#. Having to compile source code on each platform for an application is horrifying. Even having to consider the structure of the ABI at all is terrifying.

Linux applications have slowly gotten better since people started using Python and C# to write them. Now developers are more focused on function and quality as opposed to untangling #ifdefs and makefiles.

Now... let's talk supercomputing. This is probably not what you think it is. The CPU has never really meant much on supercomputers. The first thing to understand is that programmers write code in a high-level language which has absolutely no redeeming traits from a computer science perspective. For example, they can use Matlab, Mathematica, Octave, Scilab... many other languages. The code they write will generally be formulas containing complex math designed to work on gigantic flat datasets lacking any structure at all. They could also use simulation systems which generate this kind of code in the background... it's irrelevant. The code is then distributed to tens of thousands of cores by a task scheduler. Often, the distributed code is compiled locally for the local system, which could be any processor from any architecture. Then, using message passing, different tasks are executed and the results are collected back to a system which sorts through them.
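
In miniature, that scatter/compute/gather pattern looks something like the following sketch (Python's multiprocessing pool standing in for a real scheduler and message-passing layer):

```python
# Minimal sketch of the scatter/compute/gather pattern: split a flat
# dataset into chunks, run the same kernel on every chunk in parallel,
# then reduce the partial results.
from multiprocessing import Pool

def kernel(chunk):
    # Stand-in for the "complex math on a giant flat dataset" each node runs.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    dataset = list(range(1_000_000))
    chunks = [dataset[i::8] for i in range(8)]   # scatter into 8 tasks
    with Pool(processes=8) as pool:
        partials = pool.map(kernel, chunks)      # distribute and execute
    print(sum(partials))                         # gather and reduce
```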

It never really mattered what operating system or platform a supercomputer runs on. In fact, I think you'd find that nearly 90% of all tasks which will run on this beast of a machine would run faster on a quad-SLI PC under a desk, given code written with far less complexity. I've worked on genetic sequencing code for a prestigious university in England... very fancy math... very cool algorithm. It was sucking up 1.5 megawatts of power 24/7 crunching out genomes on a big fat supercomputer. The lab was looking for a bigger budget so they could expand to 3 megawatts for their research.

I spent about 3 days just untangling their code... removing stupid things which made no sense at all... reducing things to be done locally instead of distributed when it would take less time to calculate it than delegate it... etc...

The result was 9 million times better performance. What used to require a 1.5 megawatt computer could now run on a laptop with an nVidia GPU... and do it considerably faster. Sadly... my optimizations were not super computer friendly, so they ended up selling the computer for pennies on the dollar to another research project.

People get super excited about super computers. They are almost always misused. They almost always are utterly wasted resources. It's a case of "Well I have a super computer. It doesn't work unless I message pass... so let me write the absolutely worst code EVER!!!! and then let's completely say who gives a fuck about data structure and let's just make that baby work!!!!"

There are rare exceptions to this... but I'd bet that most supercomputer applications could have been done far better if labs bought programmers hours instead of super computer hours.

0
0

Compsci degrees aren't returning on investment for coders – research

CheesyTheClown
Bronze badge

Re: Peak Code Monkey

It is true that compsci is generally a cannonball which is often applied where a fly swatter is better suited. If you're making web pages for a site with 200 unique visitors a day, compsci has little to offer. If you're coding the home page of Amazon or eBay, compsci is critical. One inefficient algorithm can cost millions in hardware and power costs.

Product development... for example, when a developer at Google working on Chrome chooses a linked list where a balanced tree would be better, the impact is measured in stock markets, because faster processors and possibly more memory would be needed on hundreds of millions of PCs. Exabytes of storage would be consumed. Landfills get filled with replaced parts. Power grids get loaded. Millions of barrels of crude are burned, shipping prices increase, etc...
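
A rough illustration of that container-choice point (a Python sketch; the numbers are arbitrary): a lookup that scans a list is O(n), while a hashed or tree-backed container answers in near-constant or logarithmic time.

```python
# A list scan is O(n) per lookup; a hashed container is ~O(1).
import timeit

n = 100_000
as_list = list(range(n))   # linked-list-like: scan to find a member
as_set = set(as_list)      # hashed: jump straight to the member

print(timeit.timeit(lambda: (n - 1) in as_list, number=100))  # slow: scans
print(timeit.timeit(lambda: (n - 1) in as_set, number=100))   # fast: hashes
```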

What is written above may sound like an exaggeration, but a telephone which loses an hour of battery life because of bad code may consume another watt-hour per phone per day. Consider that scaled to a billion devices running that software each day. A badly placed if statement which configures a video encoder to perform rectangular vs. diamond-pattern motion search could affect 50-100 million users each day.

Consider the cost of a CPU bug.... if Intel or ARM are forced to issue a firmware patch for a multiplication bug, rerouting the function from an optimized pyramid multiplier to a stacked 9-bit multiplier core located in on-chip FPGA will increase power consumption by 1-5 watts on a billion or more devices.

Some of these problems are measured in gigawatts or terawatts of load on power grids driving up commodity prices in markets spanning from power to food.
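
The back-of-the-envelope version of that scaling argument, with made-up but clearly labeled numbers:

```python
# A small per-device inefficiency multiplied across a large installed base.
devices = 1_000_000_000        # assumed installed base
extra_wh_per_day = 1.0         # assumed waste per device per day (Wh)

extra_gwh_per_day = devices * extra_wh_per_day / 1e9
print(f"{extra_gwh_per_day:.0f} GWh per day")          # 1 GWh/day
print(f"{extra_gwh_per_day * 365:.0f} GWh per year")   # ~365 GWh/year
```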

So... you're right. Compsci isn't so important in most programmer jobs. But in others, the repercussions can be globally disastrous.

5
0

More data lost or stolen in first half of 2017 than the whole of last year

CheesyTheClown
Bronze badge

You mean more detected loss?

Call me an asshole for playing the causality card here.

Did we lose more data or did we manage to detect more data loss?

2
0

Everyone loves programming in Python! You disagree? But it's the fastest growing, says Stack Overflow

CheesyTheClown
Bronze badge

While I have only ever used Python on the rare occasions where it's all I had available (labs on VPN-connected systems), and I honestly have little love for the language-of-the-month, I don't necessarily agree.

I have seen some great Python code written by great programmers... once in a very rare while. In some cases, this is true of the OpenStack project.

On the other hand, Python gains most of its power from community-contributed modules. As such, it, like Perl, PHP and Ruby before it, has libraries for nearly everything. Unfortunately, most are implemented by "newbie programmers" building the bits and bobs they need.

This results in about a million absolutely horrifying modules. We see the same happening to Node as well. Consider that Node probably has 40 different libraries in NPM simply for making a synchronous REST call. This makes the platform unusable in production code. When a language has a repository of so many poorly written modules that it is no longer possible to sort through them to find one that works, it becomes almost unusable.

See C++... I use Qt because it provides a high-quality set of classes for everything from graphics to collections. The standard C++ library and, heaven forbid, Boost are such a wreck that they have rendered C++ all but unusable.

See Java, where even good intentions went horribly wrong. Java on the desktop was absolutely unusable for apps because there were simply too many reboots of the GUI toolkit. AWT was so bad that IBM OTI made SWT, Sun made a bigger mess trying to reboot their dominance with Swing, and Google made their own... well, let's just say that it didn't work.

I can go on... there should always be a beginner language for people to learn on and then eventually trash. Python is great for now. Maybe there will be something better later. Learning languages should be playgrounds where inexperienced developers can sow their wild oats before moving on. Cisco, for example, pushes Python and Ansible to network engineers who learn to code in 8 hours. Imagine if every network engineer or VMware engineer were to start destroying other languages? That would be a million people who have never read a programming book trashing those other languages' ecosystems.

20
0

Huawei developing NVMe over IP SSD

CheesyTheClown
Bronze badge

Nope... block access is file/database access

No storage subsystem (unless it's designed by someone truly stupid) stores blocks as blocks anymore. It stores records to blocks, which may or may not be compressed. The compressed, referenced blocks are stored in files. Those files may be preallocated into somewhat sector-aligned pools of blocks, but it would be fantastically stupid to store blocks as blocks.
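
The general shape of "blocks are really records in files", as a minimal sketch (Python; the file name and on-disk layout are invented for illustration):

```python
# Logical blocks are compressed, appended to a pool file as records,
# and found again via an index: block-in, record-out.
import zlib

pool = open("blockpool.dat", "w+b")
index = {}  # logical block number -> (offset, compressed length)

def write_block(lbn, data: bytes):
    payload = zlib.compress(data)
    pool.seek(0, 2)                        # append to end of pool file
    index[lbn] = (pool.tell(), len(payload))
    pool.write(payload)

def read_block(lbn) -> bytes:
    offset, length = index[lbn]
    pool.seek(offset)
    return zlib.decompress(pool.read(length))

write_block(7, b"x" * 4096)                # a highly compressible "block"
assert read_block(7) == b"x" * 4096
```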

As such, NVMe is being used as a line protocol, and instead of being passed through to a drive, it's being processed (probably in software) at fantastically low speeds which even SCSI protocols could easily saturate.

There will be no advantage in extended addressing, since FCoE and iSCSI already supported near-infinite addressing to begin with. There will be no advantage in features, as NVMe would have to issue commands almost identically to SCSI. There will be no advantage in software support, because drivers took care of that anyway... or at least any system with NVMe support can do pluggable drivers. Those which can't will have to translate SCSI to NVMe.

They should have simply created a new block protocol designed to scale properly across fabrics, without any stupid buffering issues that would require super-stupid solutions like MMIO, and implemented the drivers themselves.

Someone will be dumb enough to pay for it

0
0
CheesyTheClown
Bronze badge

What the!?!?!?

What is the advantage of perpetuating protocols optimized for system-board-to-storage access as fabric or network access?

Bare metal systems may, under special circumstances, benefit from traditional block storage simulated by a controller. It allows remote access and centralized storage for booting systems. This can be pathetically slow, and as long as there is a UEFI module or Int13h BIOS extension, there is absolutely no reason why either SCSI or NVMe should be used. Higher latencies introduced by cable lengths and centralized controllers make their use dependent on unusual extensions to SCSI or NVMe which are less-than-perfect fits for what they are being used for. A simple encrypted drive emulation in hardware that supports device enumeration, capability enumeration, read block(s) and write block(s) is all that is needed for a network protocol for remote block device access. With little extra effort, the rest can be done with a well-written device driver and BIOS/UEFI support that can be native (as is more common today) or via a flash chip added to a network controller. Another option is to put the loader onto an SD card as part of GRUB, for instance.
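
To show how small such a protocol could be, here's a hypothetical wire format for exactly those operations (a Python sketch; the opcodes and field sizes are invented):

```python
# Invented wire format for a minimal remote-block protocol:
# enumerate, read block(s), write block(s).
import struct

OP_ENUMERATE, OP_READ, OP_WRITE = 1, 2, 3
HEADER = struct.Struct("!BQI")   # opcode, starting block, block count

def encode_read(start_block, count):
    return HEADER.pack(OP_READ, start_block, count)

def decode(msg):
    opcode, start, count = HEADER.unpack_from(msg)
    return opcode, start, count

wire = encode_read(123456, 8)
assert decode(wire) == (OP_READ, 123456, 8)
```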

The only reason block storage is needed for a modern bare metal server is to boot the system. We no longer compensate for lack of RAM with swapping, as the performance penalty is too high and the cost of RAM is so low. In fact, swapping to disk over fabric is so slow that it can be devastating.

As for virtual machines: they make use of drivers which translate SCSI, NVMe or ATA protocols (in poorly designed environments) or implement paravirtualization (in better environments), which translates block operations into read and write requests within a virtualization storage system which can be VMFS-based, VHDX-based, etc... This is then translated back into block calls relative to the centralized storage system, where they are translated back to block numbers, cross-referenced against a database, and then translated back again to local native block calls (possibly with an additional file system or deduplication hash database) in between. Blocks are then read from native devices in different places (hot, cold, etc...) and the translation game begins in reverse.

NVMe and SCSI are great systems for accessing local storage. But using them in a centralized manner is slow, inefficient and, in the case of NVMe... insanely wasteful.

Instead, implement device drivers for VMware, Windows Server, Linux, etc... which provide the same functionality while eliminating the insane overhead and inefficiency of SCSI or NVMe over the cable, and focus instead on things like security, decentralized hashing, etc...

Please please please stop perpetuating the "storage stupid" which is what this is and focus on making high performance file servers which are far better suited to the task.

0
0

Tintri havin' it large with all-flash EC6000 boxen

CheesyTheClown
Bronze badge

320,000 IOPS?

Hmmm... so, an average of 90,000 IOPS is expected from a consumer-grade SSD today. In a subsystem with compression and dedup running on a common file server with a semi-intelligent file system, on two-way mirrored data, the raw read performance of the disks should be about 180,000. Considering dedup and compression, it should be 5-10 times that. Three-way and four-way mirroring of hot data should increase performance further. With 10-way mirroring, 900,000 should be realistic. Add RAM caching for reads, and even more should be possible.
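
The arithmetic behind that estimate, using the figures above (every one of them an assumption):

```python
# Per-drive read IOPS, scaled by mirror copies and an effective
# dedup/compression multiplier. All inputs are assumptions from the comment.
drive_iops = 90_000    # assumed consumer SSD read IOPS
mirrors = 2            # two-way mirrored data: read from both copies
dedup_factor = 5       # low end of the claimed 5-10x effective gain

raw_reads = drive_iops * mirrors          # 180,000
effective = raw_reads * dedup_factor      # 900,000
print(raw_reads, effective)
```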

In 4U, there should never be a circumstance on an all-RAID platform where IOPS should be anywhere near as low as 320,000. Did you miss a zero in the article, or did they make a product designed for SCSI/NVMe-over-fabric protocols?

2
5

Networking vendors are good for free lunches, hopeless for networks

CheesyTheClown
Bronze badge

Re: That works for a simple network

Let me come to the table on this as a former developer of infrastructure networking equipment, from chip architecture up to routing protocols, who then fed his family for 5 years as a Cisco network engineer (quite successfully), and who now works as hard as he can to automate away as many low-level network consultants as possible.

Interior gateway protocols are long overdue for a refresh. The fact that we still run internal networks as opposed to internal fabrics is absolute proof that companies like Cisco, HP, Juniper, etc... are far out of touch with modern technology. The simple fact that we need IGPs is fundamentally wrong.

We depend on archaic standards like OSPF, IS-IS, EIGRP and RIP for networking and all four of these architectures are absolutely horrible and the only redeeming feature they have is that they're compatible with other vendors and old stuff. OSPFv3 with address family support is possibly the worst thing that ever happened to networking.

As for BGP: don't get carried away. BGP as a protocol will remain necessary, but its purpose is communicating between WANs. BGP is less a routing protocol than a dog pissing on a tree to inform the world who owns which IP addresses. BGP doesn't really route so much as force traffic in a general direction. There are multiple enterprise-grade open source BGP implementations out there, and there's no reason to make your internal network suck because you are concerned about BGP support.

Peering to the Internet requires edge devices which may or may not speak BGP.

When you design a modern network infrastructure, you can completely disregard inter-vendor operability and design a fabric instead. There are a few things you probably want to do. Instead of inventing new fiber standards, it would be profitable to depend on commercial SFPs. As for vendor codings, I spent a long time making different vendors' SFPs work with my hardware... those codings actually mean something.

So... consider this. Imagine building a network based mostly on a new design where the entire enterprise is a single fabric. By this, I mean that you have a single router for the entire enterprise. That router is made up of 10 to 100,000 boxes which all speak proprietary protocols, are engineered for simplicity and actually route traffic intelligently... without any switching.

You may think this is unrealistic or stupid, but it's really quite possible to do with far fewer transistors than you would use to support modern standards-based layer-2 and layer-3 switching. Eliminate the routing table from your network altogether and instead implement something similar to Cisco's fast-switching cache mechanism, with centralized databases for IP management.

Then to connect to the outside world, you simply buy a Cisco router or three and connect them at the edges.

I can say with confidence, after considerable thought on this topic (years), that there's absolutely no reason this couldn't be much simpler and cleaner than modern networking. Three-tier network design would still make sense... or at least spine-leaf... but any partial mesh with no single point of failure would work, without any silliness like "you need to aggregate your routes to keep your routing table small". We are long past the point where routing table lookups, which cost O(n) for n address bits (n at most 32), are a bottleneck. Then route learning via conversational characteristics would keep the per-interface FIB small.
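
A toy version of that conversational, cache-driven forwarding idea (Python; the centralized lookup is a hypothetical stand-in):

```python
# Forward from a per-interface flow cache; punt to a central IP-management
# database only on a miss, so the hot path never walks a routing table.
fib_cache = {}  # destination address -> egress interface

def central_lookup(dst):
    # Stand-in for the centralized IP-management database.
    return "uplink0"

def forward(dst):
    if dst not in fib_cache:            # first packet of a conversation
        fib_cache[dst] = central_lookup(dst)
    return fib_cache[dst]               # every later packet: O(1) cache hit

print(forward("10.1.2.3"))  # miss -> learn
print(forward("10.1.2.3"))  # hit
```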

So... let's be honest... a developer can see the problem of networking clearly ... especially if they know networking.

A network engineer starts by spouting about how things like BGP are really hard... fine... use it as a boundary and stop filling my network with that crap... buy someone else's box for that or run a Linux box to do it.

10
7

Oracle staff report big layoffs across Solaris, SPARC teams

CheesyTheClown
Bronze badge

This is where Cisco should be NOW

If Cisco weren't completely stupid, they would swoop in and make a public announcement that they will hire the entire group of laid-off staff at the pay they had when they were laid off.

At that point, if Cisco isn't entirely stupid, they will :

- start an enterprise ARM development team using the silicon developers.

- use the InfiniBand team to implement a proper storage stack supporting SMB Direct and iWARP, etc...

- use the operating system team to build a solid container platform for Docker. Cisco keeps trying to do something like this but without a good OS team to start with.

- use the ZFS team for... well, Cisco needs a legacy storage solution, so a ZFS/FCoE solution for VMware would be great. Do it on ARM as well, make a single-chip storage solution by embedding it within the Cisco VIC FPGA, and Cisco will make a fortune.

This might be the best opportunity Cisco has had in a long time. It's almost as if Larry decided to throw a big bunch of gold up into the air and let it land in about 2500 places around the world for Cisco to pick up.

It's a good thing if HP doesn't go picking it up; they wouldn't know what to do with an engineer if someone smacked Meg Whitman on the head with one.

Another alternative which would be amazing for this team... Nutanix should suck them up. Not in bits... the whole damn group. It's easier to hire them all and weed out what you don't need later. 2500 employees is expensive... but Cisco and Nutanix have both needed a team of developers like this for a while.

Also, if the teams are heavy on H1-Bs, I would just let them go. There is certainly no shortage of experts in what will be left when they're gone. If Oracle is anything like Cisco or most other companies, then that's about 30-50% of their development teams.

1
0

Don’t buy that Surface, plead Surface cloners

CheesyTheClown
Bronze badge

Re: Pretty sure this doesn't count as a surface alternative

I suppose this depends on your use case.

I was working on a project of 3 million+ lines of C++ code at the time. With XCode or Linux, the average compile time was 8 minutes. With Visual C++ it was about 17 seconds. This isn't because Windows is so much faster than Mac or Linux. It is because Visual C++ has the best precompiled header support of any compiler. Add that to an incremental linker and librarian, and there is no comparable product anywhere.

Had I used Mac OS X and XCode, it would have cost me close to a thousand hours of waiting a year, and my work days would have been 16 hours instead of 12.

Would you suggest that using Mac OS was an upgrade in that circumstance?

12
0
CheesyTheClown
Bronze badge

Pretty sure this doesn't count as a surface alternative

I have bought :

- A Samsung Series 7 Slate to be able to develop Windows 8 apps during the beta.

- Two Surface Pros

- A Surface RT

- A Surface Pro 2

- Two Surface Pro 3s (one Core i5/256GB because I had to wait a month longer to get the i7... then I bought the i7)

- A Surface Book

So... why did I buy these machines? They are the "Official Microsoft Development Computers" for Windows. This means, updated drivers, updated firmware, flawless debugger support (you'd be surprised how important that is), long term support... etc...

Before this, I would buy Macs, delete Mac OS X and install Windows instead.

I buy machines because the vendor invests in them for long-term support. Lenovo, Dell, HP... they have dozens of models of machines at a time. You know that as soon as the machines ship, their A-team moves to the next machine and the machine you buy is supported by the B or C team.

Never buy a phone or a computer from a company that offers too many options. This is because there is no possible way they can properly support a machine they really don't care about anymore because they're really only interested in building and selling the next model.

16
1

We experienced Windows Mixed Reality. Results: Well, mixed

CheesyTheClown
Bronze badge

Looking stupid?

I honestly can say that I spend most of my time looking stupid... it hasn't bothered me much so far. In fact, it's entirely possible these headsets will actually make me look better.

Oh, even if they make me look stupid, I really can't imagine that it would bother me much. I'm an engineer, and color coordination has always been a problem for me. So, I generally assume I always look stupid.

The real question is whether I can mount these in a Wookiee mask.

14
0

KVM plans big boosts to storage and nested virtualization

CheesyTheClown
Bronze badge

Re: for real?

On the desktop, I'm with you. VMware works really well in its desktop versions and is hassle-free and painless.

For SMB, to be honest, I've set up VMware for years and dabbled with Hyper-V here and there, and until Windows Server 2016 I wasn't really happy. But with Hyper-V 2016, it is actually much easier now. Not only that, but on a single host, or 2 or 3 hosts, it's way easier than VMware today. Install Hyper-V (free of charge... no cost... period), set up basic Windows networking, set up a Windows share for storing virtual machines, set up a Windows share for storing ISOs. You're basically done.

Of course, if you want the good stuff (vCenter style), you can save a lot of money avoiding SCVMM and either buying ProHVM for less than an hour of your salary per host, or 5nine, which is REALLY REALLY AWESOME, but which I stopped recommending or buying once they removed the prices and purchase links from their website in favor of forcing me to talk to a sales person.

I will say that VMware is compatible... it's not easy. Truly... I use KVM in many environments and I use Hyper-V as well. I still work very often with VMware, and it's always funny how many problems it has which the others don't. But of course, it does run pretty much everything, and I REALLY LIKE THAT. So, if I'm playing with old operating systems on my laptop, VMware is the only way to go. If I have to get some work done, then Hyper-V or Ubuntu are the only options.

I recommend you check either of them out again. I think you'll find that if you invest one full day of your life in learning either one, you'll never be able to look at VMware again without laughing at how much of a relic it is.

1
0
CheesyTheClown
Bronze badge

Re: RLY?

Paravirtualization allows versions of Windows targeting paravirtualization to operate more like a container than a VM. Paravirtualization gives Windows some truly amazing features which allow it to have insanely higher VM density than if it were running on VMware.

For example, probably the most difficult process for a virtualized environment is memory management. The 386 introduced the design pattern we use today, which consists of a global descriptor table (which creates something like a file system for physical memory)... it allows each running program to think it has a single contiguous area of memory to operate in... though in reality the memory can be spread out all over the system. Then there's a local descriptor table which manages memory allocation per process. This is a sick (and semi-inaccurate) oversimplification of how memory works on a modern PC. But it gives you the idea.

When you virtualize, operating systems which don't understand they are being virtualized need to have simulated access to the GDT (which is a fixed area of RAM) and be able to program the system's memory management unit (MMU) through direct calls to update memory.

There's also the principle of port-mapped I/O on Intel CPUs vs. memory-mapped I/O. Memory mapping could always be faked by faking the memory locations provided to drivers. But port I/O couldn't be handled without intercepting all I/O instructions... sometimes by rewriting the executable code on the fly.

To make this happen, VMware used to recompile code on the fly to intercept I/O calls and MMU programming. Hyper-V Generation 1 does the same. With the advent of Second Level Address Translation (SLAT, which should actually be a second-level look-up table, the SLUT... but isn't), memory rewrites and dynamic recompilation of MMU code were no longer necessary. The CPU simply introduced a new translation table which works at a higher level than the GDT... or nests it.
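
A dict-based picture of what that second, nested level of translation adds (Python; the addresses are invented):

```python
# The guest translates virtual -> guest-physical with its own tables, and
# the hardware adds a second, host-controlled level from guest-physical ->
# host-physical, so no guest code needs rewriting.
guest_page_table = {0x1000: 0x2000}   # guest virtual -> guest physical
slat = {0x2000: 0x9000}               # guest physical -> host physical

def translate(guest_virtual):
    guest_physical = guest_page_table[guest_virtual]  # guest's own level
    return slat[guest_physical]                       # nested (second) level

print(hex(translate(0x1000)))  # 0x9000
```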

The I/O issue needed to be addressed. This started by making drivers for each operating system which would bypass the need for making I/O calls directly and it works pretty well except on some of the more difficult operating systems like Windows NT 3.1 or OS/2. I recently failed to launch Apple Yellow Box in Hyper-V because of this. VMware has always been amazing for making legacy stuff work because they are really focused on 100% compatibility even if it makes everything slower.

Microsoft with Windows and KVM with Linux took the alternative approach, which was 1000% better, which was to simply say, "We'll run whatever legacy we can run... but we focus on today and tomorrow. Let VMware diddle with yesterday." So Linux was modified to run as a user mode application and then, later, with Docker, was modified to run without Linux itself. Windows did kind of the same thing...

But Hyper-V did something really cool. Paravirtualization works on Windows by running the operating system... kind of as usual. But then it replaces the memory manager with one that doesn't absolutely require a SLAT. Instead, if it needs more memory, it asks the host operating system for more. So instead of wasting tons of memory guessing whether 4GB is enough or not... paravirtualization often gives Windows about 200MB, and if it needs more, it gets more. So paravirtualization typically makes Windows 20 or more times more memory-efficient when run this way. The trade-off is that the cross-boundary call (from guest to host) is more expensive and can have a negative impact on CPU performance. Also, there are more memory operations in general, so the system GDT is likely to be more active and possibly fragmented. I'm pretty sure MS will optimize this further in the future.

Then there were drivers. VMware has been heavily focused on bare metal, and paravirtualization goes the entire opposite route. Instead of trying to hardware-accelerate every VM operation, which can require billions more transistors and hundreds more watts per server, Microsoft focused on allowing guest operating systems to gain the benefits of the host OS drivers by removing the need for hard-partitioning device operations. So, where VMware would simulate or expose a PCIe device to the guest VM, Hyper-V gives the guest drivers which simply allow it to talk directly to (and play nicely with) the host devices.

For storage, this offers immense improvements in hundreds of different ways. With a Chelsio vNIC, or a Cisco one as runner-up, storage via SMB Direct or pNFS can reach speeds and performance-per-watt so incredibly far above what VMware offers that the environmental protection agencies of the world should sue VMware for gross negligence over their approach. We're talking intentional earth-killing.

For networking, the performance difference is almost equally huge, but once you virtualize storage intelligently (RDMA is the only way), networking becomes easier.

But back to paravirtualization. Here's an example. If you want to share the GPU between two VMs in a legacy/archaic system, you need a video adapter designed to split itself into a few hundred different PCIe devices (with SR-IOV, chances are a maximum of 255 devices... meaning no more than 255 devices per host... so no more than 255 VMs per host). Then you need specialized drivers designed to maintain communication with these PCIe devices and to allow the VMs to migrate from one host to another by doing fancy vMotion magic (cool stuff, really). This has severe cost repercussions... nVidia, for example, charges over $10,000 per host and requires a VERY EXPENSIVE (up to $10,000 or more) graphics card.

Paravirtualization simply makes it so that if the guest wants to create an OpenGL context, it asks the host for the context and the host provides it to the VM. The Hyper-V driver then forwards the API calls from the guest app to the host driver directly. This means you're still limited to however many graphics contexts the GPU or driver supports, but that's more than the alternative allows. VMware does this for 2D, but since VMware doesn't have its own host-level graphics subsystem for 3D, it depends on nVidia to gouge their customers. In Hyper-V it's free and works on $100 graphics cards.

Storage is HUGE... I can go for pages about the benefits for paravirtualization on storage.

So here's the thing. I assume that KVM will get full support for all the base features of paravirtualization. The design is simpler and better than making virtual PCI devices for everything. It's also just plain cleaner (look at Docker). In addition, I hope that they will manage to integrate the Hyper-V GPU bridge APIs by linking Wine to the paravirtualized driver there.

In truth... if you look at paravirtualization... it's actually the exact same thing which VirtIO does with Linux.

VMware has a little paravirtualization, but to make it work, they would probably need to stop making their own kernel and instead go back to Linux or Windows as a base OS to get it to work completely. They simply lack the developer community to do full paravirtualization.

And BTW.. Paravirtualization is the exact opposite of pervy and wrong. What you do to avoid paravirtualization is precisely the pervy and wrong thing.... but if that's what pervy and kinky is... I'm in. I love that kind of stuff. Paravirtualization is WAY BETTER but legacy virtualization is REALLY FUN if you happen to be an OS level developer.

6
0

Microsoft, Red Hat in cross-platform container and .Net cuddle

CheesyTheClown
Bronze badge

Re: No thanks

We started porting our system from .NET Framework 4.6 to .NET Core 2 yesterday. We figured there was no harm in making sure our apps work on Windows Server, Linux and, specifically, Raspberry Pi.

I think you see it as a battle of Microsoft vs. Not Microsoft. I developed on Linux for a decade and eventually, due to lack of alternatives, moved from Qt on Linux to C# and .NET on Windows because unless I was developing an OS (which I had been doing for a long time), I wanted a real programming language and a real alternative to Qt.

See, if you're making a web app, there are a thousand options. In fact, since I started typing this comment, two new options were probably added to GitHub. And most of them are just plain awful. Angular 4 with TypeScript is very nice, and so are a few others. But even the best ones are very much works in progress.

On the other hand, if you're programming a back end, there is .NET, Java, Node and Python.

I don't use Java because its entire infrastructure is outrageously heavy. Doing anything useful generally requires Tomcat, which is just a massive disaster in the making. It makes things easy, but what should use megabytes uses gigabytes, or takes thousands of hours to optimize.

I don't use Node because, while I think it may be the best option... and TypeScript is nice... all the libraries are written by people who seem to have no interest in uniformity, and as such, every module you use takes a month to code-review and document before use.

Python is the language of absolutely all languages and can do tens of thousands of things you never knew you wanted to do. But again, the community-contributed libraries lack almost any form of quality control. You're also locked into a single language, which means that when Python loses its excitement, everything will have to be rewritten. (See the mass migration from Python to Node... no transition path.)

Then there's .NET. C# isn't as versatile as Python (nothing is). But when you code against .NET Core, it runs pretty much anywhere. It has performance closer to Node than to Python. All code written for one language works with all languages on the platform. Documentation standards are high. The main infrastructure has already been reviewed for FIPS 140. There are clear support channels. Apps can be extremely light. Libraries are often optimized. It supports most modern development paradigms. It's completely open. The development team is responsive.

Basically, .NET scores better than average in every category except that it was made by Microsoft.

So... I appreciate that you don't like it. And I am glad people like you are out there using the alternatives which makes sure there are other choices for people like me. But for those of us with no real political bias towards tools, we are all pretty happy that .NET Core 2 has made cross platform realistic.

Maybe at some point your skills and knowledge will help me solve one of my problems and I look forward to thanking you. Yesterday, I sent dinner (pizza and cola) to a guy and his family in England for helping me on a Slack channel. :)

14
3

Wisconsin advances $3bn bribe incentives package for Foxconn

CheesyTheClown
Bronze badge

How do you beat cheap labor?

That's easy: with free labor. Foxconn would build the plant anywhere they could bypass environmental restrictions, and Wisconsin has already agreed to that. After all, flooding the fresh water supply with gallium arsenide instead of coal-related waste will poison 100 generations instead of 10.

Of course, Foxconn will abandon the factory in 5 years when the process changes enough that the clean room they build in Wisconsin is no longer viable. During that time, the government of Wisconsin will pay 100% of the wages. Of course all profits will be sent to China.

But by then Trump will either be out or reelected so he simply won't care anymore.

The only possible good things about this are that instead of paying welfare, we'll pay people $25 an hour to work in a basically toxic factory. Oh and it will lighten pollution related to shipping for a few years while bypassing any tariffs imposed on Chinese imports of LCD screens.

So... it's a big win all around.

Now, if Trump can pay Chinese companies to make toothbrushes in America again, we can at least know that if we fall out with China, we can still brush our teeth.

1
1

The future of Python: Concurrency devoured, Node.js next on menu

CheesyTheClown
Bronze badge

Multithreaded programming is easy, multithreaded coding is not.

A person with a sound understanding of multithreading and parallel processing should have absolutely no problem planning and implementing large-scale multithreaded systems. In fact, while async programming is super simple, it has many caveats which can be far more complicated to resolve than multithreaded code.

That said, if one is building a database web app, asynchronous coding is perfect. It's absolutely optimal for coders without a proper education in computer science.

Of course, async patterns can fail absolutely when there is more to application state than single-operation procedures. Locking becomes critical when two asynchronous operations alter state that impacts one another. At this point, we are left with the same problems as when threading is used. The good news, of course, is that the async paradigm generally offers additional utilities to assist with these specific scenarios.
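
Here's a small example of exactly that failure mode and its fix (Python asyncio; the bank-balance scenario is invented):

```python
# Two coroutines read-modify-write shared state across an await, so the
# update must be serialized with a lock.
import asyncio

balance = 100
lock = asyncio.Lock()

async def withdraw(amount):
    global balance
    async with lock:                 # without this, both reads can see 100
        current = balance
        await asyncio.sleep(0)       # yield point: another task may run here
        balance = current - amount

async def main():
    await asyncio.gather(withdraw(30), withdraw(30))
    print(balance)                   # 40 with the lock; possibly 70 without

asyncio.run(main())
```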

I use the async paradigm often, as it offers a poor man's solution to threading which can be quick and easy to maintain.

Back in 1991 (or so), Dr. Dobb's presented a nice approach to handling concurrency that more people should read. It's a crying shame they didn't just open-source all their articles when they shut down.

3
0

Place your bets: How long will 1TFLOPS HPE box last in space without proper rad hardening

CheesyTheClown
Bronze badge

Shouldn't they try a machine that's reliable on Earth first?

HPE has now owned SGI long enough that all their best engineers will have left, and the remaining ones will have been eliminated as redundancies. Therefore, all that's left is HPE engineers... who got rid of all their useful people throughout the dictatorship of the past 3 suits in charge.

I have some serious questions though.

1) If an HPE computer produces the wrong results due to random behavior... Is this considered a success or a failure?

2) If an HPE computer fails in space and support is needed, is the call routed through mission control first or does it go directly to India?

3) How will the cooling system impact the ISS? HPE, last I checked, only uses one model of fan, and it's REALLY REALLY loud... on purpose, because they think that if Ferraris sound faster because of how loud they are, then computers should too.

4) 56Gb/s interconnect? Wasn't this supposed to be a supercomputer? I buy old, used 56Gb/s InfiniBand equipment for pennies on the dollar these days. Supercomputers should be running 10 times that by now. Or is this the HPE version, where they sell yesterday's technology today?

0
2

Official: Windows for Workstations returns in Fall Creators Update

CheesyTheClown
Bronze badge

Re: 4 CPU's - That's a lot!

Windows Server is LTS, which means no Mail, Store, Ubuntu, etc...

This will be nice

5
1

No, Apple. A 4G Watch is a really bad idea

CheesyTheClown
Bronze badge

Step forwards

Many of us have learned to plan our lives better and not be as dependent on a watch.

If you check the time so often that the few seconds it takes to take the phone from your pocket is inconvenient, you aren't managing your time. Unless you are taking medication that must be precisely timed, you should easily be able to manage your schedule. A person makes a victim of themselves if they ever find themselves unable to schedule. If you have to be somewhere at 10am, leave with enough time to get there at 9:45. If you're running late because of circumstances beyond your control, call and apologize and inform the person "I have encountered unforeseen difficulties. I will be a little late." Then next time, account for additional delays so they don't believe it is habitual.

If you're stressed over time because of conflicts of work and daycare for example, cut back your hours, change day cares, change jobs, hire an au pair, hire a teenager near the day care, make an arrangement with a parent to take turns picking up and dropping off with, etc...

If a $500+ watch is in your budget, your life would be better if you spent the money to buy time instead.

Some people believe a successful person wears a fancy watch. Smart people know that success is learning to manage your life without one.

2
1

Mellanox SoCs it to NVMe over Fabrics with BlueField platform

CheesyTheClown
Bronze badge

FPGA, hashing and compression engine?

These chips are absolutely worthless in their current form. NVMe is great stuff for local connectivity, and adding RoCE to the mix is truly amazing. But without someplace to implement SMB Direct, pNFS or SRP packet parsing in hardware, the offering is meaningless. NVMe storage is super amazing and super fast. Over 16 PCIe 4.0 lanes, a theoretical 64GB per second of throughput can be expected.

We have a series of highly optimized, somewhat standard hashing algorithms designed as non-crypto hash functions which can be implemented with a minimal transistor count, such as Murmur and SipHash. A few hundred transistors (plus addressing and cache coherency logic, approx 1M transistors) can implement DMA-based hash functions capable of hashing the full 64GB/sec in real time, in maybe a watt or two of power, with latency measured in picoseconds.
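
For flavor, the 32-bit MurmurHash3 finalizer is nothing but shifts, XORs and two multiplies, which is why this class of hash maps so cheaply to hardware (a Python sketch of the published finalizer):

```python
# MurmurHash3's 32-bit finalizer: shift, XOR, multiply, repeat.
def fmix32(h: int) -> int:
    h ^= h >> 16
    h = (h * 0x85EBCA6B) & 0xFFFFFFFF
    h ^= h >> 13
    h = (h * 0xC2B2AE35) & 0xFFFFFFFF
    h ^= h >> 16
    return h

print(hex(fmix32(0xDEADBEEF)))
```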

We also have standardized block compression methods such as LZ77 derivatives that can be implemented to offer the same performance in a minimal transistor count.

Then the CPUs would mostly have to handle transaction logging, DMA scheduling and block management (allocation, auto-defragmentation, window sliding, garbage collection, etc...), so one or two ARM cores would be able to process hundreds of times more data with the acceleration engine... if these chips had:

1) An FPGA for packet header parsing, to identify hashable and compressible payload regions.

2) Cryptographic (block... stream can be done in software) functions for protecting traffic

3) hashing

4) block compression

For a bonus, a dedicated multi-port, capacitor-backed SRAM region for transaction logs covering hot storage regions would be REALLY nice. Especially with a dump-from-SRAM-to-flash-on-capacitor-discharge function.

This design is super nifty, but it looks like it was architected by someone who thought bandwidth on the cable was the problem.

To be honest, a single Intel quad-core Xeon + Arria FPGA would provide at least 10 times the bandwidth and storage capacity, with the exception of InfiniBand support, which is somewhat useless without InfiniBand arbiters, which are very expensive and are unnecessary with RoCE and DCBx.

Alternatively, using a Xilinx processor/FPGA would be great as well.

With either solution, short time to market is possible by parallelizing storage tasks in OpenCL. So, even if Mellanox managed to do FPGA, even with Mentor as a partner, they would probably be screwed.

Mellanox should team up with Lattice or Xilinx and develop a real storage core. CPU-based storage is too slow, and a bridge with a theoretical bandwidth of 64GB/sec is a total waste of money without the additional logic to manipulate the data.

P.S. Mellanox... I have been harsh, but pragmatic. I have now seen 40 press releases from storage vendors who are just f-ing up NVMe storage this year alone. You are by far the closest to getting it right. Now go find a real storage developer who actually understands the full stack. Ask them which parts of their code need the most optimization. Then, instead of dropping a half-assed generic ARM solution on them, have them build an NVMe stack, optimize the hell out of it, and add security via transaction log storage.

0
0

Cisco's server CTO says NVMe will shift from speed to capacity tier

CheesyTheClown
Bronze badge

Uh... UCS Azure Stack anyone?

Starship + UCS + NVMe + VMware + Nexus + Windows + Linux + Hyperflex storage etc...

This is a tub of rubbish delivered with CVDs which take weeks to months to deploy.

If you have a validated design and an automation platform, then you plug it in, answer some questions, and let it rip and it's done.

Or you can buy Cisco UCS with Azure Stack, turn it on, answer a few questions, and you're running in an hour without having to pay $60,000 a blade for licenses plus the Windows tax. Or you can install Ubuntu on a VM on a laptop, point it at a UCS, and get a full OpenStack up and running with containers and automation.

Come on guys... Microsoft and Ubuntu have nailed full data center automation, have app stores, and eliminate the need for server, storage or network guys in the data center. TCO on HyperFlex is close to $150,000 more per blade than on Azure Stack or OpenStack. Why the hell would anyone invest so heavily in VMware? It's great for legacy... but we already have legacy sorted. Run that, and as more services move to Azure Stack or OpenStack, shut down more legacy VMware blades.

0
0

UK.gov embraces Oracle's cloud: Pragmatism or defeatism?

CheesyTheClown
Bronze badge

Re: Cluebat required

Doesn't matter; under the terms of national security, Oracle will be required to provide access to all data stored on systems owned and/or operated by US companies without informing the owners of the data of the request. It's not supposed to be happening yet, but sooner or later the FBI, NSA, etc... will find a legal loophole that will make it happen.

2
0

Man facing $17.5m HPE fraud case has contempt sentence cut by Court of Appeal

CheesyTheClown
Bronze badge

Re: This used to be how commerce worked isn't it?

Sounds to me like the guy was a hell of a salesperson if he was selling servers at retail pricing. HP didn't have to cold call all the customers, and they probably saved millions on staff and red tape. Unless HPE actually lost money on the sales, it sounds like they screwed themselves.

0
1

Electric driverless cars could make petrol and diesel motors 'socially unacceptable'

CheesyTheClown
Bronze badge

Re: Trolley problem.

Consider connected autonomous vehicles.

Either special utility vehicles, nearby delivery vehicles or, worst case, nearby consumer vehicles could be algorithmically redirected to a runaway vehicle, match speeds and forcefully decelerate the out-of-control vehicle.

This would be wildly dangerous with human drivers, especially if they are not properly trained for such maneuvers. But by employing computer-controlled cars, it could be possible to achieve this 99 times out of 100 with little more than paint damage.

This doesn't solve a kid chasing a ball into the street without looking, but it can mitigate many issues related to systems failures.

I can already picture sitting in a taxi and hearing. "Please brace yourself, this vehicle has been commandeered for an EV collision avoidance operation. Your insurance company has been notified that the owner of the vehicle in need will cover the cost of any collision damage to this vehicle. Time to impact, 21.4 seconds. Have a nice day"

1
0

New Azure servers to pack Intel FPGAs as Microsoft ARM-lessly embraces Xeon

CheesyTheClown
Bronze badge

Not entirely true but mostly

Altera has been hard at work on reconfigurable FPGAs, which is exciting. Consider this: calculating hashes for storage is simply faster in electronics, as fast as the gate depth will allow. Regular expressions are faster, SSL is faster, etc...

The problem is that, classically, an FPGA had to be programmed all in one go. If Microsoft has optimized the workflow of uploading firmware and allocating MMIO space to FPGAs, and Altera has optimized synthesizing cores, and Intel has optimized data routing, then a web server could offload massive workloads to the FPGA. Software-defined storage can streamline dedup and compression, etc...

This made perfect sense.

0
0

Azure Stack's debut ends the easy ride for AWS, VMware and hyperconverged boxen

CheesyTheClown
Bronze badge

Re: A different battle

I have customers spread across government, finance, medical and even publishing that cannot use public cloud because of safe harbour. They all want cloud, not virtual machines, but end-to-end PaaS and SaaS. They couldn't have it, though, because one way or another you're violating data safe harbour laws or simply giving your data to China or India. This is a huge thing.

3
0
CheesyTheClown
Bronze badge

Re: Game Changer

Do you understand what this is? I'll guess not, but grats on first post before even performing the slightest research.

This is a cloud platform. It is not about launching 1980s-era tech on the latest and greatest systems to manage. It's about giving developers a platform to write applications which can be hosted in multiple places without the interference of IT people. It's about having an app store-type environment for delivering systems that scale from a few users to a few million users. It's about delivering a standard platform with standard installers, so there is no need for building virtualized '80s-style PCs that require IT people running around like idiots to maintain.

You can keep your VMs and SANs and switches. There are a lot of us who are already coding for this platform and can't wait for this to fly. Whether you like it or not, we're going to write software for this. Your boss will buy the software and either you can run it or you can find a new job :)

1
8

Trump's CNN tantrum could delay $85bn AT&T-Time Warner merger

CheesyTheClown
Bronze badge

Please clarify the original claim!

I read the whole article, which appears to be a tantrum about Trump. OK, fine, he's an idiot. The article makes lots of different points. What it doesn't do is connect Trump's idiocy with why the merger would be delayed. Is AT&T considering pulling out? Is there a conflict of interest that would let Trump lean on the parties to the merger and make them stop CNN from being mean to him and hurting his feelings?

I found the article entertaining and I certainly have no love for Trump, but...

What is the connection?

3
0

Analyst: DRAM crisis looms after screwup at Micron fab

CheesyTheClown
Bronze badge

At least they didn't burn another one down

Don't the DRAM and HDD price hikes almost universally come from burning stuff down?

Wouldn't it be better just to say, "We're cutting capacity to produce a shortage and force you to pay more"? Or is there an insurance scandal involved as well? After all, how likely is it that their insurance company has someone on staff qualified to assess damage to a semiconductor clean room?

"Here, look in this microscope. Do you see that blue spec, they're everywhere now and we have to throw away all our obsolete equipment and replace it with the next generation stuff that we need for the next die reduction".

0
0

PCs will get pricier and you're gonna like it, say Gartner market shamans

CheesyTheClown
Bronze badge

Re: Value for money?

I don't understand. Are you suggesting it's better to stick with spinning disks on laptops and desktops because you can't find a company that makes an SSD based on a stable technology?

Was there ever a moment in history when the hard drive business wasn't like that? Have you ever complained that different vendors suspend their magnetic heads over the platters in different ways? Have you ever complained that the boot code on a Western Digital drive wasn't the same as on a Seagate?

Are you worried that there is something magically different on a SATA cable when using an SSD as opposed to spinning disks?

Are you worried you have to choose between mSATA, M.2 and PCIe? You don't.

2
0
CheesyTheClown
Bronze badge

Re: Value for money?

Semi-decent PCs? Without SSD and without decent screen resolution?

This is 2017. Semi-decent is a Core i5, 16GB RAM, a 500GB SSD and a screen at least 2500 pixels wide. Decent is a Core i7, 16GB, 1TB, 3000x2000 and nVidia graphics.

I bought that 2 years ago and have absolutely no inclination to upgrade. Microsoft can get a few extra bucks from me if they sell an upgraded performance base, but at about $750 a year per employee for their PC, it's a cheap option. I'll consider a new one in 2 more years if they come with at least double the specs and a minimum 2GB/sec SSD read speed. Otherwise, it will be a $600-a-year PC, then $500.

Employers obviously have to consider the cost of buying dozens, hundreds or thousands of PCs, and leasing with an option to buy makes sense if the CapEx is scary. But spending less than $2000 on a PC can be very expensive; there are very few good machines on the market that cheap.

1
0

Cisco automation code needs manual patch

CheesyTheClown
Bronze badge

This is very common in Cisco products

Cisco is great at making products on top of Linux and Apache tools, but they are utterly useless at securing Linux and Apache tools. Currently dozens (maybe more) of Linux kernel exploits work against ISE, since Cisco doesn't allow RHEL updates to be configured on those boxes. As a result, they are very often very vulnerable. They are also wide open to Tomcat attack vectors because the version of Tomcat running on ISE is ancient and unpatched.

As for root passwords... install ISE on KVM twice (or more) and mount the qcow images on a Linux box afterwards. You'll find that the root password is the same on all those images. While ssh access as root appears to be disabled, there are a few other accounts with the same issue.

I don't even want to talk about Prime. It's a disaster in this regard.

Surprisingly enough, APIC-EM for now seems OK, but that's because about 90% of the APIC-EM platform is a Linux container host called Grapevine. I think the people who worked on that were somewhat more competent (I believe they're mostly European, not the usual 50 programmers/indentured servants for the price of 1 that Cisco typically uses).

I haven't started hacking on IOS-XE... I actually don't look for these bugs. I just write a lot of code against Cisco systems and it seems like every 5 minutes there's another security disaster waiting to happen. They have asked me to help them resolve them but it would require hundreds of hours of my time to file bug reports and I can't waste work hours on solving their problems for them.

Oh... if you're thinking "oh my god, I have to dump Cisco", don't bother. The only boxes I would currently trust for security are white box, and unless you know how to assemble a team of system-level software and hardware engineers (no... that really smart guy you know from college doesn't count), you should steer clear of those. The companies who use white box successfully are the same ones who designed them.

Cisco, you need a bug bounty program. Even if I could make $100 for each bug I stumble on, I would invest the half hour or hour it takes to write a meaningful bug report. Then you could fix this stuff before it ends up a headline.

5
0

If 282-page doc on new NVMe drive spec is tl;dr, you're in luck

CheesyTheClown
Bronze badge

Re: It's a standard for disk drives using Non Volatile Memory.

Intelligent protocols for multi-client access? If only there were some standard method of providing storage access in a flexible manner with encryption support, variable-length reads and writes, prioritized queuing, random access, error handling and a high-performance package for non-uniform access over ranged media. Oh, let's not forget vendor-specific and de facto standard enhancements as well as feature negotiation. Now imagine the ability to scale out, scale up, work with newer physical transports, support memory-mapped I/O access and even nifty features like snapshots, checkpoints, transaction processing, deduplication, compression, multicast and more. Imagine such a technology with no practical limitations on bandwidth, support for multiple methods of addressing and full industry support from absolutely everyone except VMware. Then consider hardware acceleration support and routability over WAN without any special reliability requirements.

Oh wait... there is. It's called SMB and SMB Direct depending on whether MMIO matters.

This is 2017. No one wants direct drive access over fabric unless they are simply stupid. Block storage over network/fabric in 2017 is so impressively stupid. It requires too many translations and inflexible file systems like VMFS, demands specialized arbiters in the path, and is extremely inefficient, an order of magnitude worse once you introduce replication, compression and deduplication.

The only selling point for block based storage over distance is unintelligent and unskilled staff. The only place where physical device connectivity protocols (SCSI and NVMe) should be used is when you want to connect drives to computers that will then handle file protocols.

BTW, GlusterFS and pNFS are good too.

0
0

One-third of Brit IT projects on track to fail

CheesyTheClown
Bronze badge

So 60%+ are expected to succeed?

That's really not bad.

Consider that most IT projects are specced on a scale too large to achieve.

Consider that most IT projects are approved by people without the knowledge to select contractors based on criteria other than promised schedules and lowest bids.

Consider that most IT people lack enough business knowledge to prioritize business needs and that most business people lack the IT experience to specify meaningful requirements.

To be fair, 60% success is an amazing number.

Now also consider that most IT projects would do better if companies stopped running IT projects and instead made use of turn-key solutions.

How much better are the odds of success when IT solutions are delivered by firms with a specific focus on the vertical markets they are delivering to?

0
0

Heaps of Windows 10 internal builds, private source code leak online

CheesyTheClown
Bronze badge

Re: I'm done with Windows.

Windows 10 Serial driver (C code, based on the same code you've seen... still works) : https://github.com/Microsoft/Windows-driver-samples/tree/master/serial/serial

Windows 10 Virtual Serial driver (C++ code, based on the new SDK with memory safety considered) : https://github.com/Microsoft/Windows-driver-samples/tree/master/serial/VirtualSerial

Mac OS X Serial Driver (C++ code... runs in user mode) : https://opensource.apple.com/source/IOSerialFamily/IOSerialFamily-91/IOSerialFamily.kmodproj/

Using a domain-specific language for a kernel, one could implement the core kernel code in an "unsafe mode" and then implement the drivers, file systems, etc... in a "safe mode" dialect, meaning memory references instead of pointers (see C11, which makes moves in this direction... but refuses to break with tradition, doing it as library changes instead of a language feature).

In reality, this is 2017, and if your OS kernel still has a strict language dependence for things like file systems and device drivers, you probably aren't doing it right. These days most of that code should be user mode anyway. And no, user/kernel mode discussions stopped making sense when we started using containers and Intel and AMD started shipping 12+ core consumer CPUs.

0
0
CheesyTheClown
Bronze badge

Re: I'm done with Windows.

Ohhh... I'm glad I came back here.

C is a great language and it's extremely versatile. It's absolutely horrifying for something like the Linux kernel though. Consider this: it has no meaningful standard set of libraries, which means that support for things like collections and passing collections is a nightmare. Sure, you have things like rbtree.[hc] in the kernel, but as anyone who has studied algorithms knows, there is no single algorithm that suits everything.

Let's talk about bounds, stacks, etc... there's absolutely no reason you can't enhance the C compiler to support more memory protection as well. C itself is a very primitive language and it's great for writing the boot code and code which does not need to alter data structures. But there are severe shortcomings in C. Yes, it's 100% possible to add millions of additional lines of repetitive and uninteresting code to implement all those protection checks. But a simple language extension could do a lot more.

Let's talk about where I find nearly all of the exploits in the kernel: error handling and return values. It's amazing how you can cause problems with most code written at two different times by the same person, or by two different people. The reason for this is that there's no meaningful way to handle complex error conditions. Almost all code depends on returning a negative value which is supposed to mean everything. The solution is to return a data structure which is basically a stack of results and error information, and then handle it properly. The reason this isn't done is that people get really upset when anything resembling exceptions is implemented in C. And yet nearly every exploit I've found wouldn't have been there if someone had implemented try/catch/finally.
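As a rough illustration of that result-plus-error-stack idea (the types and names here are my own invention, not anything from the kernel):

/* Sketch: a return type carrying a value plus a stack of error frames,
   so callers see the whole failure chain instead of a bare -EIO. */
#include <stdio.h>

#define MAX_FRAMES 8

typedef struct {
    int ok;                          /* 1 on success, 0 on failure */
    long value;                      /* meaningful only when ok */
    int depth;                       /* number of recorded frames */
    const char *frames[MAX_FRAMES];  /* innermost error first */
} result_t;

static result_t ok_result(long v)
{
    result_t r = { 1, v, 0, { 0 } };
    return r;
}

static result_t fail(result_t inner, const char *msg)
{
    inner.ok = 0;
    if (inner.depth < MAX_FRAMES)
        inner.frames[inner.depth++] = msg;
    return inner;
}

static result_t read_block(int simulate_fault)
{
    result_t r = ok_result(42);
    if (simulate_fault)
        return fail(r, "read_block: I/O error on device");
    return r;
}

static result_t load_inode(int simulate_fault)
{
    result_t r = read_block(simulate_fault);
    if (!r.ok)
        return fail(r, "load_inode: could not fetch inode table");
    return r;
}

int main(void)
{
    result_t r = load_inode(1);
    if (!r.ok) {
        for (int i = r.depth - 1; i >= 0; i--)   /* outermost first */
            fprintf(stderr, "%s\n", r.frames[i]);
        return 1;
    }
    printf("value: %ld\n", r.value);
    return 0;
}

Note how both call sites contribute context to the failure; a compiler that enforced checking ok before touching value would finish the job.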

Let's talk about data structure leaking and cleanup related to the above. Better yet, let's not... pretty sure that one sentence was enough to cover it all.

This is 2017, not 1969. In 2017, we have language development tools and technologies that allow us to make compilers in a day. This isn't K&R sitting around inventing the table-based lexical analyzer. Sticking with the C language instead of creating a proper compiler designed specifically for the implementation of the Linux kernel is just plain stupid.

More importantly, there's absolutely no reason you have to use a standardized programming language for writing anything anymore. If your code... for example an operating system kernel would profit from writing a new programming language for it... do it. You can base it on anything you want. It's actually quite easy... unless you write the language itself in C. Use a language suited for language development instead. Get the point yet?

The next big operating system to follow the Linux kernel will be the one which leaves 95% of the C language intact and implements a compiler which:

a) Eliminates remaining dependencies on assembler by implementing a contextual mode for fixed memory position development.

b) Provides a standard implementation of data structures as the foundation of the language

c) Implements a standard method of handling complex returns... or exceptions (possibly <result,errorstack>)

d) Implements safe vs. non-safe modes of coding. 90% of the Linux kernel could easily have been done in safe mode

e) Offers references instead of pointers as an option. This is REALLY important. Probably the greatest weakness of C for security is the fixed-memory-location bits; relocatable memory is really, really useful. If you read the kernel and see how many ugly hacks have been made because it isn't present, you'd be shocked. The Linux kernel is completely slammed full of shit code for handling out-of-memory conditions which exists purely to support architectures lacking MMUs. References can be implemented in C using A LOT of bad and generally inconsistent code (a rough sketch follows below). They can be added to a compiler with a bit of work and, combined with the kernel code, could implement a memory defragmenter that would fix A LOT of the kernel.
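For what it's worth, here is a crude sketch of what references instead of pointers can look like when faked in plain C: a handle table lets the allocator relocate storage underneath live references, and every access is bounds-checked. All names are invented; a compiler-supported version would hide the whole mechanism:

/* Sketch: relocatable, bounds-checked "references" faked in C.
   A ref is a table index, not an address, so storage may move. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_SLOTS 16

typedef struct { int slot; } ref_t;

static void  *slots[MAX_SLOTS];
static size_t sizes[MAX_SLOTS];

static ref_t ref_alloc(size_t n)
{
    for (int i = 0; i < MAX_SLOTS; i++) {
        if (!slots[i]) {
            slots[i] = calloc(1, n);
            sizes[i] = n;
            return (ref_t){ i };
        }
    }
    fprintf(stderr, "out of slots\n");
    exit(1);
}

/* Every dereference goes through the table and is range-checked. */
static void *ref_deref(ref_t r, size_t off, size_t len)
{
    if (off + len > sizes[r.slot]) {
        fprintf(stderr, "bounds violation\n");
        exit(1);
    }
    return (char *)slots[r.slot] + off;
}

/* A defragmenter may move the block; existing refs stay valid. */
static void ref_relocate(ref_t r)
{
    void *moved = malloc(sizes[r.slot]);
    memcpy(moved, slots[r.slot], sizes[r.slot]);
    free(slots[r.slot]);
    slots[r.slot] = moved;
}

int main(void)
{
    ref_t msg = ref_alloc(32);
    strcpy(ref_deref(msg, 0, 6), "hello");
    ref_relocate(msg);                  /* simulate memory compaction */
    printf("%s\n", (char *)ref_deref(msg, 0, 6));
    return 0;
}

Done by hand like this, it is exactly the "A LOT of bad and generally inconsistent code" problem; done in the compiler, the indirection can be optimized away except where relocation actually happens.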

And since you're kind enough to respond aggressively, allow me to respond somewhat in kind. You're an absolute idiot... though maybe you're only a fool. C# and .NET are actually very good. So are C, Java, C++ and many others. Heck, I write several good languages a year when a domain would profit from it. If you don't know why C# and .NET, or even better, Javascript, are often better than plain C, you probably shouldn't pretend you know computers.

Did you know that Javascript generally produces far faster and better code in most contexts than C and assembler today? If you understood how microcode and memory access work, you'd realize there's a huge benefit to recompiling code on the fly. Consider that Javascript spends much of its time recompiling code as it runs. The first time the code was compiled, it was optimal for the then-current state of the CPU; but as the state of the system changes (that's what happens in multitasking systems), the cache contents change and the CPU core being used may change (power state, etc...), and the Javascript compiler will reoptimize the code. It's even possible with Javascript that, on a hybrid system containing multiple CPU architectures or generations, the code can be relocated to a CPU better suited for the process.

Of course C could be compiled into Javascript or WebAssembly and get the same benefits. The main issue is that you lose support for relocatable memory, as the WebAssembly model for supporting C/C++ is flat memory. But at least for execution, it's very likely your C code will run faster on WebAssembly than on bare metal. If you then start making use of Javascript/WebAssembly libraries for things like string processing, it will be even faster. If you move all threading to Javascript threading, it will be even better.

This does not mean you should write an operating system kernel in Javascript. Just as C is not suitable for OS development anymore, Javascript never will be.

0
1
CheesyTheClown
Bronze badge

Re: I'm done with Windows.

If you don't mind me asking, what do you mean by "this" when stating "But this is completely different."?

And which threats has MS not addressed lately?

And, the lack of mitigation of threats? Is this only when you avoid forced upgrades? Did you want more secure software or to stay with older and less maintained software which might not be patched? Did you not want the Windows update which blocked wannacry?

You are very excited about Linux. Do you keep it up to date? Do you run antivirus? Do you allow network applications access via SELinux and later close the holes when you no longer use the app? Have you configured different network profiles for home and public? Do you continue using apps with dependencies on libraries with known vulnerabilities? How do you manage your private keys?

Linux is fun. I spend most of my Linux time reading driver and network stack source looking for rootkits for fun. I love finding nifty things like code injection opportunities in the forwarding tables. Or better, methods of replacing openssl.so with a copy that backdoors the private keys.

Linux's greatest weakness is its dependency on C for everything. It's like placing a welcome mat on the floor and leaving the key beneath it. As such, Linux, GTK, Gnome... not even a challenge.

So... back to "This"?

13
12

How to avoid getting hoodwinked by a DevOps hustler

CheesyTheClown
Bronze badge

Re: If they’re a 'DevOps Expert', they probably aren’t

I'm a programmer with some pretty nice notches on my belt. I'm into data center automation now. I spent 5 solid years establishing proficiency in IT since leaving product development and now am actively working on Powershell DSC and C#. I regularly write and open source modules and tools and work with customers ranging from a few dozen users to the largest government entities in the world. I employ test-driven development, code review, unit and integration testing and revision control. I focus 100% on deploying systems that not only work, but repair themselves when things go wrong, allowing operations and development to work on implementing new systems instead of fixing things that shouldn't have broken to begin with.

I have absolutely no idea what DevOps is even though I'm a developer and my coworkers are operations.

I am certainly not an expert either.

So no... people don't know what it is and the people who are probably best at it are still just learning.

Let's summarize it as this.

If you're scripting... it's not DevOps

If you're doing it by hand... it's not DevOps

If you're describing what you want and then a system implements it and makes sure it stays implemented... it may be DevOps, but since there are no reliable systems for that yet, it probably isn't (a toy sketch of the idea follows).
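To illustrate that describe-and-converge model, here is a toy reconcile loop; the machine state is simulated and every name is invented, so treat it as a sketch of the idea rather than how DSC actually works:

/* Sketch: declare desired state, then repeatedly converge toward it. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool service_running;
    int  listen_port;
} state_t;

static state_t actual = { false, 0 };   /* simulated machine state */

static void converge(const state_t *desired)
{
    if (actual.service_running != desired->service_running) {
        printf("  drift: service %s, correcting\n",
               actual.service_running ? "running" : "stopped");
        actual.service_running = desired->service_running;
    }
    if (actual.listen_port != desired->listen_port) {
        printf("  drift: port %d -> %d\n",
               actual.listen_port, desired->listen_port);
        actual.listen_port = desired->listen_port;
    }
}

int main(void)
{
    state_t desired = { true, 8080 };   /* the declaration */
    for (int pass = 0; pass < 3; pass++) {
        printf("pass %d\n", pass);
        converge(&desired);
        if (pass == 0)
            actual.service_running = false;  /* someone breaks it */
    }
    return 0;
}

The declaration never says how to start the service; the loop notices drift and repairs it, which is the difference between this model and a script you run once.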

6
2

America throws down gauntlet: Accept extra security checks or don't carry laptops on flights

CheesyTheClown
Bronze badge

Re: Anon

Following the Brexit decision, I have re-sourced my suppliers outside of the UK because of potential difficulties with red tape similar to the US.

In order to get paid the money I'm owed by US companies, I have to hire a US accountant who specializes in international trade to fill out paperwork, or simply forfeit 30% of my earnings. I'm told this is because the US simply assumes all money moved out of the US is probably for laundering or tax evasion. That's OK, I've decided that working with governments (this is work I've done for the DHS) that see their friends as potential criminals isn't worth the effort.

So I've stopped travelling to the States... and stopped spending money there, often A LOT, because it's become too difficult to do business in the States to be worth the bother. I can't be bothered much with London anymore either; I'd rather make a phone call than fly there. I went through Heathrow 20-30 times last year, and up until the week before the Brexit vote, customs in Terminal 5 was quite quick (at least when you flew business). The few times since, it was horrifying. And no, I'll be damned before I spend a few hundred pounds to preapprove. I'll just stop my weekend trips to bring the kids for milkshakes at Hamley's. I don't need to spend my money in a country where you're guilty until proven innocent.

What's worst of all is that the overly opportunistic nature of the US and the UK breeds their paranoia. They think that, since Americans and Brits would be more than willing to take whatever you have the second you're not looking, the rest of the world must be like that too. And I'm sure there are some people who are like that. But I refuse to spend my life in fear of those people and I refuse to be treated as if I am potentially one of them.

Of course, what most Brits and Americans don't realize is that most of their own people, the ones they think would take what they have without a second thought... wouldn't.

58
8

VMware's security product to emerge in Q3 as 'App Defence'

CheesyTheClown
Bronze badge

Kudos!!! And WTF!?!?!

I'm a huge fan of advances like this. But this is something I've been doing for years with other systems like ACI and Hyper-V. I know this has its own little competitive advantage, but it's basically the same stuff as the other guys, a few years late.

So, if there's actually a focus on this... why isn't VMware working with their Linux and Windows drivers to dig deeper into the system and provide mechanisms, through the standardized firewall APIs on those hosts, to deliver meaningful app-level feedback to the SDN solution? I mean... really... come on now. I want a method for my web server to say "Drupal needs to update on port X" and then have a policy system which decides whether the Drupal update app should have access or not.

Hasn't anyone told VMware that we've moved on from virtual machines in the software-defined datacenter? We're working on containers, and container automation doesn't stop just because you've installed something. Containers request resources from the host, and policies on the host grant or deny access to those resources.

Also, VMware and Cisco need to learn that we don't want to do software-defined using another stinking controller. We want to define networking from the software. Installers and automation systems are not software; they're installation scripts. If you want an example of what software-defined is, notice how, when a program on Windows asks for access to the network, Windows asks whether you'd like to grant that access... and it's not asking for port numbers; it's asking whether that program can have certain access to certain resources. That panel should pop up on the security/network admin's telephone instead, and when he/she clicks OK, it should install policies in Windows, NSX, the IPS and the firewall all in one go.
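To be concrete about the API shape I'm asking for, here is a purely hypothetical sketch; nothing like this exists in Windows, NSX or anywhere else that I know of, and every name is invented:

/* Hypothetical app-level network policy request, sketched in C.
   A real system would push the prompt to the admin's phone and,
   on approval, program the host firewall, NSX, the IPS and the
   edge firewall in one transaction. */
#include <stdio.h>

typedef enum { ACCESS_GRANTED, ACCESS_DENIED } access_t;

typedef struct {
    const char *app;      /* "drupal-updater" */
    const char *purpose;  /* "fetch security updates" */
    const char *dest;     /* "updates.drupal.org" */
    int         port;     /* 443 */
} net_request_t;

static access_t request_network_access(const net_request_t *req)
{
    printf("policy request: %s wants %s:%d (%s)\n",
           req->app, req->dest, req->port, req->purpose);
    return ACCESS_GRANTED;   /* pretend the admin tapped "allow" */
}

int main(void)
{
    net_request_t req = { "drupal-updater", "fetch security updates",
                          "updates.drupal.org", 443 };
    if (request_network_access(&req) == ACCESS_GRANTED)
        printf("policies installed end to end; update may proceed\n");
    return 0;
}

Note what the application never mentions: VLANs, firewall rule numbers, controller addresses. It states intent; the policy system owns the plumbing.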

So, really, VMware, kudos for catching up with 2012. It's really quite cute. But can you please start working on software-defined networking?

1
2

AES-256 keys sniffed in seconds using €200 of kit a few inches away

CheesyTheClown
Bronze badge

Re: I'm not even surprised.

Real-time memory encryption in server is a generally bad idea for a multitude of reasons.

1) It's a false sense of security. People will believe it offers some level of protection... it doesn't.

2) The memory controller would have to be issued keys from within each session. These keys are theoretically shielded from the host system. If the guest operating system implements this technology... kudos for them. It means that direct attacks from VM to VM are taken care of.

3) Drivers loaded on the guest VM will have access to the encrypted memory, as they run in kernel mode on the guest VM. This means virtual network, disk and graphics adapters will be able to access memory unencrypted or issue memory requests to the MMU to get access to whatever they want. So a compromised driver can be an issue. If you read the source code for the e1000, VirtIO Ethernet and VMXNET3 drivers in the Linux kernel, you'll see that they aren't exactly hardened for security. They're good device drivers, but VMXNET3, for example, looks very pretty in code form precisely because it isn't bogged down with silly things like bounds-checking code.

4) "Bridges" used for performing remote execution on guest VMs will generally have to be available since this is how automation systems work. So, Powershell Remoting (WMI/OMI), QEMU Monitor Protocol, KVM Network Bridge, PowerCLI, etc... all offer methods of performing RPC calls on guest VMs from the host and in many circumstances, directly at the kernel level.

5) Hardware hairpinning is an option as well. PCIe (unlike PCI and older buses) operates entirely on memory-mapped I/O (MMIO), which means that all communications with the system and with system memory are performed using memory reads and writes. On bare-metal hypervisors with proper hardware such as the Cisco VIC, nVidia GPUs, etc... the hardware is programmable, partitionable, and can execute code. An example would be logging into the Cisco VIC adapter via out-of-band management and running show commands for troubleshooting. The iSCSI troubleshooting commands in particular are quite powerful and would easily allow issuing memory reads and writes on the fly from a command-line interface. In order to honor them, the MMU in the CPU would have to decrypt the requested memory. Of course, the MMU and OS driver could mark pages appropriately to allow access-lists on individual protected pages. But that's moot when we get to point 6).

6) RDMA provides a means of extending system memory from server to server (a sketch follows this list). This works by mapping regions of physical memory in each server to be accessed by hardware from other systems over devices like RDMA-over-Ethernet NICs or InfiniBand HCAs. High-performance systems such as HPC clusters, high-performance file servers (like Windows with SMB Direct) and high-performance hypervisors like KVM and Hyper-V (ESXi is notably absent here, as it sacrifices high performance for high compatibility) perform live migration over RDMA where possible. While it is theoretically possible to move guest machines in their encrypted state, it would be necessary to carry enough information from one server to the other during a migration to provide a decryption key on the new host to access the VM memory as it moves. That means the private key would have to either be transferred in clear text or be renegotiated through a hypervisor-hosted API... providing a new key in clear text to the hypervisor, if only briefly.
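To see why point 6 matters, here is a minimal sketch using libibverbs (a real API; the program itself is illustrative and needs an RDMA-capable NIC plus -libverbs to build). Once a region is registered and its rkey shared, a peer's adapter can read and write that memory with no CPU, and no per-VM encryption key, in the path:

/* Register a buffer for remote DMA with libibverbs. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { fprintf(stderr, "open failed\n"); return 1; }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    size_t len = 1 << 20;
    void *buf = malloc(len);

    /* Whatever lands in this region is plaintext to the adapter. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    printf("rkey=0x%x addr=%p len=%zu: hand these to a peer and its\n"
           "HCA can DMA here directly\n", mr->rkey, buf, len);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}

If guest pages sat encrypted behind per-VM keys, every registration like this would need the key material exposed to the hypervisor or the adapter, which is exactly the hole described above.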

The intent of encrypted memory was really, really awesome, but extremely poorly thought out. It could have some benefits in places like containers, where individual containers could be shielded from the host OS and don't migrate. But there would still be critical issues regarding where the decryption keys reside. Also, containers generally ARE NOT bare metal, so the keys would have to reside on the container host instead.

Thanks for bringing this topic up though. Make sure you tell everyone who intends to depend on encrypted memory that it's at least 10 years and several Windows, Linux, Docker and hardware generations off from being meaningful. But make sure to tell them they should bitch to their vendors to make them support it ASAP. It will require an entire ecosystem (security in layers) approach to make this happen.

3
0

Hey blockheads, is an NVMe over fabrics array a SAN?

CheesyTheClown
Bronze badge

Who cares?

NVMe is simply a standard method of accessing block devices over a PCIe fabric. As with SCSI, it's thoroughly unsuited to transmission over distance. It adopted many of the worst features of SCSI, at least with regard to fabrics. There is nothing particularly wrong with using memory-mapped I/O as a computer interconnect; in fact, it's amazing right up until you try to run it across multiple data centers. At that point, NUMA-style technologies, no matter how good they are, basically fall apart. There's also the issue that access control is simply not suited to ASIC implementation, so employing NVMe and then adding zoning breaks absolutely everything in NVMe. Shared-memory systems are about the logical mapping of shared memory. That is horrifying for storage.

So, in 2017, when we know that any VM will have to perform software-level block translation and encapsulation at some point, why the hell are we trying to simulate NVMe, which lacks any form of intelligence, when we should be focused on reducing our dependence on simulated block storage by more accurately mapping file I/O operations across the network?

BTW, the instant we added deduplication and compression, block storage over fabric became truly stupid.

2
6

Another FalconStor CEO out as storage software firm hunts for growth

CheesyTheClown
Bronze badge

Info on FalconStor?

I've just scrubbed their website and there is no meaningful technical documentation to be found. I found a half-assed feature list and almost no user guides or configuration guides. All I found was endless junk for investors. I almost can't tell if they actually have a product to sell.

From what little I could find, it looked almost like a web front end to Linux LVM2, ZFS and LIO. Now, front ends are great, but there is no information regarding whether their product offers anything special besides a web page on Linux. Heck, it could just as easily be a front end on ZFS and COMSTAR.

How does a company that doesn't even publish a feature list saying whether it supports VAAI-NAS or not sell anything? They brag about having a presence in 20% of the Fortune 500. Does that mean 20% are paying customers, or do they have a VM running the demo version?

I have tried solutions from dozens of vendors but never FalconStor, because I could never figure out why I should. But I guess FalconStor prefers to skip the tech guys.

0
0

Uh-Koh! Apple-Samsung judge to oversee buggy Intel modem chip fight

CheesyTheClown
Bronze badge

Re: And Virgin in uk?

That's under the assumption that I have access to such hardware. You are also under the false impression that what you just posted would provide meaningful information. I've seen the results of that on some of the links I've encountered, and it simply didn't provide much information.

Let's run with this though.

First of all, I'd imagine that if Intel has not released a patch for this problem, it would require alterations to the ASIC in order to correct the issue.

I could probably, with some effort, borrow a CMTS from a local cable company; I see I can find a relatively old Cisco 7200-based CMTS for about $2000 on eBay, or maybe piece one together from a chassis and a line card for a few hundred dollars. The problem is that I wouldn't be able to get DOCSIS 3.0 support operational, which may be required.

I see that Puma 6 modems don't cost much either.

So, let's assume I could build a test rig for about $1000 (which I really wouldn't spend unless I had a business case).

I would need to figure out how to get root OS access to the Puma modem, which is likely not difficult, though if the modem is running anything other than Linux, it may have just one of those stupid text-based management programs. So I would need to connect JTAG cables to run in-circuit-emulation-based debuggers. For Intel chips this isn't particularly difficult, as they are extremely well known and thoroughly documented.

A much less expensive alternative is to get a boot image for the device and open it in something like IDA Pro with a decompiler plugin. This could require much effort, since I'd have to guess my way through the file system and operating system code. And if the operating system image is compiled monolithic (instead of using kernel modules), which is common on embedded systems like this, I would have little or no hope of reverse engineering the applicable drivers.

Even if I somehow managed to reverse engineer the drivers (not really that difficult from kernel modules), I would only have the control APIs to the ASIC; it wouldn't give me insight into the ASIC itself.

As the problem does not appear to be fixable in software, even if I managed to reverse engineer microcode pushed to the chip (disassembling to VHDL or similar), it would likely not cover the affected areas of the chip.

If I had the VHDL code to the chip, it might still be difficult to work with. Generally, even setting comments aside, it requires good engineering documentation with block diagrams of each core... but with that, I could more than likely accurately diagnose the problem and come to the same conclusion Intel has more than likely reached: that there's a hardware limitation somewhere that can't be fixed without replacement.

So we're back to speculation... and more than likely meaningless speculation.

So I stand by my comment that Intel can replace the modems which people complain about with newer models. And for everyone else, suggest that cable companies implement IPS filters to protect their users from attacks.

0
0

HPE hatches HPE Next – a radical overhaul plan so it won't be HPE Last

CheesyTheClown
Bronze badge

Re: Paraphrase "More job cuts"

That's not true. She has turned HPE into a company that buys profitable companies whose customers can't leave them for at least a few years, kills off their engineering teams and kills their products. Then, when all the customers leave, because they bought products from smaller, more agile companies and are now being treated like hell, HPE either spins off or kills off the business units.

Let's take an example like Aruba. Aruba customers understood exactly what they were buying. They could buy wireless access points and controllers embedded in switches, which provided an excellent solution with predictable pricing and fantastic support. Then HPE, who had a mostly shit product line because they'd bought a buggy-as-shit, half-finished product in 3Com, sucked them up and, without considering the impact on customers, killed off the Aruba switching products as redundant, leaving customers without integrated controllers. They also started moving support to underqualified support centers in India. They killed off proper Aruba-specific sales. They merged HPE networking with Aruba as if getting ready to spin off enterprise networking as well. The Aruba documentation and communities got buried in HPE networking hell. Now Aruba exists by selling more equipment to companies who already had Aruba and can't justify dumping it, as they haven't had a return on investment yet. Besides, the only alternative is Cisco, and doing business with Cisco can be very difficult at times.

Simplivity... haha oh dear... they died the moment HPE bought them.

Nimble customers are already being beaten to death by HPE.

Ever since HP was taken over by people who wouldn't know what an oscilloscope was if they had one smashed over their heads, HPE has been strictly an acquisitions and mergers company. They have not been a reliable source of technology for a long time.

9
0

Farewell, slumping 40Gbps Ethernet, we hardly knew ye

CheesyTheClown
Bronze badge

It's about wavelengths as opposed to transceivers.

40Gb/s is accomplished with 4 bonded (think port-channel, kinda) 10Gb/s links. That means we need 4 wavelengths for 40Gb/s, or 10 for 100Gb/s. Using WDM equipment, a 40Gb/s transceiver can deliver 10, 20, 30 or 40Gb/s depending on which wavelengths are optically multiplexed.

100Gb/s using 25Gb/s lanes can provide 25, 50, 75 or 100Gb/s over the same wavelengths.
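The arithmetic is simple enough to put in a dozen lines; a trivial sketch of the wavelength-versus-lane-rate trade-off:

/* Delivered bandwidth per number of lit wavelengths, 10G vs 25G lanes. */
#include <stdio.h>

int main(void)
{
    const double lane_10g = 10.0, lane_25g = 25.0;   /* Gb/s per lane */
    for (int waves = 1; waves <= 4; waves++)
        printf("%d wavelength(s): %5.1f Gb/s (10G lanes) | %5.1f Gb/s (25G lanes)\n",
               waves, waves * lane_10g, waves * lane_25g);
    return 0;
}

Same four wavelengths, two and a half times the bandwidth; that is the whole argument for 25G-lane optics on leased fiber.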

Long-range transceivers capable of service-provider-scale runs are very expensive, but compared to renting wavelengths they cost nothing. I've seen invoices for 4 wavelengths along the Trans-Siberian Railway, where short-term leases (less than 30 years) were involved, measured in millions of dollars per year. Simply replacing the switch and transceivers would boost bandwidth from 20Gb/s to 50Gb/s without any alterations to the fiber or passive optical components.

So 40Gb/s makes a lot of sense in data centers, where there are no recurring costs for fiber. But when working with service providers, next to millions per year in wavelength leases, the extra hardware cost at each end of the fiber is little more than a financial glitch.

5
0
CheesyTheClown
Bronze badge

Re: Moore's Law on Acid

We'll move on to terabit, but as it stands, quantum tunneling is a major problem with modern semiconductors preventing us from going there. If I recall correctly, Intel posted a while back that their research says we will need a 7nm die process to create 1Tb/s transceivers. So for now, we'll focus on 400Gb/s.

1
0

Two leading ladies of Europe warn that internet regulation is coming

CheesyTheClown
Bronze badge

Re: But Angela has a working brain...

A Ph.D. in chemistry, while not likely to be a candidate for the Fields Medal any time soon, should have a high enough level of mathematical understanding to grasp concepts such as factoring and coefficients. She might not understand the relationship between Mersenne primes and polynomial-based encryption mechanisms... but I'm pretty sure she has friends she respects who do.

I don't like Merkel... and with regard to politics, I don't particularly respect her. I suppose this is very likely because the leadership positions in Germany have demands which make people into assholes (whereas in most other countries, being an asshole is a prerequisite). But I do think she's competent. Theresa May scares the shit out of me and gives me nightmares. She's basically Donald Trump with an accent which sounds benign. I really think she should change her name to Umbridge and get herself some special quill pens.

16
0
