* Posts by CheesyTheClown

493 posts • joined 3 Jul 2009


Hey blockheads, is an NVMe over fabrics array a SAN?

CheesyTheClown

Who cares?

NVMe is simply a standard method of accessing block devices over a PCIe fabric. As with SCSI, it's thoroughly unsuited for transmission over distance, and with regard to fabrics it adopted many of SCSI's worst features. There is nothing particularly wrong with using memory mapped I/O as a computer interconnect; in fact, it's amazing right up until you try to run it across multiple data centers. At that point, NUMA-style technologies, no matter how good they are, basically fall apart. There's also the issue that access control simply is not suited to ASIC implementation, so employing NVMe and then adding zoning breaks absolutely everything in NVMe. Shared memory systems are about logical mapping of shared memory. They are horrifying for storage.

So, in 2017, when we know that any VM will have to perform software-level block translation and encapsulation at some point, why the hell are we trying to simulate NVMe, which lacks any form of intelligence, when we should be focused on reducing our dependence on simulated block storage by mapping file I/O operations across the network more accurately?

BTW, the instant we added deduplication and compression, block storage over fabric became truly stupid.

1
5

Another FalconStor CEO out as storage software firm hunts for growth

CheesyTheClown

Info on FalconStor?

I've just scoured their website and there is no meaningful technical documentation to be found. I found a half-assed feature list and almost no user guides or configuration guides. All I found was endless junk for investors. I can barely tell whether they actually have a product to sell.

From what little I could find, it looked almost like a web front end to Linux LVM2, ZFS and LIO. Now, front ends are great, but there is no information regarding whether their product offers anything special besides a web page on Linux. Heck, it could just as easily be a front end on ZFS and COMSTAR.

How does a company that doesn't even provide a feature list telling you whether it supports VAAI-NAS sell anything? They brag about having a presence in 20% of the Fortune 500. Does that mean that 20% are paying customers, or do they just have a VM running the demo version?

I have tried solutions from dozens of vendors but never FalconStor, because I could never figure out why I should. But I guess FalconStor prefers to skip the tech guys.

0
0

Uh-Koh! Apple-Samsung judge to oversee buggy Intel modem chip fight

CheesyTheClown

Re: And Virgin in uk?

That's under the assumption that I had access to such hardware. You are also under the false impression that what you just posted would provide meaningful information; I can see the results of that approach in some of the links I've encountered, and it simply didn't tell us much.

Let's run with this though.

First of all, I'd imagine that if Intel has not released a patch for this problem, it is because correcting it would require alterations to the ASIC.

I could probably, with some effort, borrow a CMTS from a local cable company. I see that I can find a relatively old Cisco 7200 based CMTS for about $2000 on eBay, or maybe piece one together from a chassis and a line card for a few hundred dollars. The problem with this is that I wouldn't be able to get DOCSIS 3.0 support operational, which may be required.

I see that Puma 6 modems don't cost much either.

So, let's assume I could build a test rig for about $1000 (which I really wouldn't spend unless I had a business case).

I would need to figure out how to get root OS access to the Puma modem, which is likely not difficult, though if the modem is running anything other than Linux it may expose only one of those stupid text-based management programs. In that case I would need to connect JTAG cables and run in-circuit-emulation based debuggers. For Intel chips this isn't particularly difficult, as they are extremely well known and thoroughly documented.

A much less expensive alternative is to get a boot image for the device and open it in something like IDA Pro with a decompiler plugin. This could require a lot of effort, since I'd have to guess my way through the file system and operating system code. And if the operating system image is compiled monolithically (instead of using kernel modules), which is common on embedded systems like this, I would have little or no hope of reverse engineering the applicable drivers.

Even if I somehow managed to reverse engineer the drivers (not really that difficult from kernel modules), I would only have the control APIs to the ASIC; it wouldn't give me insight into the ASIC itself.

As the problem does not appear to be fixable in software, even if I managed to reverse engineer microcode pushed to the chip (disassemble to VHDL or similar), it would likely not cover the areas of the chip which are plagued.

If I had the VHDL code for the chip, it might still be difficult to work with. Generally, comments or not, it requires good engineering documentation with block diagrams of each core... but with that, I more than likely could accurately diagnose the problem and come to a conclusion similar to the one Intel has more than likely reached: that there's a hardware limitation somewhere that can't be fixed without replacement.

So we're back to speculation... and more than likely meaningless speculation.

So I stand by my comment that Intel can replace the modems which people complain about with newer models. And for everyone else, I suggest that cable companies implement IPS filters to protect their users from attacks.

0
0
CheesyTheClown

Re: And Virgin in uk?

I'd say that the real issue here isn't whether Intel is good or bad. Remember that Intel generally makes pretty great stuff. It's a matter of how this bug is being handled.

I haven't been able to find a great deal of information on this bug other than anecdotal stuff like "my modem is experiencing jitter", or your comment about flow tables. I'm going to assume that before using terms like flow tables, you've done some research on this and know what phase of the forwarding pipeline this is and whether population of the flow tables is the problem or not.

I'm curious about the forwarding process of the modem since whether in bridged or routed mode, I'm under the assumption at this point that Intel has implemented a hardware based packet processor that manages most packet buffering and forwarding in the hardware forwarding engine.

Now, the act of forwarding a packet should be deterministic at all times. However the decision making process of how and where to forward the packet can introduce difficulties. This engine almost certainly performs header parsing in hardware. This should not be an issue either since headers per service type should be consistent.

The length of the packet seems to be what is choking the system. In networking, length matters when a packet is classified as a runt or when it exceeds the MTU. What seems to matter here is how the device handles frames which likely need padding. That means that when the hardware processes a frame which must be transmitted with additional padding in order not to be classified as a runt frame (less than 64 bytes including the CRC, on top of the preamble used for clock recovery), there is apparently an issue.

So, following the logic above, it seems to me that jitter and latency are introduced when padding is required while bridging from DOCSIS to 802.3. If DOCSIS transmits padding (not sure, I'm not that familiar with DOCSIS), then upon receiving the packet the packet engine seems to strip the padding (which is healthy, I imagine) and verifies frame integrity (CRC or equivalent). Then, when re-encapsulating as Ethernet, a new frame is constructed. When the frame meets all the main requirements, the process is handled in hardware, which is suggested by the fact that larger packets don't appear to cause issues.

When a runt frame requiring padding is encountered, the modem will generate a protection fault within the CPU which is handled by the operating system. The operating system then signals the device driver for the hardware. The driver then either copies the frame from the buffer into system cache or performs a bunch of I/O operations on the memory in place in the buffer. The entire frame is likely parsed by the CPU at this point, then placed back into the buffer for forwarding, and the driver signals the hardware via MMIO to continue.
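To make that concrete, here's a rough sketch of what that slow path has to do once the frame reaches the driver. This is purely illustrative; the function and constant names are hypothetical and assume a Linux-style driver padding short frames to the 802.3 minimum before handing them back to the hardware:

```c
#include <stdint.h>
#include <string.h>

#define ETH_MIN_PAYLOAD 60   /* 64-byte minimum frame, minus the 4-byte CRC the MAC appends */

/*
 * Hypothetical slow-path handler: the forwarding ASIC has punted a short
 * frame to the CPU, and the driver pads it to the 802.3 minimum before
 * handing it back for transmission. Every one of these round trips costs
 * an interrupt, a cache copy and an MMIO doorbell.
 */
static int pad_runt_frame(uint8_t *frame, size_t len, size_t bufsize)
{
    if (len >= ETH_MIN_PAYLOAD)
        return (int)len;            /* already long enough, nothing to do */
    if (bufsize < ETH_MIN_PAYLOAD)
        return -1;                  /* buffer too small to pad in place   */
    memset(frame + len, 0, ETH_MIN_PAYLOAD - len);   /* zero padding      */
    return ETH_MIN_PAYLOAD;         /* hardware appends the CRC           */
}
```

Every packet that takes this detour is at the mercy of the OS scheduler, which is exactly where the non-deterministic latency would come from.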

The earlier bug, seen before any patches, was that packets were simply dropped. I don't know whether 100% of the runt frames are dropped and this accounts for 6% of the data, or whether 6% of the runt frames are dropped. This makes a big difference.

So my theory is that the hardware logic is completely missing a runt frame handler and is entirely dependent on software to process runt frames. This sounds crazy, but Cisco... THE NETWORKING COMPANY... has had a known (but quietly hidden) bug in their 6800IA hardware for over 2 years, unpatched, that drops runt frames when they retag VLANs, and they're hoping no one notices and complains.

Assuming the forwarding engine in the Puma 6 (speculating) is designed to work like a normal forwarding engine, 99.99% of the forwarding work will be done in hardware. If an exception is encountered, meaning the forwarding table (I assume this is what you mean by flow table) is not populated, or there is a packet exception such as a runt frame requiring padding, the CPU will need to process the packet.

In established conversations, the flow table should rarely need to be altered. Since you are in bridge mode, there are probably 2-3 known flows on the DOCSIS side (the routers at the CMTS) and probably at most 1 flow on the Ethernet side, which is your network router. In Layer-3 mode, however, NAT could involve a great deal more.

That said, since the device likely can handle NAT, it probably has some amazing processing capabilities for handling exceptions with the NAT tables.

But runt frame processing, it seems, is not handled quickly. That could be because NAT doesn't actually require reading and parsing a full frame and then generating a whole new frame to process; instead, it probably just requires a hardware-optimized mechanism to push a new NAT entry to the table, after which translation is handled in hardware.

So, then comes latency and jitter. If the packet has to be processed in software, and the software itself is not designed for packet processing (meaning plain old Linux, Wind River, whatever), then there will be non-deterministic latency when processing these packets, as operating systems can often take between 1ms and 150ms just to respond to an interrupt. This is not an issue for the occasional unknown flow; chances are, the hardware is using an alternate buffer to keep forwarding during this time. But if there are a lot of frames queued for forwarding, the buffers could fill and block the pipeline while the unknown frame is being processed... at which point 150ms can be deadly.

So, the next thing that comes up is that there were earlier articles on this topic, I believe, which blamed CPU speed throttling for the problem. This is common. Since the CPU in question spends most of its time asleep, as it only needs to handle management and exceptions, it can be REALLY slow most of the time. When a new exception comes in, it will need to throttle the CPU up quickly. This adds more delay... maybe another 50ms... who knows, I can't find the programmer's guide for the chip.

So, now we're seeing lots of delays.

One option is that the ISP simply block runt frames which will kill any games using very small frames. Then beg the game guys to intentionally pad their packets. Of course, chat programs that transmit every character as they're typed will fail as well.

Another option is to optimize the OS kernel for processing runt frames... if they can be processed at all. There's a chance the packet forwarding microcode doesn't have a proper mechanism for this; it may demand that each packet is handled independently. If this is the case, then without replacing the chip, there may be no answer. Of course, recoding the OS as a split-core kernel, with one core running the management OS and the other core running a packet processor, could improve performance and provide deterministic forwarding. Latency would still be high, but at least it would be reliable.

Finally, the real solution: recall the products. The issue with this is that the cost of fixing a device this cheap, with such a small margin, is higher than just making a new one. That said, if Intel has to sponsor a recall of every single device shipped with these chips, it could mean billions lost.

So the best option may be, help the vendors make runt frame friendly devices. Then if a customer complains, send them a replacement free of charge. Then pay whatever class action suit comes up for $5-$50 million and be done with it. It might even be cheaper just to pay the class action and make you buy your own replacement.

I think Intel unfortunately is handling this the best they can. Bugs happen. And there has never been a cable modem chipset that didn't suffer one problem or another.

I think you'll find that it shouldn't be long before your service provider is in a position to offer a new modem with a newer chip that doesn't have the problem. I'd imagine the delay now is quality control.

3
1

HPE hatches HPE Next – a radical overhaul plan so it won't be HPE Last

CheesyTheClown

Re: Paraphrase "More job cuts"

That's not true. She has turned HPE into a company that buys profitable companies whose customers can't leave them for at least a few years, kills off their engineering teams, and kills their products. Then, when all the customers leave, because they bought products from smaller, more agile companies and are now being treated like hell, HPE either spins off or kills off the business units.

Let's take an example like Aruba. Aruba customers understood exactly what they were buying. They could buy wireless access points and controllers embedded in switches, which provided an excellent solution with predictable pricing and fantastic support. Then HPE, whose own product line was mostly shit because they had bought a buggy-as-shit, half-finished product in 3Com, sucked them up and, without considering the impact on customers, killed off the Aruba switching products as redundant, leaving customers without integrated controllers. They also started moving support to underqualified support centers in India. They killed off proper Aruba-specific sales. They merged HPE networking with Aruba as if they are getting ready to spin off enterprise networking as well. The Aruba documentation and communities got buried in HPE networking hell. Now Aruba survives by selling more equipment to companies who already had Aruba and can't justify dumping it because they haven't had a return on investment yet. Besides, the only alternative is Cisco, and doing business with Cisco can be very difficult at times.

Simplivity... haha oh dear... they died the moment HPE bought them.

Nimble customers are already being beaten to death by HPE.

Ever since HP was taken over by people who wouldn't know what an oscilloscope was if they had one smashed over their heads, HPE has been strictly an acquisitions and mergers company. They have not been a reliable source of technology for a long time.

7
0

Farewell, slumping 40Gbps Ethernet, we hardly knew ye

CheesyTheClown

It's about wavelength as opposed to transceivers.

40Gb/s is accomplished with 4 bonded (think port-channel, kinda) 10Gb/s links. That means we need 4 wavelengths to accomplish 40Gb/s, or 10 for 100Gb/s. Using WDM equipment, a 40Gb/s transceiver can deliver 10, 20, 30 or 40Gb/s depending on which wavelengths are optically multiplexed.

100Gb/s using 25Gb/s transceivers can provide 25, 50, 75 or 100Gb/s over the same wavelengths.
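In other words, the aggregate is just lanes times per-lane rate, so the same leased wavelengths buy very different capacity depending on the transceiver generation. A trivial illustration, using only the numbers from the comment above:

```c
#include <stdio.h>

/* Aggregate bandwidth = number of multiplexed wavelengths x per-lane rate.
 * The same fiber and the same wavelength lease deliver 2.5x the capacity
 * when the per-lane rate moves from 10G to 25G. */
int main(void)
{
    for (int lanes = 1; lanes <= 4; lanes++) {
        printf("%d wavelength(s): %3d Gb/s at 10G per lane, %3d Gb/s at 25G per lane\n",
               lanes, lanes * 10, lanes * 25);
    }
    return 0;
}
```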

Long range transceivers capable of service-provider-scale runs are very expensive, but compared to the rental of wavelengths they cost nothing. I've seen invoices for 4 wavelengths along the Trans-Siberian railway where short-term leases (less than 30 years) were measured in millions of dollars per year. Simply replacing a switch and transceiver would boost bandwidth from 20Gb/s to 50Gb/s without any alterations to the fiber or passive optical components.

So, 40Gb/s makes a lot of sense in data centers where there are no recurring costs for fiber. But when working with service providers, an extra million dollars spent on the hardware at each end of a fiber is little more than a financial glitch.

5
0
CheesyTheClown

Re: Moore's Law on Acid

We'll move on to terabit, but as it stands, quantum tunneling is a major problem with modern semiconductors, preventing us from going there. If I recall correctly, Intel posted a while back that their research says we will need a 7nm die process to create 1Tb/s transceivers. So for now we'll focus on 400Gb/s.

1
0

Two leading ladies of Europe warn that internet regulation is coming

CheesyTheClown

Re: But Angela has a working brain...

A Ph.D. in chemistry, while not likely to be a candidate for the Fields Medal any time in the near future, should have a high enough level of mathematical understanding to grasp concepts such as factoring and coefficients. She might not understand the relationship between Mersenne primes and polynomial-based encryption mechanisms... but I'm pretty sure she has friends she respects who do.

I don't like Merkel... and with regard to politics, I don't particularly respect her. I suppose this is very likely because the leadership positions in Germany have demands which make people into assholes (whereas in most other countries, asshole is a prerequisite). But I do think she's competent. Theresa May scares the shit out of me and gives me nightmares. She's basically Donald Trump with an accent which sounds benign. I really think she should change her name to Umbridge and get herself some special quill pens.

16
0

Intel to Qualcomm and Microsoft: Nice x86 emulation you've got there, shame if it got sued into oblivion

CheesyTheClown

Instruction stayed the same, the core changes

You have a lot of great points. I always considered the 64KB segment to be a smart decision when considering backwards compatibility with the 8085. It also worked really well for paging magic on EMS, which was not much more difficult to manage than normal x86 segment paging. XMS was tricky as heck, and DOS extenders were really only a problem because compiler tech seemed locked into Phar Lap and other $500+ solutions at the time.

I don't know if you recall Intel's iRMX which was a pretty cool (though insanely expensive) 32-bit DOS for lack of a better term. It even provided real-time extensions which were really useful until we learned that real-time absolutely sucks for anything other than machine control.

Also, DOS was powerful because it was a standardized 16-bit API extension to the IBM BIOS. A 32-bit DOS would have been amazingly difficult, as it would have required all software to be rewritten, since nearly everything was already designed around segmented, paged memory. In addition, since most DOS software avoided using Int 21h for display functions (favoring Int 10h or direct memory access), and many DOS programs used Int 13h directly, it would have been very difficult to implement a full replacement for DOS in 32-bit.
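For anyone who never lived through it, the split looked roughly like this in Turbo C or Microsoft C of the era. This is purely illustrative, but it shows why a 32-bit DOS couldn't just swap out Int 21h: half the software never called it in the first place:

```c
#include <dos.h>

/* The same character printed two ways. AH=02h on Int 21h goes through DOS;
 * AH=0Eh on Int 10h talks straight to the BIOS, bypassing DOS entirely,
 * which is what most games and TUIs did (when they weren't writing
 * directly to video memory). */
void putchar_dos(char c)
{
    union REGS r;
    r.h.ah = 0x02;          /* DOS: character output */
    r.h.dl = c;
    int86(0x21, &r, &r);
}

void putchar_bios(char c)
{
    union REGS r;
    r.h.ah = 0x0E;          /* BIOS: teletype output */
    r.h.al = c;
    r.h.bh = 0;             /* video page 0          */
    int86(0x10, &r, &r);
}
```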

Remember: on the 286, and sometimes on the 386, entering protected mode was easy, but switching back out was extremely difficult, as it generally required a simulated bootstrap. That means accessing 16-bit APIs from 32-bit code may not have been possible; they would have had to be rewritten. For most basic I/O functions that wouldn't be problematic, but specifically in the case of non-ATA (or MFM/RLL) storage devices, the API was provided by vendor BIOSes that reimplemented Int 13h. So, in order to make them work, device drivers would not have been optional.

In truth, the expensive 32-bit windowed operating systems, with a clear differentiation between processes and system-call oriented cross-process communication APIs based on C structures, made more sense. In addition, with RAM still expensive and most systems having 2MB or less, page exception handling and virtual memory made A LOT of sense, as developers had access to as much memory as they needed (even if it was page-swapped virtual memory).

I think, in truth, most of the problems we encountered were related to a >$100 price tag. Microsoft always pushed technology by making their tech more accessible to us. There were MANY superior technologies, but Microsoft always delivered at price points we could afford.

Oh... and don't forget to blame Borland. They were probably the biggest driving factor behind the success of DOS and Windows, by shipping full IDEs with project file support, integrated debuggers (don't forget second-CRT support), assembler integration (inline or TASM) and affordable profilers (I lived inside Turbo Profiler for ages). Operating system success has almost always been directly tied to the accessibility of cheap, good, easy-to-use development tools. OS/2 totally failed because, even though Warp 3.0 was cheap, no one could afford a compiler and SDK.

1
0
CheesyTheClown

Re: x86 bloated!!!

Some would also suggest that RISC suffers similar problems when optimized for transistor depth where highly retiring operations are concerned. Modern CISC processors have relatively streamlined execution units, which are what consume most of their transistors... as with RISC. However, a RISC design which has to increase its instruction word size regularly to expand functionality suffers the burden of either requiring more instructions than CISC for the same operation, or a higher cost of instruction fetching, which results in longer pipelines that can suffer a greater probability of cache misses. Since 2nd level cache and above, as well as DDR, generally depend on bursts for fetches, RISC with narrow instruction words can be a problem. Also consider that pipeline optimization of RISC instruction sets which allow branch conditions on every instruction can be highly problematic for memory operations.

Almost all modern CPUs implement legacy instructions (such as 16-bit operations) in microcode which executes similar to a JIT compiler that compiles instructions in software.

Most modern transistors on CPUs are spent on operations such as virtual memory, fast cache access and cache coherency.

0
0
CheesyTheClown

Re: At this point...

I believe this is the right direction to think in.

Intel isn't trying to guarantee security in the mobile device market. That ship sailed. In fact, with the advent of WebAssembly, it is likely x86 or ARM will have little or no real impact now. Intel's real problem with mobile platforms like Android was the massive amount of native code written for ARM that wouldn't run unmodified on x86. With WebAssembly that will change.

Intel is more concerned that with Microsoft actively implementing the subsystem required to thunk between emulated x86 and ARM system libraries, it will be possible now to run Windows x86 code unmodified on ARM... or anything else really.

That means that there is nothing stopping desktop and server shipping with the same code as well. This does concern Intel. If Microsoft wants to do this, they will have to license the x86 (more specifically modern SIMD) IP. Intel will almost certainly agree to do this, but it will be expensive since it could theoretically have very widespread impact on Intel chip sales.

Of course, Apple, who proved with Rosetta that this technology works, could have moved to ARM years ago. They probably didn't because they decided instead to focus on cross-architecture binaries via LLVM to avoid emulating x86. Apple will eventually be able to transition with almost no effort because all code on Mac is compiled cross-platform... except hand-coded assembly. Microsoft hasn't been as successful with .NET, but recent C++ compilers from Microsoft are going that way as well. The difference is that Microsoft never had the control over how software is made for Windows that Apple has had for Mac or iOS.

5
0

Windows 10 Creators Update preview: Lovin' for Edge and pen users, nowt much else

CheesyTheClown

Hadn't thought much on it.

I just press the Windows key and type. It's clear that the search engine comes from the makers of Bing... but unlike Bing, it often actually comes close. So, I don't think I've actually seen the settings interface. I just search for what I need and generally try to use Powershell for most everything.

I will admit that display resolution should not be categorized as advanced settings.

1
8
CheesyTheClown

Re: Fall Creators Update

The quote "only the strong will survive" is actually a misquote regarding natural selection. Natural selection will select out those of a species least capable of adapting to the changes in their environment. One might suggest that as the world evolves, people who can't figure out how to become comfortable with a new version of Windows after 10 years might be headed down the same road as the dodo.

5
30

Science megablast: Comets may have brought xenon to Earth

CheesyTheClown

Re: Comets? Why bloody comets?

I don't get it.

Was that a legitimate query, a rhetorical question, or maybe an example of British humour (not to be confused with humor. Where humour employs the word funny as in "does that smell funny to you", humor employs the word funny as in "that was so funny I'll need to visit the hospital for stitches after rupturing my spleen from laughing so hard")?

Earth is tiny. Comets are generally small; Hale-Bopp, at more than 60km in diameter, is insignificant compared to Earth at 12,742km, or even Pluto (2,374km), but its tail extends 500 million km. As Earth orbits the sun, it comes into constant contact with all kinds of debris, such as randomly floating junk and stuff left behind by meteors and comets. Gravity sucks them in and they become part of Earth.

Earth doesn't need to be provided with anything, any more than your car needs to squash a bug on its windshield. But sometimes it appears we get lucky and manage to gather a few useful items, such as something we believe may have helped spark life... which of course originally happened in England... where Jesus was born and lived... and blessed the queen... and which was also the geometric center of the universe... and as such must be protected by aliens of all forms... which is why Theresa May will now secure cameras in your bedrooms and bathrooms for your own protection. After all, this article says that someone or something in space is trying to invade your country with particles and possibly life-forming gasses that are believed to be a primordial source of dangerous terrorists.

I guess it would of course be better if we had only British produced Xenon 67P in our atmosphere in the future :) Of course, the British version would be 69P because it's just worth more and it's damn sexy too!

3
0

Tech can do a lot, Prime Minister, but it can't save the NHS

CheesyTheClown

Quality healthcare = Civilization

If people are sick, they don't work.

If people fear getting sick when they're old, they hoard savings instead of circulating them in the economy.

If people take the burden of healthcare costs on themselves directly, they avoid doctors.

Quality healthcare and quality education provided by the government increase tax revenue, and at 12% of GDP they are a bargain.

What does it cost the government and the tax payers when hospitals and care givers have to compensate for people defaulting on their medical costs?

If anyone suggests that they think they will save money by not making healthcare and education an integral component of their civilization, they are short-sighted fools and should move to the U.S. and vote Trump. See where that gets you.

If England hopes to be a leader of anything following Brexit, they will find a way to spend more on healthcare and education, not less. Otherwise, they might as well reopen the workhouses and rent out floor space where children can sleep at a price affordable on a street beggar's pay.

4
1

You may now kiss the server-side: Dell EMC marries storage software to PowerEdge 14Gs

CheesyTheClown

Management API?

Sounds like some nifty features, but what about a management API?

Can I configure and upgrade the UEFI, firmware, BIOSes, etc... all from a single API? Can we configure the SCSI controller and RAID settings via the standard OOB management interface? Can we configure PXE boot via the OOB management? Can we configure UEFI, SCSI, NVMe and NICs via a single OOB management system while the system is operating? Is there a standard API for configuring vNICs on different NICs from different vendors (as Dell still doesn't make their own... UGH) so that VLAN and VXLAN are supported? Is there support for 802.1AE (MACsec/LinkSec) during boot? Is there support for LLDP during boot? Can I use these APIs for configuring and managing HBAs and vHBAs?

Does the system have an API for configuring certificates? Can we configure IPv6 security (IPSEC) and SNMPv3 views?

Do the ordering and fulfillment systems for the servers provide the MAC addresses for automated provisioning of DHCP, 802.1X MAB and AD accounts? Is the OOB MAC available as a scannable barcode on the shipping box, the pallet BOM and the physical machine itself?

Are there one or two OOB management Ethernet ports? Is there a plug-and-play API so that no configuration data is stored on the device and it can be centralized instead? Can I push appropriate configs to the system based on DHCP option 82?

Do any of the other features matter if we still have to manage these servers like it's 1999?

This is 2017; Cisco did a lot of this stuff with UCS Manager. That system is old and clunky, but it works (99% of the time). Why the hell, after nearly 10 years, can't you get a second vendor?

0
0

BA CEO blames messaging and networks for grounding

CheesyTheClown

How could this even happen?

I'm developing a system now that is small and not even mission critical and it has redundancy and failover. Does anyone alive actually make anything anymore that doesn't do this?

1
0

Is it the beginning of the end for Visual Basic? Microsoft to focus on 'core scenarios'

CheesyTheClown

Re: Fickle Microsoft

haha I remember being the C programming king of high school, and then Windows came out and, even with all the help I could get from the Charles Petzold book (which I spent two weeks of grocery store wages on), I couldn't for the life of me figure out the API.

Of course, X11, Windows and Mac all have horrible low-level APIs... but now I just code; language and environment don't really matter anymore. It's more about simply sitting down to type.

I nearly died laughing at the guy who said that simply changing the language made him go from senior developer to junior developer. I never met a senior-level developer who was senior because of how well he could use one particular tool or paradigm. I always considered the most versatile person to be senior, and people who speak like he did to be ready for promotion to the janitorial staff.

0
0

SAP Anywhere goes nowhere, reaches commercial cul-de-sac

CheesyTheClown

Probably a weak pound issue

If they pen agreements with U.K. companies when the pound is weak, they will have to take what they can get since most U.K. companies don't want to pay prices that make sense on the sales sheet in dollars. US companies are forced to charge quite a bit more because of the very high cost of VAT in Europe and they'd probably have to charge U.K. companies less than their US counterparts.

In addition, I suspect that U.K. regulation is about to make a lot of rapid changes requiring a lot of coding to support it. The cost will be high. It's probably more cost effective to wait until the U.K. market stabilizes following Brexit to bother investing in that.

Of course it could simply be that Trump induced stupid has all their programmers and lawyers so busy that trying to keep up with American issues leaves no time to waste on U.K. stupid as well.

0
0

Hypervisor kid Jeff Ready: Converged to the core, and NO VMware

CheesyTheClown

Re: After what these guys did to their storage customers...

Dedupe on HCI is easy if you're not using VMware. VMware doesn't properly support pNFS; it does a bastardized form of it called multipathing.

The solution to this is to sell your soul to VMware, get access to the NDDK and implement a native storage driver which can implement pNFS on its own. There's absolutely no value in doing this and no one should ever bother trying.

There's the alternative, which is to attempt to get iSCSI up and running in a scale-out environment. Due to limitations in vSwitch, this isn't an option, since multicast iSCSI isn't supported in VMware's initiator and anycast isn't profitable in this case.

FC is out, if for no other reason than that FC is storage for people who still need mommy to wipe their bottoms for them. FC is so simple-stupid that a monkey can run an FC SAN (until they can't... but consultants are cheap, right?), and what makes it so simple-stupid is that FC doesn't support scale-out AT ALL, though MPIO could scale all the way to two controllers.

So, then there's the question of value. Where's the value in dedup on a VMware HCI platform? That's a tricky one, since due to the nature of VMware's broken connectivity options for storage, you can't scale out the system connectivity to begin with. You also can't extend the VMware kernel to support it, because even if you have access to the NDDK, there's no one who actually knows it well enough to program with it, and if you look at VMware's open source code for their NVMe driver on GitHub, you'll see that you probably don't want to use their code as a starting point. It's pretty good... kinda... but I'm tempted to write a few exploits for the rewards now.

Oh, then there's the insane cost and license problem behind the VAAI NAS SDK from VMware. I almost choked when they sent me a mail saying "$5000 and we basically can tell you what to do with your code"... for a 13 page document (guessing the size). So, you can't even properly support NFS to begin with. And no, I would never ever ever agree to the terms of that contract they sent me and there's less chance I would consider paying $5000 for a document that should not even be required.

So, back to dedup... you can dedup... in HCI... no problem! The problem is, how can you possibly get VMware to actually use the dedup and replicated data?

Then there's Windows Server 2016 which ships with scale-out storage, networking and compute all on one disc and all designed from the ground up for.... scale-out.

There's OpenStack which works absolutely awesome on Gluster with scaleout and networking.

So, as for what you're talking about, "dedup on HCI is hard and slow": this is absolutely not true. Dedup and scale-out on VMware are damn near impossible, but they're stock components of all the other systems, and see a post I made earlier about slow. Slow is not a requirement. It just takes companies with real storage programmers, not just hacks who slap shit together using config files.

0
0
CheesyTheClown

Re: Seriously? Did he really said that? With a straight face?

Ok... because there are bad implementations of dedupe out there (lots of them... NetApp being among the worst I've seen), there will always be comments like this.

Let's talk a little about block storage. There are many different levels of lookup for blocks in a storage subsystem. If you look at a traditional VMware case, there are at least 6 translations, possibly up to 20, for each block access across a network. Adding FibreChannel in between aggravates the issue quite badly. It adds a lot of latency based on its 1978-era design (this is not an exaggeration; the SCSI protocol dates from 1978). There are many more problems which come into play as well.

Every block-oriented storage system which supports any form of resiliency through replication of any sort (which is no longer optional) has to perform hashing on every single block received. Those hashes must be stored in a database for data protection. For 512-4096 byte blocks, chances are a CRC-32 is suitable for data protection, and for deduplication with a "lazy write cache" it is also suitable. However, in the case of NetApp, for example, which is severely broken by design, everything is immediate and there's no special storage for lazy or scheduled dedup.

In a proper dedup system, a block which has two or more references at the time of a write operation (even if the hash matches) will have its reference count decreased, and a new block will be written to high performance storage (NVMe, for example) with a single reference. If there was only one reference, the block is altered in place and the hash is updated.

Then dedup runs "off-peak", meaning (for example) that when the CPU is under 70%, the new blocks stored on disk are compared 1:1 with other blocks with matching hashes, references are updated, and only a single copy of the data itself is kept. In addition, during this phase it is possible to lazily compress blocks which are going stale and migrate them to cold storage (even off-site), or heaven forbid, to FC SAN storage.
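A minimal sketch of that write path, roughly as described above. All of the structure and helper names here are hypothetical; it just shows the copy-on-shared-write plus deferred-merge idea, with CRC-32 as the cheap candidate hash:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical per-block record: one entry per unique on-disk block. */
struct block_ref {
    uint32_t crc;        /* cheap hash used only to find dedup candidates */
    uint32_t refcount;   /* how many logical blocks point at this data    */
    bool     pending;    /* written, but not yet byte-compared and merged */
};

/* Hypothetical helpers supplied by the rest of the storage stack. */
struct block_ref *alloc_fast_block(void);                      /* new block on fast media */
void store_to_media(struct block_ref *ref, const void *data);  /* persist the data        */

/* Write path: never dedup inline. If the target block is shared, break the
 * share by allocating a fresh block; the off-peak pass later compares
 * pending blocks byte-for-byte against hash matches and merges duplicates. */
struct block_ref *write_block(struct block_ref *ref, const void *data, uint32_t crc)
{
    if (ref->refcount > 1) {
        ref->refcount--;             /* leave the shared copy untouched    */
        ref = alloc_fast_block();
        ref->refcount = 1;
    }
    ref->crc = crc;
    ref->pending = true;             /* flag for the off-peak dedup pass   */
    store_to_media(ref, data);
    return ref;                      /* caller updates its logical mapping */
}
```

Because nothing in the foreground path ever waits on a byte-for-byte comparison, the write latency looks the same whether dedup is on or off, which is the whole point.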

Dedup should have absolutely ZERO impact on performance when implemented by engineers who actually have half a brain.

The disadvantage to the system described above is that dedup won't be sexy at trade shows since it might take minutes, hours or more to see the return from the dedup operation.

As for databases, if you're running mainstream SAN (EMC, Hitachi, 3Par, NetApp), you're absolutely right. You should avoid dedup as much as possible. None of those companies currently employ the "real brains behind their storage" anymore, and they haven't had decent algorithm designers on staff in years. They take a system which works and layer shit upon shit upon shit to sell it. There will be problems using any GOOD storage technology on those systems.

For databases and most modern instances, you should move away from block-storage-oriented systems and focus instead on file servers with proper networking involved. In this case, I would recommend a Gluster cluster (even if you have to run it as VMs) with pNFS, or Hyper-V with Windows Storage Spaces Direct. These days, most of the problems with latency and performance are related to forcing too many translations between the guest VM and the physical disk. There's also the disgusting SCSI command queuing illness, which orders file read and write operations impressively stupidly, since NCQ at each point it's processed has no idea what the block structure of the actual disk is. pNFS and SMBv3 are far better suited for modern VM storage than FC and iSCSI can ever be.

That said, there are some scale-out iSCSI solutions which aren't absolutely awful. But scale-out is technically impossible to achieve over FC or NVMe.

P.S. - Dedup in my experience (I write file systems and design hard drive controllers for personal entertainment) shows consistently higher performance and lower latency than the alternative because of the simplicity involved in caching.

P.P.S. - I've been experimenting with technology which is better than dedup, as it instruments guest VMs with a block cache that eliminates all zero-block reads and writes at the guest (see the sketch below). It improves storage performance more than most other methods... sadly, VMware closes their APIs for storage development, so I have to depend on VMware thin volumes or FC in between to implement that technology.
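The guest-side shortcut itself is trivial; the hard part is getting a hook into the hypervisor's storage path. A purely illustrative sketch, assuming 4 KiB blocks:

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define BLOCK_SIZE 4096

/* Returns true when a block is entirely zeroes. A zero-block write never
 * needs to touch the storage fabric at all: only the allocation metadata
 * changes, and a zero-block read can be satisfied from the guest cache. */
static bool is_zero_block(const uint8_t *block)
{
    static const uint8_t zeroes[BLOCK_SIZE];   /* statically zero-initialized */
    return memcmp(block, zeroes, BLOCK_SIZE) == 0;
}
```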

P.P.P.S. - I simply don't see this company doing anything special other than trying to define a new buzz term for something which is nothing new. Implementing code in the KVM kernel is the same as Microsoft implementing SMB3 in Hyper-V; it's just old hat.

0
0

Cisco goes 32 gigging with Fibre Channel and NVMe

CheesyTheClown

Ugh!

Let's all say this together

Fibrechannel doesn't scale!

MDS is an amazing product and I have used them many times in the past. But let's be honest, it doesn't scale. All-flash systems from NetApp, for example, have a maximum of 64 FC ports per HA pair (which is so antiquated it's not worth picking on here), and that means the total bandwidth of the system is about 8Tb/s. Of course, HA pairs mean you have to design for the total failure of a single controller, which cuts that in half. Then consider that half that bandwidth is upstream and the other half down; meaning half is for connecting drives to the system and the other half is for delivering bandwidth to the servers. So we're down to 16 reliable links per cluster. There also has to be synchronization between the two controllers in an HA pair, so let's cut that in half again if we don't want contention related to array coherency.

An NVMe drive consumes about 20Gb/s of bandwidth, so that's a maximum capacity of 25 online drives in the array. Of course there can be many more drives, but you will never reach the bandwidth of more than 25 drives. Using scale-out it is possible to scale wider, but FC doesn't do scale-out, and MPIO will crash and burn if you try. iSCSI can, though.

Now consider general performance. FC controllers are REALLY expensive. Dual-ported SAS drives are ridiculously expensive. To scale out performance in a cluster of HA pairs would require millions in controllers and drives. And then, because of how limited you are for controllers (whether by cost or hard limitations), the processing required for SAN operations would be insane. See, the best controllers from the best companies are still limited by processing for operations like hashing, deduplication, compression, etc... Let's assume you're using a single state-of-the-art FPGA from Intel or Xilinx. The internal memory performance and/or crossbar performance will bottleneck the system further, and using multiple chips will actually slow it down, since it would consume all the SerDes controllers just for chip interconnect at a speed 1/50th (or worse) of the internal macro ring bus interconnects. If you do this in software instead, even the fastest CPUs couldn't hold a candle to the performance needed for processing a terabit of block data per second. Just the block lookup database alone would kill Intel's best modern CPUs.

FC is wonderful and it's easy. Using tools like the Cisco MDS even makes it a true pleasure to work with. But as soon as you need performance, FC is a dog with fleas.

Does it really matter? Yes. When you can buy a 44-real-core, 88-vCPU blade with 1TB of RAM on weekly deals from server vendors, a rack with 16 blades will devastate any SAN and SAN fabric, making the blades completely wasted investments. Blades need local storage with 48-128 internal PCIe lanes dedicated to storage to be cost-effective today. That means the average blade should have a minimum of 6 x M.2 PCIe NVMe internally. (NVMe IS NOT A NETWORK!!!!!!) Then, for mass storage, additional internal SATA SSDs make sense. A blade should have AT LEAST 320Gb/s of storage and RDMA bandwidth, and 960Gb/s is more reasonable. As for mass storage, using an old crappy SAN is perfectly OK for cold storage.

Almost all poor data center performance today is because of SAN. 32Gb FC will drag these problems out for 5 more years. Even with vHBAs offloading VM storage, the cost of FC computationally is absolutely stupid expensive.

Let's add one final point which is that FC and SAN are the definition of stupid regarding container storage.

FC had its day and I loved it. Hell I made a fortune off of it. I dumped it because it is just a really really bad idea in 2017.

If you absolutely must have SAN consider using iSCSI instead. It is theoretically far more scalable than FC because iSCSI uses TCP with sequence counters instead of "reliable paths" to deliver packets. By doing iSCSI over Multicast (which works shockingly well) real scale out can be achieved. Add storage replication over RDMA and you'll really rock it!

3
3

Microsoft's new hardware: eight x86 cores, 40 GPU cores

CheesyTheClown

Re: 4K? Meh

I had the orange... I was told it was called Amber. And it was supposed to be better than green but Eddie Murphy told me that his grandmother suckered him worse with burgers that were better than McDonalds.

What sucks is that simcga almost never worked for me. But to be fair, Sierra was generally good about supporting HGC.

0
0
CheesyTheClown

Re: Project Scorpio?

$700 is excluding VAT. With VAT at 17%, that should be £819. Then consider the "you're in Europe tax" which Apple is the worst about but Microsoft tries to suck at too. I'd guess £850-900.

0
0

Elastifile delivers stretchy file software

CheesyTheClown

Built into Windows Server and Linux?

Why would you pay money for something already built into the operating system?

0
3

Google Cloud to offer support as a service: Is accidental IT provider the new Microsoft?

CheesyTheClown

Don't use google for the same reason you don't use AWS or IBM

If you choose to go cloud, you want a single solution that works in the cloud or out. Google, Amazon and others don't make the platform available to take back home. Sure, you can go IaaS, but do you really want IaaS anymore?

Never use a platform which has PaaS or SaaS lock-in; Google and AWS are permanent commitments. Once you're in, you can't get out again.

3
2

After 20 years of Visual Studio, Microsoft unfurls its 2017 edition

CheesyTheClown

Re: Getting better all the time

Maintaining projects other than your own is always a problem. But updating to a new IDE and tool chain is just a matter of course and is rarely a challenge. I've moved millions of lines of code from Turbo C++ to Microsoft C++ 7.0 to Visual C++ 1.2 through Visual Studio 2017. Code may require modifications, but with proper build management, it is quite easy to write code to run on 30 platforms without a single #ifdef.

I've been programming for Linux and Mac using Visual C++ since 1998. I used to write my own build tools, then I used qmake from Qt. Never really liked cmake since it was always hackish.

Now I code mostly C#, since I've learned to write code which, after JIT, can generate better machine code than C++ generally does, because it targets the local processor instead of a general class or generation of CPUs. Since MS open sourced C# and .NET, it's truly amazing how tight you can write C#. It's not as optimized as JavaScript, but garbage-collected languages are typically substantially more optimal for handling complex data structures than C or C++, unless you spend all your time coding deferred memory cleanup yourself.

3
0

Why did Nimble sell and why did HPE buy: We drill into $1.2bn biz deal

CheesyTheClown

Re: Cisco: Be Bold!

Cisco is dumping SAN; why would they buy another one? Cisco is the only company that seems to be trying to take hyperconverged seriously... now if only they figured out that hyperconverged isn't a software SAN.

0
3
CheesyTheClown

And there goes Nimble

To be fair, over the past several software releases, Nimble has been dropping rapidly in quality. But still, they were probably the best option for SAN storage available.

No one shed a tear when Dell bought EMC, since EMC was already yesterday's crap and VMware was already falling apart.

But HPE buying Nimble is a disaster, since they are probably already trying to decide how many people to lay off to "reduce redundancies" because there's a storage nerd here and a storage nerd there. They'll outsource support to India with a bunch of guys with a support script. As for marketing knowledge, no one at HPE will sell Nimble, since they barely understand 3Par and they only just figured that out.

I predict that Nimble will perform about as well under HPE as Aruba did... and frankly Aruba is pretty much dead now.

5
8

Sir Tim Berners-Lee refuses to be King Canute, approves DRM as Web standard

CheesyTheClown

Standard DRM = crack once use forever

This is a good thing. Imagine you buy a phone or a tablet and it reaches end of support. This device, sold and marketed as capable of playing standard DRM content, might end up blacklisted because someone else found a method of cracking DRM using that device. Since updates are not available, whoever blacklisted that device can be held liable and sued for their actions.

Consider that browser based DRM is simply not possible.

A pluggable module is code which requires standardization of an API. The API will be well understood and will not be restricted. So you write a small loader app and then, based on the entry points, issue your own keys, decode some of your own streams and find out where the keys are held.

The DRM must be extremely lightweight, otherwise batteries will drain too quickly. One could write the DRM in JavaScript, which would be smartest, and with instruction-level vectorization part of WebAssembly it could be quick, but it would consume far more power than a hardware solution. So DRM in code would have to be limited to rights management and providing decryption keys for AES or EC. And if the keys can be transferred at all, they can be cracked.

The media player pipelines in Mozilla and Chrome are well understood. The media player pipeline in Windows is designed to be hooked and debugged. There is absolutely no possible way to DRM video on Windows, Linux or Mac that can't be intercepted after decryption. As for Android, unless the DRM blacklists pretty much every Android device ever made, it can't work.

So... good luck trying... I actually buy all my films, but I decrypt them so I can still watch them even if the DRM is killed off. I lost tons of money buying audiobooks on iTunes which could only be downloaded once. I won't ever again buy media I can't decrypt. I'll join the race to see who can permanently crack the DRM fastest.

6
0

The day after 'S3izure', does anyone feel like moving to the cloud?

CheesyTheClown

Azure Stack

At least Azure Stack will make it possible to move things out of the cloud and back home.

With Amazon and Google, you're screwed

0
2

Nimble gets, well, nimble about hyperconverged infrastructure

CheesyTheClown

Where would it fit?

Microsoft and OpenStack currently implement hyperconverged storage in their systems with full API support and integration between management, compute, storage and networking technologies. VMware does not support hyperconverged storage at all since they haven't built an application container (think vApp) that can describe location independent storage without reference to SCSI LUNs (local, iSCSI or FC). As such, at this time, VMware doesn't support either hyperconverged storage or networking.

So, except for making half-assed attempts at running traditional storage on compute nodes (definitely a good start but very definitely not hyperconverged), where would this fit?

Just remember that hyperconverged requires that you have to do more than just run traditional storage on the same box as compute. It has to actually be converged. Meaning that storage and networking is part of the application itself.

As I said, both Windows and OpenStack clearly define how to achieve this and both support Docker style apps (container or otherwise) through a standard API which actually supports hyperconverged. Adding high speed storage makes it faster, but replacing Storage Spaces or Swift actually hurts the system by introducing unnecessary levels of management and abstraction.

So, if VMware ever learns how to make a current-generation solution, the market for hyperconverged storage won't exist any longer. It would be like buying a new car and then trying to add a second engine to it, which actually makes the car slower because of the extra weight.

0
2

Linux on Windows 10: Will penguin treats in Creators Update be enough to lure you?

CheesyTheClown

Re: Java is so easily messed up... just put spaces in a path or a password...

I believe he's referring to the code within the Java standard class libraries which handles cross-platform differences, which is the real reason Java failed as a "cross platform tool". Between file processing and AWT, then SWT and Swing, Java has been a frigging nightmare for developers. It may have improved since I ran screaming from it, but I grew awfully tired of screwing around rewriting half of Java every time I tried anything, because the Java implementation was broken and, due to sealed classes, it was nearly impossible to make anything work without starting over.

Remember, the worst thing to ever happen to Java was to name the language, the intermediate language, the runtime, the class libraries and the platform all "Java". Because of this, even Oracle management doesn't understand what something like the Dalvik VM or SWT is and how it fits. Clojure outright baffles them, since it's a non-Java language running on Java and that's confusing.

7
1
CheesyTheClown

Re: Is it better than Cygwin?

Better support for porting Linux apps to Windows, for sure. An example would be HandBrake, the video compression tool, which could leave all of its libraries (shared libraries) in native Linux format, compiled with GCC or LLVM with GNU assembler optimizations, while building a UI using XAML and C#. This would save thousands of hours working out the platform incompatibility issues often associated with porting complex applications to Windows.

Cross compiled tool chains are another advantage. For example, one could develop code using Apple Swift for IOS development directly on Windows and thoroughly troubleshoot the code using tools like Visual Studio and then compile natively for Android.

Android is another one. It's possible to build the native Android emulator for Ubuntu on Windows, allowing native access to Dalvik, GCC and LLVM directly from within Visual Studio, and allowing faster and more accurate memory debugging than has been possible using SSH or Cygwin/MinGW implementations.

There are many reasons this is better for developers.

As for users, that's different. A user probably won't care much about the differences since it's basically the same code. It should get a bit better with the recent emergence of alternatives to X11, which generally don't "remote" as well for screen mirroring.

5
0

HPE's started firing people at Simplivity, say former employees

CheesyTheClown

HP + Cisco + Dell != Software

If there's anything that Cisco, Dell and HP have proven over and over again, it's that they are hardware-only companies. They simply don't understand things like the fact that software is actually more important than hardware, because you can take the software with you when you leave.

VMware still has at least a little bit of trust in the industry because Dell hasn't been stupid enough to try and roll it into Dell as a Dell offering.

Simplivity is absolutely useless now because it's an HPE product: you'll be expected to run it on HPE hardware, and if you leave, you leave Simplivity too.

Cisco's HyperFlex technology is an absolute joke. Because VMware is a disaster when it comes to software updates, HyperFlex is dangerously scary: for safety reasons you should probably never upgrade any hosts running HyperFlex technology... or NSX; you should simply delete the node and start over with a fresh replication. This means that where every other vendor's hyperconverged technology only needs three servers in a cluster, HyperFlex needs a minimum of four.

Companies need to use storage from the hypervisor vendor exclusively. Using third-party hyperconverged storage with VMware, Azure, OpenStack or Citrix is sheer stupidity. It's also excessively wasteful. Currently, VMware's solution is the weakest and the worst to manage. I'm inclined to believe this is related to having been owned by a storage company peddling legacy for so long that they didn't want people to depend on better solutions.

1
3

So you want to roll your own cloud

CheesyTheClown

Why not buy your own cloud?

Honestly, just buy an Azure Stack from Cisco, Dell (or if truly desperate HP) and be done with it. Then you have a finished platform with all the cloud services including PaaS and SaaS without the headache of either rolling your own or selling your soul to Amazon, Google or Microsoft.

Yes, I know you'd be running Microsoft software... we already sold our souls to them for that.

0
0

Cancel your cloud panic: At $122bn it's just 5% of all IT spend

CheesyTheClown

Re: Biannual

I honestly have no idea how to respond to this. I am certainly not a speaker of the Queen's English as I find it a disorderly mess. I honestly lost absolutely all respect for the Queen's English when I heard her in an interview refer to the game of football as "Footie". People should prefer Oxford English over Queen's English, the Queen is a gutter slang speaker as well.

I recently learned, while paying close attention on a visit to central England, that the reason Americans spell it color and the English spell it colour is that Americans pronounce it color and the English pronounce it colour, where the ou in the English pronunciation is not the digraph ou but the letter O and the letter U being rammed softly into each other.

There are many horrible words in the many different dialects of English. I believe the OED's insistence on documenting every single word ever, without properly listing etymology as part of its newer definitions, practically disqualifies the OED as an official dictionary and makes it a competitor to "The Urban Dictionary" instead. The last five times I've visited the OED, I received poor-quality definitions with no further qualification and had to refer to Wiktionary instead, which supplied a slightly better experience.

As for your use of "Wheelie".

I believe that if you are an Englishman, you should be forced to use the term "wheel stand" instead, because the British tongue has been grossly infected with a plague of "eeeeeeeee"s. Every possible noun in the British tongue has been reduced to a ridiculous single syllable followed by -ies. Honestly: butties, footie. The "cutesy shit plague" which has afflicted your nation is unforgivable. Call them sausages instead of bangers. Don't abbreviate mashed potatoes; there is simply no profit in that.

American English sucks like this as well. But the British, who seem to feel that they still have some semblance of authority over the English language, and more specifically "the Queen's English", should strive to set an example of culture and dignity instead of allowing their language to degrade into a failed Hello Kitty cartoon.

My blogging/commenting grammar reflects my speech pattern rather than the grammatically correct writing I would do elsewhere. I believe that if we are to take it upon ourselves to be grammar nazis in public, we should also strive to set a better example.

I'll forgive your wheelie comment today; I do believe that, EEEEEEs affliction or not, it is likely the proper word in that place. However, at some point I'd like to have a nice discussion with you about the British compression of the word "the". For example, I prefer to visit "the hospital" when I'm ill as opposed to visiting someone named "Hospital". I feel one should be educated at "a university", "the university", or maybe at "Oxford University" or "the University of Cambridge", as opposed to simply "at university".

The almost random but accepted disappearance of the word "the" in the Queen's English would be considered guttural, unrefined or "straight-out, damn near toothless redneck" in other dialects. For example, I would expect Kanye West to selectively omit the word "the", as he may not be able to spell it.

3
0
CheesyTheClown

More cloud spending when aaS is removed

This year marks the beginning of fully supported private clouds being shipped. You'll get the full public cloud experience with SaaS, PaaS and IaaS as a package you can buy in a box and have delivered. As such, most of the money currently earmarked for spending on "servers and storage and stuff" will be earmarked for "private cloud" instead.

We're about to see a massive move out of the public cloud as the cost of uncertainty increases throughout the world. With Theresa May as the first new leader of hate-related politics, quickly followed by The Donald, and Germany, France, Poland, etc. coming up soon, the public cloud is VERY SCARY right now. Possibly the worst choice any company can make is to place its business files on servers controlled by American or European countries that are led by populist politicians. Consider that hosting data in the public cloud within the UK makes it susceptible to the Snoopers' Charter and the new follow-up bills. The US government is suing Microsoft, Google, Amazon and others, claiming it should have access to data held in data centers outside of America simply because American companies manage the data.

Populist propaganda removes human and civil rights from people generally under the heading of national security. While the cloud technology is perfectly sound, the problem is politics.

I was in a Microsoft Azure security-in-the-cloud session last week, held by Microsoft, and asked: "If I use one of the non-Microsoft Azure data centers located in Germany, does Microsoft U.S. have access to my data?" The presenter really avoided answering but eventually admitted that, in theory, a subpoena issued in the U.S. would be all that was required to give access to data in the non-Microsoft data centers in Germany, because they are still part of the Azure platform. Due to additional laws in America, Microsoft would be required to gag themselves and not tell anyone that the US government was snooping.

While I don't have anything to hide from the Americans, and certainly don't care if they are checking out the naked pictures I keep of myself on my cloud accounts (I'm not an attractive person), I think there are many companies out there that have to avoid exactly that. There are no American companies currently delivering cloud services, in any data center anywhere, that can actually meet the requirements of EU Safe Harbor. UK companies are REALLY REALLY REALLY out on that one, thanks to Ms May.

So... in the end, cloud will grow like crazy, but not in the public cloud. Instead, turn-key private cloud will be where we are in 5 years.

3
0

UnBrex-pected move: Amazon raises UK workforce to 24,000

CheesyTheClown

Cheap labor?

Hiring cheap labor is always good practice. Best part is, the new employees won't be able to afford international travel, so they'll be close during vacations.

0
0

The stunted physical SAN market – Dell man gives Wikibon forecasts his blessing

CheesyTheClown

Hyperconverged will die shortly after as well

HyperConverged simply means that software stores virtual disks reliably and efficiently on the virtualization hosts themselves. Windows Storage Spaces/ReFS and systems like GlusterFS/ZFS have been mature for some time. VMware is about 5 years behind but may eventually mature to a similar level as Windows and Linux.

Once people eventually figure out that scale-out file servers running natively on hypervisor hosts are more efficient and reliable, the entire aftermarket hyperconverged market will simply die.

0
2

Connected car in the second-hand lot? Don't buy it if you're not hack-savvy

CheesyTheClown

Pretty sure it's brand dependent

BMW makes it nearly impossible to connect to your own car. In many cases you can't even connect to a car you legitimately own. I'm pretty sure their system, which is paranoid-strict about device connectivity, won't let a new owner connect unless the old owner first releases the car.

0
0

Hyperconverged market gets hyper-competitive as new riders enter field

CheesyTheClown

Re: HPE/Simplivity not a competitor

Like how Aruba, SGI, DEC, Compaq, 3com, Tandem, etc. all benefited from HPE sales and engineering? There are plenty more, but HPE buys companies in that top-right quadrant, rides them for a few months, and as the customers start looking elsewhere, buys someone else. HPE has been a chop shop since the dot-com era.

I'm not saying Cisco is better, with a track record like theirs with Cloupia and now CliQr, but HPE is where IT innovation goes to die.

Even HPE's own born-and-raised hardware is so ignored by engineering that iLO is damn near unusable at this point. Its API barely works, its command line fails more often than it works, and its SNMP implementation is actually insecure and practically an industry joke. Oh, and if you want it to work "right", it requires you to keep an unpatched Windows XP machine with IE 7 or 8 just to get the KVM to operate semi-OK. As for installing client certs... just don't bother.

1
2
CheesyTheClown

Windows 2016, Gluster & Docker/OpenStack?

Is it a competition to see who will pay the most money to keep using VMware? Honestly, storage is part of the base OS now... networking too... unless you want to pay more and use VMware, which doesn't really solve anything anymore. Don't get me wrong, I'm all for retro things. But it seems like hyperconverged products from EMC, Cisco, HP/Simplivity or NetApp are more about spending money for absolutely no apparent reason.

In addition, I can't really understand why server vendors are still screwing around with enterprise SSD when Microsoft, Gluster and others have made the need for it obsolete. Dual-ported SAS or NVMe seems like the dumbest idea I've heard of in a while.

People: reliability, redundancy and performance come from sharding and scale-out. When you depend on things like dual-ported storage, you actually limit your reliability, performance and redundancy.
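To make the point concrete, here is a toy sketch (hypothetical node names, nothing vendor-specific) of how scale-out placement gets redundancy in software: each block is hashed to a starting node and written to several independent nodes, so losing any one node or path loses nothing, with no dual-ported hardware involved.

```python
# Toy sketch of shard placement across scale-out storage nodes.
# Hypothetical node names; real systems (Storage Spaces Direct, Gluster, etc.)
# are far more sophisticated, but the redundancy principle is the same.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]
REPLICAS = 3  # each block lands on three independent nodes

def placement(block_id: str, nodes=NODES, replicas=REPLICAS):
    """Pick `replicas` distinct nodes for a block, deterministically."""
    start = int(hashlib.sha256(block_id.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

# Losing one node still leaves two copies of every block it held.
for blk in ("vm01-disk0-block42", "vm01-disk0-block43"):
    print(blk, "->", placement(blk))
```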

And no... Fibre Channel is no longer a viable option for storage connectivity. Why do you think the FC ASIC vendors are experimenting with alternative protocols over their fabrics?

0
3

UK Snoopers' Charter gagging order drafted for London Internet Exchange directors

CheesyTheClown

Didn't this behavior collapse the Empire?

I am not completely familiar with British history, but somehow I recall hearing that a blind, overly nationalistic belief was the primary flaw of the later empire, and eventually led to its collapse.

It seems to me that, as with the Americans, Britain seems to believe that simply having been squeezed from a particular vagina in a particular place justifies an unearned belief in one's superiority.

Patriotism is a disgusting illness. It leads to some sort of lethargic behavior that allows a person to blindly believe they have no need to try to succeed since simply claiming membership in a birthright is a satisfactory alternative.

39
6

Global IPv4 address drought: Seriously, we're done now. We're done

CheesyTheClown

CGNAT?

I'm using my phone right now to post this. It has a private IP over LTE and works just fine. When I tether my laptop, it works just fine. I regularly visit sites behind load balancers that multiplex at layer 5; in fact, tens of thousands of major websites often operate sharing a single IP.
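That layer-5 multiplexing is just name-based routing: the shared front end reads the Host header (or the TLS SNI field) and forwards to the right backend, which is how thousands of sites can sit behind one address. A minimal sketch of the idea, with hypothetical site names and backend addresses:

```python
# Toy sketch of Host-header routing on a single shared IPv4 address.
# Hypothetical site names and backends; real load balancers do the same
# trick for HTTPS using the TLS SNI field before decryption.
BACKENDS = {
    "siteone.example": ("10.0.0.11", 8080),
    "sitetwo.example": ("10.0.0.12", 8080),
}

def pick_backend(raw_request: bytes):
    """Return the backend for the request's Host header, or None if unknown."""
    for line in raw_request.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().decode()
            return BACKENDS.get(host.split(":")[0])
    return None

# pick_backend(b"GET / HTTP/1.1\r\nHost: siteone.example\r\n\r\n")
#   -> ("10.0.0.11", 8080)
```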

Our current IPv4 problem is entirely greed-based and artificial. There is absolutely no reason we can't solve it. With fewer than 100,000 registered active autonomous systems on the internet, we certainly should be able to make do with a few hundred thousand /24 networks.

0
1

Microsoft ups Surface slab prices for Brits. Darn weak pound, eh?

CheesyTheClown

Supply and demand?

I'm pretty sure that the people at Microsoft report their quarterly results in dollars. When they sell to customers in other countries, they account for value added tax where applicable, shipping if necessary, cost of support (employing locals), regionalization (spelling checkers with colour and favourite), etc...

If the value of a local currency drops too drastically relative to the value of the dollar, Microsoft must increase prices to cover the exchange rate related losses.
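As a worked illustration of that arithmetic (entirely made-up exchange rates and target price, not Microsoft's actual figures):

```python
# Illustrative only: hypothetical exchange rates, not real Microsoft pricing.
usd_target = 999.0             # dollars Microsoft wants per unit
gbp_per_usd_before = 1 / 1.50  # pound at $1.50 -> ~£0.67 per dollar
gbp_per_usd_after = 1 / 1.22   # pound at $1.22 -> ~£0.82 per dollar

price_before = usd_target * gbp_per_usd_before  # ~£666
price_after = usd_target * gbp_per_usd_after    # ~£819

print(f"Old sticker: £{price_before:.0f}, new sticker: £{price_after:.0f} "
      f"just to keep earning ${usd_target:.0f} per unit")
```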

If the market can't or won't bear the adjustment, they will incur a different set of losses and must choose to stay and fight, or give up and leave.

Microsoft probably waited for the pound to reach a level they expect to be stable and made one big, painful adjustment that should compensate for possible further minor shifts, allowing the U.K. market to adjust to the change and carry on as normal. I also assume they are not sitting around celebrating this change, or even taking pride in it.

Consider that, as someone living in Norway, our currency devalued by 50% during the oil crisis and hasn't recovered even though oil more or less has. We feel your pain, but we also understand that $1 is $1, and it takes more crowns to make a dollar today than it did 3 years ago.

2
1

HPE, Samsung take clouds to carriers

CheesyTheClown

What the?

Network function virtualization is a standard component of Windows Server and OpenStack. I think Nutanix even has something that could be considered NFV if you ignore what NFV actually is. By using it with Docker and/or Azure apps, it's entirely transparent. Why the hell would anyone pay for this? More to the point, why the hell would anyone ever use any platform that doesn't make this a minimum standard feature?

0
0

Dell's XtremIO has a new reseller in Japan – its all-flash rival Fujitsu

CheesyTheClown

Bluray vs. HD-DVD?

Remember when Blu-ray won the format wars? It was hilarious: Sony won the war when HD DVD simply died because its backers stopped pressing the discs and stopped making the players. Sony was sure they would get rich because the whole world would flock to their format; what really happened is that Sony should have learned to stop making Blu-ray too, because the world had already ditched discs and moved to download services. Instead Sony went all in and now has almost no presence in the consumer video market to speak of. The moral is that neither Blu-ray nor HD DVD won, but the HD DVD guys lost less because they knew when to pull out.

Dell/EMC, NetApp, Hitachi, HP, etc. are all going all in on storage and all-flash, believing they can win and take the cake using things like NVMe, but in reality they're all hanging on to something which is already being forgotten.

SANs made a lot of sense in a time when file systems and operating systems lacked the ability to provide the storage needed for server farms and, later, virtualization. Now, with the exception of VMware, who seem to think storage is a product as opposed to a component, the world is moving away from these technologies; we'll instead use scale-out file servers running on our compute nodes, which provide performance and redundancy with none of the bandwidth problems SAN has. We'll use clouds and version-controlled file systems to provide backups as well. That gives us substantially lower TCO, better support, better integration and a clear long-term path for growth in capacity and performance, without the massive lost investments SANs are doomed to.

So, while the dozens or hundreds of storage companies battle it out, the hypervisor vendors will simply localize the storage and provide something better, eliminating the need or desire to use these dinosaurs.

I wonder which companies will be the smart ones that realize the ship has sailed and they weren't on it. I think Dell's merger with EMC will be interesting, because the only thing of value they appear to have gotten from the deal is VMware, and that company is so plagued by legacy customers demanding support that Dell will probably miss too many other opportunities by trying to force VMware to become something else.

0
6

Stallman's Free Software Foundation says we need a free phone OS

CheesyTheClown

Isn't he cute?

Stallman managed to make it into the news again. And here we thought he was finally gone.

1) You can make the best free phone OS but no one will use it

2) Every vendor will give it a try because ... well why not

3) Every vendor will stop supporting it within days of it being released

The consumers who define the success or failure of a platform don't give a shit about free. They want music, videos and games.

19
7

Ooops! One in three tech IPOs now trading below their starting price

CheesyTheClown

Re: Why?

VMware went to shit when it became board-controlled. All their competitors are miles ahead of them in every area, and VMware, possibly the most innovative company of the first five years of the millennium, has become a "me too... kinda" company. Hardware support for virtualization has eliminated competitive edges in hypervisors; it's now about integration and management, where VMware is thoroughly lacking. Even now, they actually sell access to their system APIs, blocking developers from establishing a community and ensuring their vendors will get innovative features first.

Facebook and others actually produce a surprising amount. In the case of Facebook, they provide massive amounts of innovative technology to the community. Oh... and they have managed to monetize the shit out of their platform.

1
0
