* Posts by CheesyTheClown

682 posts • joined 3 Jul 2009


Oi! Not encrypting RPC traffic? IETF bods would like to change that

Silver badge

Re: stunnel, wireguard

TLS1.3 is a major change. I'd imagine that with new protocols, we'd use TLS and DTLS 1.3 as opposed to earlier versions.

Also consider that the performance issues with earlier versions of TLS have been mostly handshake related. For NFS that's a short-lived cost, since NFS sessions are long-lived.

There are some real issues with NFSv4 which make it unsuitable for environments that span long distances. It's not nearly as terrible as using a FibreChannel technology, but it can be pretty bad all the same. Most people don't properly prepare their networks for NFSv4: NFS loses so much performance that it's barely usable if the MTU on the connection is less than 8500 bytes.

NFS also has a ridiculously high retry overhead.

NFS should NEVER EVER be run over TCP... if you ever think that running NFS over TCP is a good idea, stop everything you're doing and read the RFC, which explains that TCP support is only there for interoperability. Unless you're using some REALLY bad software like VMware, which seems wholly intent on having poor NFS support (no pNFS support for how long after pNFS came out?), you should run NFS as UDP only.

There are many reasons for this... the most obvious is that TCP is a truly horrible protocol. It's a quick and dirty solution for programmers who don't want to learn how protocols work or understand anything about state machines. UDP is for people who have real work to do. QUIC is even better, but that's still a little way off.

I would recommend against using WireGuard.

- It's doing in kernel what should be done in user space

- It's two letter variable name hell

- It's directly modifying sk_buff instead of using helper functions, which increases the risk that future kernel updates introduce security holes.

- key exchange is extremely limited

I won't say I see any real security holes in it, and I will admit it's some of the most cleanly written kernel module code I've seen in a long time. But there's a LOT of complexity in there and it's running in absolutely privileged kernel mode. It looks like a great place to attack a server. One minor unnoticed change to the kernel tree, specifically to sk_buff, and this thing is a welcome mat for hackers.


FPGAs? Sure, them too. Liqid pours chips over composable computing systems

Silver badge


I'm somewhat proficient in VHDL and I've done a bit of functional programming as well. The issue is that when you generally think of a program as a series of instructions, it's often uncomfortable and simply backwards to describe things in terms of state.

I've told people before that a great starting point for learning to do VHDL is to write a parser using a language grammar tool. It's one of the simplest forms of functional programming to learn.
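As a toy sketch of that exercise (illustrative Python, not any particular grammar tool's output): a minimal recursive-descent parser/evaluator, where everything is a pure function that threads its state (the token index) explicitly rather than mutating anything. That explicit-state style is exactly the mental shift FP and HDLs demand.

```python
import re

# Toy recursive-descent parser for "+" and "*" expressions with
# parentheses. Each function returns (value, next_token_index).

def tokenize(s):
    return re.findall(r"\d+|[+*()]", s)

def parse_expr(tokens, i=0):
    # expr := term ('+' term)*
    value, i = parse_term(tokens, i)
    while i < len(tokens) and tokens[i] == "+":
        rhs, i = parse_term(tokens, i + 1)
        value += rhs
    return value, i

def parse_term(tokens, i):
    # term := factor ('*' factor)*
    value, i = parse_factor(tokens, i)
    while i < len(tokens) and tokens[i] == "*":
        rhs, i = parse_factor(tokens, i + 1)
        value *= rhs
    return value, i

def parse_factor(tokens, i):
    # factor := NUMBER | '(' expr ')'
    if tokens[i] == "(":
        value, i = parse_expr(tokens, i + 1)
        return value, i + 1  # skip the closing ')'
    return int(tokens[i]), i + 1

def evaluate(s):
    return parse_expr(tokenize(s))[0]

print(evaluate("1+2*3"), evaluate("(1+2)*3"))  # 7 9
```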

Another thing to realize is that the backwards fashion in which most HDLs are written makes them extra difficult to pick up: even a "Hello World" is a nightmare, as there's a LOT of setup to do to produce even a basic synthesized entity. Hell, for that matter, simply the setup work for an entity itself is intimidating if you don't already understand implicitly what that means.

There's been a lot of work on things like SystemC and SystemVerilog to make this all a little easier, but it's still a HUGE leap.

Now, OpenCL has proven to be a great solution for a lot of people. While the code OpenCL generates for the purpose is generally mediocre at best, it does lower the barrier to entry a great deal for programmers.

Consider a card like this one which Liqid is pushing.

You need to take a data set, load it into memory in a way that makes it available to the FPGA (whether internally or over the PCIe bus), then you need to make easily parallelizable code which can be provided as source to the engine which compiles it and uploads it to the card. Of course, the complexity of the compilation phase is substantially higher than uploading to a GPU, so the processing time can be very long. Then the code is loaded on the card and executed and the resulting data needs to be transferred back to main memory.

There are A LOT of programmers who wouldn't have the first idea where to start with this. There's always cut and paste, but it can be extremely difficult to learn to write OpenCL code that takes less time to compile (synthesize), upload and run than simply running the job on the CPU would have taken.

Then there are things like memory alignment. Programmers who understand memory alignment on x86 CPUs (and there are far fewer of those than there should be) can find themselves lost when considering that RAM within an FPGA is addressed entirely differently. Heck, RAM within the FPGA might have 5 or more entirely different access patterns. Consider that most programmers (except for people like those on the x264 project) rarely consider how their code interacts with L1, L2, L3 and L4 cache. They simply spray and pray. Processor affinity is almost never a consideration. We probably wouldn't even need most supercomputers if scientific programmers understood how to distribute their data sets across memory properly.
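To sketch the access-pattern point in a hedged way: both loops below compute the same reduction, but the row-major walk touches memory sequentially while the column-major walk strides across it. Pure Python masks most of the cache effect behind interpreter overhead; in a compiled language the row-major version is dramatically faster.

```python
# Same reduction, two traversal orders. Row-major walks memory
# sequentially; column-major strides N elements between touches.
# In C this is the difference between using every byte of a cache
# line and using one byte of it.

N = 512
matrix = [[(r * N + c) % 7 for c in range(N)] for r in range(N)]

def sum_row_major(m):
    total = 0
    for row in m:            # sequential, cache-line friendly
        for v in row:
            total += v
    return total

def sum_col_major(m):
    total = 0
    for c in range(N):       # jumps a full row per touch
        for r in range(N):
            total += m[r][c]
    return total

assert sum_row_major(matrix) == sum_col_major(matrix)
```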

I've increased calculation performance on data sets more than 10,000 fold within a few hours just by aligning memory and distributing the data set so that key coefficients would always reside within L1 or worst case, L2 cache.

I've improved code even more simply by choosing the proper fixed-size matrix multiplication function for the job. It's fascinating how many developers simply multiply one matrix against another with complete disregard for how the multiplication is actually computed. I once saw a 50,000x performance improvement by refactoring the math of a relatively simple formula from a 3x4 to a 4x4 matrix and moving it from an arbitrary-size math library to a game programmer's library. The company I did it for was amazed: they had been renting GPU time to run Matlab in the cloud, and simply by making code the compiler could optimize properly... a total of Google->Copy & Paste->Compile->Link... the company saved tens of thousands of dollars.
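A hedged sketch of the idea (illustrative Python, not the actual libraries involved): a generic arbitrary-size multiply versus a hand-unrolled 4x4 point transform of the kind game-maths libraries ship. Both give the same answer; the specialized form has a fixed shape a compiler can optimize aggressively.

```python
def matmul(a, b):
    # Generic, arbitrary-size multiply: triple loop, no assumptions
    # about shape, so nothing for a compiler to unroll or vectorize.
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[r][k] * b[k][c] for k in range(inner))
             for c in range(cols)] for r in range(rows)]

def transform_point(m, p):
    # Specialized 4x4 * vec4: fully unrolled, fixed shape.
    x, y, z, w = p
    return [
        m[0][0]*x + m[0][1]*y + m[0][2]*z + m[0][3]*w,
        m[1][0]*x + m[1][1]*y + m[1][2]*z + m[1][3]*w,
        m[2][0]*x + m[2][1]*y + m[2][2]*z + m[2][3]*w,
        m[3][0]*x + m[3][1]*y + m[3][2]*z + m[3][3]*w,
    ]

# A translation by (10, 20, 30) expressed as a 4x4 affine matrix.
T = [[1, 0, 0, 10],
     [0, 1, 0, 20],
     [0, 0, 1, 30],
     [0, 0, 0, 1]]

point = [1, 2, 3, 1]
generic = [row[0] for row in matmul(T, [[v] for v in point])]
assert generic == transform_point(T, point) == [11, 22, 33, 1]
```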

When I see things like the latest two entries in the supercomputer Top500, all I can think is that the code running on them could almost certainly be restructured to distribute via Docker into a Kubernetes cluster, the data sets could be logically distributed for map/reduce, and instead of buying a hundred million dollars of computer or renting time on it, the same simulations could be performed for a few hundred bucks in the cloud.
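The map/reduce shape being described can be sketched in a few lines of Python (list partitions standing in for nodes; none of this is a specific framework's API):

```python
from functools import reduce

def map_phase(partition):
    # Each "node" computes a partial result over only its local data.
    return sum(x * x for x in partition)

def reduce_phase(partials):
    # Only the tiny partial results cross the network, not the raw
    # data set; that's the whole point of distributing this way.
    return reduce(lambda a, b: a + b, partials, 0)

data = list(range(1000))
partitions = [data[i::4] for i in range(4)]  # 4 "nodes"

distributed = reduce_phase(map_phase(p) for p in partitions)
assert distributed == sum(x * x for x in data)
```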

Hell, if the data set were properly optimized for map/reduce instead of using some insane massive shared-memory monster, it probably would run on used servers in a rack. I bought a 128-core Cisco UCS cluster with 1.5TB of RAM for under $15,000. It doesn't even have GPUs, and for a rough comparison, when I tested using crypto-currency mining as a POC, it was out-performing $15,000 worth of NVidia graphics cards... of course, the power cost was MUCH higher, but the point wasn't to test the feasibility of crypto mining, it was just a means of testing highly optimized code on different platforms. And frankly, scrypt is a pretty good test.

I'll tell you... FP is lovely... if you can bend to it. F# is very nice and Haskell is pretty nice as well. Some purists will swear by LISP or Scheme, and then there are the crazies in the Ericsson (Erlang) camp.

The issue with FP isn't whether it's good or easy. It's the same problem you'll encounter with HDLs: the code is generally written by very mathematical minds that think in terms of state, which makes it utterly unreadable to the rest of us.


Another 3D printer? Oh, stop it, you're killing us. Perhaps literally: Fears over ultrafine dust

Silver badge

Re: 'Give us money'

I’m not certain. I’ve been looking into charcoal filtration for the printers I share an office with. I find that SLA printing is nasty to share a room with. FDM isn’t as bad, but I sometimes wonder if I’m getting headaches from it. I currently have 4 FDM printers running pretty much 24/7 and it’s better to be safe than sorry.


Samsung claims key-value Z-SSD will be fastest flash ever

Silver badge

Yes please

Just... yes please.

I’ve been desperately waiting for something like this. If they have a KV solution which supports replication, that would be absolutely amazing!!!


There's no 'I' in 'IMFT' – because Micron intends to buy Intel out of 3D XPoint joint venture

Silver badge

Optane flopped because

RAM prices were too high, and using Optane as an acceleration tier for SSDs was too rich for most people’s blood. Let’s not forget that GPU prices were triple what was reasonable. There simply was no room in most people’s budgets for a product that didn’t give enough of a boost to justify the additional cost compared to getting faster RAM or a better GPU.


That 'Surface will die in 2019' prediction is still a goer, says soothsayer

Silver badge

Is there anything wrong with Windows 10?

Ok, so I'm at a loss... I have Windows 10 in front of me now. I seriously can't see anything particularly wrong with it. It's fast, it's responsive, it's stable, it generally just works. It hasn't had most of the security issues that we've had in the past and most of the modern security issues are about users messing up.

I would say pretty much the same about Mac OS X. The real shortcoming of OS X these days is that if you want to run Linux, you need a VM, while Windows doesn't. And the Mac OS X command line is extremely limited compared to Linux.


Oslo clever clogs craft code to scan di mavens and snare dodgy staff

Silver badge

Re: It's all academic

The funny thing is that Norwegian law wouldn’t allow this system to be used :)


Spoiler alert: Google's would-be iPhone killer Pixel 3 – so many leaks

Silver badge

Re: Fscking notch...

And, you can’t hold the phone one handed and read shit without constantly moving your fingers.

Silver badge

Re: Mistaken

I agree. Though this past year, I have started purchasing or renting films from Google Play and Windows Store. This is because Apple makes it difficult for me to even understand which account I’m paying from. Sometimes I buy a film on iTunes and it pulls from my PayPal... other times it pulls from my credit card. Google and Microsoft are easier to manage.

I buy iPhone because Apple makes one or two models a year and updates seem to come for years after they stop selling a model. That makes me feel as though there is a return on investment. Or it did. But since around the time Jobs kicked off, the iPhone has become progressively worse. In addition, my entire phone seems hellbent on trying to sell me shit. I mean, seriously...

I’ve bought most of the songs I like already. I have about 2000-3000 tracks in my iTunes catalog. If I were to pay for Apple Music, I would need to listen to an average of about 15 new songs a month... every month for it to be profitable. That means I’d have to listen to 180 new songs a year to make it cheaper than buying the songs I like outright. I’m not that guy. Most of what I listen to is old. I don’t even turn the stereo in my car on. I have no interest in listening to music to simply hear noise. I don’t want Apple Music. I will never want Apple Music. Why the fuck can’t I open my music player and not be constantly attacked about buying Apple Music?

Then there’s the headphone jack. I have two laptops, an iPhone and a TV at home. Bluetooth sucks for that. Why would I ever want to spend my whole life pairing my headphones? It’s easier to just plug and unplug. Also, I depend on corded headphones to make sure that I never leave my headphones or telephone behind.

Apple is sooooooooo far from what I came to love about them. But what does it matter if I’m just someone who used to spend $7,000 a year with Apple. Now I have a Surface Book 2 and am willing to switch to Android if Google releases a high end phone with a headphone jack. I’m willing to pay $1200 for a Google branded phone (won’t buy knock offs made by companies who don’t write the OS). It should be small enough to fit in my pocket but large enough to read. It should have edges so I don’t have to move my fingers to read text... none of this curving off the edge shit. It should also be easy enough to unlock that I don’t need to look at it or pick it up to see if I want to pick it up. Thumb print is fine.

Basically, I want an iPhone 6S Plus but with Android. I have a top spec iPhone X sitting on the coffee table collecting dust. I’m back on my 6S Plus... the last good phone Apple made... but Apple apparently doesn’t run unit tests on the 6S Plus anymore.


Developer goes rogue, shoots four colleagues at ERP code maker

Silver badge

An American also seems involved.

Many countries have many guns. It’s a US anomaly with regards to human behavior that is causing the shootings. If you haven’t ever been to America, the US is somewhat of a cesspool of hate and almost British-like superiority trips. It’s a non-stop environment of toxicity. Their news networks run almost non-stop hate trips hoping to scrape by with enough ratings and viewers.

I left America 20 years ago and each time I go back, I’m absolutely shocked at how everyone is superior to everyone else. I just met an American yesterday who in less than two minutes told me why his daughter was superior to her peers.

It’s also amazing how incredible the toxicity of hate is. It’s a non-stop degradation of humanity. Every newspaper, news channel, social media network, etc... is absolutely non-stop negativity.

It’s not about the guns... I think the guns are just an excuse now. I think it’s about everyone from the president downward selling superiority, hate and distrust. I’m pretty sure if you took the guns away, it would be bombs.


Spent your week box-ticking? It can't be as bad as the folk at this firm

Silver badge

Cisco ISE

It sounds like Cisco ISE’s TrustSec tools.

The good news is that in the latest version, the mouse wheel works most of the time. It used to be: click 5 boxes, then move to the tiny little scroll bar, then click 5 more. Now you can click 5 and scroll using the wheel. So safely clicking 676 boxes when you have 26 groups is almost doable without too many mistakes now.


Hello 'WOS': Windows on Arm now has a price

Silver badge

Re: I Wish You Luck

I use ARM every day in my development environment. I work almost entirely on Raspberry Pi these days.

I would profit greatly from a Windows laptop running on ARM with Raspbian running in WSL.

That said, I already get 12 hours of battery life on my Surface Book 2 for watching videos, and it also has a Core i7 with 16GB RAM and a GTX 1060.

Nokia basically destroyed their entire telephone business by shipping underpowered machines with too little RAM because they actually believed battery life was why people bought phones. They bragged non-stop about how Symbian didn’t need 200MHz CPUs and 32MB of RAM, and yet the web did; when the iPhone came out and was a CPU, memory and battery whore, people dumped Nokia like the piece of crap it was. The switch to Windows was just a final death throe.

After all these years, ARM advocates seem to think people give a crap about battery life and are willing to sacrifice everything else, like compatibility or usability, just to avoid carrying a small charger with them. I honestly believe that until ARM laptops are down to $399 or less and deliver always-connected Core i5 performance, they won’t sell more than a handful.

Let’s also consider that no company shipping Qualcomm laptops is making a real effort at it. They’re building them just in case someone shows interest. But really, the mass market doesn’t have a clue what this is or why it matters, and for that much money, there are far more impressive options.

And oh... connectivity. If always connected were really a core business for Microsoft, why is it that my 2018 model Surface Book 2 15” doesn’t pack LTE?


VMware 'pressured' hotel to shut down tech event close to VMworld, IGEL sues resort giant

Silver badge

Skipped Cisco Live two years and will next

Cisco has been holding Live! in Vegas lately. I have absolutely no interest in me, my colleagues or my customers being in Vegas for the event.

The town is too loud. It’s very tacky. It is precisely the place civilized people would not want to be associated with. Let’s be honest, “what happens in Vegas...” guess what, this is not the kind of professional relationship I want to maintain with those who depend on me or I depend on.

Why would you want to hold a conference in Vegas?

1) Legalized prostitution

2) Legalized gambling

3) Free booze at the tables

4) Free or cheap buffets to gorge yourself at

5) Readily available narcotics of all sorts

6) Massive amounts of waste... not a little, the city must be one of the most disgustingly wasteful cities on earth.

7) Sequins... if that’s your thing.

Can you honestly say that you would want your serious customers to believe this is the type of behavior you associate with professionalism?


Pavilion compares RoCE and TCP NVMe over Fabrics performance

Silver badge

Digging for use cases?

Ok, let’s kill the use case already.

MongoDB... you scale this out, not up. MongoDB’s performance will always be better when run with local disk instead of centralized storage.

Then, let’s talk how MongoDB is deployed.

It’s done through Kubernetes... not as a VM, but as a container. If you need more storage per node, you probably need a new DB admin who actually has a clue.

Then there’s the development environment. When you deploy a development environment, you run minikube and deploy. Done. There’s no point in spinning up a whole VM; it’s just wasteful and locks the developer into a desktop.

Of course there’s also cloud instances of MongoDB if you really need something online to be shared.

And for tests... you would never use a production database cluster for tests. You wouldn’t spin up a new database cluster on a SAN or central storage. You’d run it on minikube or in the cloud on AppVeyor or something similar.

If latency is really an issue for your storage, instead of a few narrow 25GbE pipes to an oversubscribed PCIe ASIC for switching and an FPGA for block lookups, you would instead use more small-scale nodes, map/reduce, and spread the workload with tiered storage.

A 25GbE or RoCE network in general would cost a massive fortune just to compensate for a poorly designed database. Instead, it’s better to use 1GbE or even 100Mb Ethernet and scale the compute workload out across more small nodes. 99% of the time, 100 $500 nodes connected by $30-a-port networking will use less power, cost considerably less to operate and perform substantially better than 9 $25,000 nodes.
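The arithmetic behind that claim, using the rough figures from the comment itself (illustrative only, not quotes):

```python
# Rough scale-out vs scale-up cost comparison from the figures above.
small_nodes, small_node_cost, port_cost = 100, 500, 30
big_nodes, big_node_cost = 9, 25_000

scale_out = small_nodes * (small_node_cost + port_cost)  # node + $30/port
scale_up = big_nodes * big_node_cost                     # big nodes alone,
                                                         # RoCE gear extra

print(scale_out, scale_up)  # 53000 225000
```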

Also, with a proper map/reduce design, the vast majority of operations become RAM based which will drastically reduce latency compared to even the most impressive NVMe architectures based on obsessive scrubbing. Go the extra mile and make indexes that are actually well formed and use views and/or eventing to mutate records and NVMe is a really useless idea.

Now, a common problem I’ve encountered is in HPC... this is an area where propagating data sets for map/reduce can consume hours given the right data set. There are times when a process doesn’t justify 2 extra months of optimization. Even then, NVMe is still a bad idea, because RAM caching in an RDMA environment is much smarter.

I just don’t see a market for all flash NVMe except in legacy networks.

That said, I just designed a data center network for a legacy VMware installation earlier today. I threw about $120,000 of switches at the problem. Of course, if we had worked on downscaling the data center and moving to K8s, we probably could have saved the company $2 million over the next 3 years.


You lead the all-flash array market. And you, you, you, you, you and you...

Silver badge

What's the value anymore?

Ok, here's the thing... all flash is generally a really bad idea for multiple different reasons.

M.2 flash has a theoretical maximum of 3.94GB/sec of bus bandwidth (a PCIe 3.0 x4 link). Therefore a system with 10 of these drives should theoretically be able to transfer an aggregate of 39.4GB a second in the right circumstances.

A single lane of networking or Fibre Channel runs at approximately 25Gb/sec, or roughly 3.1GB/sec, which is less than one drive's bus bandwidth and under a tenth of the ten-drive aggregate. A controller that could feed ten or more such lanes with data transfers would be great, but those numbers are so incredibly high that it's not even an option.
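A quick check of the numbers above (25Gb/s taken as raw line rate, which is generous to the network side; real payload throughput is lower after encoding and protocol overhead):

```python
# Drive-side vs network-side bandwidth from the figures above.
drive_gbs = 3.94            # GB/s per PCIe 3.0 x4 M.2 drive
drives = 10
lane_gbs = 25 / 8           # a 25Gb/s link expressed in GB/s = 3.125

aggregate = drive_gbs * drives       # total drive-side bandwidth
lanes_needed = aggregate / lane_gbs  # 25Gb links needed to keep up

print(round(aggregate, 1), round(lanes_needed, 1))  # 39.4 12.6
```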

So, we know for a fact that the bus capacity of even the highest performance storage systems can barely make a dent in a very low end all flash environment.

Let's get to semiconductors.

Let's consider 10 M.2 drives with 4 32Gb Fibre Channel adapters. This would mean that a minimum of 72 PCIe 3.0 lanes would be required to allow full saturation of all buses.
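One way to arrive at that 72-lane figure (an assumption about form factors on my part, not a vendor spec): x4 per M.2 drive and x8 per HBA, which is a common card width for 32Gb FC adapters.

```python
# Assumed breakdown of the 72-lane figure: x4 per M.2 drive,
# x8 per 32Gb FC HBA card. These widths are assumptions.
m2_drives, lanes_per_m2 = 10, 4
fc_hbas, lanes_per_hba = 4, 8

total_lanes = m2_drives * lanes_per_m2 + fc_hbas * lanes_per_hba
print(total_lanes)  # 72
```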

This is great, but the next problem is that in this configuration, there's no means of block translation between systems. That means that things like virtual LUNs would not be possible.

It is theoretically possible to implement in FPGA (DO NOT USE ASIC HERE) a traffic controller capable of handling protocols and full capacity translation using a CPU style MMU for translation of regions of storage instead of regions of memory, but the complexity would have to be extremely limited and because of the translation table coherency, it would be extremely volatile.

Now... the next issue is that, assuming some absolute miracle worker out there manages to develop a provisioning, translation and allocation system for coarse-grained storage, this would more or less mean that things like thin-provisioned LUNs would be borderline impossible in this configuration. In fact, based on modern technology, it might be possible with custom FPGAs designed specifically for an individual deployment, but the volumes would be far too low to ever see a return on the investment.

Well, now we're back to dumb storage arrays. That means no compression, thin provisioning or deduplication, and without at least another 40 lanes of PCIe 3.0 serialized over fibre for long runs, there's pretty much no chance of guaranteed replication.

Remember this is only a 10 device M.2 system with only 4 fibre channel HBAs.

All-flash vs. spinning-disk hybrid has never been a sane argument. Any storage system needs to properly manage storage. The protocols and the software involved need to be rock solid and well designed. FibreChannel and iSCSI have so much legacy that they're utterly useless for modern storage, as they no longer handle real-world storage problems on the right sides of the cable. Even with things like VMware's SCSI extensions for VAAI, there is far too much on the cable, and thanks to fixed-size blocks, it shouldn't exist. If nothing else, they lack any support for compression, never mind things like client-side deduplication, where hashes could be calculated not just for dedup but also as an additional non-secure means of authentication.

Now let's discuss cost a little.

Mathematics, physics and pure logic say that data redundancy requires a minimum of 3 active copies of a single piece of data at all times. This is not negotiable. This is an absolute bare minimum. That means that to meet the minimum requirement for redundant data, a company should have at least 3 full storage arrays, and possibly a 4th for long-term maintenance windows.
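As a crude model of why three copies is the floor (my illustration, assuming independent failures, which real systems can't; correlated failures make the real numbers worse):

```python
# If each copy is unavailable with probability p at any moment,
# all k copies are lost together with probability p**k.
# Illustrative numbers only.

def loss_probability(p, copies):
    return p ** copies

p = 0.01  # assume a 1% chance a given copy is down at any moment
for copies in (1, 2, 3):
    print(copies, loss_probability(p, copies))
```

With three copies the figure drops to roughly one in a million, which is why three is treated as the bare minimum rather than two.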

To build an all flash array with a minimal configuration, this would cost so much money that no company on earth should ever piss that much away. It just doesn't make sense.

The same stands true of fibre channel fabrics. There needs to be at least 3 in order to make commitments to uptime. This is not my rule. This is elementary school level math.

Fibre channel may support this, but the software and systems don't. It can be done on iSCSI, but certainly not on NVMe as a fabric for example. The cost would also be impossible to justify.

This is no longer 2010 when virtualization was nifty and fun and worth a try. This is 2018 when a single server can theoretically need to recover from failure of 500 or more virtual machines at a single time.

All Flash is not an option anymore. It's absolutely necessary to consider eliminating dumb storage. This means block based storage. We have a limited number of storage requirements which is reflected by every cloud vendor.

1) File storage.

This can be solved using S3 and many other methods, but S3 on a broadly distributed file system makes perfect sense. If you need NFS for now... have fun but avoid it. The important factor to consider here is that classical random file I/O is no longer a requirement.

2) Table/SQL storage

This is a legacy technology which is on its VERY SLOW way out. We'll still see a lot of systems actively developed against this technology for some time, but it's no longer a preferred means of storage for new systems, as it lacks flexibility and makes back-end storage extremely hard to manage.

3) Unstructured storage

This is often called NoSQL. This is a means that all systems have queryable storage which works kinda like records in a database but far smarter. So the data stored is saved as a file, but the contents can be queried. Looking at a system like Mongo or Couchbase shows what this is. Redis is good for this too but generally has volatility issues.

4) Logging

Unstructured storage can often be used for this, but the query front end will be more focussed on record ages with regards to querying and storage tiering.

Unless a storage solution offers all 4 of these, it's not really a storage solution; it's just a bunch of drives and cables with severely limited bandwidth being constantly fought over.

Map/reduce technology is absolutely a minimum requirement for all modern storage, and it requires full layer-7 capabilities in the storage subsystem. That way, as nodes are added, performance increases and in many cases overhead decreases.

As such, it makes no sense to implement a data center today on a SAN technology. It really makes absolutely no sense at all to deploy for example a containers based architecture on such a technology.

If you want to better understand this, start googling Kubernetes and work your way through containerd and cgroups. You'll find that this block storage should always be local only. This means that if you were to deploy, for example, MongoDB or SQL servers as containers, they should always have permanent data stores that require no network or fabric access. All requests will be managed locally and the system will scale as needed. Booting nodes via SAN may seem logical as well, but the overhead is extremely high; in reality, PXE or preferably HTTPS booting via UEFI is a much better solution.

Oh... and enterprise SSD is just a bad investment. It doesn't actually offer any benefits when your storage system is properly designed. RAID is really really really a bad idea. This is not how you secure storage anymore. It's really just wasted disk and wasted performance.

But there are a lot of companies out there who waste a lot of money on virtual machines. This is for legacy reasons. I suppose this will keep happening for a while. But if your IT department is even moderately competent, they should not be installing all flash arrays, they should instead be optimizing the storage solutions they already have to operate with the datasets they're actually running. I think you'll find that with the exception of some very special and very large data sets (like a capture from a run of the large hadron collider) more often than not, most existing virtualized storage systems would work just as well with a few SSD drives added as cache for their existing spinning disks.


Flash, spinning rust, cloud 'n' tape. Squeeze. Oof. Hyperconverge our storage suitcase, would you?

Silver badge

Re: Lenovo and Cloudistics could be a fail

This looks great, but suffers the same general problem as AzureStack.

First of all, to be honest, from a governance perspective, I don't trust Google to meet our needs. If nothing else, I don't trust Google to respect safe harbour. Microsoft has now spent years fighting the US government with regards to safe harbour issues, but Google simply provides transparency related to them. I have absolutely nothing to hide personally, but for business, I have to be vigilant with regards to people's medical and financial records. This is not information that any company outside of my country has a legal right to. That means I can't even trust a root certificate outside this country. That also means that I can't use any identity systems controlled by any company outside of this country. That means no Google login or Azure AD. That also means no Azure Stack or GCP.

Beyond that, Cisco simply doesn't make anything even close to small enough for cloud computing anymore. They used to have the UCS M-Series blades, which were still too big. To run a cloud, you need a minimum of 9 nodes spread across 3 locations. The infrastructure cost of Cisco is far too high to consider for this.

It's much better to have more nodes in more locations. As such we're experimenting with single board computers like Raspberry Pi (which is too underpowered but is promising) and LattePanda Alphas which are too expensive and possibly overpowered to run a cloud infrastructure.

We're looking now at Fedora (we'd choose Red Hat, but don't know how to do business with them), Kubernetes, Couchbase and .NET Core. This combination seems to be among the most solid options on the market. We're also looking at OpenFaaS, but OpenFaaS is extremely heavyweight in the sense that it spins up containers for everything, and containers are insanely heavy for hosting a single function. So we're looking into other means of isolating code.

We're walking very softly because we know that as soon as a component becomes part of our cloud, it's a permanent part which will require 20-50 years support. We need something we know will run on new hardware and have support.

Google is amazing and I'd love to use a hybrid cloud, but the problem with public clouds in general is that the money we could be spending on developers, engineers and supporting our customers is instead being burned on governance, compliance and legal. Instead, we need a fully detached system, which is why I was attracted to Lenovo's solution until it became clear that Cloudistics is focused only on selling to C-level types and not to the engineers who will have to use it.

Silver badge

Lenovo and Cloudistics could be a fail

So, I'm working a lot on private cloud these days. The reason is that none of the public cloud vendors meet my governance requirements for the system my company is developing.

Azure Stack is out of the question because it requires that the platform is connected to the Internet for Azure AD. So... no luck there.

I've been looking and looking and to be fair, the best solution I've seen is to simply install Linux, Kubernetes, Couchbase and OpenFaaS. With these four items, it should be possible to run and maintain pretty much anything we need. We'll have to contribute changes to OpenFaaS as it's still not quite the answer to all our problems, and we're considering writing a Couchbase backend for OpenFaaS as well. But once all that is covered, it's a much better solution than other things.

That said, we keep our eyes open for alternatives. So when I saw a possible solution in this article, I went to check. It's a closed platform with no developer (or system administrator) documentation online. There's no open source links and there's no apparent community behind it.

So, why in the world would anyone ever invest in a platform from a company like Cloudistics which no one has ever heard of, has no community and hence no "experts" and more than likely won't exist in 12 months time?

If I were a shareholder of a company which chose to use this solution in its current state, I would consider litigation for gross mismanagement of the company. This is an excellent example of how companies like Cisco, Lenovo, HPE and others are so completely out of touch with what the cloud is that white box actually makes more sense.


ReactOS 0.4.9 release metes out stability and self-hosting, still looks like a '90s fever dream

Silver badge

Re: Use case for ReactOS

I'll start with... because "Some of us like it" and don't really mind paying a few bucks for it.

I also am a heavy development user. And although I am really perfectly happy with vi most of the time, I much prefer Visual Studio. I actually just wrote a Linux kernel module using Visual Studio 2017 and Windows Subsystem for Linux for the most part. Which is really funny since WSL doesn't use the Linux kernel.

There are simply some of us who like to have Windows running on their systems. Even if I were using Linux as the host OS, I would still do most of my work in virtual machines for organizational reasons and frankly, WSL on Windows is just a thing of beauty.

As for the more modern UIs many people complain about here: I honestly haven't noticed. You press the Windows key, type what you want to start, and it works. This has been true since Windows 7 and has only gotten better over time.

Then there's virtualization. Hyper-V is a paravirtualization engine which is frigging spectacular. With the latest release of QEMU which is accelerated on Windows now (like kqemu) you can run anything and everything beautifully.

I have no issues with the software you run... I believe if you sat coding next to me, you'd probably see as many cool new things as I'd see sitting next to you. But honestly, I've never found a computer which runs Linux desktop with even mediocre performance. They're generally just too slow for me. So, I use Windows which is ridiculously fast instead.

As for Bill Gates: are you aware that Bill has more or less sold out of Microsoft? He's down to little more than 1% of the company. You could give Microsoft gobs of money and he would never really notice. Take it a little further and you might realize that this isn't the Bill Gates of the 1980s. He's grown up and is now a pretty darn good fella. So far as I can tell, since he's been married, he's evolved into one of the most amazingly nice people on earth. I can't see that he's done anything in the past 15-20 years which would actually justify a dislike of him or a distrust of his motives... unless you're Donald Trump, whom Bill kind of attacked recently for speaking a little too affectionately about Bill's daughter's appearance.


Windows 10 IoT Core Services unleashed to public preview

Silver badge

Re: Well if MS are offering to do that...

Some of us don't use registered MAC addresses. We simply use duplicate address detection (DAD) and randomize. There's really no benefit to registered MAC addresses anymore. Simply set the 7th bit of the first octet (the locally administered bit) to 1 and use a DAD method.
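That trick can be sketched in a few lines. This is an illustrative sketch (the function name is my own, not from any standard library): flip the locally administered bit on and the multicast bit off in the first octet, then rely on DAD on the wire to catch the rare collision.

```python
import random

def random_local_mac() -> str:
    """Generate a random locally administered, unicast MAC address.

    Setting the locally administered bit (0x02) in the first octet keeps
    the address out of the registered-OUI space; clearing the multicast
    bit (0x01) keeps it unicast. Duplicate address detection on the
    network then handles the one-in-billions collision case.
    """
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & ~0x01  # set local bit, clear multicast bit
    return ":".join(f"{o:02x}" for o in octets)

print(random_local_mac())  # e.g. "3e:a1:07:4b:9c:d2"
```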

Also consider that many of us don't use Ethernet for connectivity. There are many other solutions for IoT. A friend of mine just brought up a 1.2 million node multinational IoT network on LTE.

MAC address filtering and management is basically a dead end. There's just no value in it for many of us. It really only adds a massive management overhead to production of devices. And layer-2 is so bunged to begin with that random MAC addresses with DAD can't really make it any worse.


Who fancies a six-core, 32GB RAM, 4TB NVME ... convertible tablet?

Silver badge

Will have bugs and no love from HP

For a product of this complexity to be good, it needs to reach high enough volumes that the user feedback on the product is good enough to solve problems. A company the size of HP will ship this, but the volume of bug reports will be low for a few reasons.

1) the user count is low

2) the typical user of this product won’t have a reliable means of reporting bugs other than forums. This is because they work for companies who can afford these systems and would have to report through IT. IT will not fully understand or appreciate the problems or how they actually affect the user, and therefore will not be able to convey the problems appropriately.

3) HP does not make the path from user to developer/QA transparent as once the product is shipped, those teams are reassigned.

As such, HP’s large product portfolio is precisely why this is a bad purchase. Companies like Microsoft and Apple build a small number of systems and maintain them long term. Even with the huge specifications on these PCs, a lower-end system that offloads some work to the cloud is far more fiscally responsible.

Of course, people will buy them and if we read about them later, I doubt the user response will be overly positive.

I’m using a Surface Book 2 15” with a Norwegian keyboard even though I have it configured to English. This is because a LOT of negative feedback reached MS on the earlier shipments and by buying a model I was sure came off the assembly line a few months later, I was confident that many of the early issues were addressed.

This laptop from HP will not have that benefit, because to produce them profitably they will probably need to make almost all the laptops of this model they will ship (or at least components like the motherboards) in a single batch. So even later shipments will probably not see any real fundamental fixes.

But if you REALLY need the specs, have a blast :) You’re probably better off with a workstation PC and Remote Desktop from a good laptop though.


Even Microsoft's lost interest in Windows Phone: Skype and Yammer apps killed

Silver badge

Re: MS kills UWP apps, Telephony API appears in Windows

Nope, both hands know what’s happening. The telephony APIs allow for Android integration. So the APIs permit Windows 10 Always Online devices (laptops with built in LTE) to provide a consistent experience across phone and laptop.

For instance, you will probably be able to make a call from your laptop. They also integrated messaging.

But I guess that’s not as exciting as assuming it means that Microsoft is confused. :)


White House calls its own China tech cash-inject ban 'fake news'

Silver badge

Re: Enjoy this while it lasts

I don’t know whether I want to agree or debate this.

We saw Republicans dropping out of the election for no reason that seemed clear. Just one after another dropped out and yielded to Trump with no explanation to be had. Each time they dropped out and made their support for Trump clear, it looked like people behaving as if they were under duress.

Bernie seemed to have the real support of the people because they believed in him politically. As though they liked his message. Hillary seemed to garner support from people who liked her making fun of Trump and also from people voting for superficial reasons. I’ve long believed that it’s time for a female president. I remember as a child being excited that Geraldine Ferraro was running. But Hillary simply scared me because her message didn’t seem to be anything other than “I’ll win and it’s my turn!”

Sanders dropped out, seemingly out of frustration over the stubborn child stomping her feet and claiming “I’ll win, it’s my turn!”

I have had great hopes that if this election proved anything to the American people, it’s that the two parties are so corrupt that people need a choice and neither party is offering a choice to the people.

Amazon, Facebook, Twitter, Google, Microsoft, Netflix, and others can all change the platform. They can reinvent the entire two-party system overnight. All it would take is for each to build on their platforms a new electoral process to identify and support candidates that they would then have added to the ballot. If each company ran different competitions and systems to identify and sponsor candidates, we could have a presidential election with 10 or more alternatives to choose from.

They can even allow underdogs to get a grip on the elections. For example, traditional fund raisers which reward only people willing to sell their political capital would become irrelevant. People could get elected because they were in fact popular instead of having sold their souls in exchange for enough money for some commercial time.

I think Trump and Hillary may be the best thing to ever happen to America. If two shit bags like them can end up being the only possible choices the people had, then it’s clear it’s time for a change.


Why aren't startups working? They're not great at creating jobs... or disrupting big biz

Silver badge

What do you mean?

So, let's say this is 1980 and you start a new business.

You'll need a personal assistant/secretary to :

- type and post letters

- sort and manage incoming letters

- perform basic book keeping tasks

- arrange appointments

- answer phones

- book travel

You'll need an accountant to :

- manage more complex book keeping

- apply for small business loans

- arrange yearly reports

You'll need a lawyer to :

- handle daily legal issues

- write simple contracts

You'll need an entire room full of sales people to

- perform business development tasks

- call every number in the phone book

- manage and maintain customer indexes

You'll need a "copy boy" to

- run errands

- copy things

- distribute mail


Now in 2018

You'll need

- an app for your phone to scan receipts into your accounting software

- an accounting app to perform year end reports and to manage your bank accounts

- an app to click together legal documents based on a wizard

- a customer relationship manager application

- a web site service for your home page

- etc...

Let's imagine you are a lawyer in 1980...

- You'd study law

- Graduate

- Take a junior position doing shit work

- Pass the bar

- work for years taking your boss's shitty customers

- work for years trying to sell your body to get your own customers

- once your portfolio was big enough, you'd become a senior partner who would take a cut from everyone else's customers.

The reason the senior lawyer hired junior lawyers was because there was a massive amount of work to do and a senior partner would spend most of their time talking and delegating the actual work to a team of juniors, researchers and paralegals.

Now the senior can do 95% of the work themselves by using an iPad with research and contract software installed in less time than it would have taken to delegate. So where a law firm may have employed 10-20 juniors, paralegals and researchers in 1980 per senior, today, one junior lawyer probably can easily handle the work placed on them by two seniors.

There's no point hiring tons of people anymore. Creating a startup that is dependent on head count is suicide from the beginning. If you're a people-based company, then the second someone smarter sees there's a profit to be made, they'll open the same type of business with far more automation.


Cray slaps an all-flash makeover on its L300 array to do HPC stuff

Silver badge

What is the goal to be accomplished?

Let's assume for the moment that we're talking about HPC. So far as I know, whether using Infiniband or RDMAoE, all modern HPC environments are RDMA enabled. To people who don't know what this means, it means that all the memory connected to all the CPUs can be allocated as a single logical pool from all points within the system.

If you had 4000 nodes at 256GB of RAM per node, that would provide approximately 1 petabyte of RAM online at a given time. Loading a dataset into that RAM will take some time, but compared to performing large random access operations across NVMe, which is REALLY REALLY REALLY slow in comparison, it makes absolutely no sense to operate from data storage. Also, storage fabrics, even using NVMe, are ridiculously slow due to the fact that even though layers 1 through 3 are in fact fabric-oriented, the layer 4-7 storage protocols are not suited for micro-segmentation. As such, it makes absolutely no sense whatsoever to use NVMe for storage-related tasks in supercomputing environments.
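The arithmetic behind that "approximately 1 petabyte" figure, in binary units:

```python
nodes = 4000
ram_per_node_gb = 256

total_gb = nodes * ram_per_node_gb          # 1,024,000 GB
total_pb = total_gb / (1024 * 1024)         # GB -> PB in binary units
print(f"{total_gb:,} GB ≈ {total_pb:.2f} PB of RDMA-addressable RAM")
# → 1,024,000 GB ≈ 0.98 PB of RDMA-addressable RAM
```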

Now, there's the other issue. Most supercomputing code is written using a task broker that is similar in nature to Kubernetes. It spins up massive numbers of copies wherever CPU capacity is available. This is because, while many supercomputing centers embrace language extensions such as OpenMP to handle instruction-level optimization and threading, they are generally skeptical about run-time type information which would allow annotation of code with attributes that could be used while scheduling tasks.

Consider that moving the data set to the processor upon which it will operate can mean moving gigabytes, terabytes or even petabytes of memory transfer. However, if the data set were distributed into nodes within zones, then a large scale dataset could be geographically mapped within the routing regions of a fabric and the processes which would require moving megabytes or gigabytes at worst can be moved to where the data is when needed. This is the same concept as vMotion but far smarter.

If the task is moved from one part of the supercomputer to another to bring it closer to the desired memory set, the program memory can stay entirely intact and only the CPU task will be moved. Then, on heap read operations, the MMU will kick in to access remote pages and then relocate the memory locally.

It's a similar principle to map/reduce, except that in a massive data set environment map/reduce may not work given the unstructured layout of the data. Instead, marking functions with RTTI annotations can allow the JIT and scheduler to move executing processes to the closest available zone within the supercomputer to access the memory needed by the following operations. A process move within a supercomputer using RDMA could happen in microseconds, or milliseconds at worst.
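As a toy illustration of that scheduling decision (the function name, parameters and the fixed migration overhead are my own inventions, not any real scheduler's API): compare the transfer cost of the process image against the transfer cost of the data set, and migrate whichever is cheaper to move.

```python
def should_move_task(task_image_bytes: int, dataset_bytes: int,
                     link_bytes_per_sec: float,
                     migration_overhead_sec: float = 0.001) -> bool:
    """Return True if migrating the task to the data's zone is cheaper
    than pulling the data set across the fabric to the task's node."""
    cost_move_task = task_image_bytes / link_bytes_per_sec + migration_overhead_sec
    cost_move_data = dataset_bytes / link_bytes_per_sec
    return cost_move_task < cost_move_data

# A 100 MB process image versus a 2 TB data set over a 100 Gb/s
# (12.5 GB/s) fabric: moving the task wins by about four orders of magnitude.
print(should_move_task(100 * 2**20, 2 * 2**40, 12.5e9))  # → True
```

A real scheduler would also weigh queue depth, NUMA placement and contention, but the cost asymmetry is the point.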

Using a system like this, it could actually be faster to simply have massive tape drives or reel to reel for the data set as only linear access is needed.

But then again... why bother using the millions of dollars of capacity you already own when you could just add a few more million dollars of capacity?


Norwegian tourist board says it can't a-fjord the bad publicity from 'Land of Chlamydia' posters

Silver badge

Re: Norwegian History

I think if you checked the Norwegian economy, you might find oil and natural gas doesn't account for as much as you might think.

Silver badge

Ummm been done

There's a chain called Kondomriet all over Norway that sells electric replacements for sexual activities that generally require fluid exchange between participants.

They even advertise them pretty much everywhere with an "Orgasm guarantee". Though I wonder if that's just a gimmick. How many people would actually attempt to return a used item such as that?


What's all the C Plus Fuss? Bjarne Stroustrup warns of dangerous future plans for his C++

Silver badge

Mark Twain on Language Reform

Read that and it all makes sense


Wires, chips, and LEDs: US trade bigwigs detail Chinese kit that's going to cost a lot more

Silver badge

There goes buying from the U.S.

My company resold $750 million of products manufactured in the US last year. Already, these products are at a high premium compared to French and Chinese products. They are a tough sell and it’s almost entirely based on price.

Those items are built mostly from steel, chips, LEDs and wires.

Unless those US companies move their manufacturing outside of the US, we’ll be forced to switch vendors, otherwise the price hikes will be a problem for us. I know that the exported products will have refunds on the duties leaving the US, but the vendors cannot legally charge foreigners less than they charge Americans for these products. So, we’ll have to feel the penalty.

So, I expect to see an email from leadership this coming week telling us to propose alternatives to American products.


Intel confirms it’ll release GPUs in 2020

Silver badge

Re: Always good to have competition to rein in that nVidia/AMD duopoly

The big difference between desktop and mobile GPUs is that a mobile GPU is still a GPU. Desktop GPUs are about large-scale cores, and most of the companies you mentioned in the mobile space lack the in-house skills to handle ASIC cores. When you license their tech, usually you’re getting a whole lot of VHDL (or similar) bits that can be added to another set of cores. ARM, I believe, does work a lot on their ASIC synthesis, and of course Qualcomm does as well, but their cores are not meant to be discrete parts.

Remember most IP core companies struggle with high speed serial busses which is why USB3, SATA and PCIe running at 10Gb/sec or more is hard to come by from those vendors.

AMD, Intel and NVidia have massive ASIC simulators, costing hundreds of millions of dollars from companies like Mentor Graphics, to verify their designs on. Samsung could probably do it, and probably Qualcomm, but even ARM may have difficulties developing these technologies.

ASIC development is also closed loop. Very few universities in the world offer actual ASIC development programs in-house. The graduates of those programs are quickly sucked up by massive companies and are offered very good packages for their skills.

These days, companies like Google, Microsoft and Apple are doing a lot of ASIC design in house. Most other newcomers don’t even know how to manage an ASIC project. It’s often surprising that none of the big boys like Qualcomm have sucked up TI, who have strong expertise in DSP ASIC synthesis. Though even TI has struggled A LOT with high-speed serial in recent years. Maxwell’s theory is murder for most companies.

So most GPU vendors are limited to what they can design and test in FPGA which is extremely limiting.

Oh... let’s not even talk about what problems would arise for most companies attempting to handle either OpenCL or TensorFlow in their hardware and drivers. Or what about Vulkan? All of these would devastate most companies. Consider that AMD, Intel and NVidia release a new GPU driver almost every month. Most small companies couldn’t afford that scale of development or even distribution.


UK's first transatlantic F-35 delivery flight delayed by weather

Silver badge

Wouldn’t it be most responsible if....

The F-35s are simply left grounded?

I mean honestly... who in their right mind would fly something that expensive into a situation where they might get damaged?

Let’s face it, if one of these planes becomes damaged in training or in a fight, the financial repercussions would be devastating. That would be massive money simply flushed down the drain.

The pilots are something else we can’t afford to risk. To train an F-35 pilot is so amazingly expensive we can’t possibly afford to place them in harms way.

I think it would be best to just keep the planes grounded.


Microsoft commits: We're buying GitHub for $7.5 beeeeeeellion

Silver badge

Re: Shite

haha I actually should have read the entire post first. I went to the same website you did. I have to admit, I shamelessly download software from there all the time because sometimes I forget how good things are today unless I compare them to the days that came before.

I tried writing a compiler using Turbo C 2.0 recently. That simply did not go well.

Even though they had an IDE, it was single file and it lacked all the great new features we love and adore in modern IDEs. Now I managed to do it. I had a simple compiler up and running within about an hour, but to be fair, it was an absolute nightmare.

That said, the compile times and executable sizes were really impressive.

But of course things like real mode memory was not a great deal of fun. Also whenever you start coding in C, you get this obsessive need to start rewriting the entire planet. I was 10 minutes away from writing a transpiler to generate C code because C is such a miserable language to write anything useful in. No concept of a string and pathetic support for data structures and non-relocatable memory... YUCK!!!

I will gladly take Visual Studio 2017 over 1980s text editors. Heck, I'll take Notepad++ over those old tools.

You should get a copy of some of those old tools up and running and try to write something in them. It's actually really funny to find out that the keys actually don't do what you fingers think they do anymore. And what's worse, try doing it without using Google. :) I swear it's painful but entertaining. GWBASIC is a real hoot.

Silver badge

Re: @Harley

I hope you don't mind me asking.

Have you written anything that would signify anyone actually knowing your name?

Writing books since you were an infant?

"and still some cunts get me name wrong"

I'm curious, where is the connection? Just because you have a book that has been published in a small handful of languages? 47 languages suggests that your books probably weren't interesting enough to be picked up broadly. I would say that if your book was published in only 47 languages then :

a) It was probably some fiction novel of some type

b) It didn't catch on enough to justify translating it for lower volume markets

c) It probably hasn't seen the NY Times best seller list and if it had, it was at 97th place for a week.

I suppose I could go on, but let me say that if your book was only translated into 47 languages, there could be a good reason no one has heard of you and certainly no one would know how to spell your name.

Also, while I'm possibly one of the most arrogant and stuck up assholes on the register, I like to occasionally contribute something positive and informative. In the last month, you've written not a single positive or informative comment on any article or as a response to someone else's comments. Your entire purpose for posting on the comments is purely to make snotty one line remarks that are generally degrading.

Now I'm not going to suggest that I'm "Mr. Ray of Fucking Sunshine" over here. But seriously man, did you actually just refer to someone as a cunt... for mistyping a name that probably no one has ever heard of outside of your personal social circle?

I'll make the assumption that you're English as I've never seen another culture on Earth that tosses that word around so nonchalantly as the English do. And to help you better understand yourself, I'll use something I learned from a fellow countryman of yours.

Simon Cowell once made a remark: "Miss, it is your parents' job to tell you how pretty you are and how pretty you sing. But did you ever consider recording yourself and listening to your own singing before coming on this show? You're awful."

Now I'm sure that girl is running around telling everyone how she should be taken more seriously because she has been seen performing in 47 countries and subtitled in 47 languages. And I'm sure that your mom and dad read the first 15 or 20 pages of your book(s) so they can tell you how great an author you are. But let's be honest, the depths of your thinking are far too shallow to be successful as a writer. A creative mind would be able to perform far better than to resort to choosing the most offensive word in his vocabulary to describe a person who mistyped the name of someone no one has ever heard of.

I think I'll try to help make you famous. I do a great deal of public speaking in my work. I do this in at least 47 countries where people have actually paid to hear me speak in all of them. I'm really famous you know... I'm probably almost as big as David Hasselhoff is in Germany... umm maybe not quite.

So what I'll do for you is that from now on, whenever I am trying to explain a person who sees themselves as being more impressive than they really are, I'll refer to them as a "J.R. Hartley... and that's with a T". So for example :

I was listening to a climate denier on Fox News the other morning and he made a real ass of himself by publicly claiming he has the ears of the leaders of 47 nations. I mean seriously, could he possibly be more of a J.R. Hartley... and that's Hartley with a T... like the great author as opposed to Hartley without a T... like the broken motorcycle.

I bet with that kind of publicity, you might even get translated to a 48th language someday and then... you will be REALLY famous and no one will ever be a cunt and mistype your name again. And I'm willing to do this just for you... because you my good friend J.R. Hartley with a T are a ray of fucking sunshine!


Five actually useful real-world things that came out at Apple's WWDC

Silver badge

Re: Damn it

I’m gonna probably jump to Android soon. I’ve used iPhone since the early days and am pretty much tired of the non-stop “Apple works with everything, as long as it’s Apple.”

Home automation works if you have a unit in every room. The Amazon Echo is $99 and the Echo Dot is $29. So in a house with 5 bedrooms, a living room, a kitchen, two bathrooms and two hallways, the Echo is expensive but a reasonable solution. The HomePod is too big to begin with and, even at half the price, would be too expensive.

I spend about $1000 a year on the iTunes Store. To control my music, either I have to store it on a server after downloading it or I have to use an Apple device. Movies can’t even be decrypted legally, so Apple is a requirement. We have 6 screens in the house, 4 have Chromecast built in. One has an Apple TV and the last has a PC.

We don’t want to add Apple TV to all the screens because they would need separate power and separate remotes. Then there’s the mounting issues.

So, we often find ourselves renting films on Google that we already own on iTunes.

The door locks we have aren’t compatible with any service, but writing a skill for Alexa took about an hour. Writing a function for Cortana took 15 minutes.

I don’t believe I will be allowed by Apple to write the skill for Siri, so I’d have to throw away $2000 of perfectly good door locks.

I love my iPhone 6S Plus. But every iPhone patch breaks something new. Watching videos gets more and more inconvenient. My audible app actually skips... it sounds like a scratched record. I have an iPhone X but I’ll end up dead from using it.

Then there’s my car. iPhone integration isn’t bad. But if I want proper integration, I’ll have to pay $400 a year to BMW.

So, I may end up switching to Android even though I hate Android just because it actually gives me options. So I’ll have a phone that sucks, but at least it will work with my other stuff.

Oh, there’s the other issue. I’ve been waiting 8 years for a new line of Macs to buy. The last notebook Apple made which didn’t suck was the MacBook Air 11 inch. I still use a 2011 model of it. And Mac Mini is so out of date it is horrifying. If Apple doesn’t make a new PC suitable for software development before my MacBook dies, I don’t think I’ll buy anything current.

I’m pretty sure Apple as a tech company died with Steve Jobs :(


Have you heard about ransomware? Now's the time to ask: Are you covered?

Silver badge

Sure... why simply protect yourself?

Ransomware is for people who can’t turn on Windows Backup/Restore or Apple Time Machine.

How bloody hard is it to simply enable automatic recovery options in the OS? If your company is ever hit by ransomware, it’s because your IT staff or firm is incompetent.

In Windows, it’s a single group policy setting.

On Mac, if you haven’t read “Mac for enterprise” documentation and learned how to onboard a Mac for management, you’re a fool. It’s just like group policy.

These are not advanced features. These are sys admin 101 things.


If you have cash to burn, racks to fill, problems to brute-force, Nvidia has an HGX-2 for you

Silver badge

CPU from the Terminator?

When I saw the picture, it reminded me of the CPU from the Terminator. Maybe this is what it looked like before it was shrunk?

I’m not sure if that’s relevant when discussing AI


IBM's Watson Health wing left looking poorly after 'massive' layoffs

Silver badge

Re: Merge?

I’ve walked into companies, seen HPE and walked out. It’s just not worth the pain. Every time you give them money, they take it, and sell off the business unit. And their servers and networking are just not good enough.


Dixons to shutter 92 UK Carphone Warehouse shops after profit warning

Silver badge

Re: No surprise

I shopped at Dixon’s last summer while visiting Ireland. They blatantly screwed me. The advertisement sitting on the counter triggered my impulse purchase of an LTE modem with an included data package, clearly marked as such, yet they insisted it did not come with the SIM card, said I would have to buy that separately, and refused to take the product back.

The time before that, a few years earlier, they screwed me on something else, but I chalked it up to a failure by the store to train their people.

I am allowed to spend about £500 per person while traveling and remain in my duty free limit. So, when the family and I travel, we spend about £2200 on crap we don’t need but can’t survive without and get duty refunds on the expensive stuff. We also know whatever we buy is disposable, if it breaks, we throw it away.

It’s pretty common for us to travel to countries which have Dixon’s two or three times a year. And we spend precisely £0 there... even if they have a better price.

There are almost no companies I wish financial ruin on. But Dixon’s is one of the few that I do.


Epyc fail? We can defeat AMD's virtual machine encryption, say boffins

Silver badge

Re: The attack can only be partially mitigated

Deep packet inspection is generally not worth much. Unless your deep packet inspection engine can sandbox all code and all data that passes through it, it will never be able to provide better security than proper endpoint protection.

Deep packet inspection doesn't offer much more than rate limiting the nonsense traffic, but that much is certainly worth it. Whether you're using Snort-based Cisco products or pfSense... or whatever, there is value.

That said, I actually come from a broadcast video background. I spent last evening speaking about SDI forward error correction and non-return-to-zero with a fellow engineer and my 14-year-old daughter. The other guy and I worked together for years developing chips and firmware for those things.

I'd be pretty hard pressed to see any circumstance where there would be any value in an IPS on video content delivery channels. I certainly could never identify a circumstance where there's any value in 40Gb/s networking, unless you're buying into the looney tunes nonsense Cisco started by trying to sucker their customers into buying 10Gb/s networking for delivering content that could be delivered at 800Mb/s with almost no compression (as in 1.5Gb/s SDI, which has about 1.1Gb/s of actual data and can easily compress below 1Gb/s without loss or latency issues).
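The back-of-envelope behind those SDI numbers, assuming 4:2:2 10-bit HD over a 1.485 Gb/s HD-SDI link (the ~1.1 Gb/s payload figure corresponds to the higher 29.97 fps rate; 25 fps is shown here as an example):

```python
# 4:2:2 10-bit video carries 20 bits per pixel (10 luma + 10 chroma on average)
active_bits_per_frame = 1920 * 1080 * 20
fps = 25                                    # e.g. 1080i25
payload_gbps = active_bits_per_frame * fps / 1e9
link_gbps = 1.485                           # nominal HD-SDI line rate
overhead = 1 - payload_gbps / link_gbps     # blanking + ancillary data share

print(f"payload ≈ {payload_gbps:.2f} Gb/s, link overhead ≈ {overhead:.0%}")
# → payload ≈ 1.04 Gb/s, link overhead ≈ 30%
```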

If you're a CDN, you're scaling up when you should be scaling out. That's putting a lot of eggs in one basket. It's a very 1990's-2000's way of thinking. It didn't scale then, it doesn't scale now.

Of course, I'm purely speculating on your design, but even if you're a big production studio handling lots of multi-camera ingest, you are probably way too over-provisioned. Also, if you're doing layered security, you should never be in a circumstance where you'd need to inspect more than a few megabytes a second of traffic.

But again, I'm speculating. Every design usually has a reason other than "we like to spend money"... but these days, with the advent of all the SMPTE members pushing for uncompressed (idiots) because it allows them to make A LOT MORE MONEY, a lot of people are falling for it.

Silver badge

Re: The attack can only be partially mitigated

Not really about the host.

If there’s an attack vector available to a VM from the host... which I’m confident there always must be, due to the thought process I followed above, then the issue is whether it’s possible to always mitigate the attacks from the guest to the host. And it should be, by employing the old dynamic recompilation support which was used in hypervisors to trap things like legacy inb/outb instructions.

As such, it’s whether someone can hop contexts and read memory of other guests on the same host.

I make a huge effort to encrypt sensitive data (like keychains) in TPM when I’m coding. But so far as I know, there is still no solid TPM virtualization tech.

Silver badge

The attack can only be partially mitigated

So long as there's a means to provide plain-text memory access to virtual machines for things like communication with something other than the virtual machine itself... like the hardware or hypervisor for example, it will always be possible to alter the SLAT to choose which memory to encrypt and which memory to not encrypt.

I hadn't considered this attack vector earlier, but now that it's in the open, it's obvious that there is no possible way to create a walled garden suitable to this as there will always have to be gates available.
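The "gates in the walled garden" point can be made concrete with a toy model (all names hypothetical; this is an illustration of the principle, not of any real hypervisor's data structures): as long as the hypervisor owns the second-level address translation, it decides per page whether memory is treated as encrypted.

```python
# Toy model of the point above: whoever controls the SLAT controls which
# guest pages are encrypted, so the host can always carve out a cleartext
# window. Hypothetical names; purely illustrative.

class SlatEntry:
    def __init__(self, host_frame, encrypted=True):
        self.host_frame = host_frame
        self.encrypted = encrypted   # C-bit analogue, controlled by the host

class ToyHypervisor:
    def __init__(self):
        self.slat = {}               # guest page number -> SlatEntry

    def map_page(self, gpn, host_frame, encrypted=True):
        self.slat[gpn] = SlatEntry(host_frame, encrypted)

    def host_read(self, gpn, memory):
        entry = self.slat[gpn]
        # Encrypted pages look like noise to the host; cleartext ones don't.
        return b"<ciphertext>" if entry.encrypted else memory[entry.host_frame]

memory = {0: b"guest secret"}
hv = ToyHypervisor()
hv.map_page(gpn=0, host_frame=0, encrypted=True)
assert hv.host_read(0, memory) == b"<ciphertext>"

# The "gate": the host flips the encryption bit for one page...
hv.slat[0].encrypted = False
assert hv.host_read(0, memory) == b"guest secret"
```

The gate has to exist because something (virtual NIC buffers, frame buffers, hypervisor mailboxes) must be shared in the clear, and the guest has no authority over which pages those are.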

Let's not overlook that an additional attack vector would be to pause scheduling to the VM, allocate a new virtual page, inject it into the SLAT marked as clear text, then push code into that page, and find a means to trigger it. I would recommend through the VM network driver for example.

There's that attack vector too.... it should be possible to exploit the VM virtual NIC driver. VMXNET3 is a famously bad driver. After doing a code audit of VMware's Linux kernel drivers, I transitioned away from VMware because there were so many completely obvious security holes that I couldn't run my servers on the platform in good faith. There was that, and the $800,000 in licenses I was paying for it... which everyone else just gives away for free now.

So, the real trick would be to inject a VIB on VMware which would allow code injection through VMXNET3 or the video driver which is even better as there's the wide open window to inject shaders into OpenGL or DirectX which is almost certainly being run as MesaGL software rasterizer or WARP.

This would be perfect... create a clear text page, trigger a window size change to trigger resolution change. Provide the clear text page as the frame buffer to the guest... and voila, there's a clear path to start uploading code for graphics rendering. This will likely not work well with NVidia Grid, but there are like 5 people in the world using that.

haha... this article was great.... now that I know that it counts as an attack if you attack the guest from the host, it opens an endless barrel of worms.

I need to update my CV to say "Security Researcher" and hack some VIBs together. It's not even a challenge.


IPv6 growth is slowing and no one knows why. Let's see if El Reg can address what's going on

Silver badge

Lots of stuff going on here

I've been running IPv6 almost exclusively for a decade at home. I've been running IPv6 at work for about 5 years as well.

Let's assess a few of the real reasons for IPv6 not happening.

Security :

With IPv4, you get NAT which is like a firewall but accidentally. It's a collateral firewall :) The idea is that you can't receive incoming traffic unless it's in response to an initial outgoing packet which creates the translation. As such, IPv4 and NAT are generally a poor man's security solution which is amazingly effective. Of course opening ports through PAT can mess that up, but most people who do this generally don't have a real problem making this happen. With modern UPnP solutions to allow applications to open ports as needed at the router, it's even a little better. With Windows Firewall or the equivalent, it's quite safe to be on IPv4.

IPv6 by contrast makes every single device addressable. This means that inbound traffic is free to come as it pleases... leaving the entire burden of endpoint security to the user's PC, which more often than not is vulnerable to attack. IPv6 can be made a little more secure using things like reflexive ACLs or a good zone-based firewalling solution, but with those options enabled, many of the so-called benefits of one IP per device dissolve.
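The "collateral firewall" behaviour can be sketched in a few lines (an illustrative model only, not a real NAT implementation): inbound traffic is only accepted when it matches a translation created by an earlier outbound packet.

```python
# Minimal sketch of NAT as an accidental firewall: a translation entry is
# created by outbound traffic, and inbound packets without a matching entry
# are silently dropped. Illustrative model, not a real NAT.

class ToyNat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}        # (inside_ip, inside_port) -> public_port
        self.reverse = {}      # public_port -> (inside_ip, inside_port)
        self.next_port = 40000

    def outbound(self, inside_ip, inside_port, dst):
        key = (inside_ip, inside_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        # Packet leaves with the translated source address/port.
        return (self.public_ip, self.table[key], dst)

    def inbound(self, public_port):
        # Unsolicited traffic has no translation -> dropped (None).
        return self.reverse.get(public_port)

nat = ToyNat("203.0.113.1")
nat.outbound("192.168.1.10", 5000, ("198.51.100.7", 443))
assert nat.inbound(40000) == ("192.168.1.10", 5000)  # reply allowed in
assert nat.inbound(40001) is None                    # unsolicited: dropped
```

Nothing about this was designed as security; it just falls out of the translation table, which is exactly why it works so well for home users who never configure anything.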

No need for public addresses:

It's really a very small audience who needs public IP addresses. In the 1990's we had massive amounts of software written to use TCP as its base protocol and to target point-to-point communication requiring direct addressing. This is 2018; almost every application registers against a cloud based service through some REST API for presence. When two end points need to speak directly with one another, the server will communicate desired source and destination addresses and ports to each party, and the clients will send initial packets to the given destinations from the specified sources to force the creation of a translation at the NAT device. Unless the two hosts are on the same ISP with the same CG-NAT device serving them both, this should work flawlessly. Otherwise, a sequence of different addresses will need to be tried to find the right combination to achieve firewall traversal.

In short, we no longer have a real dependency on IPv6 to provide public accessibility.
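The rendezvous pattern described above can be sketched as follows (hypothetical names; real implementations follow the STUN/ICE approach of RFC 8445): the cloud service records each client's public address as seen from outside, then hands each peer the other's address so both sides can punch holes in their own NATs.

```python
# Sketch of NAT-traversal rendezvous: the server observes each client's
# public (ip, port) and tells each peer where to send its first packet,
# so both NATs create matching translations. Hypothetical names.

class RendezvousServer:
    def __init__(self):
        self.clients = {}     # client_id -> observed public (ip, port)

    def register(self, client_id, observed_addr):
        # observed_addr is the source address the server saw the client's
        # packet arrive from, i.e. the post-NAT public address.
        self.clients[client_id] = observed_addr

    def pair(self, a, b):
        # Each side is told the other's observed address; both then send
        # outbound packets, creating the translations that let the peer in.
        return {a: self.clients[b], b: self.clients[a]}

srv = RendezvousServer()
srv.register("alice", ("198.51.100.7", 40001))
srv.register("bob", ("203.0.113.9", 51515))
plan = srv.pair("alice", "bob")
assert plan["alice"] == ("203.0.113.9", 51515)
assert plan["bob"] == ("198.51.100.7", 40001)
```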

Network Load Balancers

20 years ago, only the most massive companies deployed load balancers. Certainly less than 1 in 100 would have hardware accelerated load balancers capable of processing layer-7 data and almost certainly none of them could accelerate SSL.

These days, there are multiple solutions to this problem. As such, a cloud service like Azure, Google Cloud or Amazon can serve hundreds of millions of websites from a few IP addresses located around the world.
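How a few addresses can front millions of sites comes down to routing on the requested hostname (Host header / SNI) rather than the destination IP. A minimal sketch of the routing table (illustrative names and addresses):

```python
# Sketch of host-based load balancing: many sites behind a few public IPs,
# with the balancer picking a backend pool by hostname, round-robin within
# the pool. Illustrative table only.

backends = {
    "example.com": ["10.0.1.1", "10.0.1.2"],
    "example.org": ["10.0.2.1"],
}

def route(host_header, counters={}):
    """Pick a backend for the requested site, round-robin per site."""
    pool = backends.get(host_header)
    if pool is None:
        return None                      # unknown site -> rejected at the edge
    i = counters.get(host_header, 0)     # shared counter state per hostname
    counters[host_header] = i + 1
    return pool[i % len(pool)]

assert route("example.com") == "10.0.1.1"
assert route("example.com") == "10.0.1.2"   # round robin within the pool
assert route("nosuchsite.test") is None
```

With TLS terminated at the balancer (SSL acceleration being the hard part 20 years ago), the hostname is visible either way, so the destination IP stops mattering.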

File transfer services

No one copies files directly from one computer to another anymore. We don't set up shares and copy. We copy to a server and back down again, or use sneakernet with large USB thumb drives. With DropBox, OneDrive, Box, etc. in addition, the largest files on our hard drives are cloud hosted anyway. So if we lose a copy, we just download it again.

I can go on... but we simply don't need IPv6 anymore. The only reason we're running out of IP addresses is hoarding. I know of more than a few original Class B networks which have 10 or fewer addresses in legitimate use. People are hoarding addresses because they are worth A LOT of money. One guy I know is trying to sell a Class B to a big CDN and is asking $2 million, and it's probably worth it at today's rates.

IPv6 is about features. It's a great protocol and I love it. But let's be honest, I'll be dead long before IPv4 has met its end.


Microsoft returns to Valley of Death? Cheap Surface threatens the hardware show

Silver badge

Build said Windows Store is temporary

Windows Store is mandatory at first so that users download the appropriate installers.

But in the future, MSIX should cover direct distribution through alternative channels. I think they just want to be able to gain meaningful telemetry on ARM products before unleashing the beast.

That said, Windows Store has improved... I’ve been using it far more often the past few months. I’m not sure what they did, but it seems less covered with crapware, and it looks like someone is actually monitoring it now.

Silver badge

Re: Low Cost? at $499!

I’d imagine that this will be an ARM based device with LTE. The power usage is already good on the platform but will probably improve over time.

Also consider the trade-off of capex vs opex. You may pay less, but a few-year-old model will not support hardware video decoding of newer codecs. As such, if you’re watching lots of films or clips, the battery will drain FAST on the older machine. Also, depending on which streaming services you use, to gain support for W3C DRM you may be forced to download H.264 instead of more modern video formats, which can easily increase your bandwidth usage over LTE by 3-4 fold.

This, surprisingly, is the best reason to buy new tablets every 3 years.

Silver badge

Re: Low Cost? at $499!

I have some of those $50-$100 tablets. I’m pretty sure that although they run varying editions of Android, we’re talking a different device class.

I think $200 is where tablets start to become almost usable. $300-$400 actually provides a nice experience. $3000 is absolutely frigging brilliant.

I’m not sure how I feel about the $50-$100 range. They have value in some cases, but they get very expensive because they almost never support software updates and very often, their screens have major touch issues making them very difficult to use at all. So you tend to need to buy 3-4 of them for every $300-$400 tablet you’d have bought otherwise.

The one exception I might concede is the $200ish Lenovo items. Of course, the LTE models are more expensive.

Silver badge

Re: It doesn't much matter

I honestly don’t understand the Windows hate. I’m quite fond of Windows and Mac and ElementaryOS.

I like them for different reasons. I honestly could never imagine coding for a living on a Mac. Even when coding for Mac, I use Windows and I very heavily use Ubuntu through WSL which is far far better these days than the Mac command line experience.

Also, Windows 10 is frigging fast. Of course I’m using a Surface Book 2 15” which is a mega-beast of a machine. But I also use some much older equipment and I just can’t feel the hate.

Stability wise it is crazy. I don’t bother rebooting except after major Windows updates and sometimes it takes a few reboots when installing a new machine. Using tech like .NET, even when I’m developing, my programs can go weeks or months without a crash.

That said, Mac has a lot of good stuff too. The App Store is still nicer. Apple is still about a million miles away from getting VPN or Remote Desktop right though. Mac will probably never win any prizes for being fast, but it’s very consistent. What I love about coding on Mac is how the development tools do an awesome job of helping you find the right place to put your text and such.

When working as an IT guy (a big part of my job), I like the Mac a lot. Being a network engineer doesn’t require anything fancy. Just a web browser, a text editor, ssh, telnet and serial. Omnigraffle is nice too, but I tend to use PowerPoint there.

Again, can’t feel the hate.

These days, since the Mac keyboards have gotten so bad, I tend to use either a MacBook Air from 2011 or a PC. The latest and greatest MacBook Pro sits docked to some screens. I find, however, that I almost never use it anymore unless it’s via iRapp, because I just can’t stand typing on it anymore :(

Of course if you have a favorite computer and can do your stuff... go for it. I actually recommend trying a Surface Book 2 at some point. Add some mixed reality goggles and you’re set for life.


Blighty's super-duper F-35B fighter jets are due to arrive in a few weeks

Silver badge

A plane so expensive it’s useless

So, an F-16 can be had for $20 million with all the bells and whistles. An F-22 can be had for about $130 million fully loaded. An F-14 is about $22 million.

For the price of a single F-35, an entire squadron could be equipped, and while I’m sure the F-35 is really nifty, I wonder how well it would perform against a few dozen drones and/or F-14s/F-16s flown by highly competent pilots.

Consider that it should never be possible for an F-35 pilot to log enough hours to become as skilled in the plane as he/she could have in an F-14 or F-16. The reason is simple wear and tear. With such an incredibly high operating cost, no F-35 pilot will ever be able to clock 1000+ hours in simulated conflicts in such a plane. Flight time was expensive on old airplanes like the F-14 too, which is why Korean War era jets were used for training. Even with advanced flight simulators, this will never work on the F-35. It would probably cost a minimum investment of $500 million per trained pilot.

As for stealth, are you seriously trying to convince me that radio and/or heat invisibility has any value in an era where we can simply target on sight instead? If I were a country posing a threat to any country with aircraft carriers, I could easily launch high resolution optics into low earth orbit to track said aircraft carriers for peanuts. I would know precisely where each carrier was and would pick up the exhaust plume from any take-offs, which could then be visually tracked.

As for all the fancy AI features and tech... I’m sorry, unless the pilots are engineers with 20+ years experience in multiple disciplines of technology, the economy of scale required for proper bug reporting cannot be achieved. Consider for example the programs you currently use.

Software which costs A LOT and is only available to a limited number of technicians is buggy as hell; see ServiceNow or Cisco ISE for examples. Consider Apple’s Final Cut Pro, which used to cost thousands of dollars. It was a bug-ridden piece of shit. Users tended to find workarounds rather than report bugs, and the bugs they did report were generally quite awful in how they were written.

Software with thousands or hundreds of thousands of users produces public forums that greatly increase the number of bugs reported, multiple times by multiple sources, allowing them to be addressed and proper fixes to be created.

The only alternative would be for the developers to actually dogfood their own products in real production environments. This way, when they encounter the problems themselves, they could properly instrument their systems and build fixes far more efficiently.

With a billion dollar aircraft, there is no chance in hell any government will allow a developer/engineer into a cockpit and afterwards let them duct tape a 3D printed diagnostics tool to it without months or years of lab testing first. Trial and error troubleshooting is completely out of the question.

The fact is, the guy/gal capable of diagnosing and fixing the problem won’t be allowed anywhere near the driver’s seat of the vehicle to do their jobs. They probably won’t even be allowed on the carriers to observe from nearby.

There are so many problems with a plane that costs this much from a purely common logic perspective it is sickening.

They built a plane that costs so much that as soon as one crashes, malfunctions, etc... the cost is so high the rest will have to be grounded until an investigation committee approves further flight trials. Let’s not forget that if a plane malfunctions and a pilot bails out, no matter how awesome that pilot may be, he/she will never see the inside of a cockpit again. You simply don’t crash a billion dollar aircraft and expect governments to turn the other cheek. In fact, you probably will never find a job flying for FedEx after that.

This might be the dumbest aircraft project in history. Right up with the Russian space shuttle project.

For a billion dollars, a country could design, build and deploy over 10,000 long range, armored kamikaze drones. They can be controlled like video games and can fly, land and explode on 10,000 targets simultaneously. No need for nukes. No need for massive bombs and earth shattering explosions. A single automated factory can produce and deploy them as fast as you can feed it materials. If done properly, a ship could be equipped as a floating factory capable of always building the latest model as needed. It might even be possible to do it from blimps or other airships.

It would be possible to drop hundreds or thousands of drones from near space on a city, then activate flight systems as they approach the ground, fly to pre-coordinated targets such as building supports, and demolish entire cities. Just pop up something like Google Maps, click the positions on each building to deploy a bomb drone, drop 50% more than you need, and let them all navigate to where they will be needed, stick themselves in position and wait for the “all clear”.

So while all the F-35 nations are wasting their budgets on useless planes and trying to pass rules about how drones can be used in warfare, countries with cheap labor and limited financial resources are probably figuring out how to 3D print most of their parts, stockpiling materials and preparing for a new type of warfare that F-35s aren’t ready for.


Cheap-ish. Not Intel. Nice graphics. Pick, er, 3: AMD touts Ryzen Pro processors for business

Silver badge

Re: Microsoft priority for "business" ryzen flawed

Linux doesn’t necessarily have a standard security stack which is probably the issue. There are many Linux kernel and virtualization security features and AMD does generally support those. But Windows makes a fairly well defined set of APIs for the platform as a whole. This means that when you use the Windows encryption APIs, if the CPU supports hardware encryption, it will be hardware accelerated.

On Linux, you would need an OpenSSL implementation that makes use of kernel modules for encryption, which may or may not be vendor specific. The same goes for the multitude of other encryption APIs. One downside is that if a bug is found, on Windows the next Windows update will theoretically fix it for everything, while on Linux every kernel module and every encryption library would have to be updated separately. That said, the response time to patch these libraries is FAST!!! But if you’re using a Cisco ISE server, it could take 8 months to a year and still not actually be patched... which is why software like this from companies like Cisco should be avoided at all costs.
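The single-platform-API argument boils down to where the patch has to land. A tiny illustration (hypothetical names; hashing stands in for any crypto primitive): when every consumer goes through one platform facade, fixing the facade fixes all of them at once, whereas per-library implementations each need their own patch.

```python
# Illustration of the patching argument: one central crypto facade means
# one update fixes every consumer. Hypothetical names; SHA-256 hashing
# stands in for any platform-provided crypto primitive.
import hashlib

def platform_digest(data: bytes) -> str:
    # The one central implementation. If a flaw is found, swap the
    # algorithm here and every consumer below is fixed by the same update.
    return hashlib.sha256(data).hexdigest()

# Multiple "applications" all delegating to the platform facade:
def backup_tool(data: bytes) -> str:
    return platform_digest(data)

def update_checker(data: bytes) -> str:
    return platform_digest(data)

assert backup_tool(b"x") == update_checker(b"x")
assert len(platform_digest(b"")) == 64   # 256-bit digest as hex
```

On Linux the equivalent consumers often each bundle or link their own implementation, which is exactly why one flaw can mean dozens of independent patch cycles.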

AMD is working just as hard as Intel to support Linux in this sense. But Linux also depends very heavily on the community to update their libraries as quickly as possible. So, if a flaw is found in an AMD encryption or security library, it is very possible that the developers won’t have access to an AMD platform to verify against. Though many online CI/CD services exist which probably will.

That said, I tend to unit and integration test my security code against a very limited set of CPUs: the recent Intel generations and a handful of specific ARM CPUs. I probably won’t pay the additional money to test against AMD; the volume wouldn’t justify it. It would be safer to just say “Use at your own risk on AMD”. If AMD ever gains a noticeable market share again, I’ll consider otherwise.

Of course, I am developing all my server applications against Raspberry Pi now because I simply can’t write code bad enough to justify more than that. I am writing a management system for 2.5 million active users at this time, and since everything other than our internet systems is cloud based now, I could never imagine needing more than a few Raspberry Pis to handle the few million transactions a day we’re processing.

It was pretty awesome all things considered. A data center at $100 a node after power, storage and connectivity vs our old servers at $120,000 a node. Better still, thanks to in-memory databases and map/reduce, it’s much faster on the Raspberry Pis because we’re using the money saved on IT to focus more on good development practice.


Spine-leaf makes grief, says Arista as it reveals new campus kit

Silver badge


The problem with modern data center design is attempting to solve redundancy through networking. The safest design approach is hub and spoke. Three separate clusters built using hub and spoke simplifies the network design greatly.

Then, all services should be run as a cluster in all three locations. Instead of fighting to keep a service running at all times in each cluster, simply fight to make sure that at least one cluster is operating at all times.
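The availability arithmetic behind "keep at least one of three clusters up" is worth spelling out (this assumes independent cluster failures, which is the whole point of three separate hub-and-spoke sites):

```python
# Sketch of the three-cluster availability argument: the service is up
# unless every cluster is down at once. Independent-failure assumption,
# purely illustrative.

def service_available(cluster_states):
    """The service survives as long as any one cluster is up."""
    return any(state == "up" for state in cluster_states)

assert service_available(["up", "down", "down"])
assert not service_available(["down", "down", "down"])

# With independent failures, three clusters each only 99% available give:
p_down = 0.01
p_all_down = p_down ** 3
combined = 1 - p_all_down
print(f"combined availability: {combined:.6f}")
```

That is, three unremarkable clusters beat one heroically engineered one, which is why chasing per-cluster uptime through network redundancy is the wrong fight.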

This design sounds expensive, but consider the decrease in network and fabric and interface costs and the cost of a third cluster is negligible.

This is not 1994. All server software today is designed to operate as n+1 where n is at least 2. Systems like Azure Stack, Kubernetes, Mesos, etc. are all more than capable of ensuring a properly operating service in this design. Also, moving to a NoSQL + object storage environment, with possibly a scale-out file server for legacy applications, means that N+1 storage can be easily handled on gigabit Ethernet.

This design eliminates the need for vMotion/live migration, eliminates the need for SAN and decreases design costs on CapEx as well as OpEx across the board.

Designing based on a “VMware is the only way” approach increases costs at least 10 fold. It increases cost of hardware, software and more. In addition, it makes it so the platform is generally designed from the aspect of how an IT crew would see it without any understanding of what services are actually needed by IS.

A recent study I participated in showed that more than 95% of the cost of building and operating a data center was actually based purely on the cost of building and operating the management systems for the data center. By rethinking the design based on the operations of IS, we could increase uptime substantially and decrease costs even more.

The moral of the story is, friends don’t let friends let IT people anywhere near their data centers.


You have GNU sense of humor! Glibc abortion 'joke' diff tiff leaves Richard Stallman miffed

Silver badge

Shouldn’t quality and professionalism be the issue?

Man pages on Linux have been on a nearly consistent decline relative to the number of features added to the system. As authors of Linux utilities depend more on web based documentation, man pages have become more and more horrifying in quality.

Let’s also make clear that there are many of us who believe Stallman should simply be muted and censored as his behavior is generally reprehensible. I have actually experienced opposition to use of LGPL code by legal teams because they feared being associated in any way with such an oaf. I do not discount the contributions made by Stallman, but I believe his damage to the GNU world far outweighs the benefits at this time. He clearly marks everything he touches as questionable with regards to professionalism.

As to jokes in man pages: these can be saved for flame wars in forums. There is no benefit to adding them to documentation that should be free of anything other than empirical data, unless positing a theory with regards to appropriate use. For example: “I would recommend use of an alternative function as the algorithm used in this one may prove questionable with regards to data security.”

I have no opinion regarding the specific joke in question, as I see it as lacking the depth necessary to make it entertaining. I see it as offering no more engaging value than the labeling of a manhole cover. But I also believe that even if it were a funny joke, it’s better left for the forums.


if dev == woman then dont_be(asshole): Stack Overflow tries again to be more friendly to non-male non-pasty coders

Silver badge

Re: Maybe a silly question, but...

I’m utterly confused.

If I ask :

How can I marshal an event generated in a callback on one thread into the user interface thread?

How exactly would a naughty comment be made?
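For what it's worth, the example question has a perfectly dull answer, sketched here in Python (the pattern, not any specific framework's API): the callback thread never touches UI state, it just posts the event to a queue that the UI thread drains on its own schedule. .NET's Control.Invoke and WPF's Dispatcher implement the same idea.

```python
# Sketch of marshalling an event from a worker-thread callback into the
# "UI thread": post to a thread-safe queue, drain it on the UI thread.
import queue
import threading

ui_queue = queue.Queue()
results = []   # stands in for UI state; only the "UI thread" touches it

def background_callback():
    # Runs on a worker thread; just enqueue, never touch the UI here.
    ui_queue.put(("progress", 42))

def ui_pump():
    # Runs on the UI thread (e.g. from the message loop or a timer tick).
    while not ui_queue.empty():
        event, payload = ui_queue.get()
        results.append((event, payload))   # safe: single-threaded mutation

worker = threading.Thread(target=background_callback)
worker.start()
worker.join()   # in a real app the UI pump runs continuously instead
ui_pump()
assert results == [("progress", 42)]
```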




Biting the hand that feeds IT © 1998–2018