* Posts by CheesyTheClown

673 posts • joined 3 Jul 2009

Developer goes rogue, shoots four colleagues at ERP code maker

CheesyTheClown
Silver badge

An American also seems involved.

Many countries have many guns. It’s a US anomaly in human behavior that is causing the shootings. If you haven’t ever been to America, the US is somewhat of a cesspool of hate and almost British-like superiority trips. It’s a non-stop environment of toxicity. Their news networks run almost non-stop hate trips in the hope of scraping by with enough ratings and viewers.

I left America 20 years ago and each time I go back, I’m absolutely shocked at how everyone is superior to everyone else. I just met an American yesterday who in less than two minutes told me why his daughter was superior to her peers.

It’s also amazing just how toxic the hate is. It’s a non-stop degradation of humanity. Every newspaper, news channel, social media network, etc... is absolutely non-stop negativity.

It’s not about the guns... I think the guns are just an excuse now. I think it’s about everyone from the president downward selling superiority, hate and distrust. I’m pretty sure if you took the guns away, it would be bombs.

1
0

Spent your week box-ticking? It can't be as bad as the folk at this firm

CheesyTheClown
Silver badge

Cisco ISE

It sounds like Cisco ISE’s TrustSec tools.

The good news is that in the latest version, the mouse wheel works most of the time. It used to be: click 5 boxes, then move to the tiny little scroll bar, then click 5 more. Now you can click 5 and scroll using the wheel. So safely clicking 676 boxes when you have 26 groups is almost doable without too many mistakes now.
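For anyone wondering where 676 comes from: a TrustSec-style matrix has one cell per source/destination group pair, so it grows with the square of the group count. A quick sanity check in Python (26 is simply the group count from the example above):

    # One checkbox per source/destination group pair in an n-by-n policy matrix
    groups = 26
    print(groups ** 2)  # 676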

2
0

Hello 'WOS': Windows on Arm now has a price

CheesyTheClown
Silver badge

Re: I Wish You Luck

I use ARM every day in my development environment. I work almost entirely on Raspberry Pi these days.

I would profit greatly from a Windows laptop running on ARM with Raspbian running in WSL.

That said, I already get 12 hours of battery life on my Surface Book 2 for watching videos, and I also have a Core i7 with 16GB RAM and a GTX 1060.

Nokia basically destroyed their entire telephone business by shipping underpowered machines with too little RAM because they actually believed battery life was why people bought phones. They bragged non-stop about how Symbian didn’t need 200MHz CPUs and 32MB of RAM, and yet the web did, and when the iPhone came out and was a CPU, memory and battery whore, people dumped Nokia like the piece of crap it was. The switch to Windows was just a final death throe.

After all these years, ARM advocates seem to think people give a crap about battery life and are willing to sacrifice all else... like compatibility or usability... just because they can’t be bothered to carry a small charger with them. I honestly believe that until ARM laptops are down to $399 or less and deliver always-connected Core i5 performance, they won’t sell more than a handful of laptops.

Let’s also consider that no company shipping Qualcomm laptops is making a real effort at it. They’re building them just in case someone shows interest. But really, the mass market doesn’t have a clue what this is or why it matters, and for that much money, there are far more impressive options.

And oh... connectivity. If always connected were really a core business for Microsoft, why is it that my 2018 model Surface Book 2 15” doesn’t pack LTE?

8
6

VMware 'pressured' hotel to shut down tech event close to VMworld, IGEL sues resort giant

CheesyTheClown
Silver badge

Skipped Cisco Live two years and will next

Cisco has been holding Live! in Vegas lately. I have absolutely no interest in me, my colleagues or my customers being in Vegas for the event.

The town is too loud. It’s very tacky. It is precisely the place civilized people would not want to be associated with. Let’s be honest, “what happens in Vegas...” guess what, this is not the kind of professional relationship I want to maintain with those who depend on me or I depend on.

Why would you want to hold a conference in Vegas?

1) Legalized prostitution

2) Legalized gambling

3) Free booze at the tables

4) Free or cheap buffets to gorge yourself at

5) Readily available narcotics of all sorts

6) Massive amounts of waste... not a little, the city must be one of the most disgustingly wasteful cities on earth.

7) Sequins... if that’s your thing.

Can you honestly say that you would want your serious customers to believe this is the type of behavior you associate with professionalism?

56
8

Pavilion compares RocE and TCP NVMe over Fabrics performance

CheesyTheClown
Silver badge

Digging for use cases?

Ok, let’s kill the use case already.

MongoDB... you scale this out, not up. MongoDB’s performance will always be better when run with local disk instead of centralized storage.

Then, let’s talk how MongoDB is deployed.

It’s done through Kubernetes... not as a VM, but as a container. If you need more storage per node, you probably need a new DB admin who actually has a clue.

Then there’s the development environment. When you deploy a development environment, you run minikube and deploy. Done. There’s no point in spinning up a whole VM. It’s just wasteful and locks the developer into a desktop.

Of course there’s also cloud instances of MongoDB if you really need something online to be shared.

And for tests... you would never use a production database cluster for tests. You wouldn’t spin up a new database cluster on a SAN or central storage. You’d run it on minikube or in the cloud on Appveyor or something similar.

If latency is really an issue for your storage, then instead of a few narrow 25GbE pipes to an oversubscribed PCIe ASIC for switching and an FPGA for block lookups, you would use more small-scale nodes, map/reduce, and spread the workload with tiered storage.

A 25GbE or RoCE network in general would cost a massive fortune just to compensate for a poorly designed database. Instead, it’s better to use 1GbE or even 100Mb Ethernet and scale the compute workload out onto more small nodes. 99% of the time, 100 $500 nodes connected by $30-a-port networking will use less power, cost considerably less to operate and perform substantially better than 9 $25,000 nodes.
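For what it’s worth, here is the back-of-the-envelope arithmetic behind that claim as a small Python sketch; the prices are the illustrative figures above, not quotes from any vendor:

    # Rough cost comparison of the two approaches described above.
    # All prices are the illustrative numbers from the comment, not list prices.
    scale_out_nodes, node_cost, port_cost = 100, 500, 30
    scale_up_nodes, big_node_cost = 9, 25_000

    scale_out_total = scale_out_nodes * (node_cost + port_cost)  # 100 small nodes + cheap switching
    scale_up_total = scale_up_nodes * big_node_cost               # 9 big nodes, before the fabric

    print(f"scale-out: ${scale_out_total:,}")   # $53,000
    print(f"scale-up:  ${scale_up_total:,}")    # $225,000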

Also, with a proper map/reduce design, the vast majority of operations become RAM based which will drastically reduce latency compared to even the most impressive NVMe architectures based on obsessive scrubbing. Go the extra mile and make indexes that are actually well formed and use views and/or eventing to mutate records and NVMe is a really useless idea.

Now, a common problem I’ve encountered is in HPC... this is an area where propagating data sets for map reduce can consume hours of time given the right data set. There are times where processes don’t justify 2 extra months of optimization. In this case, NVMe is still a bad idea because RAM caching in an RDMA environment is much smarter.

I just don’t see a market for all flash NVMe except in legacy networks.

That said, I just designed a data center network for a legacy VMware installation earlier today. I threw about $120,000 of switches at the problem. Of course, if we had worked on downscaling the data center and moving to K8s, we probably could have saved the company $2 million over the next 3 years.

1
4

You lead the all-flash array market. And you, you, you, you, you and you...

CheesyTheClown
Silver badge

What's the value anymore?

Ok, here's the thing... all flash is generally a really bad idea for multiple different reasons.

M.2 flash has a theoretical maximum of 3.94GB/sec of bandwidth on the bus (a PCIe 3.0 x4 link). Therefore a system with 10 of these drives should theoretically be able to transfer an aggregate of 39.4GB a second in the right circumstances.

A single lane of networking or Fibre Channel is approximately 25Gb/sec (about 3.1GB/sec), which is less than the bus bandwidth of a single drive and well under a tenth of that aggregate. So in a circumstance where a controller could provide ten or more such lanes for data transfers, this would be great, but those numbers are so incredibly high that this is not even an option.

So, we know for a fact that even the highest performance storage controllers and fabrics can barely make a dent in the bus capacity of a very low-end all-flash environment.

Let's get to semiconductors.

Let's consider 10 M.2 drives with four 32Gb Fibre Channel adapters. This would mean that a minimum of 72 PCIe 3.0 lanes would be required to allow full saturation of all buses.
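As a hedged sanity check on that lane count, assume each M.2 drive sits on a PCIe 3.0 x4 link and each 32Gb FC HBA takes a PCIe 3.0 x8 slot (reasonable assumptions, not any specific product's datasheet):

    # PCIe 3.0 delivers roughly 0.985 GB/s per lane after encoding overhead.
    GB_PER_LANE = 0.985

    m2_drives, lanes_per_m2 = 10, 4    # assumed x4 per M.2 drive
    fc_hbas, lanes_per_hba = 4, 8      # assumed x8 per 32Gb FC HBA

    total_lanes = m2_drives * lanes_per_m2 + fc_hbas * lanes_per_hba
    flash_side = m2_drives * lanes_per_m2 * GB_PER_LANE   # ~39.4 GB/s of raw flash bandwidth
    fabric_side = fc_hbas * 32 / 8                        # ~16 GB/s of Fibre Channel

    print(total_lanes)                                    # 72 lanes, matching the figure above
    print(round(flash_side, 1), round(fabric_side, 1))    # the fabric cannot keep up with the flash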

This is great, but the next problem is that in this configuration, there's no means of block translation between systems. That means that things like virtual LUNs would not be possible.

It is theoretically possible to implement in FPGA (DO NOT USE ASIC HERE) a traffic controller capable of handling the protocols and full-capacity translation, using a CPU-style MMU to translate regions of storage instead of regions of memory, but the complexity would have to be extremely limited and, because of translation table coherency, it would be extremely volatile.

Now... the next issue is that, assuming some absolute miracle worker out there manages to develop a provisioning, translation and allocation system for coarse-grained storage, this would more or less mean that things like thin-provisioned LUNs would be borderline impossible in this configuration. In fact, based on modern technology, it could maybe be done with custom FPGAs designed specifically for an individual system, but the volumes would be far too low to ever see a return on investment for the ASIC vendor.

Well, now we're back to dumb storage arrays. That means no compression, thin provisioning or deduplication, and without at least another 40 lanes of PCIe 3.0 serialized over fibre for long runs, there's pretty much no chance of guaranteed replication.

Remember this is only a 10 device M.2 system with only 4 fibre channel HBAs.

All Flash vs. spinning disk hybrid has never been a sane argument. Any storage system needs to properly manage storage. The protocols and the software involved need to be rock solid and well designed. Fibre Channel and iSCSI have so much legacy that they're utterly useless for modern storage, as they no longer handle real-world storage problems on the right sides of the cable. Even with things like VMware's SCSI extensions for VAAI, there is far too much on the cable, and thanks to fixed-size blocks it should never exist. If nothing else, they lack any support for compression. Forget other things like client-side deduplication, where hashes could be calculated not just for dedup but as an additional non-secure means of authentication.

Now let's discuss cost a little.

Mathematics, physics and pure logic say that data redundancy requires a minimum of 3 active copies of a single piece of data at all times. This is not negotiable. This is an absolute bare minimum. That means that to meet the minimum requirement for redundant data, a company should have a minimum of 3 full storage arrays, and possibly a 4th for circumstances involving long-term maintenance.

To build an all flash array with a minimal configuration, this would cost so much money that no company on earth should ever piss that much away. It just doesn't make sense.

The same stands true of fibre channel fabrics. There needs to be at least 3 in order to make commitments to uptime. This is not my rule. This is elementary school level math.

Fibre channel may support this, but the software and systems don't. It can be done on iSCSI, but certainly not on NVMe as a fabric for example. The cost would also be impossible to justify.

This is no longer 2010 when virtualization was nifty and fun and worth a try. This is 2018 when a single server can theoretically need to recover from failure of 500 or more virtual machines at a single time.

All Flash is not an option anymore. It's absolutely necessary to consider eliminating dumb storage. This means block based storage. We have a limited number of storage requirements which is reflected by every cloud vendor.

1) File storage.

This can be solved using S3 and many other methods, but S3 on a broadly distributed file system makes perfect sense. If you need NFS for now... have fun but avoid it. The important factor to consider here is that classical random file I/O is no longer a requirement.

2) Table/SQL storage

This is a legacy technology which is on its VERY SLOW way out. We'll still see a lot of systems actively developed against this technology for some time, but it's no longer a preferred means of storage for new systems, as it lacks flexibility and is extremely hard to manage back-end storage for.

3) Unstructured storage

This is often called NoSQL. It means systems get queryable storage which works kinda like records in a database, but far smarter. The data is saved like a file, but the contents can be queried. Looking at a system like Mongo or Couchbase shows what this is. Redis is good for this too but generally has volatility issues.

4) Logging

Unstructured storage can often be used for this, but the query front end will be more focussed on record ages with regards to querying and storage tiering.

Unless a storage solution offers all 4 of these, it's not really a storage solution; it's just a bunch of drives and cables with severely limited bandwidth being constantly fought over.

Map/reduce technology is absolutely a minimum requirement for all modern storage, and this requires full layer-7 capabilities in the storage subsystems. This way, as nodes are added, performance increases and in many cases overhead decreases.

As such, it makes no sense to implement a data center today on a SAN technology. It really makes absolutely no sense at all to deploy for example a containers based architecture on such a technology.

If you want to better understand this, start googling Kubernetes and work your way through containerd and cgroups. You'll find that block storage should always be local only. This means that if you were to deploy, for example, MongoDB or SQL servers as containers, they should always have permanent data stores that require no network or fabric access. All requests will be managed locally and the system will scale as needed. Booting nodes via SAN may seem logical as well, but the overhead is extremely high and in reality PXE, or preferably HTTPS booting via UEFI, is a much better solution.

Oh... and enterprise SSD is just a bad investment. It doesn't actually offer any benefits when your storage system is properly designed. RAID is really really really a bad idea. This is not how you secure storage anymore. It's really just wasted disk and wasted performance.

But there are a lot of companies out there who waste a lot of money on virtual machines. This is for legacy reasons. I suppose this will keep happening for a while. But if your IT department is even moderately competent, they should not be installing all flash arrays, they should instead be optimizing the storage solutions they already have to operate with the datasets they're actually running. I think you'll find that with the exception of some very special and very large data sets (like a capture from a run of the large hadron collider) more often than not, most existing virtualized storage systems would work just as well with a few SSD drives added as cache for their existing spinning disks.

0
27

Flash, spinning rust, cloud 'n' tape. Squeeze. Oof. Hyperconverge our storage suitcase, would you?

CheesyTheClown
Silver badge

Re: Lenovo and Cloudistics could be a fail

This looks great, but suffers the same general problem as AzureStack.

First of all, to be honest, from a governance perspective, I don't trust Google to meet our needs. If nothing else, I don't trust Google to respect safe harbour. Microsoft has now spent years fighting the US government over safe harbour issues, while Google simply provides transparency about them. I have absolutely nothing to hide personally, but for business, I have to be vigilant with regard to people's medical and financial records. This is not information that any company outside of my country has a legal right to. That means I can't even trust a root certificate from outside this country. That also means that I can't use any identity systems controlled by any company outside of this country. That means no Google login or Azure AD. That also means no Azure Stack or GCP.

Beyond that, Cisco simply doesn't make anything even close to small enough for cloud computing anymore. They used to have the UCS-M series blades which were still too big. To run a cloud, you need a minimum of 9 nodes spread across 3 locations. The infrastructure cost of Cisco is far too high to consider this.

It's much better to have more nodes in more locations. As such we're experimenting with single board computers like Raspberry Pi (which is too underpowered but is promising) and LattePanda Alphas which are too expensive and possibly overpowered to run a cloud infrastructure.

We're looking now at Fedora (we'd choose Red Hat, but don't know how to do business with them), Kubernetes, Couchbase and .NET Core. This combination seems to be among the most solid options on the market. We're also looking at OpenFaaS, but OpenFaaS is extremely heavyweight in the sense that it spins up containers for everything. Containers are insanely heavy for hosting a single function. So we're looking into other means of isolating code.

We're walking very softly because we know that as soon as a component becomes part of our cloud, it's a permanent part which will require 20-50 years support. We need something we know will run on new hardware and have support.

Google is amazing and I'd love to use a hybrid cloud, but the problem with public clouds in general is that the money we could be spending on developers, engineers and supporting our customers is instead being burned on governance, compliance and legal. Instead, we need a fully detached system, which is why I was attracted by Lenovo's solution until it became clear that Cloudistics is focused only on selling to C-level types and not to the engineers who will have to use it.

0
0
CheesyTheClown
Silver badge

Lenovo and Cloudistics could be a fail

So, I'm working a lot on private cloud these days. The reason is that none of the public cloud vendors meet my governance requirements for the system my company is developing.

Azure Stack is out of the question because it requires that the platform is connected to the Internet for Azure AD. So... no luck there.

I've been looking and looking and to be fair, the best solution I've seen is to simply install Linux, Kubernetes, Couchbase and OpenFaaS. With these four items, it should be possible to run and maintain pretty much anything we need. We'll have to contribute changes to OpenFaaS as it's still not quite the answer to all our problems, and we're considering writing a Couchbase backend for OpenFaaS as well. But once all that is covered, it's a much better solution than other things.

That said, we keep our eyes open for alternatives. So when I saw a possible solution in this article, I went to check. It's a closed platform with no developer (or system administrator) documentation online. There's no open source links and there's no apparent community behind it.

So, why in the world would anyone ever invest in a platform from a company like Cloudistics which no one has ever heard of, has no community and hence no "experts" and more than likely won't exist in 12 months time?

If I were a shareholder of a company that chose to use this solution in its current state, I would consider litigation for gross mismanagement of the company. This is an excellent example of how companies like Cisco, Lenovo, HPE and others are so completely out of touch with what the cloud is that white box actually makes more sense.

2
0

ReactOS 0.4.9 release metes out stability and self-hosting, still looks like a '90s fever dream

CheesyTheClown
Silver badge

Re: Use case for ReactOS

I'll start with... because "Some of us like it" and don't really mind paying a few bucks for it.

I also am a heavy development user. And although I am really perfectly happy with vi most of the time, I much prefer Visual Studio. I actually just wrote a Linux kernel module using Visual Studio 2017 and Windows Subsystem for Linux for the most part. Which is really funny since WSL doesn't use the Linux kernel.

There are simply some of us who like to have Windows running on their systems. Even if I were using Linux as the host OS, I would still do most of my work in virtual machines for organizational reasons and frankly, WSL on Windows is just a thing of beauty.

As for the more modern UIs many people complain about here: I honestly haven't noticed. You press the Windows key and type what you want to start, and it works. This has been true since Windows 7 and has only gotten better over time.

Then there's virtualization. Hyper-V is a paravirtualization engine which is frigging spectacular. With the latest release of QEMU which is accelerated on Windows now (like kqemu) you can run anything and everything beautifully.

I have no issues with the software you run... I believe if you sat coding next to me, you'd probably see as many cool new things as I'd see sitting next to you. But honestly, I've never found a computer which runs Linux desktop with even mediocre performance. They're generally just too slow for me. So, I use Windows which is ridiculously fast instead.

As for Bill Gates. Are you aware that Bill has more or less sold out of Microsoft? He's down to little more than 1% of the company. You can give Microsoft gobs of money and he would never really notice. Take it a little further and you might realize that this isn't the Bill Gates of the 1980s. He's grown up and now is a pretty darn good fella. So far as I can tell, since he's been married, he's evolved into one of the most amazingly nice people on earth. I can't see that he's done anything in the past 15-20 years which would actually justify a dislike of him or a distrust of his motives.... unless you're Donald Trump who Bill kind of attacked recently for speaking a little too affectionately about Bill's daughter's appearance.

5
6

Windows 10 IoT Core Services unleashed to public preview

CheesyTheClown
Silver badge

Re: Well if MS are offering to do that...

Some of us don't use registered MAC addresses. We simply randomize and use duplicate address detection. There's really no benefit to registered MAC addresses anymore. Simply set the 7th bit of the first octet (the locally administered bit) to 1 and use a DAD method.

Also consider that many of us don't use Ethernet for connectivity. There are many other solutions for IoT. A friend of mine just brought up a 1.2 million node multinational IoT network on LTE.

MAC address filtering and management is basically a dead end. There's just no value in it for many of us. It really only adds a massive management overhead to production of devices. And layer-2 is so bunged to begin with that random MAC addresses with DAD can't really make it any worse.
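For anyone curious what randomizing with the locally administered bit set actually looks like, here's a minimal sketch; the duplicate address detection step itself (ARP/ND probing before committing to the address) is left to the stack or your own probe logic:

    import random

    def random_local_mac() -> str:
        """Generate a random locally administered, unicast MAC address."""
        octets = [random.randint(0, 255) for _ in range(6)]
        octets[0] |= 0x02   # set the locally administered (U/L) bit
        octets[0] &= 0xFE   # clear the multicast (I/G) bit so it stays unicast
        return ":".join(f"{o:02x}" for o in octets)

    # A device would pick one of these, run DAD, and only then start using it.
    print(random_local_mac())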

1
0

Who fancies a six-core, 32GB RAM, 4TB NVME ... convertible tablet?

CheesyTheClown
Silver badge

Will have bugs and no love from HP

For a product of this complexity to be good, it needs to reach high enough volumes that the user feedback on the product is good enough to solve problems. A company the size of HP will ship this, but the volume of bug reports will be low for a few reasons.

1) the user count is low

2) the typical user of this product won’t have a reliable means of reporting bugs other than forums. This is because they work for companies who can afford these systems and would have to report through IT. IT will not fully understand or appreciate the problems or how they actually affect the user, and therefore will not be able to convey the problems appropriately.

3) HP does not make the path from user to developer/QA transparent as once the product is shipped, those teams are reassigned.

As such, HP’s large product portfolio is precisely why this is a bad purchase. Companies like Microsoft and Apple build a small number of systems and maintain them long term. Even with the huge specifications on these PCs, a lower-end system with some work offloaded to the cloud is far more fiscally responsible.

Of course, people will buy them and if we read about them later, I doubt the user response will be overly positive.

I’m using a Surface Book 2 15” with a Norwegian keyboard even though I have it configured to English. This is because a LOT of negative feedback reached MS on the earlier shipments and by buying a model I was sure came off the assembly line a few months later, I was confident that many of the early issues were addressed.

This laptop from HP will not have that benefit, because to produce them profitably, they will probably need to build almost all the units of this model they will ever ship, or at least components like the motherboards, in a single batch. So even later shipments will probably not see any real fundamental fixes.

But if you REALLY need the specs, have a blast :) You’re probably better off with a workstation PC and Remote Desktop from a good laptop though.

5
9

Even Microsoft's lost interest in Windows Phone: Skype and Yammer apps killed

CheesyTheClown
Silver badge

Re: MS kills UWP apps, Telephony API appears in Windows

Nope, both hands know what’s happening. The telephony APIs allow for Android integration. So the APIs permit Windows 10 Always Online devices (laptops with built in LTE) to provide a consistent experience across phone and laptop.

For instance, you will probably be able to make a call from your laptop. They also integrated messaging.

But I guess that’s not as exciting as assuming it means that Microsoft is confused. :)

0
0

White House calls its own China tech cash-inject ban 'fake news'

CheesyTheClown
Silver badge

Re: Enjoy this while it lasts

I don’t know whether I want to agree or debate this.

We saw Republicans dropping from the election for no reason that seemed clear. Just one after another dropped out and yielded to Trump with no explanation to be had. Each time they dropped out and made their support for Trump clear, it looked like people behaving as if they were forced to under duress.

Bernie seemed to have the real support of people who believed in him politically, as though they liked his message. Hillary seemed to garner support from people who liked her making fun of Trump and from people voting for superficial reasons. I’ve long believed that it’s time for a female president. I remember as a child being excited that Geraldine Ferraro was running. But Hillary simply scared me because her message didn’t seem to be anything other than “I’ll win and it’s my turn!”

Sanders dropped out of what seemed like frustration over the stubborn child stomping her feet and claiming “I’ll win, it’s my turn!”

I have had great hopes that if this election proved anything to the American people, it’s that the two parties are so corrupt that people need a choice and neither party is offering a choice to the people.

Amazon, Facebook, Twitter, Google, Microsoft, Netflix and others could all change the platform. They could reinvent the entire two-party system overnight. All it would take is for each to build into their platforms a new electoral process to identify and support candidates that they would then have added to the ballot. If each company ran different competitions and systems to identify and sponsor candidates, we could have a presidential election with 10 or more alternatives to choose from.

They can even allow underdogs to get a grip on the elections. For example, traditional fund raisers which reward only people willing to sell their political capital would become irrelevant. People could get elected because they were in fact popular instead of having sold their souls in exchange for enough money for some commercial time.

I think Trump and Hillary may be the best thing to ever happen to America. If two shit bags like them can end up being the only possible choices the people had, then it’s clear it’s time for a change.

2
0

Why aren't startups working? They're not great at creating jobs... or disrupting big biz

CheesyTheClown
Silver badge

What do you mean?

So, let's say this is 1980 and you start a new business.

You'll need a personal assistant/secretary to :

- type and post letters

- sort and manage incoming letters

- perform basic book keeping tasks

- arrange appointments

- answer phones

- book travel

You'll need an accountant to :

- manage more complex book keeping

- apply for small business loans

- arrange yearly reports

You'll need a lawyer to :

- handle daily legal issues

- write simple contracts

You'll need an entire room full of sales people to

- perform business development tasks

- call every number in the phone book

- manage and maintain customer indexes

You'll need a "copy boy" to

- run errands

- copy things

- distribute mail

Etc...

Now in 2018

You'll need

- an app for your phone to scan receipts into your accounting software

- an accounting app to perform year end reports and to manage your bank accounts

- an app to click together legal documents based on a wizard

- a customer relationship manager application

- a web site service for your home page

- etc...

Let's imagine you are a lawyer in 1980...

- You'd study law

- Graduate

- Take a junior position doing shit work

- Pass the bar

- work for years taking your boss's shitty customers

- work for years trying to sell your body to get your own customers

- once your portfolio was big enough, you'd become a senior partner who would take a cut from everyone else's customers.

The reason the senior lawyer hired junior lawyers was because there was a massive amount of work to do and a senior partner would spend most of their time talking and delegating the actual work to a team of juniors, researchers and paralegals.

Now the senior can do 95% of the work themselves by using an iPad with research and contract software installed in less time than it would have taken to delegate. So where a law firm may have employed 10-20 juniors, paralegals and researchers in 1980 per senior, today, one junior lawyer probably can easily handle the work placed on them by two seniors.

There's no point in hiring tons of people anymore. Creating a startup that is dependent on head count is suicide from the beginning. If you're a people-based company, then the second someone smarter sees there's a profit to be made, they'll open the same type of business with far more automation.

3
1

Cray slaps an all-flash makeover on its L300 array to do HPC stuff

CheesyTheClown
Silver badge

What is the goal to be accomplished?

Let's assume for the moment that we're talking about HPC. So far as I know, whether using Infiniband or RDMAoE, all modern HPC environments are RDMA enabled. To people who don't know what this means, it means that all the memory connected to all the CPUs can be allocated as a single logical pool from all points within the system.

If you had 4000 nodes at 256GB of RAM per node, that would provide approximately 1 Petabyte of RAM online at a given time. The amount of time to load a dataset into the RAM will take some time, but compared to performing large random access operations across NVMe which is REALLY REALLY REALLY slow in comparison, it makes absolutely no sense to operate from data storage. Also, storage fabrics, even using NVMe are ridiculously slow due to the fact that even though the layer-1 to layer-3 are in fact fabric oriented, the layer 4-7 storage protocols are not suited for micro-segmentation. As such, it makes absolutely no sense whatsoever to use NVMe for storage related tasks in super-computing environments.
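The arithmetic behind that figure, plus a rough load-time estimate, as a small sketch; the per-node fabric speed below is a hypothetical round number for illustration, not any particular machine's spec:

    nodes, ram_per_node_gb = 4000, 256
    total_ram_gb = nodes * ram_per_node_gb
    print(total_ram_gb / 1_000_000, "PB of pooled RAM")    # ~1.02 PB

    # If every node streams its own 256GB share in parallel over a
    # (hypothetical) 100Gb/s usable link, the whole pool fills in roughly:
    link_gbps = 100
    print(ram_per_node_gb * 8 / link_gbps, "seconds")      # ~20 seconds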

Now, there's the other issue. Most supercomputing code is written using a task broker that is similar in nature to Kubernetes. It spins up massive numbers of copies wherever CPU capacity is available. This is because while many supercomputing centers embrace language extensions such as OpenMP to handle instruction-level optimization and threading, they are generally skeptical about run-time type information, which would allow annotating code with attributes that could be used while scheduling tasks.

Consider that moving the data set to the processor it will run on can mean transferring gigabytes, terabytes or even petabytes of memory. However, if the data set were distributed into nodes within zones, then a large-scale dataset could be geographically mapped within the routing regions of a fabric, and the processes, which would require moving megabytes or gigabytes at worst, could be moved to where the data is when needed. This is the same concept as vMotion but far smarter.

If the task is moved from one part of the supercomputer to another to bring it closer to the desired memory set, the program memory can stay entirely intact and only the CPU task will be moved. Then, on heap read operations, the MMU will kick in to access remote pages and relocate the memory locally.

It's a similar principle to map/reduce, except that in a massive data set environment, map/reduce may not work given the unstructured layout of the data. Instead, marking functions with RTTI annotations can allow the JIT and scheduler to move executing processes to the closest available zone within the supercomputer to access the memory needed by the following operations. A process move within a supercomputer using RDMA could happen in microseconds, or milliseconds at worst.

Using a system like this, it could actually be faster to simply keep the data set on massive tape drives or reel-to-reel, as only linear access is needed.

But then again... why bother using the millions of dollars of capacity you already own when you could just add a few more million dollars of capacity.

0
0

Norwegian tourist board says it can't a-fjord the bad publicity from 'Land of Chlamydia' posters

CheesyTheClown
Silver badge

Re: Norwegian History

I think if you checked the Norwegian economy, you might find oil and natural gas doesn't account for as much as you might think.

8
1
CheesyTheClown
Silver badge

Ummm been done

There's a chain called Kondomeriet all over Norway that sells electric replacements for sexual activities that generally require fluid exchange between participants.

They even advertise them pretty much everywhere with an "Orgasm guarantee". Though I wonder if that's just a gimmick. How many people would actually attempt to return a used item like that?

3
0

What's all the C Plus Fuss? Bjarne Stroustrup warns of dangerous future plans for his C++

CheesyTheClown
Silver badge

Mark Twain on Language Reform

Read that and it all makes sense

4
0

Wires, chips, and LEDs: US trade bigwigs detail Chinese kit that's going to cost a lot more

CheesyTheClown
Silver badge

There goes buying from the U.S.

My company resold $750 million of products manufactured in the US last year. Already, these products are at a high premium compared to French and Chinese products. They are a tough sell and it’s almost entirely based on price.

Those items are built mostly from steel, chips, LEDs and wires.

Unless those US companies move their manufacturing outside of the US, we’ll be forced to switch vendors, otherwise the price hikes will be a problem for us. I know that the exported products will have refunds on the duties leaving the US, but the vendors cannot legally charge foreigners less than they charge Americans for these products. So, we’ll have to feel the penalty.

So, I expect to see an email from leadership this coming week telling us to propose alternatives to American products.

33
0

Intel confirms it’ll release GPUs in 2020

CheesyTheClown
Silver badge

Re: Always good to have competition to rein in that nVidia/AMD duopoly

The big difference between desktop and mobile GPUs is that a mobile GPU is still just a GPU. Desktop GPUs are about large-scale cores, and most of the companies you mentioned in the mobile space lack the in-house skills to handle ASIC cores. When you license their tech, you’re usually getting a whole lot of VHDL (or similar) that can be added to another set of cores. ARM, I believe, does work a lot on ASIC synthesis, and of course Qualcomm does as well, but their cores are not meant to be discrete parts.

Remember, most IP core companies struggle with high-speed serial buses, which is why USB3, SATA and PCIe running at 10Gb/sec or more are hard to come by from those vendors.

AMD, Intel and NVidia have massive ASIC simulators, costing hundreds of millions of dollars from companies like Mentor Graphics, to verify their designs on. Samsung could probably do it, and probably Qualcomm, but even ARM might have difficulty developing these technologies.

ASIC development is also closed loop. Very few universities in the world offer actual ASIC development programs in-house. The graduates of those programs are quickly sucked up by massive companies and are offered very good packages for their skills.

These days, companies like Google, Microsoft and Apple are doing a lot of ASIC design in house. Most other newcomers don’t even know how to manage an ASIC project. It’s often surprising that none of the big boys like Qualcomm have sucked up TI, who have strong expertise in DSP ASIC synthesis. Though even TI has struggled A LOT with high-speed serial in recent years. Maxwell’s theory is murder for most companies.

So most GPU vendors are limited to what they can design and test in FPGA which is extremely limiting.

Oh... let’s not even talk about what problems would arise for most companies attempting to handle either OpenCL or TensorFlow in their hardware and drivers. Or what about Vulkan. All of these would devastate most companies. Consider that AMD, Intel and NVidia release a new GPU driver almost every month. Most small companies couldn’t afford that scale of development, or even distribution.

2
0

UK's first transatlantic F-35 delivery flight delayed by weather

CheesyTheClown
Silver badge

Wouldn’t it be most responsible if....

The F-35s are simply left grounded?

I mean honestly... who in their right mind would fly something that expensive into a situation where they might get damaged?

Let’s face it, if one of these planes becomes damaged in training or in a fight, the financial repercussions would be devastating. That would be massive money simply flushed down the drain.

The pilots are something else we can’t afford to risk. Training an F-35 pilot is so amazingly expensive that we can’t possibly afford to place them in harm’s way.

I think it would be best to just keep the planes grounded.

1
3

Microsoft commits: We're buying GitHub for $7.5 beeeeeeellion

CheesyTheClown
Silver badge

Re: Shite

haha I actually should have read the entire post first. I went to the same website you did. I have to admit, I shamelessly download software from there all the time because sometimes I forget how good things are today unless I compare them to the days that came before.

I tried writing a compiler using Turbo C 2.0 recently. That simply did not go well.

Even though they had an IDE, it was single file and it lacked all the great new features we love and adore in modern IDEs. Now I managed to do it. I had a simple compiler up and running within about an hour, but to be fair, it was an absolute nightmare.

That said, the compile times and executable sizes were really impressive.

But of course things like real-mode memory were not a great deal of fun. Also, whenever you start coding in C, you get this obsessive need to start rewriting the entire planet. I was 10 minutes away from writing a transpiler to generate C code, because C is such a miserable language to write anything useful in. No concept of a string and pathetic support for data structures and non-relocatable memory... YUCK!!!

I will gladly take Visual Studio 2017 over 1980s text editors. Heck, I'll take Notepad++ over those old tools.

You should get a copy of some of those old tools up and running and try to write something in them. It's actually really funny to find out that the keys don't do what your fingers think they do anymore. And what's worse, try doing it without using Google. :) I swear it's painful but entertaining. GWBASIC is a real hoot.

1
0
CheesyTheClown
Silver badge

Re: @Harley

I hope you don't mind me asking.

Have you written anything that would signify anyone actually knowing your name?

Writing books since you were an infant?

"and still some cunts get me name wrong"

I'm curious, where is the connection? Just because you have a book that has been published in a handful of languages... being in only 47 languages suggests that your books probably weren't interesting enough to be picked up more broadly. I would say that if your book was published in 47 languages then :

a) It was probably some fiction novel of some type

b) It didn't catch on enough to justify translating it for lower volume markets

c) It probably hasn't seen the NY Times best seller list and if it had, it was at 97th place for a week.

I suppose I could go on, but let me say that if your book was only translated into 47 languages, there could be a good reason no one has heard of you, and certainly no one would know how to spell your name.

Also, while I'm possibly one of the most arrogant and stuck up assholes on the register, I like to occasionally contribute something positive and informative. In the last month, you've written not a single positive or informative comment on any article or as a response to someone else's comments. Your entire purpose for posting on the comments is purely to make snotty one line remarks that are generally degrading.

Now I'm not going to suggest that I'm "Mr. Ray of Fucking Sunshine" over here. But seriously man, did you actually just refer to someone as a cunt... for mistyping a name that probably no one has ever heard of outside of your personal social circle?

I'll make the assumption that you're English as I've never seen another culture on Earth that tosses that word around so nonchalantly as the English do. And to help you better understand yourself, I'll use something I learned from a fellow countryman of yours.

Simon Cowell once made a remark: "Miss, it is your parents' job to tell you how pretty you are and how pretty you sing. But did you ever consider recording yourself and listening to your own singing before coming on this show? You're awful."

Now I'm sure that girl is running around telling everyone how she should be taken more seriously because she has been seen performing in 47 countries and subtitled in 47 languages. And I'm sure that your mom and dad read the first 15 or 20 pages of your book(s) so they can tell you what a great author you are. But let's be honest, the depths of your thinking are far too shallow to be successful as a writer. A creative mind would be able to do far better than resort to choosing the most offensive word in his vocabulary to describe a person who mistyped the name of someone no one has ever heard of.

I think I'll try to help make you famous. I do a great deal of public speaking in my work. I do this in at least 47 countries, and people have actually paid to hear me speak in all of them. I'm really famous you know... I'm probably almost as big as David Hasselhoff is in Germany... umm maybe not quite.

So what I'll do for you is this: from now on, whenever I'm describing a person who sees themselves as more impressive than they really are, I'll refer to them as a "J.R. Hartley... and that's with a T". So for example :

I was listening to a climate denier on Fox News the other morning and he made a real ass of himself by publicly claiming he has the ears of the leaders of 47 nations. I mean seriously, could he possibly be more of a J.R. Hartley... and that's Hartley with a T... like the great author as opposed to Hartley without a T... like the broken motorcycle.

I bet with that kind of publicity, you might even get translated to a 48th language someday and then... you will be REALLY famous and no one will ever be a cunt and mistype your name again. And I'm willing to do this just for you... because you my good friend J.R. Hartley with a T are a ray of fucking sunshine!

2
0

Five actually useful real-world things that came out at Apple's WWDC

CheesyTheClown
Silver badge

Re: Damn it

I’m probably gonna jump to Android soon. I’ve used iPhone since the early days and am pretty much tired of the non-stop “Apple works with everything... as long as it’s Apple.”

Home automation works if you have a unit in every room. Amazon Echo is $99 and Echo Dot is $29. So in a house with 5 bedrooms, a living room, a kitchen, two bathrooms and two hallways, the Echo is expensive but a reasonable solution. Home Pod is too big to begin with and even at half the price is too expensive.

I spend about $1000 a year on the iTunes Store. To control my music, either I have to store it on a server after downloading it or I have to use an Apple device. Movies can’t even be decrypted legally, so Apple is a requirement. We have 6 screens in the house, 4 have Chromecast built in. One has an Apple TV and the last has a PC.

We don’t want to add Apple TV to all the screens because they would need separate power and separate remotes. Then there’s the mounting issues.

So, we often find ourselves renting films on Google that we already own on iTunes.

The door locks we have aren’t compatible with any service, but writing a skill for Alexa took about an hour. Writing a function for Cortana took 15 minutes.

I don’t believe I will be allowed by Apple to write the skill for Siri, so I’d have to throw away $2000 of perfectly good door locks.

I love my iPhone 6S Plus. But every iPhone patch breaks something new. Watching videos gets more and more inconvenient. My audible app actually skips... it sounds like a scratched record. I have an iPhone X but I’ll end up dead from using it.

Then there’s my car. iPhone integration isn’t bad. But if I want proper integration, I’ll have to pay $400 a year to BMW.

So, I may end up switching to Android even though I hate Android just because it actually gives me options. So I’ll have a phone that sucks, but at least it will work with my other stuff.

Oh, there’s the other issue. I’ve been waiting 8 years for a new line of Macs to buy. The last notebook Apple made which didn’t suck was the MacBook Air 11 inch. I still use a 2011 model of it. And Mac Mini is so out of date it is horrifying. If Apple doesn’t make a new PC suitable for software development before my MacBook dies, I don’t think I’ll buy anything current.

I’m pretty sure Apple as a tech company died with Steve Jobs :(

12
6

Have you heard about ransomware? Now's the time to ask: Are you covered?

CheesyTheClown
Silver badge

Sure... why simply protect yourself?

Ransomware is for people who can’t turn on Windows Backup/Restore or Apple Time Machine.

How bloody hard is it to simply enable automatic recovery options in the OS? If your company is ever hit by ransomware, it’s because your IT staff or firm is incompetent.

In Windows, it’s a single group policy setting.

On Mac, if you haven’t read “Mac for enterprise” documentation and learned how to onboard a Mac for management, you’re a fool. It’s just like group policy.

These are not advanced features. These are sys admin 101 things.

0
0

If you have cash to burn, racks to fill, problems to brute-force, Nvidia has an HGX-2 for you

CheesyTheClown
Silver badge

CPU from the Terminator?

When I saw the picture, it reminded me of the CPU from the Terminator. Maybe this is what it looked like before it was shrunk?

I’m not sure if that’s relevant when discussing AI

3
0

IBM's Watson Health wing left looking poorly after 'massive' layoffs

CheesyTheClown
Silver badge

Re: Merge?

I’ve walked into companies, seen HPE and walked out. It’s just not worth the pain. Every time you give them money, they take it, and sell off the business unit. And their servers and networking are just not good enough.

0
0

Dixons to shutter 92 UK Carphone Warehouse shops after profit warning

CheesyTheClown
Silver badge

Re: No surprise

I shopped at Dixon’s last summer while visiting Ireland. They blatantly screwed me: they insisted that the advertisement sitting on the counter, which triggered my impulse purchase of an LTE modem with an included data package (clearly marked as such), did not mean it came with the SIM card, that I would have to buy that separately, and they refused to take the product back.

The time before that, a few years earlier, they screwed me on something else, but I chalked it up to a failure of the store to train their people.

I am allowed to spend about £500 per person while traveling and remain in my duty free limit. So, when the family and I travel, we spend about £2200 on crap we don’t need but can’t survive without and get duty refunds on the expensive stuff. We also know whatever we buy is disposable, if it breaks, we throw it away.

It’s pretty common for us to travel to countries which have Dixon’s two or three times a year. And we spend precisely £0 there... even if they have a better price.

There are almost no companies I wish financial ruin on. But Dixon’s is one of the few that I do.

15
2

Epyc fail? We can defeat AMD's virtual machine encryption, say boffins

CheesyTheClown
Silver badge

Re: The attack can only be partially mitigated

Deep packet inspection is generally not worth much. Unless your deep packet inspection engine can sandbox all code and all data that passes through it, it will never be able to provide better security than proper endpoint protection.

Deep packet inspection doesn't offer anything more than rate limiting the nonsense traffic. But that much is certainly worth it. Whether you're using Snort-based Cisco products or pfSense... or whatever, there is value.

That said, I actually come from a broadcast video background. I spent yesterday evening talking about SDI forward error correction and non-return-to-zero with a fellow engineer and my 14-year-old daughter. The other guy and I worked together for years developing chips and firmware for those things.

I'd be pretty hard pressed to see any circumstance where there would be any value in an IPS on video content delivery channels. I certainly could never identify a circumstance where there's any value in 40Gb/s networking, unless you're buying into the looney-tunes nonsense Cisco started by trying to sucker their customers into buying 10Gb/s networking for delivering content that could be delivered at 800Mb/s with almost no compression (as in 1.5Gb/s SDI, which carries about 1.1Gb/s of actual data and can easily compress below 1Gb/s without loss or latency issues).

If you're a CDN, you're scaling up when you should be scaling out. That's putting a lot of eggs in one basket. It's a very 1990's-2000's way of thinking. It didn't scale then, it doesn't scale now.

Of course, I'm purely speculating on your design, but even if you're a big production studio handling lots of multi-camera ingest, you are probably way too over-provisioned. Also, if you're doing layered security, you should never be in a circumstance where you'd need to inspect more than a few megabytes a second of traffic.

But again, I'm speculating. Every design usually has a reason other than "we like to spend money"... but these days, with the advent of all the SMPTE members pushing for uncompressed (idiots) because it allows them to make A LOT MORE MONEY, a lot of people are falling for it.

1
0
CheesyTheClown
Silver badge

Re: The attack can only be partially mitigated

Not really about the host.

If there’s an attack vector available to a VM from the host... which I’m confident there always must be, given the thought process I followed above, then the issue is whether it’s possible to always mitigate the attacks from guest to host. And it should be, by employing the old dynamic recompilation techniques that hypervisors used to trap things like legacy inb/outb instructions.

As such, it’s whether someone can hop contexts and read memory of other guests on the same host.

I make a huge effort to encrypt sensitive data (like keychains) in TPM when I’m coding. But so far as I know, there is still no solid TPM virtualization tech.

4
1
CheesyTheClown
Silver badge

The attack can only be partially mitigated

So long as there's a means to provide plain-text memory access to virtual machines for things like communicating with something other than the virtual machine itself... like the hardware or hypervisor, for example... it will always be possible to alter the SLAT to choose which memory to encrypt and which memory not to encrypt.

I hadn't considered this attack vector earlier, but now that it's in the open, it's obvious that there is no possible way to create a walled garden suitable to this as there will always have to be gates available.

Let's not overlook that an additional attack vector would be to pause scheduling to the VM, allocate a new virtual page, inject it into the SLAT marked as clear text, then push code into that page, and find a means to trigger it. I would recommend through the VM network driver for example.

There's that attack vector too.... it should be possible to exploit the VM's virtual NIC driver. VMXNET3 is a famously bad driver. After doing a code audit of VMware's Linux kernel drivers, I transitioned away from VMware because there were so many completely obvious security holes that I couldn't run my servers on the platform in good faith. There was that, and the $800,000 in licenses I was paying for it... which everyone else just gives away for free now.

So, the real trick would be to inject a VIB on VMware which would allow code injection through VMXNET3, or through the video driver, which is even better, as there's a wide open window to inject shaders into OpenGL or DirectX, which is almost certainly being run as a MesaGL software rasterizer or WARP.

This would be perfect... create a clear text page, trigger a window size change to trigger resolution change. Provide the clear text page as the frame buffer to the guest... and voila, there's a clear path to start uploading code for graphics rendering. This will likely not work well with NVidia Grid, but there are like 5 people in the world using that.

haha... this article was great.... now that I know that it counts as an attack if you attack the guest from the host, it opens an endless barrel of worms.

I need to update my CV to say "Security Researcher" and hack some VIBs together. It's not even a challenge.

20
0

IPv6 growth is slowing and no one knows why. Let's see if El Reg can address what's going on

CheesyTheClown
Silver badge

Lots of stuff going on here

I've been running IPv6 almost exclusively for a decade at home. I've been running IPv6 at work for about 5 years as well.

Let's assess a few of the real reasons for IPv6 not happening.

Security :

With IPv4, you get NAT, which is like a firewall, but accidentally. It's a collateral firewall :) The idea is that you can't receive incoming traffic unless it's in response to an initial outgoing packet which created the translation. As such, IPv4 and NAT are a poor man's security solution which is amazingly effective. Of course, opening ports through PAT can mess that up, but most people who do this generally don't have a real problem making it work. With modern UPnP solutions to allow applications to open ports as needed at the router, it's even a little better. With Windows Firewall or the equivalent, it's quite safe to be on IPv4.

IPv6, by contrast, makes every single device addressable. This means that inbound traffic is free to come as it pleases... leaving endpoint security entirely to the user's PC, which more often than not is vulnerable to attack. IPv6 can be made a little more secure using things like reflexive ACLs or a good zone-based firewalling solution, but with those options enabled, many of the so-called benefits of one IP per device dissolve.

No need for public addresses:

It's really a very small audience who needs public IP addresses. In the 1990s we had massive amounts of software written to use TCP as its base protocol and to target point-to-point communication requiring direct addressing. This is 2018: almost every application registers against a cloud-based service through some REST API for presence. When two endpoints need to speak directly with one another, the server communicates the desired source and destination addresses and ports to each party, and the clients send initial packets to the given destinations from the specified sources to force the creation of a translation at the NAT device. Unless the two hosts are on the same ISP with the same CG-NAT device serving them both, this should work flawlessly. Otherwise, a sequence of different addresses will need to be tried to find the right combination to achieve firewall traversal.
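A minimal sketch of that rendezvous/hole-punching pattern, assuming a hypothetical rendezvous service has already told each side the other's public address and port (the server itself, retries and NAT-type quirks are all omitted):

    import socket

    def punch_and_talk(local_port: int, peer_ip: str, peer_port: int) -> None:
        """Open a UDP hole towards the peer, then wait for its traffic."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", local_port))

        # The initial outbound packet is what creates the translation in
        # this side's NAT; the peer does the same towards us at roughly
        # the same time, using the addresses the rendezvous server handed out.
        sock.sendto(b"punch", (peer_ip, peer_port))

        sock.settimeout(5.0)
        try:
            data, addr = sock.recvfrom(1500)
            print("direct path established with", addr, data)
        except socket.timeout:
            print("no direct path; fall back to a relay (TURN-style)")

    # Both addresses would come from the rendezvous server in a real application.
    punch_and_talk(40000, "203.0.113.7", 40001)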

In short, we no longer have a real dependency on IPv6 to provide public accessibility.

Network Load Balancers

20 years ago, only the most massive companies deployed load balancers. Certainly fewer than 1 in 100 would have had hardware-accelerated load balancers capable of processing layer-7 data, and almost certainly none of them could accelerate SSL.

These days, there are multiple solutions to this problem. As such, a cloud service like Azure, Google Cloud or Amazon can serve hundreds of millions of websites from a few IP addresses located around the world.
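The layer-7 trick behind that is mundane: one listening address, with each request dispatched on the Host header (or the TLS SNI before decryption). A toy sketch with made-up hostnames:

```python
# Toy layer-7 routing: one IP address, many sites, dispatched on the HTTP
# Host header. Hostnames are made up; real balancers do the same with SNI.
from http.server import BaseHTTPRequestHandler, HTTPServer

SITES = {
    "alpha.example": b"<h1>Site Alpha</h1>",
    "beta.example":  b"<h1>Site Beta</h1>",
}

class HostRouter(BaseHTTPRequestHandler):
    def do_GET(self):
        host = (self.headers.get("Host") or "").split(":")[0]
        body = SITES.get(host, b"<h1>Unknown site</h1>")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8080), HostRouter).serve_forever()
```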

File transfer services

No one copies files directly from one computer to another anymore. We don't set up shares and copy. We copy to a server and back down again, or use sneakernet with large USB thumb drives. In addition, with Dropbox, OneDrive, Box, etc., our largest files are cloud-hosted anyway. So if we lose a copy, we just download it again.

I can go on... but we simply don't need IPv6 anymore. The only reason we're running out of IP addresses is hoarding. I know of more than a few original Class B networks which have 10 or fewer addresses in legitimate use. People are hoarding addresses because they are worth A LOT of money. One guy I know is trying to sell a Class B to a big CDN and is asking $2 million, and it's probably worth it at today's rates.

IPv6 is about features. It's a great protocol and I love it. But let's be honest, I'll be dead long before IPv4 has met its end.

5
3

Microsoft returns to Valley of Death? Cheap Surface threatens the hardware show

CheesyTheClown
Silver badge

Build said Windows Store is temporary

Windows Store is mandatory at first so that users download the appropriate installers.

But in the future, MSIX should cover direct distribution through alternative channels. I think they just want to be able to gain meaningful telemetry on ARM products before unleashing the beast.

That said, Windows Store has improved... I’ve been using it far more often the past few months. I’m not sure what they did, but it seems less covered with crapware and it looks like someone is actually monitoring it now.

3
9
CheesyTheClown
Silver badge

Re: Low Cost? at $499!

I’d imagine that this will be an ARM based device with LTE. The power usage is already good on the platform but will probably improve over time.

Also consider the trade-off of capex vs. opex. You may pay less, but a few-year-old model will not support hardware video decoding of newer codecs. As such, if you’re watching lots of films or clips, the battery will drain FAST on the older machine. Also, depending on which streaming services you use, to gain support for W3C DRM you may be forced to download H.264 instead of more modern video formats, which can easily increase your bandwidth usage over LTE by 3-4 fold.

This, surprisingly, is the best reason to buy new tablets every 3 years.

1
2
CheesyTheClown
Silver badge

Re: Low Cost? at $499!

I have some of those $50-$100 tablets. I’m pretty sure that although they run varying editions of Android, we’re talking a different device class.

I think $200 is where tablets start to become almost usable. $300-$400 actually provides a nice experience. $3000 is absolutely frigging brilliant.

I’m not sure how I feel about the $50-$100 range. They have value in some cases, but they get very expensive because they almost never support software updates, and very often their screens have major touch issues, making them very difficult to use at all. So you tend to need to buy 3-4 of them for every $300-$400 tablet you’d have bought otherwise.

The one exception I might concede is the $200-ish Lenovo models. Of course, the LTE models are more expensive.

6
5
CheesyTheClown
Silver badge

Re: It doesn't much matter

I honestly don’t understand the Windows hate. I’m quite fond of Windows and Mac and ElementaryOS.

I like them for different reasons. I honestly could never imagine coding for a living on a Mac. Even when coding for Mac, I use Windows, and I very heavily use Ubuntu through WSL, which is far, far better these days than the Mac command-line experience.

Also, Windows 10 is frigging fast. Of course I’m using a Surface Book 2 15” which is a mega-beast of a machine. But I also use some much older equipment and I just can’t feel the hate.

Stability-wise it’s crazy good. I don’t bother rebooting except after major Windows updates, and sometimes it takes a few reboots when setting up a new machine. Using tech like .NET, even when I’m developing, my programs can go weeks or months without a crash.

That said, Mac has a lot of good stuff too. The App Store is still nicer. Apple is still about a million miles away from getting VPN or Remote Desktop right though. Mac will probably never win any prizes for being fast, but it’s very consistent. What I love about coding on Mac is how the development tools do an awesome job of helping you find the right place to put your text and such.

When working as an IT guy (a big part of my job), I like the Mac a lot. Being a network engineer doesn’t require anything fancy. Just a web browser, a text editor, ssh, telnet and serial. Omnigraffle is nice too, but I tend to use PowerPoint there.

Again, can’t feel the hate.

These days, since the Mac keyboards have gotten so bad, I tend to either use a MacBook Air from 2011 or a PC. The latest-and-greatest MacBook Pro sits docked to some screens. I find, however, that I almost never use it anymore unless it’s via iRapp, because I just can’t stand typing on it anymore :(

Of course, if you have a favorite computer and can do your stuff on it... go for it. I actually recommend trying a Surface Book 2 at some point. Add some mixed reality goggles and you’re set for life.

26
14

Blighty's super-duper F-35B fighter jets are due to arrive in a few weeks

CheesyTheClown
Silver badge

A plane so expensive it’s useless

So, an F-16 can be had for $20 million with all the bells and whistles. An F-22 can be had for about $130 million fully loaded. An F-14 is about $22 million.

For the price of a single F-35, an entire squadron could be equipped, and while I’m sure that the F-35 is really nifty, I wonder how well it would perform against a few dozen drones and/or F-14s/F-16s flown by highly competent pilots.

Consider that it should never be possible for an F-35 pilot to log enough hours to become as skilled in the plane as he/she could have in an F-14 or F-16. The reason is simple wear and tear. With such an incredibly high operating cost, no F-35 pilot will ever be able to clock 1000+ hours in simulated conflicts in such a plane. It was expensive even with older airplanes like the F-14, and Korean War-era jets were used for training instead. Even with advanced flight simulators, this will never work on the F-35. It would probably cost a minimum investment of $500 million per trained pilot.

As for stealth, are you seriously trying to convince me that radio and/or heat invisibility has any value in an era where we can simply target on sight instead? If I were a country posing a threat to any country with aircraft carriers, I could easily launch high-resolution optics into low earth orbit to track said aircraft carriers for peanuts. I would know precisely where each carrier was and would pick up the exhaust plume from any take-offs, which could then be visually tracked.

As for all the fancy AI features and tech: I’m sorry, but unless the pilots are engineers with 20+ years of experience across multiple disciplines of technology, the economy of scale required for proper bug reporting cannot be achieved. Consider, for example, the programs you currently use.

Software which costs A LOT and is only available to a limited number of technicians is buggy as hell; see ServiceNow or Cisco ISE for examples. Consider Apple’s Final Cut Pro, which used to cost thousands of dollars. It was a bug-ridden piece of shit. Users tended to find workarounds rather than reporting bugs, and the bugs they did report were generally written quite awfully.

Software with thousands or hundreds of thousands of users produces public forums that greatly increase the number of bugs reported, often by multiple sources, allowing them to be addressed and proper fixes to be created.

The only alternative would be for the developers to actually dogfood their own products in real production environments. That way, when they encounter the problems themselves, they can properly instrument their systems and build fixes far more efficiently.

With a billion-dollar aircraft, there is no chance in hell any government will allow a developer/engineer into a cockpit and afterwards let them duct-tape a 3D-printed diagnostics tool to it without months or years of lab testing first. Trial-and-error troubleshooting is completely out of the question.

The fact is, the guy/gal capable of diagnosing and fixing the problem won’t be allowed anywhere near the driver’s seat of the vehicle to do their jobs. They probably won’t even be allowed on the carriers to observe from nearby.

There are so many problems, from a purely common-sense perspective, with a plane that costs this much that it is sickening.

They built a plane that costs so much that as soon as one crashes, malfunctions, etc., the cost is so high that the rest will have to be grounded until an investigation committee approves further flight trials. Let’s not forget that if a plane malfunctions and a pilot bails out, no matter how awesome that pilot may be, he/she will never see the inside of a cockpit again. You simply don’t crash a billion-dollar aircraft and expect governments to turn the other cheek. In fact, you probably will never find a job flying for FedEx after that.

This might be the dumbest aircraft project in history. Right up there with the Russian space shuttle project.

For a billion dollars, a country could design, build and deploy over 10,000 long-range, armored kamikaze drones. They can be controlled like video games and can fly, land and explode on 10,000 targets simultaneously. No need for nukes. No need for massive bombs and earth-shattering explosions. A single automated factory can produce and deploy them as fast as you can feed it materials. If done properly, a ship could be equipped as a floating factory capable of always building the latest model as needed. It might even be possible to do it from blimps or other airships.

It would be possible to drop hundreds or thousands of drones from near space over a city, then activate their flight systems as they approach the ground, fly them to pre-coordinated targets such as building supports, and demolish entire cities. Just pop up something like Google Maps, click the positions on each building where a bomb drone should deploy, drop 50% more than you need, and let them all navigate to where they will be needed, stick themselves to their positions and wait for the “all clear”.

So while all the F-35 nations are wasting their budgets on useless planes and trying to pass rules about how drones can be used in warfare, countries with cheap labor and limited financial resources are probably figuring out how to 3D print most of their parts, stockpiling materials and preparing for a new type of warfare that F-35s aren’t ready for.

13
6

Cheap-ish. Not Intel. Nice graphics. Pick, er, 3: AMD touts Ryzen Pro processors for business

CheesyTheClown
Silver badge

Re: Microsoft priority for "business" ryzen flawed

Linux doesn’t necessarily have a standard security stack, which is probably the issue. There are many Linux kernel and virtualization security features, and AMD does generally support those. But Windows provides a fairly well-defined set of APIs for the platform as a whole. This means that when you use the Windows encryption APIs, if the CPU supports hardware encryption, it will be hardware-accelerated.

On Linux, you would need an OpenSSL implementation that makes use of kernel modules for encryption, which may or may not be vendor-specific. The same goes for the multitude of other encryption APIs. One downside is that if a bug is found, on Windows the next Windows update will theoretically fix it for everything. On Linux, every kernel module and every encryption library would have to be updated to fix it. That said, the response time to patch these libraries is FAST!!! But if you’re using a Cisco ISE server, it could take 8 months to a year and still not actually be patched... which is why software like this from companies like Cisco should be avoided at all costs.
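To illustrate the per-library patching point: every runtime on a Linux box is tied to whichever crypto library build it links against, and each copy has to be patched separately. A trivial check of what your Python’s TLS is built on:

```python
# Each runtime reports whichever OpenSSL it was built/linked against;
# patching the kernel or another library does nothing for this copy.
import ssl

print(ssl.OPENSSL_VERSION)               # e.g. "OpenSSL 1.1.0g  2 Nov 2017"
print(hex(ssl.OPENSSL_VERSION_NUMBER))   # same version as a packed integer
```

An appliance like ISE bundles its own copies of these libraries, which is exactly why its patch cycle, not yours, decides how long a hole stays open.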

AMD is working just as hard as Intel to support Linux in this sense. But Linux also depends very heavily on the community to update their libraries as quickly as possible. So, if a flaw is found in an AMD encryption or security library, it is very possible that the developers won’t have access to an AMD platform to verify against, though many online CI/CD services exist that probably do.

That said, I tend to unit and integration test my security code against a very limited set of CPUs: the recent Intel generations and a handful of specific ARM CPUs. I probably won’t pay the additional money to test against AMD. The volume wouldn’t be high enough to justify the bother. It would be safer to just say “use at your own risk on AMD”. If AMD ever gains a noticeable market share again, I’ll consider otherwise.
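In practice that gating is nothing fancier than a skip marker keyed off the machine architecture. A minimal sketch with pytest, where the set of architectures is just illustrative:

```python
# Sketch: skip integration tests on CPU architectures we don't verify against.
# The TESTED_ARCHS set is illustrative, not a recommendation.
import platform
import pytest

TESTED_ARCHS = {"x86_64", "aarch64", "armv7l"}

requires_tested_cpu = pytest.mark.skipif(
    platform.machine() not in TESTED_ARCHS,
    reason="security code is only verified on a limited set of CPUs",
)

@requires_tested_cpu
def test_encryption_roundtrip():
    # placeholder for a real encrypt/decrypt round-trip check
    assert True
```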

Of course, I am developing all my server applications against Raspberry Pi now because I simply can’t write code bad enough to justify more than that. I am writing a management system for 2.5 million active users at this time, and since everything other than our internet systems is cloud-based now, I could never imagine needing more than a few Raspberry Pis to handle the few million transactions a day we’re processing.

It was pretty awesome all things considered. A data center at $100 a node after power, storage and connectivity vs. our old servers at $120,000 a node. Better still, thanks to in-memory databases and map/reduce, it’s much faster on the Raspberry Pis because we’re using the money saved on IT to focus more on good development practice.
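The in-memory map/reduce in question is nothing exotic either. A toy example, counting transactions per user from a day’s log, where the file name and line format are placeholders:

```python
# Toy in-memory map/reduce: transactions per user from one day's log.
# The file name and the "user_id,amount" line format are placeholders.
from collections import Counter
from functools import reduce

def mapper(line):
    user, _, _amount = line.strip().partition(",")
    return Counter({user: 1})

def reducer(acc, counts):
    acc.update(counts)
    return acc

with open("transactions-2018-06-05.log") as f:
    per_user = reduce(reducer, map(mapper, f), Counter())

print(per_user.most_common(10))
```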

4
1

Spine-leaf makes grief, says Arista as it reveals new campus kit

CheesyTheClown
Silver badge

Nonsense

The problem with modern data center design is attempting to solve redundancy through networking. The safest design approach is hub and spoke. Three separate clusters built using hub and spoke simplify the network design greatly.

Then, all services should be run as a cluster in all three locations. Instead of fighting to keep a service running at all times in each cluster, simply fight to make sure that at least one cluster is operating at all times.

This design sounds expensive, but consider the decrease in network, fabric and interface costs, and the cost of a third cluster becomes negligible.

This is not 1994. All server software today is designed to operate in n+1 where n is at least 2... systems like Azure Stack, Kubernetes, Mesos, etc. are all more than capable of keeping a service running properly in this design. Also, moving to a NoSQL + object storage environment, and possibly a scale-out file server for legacy applications, means that N+1 storage can easily be handled on gigabit Ethernet.

This design eliminates the need for vMotion/live migration, eliminates the need for SAN and decreases design costs on CapEx as well as OpEx across the board.

Designing based on a “VMware is the only way” approach increases costs at least 10-fold. It increases the cost of hardware, software and more. In addition, it means the platform is generally designed from the perspective of how an IT crew would see it, without any understanding of what services IS actually needs.

A recent study I participated in showed that more than 95% of the cost of building and operating a data center was actually based purely on the cost of building and operating the management systems for that data center. By rethinking the design based on the operations of IS, we could increase uptime substantially and decrease costs even more.

The moral of the story is, friends don’t let friends let IT people anywhere near their data centers.

0
0

You have GNU sense of humor! Glibc abortion 'joke' diff tiff leaves Richard Stallman miffed

CheesyTheClown
Silver badge

Shouldn’t quality and professionalism be the issue?

Man pages on Linux have been in nearly constant decline relative to the number of features added to the system. As authors of Linux utilities depend more on web-based documentation, man pages have become more and more horrifying in quality.

Let’s also make clear that there are many of us who believe Stallman should simply be muted and censored, as his behavior is generally reprehensible. I have actually seen legal teams oppose the use of LGPL code because they feared being associated in any way with such an oaf. I do not discount the contributions made by Stallman, but I believe his damage to the GNU world far outweighs the benefits at this time. He clearly marks everything he touches as questionable with regards to professionalism.

As for jokes in man pages: those can be saved for flame wars in forums. There is no benefit to adding them to documentation that should be free of anything other than empirical data, unless positing a theory with regard to appropriate use. For example, “I would recommend use of an alternative function, as the algorithm used in this one may prove questionable with regard to data security.”

I have no opinion regarding the specific joke in question, as I see it as lacking the depth necessary to make it entertaining. I see it as offering no more engaging value than the labeling of a manhole cover. But I also believe that even if it were a funny joke, it would be better kept to the forums.

27
84

if dev == woman then dont_be(asshole): Stack Overflow tries again to be more friendly to non-male non-pasty coders

CheesyTheClown
Silver badge

Re: Maybe a silly question, but...

I’m utterly confused.

If I ask:

How can I marshal an event generated in a callback on one thread into the user interface thread?

How exactly would a naughty comment be made?
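(For the record, a rough sketch of what an answer to that might look like, in Python/Tkinter terms since every toolkit has its own flavour: the callback thread never touches the UI, it just queues the event and the UI thread drains the queue.)

```python
# Sketch: marshal events from a worker/callback thread to the UI thread.
# The worker pushes onto a queue; the UI thread polls it with after().
import queue
import threading
import tkinter as tk

events = queue.Queue()

def worker():
    # pretend this is a callback fired on some background thread
    events.put("work finished")

def drain_queue():
    try:
        while True:
            label.config(text=events.get_nowait())  # safe: UI thread only
    except queue.Empty:
        pass
    root.after(100, drain_queue)                    # poll again in 100 ms

root = tk.Tk()
label = tk.Label(root, text="waiting...")
label.pack()
threading.Thread(target=worker, daemon=True).start()
root.after(100, drain_queue)
root.mainloop()
```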

5
0

Highway to the auto-zone: Cisco is catching up to Brocade in Fibre Channel speed race

CheesyTheClown
Silver badge

How about horses for carts?

I wonder how Cisco is holding up on horses for horse-drawn carts? Are they doing well there?

Any nifty tools to speed up Morse code for telegraph?

Oh.. they have NVMe fabric stuff too? I wonder how many suckers will buy into that?

I hear Cisco has some great stuff for making the telephone more nifty too.

0
0

Single single-sign-on SNAFU threatens three Cisco products

CheesyTheClown
Silver badge

Re: Is it me...

Nope, it's you... there's just really no point patching Cisco security products.

Let's keep this simple. If an Internet-facing device is not automatically patching itself, it is not a security device.

Security devices download security patches live and deploy them in the background.

Cisco's desktop software panders to network/security engineers who can't work with desktop teams to properly deploy automatic software updates.

For that matter, a core feature of Cisco ISE is to ensure you have all the updates you need or it won't let you in, yet it has no subscription service to inform itself of those requirements. So no one actually enforces the rules, and no one ever upgrades.

Don't worry... ISE is only Cisco's most important security tool in their entire portfolio, but they try to keep it secure by shipping 1-2 updates a year. They ignore security bug reports... for example, in their impressively insecure SAML implementation in ISE... I mean really, I have never seen such horrible code in a security product. Watch the logs for SAML and see it burn. If you can't hack ISE after watching the SAML logs, you are simply dense; I bet even the sales guy could hack ISE after looking there.

The moral of the story is... Cisco doesn't make security products. They make lots of stuff they sell as security products. And if they fail, it was your fault for not properly maintaining them.

0
0
CheesyTheClown
Silver badge

Re: Is it me...

The answer is of course no.

1) Cisco, Check Point, Palo Alto, etc. all run their firewalls on top of Linux distributions which they don't properly maintain. Cisco, for example, tries to make its own Linux LTS branch but only selectively pulls in patches. To be honest, while Linux is great for many things, security is pretty close to the bottom of the list. I still think Linux should be called the "hackers' den".

2) Most modern firewalls run as virtual appliances, often on VMware. VMware drivers are a rat's nest of security holes that simply are not solvable. Their VMXNET3 driver, which they ship as the default in the Linux kernel (the one which EVERYONE uses), is so full of security holes it's disgusting. It's extremely problematic when firewalls running on VMware become insecure, because you can simply code-inject as much as you'd like before the kernel even knows there's a packet of data. 100% untraceable.

3) pfSense is frigging awesome but doesn't scale at all.

4) Juniper is quite nice, but once you get past a 50-user office on an appliance, it's a waste of effort.

As a note, before anyone goes all Palo Alto on my ass: Palo Alto is good as long as you don't touch anything. Just plug it in, make it run passive, set a password, configure your subscription, and that's it. Palo Alto is among the worst firewalls I've ever encountered because it rapidly weakens as you change configuration.

So the answer is simply... no, you can't buy a real firewall instead. You have to make do with whichever option gives you the best company to sue when you get hacked.

That said... and I REALLY REALLY REALLY don't want to be nice to them, I ABSOLUTELY HATE THESE BUGGERS... I kinda almost sorta like the solution from McAfee. I don't have that much experience with them, but I find that because they have a great deal of experience with desktop clients, and they try to be part of Windows and Mac instead of some half-assed AnyConnect-like solution, they do a far better job of integrating for end-to-end security than anything I've seen from anyone else. Their software is good at keeping itself updated. And their management portal for everything from edge to desktop is actually usable.

But in the end, they are pretty much all shit

1
0

Slick HCI trick: VMware smooths off vSAN's rough edges

CheesyTheClown
Silver badge

What about the price tag?

So, Windows Server Datacenter, which needs to be licensed for each server anyway, comes with Hyper-V, Storage Spaces Direct and Microsoft Network Controller. It also comes with Project Honolulu. I would call this a direct replacement for VMware, but VMware just isn't even close anymore. Every time I touch VMware, I feel like I'm saying hello to 2009.

Then there's Nutanix which is pretty expensive but is a single product which includes storage, networking and management.

The only reason to use VMware is if you spent a million dollars or more on hardware and instead of replacing that hardware with $200,000 worth, you will instead insist on paying another million dollars in software to avoid admitting you made a mistake buying the first million dollars of stuff.

People who buy VMware are people who let the vendors tell them what they need. I always love it when an IT sales guy comes in who has 25 years experience selling IT stuff to IT people in lots of different business types. They have absolutely no idea what the actual business of the customer is, but they are telling the customer what their needs are.

I was in a room at a company last year. I have 10 years of experience as a developer of video codecs and transmission protocols. I worked alongside people who went on to take the most important technical/signal-processing roles at organizations like the European Space Agency. I've even been known to spend a morning extending a protocol to support transmitting additional languages so that the UN could broadcast more languages with their TV signals. I was brought in by our sales guy, a guy who has been selling IT crap like VMware for 25 years and considers himself an expert. He refused to let the engineer in charge of their video encoding system and myself discuss what they were doing. He insisted on directing the conversation and exerting his dominance in the meeting to try to force the customer to buy what Cisco and VMware were telling him was the important stuff to sell this week. When I clearly explained that he was attempting to force the customer to buy $2 million worth of equipment for a job which didn't justify more than $100,000, and that the $2 million sale would accomplish absolutely nothing for the customer since the customer didn't need any of the stuff he was trying to sell, he became irate and refused to bring me to meetings anymore.

In the future, spend time thinking on this one question.

What percentage of your IT spending goes to buying infrastructure hardware or software that doesn't actually accomplish any specific business task, other than hopefully making the other infrastructure hardware or software work better, which itself probably doesn't have a direct business case either?

Now ask yourself this.

What percentage of your storage is actually business data and what percentage is simply storing things like operating systems and ISOs and all that stuff? Do you honestly have 2 terabytes of business data?

What about this.

Have you actually spent money on things like Fibre Channel, SANs and other storage subsystems to improve performance and the stability of large data transfers, simply to improve the speed at which the non-business data moves? So, how much money have you spent in the last X number of years to improve storage performance... not because you're generating millions of transactions per second for business tasks, but because you built IT systems so complex that things like NVMe storage subsystems seem like a good idea?

Let's go to this.

Now that most of your core systems like identity, messaging, collaboration and office are mostly cloud based, are you still building expensive storage and virtualization systems? If you were to evaluate actual business data performance requirements, you'd probably find a small cluster of SQL servers running on Intel NUC machines would more than satisfy your entire enterprise's needs for non-cloud storage. Your entire business systems probably would also run perfectly well on a few NUCs. Are you spending in IT wisely? Are you letting system integrators sell you things you clearly don't need simply because you need to better support something else you bought which also adds no business value?

Before you lash out at me, take the time to tell me... could you be worth more to your company by decreasing IT spending and breaking from religious beliefs like VMware and SANs? Could you make a 5-year plan to minimize IT spending and facilitate your company's needs? Wouldn't you be clearly more valuable to management if you spent your time trying to facilitate management's needs instead of the IT sales guy's needs?

Don't get me wrong, I feed my family by you buying tons of shit you don't need. In fact, if you actually used your brain to actually support your company as opposed to me and my family, I'd be out of work. In the past 6 years of working in IT, I have not once actually provided value to the customer. I have let the advertisements convince the management they should look into technology to make them more agile. Then we go in with absolutely no understanding of their business and sell them systems they don't really have a plan for either and then charge lots of hours to help them deploy systems they didn't need and would never use to ensure that the systems we sold them which they didn't need and wouldn't use wouldn't be wasted.

An example... if you're considering VMware for a NoSQL environment, this is a very, very, very bad idea. NoSQL performs best when it's scaled very wide, and works even better when run on bare metal or at least in a container. Deploying NoSQL on VMware and on a SAN goes against absolutely everything that NoSQL was designed to fix. The same is true for things like Hadoop and map/reduce systems. These are systems that should never, ever be found in virtualized environments like VMware. But it is clear that VMware is working on making things like MongoDB work on vSAN, which might be the dumbest thing I've ever heard. Well... other than running MongoDB on a SAN. That is truly the worst investment in history: using enterprise storage with enterprise backplanes on enterprise servers to run a system that was specifically designed to 100% eliminate the need for enterprise storage with enterprise backplanes on enterprise servers.

But... I'm sure that some salesman will have a great Christmas because of you.

4
7

Car-crash television: 'Excuse me ma'am, do you speak English?' 'Yes I do,' replies AMD's CEO

CheesyTheClown
Silver badge

Re: F1 is a Car Crash

I'm pretty impressed.... I had to look up what F1 was. Then I realized it was those cars from Iron Man.

I didn't realize people knew this much about people who drive around in circles over and over again.

I suppose it's cultural or something.

Is it true that these cars are meant to be as similar as possible and that the organizers strictly prohibit the teams from doing anything to modernize the vehicles beyond tuning them? Is it basically really well-tuned Ford Model T technology? It's just an internal combustion engine with lots of electronics to tweak and tune it, right?

From a technological perspective, are they allowed to do anything interesting outside of material sciences? Can they even do anything good with material sciences? Like could they make the body of a more advanced composite than their competition? Could they make something like a run flat tire using a carbon nanotube structure which would allow them three or four more laps without changing tires?

It disappoints me a little that AMD would spend so much money on something as wasteful as this. But I'd imagine that it helps them make sales.

5
16

Accenture, Capgemini, Deloitte creating app to register 3m EU nationals living in Brexit Britain

CheesyTheClown
Silver badge

Re: An app?

If they don’t have a supported mobile phone, they won’t be able to install the UK-approved “Big Brother” backdoor required so authorities can unlock the phone.

Do you honestly think Theresa May will approve anyone who doesn’t prove themselves to the UK by forfeiting their right to privacy?

5
0

My PC makes ‘negative energy waves’, said user, then demanded fix

CheesyTheClown
Silver badge

Allergies to blue LEDs, not WiFi

I have cured a lot of users’ ailments caused by their allergy to blue LEDs. By turning off the LEDs on Cisco access points, the people suffering the most can sit in peace while using their laptops on WiFi, everywhere you disable the lights.

2
0

Mind the gap: Men paid 18.6% more than women in Blighty tech sector

CheesyTheClown
Silver badge

This is the wrong measurement

I would like to see a comparison of three things :

1) People who do their work task by task vs. people who do their work and brag about every single thing they do each time they do something.

I believe wholeheartedly that if you were to do this research, you'd find that the gender gap shrinks considerably. Men or women who spend less time working and more time bragging about how important their contributions to the company are get paid a great deal more.

2) People who climb ladders actively vs. those who work and expect to be rewarded fairly.

You'll find that people who "make themselves seem important" and then actively create bidding wars for themselves are paid far better than people who don't.

And most importantly....

3) Height and voice depth

I'm absolutely convinced that you'll find that taller people (regardless of gender) are paid more. Women can of course equalize this by wearing heels, but when heels go past a certain point, they end up looking cheap and desperate. There is only so much they can do here. Of course, man or woman, keeping a small waistline will exaggerate the appearance of height, so living as a vegan or an anorexic can even the odds here.

Of course, voice depth means a lot. Listen to a man or woman with a higher pitched voice vs. a deeper voice. You'll likely find that the deeper the person's voice, the more serious and important they seem. This is true until such time as a voice becomes so deep that no one can understand it.

Consider that someone listing their accomplishments in baritone sounds confident. Listing your accomplishments in soprano sounds like whining. I think you'll find that women who have an alto voice will consistently perform better than those who speak with a soprano.

Bonus) Accents

The more "educated" a dialect and vocabulary, the higher a person will be paid. The more appropriately a person uses larger, more advanced words, with a more distinguished, professor-like pronunciation, the more they will earn.

Many of these things can be faked, but "faking it" takes time, effort and also talent to get right. If you look like you're faking any of them, you won't be taken seriously, and people will see you as weak and worth less instead. So 4.5" heels on a woman can look natural if her feet are proportionately large enough; 5" heels look like a secretary trying to show off her legs. The same goes for men and shoes: raised heels inside a shoe can't be more than a centimetre or two, max. Elongating a man's legs to look "feminine" by adding 3 or 4cm makes the man appear submissive. The goal is to achieve dominance through appearance and elocution while not appearing as though you're attempting to do so.

I am in a top working-class salary bracket. I make a lot more than nearly every woman in the company (and there are A LOT of them) and have held a senior-level position. I have completely wrecked the averages because it would take A LOT of women to make the average lean back towards them. I achieved this through a combination of dedication to my work as well as marketing myself wisely. My salary will most likely not increase drastically again relative to inflation, as I've reached my pay ceiling for my skills and my comfort level in "pimping/whoring myself out". In fact, my goal is to maintain 5 years in this bracket before I start seeing a decline without a major shift in strategy.

Let's research the real issues and unfortunately learn that some people actually achieve higher pay by manipulating their physical appearance to be paid more.

5
1

Meet the open sorcerers who have vowed to make Facebook history

CheesyTheClown
Silver badge

Re: standards exist from ITU, GSM and IETF

XMPP didn’t catch on because it was even worse than the SIP it was trying to replace.

I have to implement an XMPP server soon. I have been googling like mad for months and while I’ve implemented dozens of major protocols in my life, I haven’t the slightest idea where to start with XMPP.

If you can’t implement a protocol, you can’t integrate it. It looks to me like XMPP will take months or more just to get the basic features running.

No I can’t use a library. They’re not good enough.

No I won’t use a C or C++ program, I refuse to take those security risks. I will need to support communication between 100,000+ devices and the only reason we need XMPP is because of security. I’m not going to start by using languages which run native code on the servers.
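For context, the opening exchange looks like this (RFC 6120): the client sends a stream header and then has to negotiate STARTTLS, SASL and resource binding from the features the server returns, which is where the months go. A bare-bones sketch of just that first step, with a placeholder server name:

```python
# Bare-bones sketch of opening an XMPP client stream (RFC 6120).
# "example.org" is a placeholder; real code continues with STARTTLS and
# SASL negotiation based on the <stream:features/> the server returns.
import socket

SERVER = "example.org"
STREAM_HEADER = (
    "<?xml version='1.0'?>"
    f"<stream:stream to='{SERVER}' version='1.0' "
    "xmlns='jabber:client' "
    "xmlns:stream='http://etherx.jabber.org/streams'>"
)

with socket.create_connection((SERVER, 5222), timeout=5) as sock:
    sock.sendall(STREAM_HEADER.encode())
    reply = sock.recv(4096).decode(errors="replace")
    print(reply)   # server's stream header plus <stream:features/>
```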

1
1
