* Posts by CheesyTheClown

690 posts • joined 3 Jul 2009


DNAaaahahaha: Twins' 23andMe, Ancestry, etc genetic tests vary wildly, surprising no one

CheesyTheClown Silver badge

Boffins or Buffoons?

They obviously are different :)

Journalists generally have absolutely no respect for science.

You're quoting "Boffins at Yale university, having studied the women's raw DNA data, said all the numbers should have been dead on."

Let's start by saying that the definition of a pair of identical twins is that they were both hatched from the same egg, or more accurately, the egg split in half after being fertilized and produced two separate masses which eventually developed into two individual humans. I have not looked it up, so I'm already guilty of one of the same critical mistakes made by the journalist: I have not verified my facts. But this is how I understand it.

If I am correct, then cellular reproduction through mitosis should have split the nucleotides of the original cell precisely. This means that only a certain percentage of the genetic pairs being reproduced survive intact. Let me clarify: from what I can fathom, simple mathematical entropy dictates that there must be errors in cellular reproduction. It is not mathematically possible for two cells to be 100% alike. 99.9999% is realistic, but not 100%. This is a mandatory aspect of science. To make this particularly clear, refer to Walter Lewin's initial lecture from Physics 8.01 in MIT OpenCourseWare on YouTube, where he explains how to measure in science.
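
Since I'm invoking math, here's a back-of-the-envelope version of that entropy argument in Python. Both figures are assumptions for illustration (a ~3 billion base-pair genome, and a post-repair copying error rate on the order of 1 in 10 billion per base per division), not numbers I've verified:

```python
# Rough sanity check on how fast two cell lineages must drift apart.
# Assumed figures: ~3e9 base pairs per genome, ~1e-10 errors per base per division.
GENOME_BASE_PAIRS = 3e9
ERROR_RATE = 1e-10
DIVISIONS = 45  # very rough depth of a cell lineage from egg to adult tissue

expected_errors = GENOME_BASE_PAIRS * ERROR_RATE            # new mutations per division
p_perfect_division = (1 - ERROR_RATE) ** GENOME_BASE_PAIRS  # one division, zero errors
p_perfect_lineage = p_perfect_division ** DIVISIONS         # a whole lineage staying pristine

print(f"expected mutations per division: {expected_errors:.2f}")               # ~0.30
print(f"chance one division is error-free: {p_perfect_division:.2f}")          # ~0.74
print(f"chance a 45-division lineage is error-free: {p_perfect_lineage:.1e}")  # ~1e-6
```

By this rough math, a single perfect copy is actually possible, but across the billions of divisions in two growing humans, 100% identical genomes are a fantasy, which is the point.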

So, we're presented with your quote "Boffins at Yale..." which leads me to ask:

- What is a boffin?

- What is the measure of a boffin?

- Who is qualified to measure whether this individual is a boffin?

- What is the track record of accuracy by the boffin?

- Was the boffin a scientist? If so what field?

- Was the boffin a student? If so, what level and field?

- Was the boffin an administrator? Were they a scientist before? When did they last practice? How well did their research hold up when peer reviewed? Did they leave science because they weren't very good at it and now they wear a suit?

And "having studied the women's raw DNA data"

- How contaminated was the data (0% is not possible)?

- How was it studied?

- Were all 3 billion base pairs sequenced and compared? Or was it simply a selection?

- Did they study just the saliva as the companies did or was it a blood sample?

- Does saliva increase contamination?

- Was the DNA sample taken at Yale?

- Was the DNA sample shipped?

- If it was shipped, was it shipped the same way?

- Could air freight cause premature decay or even mutation, etc...?

And "said all the numbers should have been dead on"

- Was the boffin really a boffin? (see above)

- What does dead on mean? What is the percentage of error?

- What numbers are we talking about?

- Should the results have been identical between the twins?

- Could two samples from the same twin produce the same discrepancies?

- Could two separate analyses of the same sample produce the same discrepancies?

- What happens when one person spits into a tube and then the spit is separated into two tubes?

- Did the boffin say "all the numbers should have been dead on" or did they provide a meaningful figure?

- Is this the journalist's interpretation of what the boffin said?

- Are these words the words of the original journalist, or was it rewritten to make it sound more British? I've never heard any self-respecting person use the term boffin if they actually had a clue to begin with. Same for the word expert.

- Did the "boffin" dumb down the results for someone who is obviously oblivious?

Overall, does this comment translate to :

"Qualified scientists with proven track records specializing in the field of human genetic research as it relates to ancestry evaluated 98% of the sample from each of the two twins, verified that they are in fact identical and that there should be no more than 5% margin of error when comparing the results of genetic studies between the two girls."

I'm not a scientist. I'm barely a high school graduate. But before I would dispute the accuracy of what the AC said by countering with "Boffins at Yale university, having studied the women's raw DNA data, said all the numbers should have been dead on.", I would absolutely start by attempting to answer the questions above.

At this time, while I believe strongly that these twins are in fact identical and I believe they were verified at some point in their life (whether by a "boffin at Yale" or elsewhere) to have been conceived from the same egg, I would not offer it as evidence without further research or conclusive proof. That would be an insult to science. My belief is irrelevant unless this were some high school sociological report for a political science class. And even then, that's a misnomer.

Let's address ancestry as well.

I read the results, and let's take an excerpt as you did.

- 23andMe reckoned the twins are about 40 per cent Italian, and 25 per cent Eastern European;

- AncestryDNA said they are about 40 per cent Russia or Eastern European, and 30 per cent Italian;

- MyHeritageDNA concluded they are about 60 per cent Balkan, and 20 per cent Greek.

I'm no expert on ancestry, but I have questions.

- What does it mean to be Italian, Eastern European, Russian, etc.?

- Within how many generations would they be counting?

- Would the 40% Italian that refers to the 1800s very likely mean Balkan and Greek in 200 AD?

- Is Russian of Eastern European, East Asian or Central Asian descent?

- Is Balkan Eastern European or Russian?

What I'm reading here is that all three companies were in 100% agreement. If anything, for an imperfect science, it's impressive how perfectly they agree.

As another item

- Two of the tests reported that the twins had no Middle Eastern ancestry, while the three others did, with FamilyTreeDNA saying 13 per cent of their sample matched with the region.

I'm pretty sure that if we believe modern science, everyone on earth should come from the Middle East, and before that Africa. So unless the twins come from another planet, they have Middle Eastern descent. The question is, how many generations back? Also, what counts as Middle Eastern descent? If we refer to religion, only a few thousand years ago Jerusalem was at war with the Seleucid empire, which almost definitely spread seeds from the Middle East to the Mediterranean; or maybe the Mediterranean seed was spread widely enough to influence what is considered the Middle Eastern bloodline today.

Again, I don't see the data conflicting; I would need far more information to sound even moderately intelligent.

And I'll attack one more

- On top of this, each test couldn't quite agree on the percentages between the sisters, which is odd because the twins share a single genetic profile.

This is absolutely 10000% not true. They are born from the same egg. I can't find her birth date, but I'd put her at about 33 years old, though that may simply be because she dresses like an old lady. Either way, let's round to 30 years old.

Unless she and her sister have been in the womb together until last week, which I doubt, they have been through vastly different lives, causing countless changes in their genetic code relative to one another. Their genes can't possibly be that similar anymore. Almost certainly less than 99% similar now. That's at least 30 million differences from one another, given that the human genome consists of about 3 billion nucleotides, or base pairs.

Consider that simply walking in the sun causes genetic mutation. Pressures from drinking water can cause genetic mutation (I read a peer-reviewed paper on this, but can't cite it).

So, please don't bash the AC, he's as full of shit as I am... the only thing I got out of this article is that some girl who has a twin sister has now made an international impact with her ignorance of science, in an effort to make a headline to increase the ratings of her TV show, which should have served the purpose of informing people of facts.

I would like to see some real research on this topic from people who are far smarter than me. At this time, I have many questions and wouldn't even know where to start answering them.

IBM to kill off Watson... Workspace from end of February

CheesyTheClown Silver badge

Maybe if someone knew it was there?

Ok, so I work for a company which is A LOT older than IBM, has one tenth the head count but is 1/4 the size in dollars.... and while IBM is a pretty interesting company, I wonder if there's something failing when IBM isn't sitting at my office begging me to buy their stuff.

My company has money to spend for our customers which could be worth several points on IBM's share value if they were to make an effort. But while we give Cisco about $2 billion a year, I don't think IBM even tries to gain our love. And to be honest, with the project I'm working on, if I had even known that Watson Workspace was there, I might have considered it as a solution.

IBM is failing because Ginni is targeting only C-level business, and she's an acquisitions and mergers monster. She's great at that. Every time she loses an important customer, she buys the company she lost them to. But here's the thing. Working for the world's second largest telecom provider, within walking distance of the CEO's office, I couldn't tell you how I would even start a conversation with IBM. I bet they have a pile of crap I could find interesting that could save me a lot of time and work making deliveries to my customers. I'd even consider buying a mainframe and starting new 20-year projects on it. But I have no idea where I would get the people or expertise or even the contacts required to even talk with IBM.

I guess IBM only wants to sell to people who know what they have.

Oracle boss's Brexit Britain trip shutdown due to US government shutdown

CheesyTheClown Silver badge

Re: WTAF?

I was wondering about this as well. I don't care if you're Donald Trump or even someone important, try getting through Heathrow without a passport. You can't even transfer planes in the same terminal without being treated like a criminal at Heathrow. I've traveled business or first class through Heathrow many times, and I just finished moving all my frequent flier miles (200,000+ Executive Club miles) to Oneworld because I refuse to travel through the UK anymore since the security got stupid.

So, the author is awful. This was an expired passport.

I'm an American citizen and I've had passports replaced in 12 hours or less by FedExing the forms to Boston and having them walked through by an agent. It's pretty simple actually.

But during a shutdown, I'd imagine that this is not possible. That said, it's an Oracle thing, it's not really important that people like Hurd show up... even Larry does things like ditching his keynote at Oracle conferences to play with his boats.

Insiders! The good news: Windows 10 Sandbox is here for testing. Bad news: Microsoft has already broken it

CheesyTheClown Silver badge

Re: Windows sandbox

Ok... first of all, Sandbox is an Insider feature, which means that things will and DO go wrong. It's not meant to be reliable software, it's meant to be bleeding edge. Think of it as the alpha and beta versions of times past.

Second of all, security fixes for release builds generally come into Insider builds as well. The security fixes are tested against release, and if they're tested against Insider too, it's a bonus.

Internet Explorer is an application built on top of many Windows APIs, including for example the URL engine for things like fetching and caching web media. It's like libcurl. Just like libcurl, wget, etc., there are always updates and security patches being made to it. So, when making updates and security fixes to the web browser, if fixes need to be made to the underlying operating system as well, they are.

That said, I've had fixes to IE break my code many times. This is ok. I get a more secure and often higher performance platform to run my code on. It's worth a patch or two if it keeps my users safe.

As for what the sandbox does, I'd imagine that the same APIs which IE uses to sandbox the file system from web apps requesting access to system resources (probably via JavaScript) are used to provide file system access, for example. If I had to guess, it's probably tied to the file linking mechanism.

London Gatwick Airport reopens but drone chaos perps still not found

CheesyTheClown Silver badge

Re: How hard is the approximate localization of a 2.4GHz sender operating in or near an airport?

Let’s assume for the moment that we were to plan the perfect crime here. This is a fun game.

1) Communications

Don’t use ISM, instead use LTE and VPNs. It’s pretty cheap and easy to buy SIM cards and activate them in places like Romania without a postal address or even ID. Buy cards that works throughout Europe. They’re cheap enough and far more effective for ranged communication than 2.4. Additionally, jamming is a bigger problem as you can’t jam telephone signals at an airport during a crisis. In a place like England where people are dumb enough to vote Brexit and have fights in bars over children kicking balls around, it would cause riots.

2) Use 3D printed drones. Don’t buy commercial, they’re too expensive and too easy to track. Just download any of a hundred tested designs and order the parts from a hundred different Chinese shops.

3) Don’t design for automatic landing.

4) Add solar cells and plan ahead. Don't try planting them yourself; instead, launch them 20 miles away from different sites and have them fly closer each day at low altitude until they are close.

5) Don’t depend on remote control. Write a program which uses GPS and AGPS to put on an “Animitronic Performance”. Then they can run for hours without fear of interference.

6) Stream everything to Twitch or something else anonymously. Send streams to Indonesian web sites or something like that instead.

I have to go shopping... but it would be fun to continue ... share your thoughts.

It's the wobbly Microsoft service sweepstake! If you have 'Teams', you've won a lifetime Slack sub

CheesyTheClown Silver badge

Re: Of course, given recent statements from the rumor mill...

What do you mean poorly on Linux? I’m not baiting, I’m currently investing heavily into Powershell on Linux and would like a heads up.

Bordeaux-no! Wine guzzling at UK.gov events rises 20%

CheesyTheClown Silver badge

This is true

I've met polite people from France.

I've actually met pretty girls from England

I've met Finnish people who understand sarcasm

I've met Americans who aren't entirely binary in every one of their beliefs

I haven't bothered with American wines in the past 20 years, though I know they have improved.

I generally drink Spanish wine (Marqués de Cáceres, Faustino I) as they are compatible with my food preferences.

I have a fridge full of Dom Perignon I have been collecting for 20 years to serve at my children's weddings.

The one thing I can be sure of though... booze is simply too strong these days, and it's being ruined across all international borders. When I read comments from the UK about people who work in places where booze is permitted or not, I'm generally shocked. A glass of wine by 1950s and earlier standards would have been something like 3-5% alcohol, and it may have been watered as well. These days, at 13% and higher, the person drinking it is probably useless for a while afterwards.

Also, a glass of wine in the 1950's was considerably smaller than it is today. Having a glass of wine with lunch really didn't provide enough alcohol to consider. Today however, people are basically getting buzzed at lunch.

I would love to see a return to when "drinking wine" or "table wine" was a good idea. Just enough alcohol to make the flavor work, but not enough to get blasted. I've had terrible experiences with modern wines. They all taste like alcohol. It's almost as if we're judging the quality of a drink based on how well we believe it will mess us up. I wonder if the European nations still remember how to make wine properly, and if they could actually create wines that earned their merits on flavor as opposed to toxicity.

Well now you node: They're not known for speed, but Ceph storage systems can fly

CheesyTheClown Silver badge

Re: 6ms+ w NVMe

I was thinking the same thing. But when working with asynchronous writes, it’s actually not an issue. The real issue is how many writes can be queued. If you look at most of the block based storage systems (NetApp for example) they all have insanely low write latency, but their scalability is horrifying. I would never consider Ceph for block storage since that’s just plain stupid. Block storage is dead and only for VM losers who insist on having no clue what is actually using the storage.

I would have been far more interested in seeing database performance tests running on the storage cluster. I also think that things like erasure coding are just a terrible idea in general. File or record replication is the only sensible solution for modern storage.

A major issue, which most people ignore on modern storage and which is why block storage is just plain stupid, is transaction management on power loss. Write times tend to take a really long time when writes are entirely transactional. NVMe as a fabric protocol is a really, really, really bad idea because it removes any intelligence from the write process.

The main problem with write latency for block storage on a system like Ceph is that it's basically reliably storing blocks as files. This has a really high cost. It's a great design, but again, block storage is just so wrong on so many levels that I wish they would just kill it off.

So if Micron wants to impress me, I'd much rather see a cluster of much, much smaller nodes running something like MongoDB or Couchbase. A great test would be performance across a cluster of LattePanda Alpha nodes with a single Micron SSD each. Use gigabit network switches and enable QoS and multicast. I suspect they'd see quadruple the performance that they are publishing here, for substantially less money.
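
Something along these lines is the database-level test I mean, sketched with pymongo; the host names, document size and iteration count are placeholder assumptions, not a benchmark spec:

```python
# Measure durable insert throughput at the database layer, not raw block latency.
import time
from pymongo import MongoClient, WriteConcern

# Placeholder replica set; in the test above these would be the LattePanda nodes.
client = MongoClient("mongodb://node1,node2,node3/?replicaSet=rs0")
docs = client.bench.get_collection(
    "docs",
    write_concern=WriteConcern(w="majority", j=True),  # acknowledged, journaled writes
)

OPS = 1000
payload = "x" * 1024  # 1 KiB of junk per document

start = time.perf_counter()
for i in range(OPS):
    docs.insert_one({"seq": i, "blob": payload})
elapsed = time.perf_counter() - start

print(f"{OPS / elapsed:.0f} durable inserts/sec")
print(f"{elapsed / OPS * 1000:.2f} ms mean latency per insert")
```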

Better yet, how about a similar design providing high performance object storage for photographs? When managing map/reduce cluster storage, add hot and cold tiers as well; it would be dozens of times faster per transaction.

This is a design that uses new tools to solve an old problem which no one should be wasting more money on. Big servers are soooooo 2015.

Oi! Not encrypting RPC traffic? IETF bods would like to change that

CheesyTheClown Silver badge

Re: stunnel, wireguard

TLS1.3 is a major change. I'd imagine that with new protocols, we'd use TLS and DTLS 1.3 as opposed to earlier versions.

Also consider that the performance issues with earlier versions of TLS have been mostly handshake related. This is a short term problem for NFS, since NFS sessions are long-lived.

There are some real issues with NFSv4 which make it unsuitable for environments which require distance. It's not nearly as terrible as using a FibreChannel technology, but it can be pretty bad all the same. Most people don't properly prepare their networks for NFSv4. NFS loses so much performance that it's barely usable if the MTU on the connection is less than 8500 bytes.

NFS also has a ridiculously high retry overhead.
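
Here's the rough math behind both of those complaints, in Python; the header overhead figure is an assumption, but the fragmentation behaviour is the point. An NFS-over-UDP read is one RPC datagram, fragmented at the IP layer, and losing any single fragment means the whole RPC times out and is resent:

```python
# How many IP fragments one 8 KiB NFS-over-UDP read turns into at a given MTU.
RPC_BYTES = 8192 + 200    # rsize plus assumed RPC/NFS header overhead
IP_UDP_HEADERS = 20 + 8   # IPv4 + UDP headers (UDP header on the first fragment only)

for mtu in (1500, 8500, 9000):
    payload_per_frame = mtu - IP_UDP_HEADERS
    fragments = -(-RPC_BYTES // payload_per_frame)  # ceiling division
    print(f"MTU {mtu}: {fragments} fragment(s); one lost fragment = full RPC retry")
```

At a 1500-byte MTU every 8 KiB read is six fragments, so a single dropped frame costs you the whole read again; at 8500 and above it's one datagram, which is why that threshold matters.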

NFS should NEVER EVER EVER EVER be run over TCP... if you ever think that running NFS over TCP is a good idea... stop everything you're doing and read the RFC, which explains that TCP support is only there for interoperability. Unless you're using some REALLY REALLY bad software like VMware, which seems wholly intent on having poor NFS support (no pNFS support for how long after pNFS came out?), you should run NFS as UDP only.

There are many reasons for this... the most obvious being that TCP is a truly horrible protocol. It's a quick and dirty solution for programmers who don't want to learn how protocols work or understand anything about state machines. UDP is for people who have real work to do. QUIC is even better, but that's a little while off.

I would recommend against using wireGuard.

- It's doing in kernel what should be done in user space

- It's two letter variable name hell

- It's directly modifying sk_buff instead of using helper functions, which increases the risk of security holes being introduced as the kernel gets updated over time.

- Key exchange is extremely limited

I won't say I see any real security holes in it, and I will admit it's some of the most cleanly written kernel module code I've seen in a long time. But there's a LOT of complexity in there and it's running in absolutely privileged kernel mode. It looks like a great place to attack a server. One minor unnoticed change to the kernel tree, specifically to sk_buff, and this thing is a welcome mat for hackers.

FPGAs? Sure, them too. Liqid pours chips over composable computing systems

CheesyTheClown Silver badge

Counter-intuitive

I'm somewhat proficient in VHDL and I've done a bit of functional programming as well. The issue is that when something generally is a series of instructions, it's often uncomfortable and simply backwards to describe things in terms of state.

I've told people before that a great starting point for learning to do VHDL is to write a parser using a language grammar tool. It's one of the simplest forms of functional programming to learn.

Another thing to realize is that the backwards fashion in which most HDLs are written makes them extra difficult, since even a "Hello World" is a nightmare: there's a LOT of setup to do to produce even a basic synthesized entity. Hell, for that matter, simply the setup work for an entity itself is intimidating if you don't already implicitly understand what that means.

There's been a lot of work put into things like SystemC and SystemVerilog to make this all a little easier, but it's still a HUGE leap.

Now, OpenCL has proven to be a great solution for a lot of people. While the code generated by OpenCL for the purpose is generally horrible at best, it does lower the entry level a great deal for programmers.

Consider a card like this one which Liqid is pushing.

You need to take a data set, load it into memory in a way that makes it available to the FPGA (whether internally or over the PCIe bus), then you need to make easily parallelizable code which can be provided as source to the engine which compiles it and uploads it to the card. Of course, the complexity of the compilation phase is substantially higher than uploading to a GPU, so the processing time can be very long. Then the code is loaded on the card and executed and the resulting data needs to be transferred back to main memory.
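
A hedged sketch of that host-side round trip using pyopencl; the kernel and sizes are toy assumptions, and on an FPGA target the build() call below becomes an offline synthesis run measured in minutes or hours rather than a near-instant JIT:

```python
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()   # pick a device (GPU, CPU or FPGA platform)
queue = cl.CommandQueue(ctx)

kernel_src = """
__kernel void scale(__global float *buf, const float k) {
    int i = get_global_id(0);
    buf[i] *= k;
}
"""
program = cl.Program(ctx, kernel_src).build()  # the costly synthesis step on FPGAs

data = np.arange(1 << 20, dtype=np.float32)    # the data set to push to the device
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                hostbuf=data)

program.scale(queue, data.shape, None, buf, np.float32(2.0))  # run on the device
cl.enqueue_copy(queue, data, buf)              # pull results back to main memory
```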

There are A LOT of programmers who wouldn't have the first idea where to start with this. There's always cut and paste, but it can be extremely difficult to learn to write OpenCL code that, after the time spent compiling (synthesizing), uploading and running, couldn't simply have been run faster on the CPU.

Then there are things like memory alignment. Programmers who understand memory alignment on x86 CPUs (and there are far fewer of those than there should be) can find themselves lost when considering that RAM within an FPGA is addressed entirely differently. Heck, RAM within the FPGA might have 5 or more entirely different access patterns. Consider that most programmers (except for people like those on the x264 project) rarely consider how their code interacts with L1, L2, L3 and L4 cache. They simply spray and pray. Processor affinity is almost never a consideration. We probably wouldn't even need most supercomputers if scientific programmers understood how to distribute their data sets across memory properly.

I've increased calculation performance on data sets more than 10,000 fold within a few hours just by aligning memory and distributing the data set so that key coefficients would always reside within L1 or worst case, L2 cache.

I've sped up code even more simply by choosing the proper fixed-size matrix multiplication function for the job. It's fascinating how many developers simply multiply a matrix against another matrix with complete disregard for how a matrix is calculated. I once saw a 50,000x performance improvement by refactoring the math of a relatively simple formula from a 3x4 to a 4x4 matrix and moving it from an arbitrary math library to a game programmer's library. The company I did it for was amazed, because they had been renting GPU time to run Matlab in the cloud, and by simply making code which could be optimized properly by the compiler... a total of Google -> Copy & Paste -> Compile -> Link... the company saved tens of thousands of dollars.
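
A toy Python version of that refactor, with NumPy standing in for the game programmer's library; the shapes and point count are illustrative:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((100_000, 3))
m34 = rng.random((3, 4))   # arbitrary 3x4 affine: rotation/scale block + translation

# The "arbitrary math library" way: apply the 3x4 transform one point at a time.
start = time.perf_counter()
slow = np.array([m34[:, :3] @ p + m34[:, 3] for p in points])
loop_secs = time.perf_counter() - start

# The "game library" way: pad to a homogeneous 4x4 and do one big matmul.
m44 = np.vstack([m34, [0.0, 0.0, 0.0, 1.0]])
homog = np.hstack([points, np.ones((len(points), 1))])
start = time.perf_counter()
fast = (homog @ m44.T)[:, :3]
matmul_secs = time.perf_counter() - start

assert np.allclose(slow, fast)
print(f"per-point loop: {loop_secs:.3f}s, single 4x4 matmul: {matmul_secs:.4f}s")
```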

When I see things like the latest two entries in the supercomputer Top500, all I can think is that the code running on them almost certainly could be optimized to distribute via Docker into a Kubernetes cluster, that the data sets could be logically distributed for map/reduce, and that instead of buying a hundred million dollars of computer or renting time on one, the same simulations could be performed for a few hundred bucks in the cloud.

Hell, if the data set were properly optimized for map/reduce instead of using some insane massive shared memory monster, it probably would run on used servers in a rack. I bought a 128-core Cisco UCS cluster with 1.5TB of RAM for under $15,000. It doesn't even have GPUs, and for a rough comparison, when I tested using crypto-currency mining as a PoC, it was out-performing $15,000 worth of Nvidia graphics cards... of course, the power cost was MUCH higher, but it wasn't meant to test the feasibility of crypto mining, it was just a means of testing highly optimized code on different platforms. And frankly, scrypt is a pretty good test.

I'll tell you... FP is lovely... if you can bend to it. F# is very nice and Haskell is pretty nice as well. Some purists will swear by LISP or Scheme, and there's the crazies in the Ericsson camp.

The issue with FP isn't whether it's good or easy. It's the same problem you'll encounter with HDLs: the code written in it is generally written by very mathematical minds that think in terms of state, and that makes it utterly unreadable.

Another 3D printer? Oh, stop it, you're killing us. Perhaps literally: Fears over ultrafine dust

CheesyTheClown Silver badge

Re: 'Give us money'

I’m not certain. I’ve been looking into charcoal filtration for the printers I share an office with. I find that SLA printing is nasty to share a room with. FDM isn’t as bad, but I sometimes wonder if I’m getting headaches from it. I currently have 4 FDM printers running pretty much 24/7 and it’s better to be safe than sorry.

Samsung claims key-value Z-SSD will be fastest flash ever

CheesyTheClown Silver badge

Yes please

Just... yes please.

I’ve been desperately waiting for a something like this. If they have a KV solution which supports replication that would be absolutely amazing!!!

There's no 'I' in 'IMFT' – because Micron intends to buy Intel out of 3D XPoint joint venture

CheesyTheClown Silver badge

Optane flopped because

RAM prices were too high, and to use Optane as an acceleration tool for SSD was too rich for most people's blood. Let's not forget that GPU prices were triple what was reasonable. There simply was no room in most people's budgets for a product that didn't give enough of a boost to justify the additional cost, as opposed to getting faster RAM or a better GPU.

That 'Surface will die in 2019' prediction is still a goer, says soothsayer

CheesyTheClown Silver badge

Is there anything wrong with Windows 10?

Ok, so I'm at a loss... I have Windows 10 in front of me now. I seriously can't see anything particularly wrong with it. It's fast, it's responsive, it's stable, it generally just works. It hasn't had most of the security issues that we've had in the past, and most of the modern security issues are about users messing up.

I would say pretty much the same about Mac OS X. The real shortcoming of OS X these days is that if you want to run Linux, you need a VM, whereas Windows doesn't need one. And the Mac OS X command line is extremely limited compared to Linux.

Oslo clever clogs craft code to scan di mavens and snare dodgy staff

CheesyTheClown Silver badge

Re: It's all academic

The funny thing is that Norwegian law wouldn’t allow this system to be used :)

Spoiler alert: Google's would-be iPhone killer Pixel 3 – so many leaks

CheesyTheClown Silver badge

Re: Fscking notch...

And, you can’t hold the phone one handed and read shit without constantly moving your fingers.

CheesyTheClown Silver badge

Re: Mistaken

I agree. Though this past year, I have started purchasing or renting films from Google Play and Windows Store. This is because Apple makes it difficult for me to even understand which account I’m paying from. Sometimes I buy a film on iTunes and it pulls from my PayPal... other times it pulls from my credit card. Google and Microsoft are easier to manage.

I buy iPhone because Apple makes one or two models a year and updates seem to come for years after they stop selling the model. That makes me feel as though there is a return on investment. Or it did. But since around the time Jobs kicked off, the iPhone has become progressively worse. In addition, my entire phone seems hellbent on trying to sell me shit. I mean, seriously.

I’ve bought most of the songs I like already. I have about 2000-3000 tracks in my iTunes catalog. If I were to pay for Apple Music, I would need to listen to an average of about 15 new songs a month... every month for it to be profitable. That means I’d have to listen to 180 new songs a year to make it cheaper than buying the songs I like outright. I’m not that guy. Most of what I listen to is old. I don’t even turn the stereo in my car on. I have no interest in listening to music to simply hear noise. I don’t want Apple Music. I will never want Apple Music. Why the fuck can’t I open my music player and not be constantly attacked about buying Apple Music?

Then there’s the headphone jack. I have two laptops, an iPhone and a TV at home. Bluetooth sucks for that. Why would I ever want to spend my whole life pairing my headphones. It’s easier to just plug and unplug. Also, I depend on corded headphones to make sure that I never leave my headphones or telephone behind.

Apple is sooooooooo far from what I came to love about them. But what does it matter if I'm just someone who used to spend $7,000 a year with Apple. Now I have a Surface Book 2 and am willing to switch to Android if Google releases a high end phone with a headphone jack. I'm willing to pay $1200 for a Google branded phone (I won't buy knock-offs made by companies who don't write the OS). It should be small enough to fit in my pocket but large enough to read. It should have edges so I don't have to move my fingers to read text... none of this curving off the edge shit. It should also be easy enough to unlock that I don't need to look at it or pick it up to see if I want to pick it up. Thumbprint is fine.

Basically, I want an iPhone 6S Plus but with Android. I have a top spec iPhone X sitting on the coffee table collecting dust. I’m back on my 6S Plus... the last good phone Apple made... but Apple apparently doesn’t run unit tests on the 6S Plus anymore.

Developer goes rogue, shoots four colleagues at ERP code maker

CheesyTheClown Silver badge

An American also seems involved.

Many countries have many guns. It's a US anomaly with regards to human behavior that is causing the shootings. If you haven't ever been to America, the US is somewhat of a cesspool of hate and almost British-like superiority trips. It's a non-stop environment of toxicity. Their news networks run almost non-stop hate trips to hopefully scrape by with enough ratings and viewers.

I left America 20 years ago and each time I go back, I’m absolutely shocked at how everyone is superior to everyone else. I just met an American yesterday who in less than two minutes told me why his daughter was superior to her peers.

It’s also amazing how incredible the toxicity of hate is. It’s a non-stop degradation of humanity. Every news paper, news channel, social media network, etc... is absolutely non-stop negativity.

It’s not about the guns... I think the guns are just an excuse now. I think it’s about everyone from the president downward selling superiority, hate and distrust. I’m pretty sure if you took the guns away, it would be bombs.

Spent your week box-ticking? It can't be as bad as the folk at this firm

CheesyTheClown Silver badge

Cisco ISE

It sounds like Cisco ISE’s TrustSec tools.

The good news is that in the latest version, the mouse wheel works most of the time. It used to be: click 5 boxes, then move to the tiny little scroll bar, and then click 5 more. Now you can click 5 and scroll using the wheel. So safely clicking 676 boxes when you have 26 groups (a 26x26 matrix) is almost doable without too many mistakes now.

Hello 'WOS': Windows on Arm now has a price

CheesyTheClown Silver badge

Re: I Wish You Luck

I use ARM every day in my development environment. I work almost entirely on Raspberry Pi these days.

I would profit greatly from a Windows laptop running on ARM with Raspbian running in WSL.

That said, I already get 12 hours of battery life on my Surface Book 2 for watching videos, and I also have a Core i7 with 16GB RAM and a GTX 1060.

Nokia basically destroyed their entire telephone business by shipping underpowered machines with too little RAM, because they actually believed battery life was why people bought phones. They bragged non-stop about how Symbian didn't need 200MHz CPUs and 32MB of RAM, and yet the web did, and when the iPhone came out and was a CPU, memory and battery whore, people dumped Nokia like the piece of crap it was. The switch to Windows was just a final death throe.

After all these years, ARM advocates seem to think people give a crap about battery life and are willing to sacrifice all else, like compatibility or usability, just because they can't carry a small charger with them. I honestly believe that until ARM laptops are down to $399 or less and deliver always-online Core i5 performance, they won't sell more than a handful of laptops.

Let’s also consider that no company shipping Qualcomm laptops are making a real effort at it. They’re building them just in case someone shows interest. But really, the mass market doesn’t have a clue what this is or why it matters and for that much money, there are far more impressive options.

And oh... connectivity. If always connected was really a core business for Microsoft, why is it that my 2018 model Surface Book 2 15" doesn't pack LTE?

VMware 'pressured' hotel to shut down tech event close to VMworld, IGEL sues resort giant

CheesyTheClown Silver badge

Skipped Cisco Live for two years and will skip the next

Cisco has been holding Live! in Vegas lately. I have absolutely no interest in me, my colleagues or my customers being in Vegas for the event.

The town is too loud. It's very tacky. It is precisely the place civilized people would not want to be associated with. Let's be honest: "what happens in Vegas..." Guess what, that is not the kind of professional relationship I want to maintain with those who depend on me or whom I depend on.

Why would you want to hold a conference in Vegas?

1) Legalized prostitution

2) Legalized gambling

3) Free booze at the tables

4) Free or cheap buffets to gorge yourself at

5) Readily available narcotics of all sorts

6) Massive amounts of waste... not a little; the city must be one of the most disgustingly wasteful cities on earth.

7) Sequins... if that’s your thing.

Can you honestly say that you would want your serious customers to believe this is the type of behavior you associate with professionalism?

Pavilion compares RocE and TCP NVMe over Fabrics performance

CheesyTheClown Silver badge

Digging for use cases?

Ok, let’s kill the use case already.

MongoDB... you scale this out, not up. MongoDB's performance will always be better when run with local disk instead of centralized storage.

Then, let’s talk how MongoDB is deployed.

It’s done through Kubernetes... not as a VM, but as a container. If you need more storage per node, you probably need a new DB admin who actually has a clue.

Then there’s development environment. When you deploy a development environment, you run minikube and deploy. Done. No point in spinning up a whole VM. It’s just wasteful and locks the developer into a desktop.

Of course there’s also cloud instances of MongoDB if you really need something online to be shared.

And for tests... you would never use a production database cluster for tests. You wouldn't spin up a new database cluster on a SAN or central storage. You'd run it on minikube, or in the cloud on AppVeyor or something similar.

If latency is really an issue for your storage, then instead of a few narrow 25GbE pipes to an oversubscribed PCIe ASIC for switching and an FPGA for block lookups, you would use more small scale nodes, map/reduce, and spread the workload with tiered storage.

A 25GbE or RoCE network in general would cost a massive fortune to compensate for a poorly designed database. Instead, it's better to use 1GbE or even 100MbE and scale the compute workload out onto more small nodes. 99% of the time, 100 $500 nodes connected by $30-a-port networking will use less power, cost considerably less to operate and perform substantially better than 9 $25,000 nodes.

Also, with a proper map/reduce design, the vast majority of operations become RAM based, which will drastically reduce latency compared to even the most impressive NVMe architectures based on obsessive scrubbing. Go the extra mile, make indexes that are actually well formed, and use views and/or eventing to mutate records, and NVMe is a really useless idea.

Now, a common problem I’ve encountered is in HPC... this is an area where propagating data sets for map reduce can consume hours of time given the right data set. There are times where processes don’t justify 2 extra months of optimization. In this case, NVMe is still a bad idea because RAM caching in an RDMA environment is much smarter.

I just don’t see a market for all flash NVMe except in legacy networks.

That said, I just designed a data center network for a legacy VMware installation earlier today. I threw about $120,000 of switches at the problem. Of course, if we had worked on downscaling the data center and moving to K8s, we probably could have saved the company $2 million over the next 3 years.

You lead the all-flash array market. And you, you, you, you, you and you...

CheesyTheClown Silver badge

What's the value anymore?

Ok, here's the thing... all flash is generally a really bad idea for multiple different reasons.

M.2 flash has a theoretical maximum performance of 3.94GB/sec of bandwidth on the bus. Therefore a system with 10 of these drives should theoretically be able to transfer an aggregate bandwidth of 39.4GB a second in the right circumstances.

A single lane of networking or fibre channel is approximately 25Gb/sec, which is less than the bus bandwidth of a single drive and less than a tenth of that aggregate. So in a circumstance where a controller could provide 10 or more lanes of bus bandwidth for data transfers, this would be great, but these numbers are so incredibly high that this is not even an option.

So, we know for a fact that the bus capacity of even the highest performance storage systems can barely make a dent in a very low end all flash environment.

Let's get to semiconductors.

Let's consider 10 M.2 drives with 4 32Gb Fibre Channel adapters. This would mean that a minimum of 72 PCIe 3.0 lanes would be required to allow full saturation of all busses.
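
Running those rough numbers in Python; the effective 32Gb FC rate is my assumption (64b/66b line coding puts usable throughput somewhere near 3.2GB/s per port):

```python
# Drive-side vs. fabric-side bandwidth for the configuration described above.
M2_GBYTES_SEC = 3.94   # PCIe 3.0 x4 per M.2 drive
DRIVES = 10
FC_GBYTES_SEC = 3.2    # assumed effective throughput per 32Gb FC port
HBAS = 4

flash_side = M2_GBYTES_SEC * DRIVES
fabric_side = FC_GBYTES_SEC * HBAS

print(f"flash side: {flash_side:.1f} GB/s")    # ~39.4 GB/s
print(f"fabric side: {fabric_side:.1f} GB/s")  # ~12.8 GB/s
print(f"fabric drains ~{fabric_side / flash_side:.0%} of the array's bus capacity")
```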

This is great, but the next problem is that in this configuration, there's no means of block translation between systems. That means that things like virtual LUNs would not be possible.

It is theoretically possible to implement in an FPGA (DO NOT USE AN ASIC HERE) a traffic controller capable of handling protocols and full capacity translation, using a CPU-style MMU to translate regions of storage instead of regions of memory, but the complexity would have to be extremely limited, and because of translation table coherency, it would be extremely volatile.

Now... the next issue is that, assuming some absolute miracle worker out there manages to develop a provisioning, translation and allocation system for coarse-grained storage, this would more or less mean that things like thin provisioned LUNs would be borderline impossible in this configuration. In fact, based on modern technology, it could maybe be possible with custom FPGAs designed specifically for an individual design, but the volumes would be far too low for an ASIC vendor to ever see a return on investment.

Well, now we're back to dumb storage arrays. That means no compression, thin provisioning or deduplication, and without at least another 40 lanes of PCIe 3.0 serialized over fibre for long runs, there's pretty much no chance of guaranteed replication.

Remember this is only a 10 device M.2 system with only 4 fibre channel HBAs.

All flash vs. spinning disk hybrid has never been a sane argument. Any storage system needs to properly manage storage. The protocols and the software involved need to be rock solid and well designed. FibreChannel and iSCSI have so much legacy that they're utterly useless for modern storage, as they don't handle real world storage problems on the right sides of the cable anymore. Even with things like VMware's SCSI extensions for VAAI, there is far too much on the cable, and thanks to fixed-size blocks, it should never exist. If nothing else, they lack any support for compression. Forget other things like client-side deduplication, where hashes could be calculated not just for dedup, but as an additional non-secure means of authentication.

Now let's discuss cost a little.

Mathematics, physics and pure logic say that data redundancy requires a minimum of 3 active copies of a single piece of data at all times. This is not negotiable. This is an absolute bare minimum. That means that to meet the minimum requirement for redundant data, a company should have a minimum of 3 full storage arrays, and possibly a 4th for circumstances involving long term maintenance.

To build an all flash array with a minimal configuration, this would cost so much money that no company on earth should ever piss that much away. It just doesn't make sense.

The same holds true of fibre channel fabrics. There need to be at least 3 in order to make commitments to uptime. This is not my rule. This is elementary school level math.

Fibre channel may support this, but the software and systems don't. It can be done on iSCSI, but certainly not on NVMe as a fabric for example. The cost would also be impossible to justify.

This is no longer 2010, when virtualization was nifty and fun and worth a try. This is 2018, when a single server can theoretically need to recover from the failure of 500 or more virtual machines at a time.

All Flash is not an option anymore. It's absolutely necessary to consider eliminating dumb storage. This means block based storage. We have a limited number of storage requirements which is reflected by every cloud vendor.

1) File storage.

This can be solved using S3 and many other methods, but S3 on a broadly distributed file system makes perfect sense. If you need NFS for now... have fun, but avoid it. The important factor to consider here is that classical random file I/O is no longer a requirement (see the sketch after this list).

2) Table/SQL storage

This is a legacy technology which is on its VERY SLOW way out. We'll still see a lot of systems actively developed against this technology for some time, but it's no longer a preferred means of storage for systems, as it lacks flexibility and is extremely hard to manage back end storage for.

3) Unstructured storage

This is often called NoSQL. This means all systems have queryable storage which works kinda like records in a database, but far smarter. The data is saved like a file, but the contents can be queried. Looking at a system like Mongo or Couchbase shows what this is. Redis is good for this too, but generally has volatility issues.

4) Logging

Unstructured storage can often be used for this, but the query front end will be more focused on record ages with regards to querying and storage tiering.

Unless a storage solution offers all 4 of these, it's not really a storage solution; it's just a bunch of drives and cables with severely limited bandwidth being constantly fought over.
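
As promised under item 1, a minimal sketch of the object-storage pattern in Python with boto3; the endpoint, bucket and key are placeholders for whatever S3-compatible store you actually run:

```python
import boto3

# Placeholder endpoint: any S3-compatible object store answers the same API.
s3 = boto3.client("s3", endpoint_url="https://objects.example.internal")

# Whole-object put/get replaces classical random file I/O.
s3.put_object(Bucket="app-data", Key="reports/2018/q3.json", Body=b'{"ok": true}')
obj = s3.get_object(Bucket="app-data", Key="reports/2018/q3.json")
print(obj["Body"].read())
```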

Map/reduce technology is absolutely a minimum requirement for all modern storage, and this requires full layer-7 capabilities in the storage subsystems. This way, as nodes are added, performance increases and, in many cases, overhead decreases.

As such, it makes no sense to implement a data center today on SAN technology. It really makes absolutely no sense at all to deploy, for example, a container-based architecture on such a technology.

If you want to better understand this, start googling at Kubernetes and work your way through containerd and cgroups. You'll find that block storage should always be local only. This means that if you were to deploy, for example, MongoDB, SQL servers, etc. as containers, they should always have permanent data stores that require no network or fabric access. All requests will be managed locally and the system will scale as needed. Booting nodes via SAN may seem logical as well, but the overhead is extremely high, and in reality, PXE or preferably HTTPS booting via UEFI is a much better solution.

Oh... and enterprise SSD is just a bad investment. It doesn't actually offer any benefits when your storage system is properly designed. RAID is really really really a bad idea. This is not how you secure storage anymore. It's really just wasted disk and wasted performance.

But there are a lot of companies out there who waste a lot of money on virtual machines. This is for legacy reasons. I suppose this will keep happening for a while. But if your IT department is even moderately competent, they should not be installing all flash arrays; they should instead be optimizing the storage solutions they already have to operate with the datasets they're actually running. I think you'll find that, with the exception of some very special and very large data sets (like a capture from a run of the Large Hadron Collider), more often than not, most existing virtualized storage systems would work just as well with a few SSD drives added as cache for their existing spinning disks.

Flash, spinning rust, cloud 'n' tape. Squeeze. Oof. Hyperconverge our storage suitcase, would you?

CheesyTheClown Silver badge

Re: Lenovo and Cloudistics could be a fail

This looks great, but suffers the same general problem as Azure Stack.

First of all, to be honest, from a governance perspective, I don't trust Google to meet our needs. If nothing else, I don't trust Google to respect safe harbour. Microsoft has now spent years fighting the US government with regards to safe harbour issues, but Google simply provides transparency related to them. I have absolutely nothing to hide personally, but for business, I have to be vigilant with regards to people's medical and financial records. This is not information that any company outside of my country has a legal right to. That means I can't even trust a root certificate from outside this country. That also means that I can't use any identity systems controlled by any company outside of this country. That means no Google login or Azure AD. That also means no Azure Stack or GCP.

Beyond that, Cisco simply doesn't make anything even close to small enough for cloud computing anymore. They used to have the UCS-M series blades which were still too big. To run a cloud, you need a minimum of 9 nodes spread across 3 locations. The infrastructure cost of Cisco is far too high to consider this.

It's much better to have more nodes in more locations. As such we're experimenting with single board computers like Raspberry Pi (which is too underpowered but is promising) and LattePanda Alphas which are too expensive and possibly overpowered to run a cloud infrastructure.

We're looking now at Fedora (we'd choose RedHat, but don't know how to do business with them), Kubernetes, Couchbase and .NET Core. This combination seems to be among the most solid options on the market. We're also looking at OpenFaaS, but OpenFaaS is extremely heavy weight in the sense that it spins up containers for everything. Containers are insanely heavy to host a function. So we're looking into other means of isolating code.

We're walking very softly because we know that as soon as a component becomes part of our cloud, it's a permanent part which will require 20-50 years support. We need something we know will run on new hardware and have support.

Google is amazing and I'd love to use a hybrid cloud, but the problem with public clouds in general is that the money we could be spending on developers and engineers and supporting our customers is instead being burned on governance, compliance and legal. Instead, we need a fully detached system, which is why I was attracted by Lenovo's solution until it was clear that Cloudistics is focused only on selling to C-level types and not to the engineers who will have to use it.

CheesyTheClown Silver badge

Lenovo and Cloudistics could be a fail

So, I'm working a lot on private cloud these days. The reason is that none of the public cloud vendors meet my governance requirements for the system my company is developing.

Azure Stack is out of the question because it requires that the platform is connected to the Internet for Azure AD. So... no luck there.

I've been looking and looking and to be fair, the best solution I've seen is to simply install Linux, Kubernetes, Couchbase and OpenFaaS. With these four items, it should be possible to run and maintain pretty much anything we need. We'll have to contribute changes to OpenFaaS as it's still not quite the answer to all our problems, and we're considering writing a Couchbase backend for OpenFaaS as well. But once all that is covered, it's a much better solution than other things.

That said, we keep our eyes open for alternatives. So when I saw a possible solution in this article, I went to check. It's a closed platform with no developer (or system administrator) documentation online. There are no open source links and there's no apparent community behind it.

So, why in the world would anyone ever invest in a platform from a company like Cloudistics which no one has ever heard of, which has no community and hence no "experts", and which more than likely won't exist in 12 months' time?

If I were a shareholder of a company which chose to use this solution in its current state, I would consider litigation for gross mismanagement of the company. This is an excellent example of how companies like Cisco, Lenovo, HPE and others are so completely out of touch with what the cloud is that white box actually makes more sense.

ReactOS 0.4.9 release metes out stability and self-hosting, still looks like a '90s fever dream

CheesyTheClown Silver badge

Re: Use case for ReactOS

I'll start with... because "Some of us like it" and don't really mind paying a few bucks for it.

I also am a heavy development user. And although I am really perfectly happy with vi most of the time, I much prefer Visual Studio. I actually just wrote a Linux kernel module using Visual Studio 2017 and Windows Subsystem for Linux for the most part. Which is really funny since WSL doesn't use the Linux kernel.

There are simply some of us who like to have Windows running on their systems. Even if I were using Linux as the host OS, I would still do most of my work in virtual machines for organizational reasons and frankly, WSL on Windows is just a thing of beauty.

As for the more modern UIs that many people complain about here: I honestly haven't noticed a problem. You press the Windows key and type what you want to start, and it works. This has been true since Windows 7 and has only gotten better over time.

Then there's virtualization. Hyper-V is a paravirtualization engine which is frigging spectacular. And with the latest release of QEMU, which is accelerated on Windows now (like kqemu was), you can run anything and everything beautifully.

I have no issues with the software you run... I believe if you sat coding next to me, you'd probably see as many cool new things as I'd see sitting next to you. But honestly, I've never found a computer which runs Linux desktop with even mediocre performance. They're generally just too slow for me. So, I use Windows which is ridiculously fast instead.

As for Bill Gates. Are you aware that Bill has more or less sold out of Microsoft? He's down to little more than 1% of the company. You can give Microsoft gobs of money and he would never really notice. Take it a little further and you might realize that this isn't the Bill Gates of the 1980s. He's grown up and now is a pretty darn good fella. So far as I can tell, since he's been married, he's evolved into one of the most amazingly nice people on earth. I can't see that he's done anything in the past 15-20 years which would actually justify a dislike of him or a distrust of his motives.... unless you're Donald Trump who Bill kind of attacked recently for speaking a little too affectionately about Bill's daughter's appearance.

Windows 10 IoT Core Services unleashed to public preview

CheesyTheClown Silver badge

Re: Well if MS are offering to do that...

Some of us don't use registered MAC addresses. We simply use duplicate address detection (DAD) and randomize. There's really no benefit to registered MAC addresses anymore. Simply set the 7th bit of the first octet (the locally administered bit) to 1 and use a DAD method, as in the sketch below.
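
A minimal sketch of that scheme in Python; collisions are left to duplicate address detection, which is the whole point:

```python
import random

def random_local_mac() -> str:
    """Random unicast MAC with the locally administered (U/L) bit set."""
    octets = [random.randrange(256) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE  # set local-admin bit, clear multicast bit
    return ":".join(f"{o:02x}" for o in octets)

print(random_local_mac())  # e.g. "0e:5a:..."; never collides with a registered OUI
```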

Also consider that many of us don't use Ethernet for connectivity. There are many other solutions for IoT. A friend of mine just brought up a 1.2 million node multinational IoT network on LTE.

MAC address filtering and management is basically a dead end. There's just no value in it for many of us. It really only adds massive management overhead to the production of devices. And layer-2 is so bunged up to begin with that random MAC addresses with DAD can't really make it any worse.

Who fancies a six-core, 32GB RAM, 4TB NVME ... convertible tablet?

CheesyTheClown Silver badge

Will have bugs and no love from HP

For a product of this complexity to be good, it needs to reach high enough volumes that the user feedback on the product is good enough to solve problems. A company the size of HP will ship this, but the volume of bug reports will be low due to a few reasons.

1) The user count is low

2) The typical user of this product won't have a reliable means of reporting bugs other than forums. This is because they work for companies who can afford these systems and would have to report through IT. IT will not fully understand or appreciate the problems or how they actually affect the user, and therefore will not be able to convey the problems appropriately.

3) HP does not make the path from user to developer/QA transparent; once the product is shipped, those teams are reassigned.

As such, HP's large product portfolio is precisely why this is a bad purchase. Companies like Microsoft and Apple build a small number of systems and maintain them long term. Even with the huge specifications on these PCs, a lower end system and offloading some work to the cloud is far more fiscally responsible.

Of course, people will buy them and if we read about them later, I doubt the user response will be overly positive.

I'm using a Surface Book 2 15" with a Norwegian keyboard even though I have it configured to English. This is because a LOT of negative feedback reached MS on the earlier shipments, and by buying a model I was sure came off the assembly line a few months later, I was confident that many of the early issues had been addressed.

This laptop from HP will not have that benefit, because to produce them profitably, they will probably need to make almost all the laptops of this model they will ever ship, or at least components like the motherboards, in a single batch. So even later shipments will probably not see any real fundamental fixes.

But if you REALLY need the specs, have a blast :) You’re probably better off with a workstation PC and Remote Desktop from a good laptop though.

Even Microsoft's lost interest in Windows Phone: Skype and Yammer apps killed

CheesyTheClown Silver badge

Re: MS kills UWP apps, Telephony API appears in Windows

Nope, both hands know what’s happening. The telephony APIs allow for Android integration. So the APIs permit Windows 10 Always Online devices (laptops with built in LTE) to provide a consistent experience across phone and laptop.

For instance, you will probably be able to make a call from your laptop. They also integrated messaging.

But I guess that’s not as exciting as assuming it means that Microsoft is confused. :)

White House calls its own China tech cash-inject ban 'fake news'

CheesyTheClown Silver badge

Re: Enjoy this while it lasts

I don’t know whether I want to agree or debate this.

We saw Republicans dropping from the election for no reason that seemed clear. Just one after another dropped out and yielded to Trump with no explanation to be had. Each time they dropped out and made their support for Trump clear, it looked like people behaving as if they were acting under duress.

Bernie seemed to have the real support of the people because they believed in him politically, as though they liked his message. Hillary seemed to garner support from people who liked her making fun of Trump and from people voting for superficial reasons. I've long believed that it's time for a female president. I remember as a child being excited that Geraldine Ferraro was running. But Hillary simply scared me because her message didn't seem to be anything other than "I'll win and it's my turn!"

Sanders dropped out, out of what seemed like frustration over the stubborn child stomping her feet and claiming "I'll win, it's my turn!"

I have great hopes that if this election proved anything to the American people, it's that the two parties are so corrupt that people need a choice, and that neither party is offering one.

Amazon, Facebook, Twitter, Google, Microsoft, Netflix, and others can all change the platform. They can reinvent the entire two-party system overnight. All it would take is for each to build, on their platforms, a new electoral process to identify and support candidates who would then be added to the ballot. If each company ran different competitions and systems to identify and sponsor candidates, we could have a presidential election with 10 or more alternatives to choose from.

They could even allow underdogs to get a grip on the elections. For example, traditional fundraisers, which reward only people willing to sell their political capital, would become irrelevant. People could get elected because they were in fact popular, instead of having sold their souls in exchange for enough money for some commercial time.

I think Trump and Hillary may be the best thing to ever happen to America. If two shit bags like them can end up being the only possible choices the people had, then it’s clear it’s time for a change.

Why aren't startups working? They're not great at creating jobs... or disrupting big biz

CheesyTheClown Silver badge

What do you mean?

So, let's say this is 1980 and you start a new business.

You'll need a personal assistant/secretary to :

- type and post letters

- sort and manage incoming letters

- perform basic bookkeeping tasks

- arrange appointments

- answer phones

- book travel

You'll need an accountant to :

- manage more complex bookkeeping

- apply for small business loans

- arrange yearly reports

You'll need a lawyer to :

- handle daily legal issues

- write simple contracts

You'll need an entire room full of sales people to :

- perform business development tasks

- call every number in the phone book

- manage and maintain customer indexes

You'll need a "copy boy" to

- run errands

- copy things

- distribute mail

Etc...

Now in 2018

You'll need

- an app for your phone to scan receipts into your accounting software

- an accounting app to perform year end reports and to manage your bank accounts

- an app to click together legal documents based on a wizard

- a customer relationship manager application

- a web site service for your home page

- etc...

Let's imagine you are a lawyer in 1980...

- You'd study law

- Graduate

- Take a junior position doing shit work

- Pass the bar

- work for years taking your boss's shitty customers

- work for years trying to sell your body to get your own customers

- once your portfolio was big enough, you'd become a senior partner who would take a cut from everyone else's customers.

The reason the senior lawyer hired junior lawyers was because there was a massive amount of work to do and a senior partner would spend most of their time talking and delegating the actual work to a team of juniors, researchers and paralegals.

Now the senior can do 95% of the work themselves, using an iPad with research and contract software installed, in less time than it would have taken to delegate. So where a law firm may have employed 10-20 juniors, paralegals and researchers per senior in 1980, today one junior lawyer can easily handle the work placed on them by two seniors.

There's no point hiring tons of people anymore. Creating a startup that is dependent on head count is suicide from the beginning. If you're a people-based company, then the second someone smarter sees there's a profit to be made, they'll open the same type of business with far more automation.

Cray slaps an all-flash makeover on its L300 array to do HPC stuff

CheesyTheClown Silver badge

What is the goal to be accomplished?

Let's assume for the moment that we're talking about HPC. So far as I know, whether using Infiniband or RDMAoE, all modern HPC environments are RDMA enabled. To people who don't know what this means, it means that all the memory connected to all the CPUs can be allocated as a single logical pool from all points within the system.

If you had 4000 nodes at 256GB of RAM per node, that would provide approximately 1 petabyte of RAM online at a given time. Loading a dataset into that RAM will take some time, but compared to performing large random access operations across NVMe, which is REALLY REALLY REALLY slow in comparison, it makes absolutely no sense to operate from data storage. Also, storage fabrics, even using NVMe, are ridiculously slow due to the fact that even though layers 1 through 3 are in fact fabric oriented, the layer 4-7 storage protocols are not suited for micro-segmentation. As such, it makes absolutely no sense whatsoever to use NVMe for storage-related tasks in supercomputing environments.

Now, there's the other issue. Most supercomputing code is written using a task broker similar in nature to Kubernetes. It spins up massive numbers of copies wherever CPU capacity is available. This is because, while many supercomputing centers embrace language extensions such as OpenMP to handle instruction-level optimization and threading, they are generally skeptical about run-time type information, which would allow annotating code with attributes that could be used while scheduling tasks.

Consider that moving the data set to the processor upon which it will operate can mean gigabytes, terabytes or even petabytes of memory transfer. However, if the data set were distributed into nodes within zones, then a large-scale dataset could be geographically mapped within the routing regions of a fabric, and the processes, which would require moving megabytes or at worst gigabytes, could be moved to where the data is when needed. This is the same concept as vMotion but far smarter.

If the task is moved from one part of the supercomputer to another to bring it closer to the desired memory set, the program memory can stay entirely intact; only the CPU task will be moved. Then, on heap read operations, the MMU will kick in to access remote pages and relocate the memory locally.

It's a similar principle to map/reduce, except that in a massive data set environment, map/reduce may not work given the unstructured layout of the data. Instead, marking functions with RTTI annotations can allow the JIT and scheduler to move executing processes to the closest available zone within the supercomputer to access the memory needed by the following operations. A process move within a supercomputer using RDMA could happen in microseconds, or milliseconds at worst.
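To make the scheduling idea concrete, here's a toy Python sketch (every name in it is made up for illustration, not taken from any real scheduler):

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    zone: str          # routing region within the fabric
    free_cores: int

def place_task(data_zone, nodes):
    # Prefer a node in the zone that already holds the data set: moving
    # the task costs megabytes, moving the data costs terabytes.
    local = [n for n in nodes if n.zone == data_zone and n.free_cores > 0]
    if local:
        return max(local, key=lambda n: n.free_cores)
    # Otherwise take the freest node anywhere and let the MMU/RDMA layer
    # fault remote pages across as they're touched.
    return max(nodes, key=lambda n: n.free_cores)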

Using a system like this, it could actually be faster to simply have massive tape drives or reel to reel for the data set as only linear access is needed.

But then again... why bother using the millions of dollars of capacity you already own when you could just add a few more million dollars of capacity.

Norwegian tourist board says it can't a-fjord the bad publicity from 'Land of Chlamydia' posters

CheesyTheClown Silver badge

Re: Norwegian History

I think if you checked the Norwegian economy, you might find oil and natural gas doesn't account for as much as you might think.

CheesyTheClown Silver badge

Ummm been done

There's a chain called Kondomriet all over Norway that sells electric replacements for sexual activities that generally require fluid exchange between participants.

They even advertise them pretty much everywhere with an "Orgasm guarantee". Though I wonder if that's just a gimmick. How many people would actually attempt to return a used item such as that?

What's all the C Plus Fuss? Bjarne Stroustrup warns of dangerous future plans for his C++

CheesyTheClown Silver badge

Mark Twain on Language Reform

Read that and it all makes sense

Wires, chips, and LEDs: US trade bigwigs detail Chinese kit that's going to cost a lot more

CheesyTheClown Silver badge

There goes buying from the U.S.

My company resold $750 million of products manufactured in the US last year. Already, these products are at a high premium compared to French and Chinese products. They are a tough sell and it’s almost entirely based on price.

Those items are built mostly from steel, chips, LEDs and wires.

Unless those US companies move their manufacturing outside of the US, we’ll be forced to switch vendors, otherwise the price hikes will be a problem for us. I know that the exported products will have refunds on the duties leaving the US, but the vendors cannot legally charge foreigners less than they charge Americans for these products. So, we’ll have to feel the penalty.

So, I expect to see an email from leadership this coming week telling us to propose alternatives to American products.

Intel confirms it’ll release GPUs in 2020

CheesyTheClown Silver badge

Re: Always good to have competition to rein in that nVidia/AMD duopoly

The big difference between desktop and mobile GPUs is that a mobile GPU is still a GPU. Desktop GPUs are about large-scale cores, and most of the companies you mentioned in the mobile space lack the in-house skills to handle ASIC cores. When you license their tech, usually you're getting a whole lot of VHDL (or similar) bits that can be added to another set of cores. ARM, I believe, does a lot of work on their ASIC synthesis, and of course Qualcomm does as well, but their cores are not meant to be discrete parts.

Remember most IP core companies struggle with high-speed serial buses, which is why USB3, SATA and PCIe running at 10Gb/sec or more are hard to come by from those vendors.

AMD, Intel and NVidia have massive ASIC simulators from companies like Mentor Graphics, costing hundreds of millions of dollars, to verify their designs on. Samsung could probably do it, and probably Qualcomm, but even ARM may have difficulties developing these technologies.

ASIC development is also closed loop. Very few universities in the world offer actual ASIC development programs in-house. The graduates of those programs are quickly sucked up by massive companies and are offered very good packages for their skills.

These days, companies like Google, Microsoft and Apple are doing a lot of ASIC design in house. Most other newcomers don't even know how to manage an ASIC project. It's often surprising that none of the big boys like Qualcomm have sucked up TI, who have strong expertise in DSP ASIC synthesis. Though even TI has struggled A LOT with high-speed serial in recent years. Maxwell's theory is murder for most companies.

So most GPU vendors are limited to what they can design and test in FPGA which is extremely limiting.

Oh... let's not even talk about what problems would arise for most companies attempting to handle either OpenCL or TensorFlow in their hardware and drivers. Or what about Vulkan. All of these would devastate most companies. Consider that AMD, Intel and NVidia release a new GPU driver almost every month. Most small companies couldn't afford that scale of development or even distribution.

UK's first transatlantic F-35 delivery flight delayed by weather

CheesyTheClown Silver badge

Wouldn't it be most responsible if....

The F-35s are simply left grounded?

I mean honestly... who in their right mind would fly something that expensive into a situation where they might get damaged?

Let's face it, if one of these planes becomes damaged in training or in a fight, the financial repercussions would be devastating. That would be massive money simply flushed down the drain.

The pilots are something else we can't afford to risk. Training an F-35 pilot is so amazingly expensive that we can't possibly afford to place them in harm's way.

I think it would be best to just keep the planes grounded.

Microsoft commits: We're buying GitHub for $7.5 beeeeeeellion

CheesyTheClown Silver badge

Re: Shite

haha I actually should have read the entire post first. I went to the same website you did. I have to admit, I shamelessly download software from there all the time because sometimes I forget how good things are today unless I compare them to the days that came before.

I tried writing a compiler using Turbo C 2.0 recently. That simply did not go well.

Even though they had an IDE, it was single-file and lacked all the great new features we love and adore in modern IDEs. Now, I did manage to do it. I had a simple compiler up and running within about an hour, but to be fair, it was an absolute nightmare.

That said, the compile times and executable sizes were really impressive.

But of course, things like real-mode memory were not a great deal of fun. Also, whenever you start coding in C, you get this obsessive need to start rewriting the entire planet. I was 10 minutes away from writing a transpiler to generate C code, because C is such a miserable language to write anything useful in. No concept of a string and pathetic support for data structures and non-relocatable memory... YUCK!!!

I will gladly take Visual Studio 2017 over 1980s text editors. Heck, I'll take Notepad++ over those old tools.

You should get a copy of some of those old tools up and running and try to write something in them. It's actually really funny to find out that the keys don't do what your fingers think they do anymore. And what's worse, try doing it without using Google. :) I swear it's painful but entertaining. GWBASIC is a real hoot.

CheesyTheClown Silver badge

Re: @Harley

I hope you don't mind me asking.

Have you written anything that would justify anyone actually knowing your name?

Writing books since you were an infant?

"and still some cunts get me name wrong"

I'm curious, where is the connection? Being published in a handful of languages... 47 languages suggests that your books probably weren't interesting enough to be picked up more broadly. I would say that if your book was published in 47 languages then :

a) It was probably some fiction novel of some type

b) It didn't catch on enough to justify translating it for lower volume markets

c) It probably hasn't seen the NY Times best seller list and if it had, it was at 97th place for a week.

I suppose I could go on, but let me say that if your book was only translated into 47 languages, there could be a good reason no one has heard of you and certainly no one would know how to spell your name.

Also, while I'm possibly one of the most arrogant and stuck-up assholes on The Register, I like to occasionally contribute something positive and informative. In the last month, you haven't written a single positive or informative comment on any article or in response to someone else's comments. Your entire purpose for posting in the comments is purely to make snotty one-line remarks that are generally degrading.

Now I'm not going to suggest that I'm "Mr. Ray of Fucking Sunshine" over here. But seriously man, did you actually just refer to someone as a cunt... for mistyping a name that probably no one has ever heard of outside of your personal social circle?

I'll make the assumption that you're English as I've never seen another culture on Earth that tosses that word around so nonchalantly as the English do. And to help you better understand yourself, I'll use something I learned from a fellow countryman of yours.

Simon Cowell once made a remark: "Miss, it is your parents' job to tell you how pretty you are and how pretty you sing. But did you ever consider recording yourself and listening to your own singing before coming on this show? You're awful."

Now I'm sure that girl is running around telling everyone how she should be taken more seriously because she has been seen performing in 47 countries and subtitled in 47 languages. And I'm sure that your mom and dad read the first 15 or 20 pages of your book(s) so they can tell you what a great author you are. But let's be honest, the depths of your thinking are far too shallow for you to be successful as a writer. A creative mind would be able to do far better than resort to the most offensive word in his vocabulary to describe a person who mistyped the name of someone no one has ever heard of.

I think I'll try to help make you famous. I do a great deal of public speaking in my work. I do this in at least 47 countries, and people have actually paid to hear me speak in all of them. I'm really famous you know... I'm probably almost as big as David Hasselhoff is in Germany... umm, maybe not quite.

So what I'll do for you is, from now on, whenever I am trying to describe a person who sees themselves as being more impressive than they really are, I'll refer to them as a "J.R. Hartley... and that's with a T". So for example :

I was listening to a climate denier on Fox News the other morning and he made a real ass of himself by publicly claiming he has the ears of the leaders of 47 nations. I mean seriously, could he possibly be more of a J.R. Hartley... and that's Hartley with a T... like the great author as opposed to Hartley without a T... like the broken motorcycle.

I bet with that kind of publicity, you might even get translated to a 48th language someday and then... you will be REALLY famous and no one will ever be a cunt and mistype your name again. And I'm willing to do this just for you... because you my good friend J.R. Hartley with a T are a ray of fucking sunshine!

Five actually useful real-world things that came out at Apple's WWDC

CheesyTheClown Silver badge

Re: Damn it

I'm gonna probably jump to Android soon. I've used iPhone since the early days and am pretty much tired of the non-stop "Apple works with everything, as long as it's Apple".

Home automation works if you have a unit in every room. Amazon Echo is $99 and Echo Dot is $29. So in a house with 5 bedrooms, a living room, a kitchen, two bathrooms and two hallways, the Echo is expensive but a reasonable solution. Home Pod is too big to begin with and even at half the price is too expensive.

I spend about $1000 a year on the iTunes Store. To control my music, either I have to store it on a server after downloading it or I have to use an Apple device. Movies can’t even be decrypted legally, so Apple is a requirement. We have 6 screens in the house, 4 have Chromecast built in. One has an Apple TV and the last has a PC.

We don’t want to add Apple TV to all the screens because they would need separate power and separate remotes. Then there’s the mounting issues.

So, we often find ourselves renting films on Google that we already own on iTunes.

The door locks we have aren’t compatible with any service, but writing a skill for Alexa took about an hour. Writing a function for Cortana took 15 minutes.
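For a sense of the effort involved, a custom Alexa skill handler is more or less this shape (Python on Lambda; UnlockDoorIntent and unlock_front_door are my stand-ins here, the real one talks to the lock vendor's API):

def unlock_front_door():
    pass  # hypothetical call into the lock vendor's REST API

def lambda_handler(event, context):
    # Alexa sends a JSON request; we answer with a JSON response envelope.
    req = event["request"]
    if req["type"] == "IntentRequest" and req["intent"]["name"] == "UnlockDoorIntent":
        unlock_front_door()
        speech = "Okay, the front door is unlocked."
    else:
        speech = "Sorry, I didn't catch that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }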

I don’t believe I will be allowed by Apple to write the skill for Siri, so I’d have to throw away $2000 of perfectly good door locks.

I love my iPhone 6S Plus. But every iPhone patch breaks something new. Watching videos gets more and more inconvenient. My Audible app actually skips... it sounds like a scratched record. I have an iPhone X but I'll end up dead from using it.

Then there’s my car. iPhone integration isn’t bad. But if I want proper integration, I’ll have to pay $400 a year to BMW.

So, I may end up switching to Android even though I hate Android just because it actually gives me options. So I’ll have a phone that sucks, but at least it will work with my other stuff.

Oh, there’s the other issue. I’ve been waiting 8 years for a new line of Macs to buy. The last notebook Apple made which didn’t suck was the MacBook Air 11 inch. I still use a 2011 model of it. And Mac Mini is so out of date it is horrifying. If Apple doesn’t make a new PC suitable for software development before my MacBook dies, I don’t think I’ll buy anything current.

I’m pretty sure Apple as a tech company died with Steve Jobs :(

Have you heard about ransomware? Now's the time to ask: Are you covered?

CheesyTheClown Silver badge

Sure... why simply protect yourself?

Ransomware is for people who can’t turn on Windows Backup/Restore or Apple Time Machine.

How bloody hard is it to simply enable automatic recovery options in the OS? If your company is ever hit by ransomware, it’s because your IT staff or firm is incompetent.

In Windows, it’s a single group policy setting.
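And if you'd rather script it than click it, something like this does the trick (a sketch, run elevated; it assumes the "Turn off File History" policy maps to the FileHistory key the way the ADMX templates describe, and you still need a backup target configured):

import winreg

# Clear the policy bit that disables File History.
key = winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\Policies\Microsoft\Windows\FileHistory",
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, "Disabled", 0, winreg.REG_DWORD, 0)  # 0 = not disabled
winreg.CloseKey(key)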

On Mac, if you haven’t read “Mac for enterprise” documentation and learned how to onboard a Mac for management, you’re a fool. It’s just like group policy.

These are not advanced features. These are sys admin 101 things.

If you have cash to burn, racks to fill, problems to brute-force, Nvidia has an HGX-2 for you

CheesyTheClown Silver badge

CPU from the Terminator?

When I saw the picture, it reminded me of the CPU from the Terminator. Maybe this is what it looked like before it was shrunk?

I’m not sure if that’s relevant when discussing AI

IBM's Watson Health wing left looking poorly after 'massive' layoffs

CheesyTheClown Silver badge

Re: Merge?

I’ve walked into companies, seen HPE and walked out. It’s just not worth the pain. Every time you give them money, they take it, and sell off the business unit. And their servers and networking are just not good enough.

Dixons to shutter 92 UK Carphone Warehouse shops after profit warning

CheesyTheClown Silver badge

Re: No surprise

I shopped at Dixons last summer while visiting Ireland. They blatantly screwed me: they insisted that the advertisement sitting on the counter, which triggered my impulse purchase of an LTE modem with an included data package (clearly marked as such), did not mean a SIM card was included, that I would have to buy it separately, and they refused to take the product back.

The time before that, a few years earlier, they screwed me on something else, but I chalked it up to a failure of the store to train their people.

I am allowed to spend about £500 per person while traveling and remain in my duty free limit. So, when the family and I travel, we spend about £2200 on crap we don’t need but can’t survive without and get duty refunds on the expensive stuff. We also know whatever we buy is disposable, if it breaks, we throw it away.

It's pretty common for us to travel, two or three times a year, to countries which have Dixons. And we spend precisely £0 there... even if they have a better price.

There are almost no companies I wish financial ruin on. But Dixons is one of the few that I do.

Epyc fail? We can defeat AMD's virtual machine encryption, say boffins

CheesyTheClown Silver badge

Re: The attack can only be partially mitigated

Deep packet inspection is generally not worth much. Unless your deep packet inspection engine can sandbox all code and all data that passes through it, it will never be able to provide better security than proper endpoint protection.

Deep packet inspection doesn't offer much more than rate limiting the nonsense traffic, but that alone is worth something. Whether you're using Snort-based Cisco products or pfSense... or whatever, there is value.

That said, I actually come from a broadcast video background. I spent the evening last night speaking about SDI forward error correction and non-return-to-zero with a fellow engineer and my 14 year old daughter. The other guy and I worked together for years developing chips and firmware for those things.

I'd be pretty hard pressed to see any circumstance where there would be any value in an IPS on video content delivery channels. I certainly could never identify a circumstance where there's any value in 40Gb/s networking, unless you're buying into the looney tunes nonsense Cisco started by trying to sucker their customers into buying 10Gb/s networking for delivering content that could be delivered at 800Mb/s with almost no compression (as in 1.5Gb/s SDI, which carries about 1.1Gb/s of actual data and can easily compress to below 1Gb/s without loss or latency issues).
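For anyone checking my numbers, the back-of-envelope goes like this (assuming the common 1080i25, 4:2:2, 10-bit HD-SDI case):

# Active picture payload of a 1.485 Gb/s HD-SDI feed, 1080i25 4:2:2 10-bit
pixels_per_frame = 1920 * 1080
bits_per_pixel = 20              # 10-bit luma + 10-bit alternating chroma
frames_per_second = 25
payload = pixels_per_frame * bits_per_pixel * frames_per_second
print(payload / 1e9)             # ~1.04 Gb/s; the rest of the line rate
                                 # is blanking, audio and ancillary data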

If you're a CDN, you're scaling up when you should be scaling out. That's putting a lot of eggs in one basket. It's a very 1990's-2000's way of thinking. It didn't scale then, it doesn't scale now.

Of course, I'm purely speculating on your design, but even if you're a big production studio handling lots of multi-camera ingest, you are probably way too over-provisioned. Also, if you're doing layered security, you should never be in a circumstance where you'd need to inspect more than a few megabytes a second of traffic.

But again, I'm speculating. Every design usually has a reason other than "we like to spend money"... but these days, with all the SMPTE members pushing for uncompressed (idiots) because it allows them to make A LOT MORE MONEY, a lot of people are falling for it.

CheesyTheClown Silver badge

Re: The attack can only be partially mitigated

Not really about the host.

If there's an attack vector available to a VM from the host... which I'm confident there always must be, due to the thought process I followed above, then the issue is whether it's possible to always mitigate attacks from the guest to the host. And it should be, by employing the old dynamic recompilation support which was used in hypervisors to trap things like legacy inb/outb instructions.

As such, it’s whether someone can hop contexts and read memory of other guests on the same host.

I make a huge effort to encrypt sensitive data (like keychains) in TPM when I’m coding. But so far as I know, there is still no solid TPM virtualization tech.

CheesyTheClown Silver badge

The attack can only be partially mitigated

So long as there's a means to provide plain-text memory access to virtual machines for things like communication with something other than the virtual machine itself... like the hardware or hypervisor, for example, it will always be possible to alter the SLAT to choose which memory to encrypt and which memory not to encrypt.

I hadn't considered this attack vector earlier, but now that it's in the open, it's obvious that there is no possible way to create a walled garden suited to this, as there will always have to be gates available.

Let's not overlook that an additional attack vector would be to pause scheduling to the VM, allocate a new virtual page, inject it into the SLAT marked as clear text, then push code into that page, and find a means to trigger it. I would recommend through the VM network driver for example.

There's that attack vector too.... it should be possible to exploit the VM virtual NIC driver. VMXNET3 is a famously bad driver. After doing a code audit of VMware's Linux kernel drivers, I transitioned away from VMware because there were so many completely obvious security holes that I couldn't in good faith run my servers on the platform. There was that, and the $800,000 in licenses I was paying for it... which everyone else just gives away for free now.

So, the real trick would be to inject a VIB on VMware which would allow code injection through VMXNET3, or better yet the video driver, as there's a wide-open window to inject shaders into OpenGL or DirectX, which is almost certainly being run as the MesaGL software rasterizer or WARP.

This would be perfect... create a clear-text page, then trigger a window size change to force a resolution change. Provide the clear-text page as the frame buffer to the guest... and voila, there's a clear path to start uploading code for graphics rendering. This will likely not work well with NVidia Grid, but there are like 5 people in the world using that.

haha... this article was great.... now that I know that it counts as an attack if you attack the guest from the host, it opens an endless barrel of worms.

I need to update my CV to say "Security Researcher" and hack some VIBs together. It's not even a challenge.

IPv6 growth is slowing and no one knows why. Let's see if El Reg can address what's going on

CheesyTheClown Silver badge

Lots of stuff going on here

I've been running IPv6 almost exclusively for a decade at home. I've been running IPv6 at work for about 5 years as well.

Let's assess a few of the real reasons for IPv6 not happening.

Security :

With IPv4, you get NAT which is like a firewall but accidentally. It's a collateral firewall :) The idea is that you can't receive incoming traffic unless it's in response to an initial outgoing packet which creates the translation. As such, IPv4 and NAT are generally a poor man's security solution which is amazingly effective. Of course opening ports through PAT can mess that up, but most people who do this generally don't have a real problem making this happen. With modern UPnP solutions to allow applications to open ports as needed at the router, it's even a little better. With Windows Firewall or the equivalent, it's quite safe to be on IPv4.

IPv6, by contrast, makes every single device addressable. This means that inbound traffic is free to come as it pleases... leaving endpoint security entirely to the user's PC, which more often than not is vulnerable to attack. IPv6 can be made a little more secure using things like reflexive ACLs or a good zone-based firewalling solution, but with those options enabled, many of the so-called benefits of one IP per device dissolve.

No need for public addresses:

It's really a very small audience who needs public IP addresses. In the 1990s we had massive amounts of software written to use TCP as its base protocol and to target point-to-point communication requiring direct addressing. This is 2018: almost every application registers against a cloud-based service through some REST API for presence. When two endpoints need to speak directly with one another, the server will communicate the desired source and destination addresses and ports to each party, and the clients will send initial packets to the given destinations from the specified sources to force the creation of a translation at the NAT device. Unless the two hosts are on the same ISP with the same CG-NAT device serving them both, this should work flawlessly. Otherwise, a sequence of different addresses will need to be tried to find the right combination to achieve firewall traversal.
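The punching step itself is tiny, roughly this (a Python sketch; the peer's public address is assumed to have arrived over that REST/presence channel):

import socket

def punch(local_port, peer_addr):
    # One outbound datagram creates the NAT translation; once the peer
    # does the same towards us, traffic can flow in both directions.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", local_port))
    s.sendto(b"punch", peer_addr)
    return s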

In short, we no longer have a real dependency on IPv6 to provide public accessibility.

Network Load Balancers

20 years ago, only the most massive companies deployed load balancers. Certainly less than 1 in 100 would have hardware accelerated load balancers capable of processing layer-7 data and almost certainly none of them could accelerate SSL.

These days, there are multiple solutions to this problem. As such, a cloud service like Azure, Google Cloud or Amazon can serve hundreds of millions of websites from a few IP addresses located around the world.

File transfer services

No one copies files directly from one computer to another anymore. We don't set up shares and copy. We copy to a server and back down again, or use sneakernet with large USB thumb drives. With DropBox, OneDrive, Box, etc. in the mix, the largest files on our hard drives are cloud-hosted anyway. So if we lose a copy, we just download it again.

I can go on... but we simply don't need IPv6 anymore. The only reason we're running out of IP addresses is hoarding. I know of more than a few original Class B networks which have 10 or fewer addresses in legitimate use. People are hoarding addresses because they are worth A LOT of money. One guy I know is trying to sell a Class B to a big CDN and is asking $2 million, and it's probably worth it at today's rates.

IPv6 is about features. It's a great protocol and I love it. But let's be honest, I'll be dead long before IPv4 has met its end.

Microsoft returns to Valley of Death? Cheap Surface threatens the hardware show

CheesyTheClown Silver badge

Build said Windows Store is temporary

Windows Store is mandatory at first so that users download the appropriate installers.

But in the future, MSIX should cover direct distribution through alternative channels. I think they just want to be able to gain meaningful telemetry on ARM products before unleashing the beast.

That said, Windows Store has improved... I've been using it far more often the past few months. I'm not sure what they did, but it seems less covered in crapware, and it looks like someone is actually monitoring it now.
