* Posts by CheesyTheClown

701 posts • joined 3 Jul 2009


Don't mean to alarm you, but Boeing has built an unmanned fighter jet called 'Loyal Wingman'

CheesyTheClown

If they deliver

Let's be honest... Boeing isn't exactly well known for delivering anything in "maybe 12 months". As soon as they do a half-assed demo, Boeing will claim to be out of money, and it will end as a way late, way over budget, never delivered product.

In the meantime, any country that the plane would be useful against will focus on much smaller, much cheaper, autonomous drones... because they won't have the same stupid tender process as western governments do.

NAND it feels so good to be a gangsta: Only Intel flash revenues on the rise after brutal quarter

CheesyTheClown

The almighty dollar!

Thanks to the strong dollar, the majority of the world can't afford to pay in dollars what the market is demanding. Sure, we ship more bits, but if you want to sell them at all, you have to accept that you can't negotiate in dollars. They're too damn expensive. So, of course the revenue will be lower. You have to ship the same product for fewer dollars if you want to ship at all.

Oh, there's also the issue that people are finally figuring out that enterprise SSD doesn't really pay off. You just need to stop using SANs and instead use proper scale-out file systems.

Linus Torvalds pulls pin, tosses in grenade: x86 won, forget about Arm in server CPUs, says Linux kernel supremo

CheesyTheClown

There has been progress

I do almost all my ARM development on Raspberry Pi. This is a bit of a disaster.

First of all, the Pi 3B+ is not a reliable development platform. I've tried Banana Pi and others as well, but only Raspberry Pi has a maintained Linux distro.

The Linux vendors (especially Red Hat) refuse to support ARM for development on any widely available SBC. Even though the Raspberry Pi is possibly the best-selling SBC ever (except maybe Arduino), they don't invest in building a meaningful development platform on the device.

Cloud platforms are a waste because... well, they’re in the cloud.

Until ARM takes developers seriously, it will remain a second-class citizen. At Microsoft Build 2018, there were booths demonstrating Qualcomm ARM-based laptops. They weren't available for sale, and they weren't even attempting to seed them. As a result, 5,000 developers with budgets to spend left without even trying them.

This was probably the biggest failure I’ve ever seen by a company hoping to create a new market. They passed up the chance to get their product in front of massive numbers of developers who would make software that would make them look good.

Now, thanks to no real support from ARM, Qualcomm, Red Hat, and others, I've made all ARM development an afterthought.

Surface Studio 2: The Vulture rakes a talon over Microsoft's latest box of desktop delight

CheesyTheClown

$100 a month? Not a bad ROI

If you consider that this machine will last a minimum of 3 years, $3600 is pretty cheap actually. It's a nice looking machine, and because of its appearance, the user will be happy to hang onto it a little longer than a normal machine. I can easily see this machine lasting 5 years, which would make it REALLY cheap.

When you're thinking in terms of return on investment, if you can get a machine which will meet the needs of the user for around $100 a month, it's a bargain. This is why I bought a Surface Book 2 15" with little hesitation. The Office, Adobe and Visual Studio Subscriptions cost substantially more per month than the laptop.
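Rough cost-per-month numbers, if anyone wants to sanity check the claim. The $3,600 price and the 3- and 5-year lifetimes are the assumptions from this comment, nothing official:

```python
# Rough cost-per-month for the machine discussed above.
# The $3,600 price and the 3- and 5-year lifetimes are assumptions from
# this comment, not quotes from Microsoft.
price = 3600.0

for years in (3, 5):
    months = years * 12
    print(f"{years} years of service -> ${price / months:,.0f}/month")

# 3 years of service -> $100/month
# 5 years of service -> $60/month
```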

I'm considering this machine, but I have to be honest, I'd like to see a modular base. Meaning, take this precise design and make the base something that could slide apart into two pieces.

The reason for this is actually service related. This is a heavy computer. It has to be to support the screen when it's being used as a tablet. 80% of the problems which will occur with this PC will occur in the base. When it comes to servicing these machines, they risk easy damage by being moved around. This is not an IT guy's PC; it's something pretty. I'd like to simply slide a latch, slide the PC part of the system off, and bring it in for service.

Upgradability would be nice using the same system as well. But I'm still waiting for Microsoft to say, "Hey, bought a 15-inch Surface Book 2? We have an upgraded keyboard and GPU to sell you."

CheesyTheClown

Re: Hmmmmm!

I worked in post-production for a while. We were almost exclusively a Mac shop at the time, but we did most of our rendering on the workstation. Even more so when people began using laptops for post.

The earlier comment that the hardware has far outpaced the software is true. Sure, there are some rare exceptions. And if you're working on feature-length productions rather than a 45-second commercial spot at 2K max resolution (meaning 1125 frames at 2K), you'll need substantially more. But a GTX 1060 or GTX 1070 is WAY MORE than enough to manage rendering most video using current-generation Adobe tools. Even 3D rendering with ray tracing will work OK. Remember, you don't ray trace while editing (though we might get closer now with RTX 2060+ cards). Instead, we render, and even then with the settings turned way down. Ray tracing on a single workstation can generally run overnight and be ready in the morning. If it's a rush, cloud-based rendering is becoming more popular.

This machine should last 5-7 years without a problem. Most of the guys I know who still have jobs in TV (there is way too much supply and simply way too little demand for TV people) generally run 5-7 year old systems. Or more accurately, they wait that long before considering an upgrade.

UC Berkeley reacts to 'uni Huawei ban' reports: We unplugged, like, one thing no one cares about

CheesyTheClown

Re: BT Infinity uses Huawei and no one seems to care

Telenor Global Services runs a Tier-1 LTE service provider on Huawei which most western governments depend on for secure communication... and Huawei has administrative credentials for all the devices since they also have the operations agreements for the hardware.

None of this is classified information if you can read Norwegian.

CheesyTheClown

Re: UC Berkeley Stazi

I have no idea what has me smiling more

1) The comment itself... it was lovely

2) The use of the word superfluous in daily conversation.

3) The fact that you could spell superfluous that way and it was still recognizable.

I can go on... I'm practically pissing my pants in happiness at this comment and the AC's correction of your spelling of wordz :)

CheesyTheClown

Re: RE: Chunky Munky

Congratulations, you won the $52 gazillion jackpot... you're 100000000% correct!

In case you were wondering, there is no $52 gazillion jackpot. In fact, there's no such thing as a gazillion that I'm aware of outside of literature.

Also, 100000000% is meaningless; its only value is to say "you're 100% correct and I wish I could give you bonus points if it were possible."

The poster above was making the point that Huawei sold A LOT of smartphones... in fact, so many that the number is ridiculously big, and as a result, it shouldn't make too big an impact on their revenues if UC Berkeley turns off a box.

SD-WAN admin? Your number came up in Cisco's latest bug list

CheesyTheClown

Starting points for security researchers

CDP on IOS-XE: remote code execution

I reported this but got blown off 18 months ago. Using any XE image 3.12 or later, especially on switches, single-step the CDP module to find an overflow. In some versions, the kernel segfaults (overflow) simply from parsing native VLAN changes from the remote end. CSR1000v can be used to reproduce. The error is probably in misuse of sk_buff and Alan Cox's psnap module.

SAML on ISE Man in the middle

Reported this two years ago, got blown off. ISE's SAML implementation incorrectly reports SAML versioning in its schema when identifying the SP. This is due to hard-coded values and ignoring settings from the IdP. It seems that the SAML service also ignores updated authentication tokens. Turn on verbose logging and intercept packets in transit between ISE and AD FS... alter packets and fake signatures to reproduce.

ISE Web Portal

Just message me for a really long list. I'm typing with a mouse because my keyboard batteries died.

Cisco only cares about security after it's a public CVE

Dear humans, We thought it was time we looked through YOUR source code. We found a mystery ancestor. Signed, the computers

CheesyTheClown

Re: Many mysteries

Oddly, while you and I generally don't get along all that well elsewhere, I'm forced to agree with you here.

I am actively working on getting admitted to the master's program at the local university, to contribute towards automating medical general practitioners out of a job if I can. There are many issues associated with illnesses, but stage of detection is generally the number one factor deciding whether an illness is treatable or not.

I make a comment often that the only dust you'll ever see in a doctor's office is the dust on the top of his/her books.

It's a matter of exponential growth more than exponential decay, though.

As you get older and, in theory, gain knowledge and wisdom, the time required to maintain the health of that information grows. And if you constantly update your knowledge, your overall understanding of the field of interest will increase alongside it.

Primary and secondary school provide an excellent opportunity to give most people a glimpse of what's out there. In fact, I've been teaching adults different topics in engineering, science and math for years. What I've learned is that, with rare exceptions, almost every person I communicate with completed the education they'll actually draw from in life by the end of the 4th or 5th grade. This doesn't mean that they are stupid; it simply means that they've had no practical use for anything more advanced in their given careers. This is because most people simply don't need it.

High school allows people to see all the amazing jobs that are out there and available to them. I learned a great deal in high school before dropping out and starting at the university instead.

What's important, and we see it all the time in modern society, is that people are made smarter at a young age and it hurts them as they get older, since the teachers don't teach the cost of maintaining that information.

The XX and XY chromosome thing is a real problem. It wasn't long ago that we told school kids that the mitochondrion was just the powerhouse of the cell. When I was a kid, it was common knowledge that we had never successfully managed to "crack the shell" of the mitochondrion and look inside. Now science books are being updated to teach us that part of our genetic code resides in mitochondrial DNA as well.

Cell biology is a topic which all children learn at some level or another, but most people can't name a single part of the cell other than the nucleus, and they constantly make announcements like "You're going bald because of your mother's father, not me", as if balding were a single gene. By that logic, boys would receive 0% of their genetics from daddy.

We generally teach kids more than enough to make them dangerous. We don't place expiration dates on the information.

P.S. - nice use of nigh, it's one of those words I'm envious of when I see it but never seem to use when the opportunity presents itself.

DNAaaahahaha: Twins' 23andMe, Ancestry, etc genetic tests vary wildly, surprising no one

CheesyTheClown

Re: Boffins or Buffoons?

I've followed up on more articles on this now. And while you have some valid points, there are still many questions left open, and yes, they "dumbed the shit down" for news and TV substantially. Multiple times during the interview the "boffin" damn near back-pedalled in order to keep the words small enough for the journalist to understand.

The people who were interviewed were clearly "boffins" in the field. This means they were "experts" on one thing and not on another, but the people who interviewed them didn't think to talk to someone who could interpret the data more clearly.

A boffin in my experience is someone that dumb people call smart without understanding what that means so they can tell other dumb people that they should trust this data because it comes from a genuine boffin. For example, a guy in a blue shirt at an electronics store with a logo that says "Geek" is clearly a boffin, he has a shirt to prove it. Imagine how impressed those same people are by the geniuses at the genius bar.

I believe the "boffins" in question here (after reading their credentials) are specialists in topics relating to genetics but lack interest or focus on more commercialized genetics (meaning mail order) and genealogy. They were questioned, though, as if this were a particular area of expertise for them. This means that although they might only be able to speculate in areas where they lack the data set to speak authoritatively, they answered anyway, since they didn't realize how widely their findings would be published in the mainstream media.

The numbers published leave most of my questions intact.

- How does shipping impact it?

- Were the results actually contradictory?

- What were the results of two samples from the same twin from the same lab?

I can go on.... the point is, the people interviewed were geneticists, and I assume pretty damn good ones. They are experts in sequencing and applying their findings to medical applications. I'm sure there are even people with an interest in forensics on the team. But this team doesn't seem to have a whole lot of background in historical migratory patterns. They also don't appear to be accounting for shipping methods, which means they are probably incredibly brilliant people who minimize contamination when handling samples, and they're being asked to evaluate how a sample stuck into a non-sterile device and then shipped through random methods would fare.

You're right, I should have read more, and now I have. But before looking like an idiot and simply jumping on "boffins at Yale" as an excuse to grandstand, I would still need many questions answered.

Cool... they are identical and proven.

- What does that mean in context of a pair of 30 year old identical twins who clearly show substantial variations in development?

- What is the margin of error when taking two samples from the same twin vs one sample from each twin?

- What is the result when ensuring the labs receive non contaminated samples?

Additionally

- When the results are interpreted by a historical anthropologist, accounting for migratory patterns and possible differences introduced by how generational data is interpreted, were the results actually different or was this a rounding error?

But... I guess since you and I have now read the same articles and seen the same interviews, and you seem to be happy while I still want to reduce a great number of variables... it could be that I'm an idiot, or it could be that I have high standards for what I consider to be meaningful data. If I had interviewed the "boffins", I would have asked for tolerances and certainties for all measurements.

You can tell the scientists involved here intentionally kept the words small and simple for people like you, to avoid confusion like this. They even came straight out and said things like "identical". No scientific evidence has ever been admissible without defining the precision, and 0% deviation is not a possible measurement. 10^-100 percent is, but 0% isn't. Any scientist presenting information without that is trying to soothe the fools. And the fools will quote him or her without asking additional "boffins" the additional questions needed to pin down the tolerances and the margin of error of their findings.

The AC was crap... your response was arrogant and uninformed crap. My post was arrogant, uninformed crap, and the story was misleading crap. The difference is, I'm claiming we're all full of crap, while you're choosing which line of crap you'll present as fact to degrade people.

Tell me: if the article had said buffoons instead of boffins, would that margin of error impact how you'd use the "evidence" to show you're superior enough to call people idiots?

So, are you prepared to step up and admit you are full of shit too or will you continue to present yourself as the "boffin" to we lowly idiots?

CheesyTheClown

Boffins or Buffoons?

They obviously are different :)

Journalists generally have absolutely no respect for science.

You're quoting "Boffins at Yale university, having studied the women's raw DNA data, said all the numbers should have been dead on."

Let's start by saying that the definition of a pair of identical twins is that they were both hatched from the same egg, or more accurately, that the egg split in half after being inseminated and produced two separate masses which eventually developed into two individual humans. I have not looked it up, and I'm already guilty of one of the same critical mistakes made by the journalist, which is that I have not verified my facts. But this is how I understand it.

If I am correct, then cellular reproduction through mitosis should have split the nucleotides of the original cell precisely. This means that some percentage of the genetic pairs to be reproduced survived intact. Let me clarify: from what I can fathom, simple mathematical entropy dictates that there must be an error in every cellular reproduction. It is not mathematically possible for two cells to be 100% alike. 99.9999% is realistic, but not 100%. This is a mandatory aspect of science. To make this particularly clear, refer to Walter Lewin's initial lecture from Physics 8.01 in MIT OpenCourseWare on YouTube, where he explains how to measure in science.
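To put a rough number on that claim, a back-of-the-envelope sketch. The error rate and genome size below are commonly cited ballpark figures I'm assuming, not anything from the article:

```python
# Back-of-the-envelope copy-error estimate for one cell division.
# Assumed ballpark figures, not data from the article: ~6e9 bases copied per
# diploid human cell division, and an error rate of roughly 1e-9 to 1e-10 per
# base after proofreading and mismatch repair.
bases_copied = 6e9

for error_rate in (1e-9, 1e-10):
    expected_errors = bases_copied * error_rate
    print(f"error rate {error_rate:.0e}: ~{expected_errors:.1f} errors per division")

# Either way the number is never exactly zero, which is the point:
# 99.9999...% alike, but not 100%.
```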

So, we're presented with your quote "Boffins at Yale...", which leads me to ask:

- What is a boffin?

- What is the measure of a boffin?

- Who is qualified to measure whether this individual is a boffin?

- What is the track record of accuracy by the boffin?

- Was the boffin a scientist? If so what field?

- Was the boffin a student? If so, what level and field?

- Was the boffin an administrator? Were they a scientist before? When did they last practice? How well did their research hold up when peer reviewed? Did they leave science because they weren't very good at it and now they wear a suit?

And "having studied the women's raw DNA data"

- How contaminated was the data (0% is not possible)

- How was it studied?

- Were all 3 billion base pairs sequenced and compared? Or was it simply a selection?

- Did they study just the saliva as the companies did or was it a blood sample?

- Does saliva increase contamination?

- Was the DNA sample taken at Yale?

- Was the DNA sample shipped?

- If it was shipped, was it shipped the same way?

- Could air freight cause premature decay or even mutation, etc...?

And "said all the numbers should have been dead on"

- Was the boffin really a boffin? (see above)

- What does dead on mean? What is the percentage of error?

- What numbers are we talking about?

- Should the results have been identical between the twins?

- Could two samples from the same twin produce the same discrepancies?

- Could two separate analyses of the same sample produce the same discrepancies?

- What happens when one person spits into a tube and then the spit is separated into two tubes?

- Did the boffin say "all the numbers should have been dead on" or did they provide a meaningful figure?

- Is this the journalist's interpretation of what the boffin said?

- Are these the words of the original journalist, or was it rewritten to make it sound more British? I've never heard any self-respecting person use the term boffin if they actually had a clue to begin with. Same for the word expert.

- Did the "boffin" dumb down the results for someone who is obviously oblivious?

Overall, does this comment translate to :

"Qualified scientists with proven track records specializing in the field of human genetic research as it relates to ancestry evaluated 98% of the sample from each of the two twins, verified that they are in fact identical and that there should be no more than 5% margin of error when comparing the results of genetic studies between the two girls."

I'm not a scientist. I'm barely a high school graduate. But before I would dispute the accuracy of what the AC said by defending it by stating "Boffins at Yale university, having studied the women's raw DNA data, said all the numbers should have been dead on.", I would absolutely start with attempting to answer those questions above.

At this time, while I believe strongly that these twins are in fact identical, and I believe they were verified at some point in their lives (whether by a "boffin at Yale" or elsewhere) to have been conceived from the same egg, I would not offer it as evidence without further research or conclusive proof. That would be an insult to science. My belief is irrelevant unless this were some high school sociological report for a political science class. And even then, that's a stretch.

Let's address ancestry as well.

I read the results, so let's take an excerpt as you did.

- 23andMe reckoned the twins are about 40 per cent Italian, and 25 per cent Eastern European;

- AncestryDNA said they are about 40 per cent Russia or Eastern European, and 30 per cent Italian;

- MyHeritageDNA concluded they are about 60 per cent Balkan, and 20 per cent Greek.

I'm no expert on ancestry, but I have questions.

- What does it mean to be Italian, Eastern European, Russian, etc.?

- Within how many generations would they be counting?

- Would the 40% Italian, if it refers to the 1800s, very likely mean Balkan and Greek in 200 AD?

- Is Russian of Eastern European, East Asian or Central Asian descent?

- Is Balkan Eastern European or Russian?

What I'm reading here is that all three companies were in 100% agreement. If anything, for an imperfect science, it's impressive how perfectly they agree.

As another item

- Two of the tests reported that the twins had no Middle Eastern ancestry, while the three others did, with FamilyTreeDNA saying 13 per cent of their sample matched with the region.

I'm pretty sure that if we believe modern science, everyone on earth comes from the Middle East, and before that, Africa. So unless the twins come from another planet, they have Middle Eastern descent. The question is, how many generations back? Also, what counts as Middle Eastern descent? If we refer to religion, only a few thousand years ago Jerusalem was at war with the Seleucid empire, which almost definitely spread seeds from the Middle East to the Mediterranean, or maybe the Mediterranean seed was spread widely enough to influence what is considered the Middle Eastern bloodline today.

Again, I don't see the data conflicting, I would need far more information to sound even moderately intelligent.

And I'll attack one more

- On top of this, each test couldn't quite agree on the percentages between the sisters, which is odd because the twins share a single genetic profile.

This is absolutely 10000% not true. They were born from the same egg. I can't find her birth date, but I'd put her at about 33 years old, though that may simply be because she dresses like an old lady. Either way, let's round to 30 years old.

Unless she and her sister have been in the womb together until last week, which I doubt, they have each been through an entirely different life, causing countless changes to their genetic code relative to one another. Their genes can't possibly be that similar anymore. Almost certainly less than 99% similar now. That's at least 30 million differences from one another, given that the human genome consists of about 3 billion nucleotides, or base pairs.

Consider that simply walking in the sun causes genetic mutation. Pressures from drinking water can cause genetic mutation (I read a peer reviewed paper on this, but can't cite it).

So, please don't bash the AC; he's as full of shit as I am... the only thing I got out of this article is that some girl who has a twin sister has now made an international impact with her ignorance of science, in an effort to make a headline to boost the ratings of her TV show, which should have served the purpose of informing people of facts.

I would like to see some real research on this topic from people who are far smarter than me. At this time, I have many questions and wouldn't even know where to start answering them.

IBM to kill off Watson... Workspace from end of February

CheesyTheClown

Maybe if someone knew it was there?

Ok, so I work for a company which is A LOT older than IBM, has one tenth the head count, but is 1/4 the size in dollars... and while IBM is a pretty interesting company, I wonder if there's something failing when IBM isn't sitting in my office begging me to buy their stuff.

My company has spending to do for our customers which could be worth several points on their share value if they were to make an effort. But while we give Cisco about $2 billion a year, I don't think IBM even tries to win our love. And to be honest, with the project I'm working on, if I had even known that Watson Workspace was there, I might have considered it as a solution.

IBM is failing because Ginni is targeting only C-level business, and she's an acquisitions and mergers monster. She's great at that. Every time she loses an important customer, she buys the company she lost them to. But here's the thing: working for the world's second largest telecom provider, within walking distance of the CEO's office, I couldn't tell you how I would even start a conversation with IBM. I bet they have a pile of stuff I could find interesting that could save me a lot of time and work in making deliveries to my customers. I'd even consider buying a mainframe and starting new 20-year projects on it. But I have no idea where I would get the people, expertise or even contacts required to even talk with IBM.

I guess IBM only wants to sell to people who know what they have.

Oracle boss's Brexit Britain trip shutdown due to US government shutdown

CheesyTheClown

Re: WTAF?

I was wondering about this as well. I don't care if you're Donald Trump or even someone important, try getting through Heathrow without a passport. You can't even transfer planes in the same terminal without being treated like a criminal at Heathrow. I've traveled business or first class through Heathrow many times and I just finished moving all my frequent flier miles (200,000+ executive club miles) to One World because I refuse to travel through the UK anymore since the security got stupid.

So, the author is awful. This was an expired passport.

I'm an American citizen, and I've had passports replaced in 12 hours or less by FedExing the forms through Boston and having them walked through by an agent. It's pretty simple actually.

But during a shutdown, I'd imagine that this is not possible. That said, it's an Oracle thing, it's not really important that people like Hurd show up... even Larry does things like ditching his keynote at Oracle conferences to play with his boats.

Insiders! The good news: Windows 10 Sandbox is here for testing. Bad news: Microsoft has already broken it

CheesyTheClown

Re: Windows sandbox

Ok... first of all, Sandbox is an insider feature which means that things will and DO go wrong. It's not meant to be reliable software, it's meant to be bleeding edge. Think of it as the alpha and beta versions of times past.

Second of all, security fixes from release builds generally come into sandbox builds. The security fixes are tested against release and if they're tested against insider as well, it's a bonus.

Internet Explorer is an application built on top of many Windows APIs including for example the URL engine for things like fetching and caching web media. It's like libcurl. Just like libcurl, wget, etc... there are always updates and security patches being made to it. So, when making updates and security fixes to the web browser, if fixes need to be made to the underlying operating system as well, they are.

That said, I've had fixes to IE break my code many times. This is ok. I get a more secure and often higher performance platform to run my code on. It's worth a patch or two if it keeps my users safe.

As for what the sandbox does, I'd imagine that the same APIs which IE uses to sandbox the file system from web apps requesting access to system resources (probably via Javascript) are used to provide file system access for example. If I had to guess, it probably is tied to the file linking mechanism.

London Gatwick Airport reopens but drone chaos perps still not found

CheesyTheClown

Re: How hard is the approximate localization of a 2.4GHz sender operating in or near an airport?

Let’s assume for the moment that we were to plan the perfect crime here. This is a fun game.

1) Communications

Don't use ISM; instead use LTE and VPNs. It's pretty cheap and easy to buy SIM cards and activate them in places like Romania without a postal address or even ID. Buy cards that work throughout Europe. They're cheap enough and far more effective for ranged communication than 2.4GHz. Additionally, jamming is a bigger problem, as you can't jam telephone signals at an airport during a crisis. In a place like England, where people are dumb enough to vote Brexit and have fights in bars over children kicking balls around, it would cause riots.

2) Use 3D printed drones. Don’t buy commercial, they’re too expensive and too easy to track. Just download any of a hundred tested designs and order the parts from a hundred different Chinese shops.

3) Don’t design for automatic landing.

4) Add solar cells and plan ahead. Don’t try planting them yourself, instead, launch them 20 miles away from different sites and have them fly closer each day at low altitude until they are close.

5) Don't depend on remote control. Write a program which uses GPS and AGPS to put on an "animatronic performance". Then they can run for hours without fear of interference.

6) Stream everything to Twitch or something else anonymously. Send streams to Indonesian web sites or something like that instead.

I have to go shopping... but it would be fun to continue ... share your thoughts.

It's the wobbly Microsoft service sweepstake! If you have 'Teams', you've won a lifetime Slack sub

CheesyTheClown

Re: Of course, given recent statements from the rumor mill...

What do you mean, poorly on Linux? I'm not baiting; I'm currently investing heavily in PowerShell on Linux and would like a heads-up.

Bordeaux-no! Wine guzzling at UK.gov events rises 20%

CheesyTheClown

This is true

I've met polite people from France.

I've actually met pretty girls from England

I've met Finnish people who understand sarcasm

I've met Americans who aren't entirely binary in every one of their beliefs

I haven't bothered with American wines in the past 20 years, though I know they have improved.

I generally drink Spanish wine (Marqués de Cáceres, Faustino I) as they are compatible with my food preferences.

I have a fridge full of Dom Perignon I have been collecting for 20 years to serve at my children's weddings.

The one thing I can be sure of, though... booze is simply too strong these days, and it's ruining it across all international borders. When I read comments from the UK about people who work in places where booze is permitted or not, I'm generally shocked. A glass of wine by 1950s and earlier standards would have been something like 3-5% alcohol, and it may have been watered down as well. These days, at 13% and higher, the person drinking it is probably useless for a while afterwards.

Also, a glass of wine in the 1950s was considerably smaller than it is today. Having a glass of wine with lunch really didn't provide enough alcohol to matter. Today, however, people are basically getting buzzed at lunch.
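To put rough numbers on that, a quick sketch. The serving sizes and strengths are assumptions for illustration, not historical data:

```python
# Grams of pure alcohol per glass: volume_ml * (abv / 100) * 0.789 (density of
# ethanol in g/ml). The servings below are illustrative assumptions only.
def grams_of_alcohol(volume_ml: float, abv_percent: float) -> float:
    return volume_ml * (abv_percent / 100.0) * 0.789

print(grams_of_alcohol(125, 4))    # small glass of weak 4% table wine  -> ~3.9 g
print(grams_of_alcohol(250, 13))   # large modern glass of 13% wine     -> ~25.7 g
```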

I would love to see a return to when "drinking wine" or "table wine" was a good idea. Just enough alcohol to make the flavor work, but not enough to get blasted. I've had terrible experiences with modern wines. They all taste like alcohol. It's almost as if we're judging the quality of a drink based on how well we believe it will mess us up. I wonder if the European nations still remember how to make wine properly, and whether they could actually create wines that earned their merits on flavor as opposed to toxicity.

Well now you node: They're not known for speed, but Ceph storage systems can fly

CheesyTheClown

Re: 6ms+ w NVMe

I was thinking the same thing. But when working with asynchronous writes, it’s actually not an issue. The real issue is how many writes can be queued. If you look at most of the block based storage systems (NetApp for example) they all have insanely low write latency, but their scalability is horrifying. I would never consider Ceph for block storage since that’s just plain stupid. Block storage is dead and only for VM losers who insist on having no clue what is actually using the storage.

I would have been far more interested in seeing database performance tests running on the storage cluster. I also think that things like erasure coding are just a terrible idea in general. File or record replication is the only sensible solution for modern storage.

A major issue which most people ignore with modern storage, and which is why block storage is just plain stupid, is transaction management on power loss. Write times tend to take a really long time when writes are entirely transactional. NVMe as a fabric protocol is a really, really bad idea because it removes any intelligence from the write process.

The main problem with write latency for block storage on a system like Ceph is that it’s basically reliably storing blocks as files. This has a really high cost. It’s a great design, but again, block storage just is so wrong on so many levels that I wish they would just kill it off.

So if Micron wants to impress me, I'd much rather see a cluster of much, much smaller nodes running something like MongoDB or Couchbase. A great test would be performance across a cluster of LattePanda Alpha nodes with a single Micron SSD each. Use gigabit network switches and enable QoS and multicast. I suspect they would see quadruple the performance they are publishing here for substantially less money.
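Something like this is the shape of test I have in mind, run per node and then scaled out. It assumes the pymongo driver and a mongod reachable on localhost; the URI, document shape and batch size are arbitrary choices:

```python
# Minimal write-throughput probe against one MongoDB node. Assumes pymongo and
# a mongod on localhost:27017; adjust the URI for a real cluster.
import time
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017").benchmark.docs
coll.drop()

docs = [{"seq": i, "payload": "x" * 512} for i in range(100_000)]

start = time.time()
coll.insert_many(docs, ordered=False)
elapsed = time.time() - start

print(f"{len(docs) / elapsed:,.0f} docs/sec over {elapsed:.2f}s")
```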

Better yet, how about a similar design providing high performance object storage for photographs? When managing map/reduce cluster storage, add hot and cold tiers as well; it would be dozens of times faster per transaction.

This is a design that uses new tools to solve an old problem which no one should be wasting more money on. Big servers are soooooo 2015.

Oi! Not encrypting RPC traffic? IETF bods would like to change that

CheesyTheClown

Re: stunnel, wireguard

TLS1.3 is a major change. I'd imagine that with new protocols, we'd use TLS and DTLS 1.3 as opposed to earlier versions.

Also consider that the performance issues with earlier versions of TLS have been mostly handshake related. That's a short-term problem for NFS, since NFS connections are long-lived.

There are some real issues with NFSv4 which make it unsuitable for environments which require distance. It's not nearly as terrible as using a Fibre Channel technology, but it can be pretty bad all the same. Most people don't properly prepare their networks for NFSv4. NFS loses so much performance that it's barely usable if the MTU on the connection is less than 8500 bytes.
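A quick Linux-only way to sanity check that on a client; the 8500-byte threshold is just the rule of thumb from this comment:

```python
# Check that the interfaces carrying NFS traffic are jumbo-framed, by reading
# /sys/class/net (Linux only). 8500 bytes is the rule of thumb from above.
import os

MIN_MTU = 8500

for iface in sorted(os.listdir("/sys/class/net")):
    with open(f"/sys/class/net/{iface}/mtu") as f:
        mtu = int(f.read().strip())
    verdict = "ok" if mtu >= MIN_MTU else "too small for decent NFSv4"
    print(f"{iface:12} mtu={mtu:<5} {verdict}")
```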

NFS also has a ridiculously high retry overhead.

NFS should NEVER EVER EVER EVER be run over TCP... if you ever think that running NFS over TCP is a good idea, stop everything you're doing and read the RFC, which explains that TCP support is only there for interoperability. Unless you're using some REALLY REALLY bad software like VMware, which seems wholly intent on having poor NFS support (no pNFS support for how long after pNFS came out?), you should run NFS as UDP only.

There are many reasons for this... the most obvious being that TCP is a truly horrible protocol. It's a quick and dirty solution for programmers who don't want to learn how protocols work or understand anything about state machines. UDP is for people who have real work to do. QUIC is even better, but that's a little while off.

I would recommend against using WireGuard.

- It's doing in kernel what should be done in user space

- It's two letter variable name hell

- It directly modifies sk_buff instead of using helper functions, which increases the risk of security holes being introduced over time as the kernel changes

- key exchange is extremely limited

I won't say I see any real security holes in it, and I will admit it's some of the most cleanly written kernel module code I've seen in a long time. But there's a LOT of complexity in there, and it's running in absolutely privileged kernel mode. It looks like a great place to attack a server. One minor unnoticed change to the kernel tree, specifically to sk_buff, and this thing is a welcome mat for hackers.

FPGAs? Sure, them too. Liqid pours chips over composable computing systems

CheesyTheClown

Counter-intuitive

I'm somewhat proficient in VHDL and I've done a bit of functional programming as well. The issue is that when something generally is a series of instructions, it's often uncomfortable and simply backwards to describe things in terms of state.

I've told people before that a great starting point for learning to do VHDL is to write a parser using a language grammar tool. It's one of the simplest forms of functional programming to learn.

Another thing to realize is that the backwards fashion in which most HDLs are written makes things extra difficult, since even a "Hello World" is a nightmare: there's a LOT of setup to do to produce even a basic synthesized entity. Hell, for that matter, simply the setup work for an entity itself is intimidating if you don't already understand implicitly what it means.

There's been a lot of work into things like System-C and System-Verilog to make this all a little easier, but it's still a HUGE leap.

Now, OpenCL has proven to be a great solution for a lot of people. While the code generated by OpenCL for the purpose is generally horrible at best, it does lower the entry level a great deal for programmers.

Consider a card like the one Liqid is pushing.

You need to take a data set, load it into memory in a way that makes it available to the FPGA (whether internally or over the PCIe bus), then you need to write easily parallelizable code which can be provided as source to the engine which compiles it and uploads it to the card. Of course, the complexity of the compilation phase is substantially higher than uploading to a GPU, so the processing time can be very long. Then the code is loaded on the card and executed, and the resulting data needs to be transferred back to main memory.
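For anyone who hasn't seen it, the host-side shape of that workflow looks roughly like this trivial pyopencl vector add; on an FPGA card the build() step becomes the synthesis run, which is what makes the compile phase so painfully long:

```python
# The host-side shape of the OpenCL workflow described above, using pyopencl.
# On an FPGA target, Program(...).build() is the (very slow) synthesis step.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)
out = np.empty_like(a)

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)   # load data
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()                                           # compile / synthesize

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)   # execute on the device
cl.enqueue_copy(queue, out, out_buf)                   # copy the result back
```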

There are A LOT of programmers who wouldn't have the first idea where to start with this. There's always cut and paste, but it can be extremely difficult to learn to write OpenCL code whose compile (synthesis), upload and run time doesn't end up longer than just running the job on the CPU.

Then there are things like memory alignment. Programmers who understand memory alignment on x86 CPUs (and there are far fewer of those than there should be) can find themselves lost when considering that RAM within an FPGA is addressed entirely differently. Heck, RAM within the FPGA might have 5 or more entirely different access patterns. Consider that most programmers (except for people like those on the x264 project) rarely consider how their code interacts with L1, L2, L3 and L4 cache. They simply spray and pray. Processor affinity is almost never a consideration. We probably wouldn't even need most supercomputers if scientific programmers understood how to distribute their data sets across memory properly.

I've increased calculation performance on data sets more than 10,000 fold within a few hours just by aligning memory and distributing the data set so that key coefficients would always reside within L1 or worst case, L2 cache.

I've sped up code even more simply by choosing the proper fixed-size matrix multiplication function for the job. It's fascinating how many developers simply multiply a matrix against another matrix with complete disregard for how a matrix multiply is actually computed. I once saw a 50,000x performance improvement by refactoring the math of a relatively simple formula from a 3x4 to a 4x4 matrix and moving it from an arbitrary math library to a game programmer's library. The company I did it for was amazed, because they had been renting GPU time to run Matlab in the cloud, and by simply making code which could be optimized properly by the compiler... a total of Google, copy & paste, compile, link... the company saved tens of thousands of dollars.
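A toy version of the fixed-size vs. generic point, in pure Python. The gap here is nowhere near 50,000x (that came from cache layout, SIMD and dropping a general-purpose library), but the direction is the same:

```python
# Generic arbitrary-size multiply vs. a fixed 4x4 multiply over flat lists.
import timeit

def matmul_generic(a, b, n):
    # Arbitrary-size multiply over nested lists, the way a general library
    # treats every matrix.
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i][k] * b[k][j]
            out[i][j] = s
    return out

def matmul_4x4_flat(a, b):
    # Fixed 4x4 multiply over flat row-major lists: no nested-list lookups,
    # no size checks, nothing to rediscover on each call.
    return [sum(a[r * 4 + k] * b[k * 4 + c] for k in range(4))
            for r in range(4) for c in range(4)]

A = [[float(i + j) for j in range(4)] for i in range(4)]
B = [[float(i * j + 1) for j in range(4)] for i in range(4)]
Af = [x for row in A for x in row]
Bf = [x for row in B for x in row]

print("generic:", timeit.timeit(lambda: matmul_generic(A, B, 4), number=100_000))
print("fixed  :", timeit.timeit(lambda: matmul_4x4_flat(Af, Bf), number=100_000))
```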

When I see things like the latest two entries in the supercomputer Top500, all I can think is that the code running on them almost certainly could be packaged to distribute via Docker into a Kubernetes cluster, the data sets could be logically distributed for map/reduce, and instead of buying a hundred million dollars of computer or renting time on it, the same simulations could be performed for a few hundred bucks in the cloud.

Hell, if the data set were properly optimized for map/reduce instead of using some insane massive shared-memory monster, it probably would run on used servers in a rack. I bought a 128-core Cisco UCS cluster with 1.5TB of RAM for under $15,000. It doesn't even have GPUs, and for a rough comparison, when I tested using cryptocurrency mining as a POC, it was out-performing $15,000 worth of NVIDIA graphics cards... of course, the power cost was MUCH higher, but it wasn't meant to test the feasibility of crypto mining, it was just a means of testing highly optimized code on different platforms. And frankly, scrypt is a pretty good test.

I'll tell you ... FP is lovely.. if you can bend to it. F# is very nice and Haskell is pretty nice as well. Some purists will swear by LISP or Scheme, and there's the crazies in the Ericsson camp.

The issue with FP isn't whether it's good or easy. It's the same problem you'll encounter with HDLs: the code is generally written by very mathematical minds that think in terms of state, and that makes it utterly unreadable.

Another 3D printer? Oh, stop it, you're killing us. Perhaps literally: Fears over ultrafine dust

CheesyTheClown

Re: 'Give us money'

I’m not certain. I’ve been looking into charcoal filtration for the printers I share an office with. I find that SLA printing is nasty to share a room with. FDM isn’t as bad, but I sometimes wonder if I’m getting headaches from it. I currently have 4 FDM printers running pretty much 24/7 and it’s better to be safe than sorry.

Samsung claims key-value Z-SSD will be fastest flash ever

CheesyTheClown

Yes please

Just... yes please.

I've been desperately waiting for something like this. If they have a KV solution which supports replication, that would be absolutely amazing!!!

There's no 'I' in 'IMFT' – because Micron intends to buy Intel out of 3D XPoint joint venture

CheesyTheClown

Optane flopped because

RAM prices were too high, and using Optane as an acceleration tool for SSDs was too rich for most people's blood. Let's not forget that GPU prices were triple what was reasonable. There simply was no room in most people's budgets for a product that didn't give enough of a boost to justify the additional cost, as opposed to getting faster RAM or a better GPU.

That 'Surface will die in 2019' prediction is still a goer, says soothsayer

CheesyTheClown

Is there anything wrong with Windows 10?

Ok, so I'm at a loss... I have Windows 10 in front of me now. I seriously can't see anything particularly wrong with it. It's fast, it's responsive, it's stable, it generally just works. It hasn't had most of the security issues that we've had in the past and most of the modern security issues are about users messing up.

I would say pretty much the same about Mac OS X. The real shortcoming to OS X these days is that if you want to run Linux, you need a VM and Windows doesn't need it. And the Mac OS X command line is extremely limited compared to Linux.

Oslo clever clogs craft code to scan di mavens and snare dodgy staff

CheesyTheClown

Re: It's all academic

The funny thing is that Norwegian law wouldn’t allow this system to be used :)

Spoiler alert: Google's would-be iPhone killer Pixel 3 – so many leaks

CheesyTheClown

Re: Fscking notch...

And, you can’t hold the phone one handed and read shit without constantly moving your fingers.

CheesyTheClown

Re: Mistaken

I agree. Though this past year, I have started purchasing or renting films from Google Play and Windows Store. This is because Apple makes it difficult for me to even understand which account I’m paying from. Sometimes I buy a film on iTunes and it pulls from my PayPal... other times it pulls from my credit card. Google and Microsoft are easier to manage.

I buy iPhone because Apple makes one or two models a year and updates seem to come for years after they stop selling the model. That makes me feel as though there is a return on investment. Or it did. But since around the time Jobs kicked off, the iPhone has become progressively worse. In addition, my entire phone seems hellbent on trying to sell me shit. I mean seriously,

I’ve bought most of the songs I like already. I have about 2000-3000 tracks in my iTunes catalog. If I were to pay for Apple Music, I would need to listen to an average of about 15 new songs a month... every month for it to be profitable. That means I’d have to listen to 180 new songs a year to make it cheaper than buying the songs I like outright. I’m not that guy. Most of what I listen to is old. I don’t even turn the stereo in my car on. I have no interest in listening to music to simply hear noise. I don’t want Apple Music. I will never want Apple Music. Why the fuck can’t I open my music player and not be constantly attacked about buying Apple Music?
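The break-even arithmetic, with assumed prices (roughly $14.99/month for a family subscription and $0.99 per purchased track; adjust to taste):

```python
# Streaming subscription vs. buying tracks outright. Prices are assumptions
# for illustration, not quotes from Apple.
subscription_per_month = 14.99
price_per_track = 0.99

tracks_per_month = subscription_per_month / price_per_track
print(f"~{tracks_per_month:.0f} new tracks/month, "
      f"~{tracks_per_month * 12:.0f}/year just to break even")
# -> ~15 new tracks/month, ~182/year
```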

Then there’s the headphone jack. I have two laptops, an iPhone and a TV at home. Bluetooth sucks for that. Why would I ever want to spend my whole life pairing my headphones. It’s easier to just plug and unplug. Also, I depend on corded headphones to make sure that I never leave my headphones or telephone behind.

Apple is sooooooooo far from what I came to love about them. But what does it matter if I’m just someone who used to spend $7,000 a year with Apple. Now I have a Surface Book 2 and am willing to switch to Android if Google releases a high end phone with a headphone jack. I’m willing to pay $1200 for a Google branded phone (won’t buy knock offs made by companies who don’t write the OS). It should be small enough to fit in my pocket but large enough to read. It should have edges so I don’t have to move my fingers to read text... none of this curving off the edge shit. It should also be easy enough to unlock that I don’t need to look at it or pick it up to see if I want to pick it up. Thumb print is fine.

Basically, I want an iPhone 6S Plus but with Android. I have a top spec iPhone X sitting on the coffee table collecting dust. I’m back on my 6S Plus... the last good phone Apple made... but Apple apparently doesn’t run unit tests on the 6S Plus anymore.

Developer goes rogue, shoots four colleagues at ERP code maker

CheesyTheClown

An American also seems involved.

Many countries have many guns. It's a US anomaly with regards to human behavior that is causing the shootings. If you haven't ever been to America, the US is somewhat of a cesspool of hate and almost British-like superiority trips. It's a non-stop environment of toxicity. Their news networks run almost non-stop hate trips to hopefully scrape by with enough ratings and viewers.

I left America 20 years ago and each time I go back, I’m absolutely shocked at how everyone is superior to everyone else. I just met an American yesterday who in less than two minutes told me why his daughter was superior to her peers.

It's also amazing how incredible the toxicity of hate is. It's a non-stop degradation of humanity. Every newspaper, news channel, social media network, etc. is absolutely non-stop negativity.

It’s not about the guns... I think the guns are just an excuse now. I think it’s about everyone from the president downward selling superiority, hate and distrust. I’m pretty sure if you took the guns away, it would be bombs.

Spent your week box-ticking? It can't be as bad as the folk at this firm

CheesyTheClown

Cisco ISE

It sounds like Cisco ISE’s TrustSec tools.

The good news is that in the latest version, the mouse wheel works most of the time. It used to be click 5 boxes, then move to the tiny little scroll bar, and then click 5 more. Now you can click 5 and scroll using the wheel. So safely clicking 676 boxes when you have 26 groups is almost doable without too many mistakes now.
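For anyone wondering where 676 comes from: a TrustSec-style policy matrix is source group by destination group, so it grows quadratically.

```python
# Why 26 groups turns into 676 checkboxes: the policy matrix is
# source group x destination group.
for groups in (10, 26, 50):
    print(f"{groups:3} groups -> {groups * groups:5} cells in the matrix")
# 26 groups -> 676 cells, which is where the number above comes from
```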

Hello 'WOS': Windows on Arm now has a price

CheesyTheClown

Re: I Wish You Luck

I use ARM every day in my development environment. I work almost entirely on Raspberry Pi these days.

I would profit greatly from a Windows laptop running on ARM with Raspbian running in WSL.

That said, I already get 12 hours of battery life on my Surface Book 2 for watching videos, and I also have a Core i7 with 16GB RAM and a GTX 1060.

Nokia basically destroyed their entire telephone business by shipping underpowered machines with too little RAM, because they actually believed battery life was why people bought phones. They bragged non-stop about how Symbian didn't need 200MHz CPUs and 32MB of RAM, and yet the web did, and when the iPhone came out and was a CPU, memory and battery whore, people dumped Nokia like the piece of crap it was. The switch to Windows was just a final death throe.

After all these years, ARM advocates seem to think people give a crap about battery life and are willing to sacrifice all else... like compatibility or usability... just so they don't have to carry a small charger with them. I honestly believe that until ARM laptops are down to $399 or less and deliver always-connected Core i5 performance, they won't sell more than a handful of laptops.

Let's also consider that no company shipping Qualcomm laptops is making a real effort at it. They're building them just in case someone shows interest. But really, the mass market doesn't have a clue what this is or why it matters, and for that much money, there are far more impressive options.

And oh... connectivity. If always-connected were really a core business for Microsoft, why is it that my 2018 model Surface Book 2 15" packs no LTE?

VMware 'pressured' hotel to shut down tech event close to VMworld, IGEL sues resort giant

CheesyTheClown

Skipped Cisco Live two years and will next

Cisco has been holding Live! in Vegas lately. I have absolutely no interest in me, my colleagues or my customers being in Vegas for the event.

The town is too loud. It’s very tacky. It is precisely the place civilized people would not want to be associated with. Let’s be honest, “what happens in Vegas...” guess what, this is not the kind of professional relationship I want to maintain with those who depend on me or I depend on.

Why would you want to hold a conference in Vegas?

1) Legalized prostitution

2) Legalized gambling

3) Free booze at the tables

4) Free or cheap buffets to gorge yourself at

5) Readily available narcotics of all sorts

6) Massive amounts of waste... not a little, the city must be one of the most disgustingly wasteful cities on earth.

7) Sequins... if that’s your thing.

Can you honestly say that you would want your serious customers to believe this is the type of behavior you associate with professionalism?

Pavilion compares RocE and TCP NVMe over Fabrics performance

CheesyTheClown

Digging for use cases?

Ok, let’s kill the use case already.

MongoDB... you scale this out, not up. MongoDB's performance will always be better when run on local disk instead of centralized storage.

Then, let’s talk how MongoDB is deployed.

It’s done through Kubernetes... not as a VM, but as a container. If you need more storage per node, you probably need a new DB admin who actually has a clue.

Then there’s development environment. When you deploy a development environment, you run minikube and deploy. Done. No point in spinning up a whole VM. It’s just wasteful and locks the developer into a desktop.

Of course there’s also cloud instances of MongoDB if you really need something online to be shared.

And for tests... you would never use a production database cluster for tests. You wouldn’t spin up a new database cluster on a SAN or central storage. You’d run it on minikube or in the cloud on Appveyor or something similar.

If latency is really an issue for your storage, then instead of a few narrow 25GbE pipes to an oversubscribed PCIe ASIC for switching and an FPGA for block lookups, you would use more small-scale nodes, map/reduce, and spread the workload with tiered storage.

A 25GbE or RoCE network in general would cost a massive fortune to compensate for a poorly designed database. Instead, it's better to use 1GbE or even 100MbE and scale the compute workload out onto more small nodes. 99% of the time, 100 $500 nodes connected by $30-a-port networking will use less power, cost considerably less to operate, and perform substantially better than 9 $25,000 nodes.
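The rough arithmetic, using the ballpark prices above (nothing here is a real quote):

```python
# Scale-out vs. scale-up, using the comment's ballpark prices.
scale_out = 100 * 500 + 100 * 30   # 100 x $500 nodes + 100 x $30 switch ports
scale_up = 9 * 25_000              # 9 x $25,000 servers, before the RoCE fabric
print(f"scale-out: ${scale_out:,}   scale-up: ${scale_up:,}")
# -> scale-out: $53,000   scale-up: $225,000
```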

Also, with a proper map/reduce design, the vast majority of operations become RAM-based, which will drastically reduce latency compared to even the most impressive NVMe architectures based on obsessive scrubbing. Go the extra mile and build indexes that are actually well formed, use views and/or eventing to mutate records, and NVMe becomes a really useless idea.

Now, a common problem I’ve encountered is in HPC... this is an area where propagating data sets for map reduce can consume hours of time given the right data set. There are times where processes don’t justify 2 extra months of optimization. In this case, NVMe is still a bad idea because RAM caching in an RDMA environment is much smarter.

I just don’t see a market for all flash NVMe except in legacy networks.

That said, I just designed a data center network for a legacy VMware installation earlier today. I threw about $120,000 of switches at the problem. Of course, if we had worked on downscaling the data center and moving to K8s, we probably could have saved the company $2 million over the next 3 years.

You lead the all-flash array market. And you, you, you, you, you and you...

CheesyTheClown

What's the value anymore?

Ok, here's the thing... all flash is generally a really bad idea for multiple different reasons.

M.2 flash has a theoretical maximum performance of 3.94GB/sec of bandwidth on the bus (four lanes of PCIe 3.0). Therefore a system with 10 of these drives should theoretically be able to transfer an aggregate bandwidth of 39.4GB a second in the right circumstances.

A single lane of networking or Fibre Channel is approximately 25Gb/sec, which is less than 1/10th of that aggregate bus bandwidth. So in a circumstance where a controller can provide 10 or more such lanes for data transfers, this would be great, but these numbers are so incredibly high that this is not even an option.

So, we know for a fact that even the highest performance storage controllers can barely make a dent in the bus capacity of a very low-end all-flash environment.

Let's get to semiconductors.

Let's consider 10 M.2 drives with 4 32Gb Fibre Channel adapters. This would mean that a minimum of 72 PCIe 3.0 lanes would be required to allow full saturation of all buses.
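The bus arithmetic behind the last few paragraphs. The x4-per-drive and x8-per-HBA lane counts are my own assumptions to make the 72-lane figure work out:

```python
# Bandwidth and lane arithmetic for 10 M.2 drives + 4 FC HBAs. The x4 drive
# and x8 HBA lane counts are assumptions, not vendor specs.
GBPS_PER_PCIE3_LANE = 0.985          # ~985 MB/s per PCIe 3.0 lane after encoding

drives, lanes_per_drive = 10, 4
hbas, lanes_per_hba = 4, 8

drive_side_bw = drives * lanes_per_drive * GBPS_PER_PCIE3_LANE   # ~39.4 GB/s
fabric_side_bw = 4 * 32 / 8          # 4 x 32Gb FC ports, roughly, in GB/s
total_lanes = drives * lanes_per_drive + hbas * lanes_per_hba    # 72

print(f"drive-side bandwidth : ~{drive_side_bw:.1f} GB/s")
print(f"fabric-side bandwidth: ~{fabric_side_bw:.1f} GB/s")
print(f"PCIe 3.0 lanes needed: {total_lanes}")
```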

This is great, but the next problem is that in this configuration, there's no means of block translation between systems. That means that things like virtual LUNs would not be possible.

It is theoretically possible to implement in an FPGA (DO NOT USE AN ASIC HERE) a traffic controller capable of handling the protocols and full-capacity translation, using a CPU-style MMU to translate regions of storage instead of regions of memory, but the complexity would have to be extremely limited, and because of translation table coherency, it would be extremely volatile.

Now... the next issue is that, assuming some absolute miracle worker out there manages to develop a provisioning, translation and allocation system for coarse-grained storage, this would more or less mean that things like thin-provisioned LUNs would be borderline impossible in this configuration. In fact, based on modern technology, it could maybe be done with custom FPGAs designed specifically for an individual design, but the volumes would be far too low to ever see a return on investment for the ASIC vendor.

Well, now we're back to dumb storage arrays. That means no compression, no thin provisioning and no deduplication, and without at least another 40 lanes of PCIe 3.0 serialized over fibre for long runs, there's pretty much no chance of guaranteed replication.

Remember this is only a 10 device M.2 system with only 4 fibre channel HBAs.

All-flash vs. spinning-disk hybrid has never been a sane argument. Any storage system needs to properly manage storage; the protocols and the software involved need to be rock solid and well designed. Fibre Channel and iSCSI have so much legacy that they're utterly useless for modern storage, as they no longer handle real-world storage problems on the right sides of the cable. Even with things like VMware's SCSI extensions for VAAI, there is far too much on the cable, and thanks to fixed-size blocks it should never exist. If nothing else, they lack any support for compression. Forget niceties like client-side deduplication, where hashes could be calculated not just for dedup but also as an additional, non-secure means of authentication.
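The client-side dedup idea looks roughly like this (a sketch with fixed-size chunks and SHA-256; a real system would more likely use content-defined chunking and ask the array which hashes it already holds):

```python
import hashlib

CHUNK = 64 * 1024          # fixed 64 KiB chunks; real systems often use variable chunking
known_hashes = set()       # stand-in for what the array already stores

def chunks_to_send(data: bytes):
    """Hash each chunk client-side and only ship chunks the store hasn't seen."""
    for off in range(0, len(data), CHUNK):
        piece = data[off:off + CHUNK]
        digest = hashlib.sha256(piece).hexdigest()
        if digest not in known_hashes:
            known_hashes.add(digest)
            yield digest, piece            # send hash + payload
        else:
            yield digest, None             # send hash only; the store already has it

payload = b"A" * CHUNK + b"B" * CHUNK + b"A" * CHUNK
sent = [piece is not None for _, piece in chunks_to_send(payload)]
print(sent)    # [True, True, False] -- the third chunk deduplicates against the first
```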

Now let's discuss cost a little.

Mathematics, physics and pure logic say that data redundancy requires a minimum of 3 active copies of a single piece of data at all times. This is not negotiable; it is an absolute bare minimum. That means that, just to meet the minimum requirement for redundant data, a company needs at least 3 full storage arrays, and possibly a 4th for periods of long-term maintenance.

Building that minimal configuration out of all-flash arrays would cost so much money that no company on earth should ever piss that much away. It just doesn't make sense.

The same holds true of Fibre Channel fabrics. There need to be at least 3 in order to make commitments to uptime. This is not my rule; this is elementary-school-level math.

Fibre Channel may support this, but the software and systems don't. It can be done over iSCSI, but certainly not over NVMe as a fabric, for example. The cost would also be impossible to justify.
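One way to read that rule of three is in quorum terms: with three copies you can lose one array (or one fabric) and still have a majority that agrees on the data. A minimal sketch, with the failure set faked purely for illustration:

```python
REPLICAS = ["array-a", "array-b", "array-c"]
QUORUM = len(REPLICAS) // 2 + 1            # 2 of 3

def replicated_write(block_id, data, failed=frozenset()):
    """Ack the write only if a majority of replicas accepted it.

    block_id/data are unused in this toy; a real write would ship them to each replica.
    """
    acks = [r for r in REPLICAS if r not in failed]   # pretend the healthy ones ack
    return len(acks) >= QUORUM

print(replicated_write(1, b"x"))                                  # True  - all healthy
print(replicated_write(1, b"x", failed={"array-b"}))              # True  - 2 of 3 still ack
print(replicated_write(1, b"x", failed={"array-a", "array-c"}))   # False - no quorum
```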

This is no longer 2010, when virtualization was nifty and fun and worth a try. This is 2018, when a single server can theoretically need to recover from the failure of 500 or more virtual machines at once.

All-flash is not the answer anymore. It's absolutely necessary to consider eliminating dumb storage, meaning block-based storage. We have a limited number of storage requirements, and every cloud vendor's catalogue reflects them:

1) File storage.

This can be solved using S3 and many other methods, but S3 on a broadly distributed file system makes perfect sense. If you need NFS for now... have fun but avoid it. The important factor to consider here is that classical random file I/O is no longer a requirement.

2) Table/SQL storage

This is a legacy technology which is on its VERY SLOW way out. We'll still see a lot of systems actively developed against it for some time, but it's no longer a preferred means of storage, as it lacks flexibility and its back-end storage is extremely hard to manage.

3) Unstructured storage

This is often called NoSQL. It gives systems queryable storage that works kind of like records in a database, but far smarter: the data is stored as a document, yet its contents can be queried. Looking at a system like Mongo or Couchbase shows what this is (see the sketch below). Redis is good for this too but generally has volatility issues.

4) Logging

Unstructured storage can often be used for this, but the query front end will be more focused on record age, both for querying and for storage tiering.

Unless a storage solution offers all four of these, it's not really a storage solution; it's just a bunch of drives and cables whose severely limited bandwidth is constantly being fought over.
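Here's the sketch promised above for point 3, a toy version of unstructured but queryable storage (plain dicts standing in for what Mongo or Couchbase would index properly; the keys and fields are made up):

```python
# Toy "unstructured storage": documents are stored whole, but their contents
# are still queryable. Real systems index this instead of scanning.
documents = {
    "order::1001": {"type": "order", "customer": "acme",   "total": 200.0},
    "order::1002": {"type": "order", "customer": "globex", "total": 42.5},
    "user::7":     {"type": "user",  "name": "Jane", "country": "NO"},
}

def query(predicate):
    """Scan-and-filter stand-in for a secondary index or declarative query."""
    return {key: doc for key, doc in documents.items() if predicate(doc)}

print(query(lambda d: d.get("type") == "order" and d["total"] > 100))
# {'order::1001': {'type': 'order', 'customer': 'acme', 'total': 200.0}}
```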

Map/reduce technology is absolutely a minimum requirement for all modern storage, and this requires full layer-7 capabilities in the storage subsystem. That way, as nodes are added, performance increases and, in many cases, overhead decreases.

As such, it makes no sense to implement a data center today on a SAN technology. It really makes absolutely no sense at all to deploy for example a containers based architecture on such a technology.

If you want to better understand this, start googling Kubernetes and work your way through containerd and cgroups. You'll find that block storage should always be local only. That means that if you were to deploy, for example, MongoDB or SQL servers as containers, they should always have permanent data stores that require no network or fabric access. All requests will be managed locally and the system will scale as needed. Booting nodes via SAN may seem logical as well, but the overhead is extremely high; in reality, PXE or preferably HTTPS booting via UEFI is a much better solution.

Oh... and enterprise SSD is just a bad investment. It doesn't actually offer any benefits when your storage system is properly designed. RAID is really really really a bad idea. This is not how you secure storage anymore. It's really just wasted disk and wasted performance.

But there are a lot of companies out there wasting a lot of money on virtual machines, for legacy reasons, and I suppose this will keep happening for a while. If your IT department is even moderately competent, though, they should not be installing all-flash arrays; they should instead be optimizing the storage solutions they already have for the datasets they're actually running. I think you'll find that, with the exception of some very special and very large data sets (like a capture from a run of the Large Hadron Collider), most existing virtualized storage systems would work just as well with a few SSDs added as cache in front of the existing spinning disks.

Flash, spinning rust, cloud 'n' tape. Squeeze. Oof. Hyperconverge our storage suitcase, would you?

CheesyTheClown

Re: Lenovo and Cloudistics could be a fail

This looks great, but suffers the same general problem as AzureStack.

First of all, to be honest, from a governance perspective, I don't trust Google to meet our needs. If nothing else, I don't trust Google to respect safe harbour. Microsoft has now spent years fighting the US government over safe harbour issues, while Google simply provides transparency about them. I have absolutely nothing to hide personally, but for business I have to be vigilant about people's medical and financial records. This is not information that any company outside my country has a legal right to. That means I can't even trust a root certificate from outside this country, which also means I can't use any identity system controlled by a company outside this country. That means no Google login or Azure AD. That also means no Azure Stack or GCP.

Beyond that, Cisco simply doesn't make anything even close to small enough for cloud computing anymore. They used to have the UCS-M series blades which were still too big. To run a cloud, you need a minimum of 9 nodes spread across 3 locations. The infrastructure cost of Cisco is far too high to consider this.

It's much better to have more nodes in more locations. As such we're experimenting with single board computers like Raspberry Pi (which is too underpowered but is promising) and LattePanda Alphas which are too expensive and possibly overpowered to run a cloud infrastructure.

We're looking now at Fedora (we'd choose RedHat, but don't know how to do business with them), Kubernetes, Couchbase and .NET Core. This combination seems to be among the most solid options on the market. We're also looking at OpenFaaS, but OpenFaaS is extremely heavyweight in the sense that it spins up containers for everything, and containers are an insanely heavy way to host a single function. So we're looking into other means of isolating code.

We're walking very softly because we know that as soon as a component becomes part of our cloud, it's a permanent part which will require 20-50 years support. We need something we know will run on new hardware and have support.

Google is amazing and I'd love to use a hybrid cloud, but the problem with public clouds in general is that the money we could be spending on developers, engineers and supporting our customers is instead being burned on governance, compliance and legal. Instead, we need a fully detached system, which is why I was attracted by Lenovo's solution until it became clear that Cloudistics is focused only on selling to C-level types and not to the engineers who will have to use it.

CheesyTheClown

Lenovo and Cloudistics could be a fail

So, I'm working a lot on private cloud these days. The reason is that none of the public cloud vendors meet my governance requirements for the system my company is developing.

Azure Stack is out of the question because it requires that the platform is connected to the Internet for Azure AD. So... no luck there.

I've been looking and looking, and to be fair, the best solution I've seen is to simply install Linux, Kubernetes, Couchbase and OpenFaaS. With these four items, it should be possible to run and maintain pretty much anything we need. We'll have to contribute changes to OpenFaaS, as it's still not quite the answer to all our problems, and we're considering writing a Couchbase backend for OpenFaaS as well. But once all that is covered, it's a much better solution than the alternatives.

That said, we keep our eyes open for alternatives. So when I saw a possible solution in this article, I went to check. It's a closed platform with no developer (or system administrator) documentation online. There's no open source links and there's no apparent community behind it.

So, why in the world would anyone ever invest in a platform from a company like Cloudistics which no one has ever heard of, has no community and hence no "experts" and more than likely won't exist in 12 months time?

If I were a shareholder of a company that chose to use this solution in its current state, I would consider litigation for gross mismanagement of the company. This is an excellent example of how companies like Cisco, Lenovo, HPE and others are so completely out of touch with what the cloud is that white box actually makes more sense.

ReactOS 0.4.9 release metes out stability and self-hosting, still looks like a '90s fever dream

CheesyTheClown

Re: Use case for ReactOS

I'll start with... because "Some of us like it" and don't really mind paying a few bucks for it.

I'm also a heavy development user. And although I'm really perfectly happy with vi most of the time, I much prefer Visual Studio. I actually just wrote a Linux kernel module using Visual Studio 2017 and Windows Subsystem for Linux for the most part. Which is really funny, since WSL doesn't use the Linux kernel.

There are simply some of us who like to have Windows running on their systems. Even if I were using Linux as the host OS, I would still do most of my work in virtual machines for organizational reasons and frankly, WSL on Windows is just a thing of beauty.

As for the more modern UIs many people complain about here: I honestly haven't noticed. You press the Windows key and type what you want to start, and it works. This has been true since Windows 7 and has only gotten better over time.

Then there's virtualization. Hyper-V is a paravirtualization engine which is frigging spectacular. With the latest release of QEMU, which is now accelerated on Windows (like kqemu used to be), you can run anything and everything beautifully.

I have no issues with the software you run... I believe if you sat coding next to me, you'd probably see as many cool new things as I'd see sitting next to you. But honestly, I've never found a computer which runs Linux desktop with even mediocre performance. They're generally just too slow for me. So, I use Windows which is ridiculously fast instead.

As for Bill Gates. Are you aware that Bill has more or less sold out of Microsoft? He's down to little more than 1% of the company. You can give Microsoft gobs of money and he would never really notice. Take it a little further and you might realize that this isn't the Bill Gates of the 1980s. He's grown up and now is a pretty darn good fella. So far as I can tell, since he's been married, he's evolved into one of the most amazingly nice people on earth. I can't see that he's done anything in the past 15-20 years which would actually justify a dislike of him or a distrust of his motives.... unless you're Donald Trump who Bill kind of attacked recently for speaking a little too affectionately about Bill's daughter's appearance.

Windows 10 IoT Core Services unleashed to public preview

CheesyTheClown

Re: Well if MS are offering to do that...

Some of us don't use registered MAC addresses. We simply use duplicate address detection and randomize. There's really no benefit to registered MAC addresses anymore. Simply set the 7th bit to 1 and use a DAD method.
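For anyone wondering what "set the 7th bit and randomise" looks like in practice, here's a sketch (the bit in question is the locally administered U/L bit of the first octet; duplicate address detection itself is left to the network stack):

```python
import random

def random_local_mac() -> str:
    """Generate a random, locally administered, unicast MAC address."""
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] |= 0x02        # set the locally administered (U/L) bit
    octets[0] &= 0xFE        # clear the multicast bit so it stays unicast
    return ":".join(f"{o:02x}" for o in octets)

# A device would generate one of these, run duplicate address detection
# (e.g. probe the segment) and simply re-roll on a collision.
print(random_local_mac())    # e.g. '6a:3f:...'
```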

Also consider that many of us don't use Ethernet for connectivity. There are many other solutions for IoT. A friend of mine just brought up a 1.2 million node multinational IoT network on LTE.

MAC address filtering and management is basically a dead end. There's just no value in it for many of us. It really only adds a massive management overhead to production of devices. And layer-2 is so bunged to begin with that random MAC addresses with DAD can't really make it any worse.

Who fancies a six-core, 32GB RAM, 4TB NVME ... convertible tablet?

CheesyTheClown

Will have bugs and no love from HP

For a product of this complexity to be good, it needs to reach high enough volumes that the user feedback on the product is good enough to solve problems. A company the size of HP will ship this, but the volume of bug reports will be low for a few reasons.

1) the user count is low

2) the typical user of this product won't have a reliable means of reporting bugs other than forums. This is because they work for companies that can afford these systems and would have to report through IT. IT will not fully understand or appreciate the problems, or how they actually affect the user, and therefore will not be able to convey the problems appropriately.

3) HP does not make the path from user to developer/QA transparent as once the product is shipped, those teams are reassigned.

As such, HP's large product portfolio is precisely why this is a bad purchase. Companies like Microsoft and Apple build a small number of systems and maintain them long term. Even with the huge specifications on these PCs, a lower-end system with some of the work offloaded to the cloud is far more fiscally responsible.

Of course, people will buy them and if we read about them later, I doubt the user response will be overly positive.

I’m using a Surface Book 2 15” with a Norwegian keyboard even though I have it configured to English. This is because a LOT of negative feedback reached MS on the earlier shipments and by buying a model I was sure came off the assembly line a few months later, I was confident that many of the early issues were addressed.

This laptop from HP will not have that benefit, because to produce them profitably they will probably need to make almost all the laptops of this model they will ever ship, or at least components like the motherboards, in a single batch. So even later shipments will probably not see any real fundamental fixes.

But if you REALLY need the specs, have a blast :) You’re probably better off with a workstation PC and Remote Desktop from a good laptop though.

Even Microsoft's lost interest in Windows Phone: Skype and Yammer apps killed

CheesyTheClown

Re: MS kills UWP apps, Telephony API appears in Windows

Nope, both hands know what’s happening. The telephony APIs allow for Android integration. So the APIs permit Windows 10 Always Online devices (laptops with built in LTE) to provide a consistent experience across phone and laptop.

For instance, you will probably be able to make a call from your laptop. They also integrated messaging.

But I guess that’s not as exciting as assuming it means that Microsoft is confused. :)

White House calls its own China tech cash-inject ban 'fake news'

CheesyTheClown

Re: Enjoy this while it lasts

I don’t know whether I want to agree or debate this.

We saw Republicans dropping out of the election for no reason that seemed clear. One after another just dropped out and yielded to Trump with no explanation to be had. Each time one of them dropped out and made their support for Trump clear, it looked like someone behaving as if they had been forced to under duress.

Bernie seemed to have real support from people because they believed in him politically, as though they liked his message. Hillary seemed to garner support from people who liked her making fun of Trump, and from people voting for superficial reasons. I've long believed that it's time for a female president; I remember as a child being excited that Geraldine Ferraro was running. But Hillary simply scared me, because her message didn't seem to be anything other than "I'll win and it's my turn!"

Sanders dropped out, seemingly out of frustration over the stubborn child stomping her feet and claiming "I'll win, it's my turn!"

I have had great hopes that if this election proved anything to the American people, it’s that the two parties are so corrupt that people need a choice and neither party is offering a choice to the people.

Amazon, Facebook, Twitter, Google, Microsoft, Netflix, and others could all change the platform. They could reinvent the entire two-party system overnight. All it would take is for each to build on their platforms a new electoral process to identify and support candidates that they would then have added to the ballot. If each company ran a different competition and system to identify and sponsor candidates, we could have a presidential election with 10 or more alternatives to choose from.

They can even allow underdogs to get a grip on the elections. For example, traditional fund raisers which reward only people willing to sell their political capital would become irrelevant. People could get elected because they were in fact popular instead of having sold their souls in exchange for enough money for some commercial time.

I think Trump and Hillary may be the best thing to ever happen to America. If two shit bags like them can end up being the only possible choices the people had, then it’s clear it’s time for a change.

Why aren't startups working? They're not great at creating jobs... or disrupting big biz

CheesyTheClown

What do you mean?

So, let's say this is 1980 and you start a new business.

You'll need a personal assistant/secretary to :

- type and post letters

- sort and manage incoming letters

- perform basic book keeping tasks

- arrange appointments

- answer phones

- book travel

You'll need an accountant to :

- manage more complex book keeping

- apply for small business loans

- arrange yearly reports

You'll need a lawyer to :

- handle daily legal issues

- write simple contracts

You'll need an entire room full of sales people to

- perform business development tasks

- call every number in the phone book

- manage and maintain customer indexes

You'll need a "copy boy" to

- run errands

- copy things

- distribute mail

Etc...

Now in 2018

You'll need

- an app for your phone to scan receipts into your accounting software

- an accounting app to perform year end reports and to manage your bank accounts

- an app to click together legal documents based on a wizard

- a customer relationship manager application

- a web site service for your home page

- etc...

Let's imagine you are a lawyer in 1980...

- You'd study law

- Graduate

- Take a junior position doing shit work

- Pass the bar

- work for years taking your boss's shitty customers

- work for years trying to sell your body to get your own customers

- once your portfolio was big enough, you'd become a senior partner who would take a cut from everyone else's customers.

The reason the senior lawyer hired junior lawyers was because there was a massive amount of work to do and a senior partner would spend most of their time talking and delegating the actual work to a team of juniors, researchers and paralegals.

Now the senior can do 95% of the work themselves by using an iPad with research and contract software installed in less time than it would have taken to delegate. So where a law firm may have employed 10-20 juniors, paralegals and researchers in 1980 per senior, today, one junior lawyer probably can easily handle the work placed on them by two seniors.

There's no point hiring tons of people anymore. Creating a startup that is dependent on a head count is suicide from the beginning. If you're a people-based company, then the second someone smarter sees there's a profit to be made, they'll open the same type of business with far more automation.

Cray slaps an all-flash makeover on its L300 array to do HPC stuff

CheesyTheClown

What is the goal to be accomplished?

Let's assume for the moment that we're talking about HPC. So far as I know, whether using Infiniband or RDMAoE, all modern HPC environments are RDMA enabled. To people who don't know what this means, it means that all the memory connected to all the CPUs can be allocated as a single logical pool from all points within the system.

If you had 4000 nodes at 256GB of RAM per node, that would provide approximately 1 petabyte of RAM online at any given time. Loading a dataset into that RAM will take some time, but compared to performing large random-access operations across NVMe, which is REALLY REALLY REALLY slow in comparison, it makes absolutely no sense to operate from data storage. Also, storage fabrics, even using NVMe, are ridiculously slow due to the fact that even though layers 1 to 3 are in fact fabric oriented, the layer 4-7 storage protocols are not suited for micro-segmentation. As such, it makes absolutely no sense whatsoever to use NVMe for storage-related tasks in supercomputing environments.

Now, there's the other issue. Most supercomputing code is written using a task broker that is similar in nature to Kubernetes. It spins up massive numbers of copies wherever CPU capacity is available. This is because, while many supercomputing centers embrace language extensions such as OpenMP to handle instruction-level optimization and threading, they are generally skeptical about run-time type information, which would allow annotating code with attributes that could be used when scheduling tasks.

Consider that moving the data set to the processor it will run on can mean moving gigabytes, terabytes or even petabytes over the fabric. However, if the data set were distributed into nodes within zones, then a large-scale dataset could be geographically mapped within the routing regions of a fabric, and the processes, which require moving megabytes or at worst gigabytes, could be moved to where the data is when needed. This is the same concept as vMotion, but far smarter.

If the task is moved from one part of the supercomputer to another to bring it closer to the desired memory set, the program memory can stay entirely intact and only the CPU task is moved. Then, on heap read operations, the MMU kicks in to access remote pages and relocate the memory locally.

It's a similar principle to map/reduce, except that in a massive data set environment map/reduce may not work given the unstructured layout of the data. Instead, marking functions with RTTI annotations can allow the JIT and scheduler to move executing processes to the closest available zone within the supercomputer to access the memory needed by the following operations. A process move within a supercomputer using RDMA could happen in microseconds, or milliseconds at worst.
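A toy version of "move the task to the data" looks something like this (the placement map, node names and load figures are made up; a real scheduler would also weigh CPU load, fabric topology and zone boundaries):

```python
# Which node holds which data partition (hypothetical placement map).
partition_location = {"part-0": "node-12", "part-1": "node-12", "part-2": "node-47"}

def place_task(task_id, needed_partitions, node_load):
    """Run the task on the node holding most of its data, breaking ties by lower load."""
    counts = {}
    for p in needed_partitions:
        node = partition_location[p]
        counts[node] = counts.get(node, 0) + 1
    return max(counts, key=lambda n: (counts[n], -node_load[n]))

load = {"node-12": 0.7, "node-47": 0.2}
print(place_task("t1", ["part-0", "part-1", "part-2"], load))
# node-12: two of the three partitions are already local, so the task moves there
```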

Using a system like this, it could actually be faster to simply have massive tape drives or reel to reel for the data set as only linear access is needed.

But then again... why bother using the millions of dollars of capacity you already own when you could just add a few more million dollars of capacity?

Norwegian tourist board says it can't a-fjord the bad publicity from 'Land of Chlamydia' posters

CheesyTheClown

Re: Norwegian History

I think if you checked the Norwegian economy, you might find oil and natural gas doesn't account for as much as you might think.

CheesyTheClown

Ummm been done

There's a chain called Kondomriet all over Norway that sells electric replacements for sexual activities that generally require fluid exchange between participants.

They even advertise them pretty much everywhere with an "Orgasm guarantee", though I wonder if that's just a gimmick. How many people would actually attempt to return a used item like that?

What's all the C Plus Fuss? Bjarne Stroustrup warns of dangerous future plans for his C++

CheesyTheClown

Mark Twain on Language Reform

Read that and it all makes sense

Wires, chips, and LEDs: US trade bigwigs detail Chinese kit that's going to cost a lot more

CheesyTheClown

There goes buying from the U.S.

My company resold $750 million of products manufactured in the US last year. Already, these products are at a high premium compared to French and Chinese products. They are a tough sell and it’s almost entirely based on price.

Those items are built mostly from steel, chips, LEDs and wires.

Unless those US companies move their manufacturing outside of the US, we’ll be forced to switch vendors, otherwise the price hikes will be a problem for us. I know that the exported products will have refunds on the duties leaving the US, but the vendors cannot legally charge foreigners less than they charge Americans for these products. So, we’ll have to feel the penalty.

So, I expect to see an email from leadership this coming week telling us to propose alternatives to American products.

Intel confirms it’ll release GPUs in 2020

CheesyTheClown

Re: Always good to have competition to rein in that nVidia/AMD duopoly

The big difference between desktop and mobile GPUs is that a mobile GPU is still just a GPU. Desktop GPUs are about large-scale cores, and most of the companies you mentioned in the mobile space lack the in-house skills to handle ASIC cores. When you license their tech, you usually get a whole lot of VHDL (or similar) that can be added to another set of cores. ARM, I believe, does work a lot on its ASIC synthesis, and of course Qualcomm does as well, but their cores are not meant to be discrete parts.

Remember, most IP core companies struggle with high-speed serial buses, which is why USB3, SATA and PCIe running at 10Gb/sec or more are hard to come by from those vendors.

AMD, Intel and NVidia have massive ASIC simulators, costing hundreds of millions of dollars from companies like Mentor Graphics, to verify their designs on. Samsung could probably do it, and probably Qualcomm, but even ARM may have difficulties developing these technologies.

ASIC development is also closed loop. Very few universities in the world offer actual ASIC development programs in-house. The graduates of those programs are quickly sucked up by massive companies and are offered very good packages for their skills.

These days, companies like Google, Microsoft and Apple are doing a lot of ASIC design in house. Most other newcomers don't even know how to manage an ASIC project. It's often surprising that none of the big boys like Qualcomm have sucked up TI, who have strong expertise in DSP ASIC synthesis, though even TI has struggled A LOT with high-speed serial in recent years. Maxwell's theory is murder for most companies.

So most GPU vendors are limited to what they can design and test in FPGA which is extremely limiting.

Oh... let's not even talk about the problems most companies would face trying to support OpenCL or TensorFlow in their hardware and drivers. Or what about Vulkan? All of these would devastate most companies. Consider that AMD, Intel and NVidia release a new GPU driver almost every month. Most small companies couldn't afford that scale of development, or even of distribution.

UK's first transatlantic F-35 delivery flight delayed by weather

CheesyTheClown

Wouldn't it be most responsible if....

The F-35s are simply left grounded?

I mean honestly... who in their right mind would fly something that expensive into a situation where they might get damaged?

Let's face it, if one of these planes were damaged in training or in a fight, the financial repercussions would be devastating. That would be massive money simply flushed down the drain.

The pilots are something else we can’t afford to risk. To train an F-35 pilot is so amazingly expensive we can’t possibly afford to place them in harms way.

I think it would be best to just keep the planes grounded.

Microsoft commits: We're buying GitHub for $7.5 beeeeeeellion

CheesyTheClown

Re: Shite

haha I actually should have read the entire post first. I went to the same website you did. I have to admit, I shamelessly download software from there all the time because sometimes I forget how good things are today unless I compare them to the days that came before.

I tried writing a compiler using Turbo C 2.0 recently. That simply did not go well.

Even though they had an IDE, it was single file and it lacked all the great new features we love and adore in modern IDEs. Now I managed to do it. I had a simple compiler up and running within about an hour, but to be fair, it was an absolute nightmare.

That said, the compile times and executable sizes were really impressive.

But of course things like real mode memory was not a great deal of fun. Also whenever you start coding in C, you get this obsessive need to start rewriting the entire planet. I was 10 minutes away from writing a transpiler to generate C code because C is such a miserable language to write anything useful in. No concept of a string and pathetic support for data structures and non-relocatable memory... YUCK!!!

I will gladly take Visual Studio 2017 over 1980s text editors. Heck, I'll take Notepad++ over those old tools.

You should get a copy of some of those old tools up and running and try to write something in them. It's actually really funny to find out that the keys don't do what your fingers think they do anymore. And what's worse, try doing it without using Google. :) I swear it's painful but entertaining. GWBASIC is a real hoot.
