* Posts by CheesyTheClown

548 posts • joined 3 Jul 2009

Brocade undone: Broadcom's acquisition completes

CheesyTheClown
Bronze badge

Was buying FibreChannel a good deal?

1) FC doesn't work for hyper-converged: adapter firmware supports initiator or target mode, not both. As such, you can't host FC storage in the same chassis that consumes it.

2) Scale-out storage, which is far more capable than a SAN, requires multicast to replicate requests across all sharded nodes. FC (even with MPIO) does not support this. As such, FC bandwidth is always limited to a single storage node. With MPIO, it is possible to run two separate SANs for improved performance, but the return on investment is very low.

3) FC carries SCSI (or in some cases NVMe) over a fibre protocol. These are block protocols which require a huge amount of processing on multiple nodes to perform block address translations and make use of long-latency operations. In addition, by centralizing block storage, controllers have to perform massive hashing and lookups for possibly hundreds of other nodes. This is a huge bottleneck which even ASICs can't cope with. Given the massive limitations in the underlying architecture of FC SANs, distribution of deduplication tasks is not possible.

4) FC (even using Cisco's MDS series) has severe distance limitations. This is controlled by the credit system, which is tied to the size of the receiving buffers. Additional distance adds latency, which requires additional buffers to avoid bottlenecks (rough math sketched below). 32Gb/s over 30km of fibre probably requires 512MB of fast cache to avoid too many bottlenecks. At 50km, the link is probably mostly unused. Using FCIP can reduce the problem slightly, but iSCSI would have been better and SMB or NFS would have been infinitely better.
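
As a rough back-of-envelope of why distance eats buffer credits, here is a minimal sketch. The ~5µs/km propagation figure and 2112-byte full-size frames are my assumptions, and this only counts what is needed to keep the link full, not any caching a real array wants on top.

```python
import math

# Back-of-envelope: FC buffer-to-buffer credits needed to keep a long link busy.
# Assumptions: ~5 us/km propagation in fibre, 2112-byte full-size FC frames,
# and R_RDY processing time ignored.

def bb_credits_needed(rate_gbps, distance_km, frame_bytes=2112, us_per_km=5.0):
    """Credits that must be outstanding so the sender never stalls."""
    rtt_s = 2 * distance_km * us_per_km * 1e-6   # a credit only returns after a round trip
    bits_in_flight = rate_gbps * 1e9 * rtt_s     # pipe capacity over that round trip
    return math.ceil(bits_in_flight / (frame_bytes * 8))

for km in (1, 10, 30, 50):
    credits = bb_credits_needed(32, km)
    print(f"{km:>3} km @ 32Gb/s -> ~{credits} credits "
          f"(~{credits * 2112 / 2**20:.1f} MB of receive buffer)")
```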

I can go on, but to be fair, unless you have incompetent storage admins, FC has to look like a dog with fleas by now. We use it mostly because network engineers are horrible at supporting SCSI storage protocols. If we dump SCSI and NVMe as long-range protocols, the problems don't exist.

I would however say that FC will last as long as storage admins last. Since they are basically irrelevant in a modern data center, there is no fear that people will stop using FC. After all, you can still find disk packs in some banks.

4
3

NetApp's back, baby, flaunting new tech and Azure cloud swagger

CheesyTheClown
Bronze badge

What are they claiming?

So, the performance and latency numbers are on the pretty damn slow side. Probably still bottlenecks associated with using Data ONTAP, which is famously slow. Azure has consistently shown far better storage performance numbers than this in the Storage Spaces Direct configuration.

I have seen far better numbers on MariaDB using Storage Spaces Direct in the lab as well. With a properly configured RDMA solution for SMB3 in the back end, there is generally between 80 and 320Gb/s back-end performance. This is substantially better than any NVMe configuration, mainly because NVMe channels are so small in comparison. Of course, the obscene amount of waste in the NVMe protocol adds to that as well. NVMe is only well suited for direct-to-device attachment. Routing it through a fabric severely hurts storage latency and increases the chance of errors which aren't present when using PCIe as designed.

Overall, it's almost always better to use MariaDB scaled on Hyper-V with paravirtualized storage drivers than to do silly things like running it virtualized over NFS. In fact, you will see far better numbers on proper Windows technologies than by using legacy storage systems like this.

I think the main issue here is that Microsoft didn't want to deal with customers who absolutely insist on doing things wrong. So they bought a SAN and just said... "Let NetApp deal with these guys. We'll manage customers who have actual technical skills, NetApp can have the customers who think virtual servers are smart".

1
2

Now Oracle stiffs its own sales reps to pocket their overtime, allegedly

CheesyTheClown
Bronze badge

Re: Overtime falsification in the timesheet. How quaint. And how familiar.

Overtime?

I've worked generally 60+ hours a week for the past 25 years. When I became a father, I worked less because priorities changed. I would however never work a job that doesn't excite me enough to do it all the time. People generally pay me to do what I would do even if I wasn't working. I don't think I've ever received overtime. Though if they ask me to work more on things which bore me, I often get bonuses.

That said, I generally negotiate as part of my salary: "I'm going to work a lot more than 40 hours a week and don't want to be bothered asking for overtime. Just pay me 50% more and we'll call it even".

Then again, I don’t really look for jobs. I simply leave if I don’t get what I want and we all end up happy in the end.

7
1

Windows on ARM: It's nearly here (again)

CheesyTheClown
Bronze badge

Re: LOL

“Known” is the key word.

0
1
CheesyTheClown
Bronze badge

Sorry... I vomited in my mouth while choking

On what planet is Chromebook secure?

A) runs Linux as a core

B) has very little security research targeting it, so most vulnerabilities are unknown.

C) Runs on fairly generic hardware produced by vendors who don't tailor it to the platform's security hardware.

D) has a fairly small business user base and hasn’t properly been tossed to the wild as a hacker target.

I can go on... but that comment was as good as Blackberry claiming that 11 million lines of untested code for a total rewrite with an entirely new OS Core was secure.

3
3
CheesyTheClown
Bronze badge

Instruction set doesn’t really matter

You're absolutely right. Intel hasn't run x86 or x64 natively for years. Instead, they have an internal instruction set decoder/recompiler implemented mostly as ASIC and partially as microcode to make it so x86 and x64 are little more than a means of delivering a program. In fact, it's similar to .NET CIL or Java IL. It's actually much closer to LLVM IR.

There are some real benefits to this. First, the recompiler can identify instructions that can be executed out of order because there are no register or cache read/write dependencies. Alternatively, it can automatically run instructions in parallel on separate parts of one or more ALUs when they lack dependencies. As such, more advanced cores can process the same code in fewer clock cycles, assuming there is no contention.

Microsoft has spent 15 years moving most non-performance-critical code away from x86 or anything else and over to .NET. They have also implemented the concept of fat binaries like Apple did with PPC, ARM and x86/x64. In addition, they have been making LLVM and Clang part of Visual Studio. Windows probably has a dozen technologies that allow platform-agnostic code to run on Windows now.

Emulating x86 is nice, but is really only necessary for older and unmaintained software. Most modern programs can carry across platforms with little more than a recompile, and on power-conscious devices GPU-intensive code will be the same while CPU-intensive code is frowned upon. So, you wouldn't want to run x264, for example, on a low-power device... and certainly not emulated. You'd favor either a video encoder core or a GPU encoder.

As for JIT and AOT dynamic recompilers, I could literally write a book on the topic, but there is absolutely no reason why two architectures as similar as x86 and ARM shouldn't be able to run each other's code at near-native speed. In fact, it may be possible to make the code run faster if targeting the specific local platform. Also consider that we have run ARM binaries emulated on x86 for a long time, and the performance is very respectable. I believe Microsoft is more focused on accuracy and on avoiding patent infringement. Once they get it working, it is entirely possible that running x86 on x86 through the emulator may be faster than running native, because JITs are amazing technology in that they can do things like intelligently pipeline execution branches and realign execution order for the host processor.
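
To make the dynamic recompiler idea concrete, here is a toy sketch of the core trick: translate a guest block once, cache it by guest address, and reuse the cached translation on every later pass. The "guest ISA" is invented for illustration and has nothing to do with real x86 or ARM encodings.

```python
# Toy sketch of a dynamic binary translator: translate a guest instruction once,
# cache the translation keyed by guest address, and reuse it on later runs.
# The "guest ISA" below is invented purely for illustration.

GUEST_CODE = {            # guest address -> instruction
    0x100: ("mov", "r0", 2),
    0x101: ("add", "r0", 40),
    0x102: ("halt",),
}

translation_cache = {}    # guest address -> host-side callable

def translate(addr):
    """'Compile' one guest instruction into a host callable returning the next PC."""
    op, *args = GUEST_CODE[addr]
    def compiled(regs):
        if op == "mov":
            regs[args[0]] = args[1]
        elif op == "add":
            regs[args[0]] += args[1]
        elif op == "halt":
            return None               # stop execution
        return addr + 1
    return compiled

def run(entry):
    regs, pc = {"r0": 0}, entry
    while pc is not None:
        if pc not in translation_cache:      # translate only on first encounter
            translation_cache[pc] = translate(pc)
        pc = translation_cache[pc](regs)     # later hits reuse the cached translation
    return regs

print(run(0x100))    # {'r0': 42}
```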

Nice comment though :)

1
1

The UK's super duper 1,000mph car is being tested in Cornwall

CheesyTheClown
Bronze badge

Re: Cool, but why?

So, the answer is... no... there is no why. They simply justify it as being cool.

I'm like the guy who asked... I think it sounds nifty. It would have been much cooler if there were an application. Of course, I believe that the "before science gets in the way" argument is crap. To suggest that:

A) Getting to 1000MPH doesn't require piles of science is silly. There is propulsion, aerodynamics, chemistry, etc... involved here already. This project wouldn't stand a chance without tons of science.

B) 1000MPH is a ridiculous, arbitrary number. If this were ancient Egypt, we'd claim an arbitrary number of cubits; elsewhere leagues; in most of the world kilometres, etc... 1000MPH is of no particular scientific or engineering significance. Has any physicist ever calculated that 1000MPH is when an object must leave the ground? Did we decide a mile should be one thousandth of a magic number that is when things can't stay on the ground?

All this really did was prove that you can lay a rocket on its side and with the right structure and right shape, it would stick to the ground and hopefully go straight.

Oh... let’s not forget that it glorifies insane amounts of waste. I am generally horrified by stuff like this.

Now, a 1000MPH electric maglev or 1000MPH fuel cell powered EM pulse engine... that would be cool. But glorifying a sideways metal phallus with incredible thrust that ejects massive amounts of liquid while pushing so hard it bypasses friction, and that once depleted sputters out and becomes limp... I must admit these guys... brilliant or not, are more than a little scary.

5
28

Apple Cook's half-baked defense of the Mac Mini: This kit ain't a leftover

CheesyTheClown
Bronze badge

Re: Too late

It’s not about performance. It’s about connectivity.

I don't like Mac keyboards... I used them a long time and when I started using a Surface Pro, it was like a blessing from the heavens. Mac keyboards are a curse... especially the latest one, which has absolutely no perceivable tactile feedback. I might as well be typing on hard wood.

So, I need a Mac to remote into.

A Mac Pro is just not a sound investment. It's several generations behind on Xeon... which matters even more than it would on an i7. It has an ancient video card. It has slow RAM and a slow SSD. If you're going to spend $5000 on a new computer, it should be more. Even so, a modern version would be 10 times more machine than would be worth paying for. After all, the Mac doesn't have any real applications anymore. Final Cut is dead, Photoshop and Premiere don't work nearly as well on Mac as on PC. Blah blah.

Then there’s iMac. Right specs, but to get one with a CPU which isn’t horrifying, it takes a lot of space and doesn’t have pen support. In addition, you can’t easily crack it open to hard-wire a remote power switch and it doesn’t do Wake On LAN properly. So, you can’t use it unless it’s somewhere easily accessible.

Then there is the Mac Mini. Small, sweet and nice. I use a Mac Mini 2011 and a 2012 which I won't upgrade unless there is a good update. The latest Mac Mini doesn't offer anything mine doesn't already have except USB3 and Thunderbolt 2. But to make that interesting, consider that getting it costs $1500, as the cheaper ones are slower than my old ones. If I were to spend $1500 on a machine, I'd want current generation.

So... that means that there aren’t any Macs to buy.

Let's add iRAPP. Apple has the absolute worst Remote Desktop support of any system. iRAPP was amazing, but it's dead and now there is no hope for remote management.

So that leaves a virtual hackintosh. Problem is, that requires VirtualBox or VMware... neither is attractive as I program for Docker, which is on Hyper-V, which can't run side by side with either VMware or VirtualBox.

The end result is... why bother with Mac? It's too much work and there's just no reason to perpetuate a platform which even Apple doesn't seem to care about anymore. No pen, no touch, no function keys, no tactile feedback. I can't use my iPhone headphones on a Mac unless I use a dongle on the phone or play the pair-and-re-pair Bluetooth game when not looking everywhere for my other earbud, which I forgot to charge anyway.

I still use iPhone... but I’m seriously regretting that since the last software update that has an iMessage app that looks like an American highway covered with billboards for 20 companies, doesn’t scroll properly, etc... let’s not get started on the new mail app changes.

Apple is a one-product company. They make the iPhone and they make a computer to program it on. The iPad is basically dead... well, you wouldn't buy a new one at least. I'm happy with my 5-year-old one. AppleTV is cute, but I ended up buying films on Google now because it works on more devices. I'd actually switch to Android now if Google made an "import my iTunes stuff" app which would add my music and movies to my Google account.

13
6

Europol cops lean on phone networks, ISPs to dump CGNAT walls that 'hide' cyber-crooks

CheesyTheClown
Bronze badge

Re: v7 needed

I write this now from a computer which has been IPv6-only (though sometimes upgraded) on a network which has been IPv6-only except at the edge for 7 years.

My service provider delivers IPv6 to my house using 6rd, which appends my 32-bit IP address to the end of a 28-bit network prefix they own to allow 16 /64 subnets (IPv6 does not variably subnet past /64) within my home.
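
For anyone who hasn't met 6rd, here is a minimal sketch of the prefix math. The /28 6rd prefix and the IPv4 address below are made-up example values, not my real ones.

```python
import ipaddress

# Minimal 6rd sketch: the ISP's 6rd prefix plus the subscriber's 32-bit IPv4
# address yields the delegated IPv6 prefix. Example values are made up.

def sixrd_prefix(isp_prefix: str, ipv4: str) -> ipaddress.IPv6Network:
    net = ipaddress.IPv6Network(isp_prefix)
    v4 = int(ipaddress.IPv4Address(ipv4))
    # Place the 32 IPv4 bits directly after the ISP's 6rd prefix bits.
    base = int(net.network_address) | (v4 << (128 - net.prefixlen - 32))
    return ipaddress.IPv6Network((base, net.prefixlen + 32))

delegated = sixrd_prefix("2a02:1230::/28", "203.0.113.25")
print(delegated)                        # the /60 delegated to this subscriber
print(2 ** (64 - delegated.prefixlen))  # 16 possible /64 subnets inside it
```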

Anyone using my service provider who wants IPv6 can either obtain their IPv6 information via DHCP extensions that provide the prefix and therefore automatically create the tunnel over their IPv4 network... or they can manually configure it. Of course, you probably need to know IPv6 to do so.

I use IPv6 exclusively (except for a single HP printer and my front door lock) within my house. By using a DNS64 server, when I resolve an address which lacks an IPv6 destination, the DNS server provides the top 64-bits of my address containing a known prefix (I chose) and the bottom 32-bits contain the IPv4 address I'm trying to reach. The edge device then recognizes the destination prefix and creates a NAT record and replaces the IPv6 header with an IPv4 header to communicate with the destination device. This is called NAT64.
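
And a minimal sketch of what the DNS64/NAT64 pair does with the addresses. I use my own prefix; the sketch uses the well-known 64:ff9b::/96 purely for illustration.

```python
import ipaddress

# Minimal DNS64/NAT64 sketch: embed an IPv4 address in the low 32 bits of a
# /96 NAT64 prefix. The well-known prefix 64:ff9b::/96 is used here for
# illustration; any locally chosen prefix works the same way.

NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(ipv4: str) -> ipaddress.IPv6Address:
    """What DNS64 does when a name only has an A record."""
    return ipaddress.IPv6Address(
        int(NAT64_PREFIX.network_address) | int(ipaddress.IPv4Address(ipv4))
    )

def extract_ipv4(ipv6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    """What the NAT64 gateway does on the way out: recover the embedded IPv4."""
    return ipaddress.IPv4Address(int(ipv6) & 0xFFFFFFFF)

aaaa = synthesize_aaaa("198.51.100.7")
print(aaaa)                  # 64:ff9b::c633:6407
print(extract_ipv4(aaaa))    # 198.51.100.7
```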

I run zone based firewalling on a Cisco router which allows me to allow traffic to pass from the inside of my network to the outside freely and establish return paths.

I have not seen any compatibility issues between IPv4 and IPv6 in the past 7 years. The technology is basically flawless. It's actually plug-and-play in many cases as well.

Is it possible you're claiming there is a compatibility issue between the two protocols because you don't know how to use them?

BTW... I first started using IPv6 when Microsoft Research released the source code for IPv6 on Windows NT 4.0. I've had it running more or less ever since. At this time, over 85% of all my traffic is 100% IPv6 from work and home. Over 95% of all my traffic is encrypted using both IPv6 IPSEC end-to-end and 802.1ae LinkSec/MACSEC between layer-2 devices.

There has been one single problem with IPv6 which is still not resolved and I'm forced in my DNS64 gateway to force IPv4 instead of IPv6. That is because Facebook has DNS AAAA records for some of their servers which no longer exist.

As for technical complexity... I believe a drunken monkey can set this up with little effort.

But I guess you think it's worth a nearly $1 trillion investment to drop IPv6 in favor of something new.

Yes... it would cost at least $1 trillion to use something other than IPv4 and IPv6. Routers and servers can be changed to a different protocol using nothing but software. But switches and service provider routers which implement their protocols in hardware would require new chips. Since we don't replace chips, it would require replacing all Layer-3 switches and all carrier grade routers worldwide to change protocols.

Consider a small Tier-1 service provider such as Telia-Sonera that runs about 250 Cisco 9222 routers for their backbone with 400Gb/s-1Tb/s links between them. The average cost of a router on this scale is about $2.5 million. So, to change protocols on just their routers would cost $625 million in just core hardware. It would cost them approximately $2 billion just to handle their stuff.

Now consider someone like the US Transportation Security Administration, which has 1.2 million users in their Active Directory (employees, consultants, etc...). Now consider the number of locations where they are present and the network to run it. Altogether about 4 million network ports... all Layer-3. At an average cost of $200 per network port... that would be $800 million just to change the network ports on their network. Then consider that's just the access ports; distribution and core would need to be changed too. That would push the expense to at least $5 billion.
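
Spelling out that back-of-envelope (the unit costs and counts are the rough figures quoted above, nothing more):

```python
# The back-of-envelope above, spelled out. Unit costs and counts are the rough
# figures quoted in this comment, not independently verified numbers.

carrier_core = 250 * 2_500_000          # ~250 core routers at ~$2.5M each
print(f"carrier core routers:    ${carrier_core / 1e6:.0f}M")       # $625M

enterprise_access = 4_000_000 * 200     # ~4M layer-3 access ports at ~$200/port
print(f"enterprise access ports: ${enterprise_access / 1e6:.0f}M")  # $800M
```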

Those were just two examples. $1 trillion wouldn't even get the project started.

Now consider the amount of time it would take. Even if you had a "compatible system"... and honestly, I have no idea what that means. IPv6 is 100% compatible with IPv4... but I suppose you know something I don't. But let's say there was a "compatible system" by your standards. It would take 20+ years and trillions of dollars to deploy it.

Of course, if all we care about is addressing... and it really isn't, then IPv4 is good enough and we can just use CGNAT which is expensive but really perfectly good. Thanks to CGNAT and firewall traversal mechanisms like STUN, TURN, ICE and others, there's absolutely no reason we need to make the change. Consider that China as an entire country is 100% NATed and it works fine.

So... recommended reading. 6RD and NAT64/DNS64

Then instead of saying really really really silly things about IPv6 lacking compatibility with IPv4 or that IPv6 is B-team... you can be part of the solution. The "B-team" as you call it did in fact pay close attention to real users. They first built the IPv6 infrastructure and they also solved the transition mechanism problems to get real users online without any problems. It took a long time, but it's been solid and stable since IPv6 went officially live on June 6th 2012.

7
3

FCC Commissioner blasts new TV standard as a 'household tax'

CheesyTheClown
Bronze badge

Re: 3D

4K DOA? Haha I actually intentionally downscale 4K content. I don't want to look at people under a microscope. 4K is great for car chases, but it's horrible when you see how bad your favorite actress's skin looks when displayed as a close up on a 65" screen from 2 meters. 4K is absolutely horrible.

And I saw a few 3D movies and I actually stopped going to the movie theater because of them. I'd rather watch a film on Oculus Rift if I want it huge. In fact, an Oculus costs about the same as going to the movies and having snacks a few times a year.

4
1

NFS is now on tap in Azure – and NetApp is Microsoft's provider

CheesyTheClown
Bronze badge

Re: Migrating without adapting

At least it wasn't just me.

I was going to ask "and what's the possible use case" and the answer it seems is "because Microsoft managed to convince NetApp to help migrate from VMware/NetApp to Hyper-V and storage spaces" :)

It seems humorous that the stated use case is to basically kill off using NetApp and the likes :)

0
0

2019: The year that Microsoft quits Surface hardware

CheesyTheClown
Bronze badge

Re: Isn't it obvious

I read that article as well. I didn't agree with it then either. It was written without any regard for causality. People were more likely to return Microsoft devices because... wait for it... it's actually possible to return them. Microsoft actually has a really great return program and while I didn't make use of it, I did manage to walk into a Microsoft store and walk out with a replacement PC in 5 minutes without any hassles. Try doing that at Best Buy in America or a Currys or Dixons. In fact, compared to Apple in-store service, it was amazing. My average waiting time for service at Apple Stores is 45 minutes. Microsoft was always better. And even better, instead of waiting 30 minutes to get an appointment with an appointment scheduler who will schedule you time with a Genius in 2 hours, the Microsoft store helps immediately.

As for broken devices, I bought three Surface Pros, a Surface Pro 2, a Surface RT, two Surface Pro 3s and a Surface Book. All of them are still in heavy use. With the exception of Microsoft's fairly poor magnetic power connectors, they have been absolutely amazing. (Apple's magnetic connectors were much worse.)

Like my Macs which are still good even though I run 2011 models, the Surface Pros last and last. And I run older models because they last and last.

I am perfectly happy to pay Apple Care and Microsoft extended warranties because I love having the long term support. I always buy top of the line models as well... because if you will use it daily for 4-8 years, $400-800 a year is completely reasonable.

As for HP, Lenovo and Dell. I never bought a PC from them that had any love from the maker a few months later. Consider that ASUS releases an average of 1-2 BIOS updates per laptop. HP releases updates... sometimes. Dell has improved, but their updates don't need to come out more than 6 months later... that's because unless you bought "next day on-site service" the machine won't be running by then anyway.

I'll leave Acer out of the discussion because... we'll they're Acer. It's mean to beat up the slow kid.

Microsoft should stay in the game because if nothing else, even though Microsoft forced the vendors to raise the bar, they're still selling "lowest bidder shit". Yes, the market needs $129 laptops for the poor people... but anyone who can qualify for a credit card should be able to qualify for buying a $2500 laptop if they can't just pay cash. It's a long term purchase and investment.

As for corporations, I have no idea what kind of idiot would buy anything other than MS these days.

22
20

Bill Gates says he'd do CTRL-ALT-DEL with one key if given the chance to go back through time

CheesyTheClown
Bronze badge

Antivaxxers?

Bill Gates is a brilliant man, but sometimes he pisses away time in the wrong way.

Consider ratios.

What's easier, his way or the antivaxxer way? Let's evaluate both.

Bill says that an African child is 100 times more likely to die from preventable diseases than an American.

Logistically, vaccinating and healing Africans is very difficult and nothing but an uphill battle.

The antivaxxers have already been increasing deaths in America related to mumps, measles and rubella. This is much easier, as all it takes is a former porn actress with the education correlating to said career choice to campaign on morning TV about how MMR vaccines can be dangerous and cause autism.

So instead of fighting like hell to vaccinate Africans... isn't it easier and cheaper just to let porn actresses talk on morning TV?

The results should in theory be the same... the ratio Bill mentioned will clearly shrink either way.

Of course, if his goal is to actually save lives as opposed to flipping a statistic, we might do better his way.

1
0

China reveals home-grown supercomputer chips after Intel x86 ban

CheesyTheClown
Bronze badge

Re: Interesting side effects of this development..

Let me toss in some ideas/facts :)

Windows NT was never x86/x64-only. It wasn't even originally developed on x86. Windows has been available for multiple architectures for the past 25 years. In fact, it supported multiple architectures long before any other single operating system did. In the old days, when BSD or System V were ported to a new architecture, they were renamed as something else and generally there was a lot of drift between code bases due to hardware differences. The result being that UNIX programs were riddled silly with #ifdef statements.

The reason why other architectures with Windows never really took off was that we couldn't afford them. DEC Alpha AXP, the closest to succeeding, cost thousands of dollars more than a PC... of course it was 10 times faster in some cases, but we simply couldn't afford it. Once Intel eventually conquered the challenge of working with RAM and system buses operating at frequencies different from the internal CPU frequency, they were able to ship DEC Alpha-speed processors at x86 prices.

There was another big problem. There was no real Internet at the time. There was no remote desktop for Windows either. The result being that developers didn't have access to DEC Alpha machines to write code on. As such, we wrote code on x86 and said "I wish I had an Alpha. If I had an Alpha, I'd make my program run on it.". So instead of making a much cheaper DEC Alpha which could be used to seed small companies and independent developers with, DEC, in collaboration with Intel decided to make an x86 emulator for Windows on AXP.

The emulator they made was too little too late. The performance was surprisingly good, though they employed technology similar in design to Apple's Rosetta. Dynamic recompilation is not terribly difficult if you consider it. Every program in modern times has fairly clear boundaries. They call functions either in the kernel via system calls which are easy to translate... or they call functions in other libraries which are loaded and linked via 2-5 functions (depending on how they are loaded). When the libraries are from Microsoft, they know clearly what the APIs are... and if there are compatibility problems between the system level ABIs, they can be easily corrected. Some libraries can be easily instrumented with an API definition interface, though C programmers will generally reject the extra work involved... instead just porting their code. And then there's the opportunity that if an API is unknown, the system can simply recompile the library as well... and keep doing this until such time as the boundaries between the two architectures are known.

Here's the problem. In 1996, everyone coded C and even if you were programming in C++, you were basically writing C in C++. It wasn't until around 1999 when Qt became popular that C++ started being used properly. This was a problem because we were also making use of things like inline assembler. We were bypassing normal system call interfaces to hack hardware access. There were tons of problems.

Oh... let's not forget that before Windows XP, about 95% of the Windows world ran either Windows 3.1, 95, 98 or ME. As such, about 95% of all code was written on something other than Windows NT and used system interfaces which weren't compatible with Windows NT. This meant that the programmers would have to at least install Windows NT or 2000 to port their code. This would be great, but before Windows 2000, there weren't device drivers for... well anything. Most of the time, you had to buy special hardware just to run Windows NT. Then consider that Microsoft Visual Studio didn't work nearly as well in Windows 2000 as it did in Windows ME because most developers were targeting Windows ME and therefore Microsoft focused debugger development on ME instead.

So... running code emulated on Alpha did work AWESOME!!!! If the code worked on Windows NT or Windows 2000 on x86 first. Sadly, there was no real infrastructure around Windows NT for a few more years.

That brings us to the point of this rant. Microsoft has... quite publicly stated their intent to make an x86/x64 emulator for ARM. They have demoed it on stage as well. The technology is well known. The technology is well understood. I expect x86/x64 code to regularly run faster on the emulator than as native code, because most code is compiled for a generic architecture, whereas dynamic recompilers can optimize for the specific chip they are executing on and constantly improve the way the code is compiled as it's running. This is how things like JavaScript can be faster than hand-coded assembly. It adapts to the running system appropriately. In fact, Microsoft should require native code on x64 to run the same way... it would be amazing.

So, the emulator should handle about 90% software compatibility. Not more. For example, I've written code regularly which makes use of special "half-documented" APIs from Microsoft listed as "use at your own risk" since I needed to run code in the kernel space instead of user space as I needed better control over the system scheduler to achieve more real-time results. That code will never run in an emulator. Though nearly everything else will.

Then there's the major programming paradigm shift which has occurred. The number of people coding in system languages like C, C++ and Assembler has dropped considerably. On Linux, people code in languages like Python where possible. It's slow as shit, but works well enough. With advents like Python compiler technology, it's actually not even too pathetically slow anymore. On Windows, people program in .NET. You'd be pretty stupid not to in most cases. We don't really care about the portability. What's important is that the .NET libraries are frigging beautiful compared to legacy coding techniques. We don't need things like Qt and we don't have to diddle with horrible things like the standard C++ library which was designed by blind monkeys more excited about using every feature of the language than actually writing software.

The benefit of this is that .NET code runs unchanged on other architectures such as ARM or MIPS. Code optimized on x86 will remain optimized on ARM. It also gets the benefits of Javascript like dynamic compiler technology since they are basically the same thing.

Linux really never had much in the way of hardware-independent applications. Linux still has a stupid, silly amount of code being written in C when it's simply the wrong tool for the job. Linux has the biggest toolbox on the planet and the Linux world still treats C as if it's a hammer and every single problem looks like a nail. Application development should never ever ever be done in system-level languages anymore. It's slower... really it is... C and C++ make slower code for applications than JavaScript or C#. Having to compile source code on each platform for an application is horrifying. Even considering the structure of the ABI at all is terrifying.

Linux applications have slowly gotten better since people started using Python and C# to write them. Now developers are more focused on function and quality as opposed to untangling #ifdefs and make files.

Now... let's talk super computing. This is not what you think it is I'd imagine. The CPU has never really meant much on super computers. The first thing to understand is that programmers will write code in a high level language which has absolutely no redeeming traits from a computer science perspective. For example, they can use Matlab, Mathematica, Octave, Scilab, ... many other languages. The code they write will generally be formulas containing complex math designed to work on gigantic flat datasets lacking structure at all. They of course could use simulation systems as well which generate this kind of code in the background... it's irrelevant. The code is then distributed to tens of thousands of cores by running a task scheduler. Often, the distributed code will be compiled locally for the local system which could be any processor from any architecture. Then using message passing, different tasks are executed and then collected back to a system which will sort through the results.

It never really mattered what operating system or platform a super computer runs on. In fact, I think you'd find that nearly 90% of all tasks which will run on this beast of a machine would run faster on a quad-SLI PC under a desk that had code written with far less complexity. I've worked on genetic sequencing code for a prestigious university in England which was written using a genetic sequencing system.... very fancy math... very cool algorithm. It was sucking up 1.5 megawatts of power 24/7 crunching out genomes on a big fat super computer. The lab was looking for a bigger budget so they could expand to 3 megawatts for their research.

I spent about 3 days just untangling their code... removing stupid things which made no sense at all... reducing things to be done locally instead of distributed when it would take less time to calculate it than delegate it... etc...

The result was 9 million times better performance. What used to require a 1.5 megawatt computer could now run on a laptop with an nVidia GPU... and do it considerably faster. Sadly... my optimizations were not super computer friendly, so they ended up selling the computer for pennies on the dollar to another research project.

People get super excited about super computers. They are almost always misused. They almost always are utterly wasted resources. It's a case of "Well I have a super computer. It doesn't work unless I message pass... so let me write the absolutely worst code EVER!!!! and then let's completely say who gives a fuck about data structure and let's just make that baby work!!!!"

There are rare exceptions to this... but I'd bet that most supercomputer applications could have been done far better if labs bought programmers hours instead of super computer hours.

0
0

Compsci degrees aren't returning on investment for coders – research

CheesyTheClown
Bronze badge

Re: Peak Code Monkey

It is true that compsci is generally a cannonball which is often applied where a fly swatter is better suited. If you're making web pages for a site with 200 unique visitors a day, compsci has little to offer. If you're coding the home page of Amazon or eBay, compsci is critical. One inefficient algorithm can cost millions in hardware and power costs.

Product development... for example, when a developer at Google working on Chrome chooses a linked list where a balanced tree would be better, the impact is measured in stock markets, because faster processors and possibly more memory would be needed on hundreds of millions of PCs. Exabytes of storage would be consumed. Landfills get filled with replaced parts. Power grids get loaded. Millions of barrels of crude are burned, shipping prices increase, etc...
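
A contrived little illustration of the point: a linear scan versus a logarithmic lookup (a sorted list with binary search standing in for a balanced tree). The data is made up; only the growth rate matters.

```python
import bisect
import timeit

# Contrived illustration of the linked-list-vs-tree point: membership test by
# linear scan vs. binary search on a sorted sequence (standing in for a
# balanced tree). The dataset is made up; only the growth rate matters.

n = 200_000
data = list(range(n))        # already sorted, so bisect gives O(log n) lookups
needle = n - 1               # worst case for the linear scan

linear = timeit.timeit(lambda: needle in data, number=100)
logarithmic = timeit.timeit(
    lambda: data[bisect.bisect_left(data, needle)] == needle, number=100)

print(f"linear scan  : {linear:.4f}s for 100 lookups")
print(f"binary search: {logarithmic:.4f}s for 100 lookups")
```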

What is written above may sound like an exaggeration, but a telephone which loses an hour of battery life because of bad code may consume another watt per phone per day. Consider that scaled to a billion devices running that software each day. A badly placed if statement which configured a video encoder to perform rectangular vs. diamond-pattern motion search could affect 50-100 million users each day.

Consider the cost of a CPU bug.... if Intel or ARM are forced to issue a firmware patch for a multiplication bug, rerouting the function from an optimized pyramid multiplier to a stacked 9-bit multiplier core located in on-chip FPGA will increase power consumption by 1-5 watts on a billion or more devices.

Some of these problems are measured in gigawatts or terawatts of load on power grids driving up commodity prices in markets spanning from power to food.

So... you're right. Compsci isn't so important in most programmer jobs. But in others, the repercussions can be globally disastrous.

7
0

More data lost or stolen in first half of 2017 than the whole of last year

CheesyTheClown
Bronze badge

You mean more detected loss?

Call me an asshole for playing the causality card here.

Did we lose more data or did we manage to detect more data loss?

3
0

Everyone loves programming in Python! You disagree? But it's the fastest growing, says Stack Overflow

CheesyTheClown
Bronze badge

While I have only ever used Python on the rare occasions where it's all I had available (labs on VPN connected systems) and I honestly have little love for language-of-the-month, I don't necessarily agree.

I have seen some great Python code written by great programmers... once in a very rare while. In some cases, this is true of the Open Stack project.

On the other hand, Python gains most of its power from community-contributed modules. As such it, like Perl, PHP and Ruby before it, has libraries for nearly everything. Unfortunately, most are implemented by "newbie programmers" building bits and bobs they need.

This results in about a million absolutely horrifying modules. We see the same happening to Node as well. Consider that Node has probably 40 different libraries in NPM to simply make a synchronous REST call. This makes the platform unusable in production code. When a language has a repository of so many poorly written modules that it is no longer possible to sort through them to find one that works, it becomes almost unusable.

See C++... I use Qt because it provides a high quality set of classes for everything from graphics to collections. The standard C++ library and heaven forbid boost are such a wreck that they have rendered C++ all but unusable.

See Java, where even good intentions go horribly wrong. Java on the desktop was absolutely unusable for apps because there were simply too many reboots of the GUI toolkit. AWT was so bad that IBM OTI made SWT, Sun made a bigger mess trying to reboot their dominance with Swing, and Google made their own... well, let's just say that it didn't work.

I can go on... there should always be a beginner language for people to learn on and then eventually trash. Python is great for now. Maybe there will be something better later. Learning languages should be playgrounds where inexperienced developers can sow their oats before moving on. Cisco for example pushes Python and Ansible to network engineers who learn to code in 8 hours. Imagine if every network engineer or VMware engineer were to start descending on other languages? That would be a million people who have never read a programming book trashing those other languages' ecosystems.

20
0

Huawei developing NVMe over IP SSD

CheesyTheClown
Bronze badge

Nope... block access is file/database access

No storage subsystem (unless it's designed by someone truly stupid) stores blocks as blocks anymore. It stores records to blocks which may or may not be compressed. The compressed referenced blocks are stored in files. Those files may be preallocated into somewhat disk sector aligned pools of blocks, but it would be fantastically stupid to store blocks as blocks.

As such, NVMe is being used as a line protocol and instead of passing it through to a drive, it's being processed (probably in software) at fantastically low speeds which even SCSI protocols could easily saturate.

There will be no advantage in extended addressing since FCoE and iSCSI already supported near infinite addresses to begin with. There will be no advantage in features as NVMe would have to issue commands almost identically to SCSI. There will be no advantage in software support because drivers took care of that anyway... or at least any system with NVMe support can do pluggable drivers. Those which can't will have to translate SCSI to NVMe.

They should have simply created a new block protocol designed to scale properly across fabrics without any stupid buffering issues that would require super stupid solutions like MMIO and implemented the drivers.

Someone will be dumb enough to pay for it

0
0
CheesyTheClown
Bronze badge

What the!?!?!?

What is the advantage of perpetuating protocols optimized for system board to storage access as fabric or network access?

Bare metal systems may, under special circumstances, benefit from traditional block storage simulated by a controller. It allows remote access and centralized storage for booting systems. This can be pathetically slow, and as long as there is a UEFI module or Int13h BIOS extension there is absolutely no reason why either SCSI or NVMe should be used. Higher latencies introduced by cable lengths and centralized controllers make usage dependent on unusual extensions to SCSI or NVMe which are less-than-perfect fits for what they are being used for. A simple encrypted simulated drive emulation in hardware that supports device enumeration, capability enumeration, read block(s) and write block(s) is all that is needed for a network protocol for remote block device access. With little extra effort, the rest can be done with a well-written device driver and BIOS/UEFI support that can be natively supported (as is more common today) or via a flash chip added to a network controller. Another option is to put the loader onto an SD card as part of GRUB, for instance.

The only reason block storage is needed for a modern bare metal server is to boot the system. We no longer compensate for lack of RAM with swapping, as the performance penalty is too high and the cost of RAM is so low. In fact, swapping to disk over fabric is so slow that it can be devastating.

As for virtual machines: they make use of drivers which translate SCSI, NVMe or ATA protocols (in poorly designed environments) or implement paravirtualization (in better environments) to turn block operations into read and write requests within a virtualization storage system, which can be VMFS-based, VHDX-based, etc... This is then translated back into block calls relative to the centralized storage system, where they are translated back to block numbers, cross-referenced against a database and then translated back again to local native block calls (possibly with an additional file system or deduplication hash database in between). Blocks are then read from native devices in different places (hot, cold, etc...) and the translation game begins again on the way back.

NVMe and SCSI are great systems for accessing local storage. But using them in a centralized manner is slow, inefficient and, in the case of NVMe... insanely wasteful.

Instead, implement device drivers for VMware, Window Server, Linux, etc... which provide the same functionality but while eliminating the insane overhead and inefficiency of SCSI or NVMe over the cable and focus instead on things like security, decentralized hashing, etc...

Please please please stop perpetuating the "storage stupid" which is what this is and focus on making high performance file servers which are far better suited to the task.

0
0

Tintri havin' it large with all-flash EC6000 boxen

CheesyTheClown
Bronze badge

320000 IOPs?

Hmmm... so, 90,000 IOPS is what you'd expect from a consumer-grade SSD today. In a subsystem with compression and dedup running on a common file server with a semi-intelligent file system, on two-way mirrored data, the raw read performance of the disks should be about 180,000. Considering dedup and compression, it should be 5-10 times that. Three-way and four-way mirroring of hot data should increase performance greatly. With 10-way mirroring, 900,000 should be realistic. Add RAM caching for reads and even more should be possible.
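
For what it's worth, the arithmetic behind that expectation looks like this; every factor is the assumption stated above (mirrored copies serving reads independently, dedup/compression multiplying effective reads), not a benchmark.

```python
# The IOPS expectation above as explicit arithmetic. All factors are the
# assumptions asserted in this comment (not benchmark data): mirrored copies
# can serve reads independently, dedup/compression multiplies effective reads.

per_ssd_iops = 90_000

def expected_read_iops(mirror_ways, dedup_factor=1.0):
    return int(per_ssd_iops * mirror_ways * dedup_factor)

print(expected_read_iops(2))        # 180,000 raw from a two-way mirror
print(expected_read_iops(2, 5))     # ~900,000 with a 5x dedup/compression win
print(expected_read_iops(10))       # 900,000 from ten-way mirrored hot data
```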

In 4U, there should never be a circumstance on an all-flash platform where IOPS should ever be anywhere near as low as 320,000. Did you miss a zero in the article, or did they make a product designed for SCSI/NVMe over fabric protocols?

2
5

Networking vendors are good for free lunches, hopeless for networks

CheesyTheClown
Bronze badge

Re: That works for a simple network

Let me come to the table on this as a former developer of infrastructure networking equipment (everything from chip architecture to routing protocols), as someone who fed my family for 5 years quite successfully as a Cisco network engineer, and as someone now working as hard as I can to automate away as many low-level network consultants as possible.

Interior gateway protocols are long overdue for a refresh. The fact that we still run internal networks as opposed to internal fabrics is absolute proof that companies like Cisco, HP, Juniper, etc... are far out of touch with modern technology. The simple fact that we need IGPs is fundamentally wrong.

We depend on archaic standards like OSPF, IS-IS, EIGRP and RIP for networking and all four of these architectures are absolutely horrible and the only redeeming feature they have is that they're compatible with other vendors and old stuff. OSPFv3 with address family support is possibly the worst thing that ever happened to networking.

As for BGP: don't get carried away. BGP as a protocol will remain necessary, but it's for the purpose of communicating between WANs. BGP is less a routing protocol than a dog pissing on a tree to inform the world who owns which IP addresses. BGP doesn't really route so much as force traffic in a general direction. There are multiple enterprise-grade open source BGP implementations out there and there's no reason to make your internal network suck because you are concerned about BGP support.

Peering to the Internet requires edge devices which may or may not speak BGP.

When you design a modern network infrastructure, you can completely disregard inter-vendor operability and design a fabric instead. There are a few things you probably want to do. Instead of inventing new fiber standards, it would be profitable to attempt to depend on commercial SFPs. As for vendor codings, I spent a long time making different vendors' SFPs work with my hardware... those codings actually mean something.

So... consider this. Imagine building a network based mostly on a new design where the entire enterprise is a single fabric. By this, I mean that you have a single router for the entire enterprise. That router is made up of 10-100000 boxes which all speak proprietary protocols and are engineered for simplicity and actually route traffic intelligently... without any switching.

You may think this is unrealistic or stupid, but it's really quite possible to do with far fewer transistors than you would use to support modern standards based layer-2 and layer-3 switching. Eliminate the routing table from your network altogether and instead implement something similar to Cisco's fast-caching forwarding mechanism with centralized databases for IP management.
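
A very rough sketch of the fast-cache idea: the first packet to a destination consults the central database, and the answer is cached per line card so later packets hit a tiny local table. All names are invented for illustration.

```python
# Rough sketch of conversational / fast-cache forwarding: the first packet to a
# destination consults a central controller, later packets hit the local cache.
# All names here are invented for illustration.

CENTRAL_DB = {            # stands in for the fabric-wide IP management database
    "10.1.2.3": "port7",
    "10.9.9.9": "port2",
}

class LineCard:
    def __init__(self):
        self.fib_cache = {}        # per-interface cache, kept small by conversation

    def forward(self, dst_ip):
        if dst_ip not in self.fib_cache:            # cache miss: ask the controller
            self.fib_cache[dst_ip] = CENTRAL_DB.get(dst_ip, "drop")
        return self.fib_cache[dst_ip]               # cache hit: local O(1) lookup

card = LineCard()
print(card.forward("10.1.2.3"))   # miss, then cached
print(card.forward("10.1.2.3"))   # hit
```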

Then to connect to the outside world, you simply buy a Cisco router or three and connect them at the edges.

I can say with confidence after considerable thought (years) on this topic that there's absolutely no reason this couldn't be much simpler and cleaner than modern networking and while three-tier network design would still make sense... or at least spine-leaf... any partial mesh with no single point of failure would work without any silliness like "you need to aggregate your routing tables to keep your routing table small."... we are long past the point where routing table lookups are O(32 > n > 0) where n = bits complexity. Then route learning via conversational characteristics would keep the per interface FIB small.

So... let's be honest... a developer can see the problem of networking clearly ... especially if they know networking.

A network engineer starts by spouting about how things like BGP are really hard... fine... use it as a boundary and stop filling my network with that crap... buy someone else's box for that or run a Linux box to do it.

11
7

Oracle staff report big layoffs across Solaris, SPARC teams

CheesyTheClown
Bronze badge

This is where Cisco should be NOW

If Cisco weren't completely stupid, they would swoop in and make a public announcement that they will hire the entire group of laid-off staff at what their pay was when they were laid off.

At that point, if Cisco isn't entirely stupid, they will :

- start an enterprise ARM development team using the silicon developers.

- use the Infiniband team to implement a proper storage stack for supporting SMB Direct and iWARP, etc...

- use the operating system team to build a solid container platform for Docker. Cisco keeps trying to do something like this but without a good OS team to start with.

- use the ZFS team for ... well Cisco needs a legacy storage solution, so a ZFS/FCoE solution for VMware would be great. Do it on ARM as well and make a single chip storage solution by embedding it within the Cisco VIC FPGA and Cisco will make a fortune

This might be the best opportunity that either Cisco has had in a long time. It's almost as if Larry decided to just throw up a big bunch of gold into the air and let it land in about 2500 places around the world for Cisco to pick up.

It's a good thing if HP doesn't go picking it up, they wouldn't know what to do with an engineer if someone smacked Meg Wittman on the head with one.

Another alternative which would be amazing for this team... Nutanix should suck them up. Not in bits... but the whole damn group. It's easier to hire them all and weed out what you don't need later. 2500 employees is expensive... but Cisco and Nutanix have needed a team of developers like this for a while.

Also, if the teams are heavy on H1-Bs, I would just let them go. There is certainly no shortage of experts in what will be left when they're gone. If Oracle is anything like Cisco or most other companies, then that's about 30-50% of their development teams.

1
0

Don’t buy that Surface, plead Surface cloners

CheesyTheClown
Bronze badge

Re: Pretty sure this doesn't count as a surface alternative

I suppose this depends on your use case.

I was working on a project of 3 million+ lines of C++ code at the time. With Xcode or Linux, the average compile time was 8 minutes. With Visual C++ it was about 17 seconds. This isn't because Windows is so much faster than Mac or Linux. It is because Visual C++ has the best precompiled header support of any compiler. Add that to an incremental linker and librarian and there is no comparable product anywhere.

If I had used Mac OS X and Xcode, it would have cost me close to a thousand hours of waiting a year and my work days would have been 16 hours instead of 12.

Would you suggest that using Mac OS was an upgrade in that circumstance?

12
0
CheesyTheClown
Bronze badge

Pretty sure this doesn't count as a surface alternative

I have bought :

- A Samsung Series 7 Slate to be able to develop Windows 8 apps during the beta.

- Two Surface Pros

- A Surface RT

- A Surface Pro 2

- Two Surface Pro 3s (one Core i5/256gb because I had to wait a month longer to get the i7.. then I bought the i7)

- A Surface Book

So... why did I buy these machines? They are the "Official Microsoft Development Computers" for Windows. This means, updated drivers, updated firmware, flawless debugger support (you'd be surprised how important that is), long term support... etc...

Before this, I would buy Macs, delete Mac OS X and install Windows instead.

I buy machines because the vendor invests in them for long-term support. Lenovo, Dell, HP... they have dozens of models of machines at a time. You know that as soon as the machines ship, their A-team moves to the next machine and the machine you buy is supported by the B or C team.

Never buy a phone or a computer from a company that offers too many options. This is because there is no possible way they can properly support a machine they really don't care about anymore because they're really only interested in building and selling the next model.

16
1

We experienced Windows Mixed Reality. Results: Well, mixed

CheesyTheClown
Bronze badge

Looking stupid?

I honestly can say that I spend most of my time looking stupid... it hasn't bothered me much so far. In fact, it's entirely possible these headsets will actually make me look better.

Oh, even if they make me look stupid, I really can't imagine that it would bother me much. I'm an engineer and color coordination has always been a problem for me. So, I generally assume I always look stupid.

The real question is whether I can mount these in a wookie mask.

14
0

KVM plans big boosts to storage and nested virtualization

CheesyTheClown
Bronze badge

Re: for real?

On the desktop, I'm with you. VMware works really well on desktop versions and is no hassles and pain free.

For SMB, to be honest, I've set up VMware for years, I dabbled with Hyper-V here and there and until Windows Server 2016, I wasn't really happy. But with Hyper-V 2016, it is actually much easier now. Not only that, but on a single host or 2 or 3 hosts, it's way easier than VMware today. Install Hyper-V (free of charge... no cost... period), set up basic Windows networking, set up a Windows share for storing virtual machines, set up a Windows share for storing ISOs. You're basically done.

Of course, if you want the good stuff (vCenter style), you can save a lot of money avoiding SCVMM and either buying ProHVM for less than an hour of your salary per host, or 5nine, which is REALLY REALLY AWESOME but which I stopped recommending or buying once they removed the prices and purchase links from their website in favor of forcing me to talk to a sales person.

I will say that VMware is compatible... it's not easy. Truly... I use KVM in many environments and I use Hyper-V as well. I still work very often with VMware and it's always funny how many problems it has which the others don't. But of course, it does run pretty much everything and I REALLY LIKE THAT. So, if I'm playing with old operating systems on my laptop, VMware is the only way to go. If I have to get some work done, then Hyper-V or Ubuntu are the only options.

I recommend you check either of them out again. I think you'll find that if you invest one full day of your life in learning either one, you'll never be able to look at VMware again without laughing at how much of a relic it is.

1
0
CheesyTheClown
Bronze badge

Re: RLY?

Paravirtualization allows versions of Windows targeting paravirtualization to operate more like a container than a VM. Paravirtualization gives Windows some truly amazing features which allow it to have insanely higher VM density than if it were running on VMware.

For example, probably the most difficult process for a virtualized environment is memory management. The 386 introduced the design pattern we use today, which consists of a global descriptor table (which creates something like a file system for physical memory)... it allows each program running to think it has a single contiguous area of memory to operate in... though in reality the memory can be spread out all over the system. Then there's a local descriptor table which manages memory allocation per process. This is a sick (and semi-inaccurate) oversimplification of how memory works on a modern PC. But it gives you the idea.

When you virtualize, operating systems which don't understand they are being virtualized need to have simulated access to the GDT (which is a fixed area of RAM) and be able to program the system's memory management unit (MMU) through direct calls to update memory.

There's also the principle of port-mapped I/O on Intel CPUs vs. memory-mapped I/O. Memory mapping could always be faked by faking the memory locations provided to drivers. But port I/O couldn't be handled without intercepting all I/O instructions... sometimes by rewriting the executable code on the fly.

To make this happen, VMware used to recompile code on the fly to intercept I/O calls and MMU programming. Hyper-V Generation 1 does the same. With the advent of the Second Level Address Table (SLAT, which should actually be called the SLUT... look-up table, but isn't), memory rewrites and dynamic recompilation of MMU code were no longer necessary. The CPU simply introduced a new descriptor table which works at a higher level than the GDT... or nests it.

The I/O issue needed to be addressed. This started by making drivers for each operating system which would bypass the need for making I/O calls directly and it works pretty well except on some of the more difficult operating systems like Windows NT 3.1 or OS/2. I recently failed to launch Apple Yellow Box in Hyper-V because of this. VMware has always been amazing for making legacy stuff work because they are really focused on 100% compatibility even if it makes everything slower.

Microsoft with Windows and KVM with Linux took the alternative approach, which was 1000% better, which was to simply say "We'll run whatever legacy we can run... but we focus on today and tomorrow. Let VMware diddle with yesterday". So Linux was modified to run as a user-mode application and then later, with Docker, was modified to run without Linux itself. Windows did kind of the same thing...

But Hyper-V did something really cool. Paravirtualization works on Windows by running the operating system... kind of as usual. But then it replaces the memory manager with one that doesn't absolutely require a SLAT. Instead, if it needs more memory, it asks the host operating system for more memory. So instead of wasting tons of memory guessing whether 4GB is enough or not... paravirtualization often gives Windows about 200MB and if it needs more, then it gets more. So paravirtualization makes Windows typically 20 or more times more efficient when run this way. The trade-off is that the cross-boundary call (from guest to host) is more expensive and can have a negative impact on CPU performance. Also, there are more memory operations in general, so the system GDT is likely to be more active and possibly fragmented. I'm pretty sure MS will optimize this further in the future.
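
A loose sketch of that memory behaviour as I read it (this models the idea only; it is not Microsoft's actual hypercall interface): the guest starts small and asks the host for more on demand instead of reserving 4GB up front.

```python
# Loose sketch of paravirtualized / dynamic memory: the guest starts small and
# asks the host for more on demand, instead of reserving a fixed 4GB up front.
# This models the idea only; it is not Microsoft's actual hypercall interface.

class Host:
    def __init__(self, free_mb):
        self.free_mb = free_mb

    def grant(self, mb):
        granted = min(mb, self.free_mb)   # host hands out only what is actually free
        self.free_mb -= granted
        return granted

class ParavirtGuest:
    def __init__(self, host, startup_mb=200):
        self.host = host
        self.assigned_mb = host.grant(startup_mb)

    def allocate(self, mb_needed):
        if mb_needed > self.assigned_mb:              # ask the host, don't guess up front
            self.assigned_mb += self.host.grant(mb_needed - self.assigned_mb)
        return mb_needed <= self.assigned_mb

host = Host(free_mb=8192)
guest = ParavirtGuest(host)
print(guest.assigned_mb)        # 200 MB at boot
print(guest.allocate(1500))     # True: grows on demand
print(guest.assigned_mb, host.free_mb)
```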

Then there were drivers. VMware has been focused on emulating bare metal, and paravirtualization goes the entire opposite route. It ends up that instead of trying to hardware-accelerate every VM operation, which can require billions more transistors and hundreds more watts per server, Microsoft focused on letting guest operating systems gain the benefits of the host OS drivers by removing the need for hard partitioning of device operations. So, where VMware would simulate or expose a PCIe device to the guest VM, Hyper-V gives the guest synthetic drivers which simply talk directly to (and play nicely with) the host's devices.

For storage, this offers immense improvements in hundreds of different ways. With a Chelsio NIC or a Cisco one (as a runner-up), storage via SMB Direct or pNFS can reach speeds and performance-per-watt so far above what VMware offers that the environmental protection agencies of the world should sue VMware for gross negligence. We're talking intentional earth killing.

For network, the performance difference is almost equally huge, but once you virtualize storage intelligently (RDMA is the only way), then networking becomes easier.

But back to paravirtualization. Here's an example. If you want to share the GPU between VMs in a legacy/archaic system, you would need a video adapter designed to split itself into a few hundred PCIe devices (with SR-IOV, chances are a maximum of 255 virtual functions... meaning no more than 255 such devices per host... so no more than 255 VMs per host using the GPU). Then you'd need specialized drivers designed to maintain communication with these PCIe devices and to allow the VMs to migrate from one host to another by doing fancy vMotion magic (cool stuff, really). This has severe cost repercussions... nVidia for example charges over $10,000 per host and requires a VERY EXPENSIVE (up to $10,000 or more) graphics card.

Paravirtualization would simply make it so that if the guest wants to create an OpenGL context, it asks the host for the context and the host hands it back to the VM. The Hyper-V driver then forwards the API calls from the guest app directly to the host driver. This means you're still limited to however many graphics contexts the GPU or driver supports, but that's far more than the alternative. VMware does this for 2D, but since VMware doesn't have its own host-level 3D graphics subsystem, it leaves nVidia free to gouge their customers. Whereas in Hyper-V it's free and works on $100 graphics cards.
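
The forwarding idea is easy to sketch: the guest packs an API call into a message, the host unpacks it and calls its own driver. This is a toy; the real path in Hyper-V runs over VMBus to the host graphics stack, and every name below is illustrative.

    #include <stdint.h>
    #include <stdio.h>

    enum gpu_op { GPU_CREATE_CONTEXT, GPU_DRAW };

    struct gpu_msg { enum gpu_op op; uint32_t arg0; };

    /* ---- host side: unpack the message and call the real driver (stubbed) ---- */
    static uint32_t host_handle(const struct gpu_msg *m)
    {
        switch (m->op) {
        case GPU_CREATE_CONTEXT:
            printf("host: creating a context on the real GPU driver\n");
            return 42;                                   /* pretend context handle */
        case GPU_DRAW:
            printf("host: drawing with context %u\n", (unsigned)m->arg0);
            return 0;
        }
        return (uint32_t)-1;
    }

    /* ---- guest side shim: what the guest's graphics library would call ---- */
    static uint32_t guest_create_context(void)
    {
        struct gpu_msg m = { GPU_CREATE_CONTEXT, 0 };
        return host_handle(&m);                          /* stands in for a VMBus round trip */
    }

    int main(void)
    {
        struct gpu_msg draw = { GPU_DRAW, guest_create_context() };
        host_handle(&draw);
        return 0;
    }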

Storage is HUGE... I can go for pages about the benefits for paravirtualization on storage.

So here's the thing. I assume that KVM will get full support for all the base features of paravirtualization. The design is simpler and better than making virtual PCI devices for everything. It's also just plain cleaner (look at Docker). In addition, I hope that they will manage to integrate the Hyper-V GPU bridge APIs by linking Wine to the paravirtualized driver there.

In truth... if you look at paravirtualization... it's essentially the same thing VirtIO does for Linux.
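
At the core of both is a shared ring the guest and host use to pass buffers. Here's a heavily simplified sketch of that shape; real virtqueues (and VMBus ring buffers) have separate available/used rings, memory barriers and interrupt "kicks", none of which appear in this toy.

    #include <stdint.h>
    #include <stdio.h>

    #define RING_SIZE 8

    struct desc { uint64_t addr; uint32_t len; };

    struct ring {
        struct desc slots[RING_SIZE];
        uint32_t    head;   /* producer (guest) index, free-running */
        uint32_t    tail;   /* consumer (host) index, free-running  */
    };

    static int guest_post(struct ring *r, uint64_t addr, uint32_t len)
    {
        if (r->head - r->tail == RING_SIZE)
            return -1;                                   /* ring full */
        r->slots[r->head % RING_SIZE] = (struct desc){ addr, len };
        r->head++;                                       /* real code: barrier + kick the host */
        return 0;
    }

    static void host_drain(struct ring *r)
    {
        while (r->tail != r->head) {
            struct desc d = r->slots[r->tail % RING_SIZE];
            printf("host: buffer at 0x%llx, %u bytes\n",
                   (unsigned long long)d.addr, (unsigned)d.len);
            r->tail++;
        }
    }

    int main(void)
    {
        struct ring r = {0};
        guest_post(&r, 0x1000, 512);
        guest_post(&r, 0x2000, 4096);
        host_drain(&r);
        return 0;
    }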

VMware has a little paravirtualization, but to do it completely they would probably need to stop making their own kernel and go back to Linux or Windows as a base OS. They simply lack the developer community to do full paravirtualization.

And BTW.. Paravirtualization is the exact opposite of pervy and wrong. What you do to avoid paravirtualization is precisely the pervy and wrong thing.... but if that's what pervy and kinky is... I'm in. I love that kind of stuff. Paravirtualization is WAY BETTER but legacy virtualization is REALLY FUN if you happen to be an OS level developer.

7
0

Microsoft, Red Hat in cross-platform container and .Net cuddle

CheesyTheClown
Bronze badge

Re: No thanks

We started porting our system from .NET Framework 4.6 to .NET Core 2 yesterday. We figured there was no harm in making sure our apps work on Windows Server, Linux and, specifically, on Raspberry Pi.

I think you see it as a battle of Microsoft vs. Not Microsoft. I developed on Linux for a decade and eventually, due to a lack of alternatives, moved from Qt on Linux to C# and .NET on Windows because, unless I was developing an OS (which I had been doing for a long time), I wanted a real programming language and a real alternative to Qt.

See, if you're making a web app, there are a thousand options. In fact, since I started typing this comment, two new options have probably been added to GitHub. And most of them are just plain awful. Angular 4 with TypeScript is very nice, and so are a few others. But even the best ones are very much works in progress.

On the other hand, if you're programming a back end, there is .NET, Java, Node and Python.

I don't use Java because its entire infrastructure is outrageously heavy. Doing anything useful generally requires Tomcat, which is just a massive disaster in the making. It makes things easy, but what should use megabytes uses gigabytes, or takes thousands of hours to optimize.

I don't use Node because while I think it may be the best option... and Typescript is nice, all libraries are written by people who seem to have no interest in uniformity and as such, every module you use takes a month to code review and document before use.

Python is the language of absolutely all languages and can do tens of thousands of things you never knew you wanted to do. But again, the community contributed libraries lack almost any form of quality control. You're locked into a single language, which means that when Python loses its excitement, everything will have to be rewritten. (See the mass migration from Python to Node... no transition path)

Then there's .NET. C# isn't as versatile as Python (nothing is). But when you code against .NET Core, it runs pretty much anywhere. Performance is closer to Node than to Python. All code written for one language works with all languages on the platform. Documentation standards are high. The main infrastructure has already been reviewed for FIPS-140. There are clear support channels. Apps can be extremely light. Libraries are often optimized. It supports most modern development paradigms. It's completely open. The development team is responsive.

Basically, .NET scores better than average in every category except that it was made by Microsoft.

So... I appreciate that you don't like it. And I am glad people like you are out there using the alternatives which makes sure there are other choices for people like me. But for those of us with no real political bias towards tools, we are all pretty happy that .NET Core 2 has made cross platform realistic.

Maybe at some point your skills and knowledge will help me solve one of my problems and I look forward to thanking you. Yesterday, I sent dinner (pizza and cola) to a guy and his family in England for helping me on a Slack channel. :)

14
3

Wisconsin advances $3bn bribe incentives package for Foxconn

CheesyTheClown
Bronze badge

How do you beat cheap labor?

That's easy. With free labor. Foxconn would build the plant anywhere they could bypass environmental restrictions. Wisconsin already agreed to that. After all, flooding the fresh water supply with gallium arsenide instead of coal related waste will poison 100 generations instead of 10.

Of course, Foxconn will abandon the factory in 5 years when the process changes enough that the clean room they build in Wisconsin is no longer viable. During that time, the government of Wisconsin will pay 100% of the wages. Of course all profits will be sent to China.

But by then Trump will either be out or reelected so he simply won't care anymore.

The only possible good things about this are that instead of paying welfare, we'll pay people $25 an hour to work in a basically toxic factory. Oh and it will lighten pollution related to shipping for a few years while bypassing any tariffs imposed on Chinese imports of LCD screens.

So... it's a big win all around.

Now if Trump can pay Chinese companies to make tooth brushes in America again, we can at least know that if we fall out with China, we can at least brush our teeth.

1
1

The future of Python: Concurrency devoured, Node.js next on menu

CheesyTheClown
Bronze badge

Multithreaded programming is easy, Multithreaded coding is not.

A person with a sound understanding of multithreading and parallel processing should have absolutely no problem planning and implementing large-scale multithreaded systems. In fact, while async programming is super simple, it has many caveats which can be far more complicated to resolve than multithreaded code.

That said, if one is building a database web-app using asynchronous coding is perfect. It's absolutely optimal for coders without a proper education in computer science.

Of course, async patterns can fail absolutely when there is more to application state than single, self-contained operations. Locking becomes critical when two asynchronous operations alter state that impacts the other. At this point, we are left with the same problems as when threading is used. The good news, of course, is that the async paradigm generally offers additional utilities to assist with these specific scenarios.
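
A minimal sketch of the shared-state problem, using threads because that's the clearest way to show it in C; the same hazard applies when two async callbacks interleave around the same data. Remove the mutex and the final count is wrong.

    #include <pthread.h>
    #include <stdio.h>

    static long counter;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* protects the read-modify-write */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expect 2000000)\n", counter);
        return 0;
    }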

I use the async paradigm often as it offers a poor man's solution to threading which can be quick and easy to maintain.

Back in 1991 (or so), Dr. Dobb's presented a nice approach to handling concurrency that more people should read. It's a crying shame they didn't just open source all their articles when they shut down.

3
0

Place your bets: How long will 1TFLOPS HPE box last in space without proper rad hardening

CheesyTheClown
Bronze badge

Shouldn't they try a machine reliable on earth?

HPE has now owned SGI for long enough that all their best engineers will have left and the remaining ones will have been eliminated as redundancies. Therefore, all that's left is HPE engineers... which got rid of all their useful people throughout the dictatorship of the past 3 suits in charge.

I have some serious questions though.

1) If an HPE computer produces the wrong results due to random behavior... Is this considered a success or a failure?

2) If an HPE computer fails in space and support is needed, is the call routed through mission control first or does it go directly to India?

3) How will the cooling system impact the ISS? HPE last I checked only uses one model of fan and it's REALLY REALLY loud... on purpose because they think that if Ferraris sound faster because of how loud they are, then computers should too.

4) 56Gb/s interconnect? Wasn't this supposed to be a supercomputer? I buy old used 56Gb/s InfiniBand equipment for pennies on the dollar these days. Supercomputers should be running 10 times that by now. Or is this the HPE version where we sell yesterday's technology today?

0
2

Official: Windows for Workstations returns in Fall Creators Update

CheesyTheClown
Bronze badge

Re: 4 CPU's - That's a lot!

Windows Server is LTS which means no mail, store, Ubuntu, etc...

This will be nice

6
1

No, Apple. A 4G Watch is a really bad idea

CheesyTheClown
Bronze badge

Step forwards

Many of us have learned to plan our lives better and not be as dependent on a watch.

If you check the time so often that the few seconds it takes to take the phone from your pocket is inconvenient, you aren't managing your time. Unless you are taking medication that must be precisely timed, you should easily be able to manage your schedule. A person makes a victim of themselves if they ever find themselves unable to schedule. If you have to be somewhere at 10am, leave with enough time to get there at 9:45. If you're running late because of circumstances beyond your control, call and apologize and inform the person "I have encountered unforeseen difficulties. I will be a little late." Then next time, account for additional delays so they don't believe it is habitual.

If you're stressed over time because of conflicts of work and daycare for example, cut back your hours, change day cares, change jobs, hire an au pair, hire a teenager near the day care, make an arrangement with a parent to take turns picking up and dropping off with, etc...

If a $500+ watch is in your budget, your life would be better if you spent the money to buy time instead.

Some people believe a successful person wears a fancy watch. Smart people know that success is learning to manage your life without one.

2
1

Mellanox SoCs it to NVMe over Fabrics with BlueField platform

CheesyTheClown
Bronze badge

FPGA, hashing and compression engine?

These chips are absolutely worthless in their current form. NVMe is great stuff for local connectivity and adding RoCE to the mix is truly amazing. But without someplace to implement SMB Direct, pNFS or SRP packet parsing in hardware, the offering is meaningless. NVMe storage is super amazing and super fast. Over 16 PCIe 4.0 lanes, a theoretical 64GByte per second of aggregate (bidirectional) throughput can be expected.

We have a series of highly optimized, fairly standard non-crypto hashing algorithms which can be implemented with a minimal transistor count, such as murmur and SipHash. A small amount of dedicated logic (plus addressing and cache coherency logic, approximately 1M transistors) can implement DMA-based hash functions capable of hashing the full 64GB/sec in real time in maybe a watt or two of power, with latency measured in nanoseconds.
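
To show how little work a non-crypto hash actually does per byte (and why it maps so well onto fixed-function hardware), here's 64-bit FNV-1a. Murmur and SipHash do more mixing, but the shape is the same: a tiny bit of state and a couple of operations per word.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    static uint64_t fnv1a64(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint64_t h = 14695981039346656037ULL;   /* FNV offset basis         */
        for (size_t i = 0; i < len; i++) {
            h ^= p[i];                          /* xor in one byte          */
            h *= 1099511628211ULL;              /* multiply by the FNV prime */
        }
        return h;
    }

    int main(void)
    {
        const char block[] = "a 4KB block payload would go here";
        printf("hash = %016llx\n",
               (unsigned long long)fnv1a64(block, sizeof block - 1));
        return 0;
    }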

We also have standardized block compression methods such as LZ77 derivatives that can be implemented to offer the same performance in a minimal transistor count.

Then the CPUs would mostly have to handle transaction logging, DMA scheduling, block management (allocation, auto-defragmentation, window sliding, garbage collection, etc...), so one or two ARM cores would be able to process hundreds of times more data with the acceleration engine if these chips had:

1) FPGA for packet header parsing, to identify hashable and compressible payload regions.

2) Cryptographic (block... stream can be done in software) functions for protecting traffic

3) hashing

4) block compression

For a bonus, a dedicated multi-port, capacitor-backed SRAM region for transaction logs covering hot storage regions would be REALLY nice. Especially with a dump-from-SRAM-to-flash-on-power-loss function.

This design was super nifty, but it looks like it was architected by someone who thought bandwidth on the cable was the problem.

To be honest, a single Intel quad-core Xeon + Arria FPGA would provide at least 10 times the bandwidth and storage capacity, with the exception of InfiniBand support, which is somewhat useless without InfiniBand arbiters that are very expensive and unnecessary with RoCE and DCBx.

Alternatively, using a Xilinx processor/FPGA would be great as well.

With either solution, short time to market is possible by parallelizing storage tasks in OpenCL. So even if Mellanox tried to do the FPGA work themselves, even with Mentor as a partner, they would probably struggle.

Mellanox should team up with Lattice or Xilinx and develop a real storage core. CPU-based storage is too slow, and a bridge with a theoretical bandwidth of 64GB/sec is a total waste of money without the additional logic to manipulate the data.

P.S. Mellanox... I have been harsh, but pragmatic. I have now seen 40 press releases of storage vendors who are just f-ing up NVMe storage this year alone. You are by far the closest to getting it right. Now go find a real storage developer who actually understands the full stack. Then ask them to tell you which parts of their code need the most optimization. Then instead of dropping a half assed generic ARM solution on them, make them build an NVMe stack and optimize the hell out of it and add security via transaction log storage.

0
0

Cisco's server CTO says NVMe will shift from speed to capacity tier

CheesyTheClown
Bronze badge

Uh... UCS Azure Stack anyone?

Starship + UCS + NVMe + VMware + Nexus + Windows + Linux + Hyperflex storage etc...

This is a tub of rubbish which is delivered with CVDs that take weeks to months to deploy.

If you have a validated design and an automation platform, then you plug it in, answer some questions, and let it rip and it's done.

Or you can buy Cisco UCS with Azure Stack, turn it on, answer a few questions and you're running in an hour without having to pay $60,000 a blade for licenses plus the Windows tax. Or you can install Ubuntu on a VM on a laptop and point it to a UCS and get a full OpenStack up and running with containers and automation.

Come on guys... Microsoft and Ubuntu have nailed full data center automation, have app stores and eliminate the need for server, storage or network guys in the data center. TCO on Hyperflex is close to $150,000 more per blade than on Azure Stack or OpenStack. Why the hell would anyone invest so heavily in VMware, which is great for legacy... but we already have legacy sorted. Run that, and as more services move to Azure Stack or OpenStack, shut down more legacy VMware blades.

0
0

UK.gov embraces Oracle's cloud: Pragmatism or defeatism?

CheesyTheClown
Bronze badge

Re: Cluebat required

Doesn't matter. Under the terms of national security, Oracle will be required to provide access to all data stored on systems owned and/or operated by US companies without informing the owners of the data of the request. It's not supposed to be happening yet, but sooner or later the FBI, NSA, etc... will find a legal loophole that makes it happen.

2
0

Man facing $17.5m HPE fraud case has contempt sentence cut by Court of Appeal

CheesyTheClown
Bronze badge

Re: This used to be how commerce worked isn't it?

Sounds to me that the guy was a hell of a sales person if he was selling servers at retail pricing. HP didn't have to cold call all the customers and probably saved millions on staff and red tape. Unless HPE actually lost money on the sales, it sounds like they screwed themselves.

0
1

Electric driverless cars could make petrol and diesel motors 'socially unacceptable'

CheesyTheClown
Bronze badge

Re: Trolley problem.

Consider connected autonomous vehicles.

Either special utility vehicles, nearby delivery vehicles or, worst case, nearby consumer vehicles can be algorithmically redirected to a runaway vehicle, match speeds and forcefully decelerate the out-of-control vehicle.

This would be wildly dangerous with human drivers, especially if they are not properly trained for such maneuvers. But by employing computer controlled cars, it could be possible to achieve this 99 out of 100 times with little more than paint damage.

This doesn't solve a kid chasing a ball into the street without looking, but it can mitigate many issues related to system failures.

I can already picture sitting in a taxi and hearing. "Please brace yourself, this vehicle has been commandeered for an EV collision avoidance operation. Your insurance company has been notified that the owner of the vehicle in need will cover the cost of any collision damage to this vehicle. Time to impact, 21.4 seconds. Have a nice day"

1
0

New Azure servers to pack Intel FPGAs as Microsoft ARM-lessly embraces Xeon

CheesyTheClown
Bronze badge

Not entirely true but mostly

Altera has been hard at work on reconfigurable FPGAs, which is exciting. Consider this: calculating hashes for storage is simply faster in electronics, as fast as the gate depth will allow. Regular expressions are faster, SSL is faster, etc...

The problem is that, classically, an FPGA had to be programmed all in one go. If Microsoft has optimized the workflow of uploading firmware and allocating MMIO space to FPGAs, and Altera has optimized synthesizing cores, and Intel has optimized data routing, then a web server could offload massive workloads to the FPGA. Software-defined storage can streamline dedup and compression, etc...

This made perfect sense.

0
0

Azure Stack's debut ends the easy ride for AWS, VMware and hyperconverged boxen

CheesyTheClown
Bronze badge

Re: A different battle

I have customers spread out across government, finance, medical and even publishing that can not use public cloud because of safe harbour. They all want cloud, not virtual machines, but end to end PaaS and SaaS, but they couldn't have it because one way or another, you're violating data safe harbour laws or simply giving your data to China or India. This is a huge thing.

3
0
CheesyTheClown
Bronze badge

Re: Game Changer

Do you understand what this is? I'll guess not, but grats on first post before even performing the slightest research.

This is a cloud platform. It is not about launching 1980's era tech on the latest and greatest systems to manage. It's about giving developers a platform to write applications for which can be hosted in multiple places without the interference of IT people. It's about having an app store type environment for delivering systems that scale from a few users to a few million users. It's about delivering a standard platform with standard installers so there is no need for building virtualized 80's style PCs that require IT people running around like idiots to maintain.

You can keep your VMs and SANs and switches. There are a lot of us who are already coding for this platform and can't wait for this to fly. Whether you like it or not, we're going to write software for this. Your boss will buy the software and either you can run it or you can find a new job :)

1
8

Trump's CNN tantrum could delay $85bn AT&T-Time Warner merger

CheesyTheClown
Bronze badge

Please clarify the original claim!

I read the whole article which appears to be a tantrum about Trump. Ok fine, he's an idiot. The article makes lots of different points. What it doesn't do is connect Trump's idiocy with why the merger would be delayed. Is AT&T considering pulling out? Will there be a conflict of interest that would allow Trump to influence the members of the merger and make them stop CNN from being mean to him and hurting his feelings?

I found the article entertaining and I certainly have no love for Trump, but...

What is the connection?

3
0

Analyst: DRAM crisis looms after screwup at Micron fab

CheesyTheClown
Bronze badge

At least they didn't burn another one down

Don't the DRAM and HDD price hikes almost universally come from burning stuff down?

Wouldn't it be better just to say "We're cutting capacity to produce a shortage to force you to pay more"? Or is there an insurance scandal involved as well? After all, how likely is it that their insurance company has someone qualified to assess damages to a semiconductor manufacturing clean room on staff?

"Here, look in this microscope. Do you see that blue spec, they're everywhere now and we have to throw away all our obsolete equipment and replace it with the next generation stuff that we need for the next die reduction".

0
0

PCs will get pricier and you're gonna like it, say Gartner market shamans

CheesyTheClown
Bronze badge

Re: Value for money?

I don't understand. Are you suggesting it's better to stick with spinning disks on laptops and desktops because you can't find a company who makes a SSD based on a stable technology?

Was there ever a moment in history where the hard drive business wasn't like that? Have you ever complained that the methods of suspending magnetic heads over the drive surface on different vendors of hard drives was different? Have you ever complained that the boot code on a western digital drive wasn't the same as on a seagate?

Are you worried that there is something magically different on a SATA cable when using SSD as opposed to on spinning discs?

Are you worried you have to choose between mSATA, M.2 and PCIe? You don't.

2
0
CheesyTheClown
Bronze badge

Re: Value for money?

Semi-decent PCs? Without SSD and without decent screen resolution?

This is 2017. Semi-decent is a core i5, 16GB RAM and 500GB SSD and at least 2500 pixels wide. Decent is i7, 16GB, 1TB and 3000x2000 and nVidia graphics.

I bought that 2 years ago and have absolutely no inclination to upgrade. Microsoft can get a few extra bucks from me if they sell an upgraded performance base, but at about $750 a year per employee for their PC, it's a cheap option. I'll consider a new one in 2 more years if they come with at least double the specs and a minimum 2GB/sec SSD read speed. Otherwise, it will be a $600 a year PC, then $500.

Employers obviously have to consider the cost of buying dozens, hundreds or thousands of PCs. But leasing with an option to buy makes sense if the CapEx is scary. But spending less than $2000 on a PC can be very expensive. There are very few good machines on the market that cheap.

1
0

Cisco automation code needs manual patch

CheesyTheClown
Bronze badge

This is very common in Cisco products

Cisco is great at making products on top of Linux and Apache tools, but they are utterly useless at securing Linux and Apache tools. Currently dozens (maybe more) of Linux kernel exploits work against ISE, since Cisco doesn't enable configuration of RHEL updates on those boxes. As a result, they are very often very vulnerable. They are also wide open to Tomcat attack vectors, because the version of Tomcat running on ISE is ancient and unpatched.

As for root passwords... install ISE on KVM twice (or more) and mount the qcow images on a Linux box after. You'll find that the password for root is the same on all those images. While ssh access as root appears to be disabled, there are a few other accounts with the same issue.

I don't even want to talk about Prime. It's a disaster with these regards.

Surprisingly enough APIC-EM for now seems ok, but that's because about 90% of the APIC-EM platform is a Linux containers host called grapevine. I think the people who worked on that were somewhat more competent (I believe they're mostly European, not the normal 50 programmers/indentured servants for the price of 1 that Cisco typically uses).

I haven't started hacking on IOS-XE... I actually don't look for these bugs. I just write a lot of code against Cisco systems and it seems like every 5 minutes there's another security disaster waiting to happen. They have asked me to help them resolve them but it would require hundreds of hours of my time to file bug reports and I can't waste work hours on solving their problems for them.

Oh... if you're thinking "oh my god, I have to dump Cisco", don't bother, the only boxes currently I would trust for security is white box and unless you know how to assemble a team of system level software and hardware engineers (no... that really smart guy you know from college doesn't count) you should steer clear of those. The companies who use those successfully are the same ones who designed them.

Cisco, you need a bug bounty program. Even if I could make $100 for each bug I stumble on, I would invest the half hour or hour it takes to write a meaningful bug report. Then you could fix this stuff before it ends up a headline.

5
0

If 282-page doc on new NVMe drive spec is tl;dr, you're in luck

CheesyTheClown
Bronze badge

Re: It's a standard for disk drives using Non Volatile Memory.

Intelligent protocols for multi-client access? If only there were some standard method of providing storage access in a flexible manner with encryption support, variable-length reads and writes, prioritized queuing, random access, error handling and a high-performance package for non-uniform access over ranged media. Oh, let's not forget vendor-specific and de facto standard enhancements as well as feature negotiation. Now imagine the ability to scale out, scale up, work with newer physical transports, support memory-mapped I/O access and even nifty features like snapshots, checkpoints, transaction processing, deduplication, compression, multicast and more. Imagine such a technology with no practical limitations on bandwidth, support for multiple methods of addressing and full industry support from absolutely everyone except VMware. Then consider hardware acceleration support and routability over WAN without any special reliability requirements.

Oh wait... there is. It's called SMB and SMB Direct depending on whether MMIO matters.

This is 2017. No one wants direct drive access over fabric unless they are simply stupid. Block storage over a network/fabric in 2017 is so impressively stupid. It requires too many translations, inflexible file systems like VMFS, specialized arbiters in the path, and is extremely inefficient... an order of magnitude worse once you introduce replication, compression and deduplication.

The only selling point for block based storage over distance is unintelligent and unskilled staff. The only place where physical device connectivity protocols (SCSI and NVMe) should be used is when you want to connect drives to computers that will then handle file protocols.

BTW, GlusterFS and pNFS are good too.

0
0

One-third of Brit IT projects on track to fail

CheesyTheClown
Bronze badge

So 60%+ are expected to succeed?

That's really not bad.

Consider that most IT projects are specced on a scale too large to achieve.

Consider that most IT projects are approved by people without the knowledge to select contractors based on criteria other than promised schedules and lowest bids.

Consider that most IT people lack enough business knowledge to prioritize business needs and that most business people lack the IT experience to specify meaningful requirements.

To be fair, 60% success is an amazing number.

Now also consider that most IT projects would do better if companies stopped running IT projects and instead made use of turn-key solutions.

How much better are the odds of success when IT solutions are delivered by firms with a specific focus on the vertical markets they are delivering to?

0
0

Heaps of Windows 10 internal builds, private source code leak online

CheesyTheClown
Bronze badge

Re: I'm done with Windows.

Windows 10 Serial driver (C code, based on the same code you've seen... still works) : https://github.com/Microsoft/Windows-driver-samples/tree/master/serial/serial

Windows 10 Virtual Serial driver (C++ code, based on the new SDK with memory safety considered) : https://github.com/Microsoft/Windows-driver-samples/tree/master/serial/VirtualSerial

Mac OS X Serial Driver (C++ code... runs in user mode) : https://opensource.apple.com/source/IOSerialFamily/IOSerialFamily-91/IOSerialFamily.kmodproj/

Using a domain-specific language for a kernel means the core kernel code can be implemented in an "unsafe mode" while the drivers, file systems, etc... are implemented in a "safe mode" dialect, meaning memory references instead of pointers (see C11, which makes moves in this direction... but refuses to break with tradition by doing it as library changes instead of a language feature).

In reality, this is 2017 and if your OS kernel still has a strict language dependence for things like file systems and device drivers, you probably aren't doing it right. These days most of that code should be user mode anyway. And no, user/kernel mode discussions stopped making sense when we started using containers and Intel and AMD started shipping 12+ core consumer CPUs.

0
0
CheesyTheClown
Bronze badge

Re: I'm done with Windows.

Ohhh... I'm glad I came back here.

C is a great language and it's extremely versatile. It's absolutely horrifying for something like the Linux kernel though. Consider this: it has no meaningful standard set of libraries, which means that support for things like collections and passing collections around is a nightmare. Sure, you have things like rbtree.[hc] in the kernel, but as anyone who has studied algorithms knows, there is no single algorithm which suits everything.

Let's talk about bounds, stacks, etc... there's absolutely no reason you can't enhance the C compiler to support more memory protection as well. C itself is a very primitive language and it's great for writing the boot code and code which does not need to alter data structures. But there are severe shortcomings in C. Yes, it's 100% possible to add millions of additional lines of repetitive and uninteresting code to implement all those protection checks. But a simple language extension could do a lot more.

Let's talk about where I find nearly all of the exploits in the kernel: error handling and return values. It's amazing how you can cause problems with most code written at two different times by the same person, or by two different people. The reason for this is that there's no meaningful way to handle complex error conditions. Almost all code depends on just returning a negative value which is supposed to mean everything. The solution is to return a data structure which is basically a stack of results and error information, and then handle it properly. The reason this isn't done is that people get really upset when anything resembling exceptions is implemented in C. And yet, nearly every exploit I've found wouldn't have been there if someone had implemented try/catch/finally.
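
Here's a rough sketch of the "result plus error stack" idea, as opposed to a bare negative return. None of this is an existing kernel API; it's just to show how little it takes to carry context up the call chain.

    #include <stdio.h>
    #include <string.h>

    #define MAX_FRAMES 8

    struct result {
        int         value;               /* the actual return value            */
        int         nerrors;             /* how many error frames are recorded */
        const char *frames[MAX_FRAMES];  /* "where and why", one per layer     */
    };

    static void result_push(struct result *r, const char *msg)
    {
        if (r->nerrors < MAX_FRAMES)
            r->frames[r->nerrors++] = msg;
    }

    static struct result read_config(const char *path)
    {
        struct result r = { .value = -1 };
        if (strcmp(path, "/etc/example.conf") != 0)      /* pretend open() failed */
            result_push(&r, "read_config: file not found");
        else
            r.value = 0;
        return r;
    }

    int main(void)
    {
        struct result r = read_config("/etc/missing.conf");
        if (r.nerrors) {
            result_push(&r, "main: could not load configuration"); /* caller adds context */
            for (int i = r.nerrors - 1; i >= 0; i--)
                fprintf(stderr, "  %s\n", r.frames[i]);
        }
        return r.value == 0 ? 0 : 1;
    }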

Let's talk about data structure leaking and cleanup related to the above. Better yet, let's not... pretty sure that one sentence was enough to cover it all.

This is 2017, not 1969. In 2017, we have language development tools and technologies that allow us to make compilers in a day. This isn't K&R sitting around inventing the table based lexical analyzer. Sticking with the C language instead of creating a proper compiler designed specifically for the implementation of the Linux kernel is just plain stupid.

More importantly, there's absolutely no reason you have to use a standardized programming language for writing anything anymore. If your code (an operating system kernel, for example) would profit from a new programming language written for it... do it. You can base it on anything you want. It's actually quite easy... unless you write the language itself in C. Use a language suited for language development instead. Get the point yet?

The next big operating system to follow the Linux kernel will be the operating system which leaves 95% of the C language in tact and implements a compiler which :

a) Eliminates remaining dependencies on assembler by implementing a contextual mode for fixed memory position development.

b) Provides a standard implementation of data structures as the foundation of the language

c) Implements a standard method of handling complex returns... or exceptions (possibly <result,errorstack>)

d) Implements safe vs. non-safe modes of coding. 90% of the Linux kernel could easily have been done in safe mode

e) Offers references instead of pointers as an option. This is REALLY important. Probably the greatest weakness of C for security is the fixed-memory-location bits. Relocatable memory is really, really useful. If you read the kernel and see how many ugly hacks have been made because it isn't present, you'd be shocked. The Linux kernel is completely slammed full of shit code for handling out-of-memory conditions which exists purely to support architectures lacking MMUs. References can be implemented in C using A LOT of bad and generally inconsistent code. They can be added to a compiler with a bit of work and, combined with the kernel code, can enable a memory defragmenter that could fix A LOT of the kernel. (See the sketch just below for the basic idea.)
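
A toy of what "references instead of pointers" buys you: callers hold a handle, every access goes through a table, and a defragmenter can move the underlying block by updating the table. Purely illustrative, not any existing kernel mechanism.

    #include <stdlib.h>
    #include <string.h>
    #include <stdio.h>

    #define MAX_HANDLES 16

    static void  *slots[MAX_HANDLES];   /* handle -> current address of the block */
    static size_t sizes[MAX_HANDLES];

    typedef int handle_t;

    static handle_t ref_alloc(size_t size)
    {
        for (handle_t h = 0; h < MAX_HANDLES; h++) {
            if (!slots[h]) {
                slots[h] = malloc(size);
                sizes[h] = size;
                return slots[h] ? h : -1;
            }
        }
        return -1;
    }

    /* Dereference every time; never cache the raw pointer. */
    static void *ref_deref(handle_t h) { return slots[h]; }

    /* The "defragmenter": move the block; handles stay valid, raw pointers wouldn't. */
    static void ref_relocate(handle_t h)
    {
        void *moved = malloc(sizes[h]);
        memcpy(moved, slots[h], sizes[h]);
        free(slots[h]);
        slots[h] = moved;
    }

    int main(void)
    {
        handle_t h = ref_alloc(32);
        strcpy(ref_deref(h), "survives relocation");
        ref_relocate(h);
        printf("%s\n", (char *)ref_deref(h));
        return 0;
    }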

And since you're kind enough to respond aggressively, allow me to respond somewhat in kind. You're an absolute idiot... though maybe you're only a fool. C# and .NET are actually very good. So are C, Java, C++, and many others. Heck, I write several good languages a year when a domain would profit from it. If you don't know why C# and .NET, or even better, Javascript, are often better choices than plain C, you probably shouldn't pretend you know computers.

Did you know that Javascript JITs can, in some contexts, produce faster code than statically compiled C and assembler? If you understood how microcode and memory access work, you'd realize there's a real benefit to recompiling code on the fly. Javascript spends a lot of its time recompiling code as it runs. The first time it was compiled, the code was optimal for the current state of the CPU; but as the state of the system changes (that's what happens in multitasking systems), the cache contents change and the CPU core being used may change (power state, etc...), and the Javascript compiler will reoptimize the code. It's even possible, on a hybrid system containing multiple CPU architectures or generations, for the code to be relocated to a CPU which is better suited to the process.

Of course C could be compiled to Javascript or WebAssembly and gain some of the same benefits. The main issue is that you lose support for relocatable memory, since WebAssembly's C/C++ support assumes a flat linear memory. But for execution, it's quite possible your C code will run about as fast on WebAssembly as on bare metal. If you then start making use of Javascript/WebAssembly libraries for things like string processing, it gets better still. If you move all threading to Javascript threading, better again.

This does not mean you should write an operating system kernel in Javascript. Just as C is not suitable for OS development anymore, Javascript never will be.

0
1
