* Posts by Mark Honman

113 publicly visible posts • joined 19 Apr 2007


Working from home could kill career advancement, says IBM CEO

Mark Honman

In my own experience, it took about 2 years to become fully productive while working remotely - mainly because one needs to deliberately build working relationships that would form naturally while based in the same office. But now that almost everyone has made that change, it's not a problem to keep on working in that mode - as an engineer.

What I've seen during the pandemic is a vastly increased workload for line managers, as it just takes longer to coordinate remote work - my managers' schedules became completely packed with meetings, and I think their job satisfaction has suffered (I really appreciate good managers who "do the dirty work" so that I can concentrate on engineering tasks, so I want them to be happy too).

IMO what would be a lot more useful than having everyone go into the office X days in each week, is to gather a team together in a common place for the duration of a sprint.

BOFH takes a visit to retro computing land

Mark Honman

Re: Depreciation

It happens... once I was very keen to buy an old HP1000 L-series off my employer, but it turned out that it had been purchased with software that had been included in the book value - and it was to be depreciated over 10 years!

Turing Award goes to Robert Metcalfe, co-inventor of the Ethernet

Mark Honman

Read all about it!!

There is a great book about Xerox PARC, called "Dealers of Lightning" (by Michael Hiltzik, I think). Not just the technologies and inventors, but also what made it such a productive organization...

America's chip land has another potential shortage: Electronics engineers

Mark Honman

Re: time-limited careers

Can't say I agree with that - I kinda straddle CS and EE (did Compsci & Physics at university in the 1980s) and I have a fair number of Intel EE colleagues from the same generation who are still hands-on engineers - most typically chip subsystem architects. Okay, this is the FPGA part of the company, where quite likely the work is more varied than in the CPU business and we haven't had the kind of reorg you describe. Our reorgs are different: teams get switched onto different projects, which brings its own frustrations, but the good thing is teams generally stay together.

Record players make comeback with Ikea, others pitching tricked-out turntables

Mark Honman

Re: Digital transmission?

Post-D/A filtering is exactly why cheap digital audio often has terrible "hash" in the upper treble. A well-implemented high-order filter isn't cheap! I think this is why Philips' invention of oversampling was so effective in simplifying the filtering problem. But open up any of their high-end Marantz CD players and you'll also find top-quality components in the filter.
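To put rough numbers on the oversampling point - a back-of-the-envelope sketch using standard CD figures, purely illustrative:

```c
/* Rough illustration of why oversampling eases the analogue
 * reconstruction filter: it pushes the first image band far above
 * the audio band, widening the transition region the analogue
 * filter has to cover.  Figures assume CD audio (44.1 kHz, 20 kHz band). */
#include <stdio.h>

int main(void)
{
    double fs = 44100.0;          /* CD sampling rate, Hz           */
    double audio_band = 20000.0;  /* top of the wanted audio band   */
    int oversample[] = {1, 4, 8};

    for (int i = 0; i < 3; i++) {
        double fs_out = fs * oversample[i];
        /* first image of a 20 kHz band-limited signal starts here */
        double image_start = fs_out - audio_band;
        printf("%dx: output rate %.0f Hz, filter passes to %.0f Hz, "
               "must stop by %.0f Hz (transition ratio %.2f)\n",
               oversample[i], fs_out, audio_band, image_start,
               image_start / audio_band);
    }
    return 0;
}
```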

However one thing I'd like to challenge you on is the extent to which pure sine waves are a proxy for the sound of musical instruments - because musical notes have an envelope, and musical instruments are not in perfect tune. So the fundamental "continuous signal" assumption of the sampling theorem doesn't apply?

Mark Honman

Re: Digital transmission?

And at the start of the CD era I gained a record collection - from a classical music lover who was switching to CD. And also bought his old Quad 33/303, which provided a sound quality I didn't know could exist.

That was after buying my first CD player ("perfect sound forever") and discovering how low the CD sampling frequency really is.

As you say, CD & vinyl are different and have different shortcomings. CD's technical shortcomings tend to be due to clock jitter, brickwall filtering, and interaction between digital and analog power supplies.

By and large it is systematic noise which, yes, tends to bring a nastiness to the upper frequencies. Vinyl noise is more random although there are systematic weaknesses like resonances and the very high gain pre-amplifiers that are needed for reproduction.

Phone jammers made my model plane smash into parked lorry, fumes hobbyist

Mark Honman

Re: To everyone downvoting me for suggesting that he should have enabled failsafe

Since 2.4GHz has all this interference - and on the positive side, multiple models can operate on the same frequency - signal loss is more like "I haven't received a packet for X seconds - it's time to panic".

Most systems hop between different channels of the 2.4GHz band so if one channel is occupied by someone downloading the internet, it's still possible to get a packet through via a different channel.
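For what it's worth, the receiver-side logic boils down to something like this - a minimal sketch; the timeout value and function names are made up, not any particular RC system:

```c
/* Illustrative packet-timeout failsafe: if no valid frame has arrived
 * within the timeout, the receiver drops to its failsafe outputs.
 * Names and timings are hypothetical. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define FAILSAFE_TIMEOUT_MS 1500u   /* hypothetical figure */

static uint32_t last_packet_ms;

/* called whenever a valid frame arrives on any hop channel */
static void on_packet_received(uint32_t now_ms) { last_packet_ms = now_ms; }

/* "I haven't received a packet for X seconds - it's time to panic" */
static bool failsafe_active(uint32_t now_ms)
{
    return (uint32_t)(now_ms - last_packet_ms) > FAILSAFE_TIMEOUT_MS;
}

int main(void)
{
    on_packet_received(1000);                                /* frame at t = 1.0 s */
    printf("t=2.0s failsafe? %d\n", failsafe_active(2000));  /* 0: still OK        */
    printf("t=3.0s failsafe? %d\n", failsafe_active(3000));  /* 1: cut throttle    */
    return 0;
}
```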

Mark Honman

Mobile phone jammer doesn't make sense

Mobile phones operate on licensed spectrum - totally different frequencies from the 2.4GHz unlicensed spectrum used for wi-fi, RC, Bluetooth etc.

When used for RC the transmitter makes a lot of retries to address the problem of interference.

Most likely the issue was a receiver lockup, which I have seen happen a couple of times with a particular make of RC gear.

As for me, I just suffer from brain lockup.

Mark Honman

Re: Failsafe?

The normal setup for planes' failsafe is some rudder, and throttle cut. If all goes well, the plane glides in a circle. If not, it hits the ground in the reserved flying area.

Failsafe is mandatory for models over 3.5kg (likely to be the case here unless this was a very light build).

Over a decade on, and millions in legal fees, Supreme Court rules for Google over Oracle in Java API legal war

Mark Honman

It's not even open source

If Oracle had won, it would have been a body-blow to the whole concept of open systems and API commonality.

The reason is of course that even if Google had coded their own version of the Java API headers, the function signatures & object declarations would necessarily be the same as those defined by the Java libraries.
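To put it in C terms rather than Java - purely an analogy, not the code at issue in the case: anyone re-implementing a published API has to repeat its declarations verbatim, otherwise existing callers won't build against it.

```c
/* Illustrative C analogy (not the Java code at issue): whoever
 * implements a published API must declare exactly the signatures
 * the specification defines, or existing callers won't compile. */
#include <stddef.h>
#include <stdio.h>

/* The "API header" - fixed by the specification, not the implementer. */
size_t my_strlen(const char *s);

/* A from-scratch rewrite must still match the signature above exactly. */
size_t my_strlen(const char *s)
{
    const char *p = s;
    while (*p) p++;
    return (size_t)(p - s);
}

int main(void)
{
    printf("%zu\n", my_strlen("hello"));  /* callers depend only on the declaration */
    return 0;
}
```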

The laptop you bought in 2020 may stop you buying a car in 2021: Chips are going short

Mark Honman

Re: Stop me from buying a car? Probably not.

Old Alfas? Who needs chips when every car comes with a unique personality and moods!

In the past I enjoyed a more sensible choice, two early Porsche 911s - and now have a Peugeot 205 GTi which is a whole lot more temperamental but I think not as bad as an Alfa!

That said, at the Porsche club events at Mason's Mill we were often joined by a chap nicknamed the "flying doctor" who brought a highly modified GTV 3.0 which was an absolute delight to watch - and almost always beat the brand-new Porsches.

'It's dead, Jim': Torvalds marks Intel Itanium processors as orphaned in Linux kernel

Mark Honman

Re: Not the 2nd 64 Windows

Pretty sure PPC was 64-bit - we had one, running Windows even - before we switched it to AIX.

A 1970s magic trick: Take a card, any card, out of the deck and watch the IBM System/370 plunge into a death spiral

Mark Honman

DU,6,1

Err, my greatest hit in this department was crashing an HP1000 by reading the printer.

To my astonishment a line of garbage was printed on the terminal as the system's last gasp.

How Apple's M1 uses high-bandwidth memory to run like the clappers

Mark Honman

Re: Unified Memory

Hmm, that's a pretty convincing explanation of the unified vs shared mem difference. I've always struggled to understand the difference between OpenCL's USM and SVM, maybe this is the key...

If I get this right, the difference is that in a heterogeneous shared memory scenario, the memory appears at different locations in each device's address space so pointers are not translatable. For example, if the CPU builds a linked list in shared memory, the accelerator cannot just dereference the pointers.
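A toy way to picture it (plain C, hypothetical layout - not OpenCL API code): raw pointers stored inside the shared buffer only mean something in the address space that created them, whereas offsets from the buffer base survive the trip to the other device.

```c
/* Toy illustration: a node that stores an offset from the buffer base
 * can be followed by either device, regardless of where the buffer is
 * mapped.  A node storing a raw pointer could not be. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct node { int value; uint32_t next_off; };   /* offset, not a pointer */

#define NIL 0xFFFFFFFFu

int main(void)
{
    /* pretend these are the CPU's and the accelerator's views of one buffer */
    unsigned char cpu_view[256];
    unsigned char accel_view[256];

    /* CPU builds a 2-node list using offsets */
    struct node a = { 42, 64 }, b = { 7, NIL };
    memcpy(cpu_view + 0,  &a, sizeof a);
    memcpy(cpu_view + 64, &b, sizeof b);

    /* simulate the buffer appearing at a different address on the accelerator */
    memcpy(accel_view, cpu_view, sizeof cpu_view);

    /* accelerator walks the list as base + offset - no pointer translation needed */
    uint32_t off = 0;
    while (off != NIL) {
        struct node n;
        memcpy(&n, accel_view + off, sizeof n);
        printf("value %d\n", n.value);
        off = n.next_off;
    }
    return 0;
}
```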

Das Keyboard 4C TKL: Plucky mechanical contender strikes happy medium between typing feel and clackety-clack joy

Mark Honman

Durgod

Based on a recommendation from a fellow commentard, I bought a Durgod TKL from the big river place's returns dept ("warehouse"). This one has regular reds which still make significant noise when they bottom out (I'm not a subtle typist).

On the whole I've been very happy with it, but would probably have gone for the silent reds if I knew then what I know now. The only weird thing is that the ~# keycap has the legend the wrong way round, but I've been working with Unix so long that this one's in muscle memory.

If not this keyboard, I probably would have gone for the Razer Blackwidow or maybe the Corsair K63.

Soft press keys for locked-down devs: Three new models of old school 60-key Happy Hacking 'board out next month

Mark Honman

Alternatives?

Friday afternoon question for fellow developer-commentards: What alternatives are there at a more affordable price level, say around £50?

I'd be looking for a KB with cursor keys, though.

FWIW I'm quite a "heavy" typist and therefore a big fan of old-school Thinkpad keyboards. My all-time favourite keyboard is the one on the HP2645 (yes, that dates me more than a little bit!)

Mark

We regret to inform you there are severe delays on the token ring due to IT nerds blasting each other to bloody chunks

Mark Honman

Also in the minicomputer era - HP1000s in this case - we had somehow acquired space invaders and pac-man for the HP 2645 terminals used on these systems (the terminals had an Intel 4004 CPU, if I remember right!).

Friday afternoons were when the system was taken down for its backup to tape, so the last thing we'd do was download these "diagnostics" to the terminal and then blast away while the backup was in progress.

Labour: Free British broadband for country if we win general election

Mark Honman

Re: For a given value of "free"

We already pay per mile due to the tax included in the petrol price.

That actually works well because it discourages gas-guzzlers and does not penalise lower-income people who rarely use their cars.

*Spits out coffee* £4m for a database of drone fliers, UK.gov? Defra did game shooters for £300k

Mark Honman

BMFA

Practically all the RC flyers belong to clubs that are affiliated to the BMFA (British Model Flying Association) or similar - that have their own database.

It would make a load of sense to require anyone who flies these toys to be a member of a club that's affiliated with one of those associations. Model flying clubs are strong on safety because the big boys' toys are so dangerous, and requiring membership would help new drone owners learn how to fly responsibly.

Amazon tried to entice Latin American officials with $5m in Kindles, AWS credits for .amazon

Mark Honman

Yep, it's not just a river - it's a common name for the whole Amazon basin region, probably close to half the land area of South America.

Venezuela floats its own oily cryptocurrency to save the world economy

Mark Honman

Re: Are sanctions effective?

Sanctions against South Africa definitely resulted in the pullout from Angola. We could not compete with the Russian weapons systems used by the Angolans & Cubans.

I don't think sanctions contributed much to the end of apartheid. Rather, the end of the cold war meant that the superpowers were no longer interested in sponsoring our internal conflict. I would like to believe that Gorbachev's reforms in the USSR gave the SA government the idea that they too could open up; but have no evidence!

Death notice: Moore's Law. 19 April 1965 – 2 January 2018

Mark Honman

Re: You do know that Moore’s law says nothing about speed?

Yup, it's more correct to say that the "golden era" of single-threaded computing is gone - a time when moving to the next process node would enable higher operating frequencies _and_ the doubled transistor count could be used for new architectural features - such as speculative execution - and improved performance through integration of functions that were previously off-chip.

Many of the architectural features that boost single-threaded performance are costly in area, and now that applications _must_ exploit parallelism to get improved performance there is a tipping point. If the applications scale well on highly parallel systems, for a given chip size more system performance can be had from many simple cores than from a smaller number of more sophisticated cores.

That is, provided the interconnect and programming model are up to scratch!
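To illustrate the tipping point with made-up numbers - one big core against sixteen small ones of the same total area, under Amdahl's law:

```c
/* Crude area-for-area comparison (illustrative numbers only): one big
 * core of relative single-thread speed 2.0 occupying the whole die,
 * versus 16 small cores of speed 1.0 each, for parallel fraction f. */
#include <stdio.h>

static double big_core(double f)   { (void)f; return 2.0; } /* one fast core */

static double small_cores(double f, int n)
{
    /* Amdahl: speedup = 1 / (serial part + parallel part / n) */
    return 1.0 / ((1.0 - f) + f / n);
}

int main(void)
{
    double fracs[] = {0.50, 0.90, 0.99};
    for (int i = 0; i < 3; i++) {
        double f = fracs[i];
        printf("f=%.2f: big core %.2fx, 16 small cores %.2fx\n",
               f, big_core(f), small_cores(f, 16));
    }
    return 0;   /* the small cores win once f is high enough */
}
```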

Causes of software development woes

Mark Honman

Foundations

Using the building analogy, it is possible to vary the construction along the way, as long as the building's foundations are sufficient. You can even add a second storey if the original foundations are deep enough.

Extensions are a pain because their foundations will have to be designed to become as one with the original foundations.

In terms of software requirements, then, it's important to get the big picture of what the system might grow into, as a starting point for system design. That allows one to make appropriate platform and system architecture decisions that should prevent the system running into a brick wall as it grows.

There are 2 problems, of course:

The more capable the platform & more sophisticated the architecture, the longer it takes to get to something that stakeholders can see "turning its wheels".

Ultimately every successful system hits that brick wall where its requirements have outgrown the foundations. Succession planning is essential; we should be thinking about scheduling "moves to a new building" well before they become necessary.

What remains very difficult is making the case for the cost-effectiveness of a re-write: as a system's actual requirements diverge from the requirements on which its design was conceived, there is a crossover point where the cost of maintaining multiple levels of workaround becomes greater than the cost of a re-write.

It's official: Users navigate flat UI designs 22 per cent slower

Mark Honman

Once upon a time UIs were all flat, then UX experts did some research and discovered that introducing 3D elements increased navigability. This must have been sometime around 1988.

Not surprising that modern research agrees, as meatspace hardware changes terribly slowly.

Why you'll never make really big money as an AI dev

Mark Honman

Good to have Dominic back

"Neural Networks were a joke in the 1980s. I built one, for a given value of "built" since it never ever did anything useful or even go wrong in a particularly interesting way."

Some Transputer-using friends got a bit further than Dominic, then... they trained their neural net with photos of team members, using a wheelie bin as a control. Despite their best efforts, the net never managed to distinguish the bin from the team's rugby-playing member.

British Gas wins pre-paid smart meter patent lawsuit

Mark Honman

Re: Prior Art

> The case is about associating payments directly with the meter number.

Yes, that is how STS works. IMO it is about the only sane way to handle credit transfer to prepayment meters.

STS was originally intended for developing-world markets, where many customers are illiterate. Typically the customer is given a magstripe card with the meter number on it. This is all that is needed at POS for the customer to make their purchase.

Mark Honman

Re: Prior Art

Not just that, but in the STS specification for prepayment credit transfer the meter serial number is the unique ID of the meter. STS has been around since the mid-1990s.

As a side-note, one of the cute things about STS is that the serial number is part of the credit encryption scheme; therefore in a wireless system it is feasible to broadcast the credit tokens...
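Very loosely, the idea looks like this - and I must stress this toy sketch is nothing like the real STS/IEC 62055-41 scheme, it just shows why a serial-number-keyed token can be broadcast safely:

```c
/* Sketch of the *idea* only: a credit token is produced under a key
 * derived from the meter serial number, so broadcasting it is harmless -
 * only the matching meter recovers a sensible credit value.
 * The "cipher" here is a toy XOR, nothing like the real STS algorithms. */
#include <stdio.h>
#include <stdint.h>

static uint32_t key_for_meter(uint32_t serial)        /* toy key derivation */
{
    return serial * 2654435761u;                      /* multiplicative hash */
}

static uint32_t make_token(uint32_t serial, uint32_t credit_units)
{
    return credit_units ^ key_for_meter(serial);      /* toy "encryption" */
}

static uint32_t redeem_token(uint32_t my_serial, uint32_t token)
{
    return token ^ key_for_meter(my_serial);          /* only right serial recovers credit */
}

int main(void)
{
    uint32_t token = make_token(12345678u, 50u);      /* vend 50 units for meter 12345678 */
    printf("meter 12345678 sees %u units\n", redeem_token(12345678u, token));
    printf("meter 99999999 sees %u units (garbage)\n", redeem_token(99999999u, token));
    return 0;
}
```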

Hot iron: Knights Landing hits 100 gigaflops in plasma physics benchmark

Mark Honman

Re: 100GFLops at what power?

Tom, sorry to say that those Adapteva numbers are "guaranteed never to exceed" ones; in this case even more so than usual because Adapteva didn't get enough good 64-core Epiphany-IV chips to fulfil the kickstarter orders.

102GFlops corresponds to all cores doing solely fused multiply-add operations, and ignores the problem of where the data is coming from (i.e. nothing like any real application or even a benchmark). On Parallella the DDR is attached to a Zynq ARM+FPGA hybrid meaning about 300MB/s maximum RAM bandwidth, and the Zynq uses about as much power as the Epiphany.

IIRC the 2W figure is for the 16-core Epiphany-III - but it is still a good GFlops/W figure.
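For reference, the 102 GFlops figure is just the theoretical peak of every core retiring one fused multiply-add per cycle - a quick sanity check, assuming the 800MHz clock of the 64-core part:

```c
/* Back-of-envelope peak for the 64-core Epiphany-IV, assuming an
 * 800 MHz clock: every core retiring one FMA (2 flops) per cycle. */
#include <stdio.h>

int main(void)
{
    double cores = 64, clock_ghz = 0.8, flops_per_fma = 2;
    printf("peak = %.1f GFLOPS\n", cores * clock_ghz * flops_per_fma);  /* 102.4 */
    return 0;
}
```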

But Kudos to Adapteva for trying - I had high hopes for the Parallella when it came out, but to my surprise it led me into the world of FPGAs.

If managing PCs is still hard, good luck patching 100,000 internet things

Mark Honman

On the oil rig there will already be a centralised control system, and raw data feeds into that. Not an IoT scenario, I'm afraid.

The concept of very dumb "connected sensors" (Cortex-M0 at most!) is best for domestic use - with possible data concentration before upload to the processing centre.

By data concentration I mean elimination of no-change or insignificant-change data - massively reducing data volumes.

But that too is a unidirectional data flow, and security is best served by that edge box not being remotely accessible.
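For the curious, the concentration step I mean can be as simple as a report-by-exception filter like this (names and the dead-band value are invented for illustration):

```c
/* Minimal report-by-exception filter: only forward a reading when it
 * differs from the last reported value by more than a dead-band.
 * Threshold and names are invented for illustration. */
#include <stdio.h>
#include <stdbool.h>
#include <math.h>

#define DEADBAND 0.5   /* e.g. half a degree */

static double last_reported = -1e9;

static bool should_report(double reading)
{
    if (fabs(reading - last_reported) <= DEADBAND)
        return false;               /* insignificant change - drop it */
    last_reported = reading;
    return true;
}

int main(void)
{
    double samples[] = {20.1, 20.2, 20.3, 21.0, 21.1, 25.7};
    for (int i = 0; i < 6; i++)
        if (should_report(samples[i]))
            printf("upload %.1f\n", samples[i]);  /* only 20.1, 21.0, 25.7 go upstream */
    return 0;
}
```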

Miguel de Icaza on his journey from open source to Microsoft: 'It's a different company'

Mark Honman

@thames

Brilliant insightful comment - el Reg should pay you to expand it into an article on the subject.

Stalled cloud growth, software flatlining, hated Lumias unsold... It's all fine, says Microsoft CEO

Mark Honman

IBM used to be the evil callous gorilla of the IT world; somewhere in the 1990s they realised their stranglehold was lost and the only way to stay in business was to build positive (as opposed to extortionate) relationships with their customers.

I hope & expect that Microsoft will do the same, probably retreating to being the world's leading provider of enterprise client computing software (yes, with the cloudy bits).

There are bits of the company that are showing signs of the new "give the customer what they want" behaviour, and I look forward to the "RS6000 moment" when Microsoft products can be recommended on their merits and without fear of lock-in.

Ever wondered what the worst TV show in the world would be? Apple just commissioned it

Mark Honman

[computing books] Re: apple wants to video about computer stuff ?

Another great computing book

"Dealers of Lightning" by Michael Hiltzik

And does anyone remember Tracy Kidder's "Soul of a new machine"?

On the deficit side, sometime in the early 1970s a book called "the Glitch" (IIRC) was published - this claimed that electronic circuits were frequency-limited by "glitches" that increase with frequency (probably referring to metastability) and that the computer industry was doomed, doomed...

Mud sticks: Microsoft, Windows 10 and reputational damage

Mark Honman

Re: I'm a bit confused

> Microsoft has reached the point where optimizing and tuning is what they're focusing on.

Well I think that is what they *should* be focusing on.

Hence why massive changes in the user interaction model are not going down well with people.

They really don't need to do much: just provide UI continuity while as you say, focusing on optimising and tuning what exists.

I would happily pay a subscription for ongoing OS support (security/technology upgrades) for that optimising and tuning.

You've seen things people wouldn't believe – so tell us your programming horrors

Mark Honman

HP1000 G-code

In the 1990s I did some maintenance work on a process-control system (HP1000 minicomputers, all of 2MB main memory - running about 100 processes - amazing what one could do in the old days).

One of the original developers had passed into management, which was just as well as, while a very nice person, he had limited coding skills. He had done a lot of the Pascal code in the system and had fallen in love with CASE - it was used everywhere, almost to the exclusion of IF, and the code was peppered with 100-line-long (or more!) CASE constructs. In that system the assembly code was generally much better written, commented, and documented than the Pascal code.

One of the tasks in this project was to move to 50MB disk drives, which meant a different driver set & thus different OS (RTE) memory layout, requiring a complete rebuild of the system from source.

Except - one little time-difference calculation function was missing. So we (myself, another young'un, and the boss) looked at the code that was using it, reverse-engineered its functionality, and re-created it (all of 15 lines IIRC).

However... it was our bad luck that the program whose compilation had failed in the big rebuild was a G-code masterpiece, and when the system went live _every other_ program that used TDIFF was getting wrong date calculation results.

'Unikernels will send us back to the DOS era' – DTrace guru Bryan Cantrill speaks out

Mark Honman

Re: Cores are cheap, it's how to use them...

Actor model - now that's interesting... from that brief description it looks like it's rather like the traditional RTOS message-passing model (now I will _really_ show my age - HP1000 RTE class I/O?).

Synchronous vs. asynchronous message passing is one of those wonderful design trade-offs. On the one hand the predictability of synchronous interactions makes it easier to reason about what is going on in parallel software, and the very simple rules make direct implementation in hardware feasible.

On the other hand in the real world one always seems to need a certain degree of decoupling between processes (easy enough to provide in a CSP environment via lightweight buffer processes, as long as the buffering depth is bounded).

The most useful form of buffering seems to be a depth of 1, on the data-producer side of the data path. Enough to release the producer to get on and produce more data, but avoids dynamic resource management overheads and still allows one to reason about where producer and consumer are in their respective control flows?
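In plain C terms, the shape of that depth-1 producer-side buffer is something like this - a single-threaded sketch of the discipline only, with no real CSP scheduler or synchronisation:

```c
/* Sketch of a depth-1 "buffer process" on the producer side: the
 * producer can run ahead by exactly one item; beyond that it must
 * wait for the consumer.  Single-threaded illustration only. */
#include <stdio.h>
#include <stdbool.h>

struct slot { int item; bool full; };     /* the one-deep buffer */

static bool try_send(struct slot *s, int item)
{
    if (s->full) return false;            /* producer would block here */
    s->item = item;
    s->full = true;
    return true;
}

static bool try_receive(struct slot *s, int *item)
{
    if (!s->full) return false;           /* consumer would block here */
    *item = s->item;
    s->full = false;
    return true;
}

int main(void)
{
    struct slot ch = {0, false};
    int got;

    printf("send 1: %s\n", try_send(&ch, 1) ? "ok" : "blocked");
    printf("send 2: %s\n", try_send(&ch, 2) ? "ok" : "blocked");  /* blocked: depth is 1 */
    try_receive(&ch, &got);
    printf("received %d, send 2 retry: %s\n", got,
           try_send(&ch, 2) ? "ok" : "blocked");
    return 0;
}
```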

Mark Honman

Re: Cores are cheap, it's how to use them...

> Transputer and Occam not worth mentioning in the context of efficient message-passing?

I wanted to emphasise that CSP is even more applicable now than 30 years ago. While Occam and the Transputer were so closely coupled that the CSP primitives had equivalents in the instruction set, the XMOS take on CSP is IMO very clever because it addresses the problems that limited adoption of the Transputer and Occam. Specifically, the computational performance advantage of a RISC architecture, and reluctance to write substantial software in new languages.

And £10 will get one an XMOS devkit to play with...

As for me, I'm very happy to be working for a "Transputer company" that _still_ has a modular multiprocessing vision; even more so that when we had a clear-out last year there were some TRAMs needing a new home..

Mark Honman

Cores are cheap, it's how to use them...

Having taken the plunge into embedded software development, the unikernel concept doesn't seem very different from the standard embedded way of doing things.

Perhaps a difference in that in embedded-land, specs are much "harder" and more clearly thought-through, and testing much more thorough than in web-service land.

However there is a better way, that (of course) was first developed in concept more than 25 years ago.

That is, a bunch of unikernels that are functionally separate and only interact by exchanging messages. This re-introduces isolation, both conceptual and physical, without corresponding overheads. The model is called Communicating Sequential Processes.

CSP also includes a concept of low-overhead fork/join and yes, works best with languages that are amenable to high-quality static analysis (so that resource requirements can be computed in advance). XMOS microcontrollers and their C-derivative language XC are the modern-day examples; how I wish they'd produce some proper _fast_ processors!

The key idea with the XMOS chips is that there is direct hardware support for all the CSP constructs, so message-passing overheads and task switching are very efficient.

Back in the land of normal processors these kinds of facilities are not available; but cores are now so cheap that it is possible in concept to dedicate a core to managing each peripheral. I'm not thinking of the x86 architecture here, but devices like the lowRISC chip, Zynq MPSoC, and especially TI's DaVinci media processors.

What is still lacking is decent message-passing hardware in mainstream processors, so that hand-off of requests to lightweight coprocessors does not need to go through a memory-mapped interface (where the resource management handshaking always seems to get ugly).

Four Boys' Own style World War Two heroes to fire your imagination

Mark Honman

Re: Some more good reads

Sorry, "Brazen Chariots"

Mark Honman

Some more good reads

Robert Crisp: http://www.theguardian.com/sport/2013/mar/05/the-spin-bob-crisp-amazing-life

His books "Blazing Chariots" and "The Gods were Neutral" are brilliant reads.

And there's Deneys Reitz, who wrote "Commando" about fighting the Brits during the Boer War, and "Trekking On" about serving with them in WW1.

Intel completes epic $16.7bn Altera swallow, fills self with vitamin IoT

Mark Honman

Real-life application: GbE smart switch

Have actually opened one up to change a fan, and noticed it had a Xilinx Spartan FPGA in it.

That's a good example of FPGA sweet-spot where you need significant bandwidth (16Gbps), some smarts (MAC filtering, VLANs, web GUI), updatability, and perhaps the ability for others to OEM the product with their own firmware branding and "secret sauce".

Nokia, ARM, Enea craft new TCP/IP stack for the cloud

Mark Honman

Re: TCP Offload?

Despite the cloudwashing, from the partner list this looks to be an embedded systems thing - so not much that can be offloaded.

It is probably more cost-effective to add another cheap processor core than integrate a high-performance NIC that can't do anything useful if there isn't TCP offload work going.

From embedded-land it is certainly interesting - the heavyweight TCP/IP processing in the Linux kernel was too much for one of our projects, and we had to use bare-metal software (and lwIP) to get full performance from the SoC's gigabit connection.

Microsoft makes Raspberry Pi its preferred IoT dev board

Mark Honman

Re: No comprende

> Except when the device has to communicate its data elsewhere. Or talk to the smartphone app that controls it. Etc,

Those don't need complex layered protocols. If the functionality is simple, the protocol can be simple. e.g. a sensor application will be returning the same data over and over - so it can be as simple as encapsulating the binary data in a UDP packet & handing that over to the network hardware. (I won't go into the security questions here, the trade-offs involved would make for an essay, but suffice it to say that cheap microcontrollers like the XMEGAs have AES encryption support in hardware...).
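In other words something like this - the payload layout and the net_send_udp() helper are hypothetical stand-ins; a real build would hand the bytes to lwIP or the radio module's own stack:

```c
/* Sketch of a fixed-format sensor datagram.  The payload layout and
 * net_send_udp() are hypothetical stand-ins for whatever the hardware
 * or a stack like lwIP actually provides. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

struct sensor_report {
    uint32_t device_id;
    uint32_t sequence;
    int16_t  temperature_centi_c;   /* 2153 = 21.53 C */
    uint16_t battery_mv;
} __attribute__((packed));

/* hypothetical helper - replace with the platform's real send call */
static void net_send_udp(const void *payload, size_t len)
{
    (void)payload;
    printf("would send %zu-byte UDP payload\n", len);
}

int main(void)
{
    struct sensor_report r = { 0x0001u, 42u, 2153, 2980u };
    uint8_t buf[sizeof r];
    memcpy(buf, &r, sizeof r);       /* same few bytes, over and over */
    net_send_udp(buf, sizeof buf);
    return 0;
}
```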

Bluetooth is handy for smartphone comms - and is increasingly integrated in microcontroller hardware and supporting bare-metal software stacks. e.g. Cypress PSoC BLE (which would be my favourite platform for hobbyist projects). So, don't need an OS for that.

lwIP is a free TCP/IP stack that can be integrated with an RTOS (e.g. FreeRTOS) or used bare-metal. It doesn't force a requirement for an OS. There are also wi-fi modules available that allow one to offload the wi-fi and TCP stacks.

Mark Honman

No comprende

I'm still failing to understand why anyone would want to run a full-fat OS on an embedded sensor/controller device. Even a "concentrator" would do fine with a lightweight RTOS.

The point of an OS is sharing resources between functions; when the device is single-function there is no reason to have something to manage the hardware resources. And no need for complicated layered communication protocols when the device has a single function.

Having worked on bare-metal embedded products (including ethernet-connected ones) AND traditional Windows and Unix software, it's clear that each platform has its place. However hardware size and cost dominate all considerations (including development costs) in commodity embedded systems, and any processor that costs more than $2 (in volume) probably won't get a look-in.

I was going to end there, but there are 2 other dimensions to embedded computing that make Windows a non-starter.

First, support life - Microsoft just loves to hype up new tools, only to obsolete them a couple of years later. A device manufacturer wants to know that once the expensive development has been done, the product can be manufactured & sold for as many years as it remains competitive in the market.

Second, non-portability. Since the RPi design can be licensed it is feasible for device manufacturers to build the relevant bits of it into their designs; but the device manufacturer loses the negotiating power of being able to say "mr chip vendor, give us a better price or we build these million boards using someone else's processor". Every penny saved on components goes straight to the manufacturer's bottom line, and some firms have "value engineering" teams whose sole aim in life is to remove components from successful designs, or substitute cheaper alternatives.

Most developers have never seen a successful project

Mark Honman

To me, "success" is when the system is in use for 5+ years, "really good" is when it is successfully adapted to changing requirements during the maintenance phase.

And what's really nice is to hear via the grapevine, when the system is eventually replaced, that it was due to changing fashions in technology and that the end-users want it back.

Projects that were delivered but could not be deployed due to business reasons (re-orgs, Marketing Menopause etc.) also count as successful.

AMD sued: Number of Bulldozer cores in its chips is a lie, allegedly

Mark Honman

Re: Everyone knows

There is also the precedent of classic SIMD machines such as the Connection Machines CM-2.

In its day it was always referred to as a 64K-processor machine (or maybe, to the pedants, 64K PEs); it had one Weitek FPU per 32 processors, and being SIMD I'd assume there were no per-CPU instruction fetch/decode units.

Oh, and BTW each processor was 1 bit wide...

'T-shaped' developers are the new normal

Mark Honman

Re: B-O-L-L-O-C-K-S-!

Agile works well when there is a single on-line system where the goal posts are always moving, and mistakes are not costly.

Waterfall is still the best for well-specced mission-critical systems or ones that cannot easily be upgraded.

And in between, where most systems fall, the best methodology is a blend: for example use of prototypes (agile mode) to nail down requirements for a waterfall phase which is rapid and productive due to the elimination of unknowns. For each application there is an optimum point of change frequency and scope where customers are willing to wait a certain amount of time for changes in exchange for greater stability, predictable release cycle, and the ability to schedule end-user training to avoid loss of productivity due to end-users having to work things out for themselves.

Mark Honman

Stripping away the buzzwords, I'm highly entertained

As that's the way I, and most of my fellow South African devs, have always worked.

Although it should be more of a V than a T, i.e. it really helps to know a good deal about topics closely related to one's speciality. e.g. in my case it's the intersection of real-time and high-performance computing, but I can and will do GUI and database work if that helps get the product delivered.

It does make working in a multi-disciplinary team good fun, making problem-solving a collective exercise rather than a scrap about whose side of the wall has the root cause.

Junk your IT. Now. Before it drags you under

Mark Honman

For as long as I've been developing software - 30 years now - there has been the problem of software projects being expensive and late. Over time the "industry" has tried to address this by providing frameworks of various kinds that are supposed to either take work off the developers' shoulders, allow more developers to tackle a given project, or allow cheaper (= less skilled) developers to be used.

That leads inevitably to a profusion of modules and interfaces (to prevent the multitude of developers tripping over each other), acres of crap code (due to hiring of "cheaper" staff), and frameworks that are themselves bloated as their feature-set grows to make them more all-encompassing.

And the net result is that the software is still late, but bloated and inefficient, wiping out the gains from Moore's law.

When you add to that the inevitable management push to deploy software before it is truly ready, and the massive organisational cost of end-user retraining (never mind hours lost due to bugs in the new software), the cost of an IT change is very much more than the development cost of a new system.

I should be the first one to admit that I'm not at all keen to maintain someone else's manky old code. Even though when hunting for bugs in old code it feels that it would be so much easier to rewrite the whole thing from scratch than to get into the heads of the guys who wrote it, the reality is that by the time one is 2/3 of the way through a rewrite the "new" system has become too big for any of the team to grasp in its entirety, and the cycle is well on its way to repeating itself.

Dear do-gooders, you can't get rid of child labour just by banning it

Mark Honman

Re: Intern slave labour

Interns are not forced to work - it is their choice.

For my part I benefited greatly from holiday jobs in industry that paid a token amount. It helped me decide what I wanted to do with my life, and when looking for real employment I could offer real experience and references.

Now that I'm an old fart, it has been my pleasure to have interns working with me and to develop their skills and see them move on to fulfilling paid employment.

Feeding the XPoint cuckoo and finding it a place in the storage nest

Mark Honman

Re: 3D XPoint is a new form of RAM, not SSD

Yes - if anything it reminds me of the merits of magnetic core, without the poor density and high power consumption.

The most interesting use, short-term, would be paging store. Systems already have a mechanism for paging stuff in and out of memory, but it's pretty useless these days as mechanical drives are too slow/thrashy and SSDs wear out.

If you view the paging mechanism as providing a RAM cache of the contents of the backing store (as if it were just a giant binary file that has been memory-mapped), the application is pretty obvious.
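On a Unix-ish box the existing machinery already gives you exactly that - here's a bog-standard POSIX mmap() sketch (the file name is made up) where the OS pages chunks of the file in and out behind an ordinary pointer:

```c
/* The existing "RAM as a cache of the backing store" machinery:
 * memory-map a large data file and let the OS page it in on demand.
 * Standard POSIX mmap(); the file name is made up. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("dataset.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    /* the whole file appears as one array; pages are faulted in as touched */
    const double *data = mmap(NULL, (size_t)st.st_size, PROT_READ,
                              MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    double sum = 0;
    for (size_t i = 0; i < (size_t)st.st_size / sizeof *data; i++)
        sum += data[i];              /* dataset can be far larger than RAM */

    printf("sum = %g\n", sum);
    munmap((void *)data, (size_t)st.st_size);
    close(fd);
    return 0;
}
```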

For HPC this means that the size of the data set can exceed the size of available RAM; and as RAM no longer limits the data set size, new RAM tradeoffs become possible. The amount of RAM can be chosen to suit the number of pages that are "hot" at any given time, thereby reducing cost/heat/size. The savings then become available for more processing power.

There is one element that would be a handy addition to the present virtual-memory model: a prefetch capability (analogous to that used in floating-point DSPs) that would extrapolate data-access patterns to identify data that should be brought into higher levels of the memory hierarchy before it is needed.

By "higher levels" I'm really thinking of processor caches rather than DRAM, which is pitifully slow in comparison to the performance of HPC compute engines.
