Re: Well that was an invisible problem
See my response to solarflare above. ;-)
Sorry I couldn't make it yesterday!
It's pronounced to rhyme with "lorry", but I get that a lot... ;-)
(I wrote this up a while ago for submission to El Reg, but was never quite happy with it. All names changed to protect the allegedly innocent.)
My first job was in the mid-90s, working for a big company with serious political dysfunctions. One of the best demonstrations of those issues is something I refer to as "That time I was told to steal a £14,500 switch"...
I was working as tech support in a building that was full of outsourced telephone support desks for some very big IT names. One of my colleagues - let's call him Dave - had just moved from working on one of those helpdesks into a project management role. We got an email from Dave saying that a new network switch had arrived, and could we please locate it and configure it?
(In the mid-90s, a network switch was an exotic bit of kit. These were the heady days of the new 100Mbit "Fast" Ethernet. Switching wasn't a feature on hubs as it is today, it was a function for dedicated hardware. And in this case, it was a 12-port 10/100 3COM switch which, including taxes and delivery, cost a little over £14,500.)
Dave explained that a new helpdesk was going to go live, and there were concerns over the performance of the database server that handled scheduling of hardware engineer visits. That was millions of pounds of business each year, and therefore probably the most valuable server in the building - possibly in our division. Analysis had shown that the server's performance was fine, but it was homed on a network with at least 100 clients and yet more traffic from a WAN uplink. Network congestion was most likely the issue.
My boss dispatched me to find the switch. It wasn't at Dave's desk. It wasn't with reception/facilities, who said that they had delivered it to Dave's desk. Dave had only recently changed jobs, so it was likely their information was out of date - I went to check his old desk...
And it had been there.
But it was now with Terry.
Terry was a man of initiative, and had decided that as this had been delivered to Dave's old desk the switch therefore belonged to Terry's department. A short but futile conversation left me certain I wasn't going to leave with the switch, as Terry had repurposed it as "something I can put on my CV". (He didn't quite phrase it that way, but the meaning was clear to us both.)
To complicate things, Terry's department was a flagship project. Big name client, used as a case study, on the tour route for all visiting potential customers - they had serious political clout in the company.
Dave was on leave, and this was 1996 so he didn't have a mobile phone. But after some calling around we managed to get hold of him, and he confirmed that the switch was ordered under his new budget code. He was very unhappy to hear that "his" switch had been poached. It was made clear that there was a deadline for setting up this new helpdesk - his main concern was that Terry might plug the switch into his department's network (they managed their own IT to some degree), and that would make it hard to get back due to their political capital.
My boss assured Dave that this would be handled. We hung up the phone, and I was ordered to go and steal the switch.
Not exactly how I'd planned my day.
It was approaching lunchtime, so I slid round to a helpdesk adjacent to Terry's, and began to very slowly diagnose a non-existent fault on a PC. The moment Terry went to lunch, I pounced. I swiftly disconnected the switch from its serial cable, repacked it, and carried it off. Now we had to decide what to do with it. My boss decided to lock himself in our small office, and read the manual. I was sent out to distract Terry, and do all our pending jobs in the process. On my way out of the door, I grabbed the empty box.
"What are you doing with that?", my boss asked.
"Decoy" was my response.
The ground floor server room was a repurposed meeting room - so it had glass windows. I dashed in, sat the box on the workbench, and then left - making sure to lock the door as always.
I spent much of the afternoon running around the building in as unpredictable a pattern as possible. I kept dropping into conversation that we were more busy than usual, and I had to go to $department next - knowing full well that I was going elsewhere. On returning to one helpdesk, I heard that Terry was looking for me. Eventually I bumped into him, and found out that he too had been busy - he knew the switch was in the ground floor server room. Eager to help, I went to fetch the key - but never returned, having been diverted by a faulty computer on the way. Anyone who's done desktop support will know the kinds of distractions that can drag you somewhere unexpected. That afternoon, I made sure that they all did.
At six in the evening, I dropped in to our office. My boss was still reading the manual. I was sure that Terry would have gone home by now - his helpdesk closed at five - but I headed back out on the distraction trail anyway. At seven thirty, I got paged (remember pagers?) and returned to our tiny office to hear the plan my boss had come up with. Then we went home.
The next day, shortly past nine, Terry dropped by our office.
"I want my switch."
"It's not yours."
"It's ours, we're a flagship desk, and I want it."
My boss adopted a soft, conciliatory tone. "OK, let's go and fetch it."
We walked to the server room, unlocked the door, and ushered him in.
"There it is. Help yourself."
Terry was both livid and crestfallen at the same time.
My boss hadn't just read the manual the previous day, but had also written and uploaded a configuration for a switch he'd never seen before.
We'd been in since before six, and had racked and cabled it and cut all services across to it - a WAN uplink for the building, a link for each of the local hub stacks (remember 3Com 100Mbits backplane connectors?), a link each for the Exchange, IIS and File/Print servers... And a link for Holly, the multi-million pound database server.
A single network cable whose traffic was worth more money than most people will earn in their entire career. If Terry wanted his switch, all he had to do was unplug that cable.
Terry left without his switch.
That long day and following early start was worth it. Not just for the satisfaction of a job well done, but in other ways. For example, one of the helpdesks ran Doom/Quake servers at lunchtime to help relieve employee stress, and apparently the switch made a noticeable difference to their performance. I was gifted many, many free beers for that.
And finally, I should note that Dave showed great promise as a project manager.
He took all the credit for our work.
In 1990, my money would be on either IPX/SPX or NetBIOS Frames (NetBEUI).
If the office had a Novell Netware server doing file/print, then the former. If they had OS/2 doing that for them, then the latter. Not that this will be news to anyone who was there at the time, mind!
I started my first job in 1995, and never saw a Banyan Vines network - although I often saw it in documentation as supported by products.
After a bit of research I've found that 3COM had an old network protocol called 3+ that was based on XNS, but by the time of this story they'd thrown that out and joined Microsoft on the LAN Manager NetBIOS and IPX/SPX train. By the time I started work, 3COM was mostly associated with the hardware layer - network cards, hubs, and those newfangled switches...
Loath though I am to defend Microsoft, this really wasn't their issue.
The important thing to remember here, which isn't mentioned in the article, is that in 1990 switching wasn't really a thing for the average network. It would have been hubs, broadcasting every packet to every machine, with the network card simply ignoring anything that's not for its own MAC address.
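To make that difference concrete, here's a minimal sketch (not any real device's firmware, and the port/MAC names are made up): a hub repeats every frame out of every port, while a learning switch remembers which port each source MAC address was last seen on and sends known unicast traffic out of just that port.

```python
# Minimal sketch: hub flooding versus MAC-learning switch forwarding.

def hub_forward(ports, in_port, frame):
    """A hub repeats every frame out of every port except the one it arrived on."""
    return [p for p in ports if p != in_port]

class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # source MAC address -> port it was last seen on

    def forward(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port      # learn where the sender lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]   # known destination: one port only
        return [p for p in self.ports if p != in_port]  # unknown: flood like a hub

ports = [1, 2, 3, 4]
sw = LearningSwitch(ports)

# First frame, host A (port 1) to host B: B is unknown, so the switch floods.
flood = sw.forward(1, "mac-A", "mac-B")
# B replies from port 2; the switch has learned A's port, so only port 1 sees it.
reply = sw.forward(2, "mac-B", "mac-A")
```

That second step is exactly what the hubs of the era couldn't do - every machine's card saw every frame, wanted or not.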
To put that into context, I remember the first switch I ever saw. It was 1996 (IIRC). It was a dedicated 1U rackmountable unit from 3COM that had twelve 10/100Mb ports and cost around £14,500.
Yes, that's fourteen and a half grand.
I remember it well, mostly because I was ordered to steal it from a rival department. (It's a long story. Maybe some other time.)
Again, for context, our hub stacks were 3COM 24 port 10Mbs units, with the 100Mbs backplane connectors grouping them in lumps of 4, and a dropdown cable between each group that glowed a soft red when the teams started playing DOOM at lunchtime...
When we implemented the switch we removed the dropdown cables and plugged each stack into the switch itself, along with our primary SQL Server, the domain controllers, the IIS server (because "intranet" was the latest buzzword), the Exchange server and the WAN link. That really eased up network traffic both for WAN and local users, and was regarded as £14,500 well spent.
These days, if you buy a network device that costs more than about £40 it'll have switching built in, and the scenario described here could never happen on that network. Only the cheapest of kit, or wireless networks (for obvious reasons), have no switching capabilities.
Of course, everyone probably has stories about managers decreeing that "we won't pay for cabling that new floor we've expanded into, we'll use wireless as it'll be cheaper" and then the network grinding to a halt every day between 08:30 and 10:00 as everyone logs on and Windows pulls down their profiles... ;-)
That'll be the closest we'd get to this story these days.
The number of reported flaws isn't a great metric. It's just part of how security should be evaluated.
For example, Windows has far more security issues reported, yet it's still used. And the same can be said of many Linux distributions.
A key difference is that Drupal is just a CMS, and therefore a smaller project - which could lead to an expectation of a lower number of incidents. Balancing that, as a CMS Drupal is under constant attack because it runs on some pretty valuable sites.
What really matters is how security issues are handled. Drupal seem to have a good, responsive security team that has a good handle on it. And in later versions they've tried to prioritise security in their development processes, which is also a good sign.
(Disclaimer: I use Drupal for my own personal website, but not in any professional capacity.)
A different Jet. There were two streams of Jet - Jet Red was the Access database, and Jet Blue was the enterprise variant used in Active Directory and Exchange Server (amongst other products).
Jet Blue became ESE (Extensible Storage Engine), and is very different to Jet Red - in that it's actually reliable and half decent.
Fun fact - sharing a database engine is why Small Business Server died. The AD team and Exchange team used different versions of Jet Blue/ESE. Neither liked the idea of being forced to upgrade to a later version of it because of the other team, and it made support difficult as patching one product might break the other.
This is why it's not at all supported to install Exchange on your AD controllers - it will likely result in issues with your mail databases or - worse - your AD database.
I'm a Debian kinda person, but I know that RHEL uses XFS as its default filesystem for recent versions - so this seems like a fairly dumb move. And OpenSUSE seems to use XFS for /home in recent versions.
They should at least support both ext4 and XFS on that basis alone.
The xattrs reason is plainly not true, as there's a bunch of filesystems that support xattrs perfectly well. One interesting comment I saw on Reddit seems to have a possible answer:
Basically Dropbox may have used a particular attribute as an identifier. That attribute is static on ext4, but may change on XFS. If that's the case then this is nothing to do with xattrs, and everything to do with a bad assumption on the part of Dropbox's development team. (I'm guessing they use it to determine whether a file is the same but changed versus a completely new file which replaced the old one.) They assumed all filesystems would behave like ext4, and now they're finding that this isn't the case and there are some edge cases they didn't expect.
If this is the case then rather than fix the problem they created, they've decided just to shift the blame and drop customers who they failed...
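The comment above doesn't name the attribute, but purely as an illustration of the kind of assumption described, suppose a sync client keyed file identity on the inode number (everything in this sketch - the function name, the logic - is hypothetical, not Dropbox's actual code). In-place edits keep the inode, but the very common "write a temp file, then rename over the original" pattern produces a new one:

```python
import os
import tempfile

# Hypothetical sketch: treating (device, inode) as a stable file identity.
# Safe-looking on ext4 for in-place edits, but not a portable guarantee.
def file_identity(path):
    st = os.stat(path)
    return (st.st_dev, st.st_ino)

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "doc.txt")
    with open(path, "w") as f:
        f.write("v1")
    original = file_identity(path)

    # Editing the file in place keeps the same inode...
    with open(path, "a") as f:
        f.write(" plus an edit")
    same_after_edit = (file_identity(path) == original)

    # ...but an atomic replace (write temp file, rename over the original)
    # allocates a new inode, so identity-by-inode sees a brand new file
    # even though the path and the user's mental model say "same file".
    temp_path = path + ".tmp"
    with open(temp_path, "w") as f:
        f.write("v2")
    os.replace(temp_path, path)
    same_after_replace = (file_identity(path) == original)
```

If a filesystem (or even just an editor's save strategy) breaks that identity assumption, "same file, changed" and "new file replacing the old" become indistinguishable - which is consistent with the edge cases described above.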
I suspect that, if done well, Notes/Domino will have a decent niche in the future for companies that don't want to go cloud.
Microsoft is very committed to O365, and that means that future versions of Exchange will both lag behind in features, and perhaps even not get them at all. Some of those features will be obscure back-end ones - but when it comes to things like web access, it might be more obvious.
For example, if O365 gets a new web interface optimised for tablet displays, do you really want to bet that it's going to land in the next Exchange patch? More likely you'll have to wait six months to a year - and then there's the delays in actually applying that patch to your own infrastructure, such as testing and change control.
Notes/Domino might actually become a sensible choice for on-premises because of that. And even if it doesn't, hopefully it'll give Microsoft a reason to compete for on-premises business, rather than simply drive everyone to the cloud.
Having spent fifteen years working with both Notes and Exchange (from 1998-2014 - didn't quite get sixteen years!) I'm a little surprised to read this.
I'd venture to suggest that any admin who said that isn't a Notes admin, but instead a Windows admin who was forced to work with Notes and had no training in it - or willingness to learn.
The major problems are usually ID files and a lack of AD integration. So yes, user administration has a couple of challenges - but later versions of Notes have an ID Vault which helps a lot with that. And even earlier versions (6 onwards?) had password recovery, which also helps a lot - just generate a recovery key and start using it, and your helpdesk will be able to reset passwords much more easily.
User administration around ID files was certainly the weak point.
But the server itself was solid, capable and had a lot of great options. Its handling of storage, mail routing, and replication was generally much better than the competition. For example in a multiple site, WAN-linked environment moving users around is a snip - and very reliable compared to Exchange, which often fails repeatedly on bigger mailboxes. (I've actually had to resort to exporting a mailbox, transferring it as a PST using Robocopy, then re-importing the data at the destination!)
I'd definitely agree that at the SMB level Exchange is better due to its integration with AD. But frankly, that market is being ceded to the cloud anyway. However, as your system grows in scale, Notes definitely starts to be much more attractive to administer. It's not without some faults, but I've seen far worse products - and I much preferred administering large Domino environments to large Exchange ones.
I'm more surprised that you found people who liked the user experience - that was what I usually got complaints about!
Microsoft had noticed how embedded Notes was becoming in enterprises, and modelled the Outlook/Exchange combo on the Notes behemoth. But Microsoft, weirdly, never took advantage of the sprawling engine it had created, with its immense flexibility for categorising data and creating custom views on it. Only a tiny proportion of the power in Exchange/Outlook is ever used.
Exchange/Outlook certainly never lived up to the power that was promised in terms of Public Folders and custom forms. A lot of that is down to what I'd call The Microsoft Developer Problem.
Microsoft tends to develop programs with tools it already has - until recently they had a strong streak of Not Invented Here. This tends to lead to products that can be described as "designed by developers for ease of development", with less concern for users or administrators than those groups might like.
By contrast Lotus Notes was designed for collaboration and customisation from the ground up, and many of its design decisions were taken (and boy will this be controversial!) for ease of use for the average user.
Heck, Lotus Notes didn't get a dedicated design client until version 5, if I recall correctly. You could build a new database in the same client you used to access your email and databases, and assuming you had a friendly Administrator willing to put it on a server, you could be up and running within a day or two. And as a Lotus Notes Administrator back in the day, I can confirm that I did see enthusiastic business colleagues outside of IT bring me little databases they'd developed to help their team work!
By contrast, Microsoft reached for what it had. The database was ESE, which is OK but didn't lend itself to the same kind of unstructured storage because it still has some notion of tables. That's a fundamental restriction when compared with Notes' proto-NoSQL approach.
Similarly, forms had to be designed using a Visual Basic client - which is kind of overkill and rather intimidating to the average user - and the distribution mechanism for Exchange Forms was complicated and annoying, even for administrators.
Exchange/Outlook was much more capable in some ways - one of the flagship demonstrations at launch was a graphical chess game that sent moves via email, which Notes would have difficulty doing. But these complexities meant that creating a simple holiday approvals system or a sales opportunities tracker was an order of magnitude more work than for Lotus Notes.
It's hard not to conclude that Exchange was built with what was lying around, rather than looking at what people actually needed.
Ultimately none of this mattered as Groupware seems to have been a fad - albeit a decade long one. Integrating workflows within your email platform was effectively killed by the ability to send someone a hyperlink to a web page. Which, when you consider Notes had DocLinks from the start, is kind of ironic - apparently everything else Notes did was overkill.
The world moved to dumb email clients, and chose to move workflow and other specialised features into dedicated applications that send notifications.
This Microsoft Developer Problem extends to many of their products. Skype for Business (or whatever it's called this week) requires an SQL Server to store people's status. Their STATUS! That's overkill, right there - but companies dutifully add the databases to an SQL Server cluster so that their IM solution is highly available. SharePoint has a list of prerequisites that makes you wonder if its true purpose is collaboration or selling Windows Server licences. This design pattern runs through most of Microsoft's platforms.
So whilst in theory Outlook could do what Evernote does, in practice I have little faith in Microsoft's ability to deliver that. Their platform and tool choices would likely have made it difficult to port to other platforms, and cumbersome in use. And this was before Microsoft "opened up", so you know that there would have been a (poor) Windows Mobile client and nothing else...
By comparison, Evernote's technological choices are simple. They use SQLite on the client side. I can't easily discern what they use on the server side, but I'm guessing MySQL and Apache/nginx. I'm also guessing that it's mostly just a simple schema, with perhaps a type, tag, colour and so forth then a blob that gets full text search. The one thing we can be sure of is that they didn't have their own technology lying around, so they had no incentive to choose anything but that which was most suitable for whatever problem they had at the time.
And this is even before we get into the question of why Microsoft then chose to not develop the Notes feature of Outlook for fifteen years. Maybe they thought it was OK? Maybe they simply didn't want to put development resource onto that feature when they could instead be focusing on the Ribbon, or Sharepoint integration?
Evernote has no such problems with focus, because they have just one product. The closest they'll get is juggling the priority of business versus personal account features.
That's why Evernote has succeeded where Microsoft has failed. They're free to make choices that benefit the customer, rather than fit into a corporate platform strategy.
(Also, I think Microsoft's Evernote competitor is really OneNote. But this is already a very long post, so I'll let others talk about that.)
I am almost positive Word 95 did not support line numbering. IIRC, in the anti-trust case the MS legal team created their documents in WordPerfect and saved them as a Word document.
It wasn't just line numbering - Word failed to meet a number of court requirements for a long time.
In particular its word count didn't count words in footnotes and tables of contents. Courts do. When the court says 10,000 words maximum and your 9,998-word brief turns out to have 11,346 words in it due to footnotes and Word not being able to count properly, your document gets thrown out by the court.
I think that the first version of Word that fixed the various problems lawyers had was Word 2010. (I could be wrong though.)
It certainly took Microsoft a lot longer than you'd think it should for them to get these things fixed...
The problem is, as always, the "legacy". There are old documents that may still need to be rendered, using these old incantations, and no update is allowed to break them.
I don't disagree.
However, the Office team has a very loose definition of "break". The thing that drives me really crazy when people talk about "needing Office because of compatibility" is that Office isn't actually all that good at its own backwards compatibility.
If you open a document produced in Word 95, it's likely to look different anyway. Heck, I've held a printout in my hand while looking at the same document on screen - the text was identical, but Word had interpreted the layout somewhat differently.
It's not an issue that results in data loss, but it's awkward trying to explain to someone why the "reprint" of an archived document has an extra page. Or worse, is "missing" a page!
(Part of this is probably due to printer driver changes over the years...)
I think at some point we have to face the fact that Office, especially Word, is crap at backwards compatibility. Provide those who care with a macro that saves all documents in a folder as .pdf, and move on.
There will be legal issues, as you say. But in those jurisdictions they already have a legal issue due to the current bad behaviour - they just don't know it yet.
Actually, I'm not telling the whole truth. I'd welcome a re-write of Office, full stop. In almost any language. So long as it's done with modern development techniques, it's got to be an improvement.
Office is ancient software. Deep within its code base lurk odd behaviours and bugs. For proof, look at the OOXML specification's hilarious "autoSpaceLikeWord95" option. Apparently if this option is enabled 'applications must imitate the behavior of that application, which involves many possible behaviors and cannot be faithfully placed into narrative for this Office Open XML Standard.'
Yes, Microsoft themselves can't actually say what the option does. But if you write something that handles OOXML, and encounter this option, you must emulate whatever Word 95 does. Assuming you can find a copy of a 20+ year old program, fire it up, and then test it and determine that behaviour...
(In Microsoft's favour is the fact that this only applies to full-width East Asian glyphs. We should consider ourselves fortunate that there are no large and growing East Asian economies in Microsoft's world...)
My point here is that somewhere, deep in Word, there's still some - probably untouched for decades - layout code from Word 95. (Word 97 presumably changed the way that layout happened to be more unicode friendly...)
Microsoft can't accurately describe what that code does for anyone. They may no longer have anyone employed who knows what it actually does. Yet it's there, and will run under certain circumstances.
Four of the "big five" parts of Office (Word, Excel, PowerPoint, Access) date back to at least Windows 3.x. Outlook dates back to Windows 95 (just - came out a little before Windows 98 IIRC). Who knows what horrors lurk in those codebases?
Let's be clear - most of this code is written to ancient standards (if any), and is simply a stability and security millstone around Office's neck.
A re-write is long overdue, to remove this kind of junk.
A few years ago I had given up hope of such a rewrite ever happening. Windows 8 appeared with its Universal Apps, and Office simply got a free pass and a recompile into ARM code for their doomed tablet offering. That wasn't a good sign.
It was, in fact, the normal behaviour. Politically inside Microsoft the Office team is more of a platform team, and has considerable clout. So they historically just ignored whatever they didn't like in the overall strategic decisions of Microsoft.
But they couldn't ignore Google Docs and Sheets, and similar technologies. Web based productivity may have been a joke when it required Java applets, but suddenly it was usable - and being used by customers!
Office 365's web apps gave me back hope. It won't be overnight, but I can see a future in a decade's time where any feature that doesn't work in the web version becomes slowly deprecated. The scripting side for Excel will probably be the hardest hit there...
The idea is simple - use the web version as a stealthy way to drop support for features over time, thus allowing a new "clean room" version of the Office apps to be built in plain sight, without anyone realising it.
Then at some point you simply make the switch over to having the web app - or a desktop version of it in Electron - be the default experience.
If that helps get rid of the cruft in the old Office applications, then I welcome it... it's long overdue.
I really couldn't give two hoots what programming language it uses, so long as the rewrite happens!
I was going to complain about the loss of the XPS viewer, but then I did a quick bit of research and found that Windows 10 has a built-in PDF printer driver.
I'm really not sure how that escaped me. If I did see any headlines about it, I probably subconsciously shrugged it off with "welcome to 2008!".
I do have a few XPS files kicking around at work though. With PDF printer drivers costing money at the time, the advantage of XPS was that it was there. I usually used it when I needed to save some information from a website that only worked in IE.
I have never used it for anything else though. Chrome "prints" direct to PDF without any printer driver required, like a civilised program should.
Still, it's odd. I really didn't like XPS when it came out, partly because it felt like Microsoft yet again attempting to use their monopoly position to remove a competitor. But having used it a few times, I've found it to be reliable and usable. I can't say I'll miss it, though.
I will, however, be annoyed that I have to open those XPS files and re-print them as PDFs!
Here is some valuable insight into what usually happens...
Sir Humphrey Appleby: Well, this is what we normally do in circumstances like these.
James Hacker: [reads memo] This file contains the complete set of papers, except for a number of secret documents, a few others which are part of still active files, some correspondence lost in the floods of 1967...
James Hacker: Was 1967 a particularly bad winter?
Sir Humphrey Appleby: No, a marvellous winter. We lost no end of embarrassing files.
James Hacker: [reads] Some records which went astray in the move to London and others when the War Office was incorporated in the Ministry of Defence, and the normal withdrawal of papers whose publication could give grounds for an action for libel or breach of confidence or cause embarrassment to friendly governments.
James Hacker: That's pretty comprehensive. How many does that normally leave for them to look at?
James Hacker: How many does it actually leave? About a hundred?... Fifty?... Ten?... Five?... Four?... Three?... Two?... One?... *Zero?*
Sir Humphrey Appleby: Yes, Minister.
"you do realise that when the government subsidises its actually us citizens that are doing the subsidising. its not free money or coming out of the governments pockets, its coming out of OUR pocket as tax payers."
Yes, I do realise that.
So if we're talking about air quality, what will be the most effective way to spend money? The government can't ban transportation as such. (Small pedestrianised zones may help, but you can't do that everywhere!)
Larger cities can afford some kind of congestion charging, but smaller cities and towns can't.
You can increase road taxes on vehicles that pollute.
You can give tax breaks for newer, less polluting vehicles. That's effectively a per-vehicle subsidy.
You could also give loans for fleet replacements - the government can borrow money at a very low rate, so backing those loans isn't difficult.
You could even give grants for those replacements, which require no repayment.
So that's one tax, and three possible methods of subsidy, each trying to use financial levers to reach the goal of cleaner air.
Is taxation the best method? That may have an effect. But it may be an unintended effect - companies may reduce the services as they try to do the same job with a smaller fleet. Taxes can work well for individual decisions, but they affect organisations in a very different manner.
Are the subsidies better? Well, how much will we spend on healthcare for children whose respiratory development is affected by air quality? Are there other costs associated with the air quality?
A subsidy is not the automatic answer, nor is it necessarily the best answer. But it could well be.
Other answers are available, such as restricting deliveries to certain times - these also have their own costs and benefits.
Your mistake is to look at it as one pot. Government is huge, and the "one pot" analysis frequently fails.
There's a great example that's being repeated a lot recently - DWP assessments removing the lease of a Motability adapted car, on the grounds that the person "isn't sick enough." But then approving a taxi to and from work - because the local council pays for that. It gets it off their budget and onto someone else's. You can find multiple stories in the news about this, and it usually costs at least £10,000 more for the taxis than for the Motability leased car.
However, based on your behaviour here, we have to assume that you'd simply be raging that the Motability car is coming out of OUR taxes, and is therefore bad value for money.
As a society, we have to ask ourselves what the goals are and then take a look at how we can achieve them. Sometimes, the answers are counter-intuitive and require explanation. That should not prevent us from pursuing them.
So go hybrid then. Have a nice little Atkinson cycle engine that keeps things topped up when necessary.
That way they don't need to charge everything all the time, they still use less fuel, and they pollute less. In town and city centres, it's a nice little win.
Ask the government to subsidise the costs. Put an RFP out for a suitable vehicle. We know that they'll get high usage, so maybe DEFRA can be convinced to chip in for an easy PR win.
As for the resale market - when I was growing up I couldn't walk three streets without seeing a Transit van that had plainly just had the BT logo ripped off it. To be quite frank, I think you're telling porkies...
I wonder how many customers are even using this feature.
I seem to recall that it arrived in Notes 6.x, but required admin rights on the machine or Windows local admin credentials stored in the Notes infrastructure. It was a nice idea, but implementing it tended to make security teams antsy. Later versions (7+) improved it, but frankly not quite enough.
As such, whilst it wasn't a bad feature, most companies went with a third party packaging/deployment tool that could also handle all their other software. Investing the time and effort into Smart Upgrade just to get Notes upgraded wasn't worth the hassle if you could instead get something else to do the job for all your software.
If this feature had shipped five or ten years earlier, it would have seen widespread adoption. But I always felt it was just a little too late. I'm sure some customers are using it, but I'd bet that the vast majority aren't.
Disclaimer: I'm no longer working with Notes. Nor, for that matter, with Exchange. The cloud has pretty much killed the messaging employment market. (There's a lot of migration jobs, but that's not exactly a career...)
Ah, VideoPlus+ codes!
I do remember them in newspapers. But only for a short while in the 80's.
I always suspected that what really killed them was that TV is, in no way, dependable. At the time that they were published in the UK we only had four channels. Any major event would require a news bulletin, ruining the schedule. Any overrunning sports event would ruin the schedule. A squirrel farting would ruin the damned schedule.
It may have been easy to program the video via VideoPlus+ codes, but that didn't mean you were going to get your programme. IIRC they allowed a few minutes each way, but that's all. Smart people usually allowed at least 5-10 minutes both before and after the programme.
So it may have been more convenient, but it was no more likely to succeed than manual programming. And therefore not quite worth the money.
We appear to be seeing a true paradigm shift in the industry. Automation and innovation are removing the need for the traditional core sysadmin roles - sizing, configuring and running applications and their infrastructure.
Yes, the only part of a sysadmin's job that will survive is the part we like the least - security.
Basically, you now only need a sysadmin if you have any data. Which must be a terrible relief to all those businesses out there that don't keep or process any data.
For the rest of the world, sysadmins will continue to be an awkward fact of life. Let's look at the example given in the article:
"For this use case, a security system that has extremely basic facial recognition built in is presumed. The on-premises security system's facial recognition is presumed to be just powerful enough to tell that there is movement on the camera and that a face is likely exposed to the camera. Once it detects a face it uploads a snapshot of that face into an Amazon S3 bucket."
So, we have personally identifiable information (a face, in an image 'snapshot') that is uploaded to third party infrastructure over a publicly accessible network. Now I'm sure that this is all done with appropriate encryption - Amazon probably won't offer unencrypted connections anyway. But how is it stored? What are the retention schemes? Who's got access to it? Who enforces that access and how? Who does audits and assumes responsibility for the results? Are new levels of access and auditing required once DPA requests become something the business has to handle?
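To make those questions concrete: most of them map straight onto bucket configuration that somebody - a sysadmin, funnily enough - has to set up, enforce and audit. A minimal Python sketch, assuming an S3 bucket, with the config shapes as used by AWS's API (the bucket and prefix names are invented for illustration):

```python
# Sketch: the data-handling questions above become concrete bucket settings.
# The dict shapes below are those used with boto3's put_bucket_encryption /
# put_bucket_lifecycle_configuration calls.

def encryption_config():
    """Require server-side encryption (SSE-S3) for objects at rest."""
    return {
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    }

def retention_config(days=30):
    """Expire face snapshots after a fixed retention period."""
    return {
        "Rules": [
            {
                "ID": "expire-face-snapshots",
                "Filter": {"Prefix": "snapshots/"},
                "Status": "Enabled",
                "Expiration": {"Days": days},
            }
        ]
    }

# With boto3 these would be applied roughly as:
#   s3 = boto3.client("s3")
#   s3.put_bucket_encryption(
#       Bucket="example-faces",
#       ServerSideEncryptionConfiguration=encryption_config())
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="example-faces",
#       LifecycleConfiguration=retention_config(30))
```

And that's just encryption and retention - access control, auditing and DPA handling are all on top of this. None of it happens by itself.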
These developers are in for a nasty shock - possibly in a legal sense - if they think they're not going to be doing any more Ops in their DevOps.
If you have data, you need a sysadmin. You can attempt to automate a lot away, but you still need someone to ensure your data is properly handled. If your business wants to hand that off to management or developers, please let me know - I need to make sure I don't own any stock or use the services of your company.
Still, like I said - this is truly a boon for all those data-free companies out there!
Seriously, it never ceases to amaze me how multi-core, multi-gigahertz, multi-gigabyte mobile systems can be so excruciatingly unresponsive compared with my 16-bit, 7MHz, 512KB Amiga from the 1980s.
How can this even be possible?
Well, for starters, your Amiga is doing all the bounds checking and type safety that my old Speccy used to do...
Probably more importantly - and more seriously - it's a smaller system. Fewer inter-dependencies. People talk about wanting time back that they spent watching a crap film. Screw that. I want the time back that I've spent watching Java and .Net programs start up. I'm not sure how much time I'd get back exactly, but I suspect that your grandkids will know me as "that guy who's functionally immortal".
Your Amiga could print text to screen with mere kilobytes of dependent libraries. These days we have to wait for megabytes of dependencies to load. Usually because some idiot developer thinks that maybe, some day, they'll need to parse JSON or make a raw TCP socket connection or whatever - so they should definitely have that in their project.
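The fix for that habit is old and boring: defer the import until the feature is actually used. A trivial Python sketch - the module choice here is just an example, the principle is what matters:

```python
# Anti-pattern: pay for every dependency at startup, used or not.
#   import json, socket, xml.etree.ElementTree   # loaded "just in case"

def parse_config(text):
    """Import the JSON machinery only when a config is actually parsed."""
    import json  # deferred: startup no longer pays for this
    return json.loads(text)

# Startup touches none of it; first use pays the (cached) import cost.
print(parse_config('{"answer": 42}')["answer"])  # prints 42
```

Multiply that by a few hundred modules and a few megabytes each, and you get your startup time back.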
As others have said, not too bad.
Sony are never first to new versions, but their commitment isn't to be doubted.
My Z1 came with Android 4.3, got updated to 4.4, and then updated to 5.0.
My Z3+ came with 5.0, and got updates to 6.0 and 7.0.
I'm on a Xperia Z5 Premium right now, and it came with 6.0 and has had updates to 7.0 and 7.1. I will be very surprised if it doesn't get an update to 8.0 at some point, but my contract's due for renewal soon so I may not see it personally.
When I say "Sony aren't first", I'd guess there's usually a lag of a few months or so from release of new version to Sony pushing it out.
Some security updates have also arrived, but my phone is currently on the 1st June 2017 security patch level - it would be nice if they were actually monthly, but they seem to be quarterly at the moment.
Overall I'm not unhappy with the commitment Sony have shown over the three phones I've had. I'm inclined to buy a Sony again, if possible.
My only other candidate was a Google Pixel - which will naturally have much better updates. But the next version is rumoured to have no headphone jack, and if that's true it can get stuffed. At the moment Sony have the best balance of great hardware, decent (light) Android customisation and a history of shipping updates. There's room for improvement, but compared to much of the competition they're doing pretty well.
Too long? Didn't read? You can expect two full OS version updates on a Sony phone, maybe three.
Odd - I never had any issues with our OS/2 Notes servers. They just ran and ran, and were fairly nippy too.
Now, the Notes server on a Netware box running as an NLM - let's just say that replacing that was a top priority, and much beer was consumed when it happened. Netware 3.x was wonderful for file & print services, but godawful for running application servers on.
@JLV - the Liberal Democrats' position on Europe won't be known for sure until Wednesday when they launch their manifesto...
But at the moment it's very much "OK, we're leaving. But nobody knows what that means, so there must be a final vote on the deal - either in Parliament or a referendum. We cannot just give the government a blank cheque on this issue."
I suspect that won't change.
@Charles 9 - yes, it's a fair price.
They have pretty strong privacy policies. They're huge - so big that it would be very difficult to defend against an attack from them. But as a threat, they're negligible - they have plenty of good reasons to treat my data well. Reputation, legal requirements, etc... So I'm not that fussed by it.
And often, the very things that people think are bad about this are actually a benefit for me.
Way back when Opera first went ad-supported, in version 5, I was a registered user. I was also one of the people asking for the ability for registered users to toggle the adbanner bar in the UI. (They never did provide that.)
The ads that Opera served were of two types - generic casino/entertainment ads that were animated and flashy and somewhat annoying, and Google ads that were just text - hence unobtrusive. But the Google ads were also targeted, based on the page you were on (not on tracking you, as I understood it). So when you were shopping for something, you always had this set of alternative options in that banner, which was sometimes what I wanted.
A lot of people couldn't understand why I would even want to toggle the ads on or off - but they were sometimes useful. And making my computer more useful is the only good reason for any change to my computer.
Google's services do make many people unnerved. But when I look at what I get from them, I think it's a fair exchange.
I use Chrome because Google has accomplished for the consumer what Microsoft does for the corporate user.
They built a platform that allows you to roam.
When I sign in to Chrome, my bookmarks and history follow me. On Windows, Linux, Android - it doesn't matter. It all just follows me. Oh, and where applicable, so do my browser extensions. Log in to a machine I haven't used for a while? No worries, Chrome will soon be the familiar place it is everywhere else for me.
Microsoft does provide roaming profiles for companies. But they haven't really wholeheartedly grabbed the idea of having an account in the cloud that their software uses for this. They're partway there, but they seem to want to segment their products into "professional" ones that do roam, and "consumer" ones that don't. Internet Explorer (and Edge) seem to be stuck in the "don't" pile.
I once worked with a product which had an odd installer. The progress bar went straight to 100% very early on, but the installer then continued to install yet more stuff.
By that I mean it was actually putting out messages telling you which files it was copying, or that it was updating the registry and so forth - despite the progress bar clearly being at 100%. It could happily go on for another minute or two, maybe longer if certain options were picked, and the progress bar was evidently completely divorced from the reality of the installation process.
I got to speak to the developers of the product (about something else), and offhandedly asked them about this.
"Oh, that's because the installer script only ever gets appended to. A decade ago, we just had the main module and a couple of optional ones. Now, we have loads more optional modules, and a number of new mandatory ones. Each new module was simply added to the end of the install script - nobody ever goes back to adjust the progress bar computations, because the risk of breaking something when editing the old script entries is high and the benefit is low. As a server based product, very few people see the installer anyway, so we're just never going to fix that."
Well, kudos to them for not taking risks, I suppose...
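For what it's worth, the fix they were avoiding is small. A Python sketch of a progress bar derived from the live step list, so that appending a module rescales it automatically (the step names are invented):

```python
# Derive progress from the step list itself, not from hard-coded totals,
# so appending a module can't leave the bar pinned at 100% while work
# continues.

def run_installer(steps, report):
    total = len(steps)  # recomputed every run, never hard-coded
    for done, (name, action) in enumerate(steps, start=1):
        action()
        report(name, 100 * done // total)

log = []
steps = [
    ("copy core files", lambda: None),
    ("update registry", lambda: None),
    ("install optional module", lambda: None),  # appended later, still scales
]
run_installer(steps, lambda name, pct: log.append((name, pct)))
print(log[-1])  # prints ('install optional module', 100)
```

The whole trick is that the denominator comes from the data, so nobody ever has to "go back and adjust the computations".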
Robots? To settle a problem of aging population?
Or we could, y'know, confront our fears and prejudices and simply allow people from other countries to settle here to do the jobs we're not producing enough people for.
I'm all for automation, but many of the jobs mentioned here are just better done by a person.
Although it is nice to know that if I keep up with my Linux skills I may have some part-time work available in my dotage...
You have the wrong tense.
She wasn't for turning. Then she found herself in a position where she was in charge of turning.
Now, as we all know, turning means turning. Nobody can deny that we will turn, and anyone who says otherwise is a traitor who is defying the will of the turning British people.
It's that, or she walks. And she'd rather be turning than walking, and damn the consequences for anyone else...
If I were Donald Trump's sysadmin - I'd find a new job.
He employs people who tweet their passwords. His ego won't allow him to admit that he, and his employees, are incompetent. As the sysadmin, I would always get the blame for his and his employees' incompetence and inadequacies.
So you find a new job.
On a general purpose machine in a class where you can traditionally run whatever OS you like, a TPM is bad.
On a machine sold as custom built for one specific OS, it's good.
There are other factors too. If you're spending a lot of money on a machine you hope to be general purpose, a TPM is bad. If you're spending little money on a machine you will treat as a commodity, it's a lot more acceptable.
It's all very situational.
For clarity - this is an "All Years" view, which appears to have data back to 2013.
For opinion - it's no surprise that many big companies are getting subsidies. Same in fishing, IIRC - the majority of the "British fishing fleet" is in the hands of large companies.
Of course, there are the plucky small independents. I'm not denying that. But the Brexiteers like to pretend that they're the only story, because that pulls on heart-strings. The fact is that the vast majority of the problems for both farmers and fishermen are down to a combination of corporate competition and government cockups. Europe is either neutral in events, or on their side trying to improve things.
Sadly, that makes for a complicated story. Much easier to just bend the truth and go back to that narrative of the plucky independent...
I use Google Maps as my primary mapping service. Have done for years. Streetmap's data was better, but they stayed static for far too long.
I just visited Streetmap to see how they were doing, and they have a bigger mapping window now - but not big enough when you compare to Google Maps or OpenStreetMap. I was pleased to see that they did support grab-scrolling, but disappointed to see that scroll wheel zoom didn't work - it just moved the whole page.
Basically, they haven't kept up. Google's mapping data is good enough, and they just keep adding features. Their integration with their search is superb, they've added directions, street view, live traffic reporting...
Anecdote time: I don't drive but was on a trip with friends recently (to a distillery, so why drive?) and on the way back we hit a traffic jam. My phone alerted me that it was roadworks, and with some judicious scrolling and checking of the live traffic overlay I managed to locate exactly where they were, which seemed to help us all stay more sanguine about the experience.
Frankly, if I'd had the presence of mind to check my phone beforehand, Google Maps could probably have saved us some time by getting us a route that avoided those delays!
And I said that Streetmap's data was better, but that past tense is deliberate. It's missing some paths in local parks. Not new paths either, but ones decades old. Google was missing them a few years ago but is slowly adding them in. OSM has had those right for ages.
Finally, let's not mention the woeful search. Both Google and OSM could get me to a local park by name, Streetmap couldn't manage it no matter which option I picked. And it's 2017 - why do I have to pick a search option? Search them all, then show me a list!
I have fond memories of printing out an occasional Streetmap page back in the early 2000's. Before phones had mapping and internet connections, a Streetmap printout was more convenient than carrying an A-Z around, providing your journey was short. But they have more competition than just map books, and they seem to have failed to realise that.
In its departmental overview for 2015-16 the NAO revealed that customers other than the Department for Transport have now withdrawn from their Arvato shared service centre contracts and will seek other arrangements.
Why? We need detail!
And I'm not railing against El Reg here. I went back and checked the original report, and there's nothing there either.
So why are people leaving the platform? Is it something core, or a fixable detail?
OK, we'll have to ask the grapevine. Anyone got any ideas?
The good news is that the vast majority of vulnerabilities have patches available on the day they are made public
I think what they meant to say was:
"The good news is that the vast majority of patches have vulnerabilities available on the day they are made public. Otherwise we'd be out of a job."
any dependencies needed are pulled where APT fetches the main install off the MS repo
MS are hosting their own repository for updates of this? As in, actual .deb/.rpm packages that are fetched and installed with "apt update" or "yum update"?
Because that's one thing a lot of big companies often manage to "overlook" when porting software to Linux. It's very disappointing.
But if Microsoft are giving us repositories, and adding them to the system config so that the updates of SQL Server are managed just like any other component, then I have to say I'm bloody impressed.
That's how it should be. I'd assumed that this was just some hacky "it runs, it's done, ship it and hope" kind of affair, but actual repository integration shows a level of effort and attention to detail that's warming the heart of this cynical old git.
Well done, Microsoft. Well done.
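For reference, the flow I'm praising looks roughly like this. Ubuntu 16.04 shown, and the exact URLs, version and package names are from memory, so treat this as a sketch and check Microsoft's own documentation:

```shell
# Trust Microsoft's signing key, then register their package repository.
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
sudo add-apt-repository \
  "$(curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server-2017.list)"

# From here on it's just a normal package...
sudo apt-get update
sudo apt-get install -y mssql-server

# ...and "apt-get upgrade" picks up SQL Server updates like anything else -
# which is the whole point.
```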
The included NTFS driver for Mac makes it interchangeable between Windows and Apple notebooks.
Ye gods, what a craptacularly idiotic idea! Who the heck would want to trust their important data to non-native filesystems that are being handled by Seagate software?!?!
That's a genuinely scary thought.
A quick googling shows that they're probably using a licensed version of Paragon Software Group's NTFS drivers - which I'm sure are fine. But for my backups, I'd rather have a native filesystem please...
(Yes, you can just format it. But how many people are going to do that? There must be a better way...)
Still used by international banks to confirm some types of business. Lawyers like faxes.
You can try to take an email to court, but a lot of jurisdictions don't have any guarantee it's binding. Whereas the 60's/70's/80's were full of court cases around the world that settled, definitively, that a fax or photocopy of a contract was still a contract - you don't get to ignore it because it's a copy.
(Yes, people really tried that scam.)
Also, email can be traced, but fax usually means that there's a phone call and that gives you another level of evidence should you need it in court. Although personally I never really bought that argument, and fax systems seem to be going to the cloud and fax over IP (FoIP) these days.
The last few fax systems will probably be all electronic, never putting out paper unless the recipient wants it. The input (probably an account summary or trade confirmation) is generated by an application and picked up from a file share or some kind of message queue, converted into a set of images, and then sent via either fax over IP or a real phone line, to a system which does pretty much the same in reverse and delivers the images (and maybe OCR'd text) to an application.
But the legal aspects will keep people on that system for a decade or so, until someone realises that the expense of the infrastructure outweighs the potential cost savings in court...
(And it can be expensive. I know of a couple of banks whose license estate for faxing infrastructure is in the seven figure range on software alone, let alone the licenses for the platform below that software. At standard software maintenance rates, that's a pretty nice amount of coin for software which is mostly in maintenance mode these days...)
The iPhone 7 isn't any thinner than the iPhone 6 though.
And I think they painted themselves into a corner with the seeking of such slimness.
I have owned phones from the Sony Xperia Z line for the past four or so years, and they're great. But put them down next to an iPhone 6 and you (just about) notice an extra half millimetre or so of thickness.
Just about enough room to put a nice rubber gasket in around the headphone jack, so that the phone is waterproof.
Having looked at them, I think that the iPhone 6/7 are too thin to waterproof AND have a headphone socket. They only had three options - waterproof it and make it thicker, waterproof it and remove the headphone jack, or don't waterproof it.
They wanted waterproofing, so that left them two options. I can't say for certain why they chose to remove the jack, but I suspect that Apple view making the device thicker as a step backwards, and they lacked the courage to do that and put a bigger battery in. So they took the only way out that remained...
Of course, if they were Samsung/Sony/HTC/Huawei/whoever, they could have just tested the market with another model. But they're Apple, and want as simple a product lineup as possible - hence my verdict of painting themselves into a corner.
He's got experts - believe him, real experts - looking at this right now. Ten years old, very smart - the smartest - and one might even be eleven.
*waves tiny hands*
And unlike Crooked Hillary, The Donald doesn't even know how to delete an email. He just doesn't know. But if he did know, he'd only be deleting emails from those people. You know. Those people.
More seriously - even if someone did break into his email, what do you hope to find? All of his bigotry and hatred is on Twitter at 3AM. All of his bankruptcies were public. His sexual assaults are somewhat public. The people he didn't pay for their work are common knowledge right now.
Oh. I get it. What's the betting at least one mailbox is just full of invoices from the company he stiffed for doing maintenance and upgrades on this system?
Global revenue has remained flat at Fujitsu for a number of years
Ah, so this is probably down to the chasing of short-term performance figures.
No doubt next year, they'll record excellent performance.
And two years after that, they'll report high overheads - because clients are leaving them as they can't meet SLAs, and they have to hire expensive contractors in to get certain key jobs done. (Luckily, they have experience from a previous employer. Wonder who that would be?)
The joys of short-term capitalism. Simply making a profit, steadily, year on year isn't enough these days...
Ah, but it's the best camera according to DxO! Who do scientific measurements, and everything!
What I suspect we're starting to see is manufacturers gaming that system. Good stats don't necessarily make a good camera, especially if your output is JPEG. I can fix a lot with a good RAW converter/editor, and it's true that many phones now allow RAW shooting.
But let's be honest. It's a phone. You're going to want to shoot JPEG, so that you can actually use the photos. And that means that for all we know, this phone might use exactly the same sensor as the other phones it beat by a couple of points - but just has a different tone curve and a slightly less aggressive JPEG engine. Which would probably be just enough to gain a point here and a point there in the tests... and suddenly you're the best phone camera available!
When you know what's being tested, being best becomes *so* much easier.