I laughed out loud just reading the story. Actually listening to that recording has me in tears, and not all of them are from laughing.
I also got another great laugh out of "Haas". Upvote ahoy!
I already turn my nose up at the insurance companies that offer these little dongles that monitor my speed and driving habits. I'll tell you straight up, I'm very sure that's *not* going to earn me lower rates. Despite the fact that I've only had one accident, ever, over 20 years ago, and it was a low-speed fender-bender.
But even if it would lower my rates, I view paying more for not being monitored as just fine, exactly how I view things like paying a subscription rather than being tracked on-line to get "free" content. Until someone legislates that cars must have these things built in (and I fully expect that day to come), the folks who'd like me to use them can get stuffed. And maybe they can still get stuffed even once the cars come with them, depending on how hard or illegal they are to disable.
The idea, then, that I'd willingly choose to pay a *third party* to curate this data is mind-boggling ludicrous to me. Odds are good there's already a third party involved in the branded equivalents that the insurance companies offer directly, but why would I want to pick a middleman like that for myself?
The rest of the things this doodad purports to do sound like a solution desperately seeking problems. I at least see the utility in the insurance thing, even if I disagree with the wisdom of taking advantage of it.
My team runs our company's incident/problem/change solution. (Yes, the irony burns.) A few years back, we had tables in a QA DB that we no longer needed. DB administration at that level is managed by a dedicated DBA team, not the application team. We sent them a request for the table drops and, knowing our prod and test DBs had nothing to do with one another, thought nothing more of it.
Except, unbeknownst to us, the tool the DBAs use to perform such tasks connected to both test and prod systems alike, and the over-eager person involved issued DROP CASCADE against the tables in question *everywhere*. In the middle of the US morning / EU afternoon.
The only reason this did not completely destroy our production OLTP DB was that there were locks in play because of our level of user concurrency. (Logs later showed that the DBA actually tried the deletes several times when they failed.) Our prod reporting DB instance had no such protection and critical tables were wiped out. Restoring that took a long time because the tables were huge and, at the time, the reporting DB schema was not a 1:1 match with the OLTP system. (You can do that with fancier replication tools.) The reporting instance had to be restored from remote backup, which literally took days. Fortunately, for the duration, we were able to point most of our BAU features that relied on the reporting instance to the OLTP instance instead, accepting the modest risk of OLTP performance impact to keep important things working.
Happily, this event did produce both process and architecture changes in the way the DBA support tools were used and set up. And, probably, at least one staffing change. o_O
Agree on the article quality.
I took "popping partners" to be a verb/object pair. "By 'popping' (business) partners (of intended targets)..."
"Also, I can no longer recommend El Reg articles to my security oriented friends because CloudFlare blocks Tor connections."
Eh? I quite regularly visit sites protected by CloudFlare over Tor, and have done so within the last couple of days. I typically have to answer a captcha to convince it I'm not some kind of bot, but it lets me in as long as I can do that.
Dropping the root filesystem (presumably from an installer boot) is fine. It's actually modifying the /sys/firmware/efi/efivars contents specifically that causes the problem. In other words, modifying files in there is translated into changing UEFI variables, and deleting files in there is translated into deleting the UEFI variables.
Unmounting the whole thing would not have the same translation.
Wholesale change in interfaces does happen in the real world. Eventually old versions of interfaces need to evolve so extensively that the old interfaces need to be demised. Perhaps there are new technologies or standards to leverage. SOAP being replaced with REST, say. Or perhaps there's just been a broad design shift in what your service needs to do, affecting all its interfaces. I've seen both happen in environments I support, though the services weren't "micro" in the DevOps sense. Surely similar considerations still apply, though.
I guess in cases like this you can transitionally support both interfaces from your (micro)service (or add the new interface as a new service) and demise the old interface later.
That's still something you can't *just* cover with testing in a sufficiently complex (i.e. enterprise) environment, since you have to be very sure you know who all your consumers are before you demise older interfaces they might be using. Appropriate logging in your production instance can be invaluable here: you can see who is actually making requests, and reach out to make sure they're ready (and have tested) to consume your new interface.
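To make the "support both versions and watch your logs" idea concrete, here's a minimal Python sketch. All the names and payload shapes here are invented for illustration; the point is just that a service can serve old and new interfaces side by side while recording which consumers still hit the deprecated one:

```python
import logging

log = logging.getLogger("orders-service")

def handle_v1(payload):
    # Legacy interface: flat response shape, kept alive during the transition.
    return {"order_id": payload["id"], "total": payload["amount"]}

def handle_v2(payload):
    # New interface: nested response shape.
    return {"order": {"id": payload["id"], "total": payload["amount"]}}

HANDLERS = {"v1": handle_v1, "v2": handle_v2}

def dispatch(version, consumer, payload):
    """Route a request to the requested interface version, recording who
    is still calling the deprecated one so they can be chased up later."""
    if version == "v1":
        log.warning("deprecated v1 call from consumer=%s", consumer)
    return HANDLERS[version](payload)
```

Once the v1 warnings stop appearing for a suitably long window, you have some actual evidence (not just hope) that it's safe to demise the old interface.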
This doesn't mean the whole DevOps approach doesn't have merits, but at least as I see it discussed in brief in articles like this, the shiny, happy world they describe seems like it would only really manifest in environments where it's easy to manage all the service dependencies.
I won't mind being corrected if I'm just missing something.
I get automated phone calls from them constantly, which go to voicemail and get deleted. They have not injected anything into my browsing experience so far as I know, but then I scour my browsing experience pretty hard, so it's possible I'm just stripping it out and never seeing it.
They also pestered my parents, who have limited tech savvy, and they were misled into thinking that getting the new modem would actually speed up their experience. That wasn't the case, because their package had a performance cap below that of their old modem, so upgrading basically did nothing for them. Given my folks' level of expertise, I doubt the misleading was intentional, but I also doubt Comcast did anything to make the reality terribly clear.
I've no intention of taking their upgrade, as my modem works fine and I do nothing with my net that my existing bandwidth isn't overkill for. I don't want their WiFi. When I do upgrade, it'll be to a model I buy. The economics of that aren't great - renting their modem is cheap enough that buying my own will take a while to pay for itself. The main thing I care about is the device I get will be much more under my control (as much as anything is these days). Also, I read about a lot of people having performance and stability issues with the xfinity branded modems, which makes me want to stay far away.
Upvote for the title of your post. :)
But yes, I was going to post the same thing. So sayeth Ridley Scott himself.
To be fair, the purely organic Replicants seem to have been specific to the Blade Runner movie. The Philip K Dick novel's androids appear to have been a mix of robotic and organic bits.
But unless you MitM the certificates, you can't inject headers into the request such that websites can track page hits across domains.
All they're going to do is surveil ordinary people who most likely have nothing sinister going on, while halfway-intelligent criminals and/or terrorists will simply use software the state does not control.
Sure, such measures may allow them to pluck some low-hanging fruit in the form of catching on to boneheads who have a poor understanding of technology or operational security. But is that really worth such wholesale risks to civil liberty? While the really scary people laugh and carry on?
"Why would anyone want a 'fix' to stop important security updates & fixes? Anyone declining updates puts themselves at risk as well as others."
Because some of us have learned that accepting patches sight-unseen is a proven recipe for disaster, and that it's much smarter to wait a week or so and see what complaints pop up to make sure that any given patch will not screw up our systems. This isn't some hypothetical risk - it happened several times this year alone.
If you read the article you should see the concern here isn't for enterprises with test labs and a release cycle, who get pretty strict control over how these updates roll out. This is a concern for home and SMB installations.
Obviously what one does should depend on the severity of the vulnerability and whether it's known to be exploited in the wild, but those of us who manage our own patch processing (even "just" on home systems) should have the ability to make that risk assessment.
When I saw the headline I was quite sure this was an April Fool's prank, but I must say the first page of the article had me wondering for a bit. I have to give credit to the author for the part about the missing cores in particular - the telling of that part feels quite plausible. Up until you get to the bits about Krakatoa + Chernobyl x1000 (the details of which are happily consigned to page two), I think this could fool a lot of people. I agree that we might see this republished in seriousness, especially by people who don't read the whole thing.
...but I tend to agree with a lot of the other commenters that most folks one would readily consider trolls aren't particularly sadistic or even clearly narcissists. Now, none of the folks I thought I had any insight into were people I truly knew well, nor am I an expert in psychology, so perhaps these traits were there but subtle. Some of them were quite bright, though, and were capable of Machiavellian twisting of a thread of discussion, though I am not at all sure any of them were doing so with any specific plan in mind beyond messing with people.
One, though, the one I refer to in my reply title, did seem a fit for most of the characteristics mentioned in this poll/study. He was extremely narcissistic - he simply could not ever admit to being wrong, even when faced with incontrovertible proof that something he posted was false. He spent a lot of his time talking down to people, posting mainly to disagree with folks and always in a disagreeable way, which variously fits the psychopathic and sadistic labels. And, my God, actually engaging in debate with him was like falling down a rabbit hole designed by Daedalus. He would constantly dance and twist and subvert, and if you weren't careful, after about 10 posts you were arguing about something totally different. This was often related to his refusal to accept lines of logic that undercut his argument - he would drag the discussion away from such things as a red-herring debate tactic. He was quite good at it, using an ever-evolving, subtle shift through sidebar arguments seemingly related to the topic at the time, but accumulating to pull things ever further into a different area entirely.
To this day, I don't know if he was a fairly brilliant troll or someone who was deeply disturbed and lashing out online because he could. I lean towards the latter, though.
When I first got an Android phone, signing it up for Google Play also signed me up for cross-sell services like GMail, Drive and, yes, Google+. I do use Drive, but not the rest.
I don't use social media much to start with, and what I use it for, Facebook is sufficient and also where everyone I want to interact with in that way is found. Google+ thus served no purpose, and I considered it more unwanted "attack surface" for my online presence than anything, though I don't mean to suggest I was deeply paranoid about it.
Thus it was that I was pleased to learn how to disable my public Google+ profile. If anyone didn't know that was possible, ironically the best way to find the instructions is probably to Google "delete Google+ profile". You do want to read the instructions - some Google products are intimately linked and deleting your profile can delete other data, too, but most of the mainstream services like GMail and Drive are unaffected.
So I do wonder if they'll unlink automatically creating a Google+ account from any other of their offerings, since it's clearly not deeply integrated.
How do you think the assistant is going to *do* all those things, and get all that data?
When I need directions, I still want to *look* at a map. I can figure out what I need to do much faster by doing that than by having a virtual conversation with my device. Having the map at hand calls for an app, even if it's an app the assistant-maker also built, and the assistant can directly integrate with it.
And when I have had an instant messaging chat with someone and am trying to remember the name of the person they told me to ask for at the local office? Well, I suppose it's possible a *really* damn impressive assistant could get that from the chat, though at that point I'd start to wonder what they'd need me for. But where are the chats stored? Sounds like the chats are stored in an app, to me. Did I have an IM chat with my friend via the assistant? Sounds like the chat needs an app interface of some kind.
How is an assistant going to act as a remote for my movie streaming? Do we all really want to *have* to talk to the device to look at a bit of action frame by frame?
If I want to connect to a new, walled-garden media streaming service a-la Netflix or Spotify, how is my assistant going to achieve that without some sort of app?
No, assistants are not going to be the death of apps. Apps may be replaced by something else someday, if web-rendered apps, exposed APIs, cloud-based screen rendering and so forth keep advancing. But even then the assistant is going to need to be able to integrate with whatever thing the apps become in order to do new things. The assistant won't have killed the apps - they'll have changed on their own for largely unrelated reasons.
"And El Reg can't help but wonder why informed consent is a concept that requires scare quotes."
Simple. If they have to act responsibly, that's a barrier to arbitrary action, usually intended to benefit them financially in some way, even if indirectly. I very much doubt Facebook worked with these folks purely out of interest in science, even if they received no or minimal compensation. Knowing these sorts of things helps them figure out how best to monetize their users.
If companies, and/or the academics working with them, had to request consent or, failing that, be completely barred from such research, they would get to investigate how to monetize more slowly, or not at all - so they resort to exactly this sort of red-herring argument to try and hedge against that risk.
It's really quite disgusting.
Other posters have mentioned this, but I'll pile in. If some company like Google has a wide-ranging amount of information about my interests, my communications and my movements, it's not much consolation to me that these private companies don't want to abuse that power the way a government might. The reason is that the government has the power to demand that information from the company (or to take it without the company's knowledge) for the sake of whatever it is the government might want to investigate me over.
As we've seen with the Snowden releases in the US in particular, the very act of the government tapping corporate intelligence stores can be contrived to occur in such a way that almost no one outside the channels that make it possible knows about it, and anyone involved who would like to make it public is under threat of severe criminal prosecution should they try.
It's fine and well that our governments have not, seemingly and so far, meaningfully abused the civil rights of their citizens using the information they now have access to. That is not a sufficient defense of the practice. The reason democratic nations have historically sought to rein in the knowledge freely available to a government's apparatus about the people governed is to limit the *possibility* of government abuse.
Quite simply, if a system that can be abused is left in place long enough, two things happen. One is that many of the governed people become inured to it, assuming it's OK because "it's always been like that". The other thing, which often comes only after the first is established, is that someone *does* abuse the system. It's human nature - either someone eventually won't be able to resist committing abuse, or someone will seek a position of power *specifically* because they recognize the potential for abuse they can execute.
As a species, we humans like to live under the conceit that conditions we enjoy now will persist into the future without bound - that a historically decent government will never change to be otherwise. I think this is eminently foolish.
I'm hardly a doom-sayer, but it's hardly impossible for me to imagine future situations of civil disorder, most believably due to some natural disaster or resource constraint (water, power, food, etc.), where governments of what are today democratic and free societies might resort to more totalitarian means simply to try and keep things under control. (Martial law.) In situations like this I believe you very much would not want to mix in abuse-prone tools such as a way to track basically everyone all the time (pervasive cameras, facial recognition, cell phones, centrally managed driverless cars, etc.). It's unwise to trust leaders with such tools to do the right thing in situations where civil rights are so specifically curtailed. History does not show good precedent.
On that note, one thing I'll disagree with in the original article is the notion that we owe Orwell for the caution of people my generation and older. While 1984 certainly stood out for some time and doubtless influenced many readers, for cautious people I know it is real, historical events that serve as more sobering reminders of what abusive governments can do with the power of extensive information about the people they govern. The examples set by the Soviet communist party, Nazi Germany, and the Red Scare in the US are much more frightening to me than any fiction. Imagine those regimes or movements with access to the information they could gain on their citizens today, especially if those citizens were raised to use the internet with limited caution.
Hope for the best, but plan for the worst. Enabling pervasive surveillance is unwise, even if it is not the government who directly surveils us.
I am in the US, but I work for an international company for which the UK members have a strong leadership presence. My boss' boss is in the UK.
I'm a generalist. I know something about a lot of different things, can use that to solve lots of problems, or create lots of solutions. And I've got a job where that's basically what I do professionally, where the breadth of my skills is basically specifically why I'm valued, and I'm paid very well. I've been where I'm at for some time, though, so I can't speak to how easy it is to find a job like mine, and it's something I do worry about should this job disappear or become unsavory. I *can* tell you my team wishes we could find more people with a breadth of skills.
Where I fit in best is in a place where specialists exist in their own silos. You have developers, DBAs, sysadmins, storage teams, and networking gurus. In places that divide specialties up like that, you often benefit from someone who is a bit like a business analyst, except instead of being the interface between developers and customers, they face the other direction, interfacing between developers and infrastructure / middleware.
What we find is that the developers often are wildly ignorant of the implications of the system's (virtual) physical design. The infrastructure teams often have no time to learn the ins and outs of the applications, in order to tune their systems for them. I help the developers create systems that won't be rubbish on the basis of the systems on which they run, and help the infrastructure folks design hardware that won't be rubbish for the needs of the application.
The challenge is in finding an organization that values this role. Not everyone does, and that's clear even within my company. What seems to make the tuning and problem solving skills valuable to people is when they're strapped for budget and they need to expand their system or make their existing scale of system run better. Tuning things can increase concurrent users on existing footprint or reduce infrastructure for same performance. And even in a cloudy context, the ability to achieve those things can be valued. But I fear that may be rare.
I would never do project management. It has nothing to do with why I'm in IT, and requires primarily the exercise of people skills, not technical ones. If I lost this position, I would look for a job as a systems architect - someone who looks at the big picture of software, infrastructure, APIs and whatnot and assembles it into a solution. I see a PM as someone who drives all the people involved to implement that vision. I would want to be the person creating the vision itself.
I would like to see a Linux OS take off and get real market share and mind share in the broader device market, but as a power user, I would never personally want that Linux flavor to be Ubuntu. Of late their party line is that they know what the users want better than the users do. And *maybe* they do for users in general, or for new users that (somehow) find themselves using a device running Linux. But they sure as hell don't know what I like, because I don't like a lot of high-profile things they've done in recent Ubuntu releases.
That attitude of knowing what I want/need better than I do myself has been classic Apple for ages now, and it always kept me away from them. That attitude was recently adopted, to much angst, by Microsoft with TIFKAM, and it kept me away from Windows 8. And it keeps me away from things Canonical in the same way.
You can be pioneering without immediately throwing old paradigms under the bus. You can be progressive without remaking *everything* from scratch without a smooth transition. Does it take longer? Most likely. Does it cost more? Possibly. Will it give you a happy base of users who you help through the transition? I think it would.
It's great to be looking at the long road where today's 6-year-olds will be the device consumers of the future, but the fact is that right now we've got a ton of users bridging the desktop and phone/tablet paradigms. Making *them* want to use your product would get you significant market share which those 6-year-olds would grow up seeing. I don't understand the strategies that seem to decide those transitional people are irrelevant and/or can be made to change their opinions en masse. Given how poorly it seems to have gone in general, I think I'm probably not alone in feeling that way.
I'll never see these on my PC, or if I do, it'll be a brief situation.
I'll likely just use my mobile client less. It's getting to the point that I use its updates notifications to go look at FB on my PC, heh.
I'm not sure the comparison with what happened to minis is 100% apt. That seems to compare better with predicting that in 5-10 years those using PCs might have them based around low-power ARM chips, which is certainly not impossible. The comparison between slabs and PCs (and notebooks) is as much about form factor (keyboard+mouse, etc. vs pure touch) and user interface as about the architecture of the system.
Now, the shift between on-client and remote processing by your application is possibly apt in that comparison, but that is actually not wholly integral to the PC vs slab debate, at least IMO. Modern slabs come with enough grunt to run certain things locally rather well *if you buy a high-end one*.
The migration back to the "cloud" version of a client-server model for various compute is partially about keeping the price of slabs low for growth into developing markets but also about control (vendor lock-in) by large corporations, the attraction of locking people into subscriptions versus one-off purchases, benefits of economies of scale, and the idea that it's nigh impossible to pirate stuff that doesn't execute locally.
I don't really buy into the full spin of the article, but I do think some aspects of the impact of slabs will hit PCs as we know them.
The modern PC has become a fairly durable good - something you buy and keep for 5+ years rather than something people replace every 1-3. Why has that happened? Partially maturity, partially because the incremental performance for buying a new one has been dropping for a while - the bang for the buck on an upgrade has decreased such that it doesn't make sense to do very often for all but the most hard-core users. I'm a pretty serious power user who builds his own (and family) PCs from parts and I expect my current PC to last me 5 years before I really want a new rig.
This means that, for branded PC makers, growth is slumping. Windows 8 and its idiotic handling of what was probably a generally wise paradigm shift didn't help, but it didn't cause this growth slump. Market saturation with longer-lasting goods did. In this case "market" means "all the people who can afford and make use of a desktop/laptop PC". Most of the people who could/would own a PC already have one and have no compelling need for a new one.
Yes, there are lots of *other* potential customers out there in the world, but for a variety of reasons they weren't going to be able to afford a PC (or have a place to hook it up) for the foreseeable future.
And then, along came a sort of PC-like thing that people don't have to "hook" up in the same sense. All they need is a cellular network capable of decent-ish data rates and a place to plug it in overnight for charging. Still high demands in some part of the world but vastly more common than the home office niche a lot of us have in the developed world. And it's a lot cheaper than the cheapest PC or laptop.
Companies as a rule chase growth - if they're public they basically are obligated to do so, because their shareholders and often the law demand it. If growth in PC sales is collapsing, PC makers *have* to chase the next growth market, even if it's not PCs.
Does that lead to a "post-PC era"? I doubt it in the sense that some people seem to like to claim, but the reality is that if enough of the big companies wither their PC production, that has a knock-on effect on price and availability of the things that go into PCs. As they exist today their parts are cheap and generally modular because umpteen different companies make them. What happens when that changes?
I think that market group-think leads to this notion that PCs will completely disappear, and I think that's dumb, and even this article doesn't claim that. I do think this shift in where growth leads makers will impact who makes the PC of the future, how easy it is to modify or build bespoke, what OS is available to run on it, and (perhaps most importantly) what applications are made available for those OSes. Personally, I think it looks a bit grim, but not completely awful.
Here's hoping this means he's off flying somewhere better. :)
I've only ever experienced the man through TV interviews, but he was rather inspirational, and indeed seemed to me to have had a fascinating life. Sounds like he would have been a terribly interesting person to meet.
You use spaces if you have multiple people editing the same files in mixed environments, like having some people using various Windows- or X-based editors, some in terminal vi or emacs, etc. Tabs have arbitrary visual width, and you can tell people to set whatever they use to display a certain tab width, but you cannot always enforce that they all do it. So then you get people mixing spaces and tabs (often unintentionally) in ways that look aligned in one editor but not in someone else's.
Spaces avoid this ambiguity. No editor displays five spaces as anything other than five character positions, unless, God help everyone involved, someone is editing code in a proportional font.
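The tab-width problem is easy to demonstrate. Here's a quick Python sketch (Python only because str.expandtabs makes the rendering visible; the problem itself is language-agnostic):

```python
# Two lines an author "aligned" in an editor configured for 4-column tabs:
tab_line = "\tx = 1"      # indented with one tab
space_line = "    y = 2"  # indented with four spaces

# At a tab width of 4 the indentation looks identical...
assert tab_line.expandtabs(4) == "    x = 1"
assert space_line == "    y = 2"

# ...but at the common default width of 8, the tab line drifts right
# while the space line stays put. Hence the misaligned mess in the
# next person's editor.
assert tab_line.expandtabs(8) == "        x = 1"
```

Spaces cost you the "everyone picks their own indent width" flexibility, but in exchange everyone sees the same file.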
I just don't find it practical entering a complex password on a mobile device that locks frequently (as many corporate-issued ones do, per pushed policy). I have to do this on my corporate Blackberry, and that's barely tolerable because I still have one with actual keys, and because my employer doesn't enforce very onerous password requirements and has a low-frequency change policy for such phones. Entering a complex password on a phone with a screen keyboard, given how often a good password requires mixed case and punctuation (and how most soft keyboards implement this), would be a nightmare for me. I would come close to spending as much time unlocking the phone as I did actually using it to reply to mails and text messages.
I'm also not a fan of search spanning my machine and Bing. I either want to search my machine, or the internet. I cannot remember ever having had searching both at the same time be a use case I actually needed.
I assume, perhaps incorrectly, that it can be disabled, but still.
(The author's use case of wanting it to search in calendars seems pretty wild to me too.)
As a learning language, I believe excess focus on Pascal, Delphi and even Java hobbles you with respect to what's going on under the covers. That may not seem important to many people - too many, in my opinion. But in my experience, the really excellent programmers are the ones who understand at least some of the low-level implementation details of how languages, and indeed computers overall, get done what you ask of them. Languages like Java work hard to hide that from you by design.
Learning C and C++ forces you, as a matter of the language basics, to learn about things like referencing memory more or less directly via pointers and low-level arrays. Naturally, using these tools can be risky, and poor use leads to all manner of awful bugs. Indeed, that C/C++ can permit such horrific bugs is the main reason we have languages like Java, whose core design philosophy is to "protect" the programmer from these things. But by using languages like Pascal and Java as your sole teaching tools for a new programmer, the students don't learn nearly as much about what's actually being done by the resulting program as they would have writing in, say, C/C++.
I believe this deprives them of all manner of useful knowledge they can later bring to bear to debug deeply buried issues or improve application performance. Not because they're going to hack pointers into a Java program, but because they often have a better grasp of things like stack and heap maintenance, why things can go wrong with loading in foreign libraries (via JNI and the like), and so on.
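Amusingly, you can get a taste of the pointer concepts in question without leaving a high-level language. A toy sketch using Python's ctypes module, which exposes C-style buffers and pointers:

```python
import ctypes

# A mutable C-style byte buffer: "hello" plus a NUL terminator.
buf = ctypes.create_string_buffer(b"hello")

# A raw pointer into that buffer, C-style.
p = ctypes.cast(buf, ctypes.POINTER(ctypes.c_char))

# Writing through the pointer mutates the underlying memory directly -
# exactly the sort of thing Java's design goes out of its way to hide.
p[0] = b"H"
assert buf.value == b"Hello"
```

It's a contrived example, but a student who understands why that write works (and why writing past the end of the buffer would be a disaster) is exactly the student with the grasp of memory I'm talking about.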
The above can be covered of course by courses in assembly or compiler design, but not that many CS students I know take those if they can avoid it when their goal is to get marketable skills, as opposed to pursuing computer science directly. They usually focus on programming classes, which usually focus on higher-level languages used in business.
Learning algorithms, design patterns and the like is essential, and I hardly think
We choose less surveillance.
"And yet the PS3 had a bluetooth remote control 6 years ago"
Which is frequently considered *a detriment* because nothing else can control it, because everything else is still designed around IR. That perspective is *exactly* why this hasn't happened yet.
It seems to me the idea of non-IR controls has been gaining steam the last few years, but it also seems a bit more likely to me that this will manifest over full-blown WiFi than over Bluetooth. Every controllable device I have but one is now attached to my home network, even when I didn't specifically choose it for that functionality. Granted, not everyone has a home WiFi network, so that still might let Bluetooth fill that slot.
(The one un-networked device? A flat-screen TV that was a 3-ish year old model when I bought it new, specifically chosen because it didn't need smart features and was thus far cheaper. It doesn't need to be smart - every single thing I have that can feed it a picture is an internet-connected smart device - even the receiver that sends A/V from my other sources to the TV.)
I can see SaaS as very compelling for small and medium size businesses. For large, complex organizations with a lot of IT already in place, there are frequent needs I don't often see mentioned that, so far, limit how useful replacing on-premise systems with SaaS would be. For example, where I work, a common thing with on-premise solutions (licensed or home-grown) is that they are integrated with one another. From what I've seen of most SaaS offerings, integrations and customizations are usually extremely limited, if on offer at all.
While integrations like this are sometimes a pain to create and maintain, they usually exist for a reason: reducing duplication of data entry, improving automation, etc. These are real needs that still need to be met in big organizations, but most SaaS offerings seem to exist on a pedestal inside a walled garden, making using them for this awfully challenging. Yet in a world where SaaS exists, it seems overkill to me that an on-premise solution should be the only way to address these kinds of needs.
I think this is a problem of product maturity that will eventually need to be addressed. I expect some future wave of cloud marketing to make a big deal of how various SaaS products expose APIs (possibly for added cost) that allow this sort of integration with other products - even SaaS products from other providers. At least, I hope to see that some day.
No, really, it's not that simple. I know there are plenty of people who *think* it's going to work like that, but they won't be hiring me, and I'm not at all concerned about that. I recognize that not everyone has that luxury.
This isn't quite as bad as people insisting on access to personal social media log-ins, but it's certainly in that general area for me. With the social media access thing, if I'm asked or told about it in an interview, it means I'm done with the interview and I'll keep shopping. Mandatory BYOD means I'm going to have a lot of questions, and *might* keep looking depending on the answer.
The issue at the end of the article works in reverse, especially in light of "BYOD" really meaning "Buy your own device from a list of certified options". If I buy a device for a job and don't end up staying, I'm stuck with an out-of-pocket investment in a device that might not be something I would have chosen without that job's likely restrictions on what I can buy.
On the happier note that I stay at the job, this doesn't save me any money, because I'm not going to use a work device for personal things. Sure, if it's a phone I might be OK making calls on it, but at this point I don't have any interest in using a work device to chat with the gang, post my personal opinion on forums under a pseudonym, or exchange racy texts with a girlfriend. If I want to do any of those things, I'm going to want a truly personal device anyway, so BYOD becomes a pure cost overhead for me.
Maybe eventually I'll be mollified by the internal work/life firewalls of the sort Blackberry and Samsung are working on, but at this point they are too new and untested for me to trust very much to truly keep my prospective employers out of my personal life.
If the cops actually need to go 150+ MPH to chase someone down, I rather wonder what they intend to do with them once they catch up to them in this particular car. I can't imagine they plan to ram or PIT maneuver them with a 250-300k vehicle. (That's ignoring the question of what harm making a car spin out at those speeds might result in.)
I suppose this could just be used to follow the speeder doggedly, and thus ensure the speeder actually can't get away, to be bagged (one way or the other) when they finally give up or eventually crash. In that case, the cops better hope whatever they're chasing doesn't have dramatically better MPG than their 12-cylinder beast. :)
Totally agree. We're where we are currently because we've had a few generations of people who grew up learning PC-related tech in large part because it was abundantly available to everyday people. While I'm not so dense as to claim that building on the shoulders of a nearly totally cloud-centric future is impossible, it leaves me wondering very hard where the innovations that expand those cloud providers into the next big thing after that will come from. We already went down this road with "wizards" who were the only people who could manage proprietary systems in the form of things like mainframes. The PC era did a pretty good job of sweeping that away. What will be the cloud equivalent?
There's a preposterous disconnect between the potential penalties for things like this and those for actually horrific crimes like murder or rape, or for things like selling hard-core drugs. The claim that they're some sort of prosecutorial "bargaining position" is small comfort, since the very existence of the penalty means a prosecutor has the right to ask for it and a judge is free to apply it. The idea of actually scaling the penalty to the harm done (or threatened) seems right out the window. This, sadly, should be no surprise, as I feel we have been seeing similar problems in copyright infringement cases as well.
For folks pointing out issues with drivers on laptops in particular, doing an OS install of Windows on a laptop from base media is rarely a walk in the park. In my experience, laptops have historically been the biggest offenders for having odd-ball, even model-specific hardware for which you need to obtain the driver *from the laptop vendor*. On such systems, Windows only works "out of the box" when it comes pre-installed (or primed in a way that it will complete the install off a disk that includes the needed drivers). Otherwise, you need to go trawling the internet looking for the right driver download.
Yes, in such cases you'll often struggle to find a Linux driver for the same devices. You'll often also struggle to find drivers for older Windows OSes (even XP at this point, for new laptops), as well as Mac OS compatible drivers if it's not actually a Mac laptop.
I don't want to ride this too hard, but I'm pretty curious about some of the complaints about how Linux distros apparently constantly change paradigms like how to copy/paste, drag and drop and the "File" menu bar. If you're talking about a user who struggles with mapping those things across new OS variants, surely you're not using that as a defense of commercial OSes? Yes, their change cycle is a lot slower than that of Linux in general, but Apple has radically changed their OS's user interface several times. They're also mildly notorious for making new OS versions incompatible with software from previous versions. Microsoft has tended to be more stable in terms of both interface and software backwards compatibility, but they aren't innocent of interface shifts that are crippling to folks who struggle with basic computer operations. Windows 8's interface-formerly-known-as-Metro is a wild departure for folks who know how to use Windows XP or 7 based mostly on rote. And even before that, things like the Office Ribbon interface were severely disruptive to non-computer-savvy folks I know. (Hell, it was disruptive to savvy folks, too.)
If you're dealing with folks like this, you don't change their OS unless you absolutely have to. Odds are, they aren't installing their own OS regardless - they're getting it shipped on a PC or having friends or family do it for them. That means the glut of variants of Linux and the deep vagaries of configuring the OS (any OS, not just Linux) are basically non-issues. If it breaks, someone else is likely going to fix it anyhow.
And if they're actually more capable computer users than that, major distro Linux installs from, say, 2008 onward are not that different from the one you get with Windows, and for the last few years they actually come with good device drivers for mainstream video and sound devices.
Is Linux awesome for "normal users", if such a thing exists? Probably not. Are the major distros nearly as bad as some commentators here seem to suggest? Not by a long shot.
Desktop Linux could probably stand some better end-user friendly distros, but ultimately, what MdI is happy about is that he got his hands on an OS that someone else made water-tight and locked the rest of the world out of serious tinkering with.
That's pretty much the polar opposite of what Linux has been for ages. The fact that it's wide open to tinkerers who think they can build a better mousetrap is *why* it has a bunch of variant distros. Sure, it also leads to some annoying schizophrenia among its tools even within a given distro, and yes, that all contributes to it not always being that user friendly. But, honestly, if you want an OS that you're never going to tinker with, that you want to "just work", and (most importantly) for which you can call someone for help if it doesn't "just work", you really are probably going to be happiest with a commercial OS. Not because they're better, but because they're closer to what you want.
Apple's OS isn't intrinsically superior. It's just aimed at a different market segment.
I agree it's not a PC replacement for most of us, but that's what makes the comment about using it for something like CAD work pretty laughable.
Current touch interfaces would be a truly horrific way to try to do serious 3D CAD work. A workstation-class PC is also likely to completely blow away any tablet on the market (iDevice or not) in raw grunt power for a lot of things like 3D rendering or even video encoding.
Maybe someday, but right now the suggestion that we use a tablet for anything other than social and multimedia consumption seems pretty silly. I don't even particularly like using them for office productivity stuff like spreadsheets, but that's down to either having to use touch or at least lacking a mouse, rather than tablets lacking the power for most of that.
"That's why the OCP IT model will fail in the real world outside the hyperscale data centres of Amazon, Facebook, Google, Yahoo and the other enormous cloud IT service operators."
It seems somewhat likely to me that they are envisioning a future where that's where almost all of the compute hardware actually lives, and thus where the economies of scale end up focusing the market. Personally, I don't think such a one-sided outcome is going to happen for a very long time, if ever, but a lot of hype seems to assume it's inevitable.
"Might I recommend Windows 8?"
This may have been intentional sarcasm, but for Windows users, going to Windows 8 is not totally unlike the switch from Gnome 2 to 3. The interface-formerly-known-as-Metro is anathema to power users who want to heavily multitask, and one of the first things such power users do is disable as much of it as possible, which is mostly achieved with 3rd party software, as actual end-user-facing options to disable it do not exist. And the result is still less productive for a multitasking power user, IMO, than prior versions of the OS.
I'm sure a lot of people will be stuck sucking up those ads, but I bet it will generate a lot of ill-will. I can't say I blame Facebook for trying - they're literally obligated to try to monetize their users - but I don't expect users to like this particular method. It'll be interesting to see if network effects keep users stuck there, or if such blatant ad peddling drives them elsewhere, perhaps to Google+. Google, to its somewhat underhanded credit, is generally more subtle in how it monetizes users through cross-product tracking. They show us ad videos on YouTube, but hey, at least you went there looking for a video.
Personally, given that video is not fundamental to Facebook's core use, I'm confident I'll be able to strip out such nonsense on my desktop browser. Given that I run a miserly data cap on my mobile data usage, if they try to force it on me as a smartphone user I just won't access Facebook from there. (I really think that forcing video viewing on people using a mobile data connection would be an outrageous PR fail for them, though. A "video ads on wi-fi only" setting would probably go a long way to mitigating that.)
It's hard to say there's been no evidence, but based on my own reading, there seems to be no accepted scientific or medical evidence indicating that gaming can rise to the "addiction" classification. There are apparently some clinical studies claiming games can be formally addictive, but all are apparently considered anecdotal for various reasons, and not firm evidence one way or the other.
On the other hand, there are a fair number of papers and write-ups on how to tap into the impulsive tendencies of people in general which, when incorporated into game design, have been shown to increase the time players spend playing, how long they stick with a game, or both. Again, these are surely mere anecdotes from a hard-science standpoint, but it gets a bit hard to ignore them when game makers talk about using them successfully. Add in that you occasionally get people who die in (usually East Asian) gaming parlors because they would not leave a game for three days straight to get a drink or use the facilities, and it makes you wonder.
Consider this article: http://www.gamasutra.com/view/feature/3085/behavioral_game_design.php?page=1, which is written by this guy: http://www.gamasutra.com/view/authors/205411/John_Hopson.php . This kind of writeup certainly makes it easy to draw comparisons between games and a virtual Skinner Box for humans, though that direct comparison is often challenged.
When you consider that gambling presses many of the same psychological levers, and addiction to gambling is considered a real disorder, I'd say "gaming addiction" seems fairly plausible, if not formally "proven".
I'm neither a would-be Luddite nor, typically, a doom-sayer, but topics like this do suggest the risks that come with our accelerating progress in computing power and data collection. We are simultaneously armed with ever-increasing knowledge of how the human mind works, both biologically and psychologically, partially thanks to access to massive computing power and data collection, and that same computing power promises new and sometimes scary ways to use or abuse that knowledge. There are certainly gloomier side paths to consider off of this topic, such as how all this computing power and data collection can be used for inappropriate levels of surveillance (government or not), or how governments or corporations could use similar kinds of behavior-shaping to make us more compliant, particularly when talking about our school-age kids.
It certainly seems possible to me that right now our capabilities are outstripping our foresight and thus our ethical and legal frameworks about how to limit their use. That's probably always held true - I can't think of a time when legal frameworks didn't seem to lag cutting edge processes. Still, with the vast computing power and number crunching we see not just present but accelerating, the gap seems to be growing larger, faster. There might be some structural way to try and control that, but I'm certainly not sure what it is, or, honestly, who could be trusted to adhere to it.
Anyway, on-topic, Matt, I don't envy you the challenge you face, but it sounds like you're talking with your son about it, and trying to let him help drive change rather than dictating it to him. I applaud all of that, and am impressed that you'd share this rather personal story here. I wish your family the best of luck.
This makes sense to me. Despite being a self-proclaimed technophile, I pay for a skimpy data plan from AT&T in the US, and mostly avoid heavy data usage (such as installing apps, streaming media, OTT VoIP, or viewing lots of images) when I am on a cellular link. When I'm out and about I mostly use my phone just to text or talk to people (in that order), or for things that don't need connectivity, like shopping lists.
Now, when I plunk down in a restaurant or some such, especially when alone, I'll probably check my Facebook, email, etc, so I do consume bits and pieces of data doing so. But if there's WiFi available, I will use that instead, though I might be a tad more cautious about what I use it for. I probably wouldn't manage my billing or bank account on a public hot-spot, for example, but I mostly tend not to do that stuff over cellular, either.
I don't presently game on my phone, so that doesn't enter the picture.
Both at work and at home I have good WiFi coverage, so basically my phone only needs to consume cellular air time when I'm going to/from work, running errands, or out on the town.
I see this as a far-off outcome, with other pressures, like solid-state storage, likely to be of more immediate concern to hard disk manufacturers. If hard drive demand plummets, I think it will be because the technology of storage itself shifts, not because of *where* the storage units live.
I see three primary barriers to a radical shift in consumers migrating storage to "the cloud". The first is convenience. If it's not fast or reliable, then average consumers will not use it as a complete replacement for local storage, no matter how carefree they are about the data itself. Anything that I would put in the cloud but want frequent access to needs to be small enough that I can download it in trivial amounts of time. The less often I need to access it, the bigger it can be, but there are limits. If anything is so big it takes me hours to upload, forget it - I'm not going to fool with it. Bear in mind that most people have asymmetric upload/download speeds - I can download nearly 10x faster than I can upload.
The next barrier is trust that the provider is a long-term partner. When I put money in a bank, I don't expect that institution to shut down any time soon. Setting aside whether or not that's a safe assumption about banks in today's world, it's still one that I think most people make. I don't yet have that sense of stability with remote storage providers, with the possible exception of Amazon. (While some storage companies use Amazon as their underlying storage platform, that doesn't mean I would benefit fully from Amazon's corporate stability.) Also, unlike banks, storage companies aren't insured in ways that ensure our data will be returned to us or moved to another storage company should our chosen provider fail. With time, and as the industry matures and consolidates, I'm sure this will be less of a concern, but my own feeling is that there is no company out there with enough perceived stability that I would treat it as a permanent solution. While even a local solution has uncertainties associated with it, some can be mitigated through design, such as Drobo or other NAS products. Even if the companies that make such products go bust, I would already have their hardware on-premises, and could replace or upgrade it at a time of my choosing (barring failure), rather than be at the mercy of a distant board of directors, the economy, etc.
Finally, there is the matter of trust around the content itself, and how secure it is on a remote host. I have data on my personal systems that, if I have my way, will never see itself stored on any media that leaves my home, or at worst, perhaps a safety deposit box at a local bank. While I realize that not everyone feels this way about any or all of their data, the fact that I do means that I struggle with the idea of explicitly putting its storage in hands I cannot know. At a minimum, I would want such content encrypted end-to-end, which gets back to the point about convenience. If applying security that will make me comfortable with remote storage makes dealing with it too slow or inconvenient, then local storage is going to be more attractive.
Most users are consumers of data, and less so producers of it. Many consumers are also willing to rely on external providers to keep things they like readily available for them. This is the basis of streaming media - few copies, many views/listeners. But some of us both produce a lot of media (I toy with 3D graphics) and also dislike that external media providers may fold or otherwise remove something we like to listen to or watch. Having my own copy locally means I control my destiny, but it also means I must account for that storage myself, and risk loss if I have a catastrophic loss of my home. Everything has trade-offs. My preferences and choices lead me to want local storage. I suspect I'm not alone.
Buzzwords aside, there are some very good observations here. Whether one lays it at the feet of "the cloud" or what-have-you, the combination of greater compute power (faster CPUs, network, more memory, etc.) and improving IT efficiencies in deployment of that power (increased automation, virtualization, etc.) mean that IT departments need to be delivering more "oomph" in the same timeframe, or there's no return on investment in actually having all those new capabilities. And my experience does agree that taking advantage of this is going to usually be disruptive of existing IT culture, all the way up the IT delivery chain from VMs and network to middleware stack to the main application developers.
One thing, though, doesn't change - the need for quality requirements. All the fancy IT infrastructure in the world is only going to let someone with poor requirements deliver the wrong product faster. That may seem like a good thing, but if your business customer is choosing between you and another provider (internal or external) and they too share your turn-around speed (and eventually, someone will), then the best investment for the customer is the IT delivery organization that gives them the product they need (or at least want), and not the one with the coolest IT toys.
There's going to be change, and *maybe* that change will disrupt monolithic software providers. But developers? Someone has to create, maintain and extend the components being discussed that people would access via APIs. Heck, someone has to design the APIs themselves, let alone implement them. Developers and designers will have a place doing just those things.
And I think the claims that "the world" will be writing tomorrow's software are a bit overblown. The vast majority of people don't want to fiddle with APIs, and use of even highly graphical tools for creating programs has never really taken off.
I don't see the change as all that, personally.
It's an early but important step. I think it's likely to be the first real step onto a slope that leads to not only enterprise VDI, but also consumer VDI, where even people who want more than a browser-enabled media device (like a tablet) *still* need just a browser-enabled media device, to access remote compute resources that run in remote data centers. More than any other tech we've seen yet, it allows gaming and other locally resource-intensive programs to become remote apps.
That has the potential to further current trends we already have in consumers shifting to lower-power, more portable devices, which has implications for the costs of heavier-duty kit. If almost no one is buying full-on desktop PCs, they'll become expensive niche products, assuming anyone sells them at all. (I do assume there will be some legitimate need for them, and someone will meet it.)