Well, it's obvious to any bloke who ever had to buy smellies for their better half...
Then the OS can be upgraded without even consulting the hardware vendors - just like Microsoft does with Windows on PCs.
Yeah, I know, not quite entirely aligned with this article, but...
All the vendors would have to do is provide the "drivers" (for want of a better word) and perhaps a partition where they can add their bloatware (to be disabled instantly) and their home screen app.
I'm probably being naive here, but M$ managed it...
That means manufacturers are forced to support products for a minimum of 5 years, be that with spare parts or security updates. I'm rather surprised to find that even the EU doesn't appear to have such a requirement.
I know I'm not the typical punter because I pay up front for my phones and I keep them as long as I can. I'm no "oooh, shiny shiny" waste-ard and look to minimize the amount of landfill I generate. It's a shame that Samsung will be forcing a device change on me just so I can benefit from security and other fixes in a product I bought only a few short years back...
If you're doing serious development, you need a desktop. If you're on a tablet (and therefore unable to use a desktop-based IDE) you're not doing serious development. You might review something. You might even do a couple of lines of code to fix something, but you can do that through your cloud-based git repository UI without the need for a browser-based IDE. I simply don't understand the browser-based IDE concept. Unless, I guess, you have a Chromebook. But again, that isn't a serious developer's piece of kit.
As to developing Java - just use the best IDE (IDEA) and be done with it.
Hmmmm. Maybe I'm a fossil, but this seems to be a solution looking for a problem.
I regularly use certain Stack Exchange sites, but was entirely oblivious to these goings-on. The article was, therefore, eye-opening and I welcome it.
Personally, I see no need or place for pronouns, at least on the sites I use, and (given current ridiculous social norms) feel I should be offended on behalf of people who see pronouns as divisive. But honestly, meh.
In terms of the Stack Overflow/Stack Exchange reputation, it has taken a dent, but for me it is almost all about user-generated content anyway, so it's really the quality of the posting that matters rather than any mutterings from corporate.
Regarding user reputation, as @jglathe said, it is really, really hard work to build reputation on the sites I use. You find old hands with 8+ years up in the 100,000s of points, and folk who joined more recently (1-3 years) but regularly post still down in the low 1,000s, even after what is clearly a concerted attempt to flood the sites with nuggets of their learning and experience. This is, I suspect, simply down to the fact that the old hands were in on the ground floor, answering the "low hanging fruit" questions that naturally gain additional votes over time. In addition, as the sites have matured the questions have generally become more niche (since search-engine results cover those earlier questions well) and therefore attract less reputation, which is a shame.
What's great about git is that it doesn't really matter if the remote repo becomes unavailable for a while because you've still got all those commits in the local clones. Sure it may take a little while to get yourself out of the bind but it is ultimately doable even if the remote repo can't be recovered at all.
It may be a bit inconvenient to be unable to share updates in a team in the normal way but there are simple ways around this (e.g. using generated patches and the like).
I'm not saying I want downtime from that cloudy git, but actually there's no real chance of missing those pulls or losing commits unless you are a complete noob and delete or mess up your clone.
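For anyone who hasn't tried the patch route: here's a minimal sketch of it, driven from Python for the sake of a runnable example. I'm assuming a branch called main tracked from a remote called origin (both names are my assumption, adjust to taste):

```python
import subprocess
from pathlib import Path

def git(*args):
    # Thin wrapper: run a git command and fail loudly if it errors.
    subprocess.run(["git", *args], check=True)

# On the clone that has the work: export every commit the (dead) remote
# hasn't seen as mailbox-format patch files. "origin/main" still resolves
# locally because it's just the state of the last successful fetch.
git("format-patch", "--output-directory", "patches", "origin/main..HEAD")

# Ship the patches/ directory around by email, USB stick, whatever, then
# on a teammate's clone apply them in order; authorship, commit messages
# and dates all survive the trip.
for patch in sorted(Path("patches").glob("*.patch")):
    git("am", str(patch))
```

Once the remote comes back (or a new one is stood up), everyone pushes as normal and the history knits back together.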
1. They are deterministically computed, not random. Yeah, the likelihood is slim but it COULD still happen. If it does, this language is hosed. My point stands.
2. My point is that the original "function" was referenced by one hash and now it has another. I think you are implying that you have to build/deploy all the code again so that the referring code correctly references the bug-fixed code, but I'm not sure. That implies you must "own" all the code in the solution, since it all has to be re-built and re-deployed to resolve all those references - so don't even think about models like shared libraries, right?
(And why hide behind anonymous posting?)
A couple of things sprang to mind when reading this (and, no, I haven't gone about finding out any more by reading any of the cited links):
1. How are they going to deal with hash collisions? Hashes are NOT unique identifiers; they are semi-unique at best.
2. How do you ensure that a bug fix, which changes the hash of a function, is correctly applied (updating all references to the broken function with the new one)?
Yeah, my brain isn't big enough to handle Erlang so I'm guessing it would go "*phut*" in Unison...
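To make points 1 and 2 concrete, here's a toy sketch of a content-addressed code store in Python. This is entirely my own illustration, not how Unison actually works (it hashes the syntax tree, not raw source text), but the shape of the problem is the same:

```python
import hashlib

# Toy content-addressed code store: definitions are keyed by a hash of
# their content, and callers hold the hash rather than a name.
store = {}

def publish(source: str) -> str:
    digest = hashlib.sha3_256(source.encode()).hexdigest()
    store[digest] = source  # a collision would silently clobber a definition
    return digest

buggy = publish("inc x = x + 2   -- oops")
fixed = publish("inc x = x + 1")

# The fix gets a brand-new hash, so nothing pointing at `buggy` moves
# automatically: every caller holding the old hash must itself be
# re-published (acquiring a new hash of its own), and so on up the
# dependency chain - the rebuild/redeploy ripple described in point 2.
assert buggy != fixed and buggy in store
```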
All of those things can happen to on-prem data and servers. And you have to employ a sysadmin to do stuff like backups, configure/update the OS and security, handle disk management etc. And you run the risk that your sysadmin is happy to drink coffee in the server room, propping the cup on the KVM switch at head height on the pull-out tray in your server rack, then absent-mindedly pushing the tray back in and causing a coffee-coloured waterfall down the inside of your rack...
Two years ago I would have said the same thing. Now, having spent that time developing software on the Salesforce platform, I have had to modify my opinion.
There is more to it than simply putting your stuff on someone else's computer. That computer comes with a shedload of software that makes "your stuff" into something more and lets you do more with it.
Sure, someone could sell an on-premises version of such software, but (as is suggested by what happened here) it is probably quite complex and difficult to administer and keep up to date. That means you probably would not bother to have it and would not be able to do nearly so much with your data.
I still have my concerns about cloud and as a private individual without any real need for clever use of my data, I am more than happy with a personal NAS with RAID to store my files, locked down with no sharing across the public internet.
As a corporate user, however, I can see the value. And I wonder how many outages, and at what cost, I would have suffered running my own kit compared with using something like Salesforce.
Americans like Trump would read this and miss the sarcasm. El Reg needs to add special tags to sarcastic statements just to ensure that Trump and the Trumpettes don't take it the wrong way and cite the article as more reason to do some further bashing of "not US so not USeful" companies.
But so many Android users fit the "can't be arsed" profile and just use whatever the default default is that this will have no impact.
This screen will generally be ignored by most, with a simple touch on "Next", so it strikes me as a way to placate the EU while having almost zero impact on Google - other than to get more readies from third-party search engines naive enough to take part in the bids.
An estimated workforce of 50,000 and a value of £3bn puts the "sector value" at £3bn ÷ 50,000 = £60k per worker...
They must pay a lot of their devs a lot less than this. Or there are a lot of part-time staff in that 50,000 headcount.
Or someone's got silvered glass and particulate clouds in spades.
Unpaid overtime, in my experience, is common across all software development and services work. Sure, rarely more than doubling weekly contracted hours for more than a few weeks, but still common.
Enticing people to click through to the LinkedIn profiles, so the owner(s) of the profiles can find out who in tech is naive enough to access them, thereby giving away their identities (assuming the visitors are on LinkedIn, which, from what I can see, you need to be to view the profiles)?
I recall one of my fellow graduate intake starters doing the same on a company DEC 10 during the lunch break on our second day of training, running it up on every terminal in the terminal room. He was caught. They fired him on the spot.
So, "hire immediately" as long as the developer doesn't continue to do that sort of thing post-hiring!
Some of the commentards have been suggesting that "all data" means all the data in the database, which is, of course, multitenanted, hosting data from numerous "orgs" (think of these as virtual servers). My point is that I suspect this isn't the issue; rather, sharing rules were negated within orgs, allowing all users of an affected org to access all of that org's data.
While this could be a significant issue, especially if it affected communities on Salesforce, it is far less of a big deal than some suggest - the data from one business was probably not exposed to other, unrelated businesses.
Reading between the lines and looking at the supposed "all data" grant, in Salesforce terminology that means all data in YOUR ORG that belongs to YOUR BUSINESS. It doesn't mean all data in the DB across all the businesses that share the instance. Can we have some clarification here, please?
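For the avoidance of doubt, here's a toy model in Python of how I understand the tenancy boundary to work. This is entirely my own sketch, not Salesforce internals, but it shows why "all data" should still stop at the org:

```python
# Toy multitenant table: every row carries the org it belongs to, and the
# org boundary is enforced before any sharing rule is even consulted.
ROWS = [
    {"org_id": "org_A", "owner": "alice", "data": "A's accounts"},
    {"org_id": "org_B", "owner": "bob",   "data": "B's accounts"},
]

def visible_rows(user_org, user, sharing_rules_negated=False):
    # The org filter always applies; "all data" never crosses it.
    in_org = [r for r in ROWS if r["org_id"] == user_org]
    if sharing_rules_negated:   # the suspected bug: rules skipped...
        return in_org           # ...exposes everything in YOUR org only
    return [r for r in in_org if r["owner"] == user]

# Even with sharing rules negated, a user in org_A never sees org_B's rows:
assert all(r["org_id"] == "org_A"
           for r in visible_rows("org_A", "alice", sharing_rules_negated=True))
```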
This isn't AI. Stop being a sheep. Stop following marketing's stupidity. Stop abusing the term AI. We don't have AI. We have nothing close to AI.
The fact that this article has "AI" stamped all over it just gave me a red mist in front of my eyes and I couldn't actually focus enough to take in what it was about.
I found, growing up in a family who were avid SF readers, that my mind was opened to many different possible scenarios set in fictional universes but obviously applicable to reality. That reading made me consider the downsides of technologies and other social changes. And that's why I find pervasive and unfettered use of facial recognition quite disturbing.
These sorts of studies are executed in a vacuum of knowledge; they don't try to influence, but clearly they don't inform either (well, it isn't their job, is it?). I wonder if the same results would be observed if an equal number of serious benefits and major issues/disadvantages were listed beforehand?
It sounds (from this reporting) like the skewing in favour of its use seen in the "refined questioning" added only beneficial usages as the scenarios. Obviously, having not seen the detail, I could well be wrong in this assumption. However, if right, that to me makes the study biased and ultimately unhelpful.
I was a young nerd in the very late 70s and had managed to get myself access to the hallowed computer room at my school. This had a real TTY terminal that connected us to the PRIMOS machine at a relatively local university. The university hosted logins from a number of schools, each identified by accounts like SCH008 (which was our login). A rival school in the same town had the account SCH007 and they would broadcast brags to us about how much cooler they were, frequently related to a License to Kill.
One of the super nerds at my school, a chap a year or two older than me, somehow managed to get hold of a useful commands list and found that the FATAL FUTIL command let him mess with other users' processes (I don't remember the detail).
However, the upshot was that it was SCH008 that gained an effective License to Kill, by way of terminated processes.
Just wondering: how do you define "technology consultancy and marketing" in a manner robust enough for legal use?
And talking about "academic", if we had laws that ban folks from working after they have been shown to be peddling hype, that could really shake up the academic world. After all, in academia if you have an idea you want to research you need to "sell it" (probably with plenty of hype) in order to get funding. And that funding may result in no gain other than a negative result. Ergo an academic can no longer work in academia after their first failed research project.
Now I think about it, perhaps that would be for the better.
The desktop client can be configured to auto-answer. This is ideal for people who want to call a friend or relative suffering from dementia or another illness that prevents them from understanding or physically interacting with an incoming call request. All you have to do is make sure the PC/laptop is always on and facing the right way to view the recipient, then set Skype up to auto-answer.
I noticed a few weeks back that Skype (even the desktop client) was not initializing correctly - once up after initial boot and login to the PC/laptop, you had to click the "join a conversation" button to make the client complete initialization and allow incoming calls to be auto-answered.
Now I wonder whether the web-based replacement will work with auto-answer at all.
Marketing is the worst thing that could happen to science. Science comes up with ideas/techniques/goals that warrant specific names. Then marketing comes along and says "oooh, I like that name. What we're doing would sound great if we call it that". And so begins the abuse and dumbing down of the name.
Don't believe me?
What about "hologram" or "AI"? Do you really think that universities are using real holograms to project virtual lecturers or that your smartphone (or more accurately the service provider behind it) really has an artificial intelligence helping you turn your home lights on and off?
I wish someone would "re-educate" marketeers and stop them from co-opting, distorting or plain destroying science. (I suspect that "re-education" would be a term that a marketeer would apply to watching re-runs of The Big Bang Theory because that sometimes uses "big words" so must, therefore, be "educational".)
One last note: journos are not helping here, by sheepishly following the marketing fools.