Re: Suspicious Savings Stats Spotted
My thoughts too. How can employing more people of a specific gender specifically save money?
Next: Campaigns bemoaning the fact that 100% of jobs are occupied by humans.
I'm very, very sure that they aren't. Not sure what species they are, but human they ain't.
How many brontosauruses is that?
Blah, blah blah... but what is that in Register Units? :)
I find IKEA a worthwhile place to look around, but only if you can stand the forced routing through their stores without going on a killing rampage (particularly after the piped music in their car parks). Even then, it's often better to look there and then buy elsewhere. That isn't always the case, though, as IKEA do genuinely produce some gems, but you have to be careful as they also push a lot of junk.
However, I avoid flat-pack furniture as much as reasonably possible, preferring furniture that has rigidity of its own rather than being retrofitted with stiffeners, additional glue and nails, or attached to a wall. The last free-standing bedroom furniture I bought was made to order by a local furniture maker. It cost only 20% more than the equivalent flat pack, was made to our specifications, and is well built, so you don't feel that every time you put something in a drawer the front will fall off or the drawer base will collapse.
These things are a bit of an oddity...
Touch in Windows 8 still sucks even though much of the user interface in Windows 8 (8.1) has been murdered to be "touch friendly". When configuring these machines you frequently have to head back to desktop applications to actually get some things, such as networking, properly configured. In essence, the entire Windows 8.1 shell feels and acts like yet another MS bodge job: massacred by clueless UX developers, further trounced by greedy, clueless marketing department drones, and then rushed out half implemented.
It's not helped that, as noted by Steve Knox above, the specifications are just wrong for the price. For rather less money you can get a considerably more useful and powerful laptop, which sounds daft until you consider that, from experience, most users want a keyboard with their Surface devices, at which point they instantly turn into underpowered, overpriced laptops. The staggering inefficiency of the Windows OS and typical Windows applications really doesn't help either: the raw specifications and processing power do exceed those of competing tablets, but the final result just isn't the same.
As much as I hate the Windows 8 interface on a desktop PC, it does make sense on a touchscreen tablet, as the interface conforms more to tablet expectations (which are rather different from desktop or server expectations). I'd like to be able to run Windows, or at least some Windows applications, in a tablet form factor, and support for multiple users plus easy access to files and documents makes a strong case for them compared to iPads (forget it) or Android tablets (much easier, but no corporate control). But again, even Microsoft's flagship application, Office, is awful on a tablet because the user interface has been massacred - except for the bits they couldn't be bothered to update, of course, which still pop up with the same old desktop windows. The result is that MS Office on a tablet feels like just another MS Office skin-refresh bodge job with restricted functionality and poor usability, so users tend to use the desktop version, and for that they require a keyboard and mouse. Touch does substitute for a mouse in a lot of situations, but pair a keyboard with a tablet and what do you get? Yep, a laptop of sorts...
Azure recently overtook AWS to become the largest cloud Windows Server hoster...
Really? Sources please.
He keeps a box of unnecessary cable extensions, adapters and gadgets under his desk at home, another in the garage and three more at work, despite the fact that half the contents are racing towards obsolescence while the other half is so old and pointless that they could be sold as collectors’ items.
Ah, erm. I think I may, just possibly, on a tiniest smidgin of an off-chance... have a similar affliction.
(see icon). git.
I presume you are implying that your shed has better decor... and if it's a proper man-shed, heating as well.
Oh hell yes, now I remember the resistor colour codes... in black and white. And other gems, such as sample circuits that somehow got mirrored in printing, or that not-so-carefully skipped power-regulating components (or was that a ploy to make you buy more?).
Yep, I see your point about the finite number of bitcoins: eventually they will start to be worth more and more, particularly as, over time, more go out of circulation through being lost or hoarded.
I didn't mean that creating money requires actual money to be printed or minted; it's that in order to have growth, the extra money must come from somewhere. The daft modern fixation with perpetual economic growth is impossible unless the new money this growth "creates" comes from somewhere; otherwise there is in reality no growth, as existing money is just recirculated and gradually lost through attrition, in a very similar manner to the finite number of bitcoins. While the banks like to create money by lending money that they don't actually have (yet), there is only one final result of this scheme, and that is an economic collapse - one of which we have recently experienced, when banks were lending each other the promise of money in a self-perpetuating circle and it caught up with them.
When it comes to it, QE is nothing more than marketing speak for "create more money out of thin air". It's dressed up in various ways, but that is all it is: there are no tangible assets behind it, just promises to honour it.
As for the Bitcoin fans who keep banging on about inflation, Bitcoin has built-in deflation, which is worse. That certainly means it can never be a fully-fledged currency, as it makes proper banking pretty much impossible.
Bitcoins have no more "deflation", or perhaps more accurately "devaluation", than any other currency - probably less, and it's more open about it. In order to maintain the financial illusion that an economy is growing, new money must be continually created. It used to be gold mines providing the scarce raw material for the coinage, but we no longer operate on a gold-based economy. Think about it: if new money were not continually created, then after a while the money would start to pool in the possession of the "rich" and there would be a scarcity of it at the "poor" end, which doesn't really benefit anyone in the end, as those with little money still need money to continue their "low value" transactions with each other. Taxation is one mechanism to force money back into circulation: the taxing organisation takes the money and redistributes it by spending it - which, incidentally, is where the theory of central government spending its way out of a recession comes into play, as it stimulates the economy by distributing money to those who don't have much of it. Importantly, this tactic fails utterly when the money is distributed in a way where its impact on the wider populace is very low, such as the money leaving the country (which is why some countries historically prohibited the export of money).
A currency, virtual or not, only has value if people perceive it to have value. A lot of old currencies no longer exist or have general value because of this lack of agreed value between the receiver and the giver. You won't get far trying to pay for something in Italian lira, East German marks or Roman denarii. Coinage of these defunct currencies has specialist value for collectors, but not a lot more, with the exception of the value of the metals themselves.
A currency is nothing more than one step away from bartering. Rather than having to trade onions for socks, I simply exchange onions for an agreeable number of tokens, and later exchange an agreeable number of these tokens for socks. The important part is that all parties agree that these tokens have a value - in this instance, the number of onions or socks they are worth. The next most important point is that while there is a level of trust implied, people will always game the system, therefore the tokens must be hard to reproduce, which is where scarcity comes in.
In some ways, bitcoins are closer to the origin of currency than the current money markets and banking system, where "money" is moved around and between databases on the pretence that it actually exists, despite the fact that calling in all of the money would reveal an enormous, impossible-to-fulfil deficit.
So this is where an attacker profiles a target website, going through it and recording the document requests plus the number and size of the responses returned - in other words, something like "going to page X triggers Y separate requests of a particular size". The number and size of the resources requested are likely to differ between pages, and the pattern of page progression will also indicate the page on the site, since a user will typically follow a pattern of page visits because that is how the site is designed.
Clever enough stuff, but it does require that the site is already profiled, probably extensively and a few times... and no doubt regularly, in case the site makes changes. This does limit this attack vector quite substantially.
The fix, of course, is either to make the page progression vary (pissing off users and making the website hard to use) or to vary the number and size of requests for each page under a site-wide randomisation plan. Alternatively, normalise them: if the website always produces, say, 25 requests for each page and they are a consistent size, then it'll be impossible to track the page progression.
You wouldn't believe how many Internet projects I come across that have exactly the same kind of blind arrogance as this. It's enough to give the entire industry a bad name... :)
Not really sure why you'd do this. I'd have guessed Windows expects multiple apps to write to different windows on the screen as they run. I could see the trouble starting when multiple apps on a remote server want in as well.
The pain with Windows GDI is one app with multiple concurrent threads of execution, where it makes sense for them to update the interface independently. In theory it shouldn't be a problem, because Windows deals pretty well with multiple applications, with varying processor affinities, updating user interfaces simultaneously; however, as soon as you try to put this all into one application, the deficiencies in the GDI start to come through. It's not unexpected of course, as Windows was designed as a single-user, single-processor shell rather than anything more sophisticated, and multi-processor and multi-user support were bolted on later as a virtual afterthought.
In case you're wondering why GDI is/was being used: many of the newer Windows APIs are sometimes little more than translation or management layers over the underlying GDI layer, so not only do you suffer from the hidden GDI problems but you also have another layer of abstraction and inefficiency on top to deal with. The aim was to fix this in WPF; however, WPF was practically unusable for a long time and brings its own problems to the game.
I know I'm not exactly an "average" developer, but I was working on multi-CPU x86 code in 2002 (on Athlon MP CPUs if it matters).
It's not hard, or at least I didn't find it so, when you are aware of concurrency issues and know how to code parallel tasks and in particular what can be easily run or is appropriate for concurrent processing.
The hardest part was dealing with the utter ball ache that was (and still is in some ways) concurrent access to the Windows GDI, let alone the complete train wreck often involved in running anything ActiveX related concurrently.
The Intel C extensions for parallel code also make it a doddle but, again, you need to know what you are doing. IMHO the historical ghastly native support in Visual Studio for concurrency was a big problem.
As for Intel vs ARM: yes, the x86 instruction set sucks balls compared to the ARM instruction set, and this requires a lot of (very) clever optimisations from Intel, but even with that aside, it's just depressing how, in much Windows application code, 95% of the time nothing productive is being done with the CPU cycles.
I can't be bothered to look this up, but I seem to remember that this was a highly successful campaign of negative marketing?
I'd much prefer an upstream speed that doesn't suck balls, but this doesn't grab the marketing headlines in the same way.
Same old shit, peddled by the gob-smackingly clueless to satisfy the spectacularly stupid (or the lazy, in the case of parenting).
Very similar in the level of cluelessness to the "Facebook, Twitter, and so on must ban nek-nominating" cries. Errr, yeah.
@ Adam Foxton
That sounds like an even better idea... however, isn't it theoretically possible to retrieve the contents of RAM even after power has been lost, or does that only apply to particular types of RAM?
Let's see it erase itself when it's unpowered...
How much electricity is required to wipe flash memory? Have a secondary battery just for that; it could even be built into the flash memory package itself, and with multiple control routes to send the "wipe it now" signal it would be very hard to prevent without multiple precision drills hitting at precisely the same time.
But you can replace iPhone batteries, and relatively easily. They are intentionally designed so the average consumer can't replace them, not for the buggers to be impossible to replace. Unless you get one where the assembler got a bit over-enthusiastic with the glue, of course...
Not all iPhone models make battery replacement equally easy, though.
Good. Neither Apple nor Google are directly responsible for the gross piss-taking that is going on with in-app purchases (Dungeon Keeper, anyone?), but they are in control of the ecosystems that deliver these apps and are therefore in a position to do something about it.
It's hardly a new product; these have been around for quite some time. What's new is the Bluetooth button, which is a nice refinement.
While in theory they are good, in practice they suck balls because it's even harder to a) not shake the camera and b) point the camera in the right direction.
What usually makes "selfies" suck balls is that very few people know how to compose a selfie or to pose for it. Still, they are fun and there is nothing wrong with that.
Interesting to hear the process described.
Reuse is (usually) the best form of recycling.
Pah! When I were a lad we wrote code in machine code, none of this assembly business... that's for nancies.
Seriously, I did. I can still remember some of the numeric codes as well.
It's interesting how a nation can turn its citizens into unwanted pariahs abroad.
For quite a while now there appear to have been considerably more Canadians travelling around than (US) Americans. Although their passports tell a different story.
I have a very similar take with my S3; I can't actually see a compelling reason to replace it - it's easily a good enough phone for me and for what I use it for, e.g. calls, texts, mobile internet access and the odd game and note-taking session - although I have a 2013 Nexus 7 for the heavier content editing and note-taking sessions.
The S4 is a good phone but I see no reason to upgrade, and the S5 looks like more of the same. If, or more likely when, Samsung gives up supplying updates, I'll drop a custom ROM on it. I'm tempted to do that now due to the ghastly mess they've made of WiFi with the most recent update, and losing the TouchWiz interface and Samsung shovel-ware apps really won't be a loss at all.
Ouch - I thought the sandboxing between applications on iOS was better than that. This sounds like it subverts the APIs that allow inter-app communication, although the way the article is written, it could be specific existing applications with elevated access that are the problem.
You are personally anonymous to the government and NHS cronies only through security through obscurity. However, the buyer of your personal information (inevitably somehow linked to the same government and NHS cronies, odd that) will now know all they need to know about you. They will then link this new information to all of the other personal information that you never gave permission to be shared.
And then people wonder why I always fill in random information on forms that don't need it, refuse to give my address to random shops for "catchment surveys", and other general flippancies. So instead my bank sold, or lost, my personal details for me...
Sure, I haven't seen it either. However, seeing what it is meant to look like might help jog the grey cells...
Just a random google image search gives me this: http://www.energyroyd.org.uk/wp-content/uploads/2014/01/betterinfobettercare1.jpeg
...and nope, it doesn't ring any bells at all even with a pic of it.
So how are we meant to pronounce "Huawei"? :)
But back to the plot - competition is always good when it's competent. Three years to develop the backend processes and final products for mobiles is good going.
I know there's a lot of hate about/against advertising (you'll find me there as well), but this tech, as presented now, is actually a good idea: it makes clever use of impersonal data to improve the targeting of adverts while using largely commodity hardware to perform the task.
In some ways it's easier to think of it as an automated attendant looking you up and down and suggesting either the "£1 for a bar of chocolate" offer or the "meal-and-a-paper" deal depending on what you look like. The serious problem comes when systems like these are enhanced to store your picture rather than just analyse and discard it, and then start linking this image profile to locations and purchases (e.g. payment methods). After all, you wouldn't want to walk into a different shop/petrol station with your wife and be offered condoms because you regularly buy them from another shop using the same system, would you? :)
They care about their own lies as much as "FACT" do.
The instances of somebody breaking into an office or other space and stealing the software are very, very low. The instances of copyright violation (*)... considerably higher.
(*) Or is it more accurate to call it "use of unlicensed software"?
I can see your point, but how about these:
You're on holiday to enjoy yourself. If your form of enjoying yourself includes being in (near constant) touch with friends and families then there's nothing wrong with that.
You're on holiday to get away from it all and isolate yourself. Turn the phone off. Until, of course, you find that you'd like to know exactly-ish (GPS) where you are on your maps that are rather more convenient on a mobile device than the paper variety. Then the maps need to be downloaded. Then you find that you'd like to locate the nearest good taverna rather than the flea ridden cess pit you "found" the previous day. You can, of course, do things the way these things were done 20 years ago but technology is available to assist so why not use it?
Now the trick is to manage it all and exercise self-control. Will finding out the sports results from back home improve your day that much? Are you able to ignore or turn off the work emails so you can deal with them when you return to work, not in your free time?
Your personal data has value, so be careful about using it as a currency to pay for "free" services. If the data leaks, you will pay forever, because you cannot change who you are.
The "personal data" that these providers tend to receive from me consists of an entirely separate and unused email address along with whatever other arbitrary and entirely fictional information I am forced to supply at the time. And I'll definitely never install their "helper" apps that appear to exist solely to fuck things up and liberate more information.
If only the Lync client didn't suck balls so spectacularly.
It takes a flipping age to get around to starting up; then, when you think it has finally started, it turns out that it hasn't. It's a minefield of impossible-to-fathom "icons" and functionality that all tends to do nothing useful.
Basically, it's almost the polar opposite of something that should be simple, efficient and easy to use. Installing it enterprise-wide is one thing; persuading users to use it when it sucks balls so badly is another.
...and that's just the client. The server side is even less fun.
The problem with .com is that it is usually incorrectly used instead of .us
No one ever does...
There's probably little point unless you can compromise literally tens of thousands of AMD GPU powered systems.
If the site was aimed squarely at children or as an online game then the overall presentation with the parallax (layered) scrolling and cartoon style graphics is actually very good. And they even avoided using Flash.
The navigation, however, sucks balls and is very much at the level of incompetence exhibited by Flash "web designers", i.e. people with no clue whatsoever about web design, optimisation or anything, who just make an all-inclusive Flash "site". Generally there's a desire to control everything and re-implement everything in a custom manner that makes no sense and is not optimal for any user or device. But it looks pretty when a screenshot is taken.
...and this is pitched both at businesses and home users???
A professional developer selects what streamlines his job, doesn't try to affirm an ideology.
Nearly. A professional developer, being a professional, usually has to use whatever software he has been given, and often a specific version that, for whatever dumb-arse fuckwit twisted reason, actually works with the legacy mess he's working on. Sometimes the software choice is also out of his hands for other reasons, such as having to develop an application that continues to work across as many versions of Windows as possible, in which case there's often an argument for using an old version of a development tool rather than a newer one, which will often silently include later-version prerequisites, thereby hobbling the deployment target.
Given a choice, most professional developers favour using the tool that they are most familiar and comfortable with rather than always selecting the optimal tool for the job. The optimal tool for the job may only be optimal for this developer after a lot of new training or relearning and for a quick (haha) job the familiar is usually selected instead. There is more flexibility for new projects however corporate libraries may not be compatible and there is always, for good reason, pressure to re-use existing code rather than create new copies of the same functionality.
After all, Windows development (WinRT aside) is much more open than Linux development, where everybody uses GCC - you have more choice of development tools on Windows than on Linux; if you don't like Visual Studio, you have alternatives.
I disagree with this. Firstly, there is a big difference between a compiler and an (Integrated) Development Environment (IDE). GCC is a compiler; it neither claims to be, nor is, a development environment. There are alternatives to GCC as well; however, given the structure of GCC, many additional components just enhance GCC rather than attempt to replace it wholesale. This, if the structure is good enough, is a very good way of operating, and this modularity is one of GCC's key strengths. There are quite a lot of development environments for Linux, although your level of satisfaction will depend greatly on the level of integration that you need or desire.

Unfortunately, these days there are very few remaining genuine development environment tools on Windows that are not cross-platform and therefore also available for Linux (and often OS X). The most "used" ("used" is not the same as "popular") development environment for Windows is, of course, Visual Studio. However, this tool is very inflexible: you will work the Microsoft way or not at all. You will use the Microsoft tool stack, or you will have to try to work around it with the alternatives, which wastes a lot of time. Visual Studio's overall operational inefficiency and user interface leave a lot to be desired as well (note to MS: don't ask your developers for feedback, get told it, and then ignore it because you are too arrogantly stupid and have a "vision"). However, it is familiar to a lot of developers and therefore gets used even when there are rather better alternatives available.
As for Linux GUIs... yes, they are often appalling. While the tools available in Windows are often better on the GUI front, I assure you that this often doesn't translate into a better GUI - just one with more visual components. There is a world of difference between good developers and good user interface designers.
Other than the desire to label everything and everybody, it's also an attempt to merge biological identification and orientation into one label.
Biologically, there are "male", "female", "both" and "neither". The occurrences of the latter two are very low in comparison, but they do exist. To complicate things further, there are questions such as "is a female who is born with no ovaries truly a biological female?". Best not to answer that one unless you want a very long discussion, but it's an example of how things are complicated.
Beyond the straight biology, for many thousands of years humans themselves have blurred the lines, starting with eunuchs and cross dressing and moving on to trans-sexuals in various applications and stages (surgical, hormonal, etc).
This is before the complication of sexual preference comes into play where the basics are accepted as male-female, male-male and female-female however these basics only take into account the two primary biological sexes. Expanding just the biological side and sticking to pairings there are ten distinct combinations. Add in those who are interested in more than just one partner type and it becomes quite a mess. Next include the non-naturally occurring "genders" and it's one hell of a matrix. Lastly, don't forget those who have no sexual desires at all and are happy that way therefore they shouldn't be identified as one of the others.
Even attempting to refactor the single label into a few becomes an exercise in pointlessness, as no single label will fit everybody without a combinatorial explosion of labels. So in essence it's a multiple-choice list and not a single selector; after all, your work colleague could be a heterosexual man during the week and a cross-dresser who is only interested in other cross-dressers at the weekend...
50+ doesn't seem too insane a number considering...
Unfortunately reality and sense have nothing to do with this as this is Religiopseudoscience that we're dealing with here and that has no truck with either common sense or reality.
true... and that was probably the only time that I have ever called it "candy crush saga" rather than just "candy crush".
I'm struggling to understand why funds had to be moved at all. Anybody?
Couldn't agree more, especially where I'm located, given the frequent curses I hear about UK first-year undergraduates being so utterly useless compared to the foreign intake students. So the lecturers have to teach down to the lowest common denominator and cover the basics, boring the hell out of the more competent students; and because the basics have to be taught so quickly, quite a few drop out as well.
Meanwhile, schools carefully teach our children how to pass exams and look like a worthwhile statistic, teach them, parrot fashion, how to use a particular company's products, and yet entirely fail on the basics, including the combined sense of exploration and learning that teaching is all about. Now we have huge numbers of mathematically and linguistically illiterate kids coming through school, and this was always inevitable: while the cuts and policy changes are short term, their impact is long term. It's not just that these kids have suffered from poor basics (maths and language); they have also repeatedly had all drive, exploration and creativity beaten out of them, as none of that helps to pass exams. These, of course, are the same exams that our "smarter" children are getting better at every year, despite the fact that, for example, the current A-level maths curriculum rather suspiciously closely matches the GCSE/O-level curriculum of a decade ago.
Some of the most important things I learnt when I was taught Computer Science (not word processing or PowerPoint-bothering) were an appreciation of the history of computing, how we got to where we are (or, more accurately, were then), and the basic sociopolitical issues around computing in general. This provided the building blocks for the basics of how computers operate (Input > Process > Output), boolean algebra and logic, how computers interact with humans (both input and output interfaces), and how information is stored and transmitted - and that was before we looked at a single line of code. Let alone code that isn't code... e.g. website markup or style sheets.