Re: More Concerned About Safety Gear
Considering how little clothing many builders wear in the summer, she's quite well covered up really.
errr... yeah... I had a train of thought at some point but seem to have misplaced it. For some reason.
And then I noticed that in the (photoshopped) "asus beach babe" image the girl couldn't possibly be using the device due to the angle of the screen. Do I have my priorities right? :)
If the professionals can fit sensors upside down and confuse metric with imperial measurements, I'm sure a missed blown fuse is quite forgivable :)
That's intriguing. It would seem to me that it implies that the FP64 processing is implemented using the multiple steps of the FP32 circuitry (splitting and then re-merging the values?) rather than native FP64 circuitry.
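A well-known software analogue of that "split and re-merge" idea is float-float (double-single) arithmetic, where a higher-precision value is carried as an unevaluated sum of two FP32 values. This is only an illustration of the general technique — it says nothing about what any particular GPU actually does in hardware, and the names here are invented:

```c
#include <assert.h>

/* A value stored as an unevaluated sum of two floats: hi + lo. */
typedef struct { float hi, lo; } ff_t;

/* Knuth's TwoSum: the exact sum of a and b, expressed as hi + lo,
 * using only FP32 operations. */
static ff_t two_sum(float a, float b) {
    float s   = a + b;
    float bb  = s - a;
    float err = (a - (s - bb)) + (b - bb);
    return (ff_t){ s, err };
}

/* Add two float-float values. Simplified: real libraries renormalise
 * the result rather more carefully than this. */
static ff_t ff_add(ff_t x, ff_t y) {
    ff_t  s  = two_sum(x.hi, y.hi);
    float lo = s.lo + (x.lo + y.lo);
    return two_sum(s.hi, lo);
}
```

The point of the trick: plain FP32 silently loses `1.0f + 1e-8f` (it rounds back to `1.0f`), whereas the two-float pair keeps the small term in the `lo` component.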
Ah yes, the "in-memory database" that's effectively crippled due to lack of support for many standard and commonly used SQL operators.
However I'm sure that if you had specific data requirements that need to run at an acceptable speed, you could redesign your database, separate out the data that you need fast access to, and then work around the dependencies. In general it may be useful for specific new cases, but useless for speeding up existing databases.
Oh hell, yes. I forgot the bullshit of acronyms everywhere... with DS, DD, OH, LP, DH and everything else that just makes it all as cliquey and incomprehensible as possible.
Some of the info on mumsnet is actually useful - children are different, and finding out what other parents' solutions, or attempts at solutions, are can be invaluable.
Unfortunately it's hard finding the useful information under the heap of junk posted by the batshit insane.
At full arm's length, those pixels are probably going to be far too small to make a difference.
Full arm's length? When was the last time you saw anybody hold a hand held device such as a mobile or tablet at full arm's length? Apart from the logistics of doing this in public, you'll soon realise just how heavy these devices are, and how heavy your arms are, when you try to hold something at full arm's length for any amount of time.
My desktop monitor is only just a full arm's length away from me... strangely this makes it less than ideal for a touchscreen interface.
It wouldn't be rotatable, and it would probably be pretty horrible to use when rotated 90 degrees.
It probably wouldn't have happened in the first place. There would have been actual code reviews, code analysis, testing etc etc.
Nice troll :) Commercial organisations are much lazier about their validation and testing and code reviews because a) it costs too much and b) nobody else will see the code therefore problems are hidden through obscurity.
This is a functional programming error; a memory bounds checker would not pick this up because there are no memory violations taking place. Unless an independent code reviewer thought about the case in enough detail and thoroughly dismantled the code, it would be missed. This is a small function in a rather large code base.
On the other hand, this function was evidently not tested to destruction by putting the full combination of extreme values into it.
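Testing to destruction here would mean feeding the length fields their extreme values. A hypothetical sketch (invented names, not the real library code) of the one check such testing would have forced into existence:

```c
#include <assert.h>
#include <stddef.h>

/* A claimed payload length must never exceed the number of bytes
 * actually received from the peer. */
int claimed_length_ok(size_t claimed_len, size_t actual_len) {
    return claimed_len <= actual_len;
}
```

Any test suite that included the extreme case (a maximal claimed length against a tiny real payload) would have tripped over the missing comparison immediately.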
Why do I always internally sigh whenever I hear anything coming from "Federation against Software Theft (FAST)"?
Firstly, there is very little Software Theft, and theft is generally a police matter. e.g. somebody has stolen your collection of install media and licences from your office. Intentionally misnaming an organisation to further an agenda is ethically wrong.
Next, while FAST like to report themselves as a "not-for-profit organisation", they are not a registered charity, instead they are registered as "PRI/LTD BY GUAR/NSC (Private, limited by guarantee, no share capital)", which while similar is rather more flexible - the only enforced limitation is that they have no shares and therefore shareholders. FAST have also registered a for-profit organisation. It may all be above board, but it just feels wrong.
Not that I'm against software being correctly licensed (I'm all for it, especially given my business), but misrepresenting things, seemingly only to support the large software organisations while ignoring the small, and making up ridiculous statistics on a whim, just doesn't sit well with me.
This bug is nothing to do with malloc - it's a basic overflow: the data returned is bigger than the allocated size, thus returning other parts of the process's memory/variables.
So even using calloc throughout would have made no difference here.
It's not a "basic overflow", there are no memory bounds being violated in this bug which is why the automated code checking systems, good as they are, didn't pick up this bug.
The bug is that the code allocates a block of memory of one size which, being uninitialised, contains whatever was in that memory space before - hence the problem - but then overwrites it with a different, smaller number of bytes. In this case a 64k chunk of memory is allocated, one byte of it is overwritten with the return data, and all 64k of it is returned.
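That pattern can be sketched in a few lines of C. The names and structure are invented for illustration — this is not the actual library code:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* The reply buffer is sized from the length the peer *claimed*, but only
 * the bytes actually received are copied in. The remainder of the buffer
 * is whatever stale data malloc() handed back, and all of it goes out on
 * the wire. The missing fix is the check that claimed_len <= actual_len. */
unsigned char *build_reply(const unsigned char *payload,
                           size_t actual_len, size_t claimed_len) {
    unsigned char *reply = malloc(claimed_len);  /* contents undefined */
    if (reply == NULL)
        return NULL;
    memcpy(reply, payload, actual_len);  /* may fill only part of the buffer */
    return reply;  /* caller transmits claimed_len bytes: stale heap leaks */
}
```

Note that no bounds are violated at any point: the allocation, the copy and the send are all individually "correct", which is exactly why bounds checkers stay quiet.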
Completely agree, this is a classic case of an analyst fundamentally failing to understand the products and their reasons for success.
The iPad is successful because of what it is - a quality media consumption device that allows some (limited) media creation. The MacBook devices are also quality devices, but targeted more towards media creation than consumption. There is some crossover between the two, but each has its specialisation and that is their key strength.
If you really want to use an iPad to create content a bit more easily you can always link a keyboard to it. There's a reason why most people with iPads don't do this.
It's in the equivalent US and UK agencies' remit to do exactly the same - to promote their nation's interests (both commercial and non-commercial). This covers industrial, commercial, political and military espionage and this has been the case long before the Internet became so prevalent.
As I understand it, it's the normal political "please don't do it" kind of espionage slap-down, where both parties (privately) know damn well that it is happening, that it will continue to happen, and that neither really wants to escalate it any further. This tends to result in token grudging actions but nothing fundamental, and in general everything will carry on as before. When it escalates further, sometimes diplomats are expelled as well, usually to be replaced by somebody just the same, but with Internet espionage even this is less relevant and is just a token protest measure.
Nations also have internal agencies that spy on their own citizens or, more accurately, anybody within their borders. This is for various reasons, such as tracking extremists and criminal activities (organised crime groups and lesser crime elements), counter-espionage (you need to know who could leak information, or who already is), as well as more benign political analysis, where the reaction of the populace can be measured and reported on.
Interesting, however it is a struggle to see how it compares with other wireless technologies for many applications.
On the other hand, if you have a few sensors in awkward places where you can't, or don't want to, run cables, but do have sunlight available, then you have a remote sensor that powers itself. It's line of sight, which may be a problem for some applications, but for many sensor systems it wouldn't be, and the uni-directional nature will make it a little more efficient on the power-over-distance front.
This exploit isn't about buffer overruns as such - that is where you throw too much data at a process and it overwrites executable code with whatever you threw at it. This exploit cannot be detected using memory bounds checking, because it is not violating any memory bound.
When an application allocates memory, this memory is in an "undefined" state. For a cold started system or a block of memory that has never been allocated yet, this memory is usually all zeroes, however there is no guarantee of even this. Hence "undefined".
This exploit allocates 64k of memory, which being "undefined" will generally contain whatever the last application or process wrote there. Due to deficiencies in the code, one byte of memory is copied to this and the whole 64k of memory is returned. It's pot luck what is in this 64k block of memory, but keep on requesting memory and you will eventually get something interesting back.
There are various preventatives for this, such as zeroing the memory on allocation, but for a low-level library this is inefficient and, as the block of memory should have been overwritten entirely, a pointless exercise in wasting processor time. Another is to zero the memory on de-allocation; again, for many low-level processes this is also inefficient, as a relatively simple process could then take 20x longer to complete - multiply a low-level task by the number of calls to it and the overall system impact could be disastrous. On the other hand, a code process that stores passwords and private keys should damn well clear the memory after use, but again this is an efficiency argument compared to what can be done on an otherwise "trusted" system.
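Clearing secrets before the memory goes back to the heap has its own trap: a plain memset immediately before free() can be removed by the optimiser as a "dead store". A common workaround is to route the call through a volatile function pointer; platform helpers such as explicit_bzero() (glibc/BSD) or SecureZeroMemory() (Windows) do the same job where available. A minimal sketch, with invented function names:

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Volatile function pointer: the compiler cannot prove the call has no
 * side effects, so the zeroing write cannot be eliminated. */
static void *(*const volatile memset_v)(void *, int, size_t) = memset;

/* Forcibly zero a buffer holding sensitive data. */
void secure_wipe(void *p, size_t len) {
    if (p != NULL && len > 0)
        memset_v(p, 0, len);
}

/* Wipe, then release - so passwords and keys never linger on the heap
 * for the next malloc() caller to find. */
void secure_free(void *p, size_t len) {
    secure_wipe(p, len);
    free(p);
}
```

The cost argument from above still applies: this is worth paying for key material, and rarely worth paying for every buffer in a hot low-level library.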
That is the problem. There are some very clever code analysis systems that can help to spot these kind of mistakes, but they can't spot everything.
System libraries usually need to be implemented in the most efficient possible way. That efficiency is achieved by working as close as possible to the "bare metal" - and C gets you there.
BOLD TALK ... FROM THE EIGHTIES! Well, already in 1984: The Lilith
Writing in C means you have to be much more careful
THIS ZIMMER FRAME REALLY GETS ME THERE FASTER, I JUST HAVE TO BE CAREFUL WHEN GOING DOWNSTAIRS. SURE I BROKE MY NECK A FEW TIMES, BUT IT'S NOT GONNA HAPPEN AGAIN.
This kind of attitude to coding is exactly why many current applications, and indeed operating systems, are so staggeringly inefficient and slow compared to the equivalents of even a few years ago, despite the hardware being orders of magnitude faster.
The lower level the API, the less appropriate it is to implement it using "managed" code. If you had an understanding of just how many more processor resources (memory and CPU cycles) are consumed by managed code than unmanaged code, then you would understand. Some things are appropriately implemented one way, some another. No one programming technique is appropriate for all cases, and attempting to use one across all, or to use the wrong technique, is utterly stupid.
I don't get it either, all the open source morons have been saying for years their OSS crap is more secure, then we get things like this. Oh and the 23 year old x windows vuln exposed a few months ago.
Hint: down arrow is below, morons lol :)
Mistakes are made equally in Open Source Software and Closed Source Software. The point with OSS is that it can be made more secure. This kind of fault in closed source may never get spotted or reported and then you'll be in an even worse situation where you don't know about the fault or how long it's been there.
In this case it would appear that those responsible forgot that they were dealing with a work-force based in the UK and treated them as if they were in the US.
A common mistake made by many Americans; they seem unable to realise that laws differ and that the laws of the USA are not universal.
The gulf in differences is quite staggering... effectively, in the US an employee has no rights whatsoever compared to the UK. AFAIK many of these rights come from contract law, where both parties have to agree to contractual changes, rather than a company just making changes as it sees fit.
The 5+5 is a home deal and the licensing is explicit in that the software may only be used for personal purposes.
In many ways the rental of the cloud software is another backwards step, because where previously multiple users could use the same system with the same licensed software on it, now the users themselves are licensed. In many organisations this won't be practically different from before, but in some it will be.
You've neatly summarised just why it isn't fine... because it's not usable as is and you have to alter its initial behaviour to make it useful.
The underlying Windows 8 OS part is good, the awful "not-metro" interface kludged on top of it is not - it's acceptable for a hand held touch screen device, nothing else. Unfortunately users are generally forced into the "not-metro" interface far too often even with the "Boot to desktop" option found and checked.
Please stop posting sense. Otherwise Microsoft may have to come out with all kinds of marketing-speak technobabble as to why that just isn't possible and never will be even though they own both the OS and the application layer on top of it.
Is it me or does the sale price of some of these items feel a bit low considering their significance in the achievements of mankind? Incidental stuff used by film stars often sells for considerably more.
Actually, I think I'd be perfect because I absolutely don't want to do it (but somebody has to).
Where's that quote from? Something about the best politicians being the ones that don't want to be?
For some reason I suddenly have a desire to watch Monty Python again... :)
The particularly upsetting thing about it all is that if any of us (non-MPs) did something like this in business, we'd be instantly fired (no bonus, golden handshake or anything) and then face a civil case for recovery.
Whereas she can lie, cheat and steal and then attempt to cover it up, probably using more tax payer money, and then gets off with a limp apology and doubtless a cushy job somewhere else.
Thorough enough review, but one has to wonder about the sense of Dell in waving such a device around if it can be savaged so thoroughly. Unless of course that's the aim, and it is an alpha or beta test going on rather than a review of a released product.
No it isn't, 90% of it is about marketing. A touchscreen in a car is not progress by any definition, for example. And networking all the systems together to provide functionality that's not required isn't progress either.
I agree that a touchscreen in a car isn't exactly progress. The "user interface" of a car works through not putting too much burden on the user (the driver). Physical knobs and buttons are good as they can be operated without the driver having to focus on a non-tactile touchscreen to check that the function they hope is there is in fact being displayed and that their finger is in the right place.
Why is the functionality not required? You're making a broad statement based on your preferences. Better control of the car and its performance helps fuel economy and safety. A suitably experienced driver familiar with their car may arguably be safer than a less experienced driver; however, driving isn't just about these "super drivers", it's about all the more normal drivers. A smartly controlled system will most likely save fuel compared to even the most experienced "fuel saving" drivers.
No offence mate, but you really need to go on an engineering course if you think ANY of the things you've listed require a networked system in the first place, never mind one running IP over ethernet.
Yes, CAN bus already exists and it's already overkill. As for "air conditioning, windows and mirrors to control, seat positioning, lighting" needing networking - sorry, were you trying to be funny or have you really drunk so much of the kool aid that you just can't see a simple way of doing these utterly simple tasks?
While at a fundamental level it's true that nothing I listed requires a networked system in the first place, the same could be said of your phone, your computer and your printer. After all, you could just retype all of your contacts into your phone again, or use a hand-held phone book and a pen. You could just write your reports rather than typing them on your computer and printing them out. However, it's about progress... and progress on the device engineering front is steadily heading towards more and smarter control of devices. This allows much more efficient and accurate operation and much better diagnostics... and this requires a lot more sensors and, as a result, a lot more and better communication.

In a car, a CAN-connected ABS system can report traction problems to a central system. It can report back for each individual wheel if necessary, and this can be fed back into all manner of systems, cross-referenced with other sensors and devices (e.g. temperature sensors), and the operating parameters adjusted appropriately (ABS in wet, dry and cold, potentially icy, conditions really does need different operation profiles). This is just one small example of ABS and systems where command and response is vital.
Why wouldn't lighting, air conditioning, mirrors and seat positioning need networking? If you've ever driven a vehicle with multiple driver profiles, it's an enormous benefit having your own driving preferences compared to a partner's and being able to switch between them quickly and safely.
I'm all for simple, however simple doesn't always equate to efficient, optimal or useful.
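The per-wheel reporting described above is, at heart, just byte packing. A hypothetical sketch — the layout and names are invented for illustration and are not any manufacturer's actual scheme — of squeezing four wheel-speed readings into the 8-byte payload of a classic CAN frame:

```c
#include <stdint.h>

/* Pack four per-wheel speed readings (in 0.01 km/h units) big-endian
 * into the 8-byte payload of a classic CAN frame, so a central ECU can
 * cross-reference them with other sensor data. */
void pack_wheel_speeds(uint8_t payload[8], const uint16_t speeds[4]) {
    for (int i = 0; i < 4; i++) {
        payload[2 * i]     = (uint8_t)(speeds[i] >> 8);   /* high byte */
        payload[2 * i + 1] = (uint8_t)(speeds[i] & 0xFF); /* low byte  */
    }
}

/* Reverse the packing at the receiving end. */
void unpack_wheel_speeds(const uint8_t payload[8], uint16_t speeds[4]) {
    for (int i = 0; i < 4; i++)
        speeds[i] = (uint16_t)((payload[2 * i] << 8) | payload[2 * i + 1]);
}
```

A receiving ECU comparing the four decoded values against each other is exactly the "traction problem on one wheel" detection mentioned above - simple data, but only useful because the bus lets every interested system see it.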
Electric cars are even simpler than internal combustion - some electronics to charge the battery and run the motor. Done. Dump all the other crap and save weight and space. I can't really see why it needs an internal network running over ethernet other than it being some geek's wet dream.
A hell of an over-simplification there. In a conventional car there are many systems that communicate and are managed through the ECU - both monitoring and control systems, or just a convenient way to integrate everything (often the monitoring is separate from the control systems). In the majority of vehicles these operate over a variant of the CAN bus, as it's a simple bus and very resilient to the hostile environment of a motor vehicle. However, an electric vehicle will be a considerably less hostile environment than a combustion engine system, so there is scope for different systems. For example, there is also a variant of the CAN protocol that can run over IP, although this scheme is generally used more in industrial environments than in motor vehicles.
So why shouldn't there be an internal network running ethernet / IP? It's a good opportunity to take advantage of standard interfaces between components, which is always a good thing compared to proprietary connectors and interfaces. A modern electric vehicle consists of a lot more than just a charger, battery and motor - there's all the battery management, battery level management and notification (e.g. "you have 12 miles remaining - charge soon"), regenerative braking, ABS, tyre and other pressure monitors, audio system, navigation system, suspension management, air conditioning, windows and mirrors to control, seat positioning, lighting, dashboard notifications and so on.
If you've done PPTs in the corporate world, you'd understand how nice using a touch interface on a screen that big is.
You mean stock whiteboard / projector systems? They're not often physical touch, even less often multi-touch with the enhanced control that can bring. This is aside from decent-sized "displays" where touching them isn't feasible unless you are 8' tall with arms to match. It may be marginally incorrect, but a sub-5' saleswoman repeatedly jumping to reach parts of a whiteboard system is something that is hard not to find amusing, and harder to wipe from your mind...
Mirror the display onto a tablet and touch that, without having to lean across a larger screen and cover it. This also means that the presenter can remain facing the audience. iPads already have this functionality. Not that interactive whiteboards don't have a use, but on many corporate occasions this would be better.
I'd highly doubt that an iTV (hmm, possibly name issue there) Apple TV would be touch based. It's just not a remotely sensible way to interact with a large screen unless you are either (a) a small child who watches TV from 2" away or (b) Microsoft and insist on using a touch interface where it's not useful.
Instead I'd expect to be able to use an iPad, or possibly an iPhone, to control the device. Possibly to play content back on one or more of these devices as well. Even just supporting audio would be a dream in many houses, where viewers could listen to the TV at their own volume without disturbing anybody else.
Apple would doubtless be tempted to put in proprietary speaker links, to fair quality, but somewhat overpriced speakers.
A more visionary Apple would turn the device into an entertainment hub, pretty much iTunes on a TV with wireless link to local devices. Such an iTunes in essence ought to require little more than a reasonable processor, local (cache) storage and a display and this kind of spec is getting there with many "smart" TVs and set top boxes.
In all this time of the BYOD pushers releasing press notices and other such "advertising" as they can get away with... I've still yet to fully understand just who BYOD will actually benefit other than the pushers of BYOD management systems.
The majority of staff use a computer as a tool to do a job. If it works, that's its requirements taken care of. Power users, of various types, have always required more specialised systems, and a good corporate IT department will cater for these as well; in practice, in a given organisation there won't be more than a few distinct power user requirements, although there may always be the odd specific case.
"Bring your own mobile device"... now that does have value as an employee would then not have to carry multiple devices around. There is also the cynical point of view that an employee is more likely to take care of their own mobile device than a company one.
I thought at the time it was a clever move, as it meant MS could still punt a .NET version of Office to other OS users - Mac and Linux being key.
Why did it never pan out that way?
Precisely my thoughts. Most people see the word "database" and make the assumption that this implies a networked or online data repository on a computer system. A set of documents in a filing cabinet is a database, and this is made quite clear in the (EU) Data Protection Act.
While I agree that there is now more obvious gender targeting of lego, to a large extent this was always the case. The "city" lego was usually shown in pictures with girls and boys. The "space" lego was usually shown in pictures with boys. The rather older "house" lego (where you built rooms and had articulated characters) was usually shown with girls.
However, dump all the pieces in a box and they become building blocks from which a much wider variety of things can be built - but that's the enduring beauty of lego, what you can do with it. The more specific the piece, the fewer options for re-use there usually are, but even this encourages creativity - want a satellite dish for the side of a house but don't have one? Use a waiter character's "tray" instead... want a downlighter for a light but don't have one? Use a satellite dish... and so on.
The interesting thing about the blue = boy, pink = girl colour gender assignment is that this is a relatively recent assignment; it used to be the other way around. It's also interesting to note that it wasn't that long ago that, up until a reasonable age, boys and girls were dressed near identically.
Exactly my thoughts on this. Statistics, lies and damn statistics.
I remember wishing that they had expressions other than the "gormless grin", or that there was a difference between male and female faces. The enterprising among us got hold of marker pens and drew faces (aka mauled with a marker) on the reverse of the head to give us some variety.
I still really wish that Microsoft had gone the OpenGL route rather than the (frequent) abomination that is DirectX. It's not that DirectX is inherently bad (it's steadily improved a lot since the earlier versions); it's just that, working with it compared to OpenGL, there is a lot of boilerplate, inefficiency and lock-in, and more than a few cases where a little more transparency would be nice, as it would help in figuring out what is actually going on, or just going wrong :). OpenGL has its faults as well, and comparing OpenGL (graphics) to DirectX (graphics, video, audio, input and more) isn't exactly a fair or straight comparison, but a more standard approach would have benefitted everybody, including Microsoft, and the implementations of OpenGL would have improved as well. Instead we generally have to use a further level of abstraction to try to develop in a more cross-platform manner, and this introduces a whole host of new problems.
Embedding good support for OpenGL within the Windows UI would be a dream for many standard (i.e. not game) application developers, compared to the pain of all the workarounds needed to produce good-quality, efficient, embedded imagery otherwise.
This would still leave Windows as a platform competing against others, but it could then compete more fairly, and if Microsoft worked hard to produce the best experience and the best (non-lock-in) services to support it all, they'd be onto a really good thing. Instead, games and gamers are steadily moving to other platforms.
You are very correct about building things more appropriately in the first place. However, the commercial computer industry is very young; it has changed massively in its time due to huge advances in technology, and along the way common sense has often given way to convenience or greed. In this case I mean greed through trying to get a product out as quickly as possible, ignoring the future or best practices. This applies equally to the designers of industrial machinery utilising the advantages that computers could give them.
This is where defined standards are critical to everything. We wouldn't have the Internet we have today without defined standards which are, relatively, vendor neutral. Individual vendors will always want to push their take on something which shouldn't really be seen as a wholly bad thing, as long as the end result is sensible. The more open these standards are the better as it allows the implementation of a solution by multiple, competing vendors and interoperability between systems. Again, we wouldn't have the World Wide Web without this - instead we'd be mired in the locked in blight that was AOL, Compuserve and similar.
Standards bring benefits at many levels; for example, Virgin Media uses cable modems that adhere to the DOCSIS standard. This allows VM to select the "best" or "most appropriate" solution for them, which need not come from a single supplier or manufacturer. The residential power plugs we take for granted all use a defined standard, with defined tolerances and performance - consider the nightmare this would be without that basic standard: an extended form of the travel-plug nightmare. For reference, in the early days of computers and PCs, many used proprietary connectors for the other end of the power cable rather than the IEC form that is now uniform internationally.
Ideally the designers of industrial machinery mentioned here should have used defined communication standards, and definitely not closed, proprietary protocols such as NetBEUI / NetBIOS and similar. Unfortunately these short-sighted decisions are often made in the pursuit of new technology and fast (i.e. cheap) development time. At the time these devices were designed, more open protocols such as CAN (CANopen), CAN/TCP or the many other protocols may not have been available, or the devices that were available just did not have the right functionality.
Ummmm... thanks for that, but it's annoyingly incomplete: "At least some of the sheep are OK". How do we identify which sheep are OK and which aren't? This could be very, very important for survival at some point.
untidy networked strands
Checks under desk... uh-oh... checks cabinet... oh dear.
It appears we may have a serious infestation here. Haven't spotted the spiders themselves yet though...
I agree, Lotus Notes had an appalling user interface even when new, and it never improved. It did, however, have a lot of very useful features that many users missed in the obligatory move to Microsoft Outlook and Exchange. Microsoft haven't done much with Outlook except re-skin the main interface every few years (the same old back-end dialogs are still in place, even in the latest "metro" version, and the same old bugs and useless HTML rendering are there as well), make it slower and even more resource-hungry, and bloat it with lock-in features that most users never notice or use.
On the other hand, has email functionality reached the limit of what is sensible? At which point refinements in email client user interfaces are just that.
You haven't been thinking about the commercial aspect of it fully: this pound shop could sell these mini-towers. A complete win all round for capitalism.
I know, this one is so outrageous that it's obvious... it's when there is a degree of plausibility to it that it becomes more difficult.
Also, don't forget the article a few weeks ago that mathematics is sexy.
...and that it's the 1st of April :)
Sounds like this is the kind of developer who has absolutely no clue whatsoever how anything actually works by way of memory, code or anything else much... However he did fess up to it and (despite the headline here) doesn't seem to be attacking AWS. You don't always have to learn from your own mistakes.
In some ways in a modern environment it could be argued that a developer shouldn't need to know everything that's going on behind the scenes, however good developers should be aware of what's going on.
Searching a delivered package is a world away from decompiling an app. In any case, just how does this developer think the likes of Google and Amazon check that apps are not doing anything untoward? Or in this case, just plain dumb.
That'll be the journalisming monkeys then... I know, I know, with a poor pun and obscure reference like that I'll get me coat...
*Insert item* causes cancer and reduces house prices.
*Insert item* causes cancer, reduces house prices and creates an influx of criminal child-molesting immigrants.