2197 posts • joined 21 Dec 2007
Well, not quite, perhaps if I pronounced "dog" like "cat" then confusion would reign supreme.
That has no bearing on how the word "dog" is "meant" to be pronounced, because there is no attribute of the word which endorses a pronunciation. The conventions observed by a speech community which make language mutually intelligible are just that - conventions.
Re: Nadella pledged to make ... Skype essential parts of daily life.
Wow, downvoted for asking a perfectly reasonable question.
This is the Reg, and the comments section for a story about Microsoft at that. Reason will not be tolerated.
I admit that I can't offhand think of a Skype competitor that's free, integrates with POTS, is easy for non-technical users to install and use, and is any less likely to be spied upon by the intelligence services. I don't have any special love for Skype (and for work purposes I use or have used various commercial alternatives, some of which work pretty well), but where my threat model permits, it does the job. And frankly my threat model generally permits.
Re: Installed office 2013 today
But think of all the bold new industries that will be enabled now that spreadsheets have a little fruit machine animation of spinning numbers when you change a cell.
Such as using recipes - the latest killer app that Nadella has discovered. Perhaps next year Microsoft will invent the checkbook-balancing application. Or something to keep track of collected memorabilia.
how the word is meant to be pronounced
Fucking prescriptivists. Any word is "meant" to be pronounced however the speaker pronounces it. One speaker may well deplore another's pronunciation, but there is no magical essence that renders one pronunciation correct.
Re: conflicting objectives
if you are running on an "OS" (and I use the term loosely) that doesn't zero pages before handing them to another process, then
... you're running on an OS which is not Orange Book C2 compliant - this is the "Object Reuse" requirement. I don't know of a contemporary virtual-memory OS which isn't at least C2.
Re: Code is truly awful, but sadly not unusual
It's very tempting to do that when you're a developer but look at Netscape / Mozilla. They lost massive ground to IE because they decided to re-write, rather than re-factor.
Eh? The Mozilla code base was derived directly from Mosaic. It and the IE (formerly Spyglass) code base share a common ancestor in Mosaic, but in no sense was Netscape a "re-write" of anything in IE.
That aside, there are other SSL/TLS implementations that are independent of OpenSSL, of course. Several of them - GnuTLS and Apple's implementation are the most notorious - have had some of their own shortcomings exposed in recent months. Neither independent projects nor attempts to clean up the OpenSSL code look like the royal road to a better implementation.
If I had the mandate and adequate resources, I'd start by rewriting all of the OpenSSL code for readability. Then pull out the architectural mistakes (like the rightly-maligned suballocator) and refactor where reasonable to reduce redundancy. As far as I know, no one's taking that route; certainly the LibreSSL project hasn't.
Re: Good work
The OpenSSL team was well aware of the problems with their code base long before this. The issue was one of resources, and by far most of their income came from chasing particular revenue streams (like FIPS certification). It was economically infeasible for them to do the kind of work the LibreSSL team has done, and no one else stepped up to do it, because using OpenSSL as-is was cheaper.
Now, post-Heartbleed, the OpenSSL team is much larger and better funded, and they are in a position to address technical debt.
Frankly, I don't see the existence of LibreSSL as much of an additional incentive. OpenSSL continues to have the name recognition. It has the installed base. It has FIPS certification, which is critical for sales to the US government - a not-insignificant market for many of the ISVs using OpenSSL. If I were on the OpenSSL team (rather than just one of the many people who use it, monitor the mailing lists, and so on), I wouldn't give a rat's ass about LibreSSL, except to check their commits periodically to see if there were fixes that applied to the OpenSSL trunk as well.
"Some are legacy platforms and the developers currently have no access to them,"
It looks like the OpenSSL developers have never used a virtual machine?
Which virtual supervisors or hypervisors support OS/2 Warp?
I know one that supports z/OS, but it ain't cheap.
And considering that, prior to Heartbleed, the OpenSSL development team was a couple of people, they haven't really had the resources to test every release across every supported platform.
Perhaps a bit of education before casting aspersions would be useful.
Re: Two thumbs up to Theo DeRaadt ...
having cleaned up the incomprehensible hairball of code that is OpenSSL
Evidence that LibreSSL is "clean"1 would be appreciated.
While I too have much respect and some affection for de Raadt - a curmudgeon's curmudgeon if ever there were one - this looks to me like an extremely premature evaluation. Eliminating some of the architectural infelicities and archaic-platform kluges is useful2, but - and I've said this before too - the underlying problem is the maintenance-hostile coding style, and I don't see any sign that the LibreSSL team is interested in fixing that.
1Personally, as I've noted before, I'm of the opinion that any code using KNF can't really be considered "clean". But that's just a quibble.
2Though my bet is they'll take the platform cleanup too far. There are people using OpenSSL on many platforms that the LibreSSL team are likely to scorn. I suspect they don't care, and neither will their fan base, but it's a bit disingenuous to pretend that no one needs any of the stuff they're ripping out.
Re: Two thumbs up to Theo DeRaadt ...
If LibreSSL can pass the existing tests then it is as secure as OpenSSL.
Utter rot. There is no suite of "existing tests" that fully vets the "security" of OpenSSL, and no such suite of tests is possible. And the phrase "as secure as" is essentially meaningless.
Re: Anyone looked at it with Fiddler?
Missing header elements like:
X-Frame-Options, Content Security Policy (Src, Script Src, Obj Src), anti-mime sniffing and XSS Protection directives.
This sort of thing is the real problem. I'm not inclined to put much faith in any provider of "secure" web-based software, if its developers can't be bothered to go through the OWASP Top 10 list and implement their recommendations.
It's not like this is top-secret knowledge. You don't have to be a television-drama super-hacker to plug these sorts of holes. Organizations like OWASP make this information available for free, with clear, cogent, simple explanations that should be accessible to any experienced practitioner. Fixing these things might take a week or two for a competent team.
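For the record, the fixes in question amount to a few lines of configuration. A minimal sketch in Python (the header names and values below are the standard, widely-documented ones; tune the Content-Security-Policy to your own site rather than copying this verbatim):

```python
# The response headers the post above mentions, with typical hardened values.
SECURITY_HEADERS = {
    "X-Frame-Options": "DENY",                     # forbid framing (clickjacking)
    "Content-Security-Policy": "default-src 'self'; script-src 'self'; object-src 'none'",
    "X-Content-Type-Options": "nosniff",           # disable MIME sniffing
    "X-XSS-Protection": "1; mode=block",           # legacy XSS-filter directive
}

def add_security_headers(response_headers: dict) -> dict:
    """Merge the security headers into an existing response-header dict."""
    merged = dict(response_headers)
    merged.update(SECURITY_HEADERS)
    return merged
```

Any web framework worth using lets you attach something like this to every response in one place.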
Re: Browsers cannot be secure...
[in browsers] the encrypted channel is based on public certificates
HTTPS normally uses X.509 certificates and a PKI that relies on preinstalled signing certificates to provide authentication, yes. That's a far cry from "the encrypted channel is based on public certificates" (a meaningless claim if interpreted strictly), and even farther from "that's how browsers do encryption".
In the case of a browser-hosted MUA written in ECMAScript, where encryption happens in the MUA, the subversion of the HTTPS authentication mechanism is utterly irrelevant to the confidentiality of the email message, except on one branch of the attack tree: when the MUA itself is retrieved from a compromised server (and thus replaced with a compromised MUA).
However, there's no reason, in principle, why the endpoints couldn't load the MUA from an uncompromised local source.1
For that matter, technical users could eliminate the normal HTTP PKI attack vectors (compromised CAs, typically) by constructing their own PKI in parallel with the public one. Plenty of organizations do that already (installing their own root certificates in the browser stores, etc). It's a usability nightmare, but so is the existing HTTPS PKI. Again, the point is that problems with the HTTPS PKI do not necessarily compromise encrypting MUAs running in the browser.
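As a sketch of the parallel-PKI idea, Python's `ssl` module lets a client perform full verification while trusting only a privately-operated root instead of the public CA bundle (the certificate filename below is hypothetical):

```python
import ssl

# Client-side TLS context with full verification enabled. Once the commented
# line is enabled, it trusts ONLY our private root CA, not the system's
# public CA bundle - so compromised public CAs are out of the picture.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED
# ctx.load_verify_locations(cafile="corp-root.pem")  # hypothetical private root
```

Of course, distributing and rotating that private root across all the endpoints is exactly the usability nightmare described above.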
What we finally need to do is to get GPG to be more usable and shipped by default with e-mail software.
Already done (see my post above). Didn't help. Most people don't want encrypted email if they're required to do anything to get it.
1Obviously that requires a prior secure channel, but the main point remains: it doesn't depend on the normal HTTPS PKI.
Re: Why 128 bit AES not 256 bit?
How about a non-American government generated encryption method instead?
You're free to build your own MUA using Camellia, if that makes you happy. Or you could adopt a reasonable threat model.
The question is, why has this not happened w/r/t PGP/GPG?
It has, any number of times. I have a GPG plugin for Thunderbird, and at least one GUI GPG wrapper (which I installed to see how usable it would be for non-technical users). There seem to be many others.
A better question would be why haven't they become popular? Because:
- Most email users want to minimize cognitive load and opportunity cost, which means using the MUA that minimizes what they have to learn and how much work they have to do. Webmail beats separate MUA applications by that metric.
- By the same token, most email users don't want to have to learn even the high-level security concepts associated with PGP/GPG (or PEM or S/MIME or any of the other schemes that have been floated). They don't want to learn about asymmetric encryption and public and private keys and digital signatures. They really don't want to learn about the Web of Trust or other PKI architectures, all of which are usability disasters. Many aren't really clear on what "encryption" is in the first place, or how the newspaper Cryptogram differs from standard algorithms or from the mythical "military-grade encryption" they hear about on NCIS.
- Most email users are operating under a threat model where the benefits of email cryptography (privacy, integrity, some degree of authentication and non-repudiation) are very small. Hell, I wouldn't care if the vast majority of my email were published, and I'm an IT security professional.
Widespread adoption of email with digitally-signed envelopes would offer some benefit to most email users, as it would eliminate some spam and phishing channels - but the key there is widespread, and thus the challenge is convincing users to adopt it now so that it eventually scales up enough to help. And even then the implementation has to be very good.
Re: Is it too late for everybody to get behind that?
So you can perform encryption and decryption in the MUAs, and the MTAs never see plaintext or keys. Why is that hard to understand?
Re: Anyone using any web based password manager is just an idiot.
Anyone - and that includes most of the people contributing to this thread - who makes blanket statements about what is and is not a "safe" or "secure" practice without specifying a threat model is a sophomore whose opinion on the matter is worthless.
Re: Java? On iOS?
I guess we're lucky Pauli didn't tell us the problem was with password safes for "the Google".
This bit about asynchronous training providing noise, thus enhancing recognition.
That's quite straightforward. Optimization algorithms like Expectation Maximization can converge on local minima (or maxima, depending on what the evaluation function looks like), just as any other local-gradient-following algorithm would. (Think of Newton's method, for example.) Adding noise perturbs the system, making it more likely to jump out of a local minimum if the barrier (the local maximum) that forms one side of the trough isn't too much higher than the minimum value itself.
Of course the same thing can happen with the global minimum if there's a nearby local minimum and the barrier separating them isn't much higher than the local minimum, so the effectiveness of this tweak depends on the characteristics of the curve, as well as how much noise you're adding, etc.
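A toy illustration of the idea (my own sketch - plain gradient descent on a one-dimensional double-well function, not anything from the paper in question): the deterministic version started in the shallow basin stays trapped there, while an added noise term gives it a chance to hop the barrier into the deeper well.

```python
import random

def f(x):
    # Double-well: global minimum near x = -1.03, local minimum near x = +0.96.
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

def descend(x, steps=2000, lr=0.01, noise=0.0, rng=None):
    """Gradient descent with optional additive Gaussian noise per step."""
    rng = rng or random.Random(0)
    for _ in range(steps):
        x -= lr * grad(x) + noise * rng.gauss(0.0, 1.0)
    return x
```

With `noise=0.0` the trajectory is purely downhill and the starting basin decides the outcome; with a nonzero `noise` the perturbations occasionally kick the point over the barrier near x = 0.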
It also means your classification accuracy is dependent on today's hardware architecture, a poor idea.
In this case, the noise is due to out-of-order inputs caused by the asynchrony of the partitioning mechanism - a happy accident. The effect really doesn't depend on the source of that asynchrony, just its degree.
The bit about jumping out of local minima - well, there's been lots of approaches to this over the last 25 years
Sure. Many of those approaches are highly deterministic, though, which means with a large, deep hierarchy of neural networks, you'll tend to train large portions of the net to respond identically to a given input. The noise created (accidentally) by the asynchrony is presumably rather more stochastic, so it might produce less self-similarity across the net. I freely admit that's just a guess - obviously we don't have the paper to read yet.
if this was working as suggested it would mean that the neural network was matching the training data more specifically, reducing performance on unseen test data
Not necessarily; better optimization of the weights for training data doesn't imply overtraining, particularly if the training data would have arrived at a set of local minima far from the global minima without the added noise.
Re: Exotic Physics
To me, anything becoming tachyonic would seem to be exotic.
Here, apparently, "tachyonic" just means "having imaginary mass"1. And that really just means that its energy is not in a stable equilibrium (it's at a local maximum rather than a local minimum), thus spontaneous symmetry breaking2 occurs, and it moves to a lower energy state. This makes it no longer tachyonic, a process called "condensation" (because the lower energy becomes a condensate of new particles).
So it's not "exotic" in the sense of "not happening very often"; or in the sense of "gosh that's pretty weird" (relative to other stuff that happens at the quantum level); or in the sense of "the mathematics are bizarre" (because as it turns out they're not); or in the sense used in the article, which is "different from the Standard Model". It's perfectly in keeping with the SM, as I understand it.
Of course, you can still find it "exotic" in a subjective sense.
1A non-tachyonic field usually has complex mass, where the real part corresponds to rest mass (I think) and the imaginary part to decay rate. At least that's my understanding from browsing some of this stuff. I may be wildly incorrect. But the point is "imaginary mass" here just means "the real part is zero". This also means the field doesn't have any particle-like nature, apparently, until it condenses and becomes non-tachyonic.
2The standard analogy here is a ball balanced on top of a perfectly symmetrical hill. Any perturbation will cause the ball to roll down the hill in some direction, which breaks the symmetry.
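In symbols (my summary of the standard textbook toy model, not anything from the article): a scalar field with a negative mass-squared term has a potential whose extremum at zero is a maximum, so the field rolls down to the true minima - that rolling is the condensation.

```latex
V(\phi) = \mu^2 \phi^2 + \lambda \phi^4, \qquad \mu^2 < 0,\; \lambda > 0
```

Setting the derivative to zero shows where the field ends up:

```latex
\frac{dV}{d\phi} = 2\mu^2\phi + 4\lambda\phi^3 = 0
\quad\Longrightarrow\quad
\phi_{\pm} = \pm\sqrt{\frac{-\mu^2}{2\lambda}}
```

Picking one of the two minima is exactly the ball rolling off the hill in some particular direction.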
Re: Why is it so hard to see?
Huh. I thought that blog post was pretty easy going, particularly compared to reading any of the actual papers published on the subject, or, say, Lacan or Heidegger. But I think reading this stuff requires a rather specific kind of focus - it's not a matter of "are you smart enough" or anything else so simplistic.
However, in 0.1% of collisions, HBs produces a pair of photons, which are what you can measure. If you add up their energy & mass it should equal the energy and mass of an HB (good old Einstein at work, there).
This is missing an important detail. There's a background that obscures this signal, too - lots of other stuff decaying into photon pairs. The key is that the experimenters count all the observed photon-pair decay events at each total energy level, and there are more of them at the Higgs mass level (because of those decaying Higgsies), which produces the spike in the graph shown in the blog post. So you plot number of photon-pair decay events versus their mass, and where the spike appears, that's your possible Higgs.
(Also, photons have no rest mass, so you don't exactly "add up their energy & mass", but that's a minor quibble.)
Ah, the Media Lab
This is at least as useful, interesting, and pleasing as the vast majority of things to come out of the MIT Media Lab.
I've been to the Media Lab, and as far as I can tell, its role is to be simultaneously the least productive and best-known building on campus. I figure it's a deliberate distraction from the real work being done everywhere else, in case engineering-hating monsters/aliens/zombies/terrorists attack Cambridge.1
1Massachusetts, that is. If they attack the one in Cambridgeshire I expect nothing will save it.
Re: Trellis Quantisation
I know you used the Joke icon, but in case this was a serious question:
- Trellis codes are, in general, a way to create error-correcting codes. Wikipedia and other sources can provide more detailed explanations, but basically with a trellis code you create a state machine, where a state is a couple of bits you've already received, and the transitions from that state are labeled 0 and 1 for the next bit you receive. The machine only allows some states, so if you believe you've received a sequence that's not allowed by that machine, that indicates an error, and the smallest change to the sequence that gives you a path that is allowed by the machine is the most-probably-correct correction.
- In this context, we're talking about trellis quantization1, which is an application of trellis coding to discrete cosine transforms (DCTs). The DCT is closely related to the discrete Fourier transform - it amounts to a DFT of an even-symmetric extension of the input. Lossy image compression, as in JPEG, generally involves transforming the input into the frequency domain with a DCT, then discarding the components that don't carry much information. Then the receiver can perform the reverse transformation and recover most of the information from the original.
- With the DCT, you have a lot of choices about how to parameterize it by selecting coefficients. Trellis quantization applies the idea of trellis coding to selecting those coefficients so you maximize the signal-to-noise ratio.
At any rate, that's my understanding. I've played with some of this stuff and the underlying mathematics, but it's not my area.
1s/z/s if you must.
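To make the transform-and-discard step concrete, here's a from-scratch sketch (orthonormal DCT-II and its inverse, with naive coefficient truncation). Note this only illustrates the lossy-DCT idea; real trellis quantization optimizes the quantization decisions themselves, which this toy does not attempt.

```python
import math

def dct(x):
    """Orthonormal DCT-II of a list of samples."""
    N = len(x)
    out = []
    for k in range(N):
        s = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(s * sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N)
                           for n in range(N)))
    return out

def idct(X):
    """Inverse transform (orthonormal DCT-III)."""
    N = len(X)
    out = []
    for n in range(N):
        acc = 0.0
        for k in range(N):
            s = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
            acc += s * X[k] * math.cos(math.pi * (n + 0.5) * k / N)
        out.append(acc)
    return out

def lossy(x, keep):
    """Zero all but the `keep` largest-magnitude coefficients, then invert."""
    X = dct(x)
    threshold = sorted((abs(c) for c in X), reverse=True)[keep - 1]
    X = [c if abs(c) >= threshold else 0.0 for c in X]
    return idct(X)
```

Keeping all coefficients round-trips (to floating-point precision); the fewer you keep, the more detail you lose, which is the lever JPEG-style codecs pull.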
Re: // shouldn't get here, but WTF
If it wasn't for "Screw this, good enough!", many projects would never get done or they'd only get by done by people who don't see failure on the horizon
I'd like to see solid evidence to support that conclusion. In my own experience - and research I've seen appears to agree - most bad code is developed in ways that are less efficient than implementing superior alternatives would be. Copy-pasta is a great example: it's often faster to refactor at the first sign of reuse and abstract the problem away rather than duplicating the effort.
Certainly servicing technical debt ("clean up old problems") is crucial, but it's also possible to avoid a lot of technical debt in the first place and use fewer, not more, resources in the process.
you could train that same Bayes classifier to look at the code itself
I doubt Naive Bayes would be much good for identifying suspect source-code constructs, beyond what static analysis tools already routinely catch. But it'd be interesting to experiment with more-powerful classifiers like Support Vector Machines or Maximum-Entropy Markov Models or stacked neural networks1.
On the other hand, it's such an obvious idea that there must be a pile of existing research. I'm on vacation and feeling lazy or I'd have a look.
1The last are generally known as "Deep Learning", but that term is so stupid I avoid using it. Which is how you know I don't work for Google, where it's apparently mandatory.
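For what it's worth, here's roughly what that Naive Bayes baseline would look like - a toy multinomial model over source-line tokens, with made-up training examples. Any real attempt would need a serious labeled corpus and actual feature engineering.

```python
import math
from collections import Counter

def tokens(line):
    """Crude tokenizer: split on whitespace, isolating parentheses."""
    return line.replace("(", " ( ").replace(")", " ) ").split()

class NaiveBayes:
    """Toy multinomial Naive Bayes with add-one smoothing."""
    def __init__(self):
        self.counts = {"suspect": Counter(), "ok": Counter()}
        self.docs = Counter()

    def train(self, line, label):
        self.docs[label] += 1
        self.counts[label].update(tokens(line))

    def score(self, line, label):
        total = sum(self.counts[label].values())
        vocab = len(set(self.counts["suspect"]) | set(self.counts["ok"]))
        logp = math.log(self.docs[label] / sum(self.docs.values()))
        for t in tokens(line):
            logp += math.log((self.counts[label][t] + 1) / (total + vocab))
        return logp

    def classify(self, line):
        return max(("suspect", "ok"), key=lambda lb: self.score(line, lb))
```

Train it on a handful of lines flagged "suspect" or "ok" and it will generalize - weakly - to token patterns it has seen, which is about all Naive Bayes can offer here.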
Re: Lovable Hippos
Hippos are not only in Africa
They were very nearly in Louisiana. A number of influential people lobbied Congress to allow large-scale importation of hippos to the US to solve the "meat problem". (That Atavist piece - a long essay, like the non-fiction equivalent of a novella - is worth reading, both for phrases like "lake-cow brisket" and for the rather interesting stories of some of the personalities involved.)
Re: We kill and eat Wilbur. And Nemo (if not Nemo, several of his cousins). And Bambi. And Daisy...
I choose not to, along with half a billion other people
Which is great for the rest of us, because health-conscious zombies and vampires prefer grain-fed human.
First of all you are confusing statistics with mathematics. In mathematics we operate with certainties
And you appear to be confusing mathematics with some figment of your imagination.
"Mathematics" describes those forms of reasoning which employ formal abstractions to produce results that are demonstrably consistent with a set of axioms (and a set of principles for combining them, which are second-order axioms themselves). Statistics is most definitely part of that set.
Try logic instead
An excellent suggestion. Perhaps you should learn what "logic" means and have a go at it yourself.
Here, I'll get you started. "Gosh, that seems really unlikely" is not a logical construction. If it's correct, then it's a justification for making a Bayesian estimate of the probability of a thesis; but in itself it has nothing to do with logic.
Space-time is infinite.
There's a finite number of ways particles can be arranged in space-time. Space-time must therefore start repeating at some point.
There's a finite number of decimal digits, therefore the decimal expansion of pi must start repeating at some point. Oh, wait.
Even if space is infinite, that doesn't guarantee there are an infinite number of particles in it.
An infinite universe is thus guaranteed to contain multiple versions of you.
Even if the previous argument held water (which it doesn't, for several reasons), this doesn't follow. Your "infinite set of particles arranged in finite ways" could include one copy of me and an infinite number of hydrogen atoms. There is absolutely no requirement that there be more than one of me.
Sometimes there seems to be an infinite number of people making sophomoric arguments that don't stand up to a moment's scrutiny, but I'm sure that's just an illusion.
Re: That's because
They're very popular in US academia, particularly in the humanities. I've avoided them myself, because I hate Apple's OSes (dealing with OS X via bash or ksh is bearable, but it's still inconvenient at best, since all the applications are GUI), but most of the academics I know have Macs.
And most people at work still need a PC and need to run SAP, Dynamics, JD Edwards etc. plus MS Office.
"most people at work" need to run SAP, etc? Across all industries? Not in my experience. In fact, I'd be willing to bet that the majority of business-owned PCs are not running SAP, or most of the other packages you listed. MS Office, yes, probably, because the damn thing is nearly unavoidable.
Re: Uh-oh... we guys are in trouble.
Since science tells us that we men think about sex
"science" tells us nothing of the sort. Those claims are complete inventions that people incapable of critical thought have spread as myth.
Re: Leaving aside the snarky comments
If El Reg could give more of a shit about what goes on in Europe
If you could learn to read, you'd see that the article mentions accessibility twice explicitly, and again implicitly with the reference to Hawking.
But thanks for the plug for an unrelated project. Have any other trumpets you'd like to blow?
Re: Why not put a blue LED on the front ?
A red one that sweeps side-to-side, so you can look like KITT from Knight Rider in human(-ish) form.
the late scientist Nikola Tesla
"scientist"? Tesla was an engineer, inventor, and tinkerer, but at best he dabbled in actual science. I've read a couple of Tesla biographies and I can't recall any actual methodologically-sound scientific research he did, nor any contributions to scientific theory.
I realize Tesla is one of the patron saints of the Internet, thanks in part to hagiographies by the likes of Inman1 and even more to the usual online groupthink. And no doubt I'll garner some downvotes for daring to challenge his legend. But that legend is hugely inflated.
That said, sure, let Inman and Musk build a museum for him. It never hurts to get folks excited about engineering and the like. If we can't have Scrapheap Challenge, at least we could have a fun Tesla museum to encourage engineering-inclined kids.
(In my opinion, Tesla did make one major contribution: demonstrating AC power transmission and getting Westinghouse on board. While HVDC looks better than AC today for long-distance transmission, at the time AC was likely the best way to go. Edison's plan of DC short-distance distribution and small neighborhood generating plants would have worked in cities, but rural electrification would have lagged far behind.)
1Whose work I often enjoy too. That doesn't make him an expert in the history of science, though.
Re: When you can't hold it in your hand, or worse you have to right click on it
Reading The Hand by Frank Wilson (role of tool use and dexterity in the development of the brain) and wondering on the possible misapplication of Piaget's ideas by Alan Kay and the rest of the Xerox gang.
Oh, yes. And more-recent neurological research (e.g. by the Damasios and their team) supports this, too. You can't simply strip away the somatic aspects of tool use and expect visual metaphors to play the same role.
Re: Noting error messages....Let's be fair here...
Everyone I support knows how to take a screenshot of an error message and e-mail it to me
Bah. As far as I'm concerned, emailing a screenshot1 should be punished by an extra hour of being ignored. Unless it's from one of the many Windows dialogs that don't support copying text, of course, in which case the only proper response is to go to Redmond, find the developer responsible, and administer a brutal beating.
Kids, lawn, etc.
(I should note, though, that when I deal with other people's software issues, those people are all presumed IT professionals. I don't generally interact with end users. So when I get an emailed screenshot, it's from someone who ought to know better.)
1MIME is the second-most aggravating innovation in the history of computing, right after the WIMP UI paradigm. A god-awful morass of inefficiency and waste that exists primarily to enable people who don't understand the system and refuse to learn.
Re: Oh dear
I can believe that, but are you really suggesting renewable electricity will be so cheap - in spite of demand for it just as electricity - that a large quantity of hydrogen, produced at low-ish efficiency by electrolysis, will be available for a hydrogen economy (or synthesised methane economy if you like) at a reasonable price?
It would certainly require very cheap electricity, produced far enough away from the vehicles being fueled that the most-economic options would be high-voltage DC transmission and/or shipment of synthesized hydrocarbons. (I'd go with propane myself for the latter, since it has some transportation advantages over methane if you don't already have a pipeline available, there's a widespread distribution network and market for it, and it's easy to run existing internal-combustion engines on it.)
So nuclear, stuff that's geographically constrained like geothermal, and specialty projects like those big solar-thermal plants in the Sahara that people are always dreaming about.
Re: Prius attracts ridicule?
what's not to like?!
The advertising, at least here in the States, which is obnoxious enough to discourage me from wanting one.
The smug "I'm saving the world" attitude of some Prius drivers, as they happily consume far more than the average human.
I admit they make more sense than the "luxury" hybrids that just use the electric motor to be even more ridiculously overpowered. (Of course, by my standards even the base Prius is overpowered, with a p/w ratio that's about 133% of, say, a 1990 Honda Civic CRX. What, am I going to tow a boat with the thing?)
Re: What was OK in the '50s may not be ok now...
70 years ?
Maybe his parents were sending him to summer camp? "But I don't wanna make macaroni shakers!"
(Does this de-Godwin the thread?)
Re: "children's bare bottoms may be considered to be pornography"
As we have learned from the Internet (and codified as Rule #34), anything may be considered by someone to be pornography. There is no, as the philosophers1 say, "impenetrable2 context" which prevents some image or description from being found prurient.
So it all comes down to a question of what the courts (in areas reliably under rule of law) or populace (elsewhere) deem unacceptable. And that's hard2 to predict.
Re: Did he code?
Former Sun employee, yacht named Escape - clearly not only a coder, but a long-time vi user.
Re: So what benefit do publishers bring?
increasingly in need of a strong editorial presence
And that's the key. There are many good novels on the shelves these days, of tremendous variety, and it's possible to find them because of the work of acquisitions editors. Their quality owes no small debt to development and copy editors.
There's no replacement in sight for acquisitions editing - so far, crowdsourced alternatives like reputation networks and recommendation systems don't seem to be coping with the flood of self-published material, though some self-publishers have risen to the top. And while it's possible for self-published authors to hire copy editors, many don't; and even fewer seem to work with development editors.
Publishing is an economic mess, but that doesn't mean the solution is to let Amazon bully and bribe publishers and authors.
Not really sure if fitting in a pocket matters now.
It matters to me, and apparently to the person you responded to, and the person who upvoted that post. So yes, it does matter now.
Whether it matters to a substantial fraction of Blackberry's potential market is another question.
JFTR, I know dozens of people with smartphones, Apple and Android.1 None have Samsung Notes. Which merely goes to show you that relying on your acquaintances as a sample is probably not methodologically sound.
1I used to know one with a Symbian smartphone, but that was me, and when that phone died I didn't see the point in remaining with the moribund Symbian OS, though I liked some aspects of it.
They can't actually make that claim at all
I assume by "They" you mean "Lewis Page", since that part of the Reg article appears to just be Lewis' exercise for today in not thinking critically.
("rather conclusively cast doubt"? What is that supposed to mean? "At first we doubted whether there was any doubt, but now we have only the slightest of doubts about the existence of doubt.")
Kickstarter has now joined Farcebook and Twatter as the hangout of fools.
Apple's understanding of the device being "off" / in stand-by is to just turn off the video output. The processor still runs at full speed and the hard drive still spins at normal speed. The only way to really turn it off is to pull the power cord!
This is true of the DVRs I've had too - a very small sample, admittedly, but I suspect it's broadly true of set-top boxes and the like. Why would it be different? The simplest implementation is to have "standby" simply appear to the user to have disabled the device's user-visible functionality. Consuming power is an externality for the vendors of such devices.
Re: Runnng 4.4.4 no worries
all software has bugs, regardless of who makes it
True. That said, I took a look at the second bug (since I'm running a phone with Android 2.mumble, though I install almost no apps on it, and certainly not without considerable scrutiny), and it's 1) a dumb mistake on the part of the developer, 2) pretty obvious when you have the source code, 3) part of a general class which is likely to contain other bugs of the same sort, and 4) indicative of a systemic software-security failure on Google's part.
The last is the failure to recognize that security is a cross-cutting aspect, and mechanisms such as Android's "activities" either need security implemented at a lower level (say, using a capability architecture of some sort), or need it automatically injected via some aspect mechanism. Requiring the developer to either review which activities are exported, or to manually implement safeguards against misuse, will inevitably produce privilege-escalation bugs.
Explicitly wrapping security around functionality is good - it reduces the attack surface and increases the depth of defenses - but in itself is not sufficient against reasonable threat models for consumer software that controls anything of value.
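For illustration, the manifest-level review the developer is expected to perform amounts to something like this (`android:exported` is the real manifest attribute; the activity name is hypothetical):

```xml
<!-- An activity only ever started from within the app should not be
     reachable by other applications at all. -->
<activity
    android:name=".PasswordEntryActivity"
    android:exported="false" />
```

The point above stands: relying on every developer to get this one attribute right for every component is precisely the kind of manual discipline that produces privilege-escalation bugs.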
Re: which is why...
It's enough to make you smile in places.
Yes, the authors really don't know much about their subject. Sample quote: "Unix shell will interpret files beginning with hyphen (-) character as command line arguments to executed command/program". That's fundamentally and entirely incorrect - the shell does not "interpret file[name]s beginning with hyphen[s]" at all. It just passes those strings to the exec system call, which will in turn make them available to the invoked program as part of argv.
Re: which is why...
unix programmers and administrators have used ./* since 1988 to avoid this
Yes, and why modern versions of find(1) have the "-print0" option and xargs(1) the corresponding "-0" option, and so on. It's a widely-recognized issue.
There's also the related trick of embedding ANSI or other terminal control code sequences in filenames, for entertainment when someone lists them using a suitable terminal (emulator).
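A quick Python sketch of why leading-hyphen names bite, and the classic "./" fix (a toy argument handler for illustration, not any particular tool):

```python
import argparse

def parse_files(argv):
    """A naive CLI: every positional argument is taken as a filename."""
    p = argparse.ArgumentParser()
    p.add_argument("files", nargs="*")
    return p.parse_args(argv).files

def sanitize(names):
    """The classic fix: prefix relative names so none can look like an option."""
    return [n if n.startswith("/") else "./" + n for n in names]
```

A file literally named `-rf` is indistinguishable from an option string by the time it reaches `argv`; prefixing `./` removes the ambiguity without changing which file is named.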