Re: Ah, the days
And remember TWAIN, for digital cameras? That's "Thing Without An Interesting Acronym"
(I always thought the M in PCMCIA was "reMember"...)
Um, aren't Java and C++ both statically typed? (Sure, C++ still has raw pointers and casting, but still...)
"fire" "movement" ... maybe "telepyretic" (causing fire at a distance)?
Probably still fine, I guess, since it's showing you how to use the big G.
So if we can't have free movement of people, but they don't want a hard border, I'm not sure how they can think that's achievable?
National Identity Cards.
Less massive planets would have a harder time holding onto lighter elements thanks to solar winds and the like. The lighter elements will obviously be higher up in the atmosphere, making them more likely to be stripped away over time. More massive planets are better at keeping hold of these because gravitational forces are higher, but there are no doubt several other reasons as well (like distance from the star).
It's called "key escrow". The device used by the consumer has a secret key that can be used (along with other information, such as the device ID) to recover the session key used to encrypt the communication. The device is supposed to be tamper-resistant, so users aren't able to access the escrow key. A copy of that key is also stored by law enforcement, allowing them to decrypt the communication whenever they want.
The other way to implement it is to present users with a new encryption scheme that's supposedly secure, but has a flaw known to your mathematicians and (supposedly) nobody else. This gives them an advantage when it comes to decrypting stuff, because some short-cut becomes feasible instead of brute-forcing the message.
With both sorts of secret (escrow key or "back door"), the security of everything depends on how secure that secret is. As we've seen from the NSA leaks (which gave rise to this weekend's botnet that hit the NHS among others), there are plenty of hardware and maths wizzes outside of the NSA (or whoever) who can, with enough time, effort and money, crack that secret, rendering the encryption completely irrelevant.
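The escrow idea can be sketched in a few lines. This is a toy illustration only (hashlib-derived keys and a throwaway XOR stream cipher; the key names and the device ID are invented, and no real escrow system works exactly like this):

```python
import hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: keystream built by hashing key + counter."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

ESCROW_KEY = b"copy held by law enforcement"   # hypothetical escrow secret
DEVICE_ID = b"device-1234"

# Device side: the session key is derived from the escrow key plus
# information (the device ID) that travels with the communication.
session_key = hashlib.sha256(ESCROW_KEY + DEVICE_ID).digest()
ciphertext = xor_stream(session_key, b"attack at dawn")

# Law-enforcement side: the escrowed copy plus the device ID is enough
# to re-derive the session key and read the traffic.
recovered = xor_stream(hashlib.sha256(ESCROW_KEY + DEVICE_ID).digest(), ciphertext)
```

The point being: anyone who gets hold of ESCROW_KEY can do exactly what law enforcement does here.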
Unfortunately the EFF's signing on to the bogus ...
That's not what the article said. It said that it's worried about "the independence of the office and its ability to conduct fair investigations".
Personally, I don't think that an investigation would come up with enough to tie Trump to the Russians directly, though I suspect that there are others in his entourage who were compromised. Still, if he has nothing to hide, then why should he fear the probe? A normal, sane individual would allow this to run its course. Instead, Trump uses bluster and now, it seems, direct interference in the workings of the investigation. That doesn't project an image of him being free of taint.
the partisan interests of a few wealthy donors.
Surely you're not serious here, or are you back to talking about Trump and Russian donors again?
It seems that they want to interpret the GPL as a EULA, when it's not.
If it's looked at as a copyright statement, then the default state when you put the appropriate (c) mark on the document is that it is your [the author's] property and should fall completely under copyright laws. If that's all you do, then the position is clear: you [someone other than the author] can't go and copy the material except under certain fair use conditions.
When you add the GPL statement, you are granting certain extra rights (but, crucially, reserving certain other rights, such as not tampering with the rights granted, or modifying the document and re-releasing it without continuing to honour the conditions set out under the derived works sections) to anyone who might happen to have or receive a copy of the document. It shouldn't be looked at under contract law. In particular, it shouldn't be necessary for both parties (the author and the person who has a copy) to enter into a signed arrangement.
The question of how the person receives the GPL-copyrighted document should also be irrelevant. It's like the question of whether you buy a book from the publisher, a bookseller or you get it second-hand, somehow. The delivery mechanism or how you came by the copy is irrelevant since copyright resides within the copy itself.
Someone whisper in Mayhem's ear
Don't you think she looks tired?
Maybe if the reference is to Leonard Cohen's Tower of Song...
It's humbling to see such a devastating and wide-ranging attack appear as if out of nowhere. Indiscriminate, uncaring and just plain nasty in its effects. If I were a normal person (well, actually, I am, more or less) and not some puffed up politician, this would leave me speechless and basically in awe of the fact that I am basically a zero when it comes to the new normal elemental forces at play on the Internet.
If you can streamline the installation of a secure VPN and get caching of push data when the link is down, then the convenience factor could be worth it.
However, this is really nothing that a moderately tech-savvy person couldn't do in an afternoon. At least the secure VPN/DMZ part, anyway. The store and forward part will depend on the particular IoT device. Most of them won't admit to this sort of configuration, although all of them should by right allow you to configure exactly where the data will be sent to, and over which network link, rather than being hard-coded to only send to a fixed server or using a proprietary protocol (making me notice that this particular offering has a whiff of embrace/extend/extinguish about it).
Apropos of nothing, I recently lost the drive attached to the Pi that I'd been using as a music/radio player. Nothing lost since it was an old drive that I'd expected to fail. I had also been using the machine's wireless card to provide fail-over Internet access so that if my broadband went down, I could just turn on tethering on my phone and I'd be back online again. I decided to replace the Pi with an ODROID (simple) and then idly wondered about doing the fail-over on my OpenWRT router. Turns out that my wireless card can be used in both client and AP mode at the same time, so once I had that insight it took about an hour to migrate the fail-over completely onto the router. No doubt setting up a VLAN/DMZ would only take a similar amount of time.
Now if only my ISP would support IPv6 in some way.... though I guess that would take a bit more than an afternoon to fully explore :)
Us fleshbags will end up with no job prospects apart from writing trashy robo-dramas (and maybe landing an odd "token human" role, if we're lucky) à la "All my Circuits" for our benevolent overlords.
Blimey. I've never seen so many downvotes for someone making a valid (and interesting) observation on English spelling/pronunciation. Is pedantry dead here on The Register?
You can have my upvote, partly because I just noticed your post after I made a similar comment. We're both in this together :-/
Actually, phonetically (and historically, since it's named after the botanist Fuchs) it should be pronounced "Fuck shia". Little wonder that polite society fudged the pronunciation, to say nothing of any current Sunni-Shia ramifications.
Eh, I think that you'll find that Display Postscript was developed by Adobe and NeXT. Before Quartz.
Still, I guess you might be right: Apple probably decided to patent it, despite prior art, because ... splines?
I love the irony that if they had used strcmp instead, there wouldn't be a bug. Ironic because the programmer probably thought "shouldn't use strcmp... that might be insecure or cause a crash". Probably a form of hypercorrection. It's not strcmp's fault if another bit of your code fails to null-terminate a string.
Still on the subject of strncmp, surely it would be a good idea for the compiler (or a debug version of the C lib) to warn if the call is/can be a no-op? Obviously, I can think of some places where this might have a valid use (like exiting from a partitioned search when you've either found the right string or end up with a partition size of zero; checking which case it was can be deferred to outside the loop) but for the most part, a no-op wasn't what you expected, so it probably indicates a bug like this one.
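The n == 0 no-op is easy to demonstrate. Here's a quick check calling the real libc strncmp through Python's ctypes (assumes a Unix-like system where CDLL(None) exposes libc):

```python
import ctypes

libc = ctypes.CDLL(None)   # load the current process's C library (Unix)

# strncmp compares at most n characters; with n == 0 it compares
# nothing and reports "equal" regardless of the strings' contents.
no_op = libc.strncmp(b"yes", b"no", 0)
real = libc.strncmp(b"yes", b"no", 3)
print(no_op == 0, real != 0)
```

A compiler or debug C library that can see n is a compile-time (or likely run-time) zero would be well placed to flag exactly this.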
"When will these vegan dreadlock toting flip flop wearing tree huggers think of the trees?"
Or the orangutans!
See my vest, see my vest, see my vest!
That's because they desperately need cash to counteract their losses in Westinghouse, and they can sell their fab business as a "going concern".
Dunno. All I know is, he'd better watch his speed.
Addendum: The above assumes that you want to be at a full stop immediately after leaving the intersection. Not very practical, but at least you shouldn't get a ticket for breaking a red light.
Yer man's argument may be just that, from what I gather from a re-read of the article. If you don't aim to have v=0 at the exit point, but instead decelerate down to some minimum speed and then maintain that through the intersection, you obviously travel further in the same amount of time. Same idea, just different piecewise integration:
Columns are for reaction time, constant deceleration and constant exit speed. There are a couple of extra variables (final speed and how long you will travel at this speed, the product of which tells you the distance from the exit to where you will stop your deceleration), but if you set that product to be half the distance through the intersection, then you should be safely in control of the car and not trying to brake and turn at the same time.
As before, calculating dT is simply adding up the areas of the rectangles and the triangle, so it's still basic geometry.
Looks like Applied Maths to me. It's hardly "engineering" by any sane standard.
Start with a simple d = vt equation, take safe-braking distances and reaction times from the Rules of the Road (or US equivalent) to find the deceleration curve, graph it out in t and v, then do piecewise-integration (calculate area of some rectangles and a triangle and add them up) to find total distance travelled dT. Calculate the length of the journey between passing the traffic light and leaving the intersection as one quarter of the circumference of the intersection dI (or consider it to be two legs at a right angle, to be on the safe side) and show that dT - dI, which is the furthest you can be from the intersection in order to safely traverse it, is greater than 0.
The only "engineery" thing here is measuring how big the intersection is, but he could do that with OS maps.
Disclaimer: I am not an engineer!
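As a sanity check, the rectangles-and-a-triangle calculation is easy to script. The figures below are illustrative guesses, not values taken from the Rules of the Road:

```python
v0 = 13.9       # approach speed in m/s (about 50 km/h; assumption)
t_react = 1.5   # driver reaction time in s (assumption)
a = 6.0         # constant braking deceleration in m/s^2 (assumption)
dI = 15.0      # assumed path length through the intersection, in m

# Rectangle: constant speed during the reaction time.
d_react = v0 * t_react
# Triangle: braking from v0 down to 0 at constant deceleration.
t_brake = v0 / a
d_brake = 0.5 * v0 * t_brake

dT = d_react + d_brake
# dT - dI is the furthest you can be from the lights and still
# come to a stop just after clearing the intersection.
print(f"dT = {dT:.1f} m, margin dT - dI = {dT - dI:.1f} m")
```

Swap in the constant-exit-speed variant by replacing the triangle with a trapezium (brake down to some v_min instead of 0); the piecewise-areas idea is unchanged.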
A Linux user with a tie?
Don't knock it. It stops my pants from falling down.
I especially enjoyed your articles about privacy legislation. I probably need to get out more often.
I think you missed this bit of the article.
That still doesn't make sense to me. Either you're running Kali on a "modem" like those listed (in which case, you can use the wireless hardware), or you're communicating with these things as external devices (in which case, kernel support for the chipset is irrelevant; you talk to them over the standard 802.x network protocols).
Either way, this part of the article is very poorly worded.
Yeah, but a running process is <program that's on disk> + <data that's only ever in working memory>. Spawn a shell, install a program in its data space and your solution won't work.
Other posters above suggested that switching the machine off will deal with it. But what if it's a kind of APT ("advanced persistent threat") that can find other local machines where it can also run in memory, maybe even using different exploits or propagation methods? This can act as a backup in case the first machine is power-cycled, then re-infect it using the original exploit when it comes back up. Just like the ancient "Robin Hood and Friar Tuck" hack, except that there's no persistence if both machines are turned off at once.
Putting on my black hat for the moment, not persisting on disk can be a great way of avoiding detection. It's great for initial stages of an attack because you can use it to passively monitor a target network and use that info to plan for future attacks. Chances are this won't trigger any internal tripwires, and even if the probe is found, it won't reveal very much. From there, you can use a variety of different payloads, each working together stealthily using ideas of "quorum sensing" and "oblivious agents".
Quorum Sensing is an idea from bacteria, where individual bacteria take cues from the environment and begin to change their own secretions. The ultimate expression of QS in bacterial colonies is that they can regulate gene expression, so that they become more efficient at thriving in the environment. Apply that analogy to malware and you get to the idea of individual bits of malware using subliminal channels to announce their presence to each other and coordinate with each other to a degree. A simple example of a subliminal channel in a network might be to interact with a caching proxy (be it a web proxy or memcached database proxy or whatever) somewhere on the intranet. By looking at timing differences in responding to a request, each malware agent can basically pick up environmental cues to detect each other's presence. There are doubtless tons of other ways they can implement subliminal channels over innocuous-looking traffic.
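To make the caching-proxy idea concrete, here's a toy simulation (the proxy class, the marker URL and the timings are all invented; a real channel would be much noisier and need repeated sampling):

```python
import time

class CachingProxy:
    """Stand-in for an intranet web/memcached proxy: misses are slow, hits fast."""
    def __init__(self):
        self.cache = {}

    def fetch(self, key):
        # Returns the observed response time for the request.
        start = time.perf_counter()
        if key not in self.cache:
            time.sleep(0.05)          # simulate a slow fetch from the origin
            self.cache[key] = True
        return time.perf_counter() - start

proxy = CachingProxy()
MARKER = "/assets/logo.png?v=20170513"   # innocuous-looking URL used as a beacon

# Agent A primes the cache; agent B later sees a fast response for the
# same URL and infers that another agent is already on the network.
first = proxy.fetch(MARKER)    # slow: cache miss
second = proxy.fetch(MARKER)   # fast: someone already requested it
print(f"miss {first * 1000:.1f} ms, hit {second * 1000:.1f} ms")
```

The traffic itself looks like an ordinary asset request; only the timing carries the signal.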
Oblivious Agents are bits of code that have an encrypted payload. They take a set of input parameters (such as environmental cues, as gathered above, but it could also include things like the time or the host IP or whatever) and combine them to form a key. They use that key to do a trial decode on the encrypted payload, and if the decrypted message is valid (eg, by checking that it has a valid checksum), they execute it. They're called "oblivious" agents because they don't know (and don't reveal anything) about what exact set of triggers are needed to run a particular payload. And, of course, a defender can't easily decrypt the payload, either. Neither does it have to have just one payload, nor does all the logic have to be confined to being stored in a single malware agent: a payload could be just sending out a certain environmental trigger that ultimately serves to self-repair the swarm, delete itself, or start enacting some new strategic phase.
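A minimal sketch of an oblivious agent, using hashlib only (the cue strings, checksum scheme and toy XOR "encryption" are all my own invention, just to show the trial-decode mechanism):

```python
import hashlib

def make_agent(trigger_cues, payload: bytes) -> bytes:
    # Key is derived from the (secret) cue combination; a checksum is
    # prepended so a trial decode can be validated.
    key = hashlib.sha256("|".join(trigger_cues).encode()).digest()
    blob = hashlib.sha256(payload).digest()[:8] + payload
    keystream = hashlib.sha256(key + b"ks").digest() * 8
    return bytes(b ^ k for b, k in zip(blob, keystream))

def try_fire(blob: bytes, observed_cues):
    # The agent is "oblivious": it just tries whatever cues it observes.
    key = hashlib.sha256("|".join(observed_cues).encode()).digest()
    keystream = hashlib.sha256(key + b"ks").digest() * 8
    plain = bytes(b ^ k for b, k in zip(blob, keystream))
    checksum, payload = plain[:8], plain[8:]
    if hashlib.sha256(payload).digest()[:8] == checksum:
        return payload          # valid decode: "execute" the payload
    return None                 # wrong cues reveal nothing

blob = make_agent(["host:10.0.0.5", "hour:03"], b"phone home")
```

A defender examining blob sees neither the trigger conditions nor the payload; only the right environmental cues unlock it.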
All of this is much more suited to a spear-phishing attack against a high-value target. It's still fascinating to think about how you could apply techniques like this against certain businesses, banks, military installations or whatever. If it can lay more or less dormant and inactive over a long enough time, there's no telling what it could do. It could, eg, find some long-term persistence technique (so that it can re-infect at a later time if it's discovered), or use a variety of environmental cues, eg, noticing lots of extra emails being sent or seeing other signs of activity to guess that a North Korean missile site is about to conduct a nuclear test, or even just have some other internal resource (like a git repo, active directory server, SCADA system or whatever) as the real target, and delete the bridgehead system once it's done its job.
Hmm. I think that having that black hat on for too long has affected my brain ...
I was going to complain that most people use something akin to apt-cacher-ng or squid on the client side, anyway. But then I realised that FTP doesn't have a standard way of getting file metadata, particularly the HTTP-like "Last-Modified" data that's crucial for avoiding downloading (mirroring) stuff you already have. Sure, running "dir" works, but there doesn't seem to be a standard way of presenting all the fields ...
Overall, probably a sensible move. Still, with FTP disappearing it does make me feel just that little bit more antiquated.
Why not ...?
I know that you're probably just asking rhetorically, but you got me thinking of what sort of algorithms and data structures you'd need to scale up the number of no-fly zones. As the number increases, you obviously hit a practical limit if you do a linear scan on them.
I reckon quadtrees, possibly with some sort of arithmetic or wavelet encoding of the number of NFZs in each sector.
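A quadtree sketch of the lookup (all parameters here are arbitrary; zones are circles (cx, cy, r) and the "is this point in a no-fly zone?" query walks down to one leaf rather than scanning every zone):

```python
class QuadTree:
    """Partition a square airspace; leaves hold circular no-fly zones (cx, cy, r)."""
    MAX_ZONES, MAX_DEPTH = 4, 8

    def __init__(self, x0, y0, x1, y1, depth=0):
        self.bounds = (x0, y0, x1, y1)
        self.depth, self.zones, self.kids = depth, [], None

    def insert(self, zone):
        if self.kids:
            for kid in self.kids:
                if kid._overlaps(zone):
                    kid.insert(zone)
            return
        self.zones.append(zone)
        if len(self.zones) > self.MAX_ZONES and self.depth < self.MAX_DEPTH:
            self._split()

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.kids = [QuadTree(x0, y0, mx, my, self.depth + 1),
                     QuadTree(mx, y0, x1, my, self.depth + 1),
                     QuadTree(x0, my, mx, y1, self.depth + 1),
                     QuadTree(mx, my, x1, y1, self.depth + 1)]
        for z in self.zones:
            self.insert(z)          # redistribute into the new children
        self.zones = []

    def _overlaps(self, zone):
        # Does the circular zone touch this node's rectangle?
        cx, cy, r = zone
        x0, y0, x1, y1 = self.bounds
        nx, ny = max(x0, min(cx, x1)), max(y0, min(cy, y1))
        return (nx - cx) ** 2 + (ny - cy) ** 2 <= r * r

    def blocked(self, x, y):
        node = self
        while node.kids:            # walk down to the leaf containing (x, y)
            x0, y0, x1, y1 = node.bounds
            mx, my = (x0 + x1) / 2, (y0 + y1) / 2
            node = node.kids[(1 if x >= mx else 0) + (2 if y >= my else 0)]
        return any((x - cx) ** 2 + (y - cy) ** 2 <= r * r
                   for cx, cy, r in node.zones)

qt = QuadTree(0, 0, 100, 100)
for nfz in [(10, 10, 5), (50, 50, 8), (80, 20, 3), (30, 70, 6), (90, 90, 4)]:
    qt.insert(nfz)
```

Each query only examines the handful of zones in one leaf, so lookups stay cheap even as the zone count grows; the compression (arithmetic/wavelet coding of per-sector counts) would be a separate layer on top.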
Funnily enough, I also started thinking about how to do differential equations when I saw the word "exponentially" in the article. AFAICR, differentiating e^x with respect to x (can't do fancy LaTeX or MathML markup here) is e^x. "Exponential" means that we have a superincreasing sequence since the dy/dx (slope) at each point is constantly increasing (approaching infinity) in the x direction.
It doesn't make sense to compare two numbers and say that the second is an exponential increase over the first. There's no curve (or an infinite number of curves), just a straight line between two points, so "exponential" doesn't apply.
There may be an order of (base 10) magnitude between the two prices, though, which would be mathematically correct.
(Yeah, I know, I'm being really pedantic here. That's why I'm making a comment, not using the "make corrections" link.)
why hasn't he tried tunnelling out of the embassy
Shh! We don't talk about those "diplomatic channels".
for "reached out".
Top journalistic tip: You have to reach out for the phone or keyboard, not just do a zombie impression and hope that someone will contact you to corroborate your story based on your mad skills.
Apparently, he also said "Never pick a fight with people who buy ink by the barrel". I can't see how news organs are going to take kindly to Wikipedia scraping and aggregating all their articles and deleting all their revenue-generating ads.
Since sarcasm often involves humans stating something opposed to their beliefs or wants
<sarcasm>Typical Yanks. Can't tell the difference between irony and sarcasm</sarcasm>
<sarcasm><irony>Finally, something to learn Americans to speak English good</irony></sarcasm>
She probably won't say, but Alaska.
Are these self-driving cars going to be uploading any part of the video feed with a Windows-style "telemetry" excuse? Could such data be aggregated across many self-driving cars to identify and track other cars on the road, either through their license plates or by tagging and tracking other vehicles while they're in your field of view, with the cloud part filling in for discontinuities ("white van was tracked until point A, then lost; another sighting at point B is consistent with being the same white van, ...")?
An autonomous driving system obviously needs to be aware of other vehicles and to remember that they could still be around (temporarily in a blind spot or obstructed from view) before making manoeuvring decisions, but surely there are issues if these data are being aggregated in near-real time in a cloud somewhere.
Also, why stop at tracking vehicles? Surely it would need to have awareness of pedestrians, too. Maybe the current crop of cars are more suited to highway driving or driving somewhere like the US, where vehicles have right of way, so ignoring pedestrians who might suddenly walk out in front of you might make sense (until you have to go into collision-avoidance mode). For city driving, though, and in countries where jaywalking isn't a crime, surely they'll have to follow the same rules as for human drivers. Part of that is being able to figure out where pedestrians are or might be, and reading their intent, at least to some degree. Obviously, simple things like seeing that they're very close to the kerb or partly on the road is a good sign that they're looking for an opportunity to cross(*), and that can be handled by simple physical rules based on distances/location. However, reading intent is often much more complicated. If you want computers to be as good as humans, you're going to have to include things like how they act (do they turn around to look at the road as they approach a crossing, or look up at a traffic light), and figuring out where their attention is directed.
All this analysis of pedestrians (and tracking them, obviously) probably won't make it into first-generation cars, so in the initial (training) stages at least, manufacturers are going to be slurping a lot of so-called telemetry data. You can't say that blurring faces or whatever is a solution because they will need facial features to do things like gaze tracking or to judge how aware the person might be of traffic (or, eg, they're talking on a mobile phone or texting rather than paying attention to other things). The easiest thing is just to slurp everything they can, but if real-time tracking is the norm from the outset, it's hard to see how spy agencies or whatever (or even just traffic police) wouldn't want to tap into that and make sure that they continue to be able to use the system even after the AI part has been trained and downloaded as a set of real-time rules that can run on the car.
I'm sure that these sorts of concerns would definitely be looked at in Europe or the US, but in China? Somehow, I don't think so.
* another, unrelated scenario with self-drive cars strikes me. If you're coming to a crossing and you see someone that you know (or think you know) and make eye contact and give them a nod or something, how are they going to interpret that? Maybe they don't know that you're not the driver. If you were the driver, they could take your gesture or general demeanour as giving them the OK to cross the road in front of you. The car's not going to understand that...
... that old servant Ines told me that one drop even if it got into you at all after I tried with the Banana but I was afraid it might break and get lost up in me somewhere because they once took something down out of a woman that was up there for years covered with limesalts they're all mad to get in there where they come out of you'd think they could never go far enough up and then they're done with you in a way till the next time yes because there's a wonderful feeling there so tender all the time how did we finish it off yes O yes I pulled him off into my handkerchief pretending not to be excited but I opened my legs I wouldn't let him touch me inside my petticoat because I had a skirt opening up the side I tormented the life out of him first tickling him I loved rousing that dog in the hotel rrrsssstt awokwokawok his eyes shut and a bird flying below us he was shy all the same I liked him like that moaning I made him blush a little when I got over him that way when I unbuttoned him and took his out and drew back the skin it had a kind of eye in it they're all Buttons men down the middle on the wrong side of them Molly darling he called me what was his name ...
I suggest something like a cross between a chef's hat and a nuclear mushroom emanating from a "cleftal horizon" (a couple of tasteful curves framing a Y for Yankee).
Hmm. Those guys that have been accumulating fake social media profiles apparently have been on to something all this time. Who knew they could be used for good?
Yes, I am Mr. Cypherpunk, and so is my wife.
I totally agree, Ken. We should be talking about "automated processes" or the like.
It seems to me that the only thing that needs legislating here is in the realm of data protection (or FoI) requests. Let's say that someone is refused insurance cover. I think that it's quite possible and reasonable to make a data request asking the organisation to clarify the factors leading to the decision. I'm pretty sure, though not certain, that this sort of request is allowable and that it should receive a reply.
However, once you start using automated processes, there is a great risk that the organisation being asked for such information will, deliberately or not, seek to obfuscate what their processes are. You'll just get a response "computer says no". If you kick this up to the ombudsman or whatever, there's every likelihood that the organisation will argue two main points: first, they'll say that their algorithms are a trade secret, and second, they'll say that the cost of satisfying the request is excessive. I don't think that the first point needs much comment, but for the second, it's quite possible that they'll be able to make a good excuse: since software is so much more complicated than manual processes (which they'll no doubt have documented as part of their quality certification or whatever), the cost to audit it will be so much more. Since data requests can legally be refused on grounds of cost, this will end up with more data requests being refused, with little or no recourse.
So, as a result, I think that the only changes that need to come about are to ensure that the same transparency standards are applied to automated processes as manual ones. This needs to happen both in terms of privacy/FoI legislation and non-legislative areas, such as ISO quality standards (which I assume are immune to Brexit).
I've often thought that this sort of collation of data could be very useful for language learners.
There are plenty of basic things that scanning corpora like this can turn up. You can have some basic stuff like collocations that exist in the target language (eg, "take" and "bath" form a collocation in English) and distinguish that sort of association from more conceptual linkages. For example, when "president" appears, you're likely to see more vocabulary related to countries, laws, government, debates and so on, as well as particular current events or issues. More or less what the article says about "spaghetti" appearing more often with "food" than "shoe".
Besides being able to group new vocabulary and presenting related words to be learned together, in context, a computer-aided learning tool could use the data in a lot more ways, eg:
Maybe it's too much to expect a machine learning system to do all of this unsupervised, but still, you could have it at least generate different kinds of material and use crowd-sourcing to weed out errors or re-train the thing. Lots of ways to have a hybrid human/computer system.
The other big use that I've often thought about is automatic classification of documents. I've got tons of PDF files downloaded from the net, but no actual filing system for them. One simple way of clustering similar documents together is to do a frequency analysis of the words in the document and then to get rid of all the most common words from the language (like "it", "for", "and", "the", etc.). The remaining top ten words, say, should help to give a very good idea about the topic of that document. Basic statistical clustering like this should help a lot to find relevant/related documents on a given topic, but there seems to be so much more that could be done with AI/machine learning techniques.
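The frequency-analysis part is only a few lines of Python. A minimal sketch (the stop-word list is deliberately tiny; a real tool would use a proper one, and would read words out of the PDFs rather than a string):

```python
import re
from collections import Counter

# A deliberately tiny stop-word list for illustration.
STOP_WORDS = {"the", "and", "for", "it", "a", "of", "to", "in", "is",
              "that", "on", "with", "as", "this", "are", "be", "by"}

def top_keywords(text: str, n: int = 10):
    """Frequency analysis minus the most common function words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words
                     if w not in STOP_WORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(n)]

doc = ("The kernel loads modules into memory, and the kernel pages memory "
       "to disk when memory is scarce. Modules extend the kernel.")
keywords = top_keywords(doc, 3)
print(keywords)
```

From there, clustering is just a matter of comparing documents' keyword sets (eg, Jaccard similarity) and grouping the ones that overlap most.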
Dumb ways to die (for a more upbeat version)
... shouldn't be needed.
The kernel loads itself into memory at the start and then, apart from loadable modules, it doesn't need to re-load (page) itself from disk. The only thing that stops working after installing a new kernel is hibernate. That fails (in the sense of starting to hibernate, but not going through with it) because on the next reboot, a new kernel is in place and it wouldn't make sense to reload the memory image belonging to an old kernel.
Regular programs/services should also be restartable without needing a reboot. Even upgrading loadable kernel modules on the fly is fine because they are (like the kernel) just loaded once and the init system (or something like dbus) knows about dependencies and can restart affected parts of the system in the right order.
Biting the hand that feeds IT © 1998–2017