1817 posts • joined 15 Jun 2007
Re: Significance @Gerhard den Hollander
Is this PDF safe to read in Foxit?
In this case, it cannot be the kernel stack.
Re: Significance @runwin
OK, I've read the page, and it falls into the "change the return address" scenario that I mentioned. Having read that, and done what I should have done before and worked out how stacks are laid out, it looks as if most systems grow their stack downwards (from higher to lower addresses), and I admit that the return address will be stored at a higher address than the buffer, so it could be overwritten.
But I still think, for several reasons, that this is more likely to cause a DoS than a remote code execution problem in this case.
I'm always a bit sceptical about the danger of this type of bug. Sure, it will cause unpredictable errors, but let's look at what could happen.
As they talk about stack overflows, I'm presuming that the URL is being copied into a variable stored on the stack, i.e. a local variable. When this exploit runs, whatever is in the memory locations after this variable will contain some data that is under the control of the exploiter.
So. The memory locations after the variable will be another variable, or possibly a stack frame header including the return address and possibly some saved register contents.
If it's another variable or saved register contents, then the previous contents will be lost, and/or some unpredictable behaviour might happen when the variable is used. It might be a pointer, which may mean that some other data address could be clobbered later in the code. It could be a vector (pointer to some code), but in order to exploit this, you'd have to understand the rest of the code really well. If it's a stack frame (and I've not checked the direction of stack growth so don't know whether it will be the frame for this function or another), then the return address may be damaged, which could be used to control where the code returns to.
The comment from Paul Ducklin of Sophos, re. "The crash, which is a side-effect of a stack overflow, pretty much lets you write to a memory location of your choice," seems an over-reaction. In practice you could only overwrite addresses following this buffer, in the page containing the stack or a contiguous later page; any write beyond that will probably generate a segmentation or address violation as soon as it touches an unallocated address. To me, this is not the same as "a memory location of your choice".
You've potentially got some executable code (if that is what the URL contains) stored in a memory location you should not have access to, but it's not in the program text, and I've not yet seen a method described of triggering that code (the return address in a stack frame header is the only one I can see which would affect the execution stream). This does not appear to be a practical means of injecting code; it is much more likely to enable a DoS attack against the user running Foxit.
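A minimal C sketch of the pattern under discussion (the buffer size, function names and URLs are all illustrative; nothing here is taken from Foxit's actual code):

```c
#include <stdio.h>
#include <string.h>

#define BUF_SZ 16

/* The unsafe pattern: an attacker-supplied URL copied into a fixed-size
   stack buffer. On a downward-growing stack, the bytes written past
   buf[BUF_SZ - 1] land at higher addresses: other locals, saved
   registers, and eventually the saved return address. Shown for
   illustration only; calling it with a long URL is undefined behaviour. */
void vulnerable(const char *url) {
    char buf[BUF_SZ];
    strcpy(buf, url);               /* no bounds check */
    printf("%s\n", buf);
}

/* The bounded equivalent: snprintf never writes past the buffer and
   reports the full length, so an overlong URL causes detectable
   truncation (at worst a DoS-style data loss, not code injection). */
int safe_copy(char *dst, size_t dstlen, const char *url) {
    int needed = snprintf(dst, dstlen, "%s", url);
    return needed >= (int)dstlen;   /* 1 if the URL was truncated */
}
```

Whether the overwrite in `vulnerable` reaches the return address (remote execution) or just another local (unpredictable behaviour, most likely a crash) depends on the stack layout the compiler chose, which is exactly the distinction the quote glosses over.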
So it is important (all bugs should be regarded as such), and I'm sure there may be some special cases I've not spotted, but on casual inspection it can only be described as a DoS vulnerability with a 'potential' remote execution problem. Saying any more would be FUD.
Possibly someone could educate me if I am wrong.
Re: Rip off @cornz 1
I hope that you are only using usenet for your content, because if you watch 'live' TV over the Internet (yes, it's a bit of an ambiguous definition, but I believe it means material that is broadcast over the Internet at the same time as it is broadcast to air, even if delayed by a few minutes), then you still need a TV licence. Your computer becomes TV receiving equipment under the terms of the law.
But if you are using usenet, expect a letter from your ISP accusing you of copyright infringement.
What's not clear is whether the fact that you could watch Internet broadcast TV but don't is enough to remove the requirement for a licence.
It's not quite that straight forward.
Shareholders are already on the hook, as they are unlikely to get the money they invested back. They are just as much creditors as the workers who are owed pay.
In the case of a company that is negligently driven into large debts, especially if money is owed to HMRC (in the UK), then the directors can be sued for corporate negligence, which can result in them being banned from becoming a director for a period of time, personally heavily fined, and in some cases, sent to prison, especially if fraud can be proved.
Limited Liability companies do not offer complete protection, but I admit that there are ways of extracting value from such a company and walking away without the debts.
There are two points here.
One is that it allows companies to hire more people in order to select the ones worth keeping after three or six months. The other is that companies are so petrified by the redundancy conditions that they do not take people on until absolutely necessary, for fear that having to make them redundant later, if there is a downturn in their business, would be too costly. This is also why so many companies prefer to use agency staff until they know an increase is really justified: it avoids the redundancy packages.
In both cases, if it were easier to shed workers, companies might be prepared to employ more people.
I personally would prefer to work for a company knowing that it could shed staff more easily if that stops the business going bust and everyone being made redundant, even if it did lessen my job security. Some safeguards are needed, but giving a company in difficulty the choice between going bankrupt because it keeps too many employees on and has to keep paying them, or going bankrupt because it cannot afford the redundancy packages it has to offer, is no real choice at all. Both drag the company down.
If I remember the story properly from the BBC
What is happening here is that the Indian government is retrospectively changing the tax rules, and then expecting foreign companies to just roll over and pay more tax for years they already thought were closed. It is a policy that is specifically designed to extract more money from non-Indian companies operating in India.
It's within what a government can do, but is clearly not going to make companies operating in India happy.
Re: thinkpads? @cap'n
I think that it depends on whether you are a form-follows-function person or not.
Thinkpads are functional. There is little wasted weight or space, the screens and keyboards are/were the best in the business, they are not too bulky, and they will suffer the day-to-day wear and tear that a road warrior will put them through. And there is nothing in their design that makes them unpleasant to use. The lips and edges you talk about are all deliberately engineered so that when shut, they all lock together, so there is not too much strain put on the hinges. Seen many Thinkpads with broken hinges? No, I didn't think so.
Add to this an engineering, maintenance and warranty strategy that means they can and will be fixed if they break in warranty, plus full maintenance manuals available for third-party maintainers to fix them out of warranty and a large pool of donor systems for parts, and they have an extended second- and third-user lifetime: you will still see 6-7 year old Thinkpads in regular use (my T30 has a manufacturing date of 2005, and the A20 which runs as my Linux firewall is even older).
I'm sure that if you look, you will still be able to buy brand new OEM batteries from one of the auction sites for any Thinkpad built this century. Try that for a decade-old Dell or HP.
Of course, if style is more important, then a Sony Vaio or any of the Ultrabooks will do the job, but don't expect them to have the same life expectancy. Then again, if you are after style, it does not matter if it breaks after 12 months, because you will probably be replacing it with the latest 'shiny' toy anyway.
Re: @ Peter (was: No mention of token ring? @keith_w)
It may have been that way in the US, but I was involved with a customer still installing new TR kit beyond Y2K. I admit it was mainly because the customer had a large investment in it, but when the organization split, the bit I went with dumped TR, and jumped straight to 100baseT.
In a lot of commercial organisations, being able to use a Premises Distribution System to organise your cabling for TR (and twisted-pair Ethernet, phone and RS232 terminal traffic) was a real benefit, and one that 10base2 thinwire Ethernet could not take advantage of. Thus Token Ring persisted.
I saw the benefit of a PDS when I saw 1MB/s AT&T StarLAN installed for the first time in the late '80s.
Re: No mention of token ring? @keith_w
You're confusing the physical MAC layer with IP.
Token Ring and Ethernet are comparable. IP can run over either, and many more physical networks as well. Although it does not directly follow the 7 layer OSI network model, it is a layered protocol (MAC, IP, TCP/UDP, application protocol), and provided it meets some basic requirements any physical layer can be used to transport IP.
Token Ring is exactly as routeable as Ethernet when running IP. Routing has nothing to do with the MAC layer, except in very simple protocols such as IPX or NetBIOS.
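The layering point can be sketched as nothing more than nested headers: the MAC layer carries the IP packet as an opaque payload, so the routing decision at the IP layer neither knows nor cares whether the frame arrived over Token Ring or Ethernet. A toy illustration (the field layouts are heavily simplified and not wire-accurate; real headers need network byte order and many more fields):

```c
#include <stdint.h>
#include <string.h>

/* Simplified header layouts, for illustration only. */
struct mac_hdr { uint8_t dst[6], src[6]; uint16_t ethertype; };
struct ip_hdr  { uint8_t ver_ihl, tos; uint16_t len; uint32_t src, dst; };
struct tcp_hdr { uint16_t sport, dport; uint32_t seq; };

/* Encapsulation is just concatenation: each layer prepends its header
   and treats everything below it as an opaque payload. Swap mac_hdr
   for a Token Ring or FDDI header and nothing above it changes. */
size_t build_frame(uint8_t *out, const struct mac_hdr *mac,
                   const struct ip_hdr *ip, const struct tcp_hdr *tcp,
                   const uint8_t *payload, size_t plen) {
    size_t off = 0;
    memcpy(out + off, mac, sizeof *mac); off += sizeof *mac;
    memcpy(out + off, ip,  sizeof *ip);  off += sizeof *ip;
    memcpy(out + off, tcp, sizeof *tcp); off += sizeof *tcp;
    memcpy(out + off, payload, plen);    off += plen;
    return off;
}
```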
I have worked at numerous locations where there were multiple networks using Token Ring, Thinwire (10base2) Ethernet, twisted-pair (10baseT) Ethernet, ATM, FDDI and even SLIP and PPP, all routed together at Layer 3.
What makes Token Ring better than 10base2 or 10base5 bussed Ethernet is that it did not use CSMA/CD to arbitrate use of a network segment, so it works much better at high utilisation rates. As soon as switched 10baseT Ethernet came along, that was no longer enough of an advantage, and Token Ring died.
If you look at network topologies, those with multiple tokens or a slotted ring (such as the Cambridge Ring) could carry much more data than Token Ring, but were more complex to set up.
If you had ever had to debug a token ring implemented with MAUs, when one system was running at the wrong speed and causing lost beacons (or beaconing), then you will be glad that TR eventually died!
Re: iOS feature support @DougS
I was only thinking this morning, as I connected my tablet to the cable that had fallen down behind the table and had to be fished out again (the one my wife complains about whenever she vacuums), how useful it would be to just put the tablet down in the same place and know that it would charge.
Ditto all the cables in the car.
So yes, wireless charging would be a good thing. Even better if there were a standard, and I could have a couple of them scattered around the house, charging all the phones, remotes, media players and other gadgets wherever I wanted to be in the house.
Re: B5 @BoldMan
Yes, there was a terrific dynamic tension between these two characters that persisted throughout the entire show.
I see Andreas Katsulas on other shows (from the past, obviously), and when I do, I can't help seeing him as G'Kar.
Re: Babylon 5 influence
Off topic, I know, but....
I much preferred Michael O'Hare as the commanding officer of B5. With Sinclair being the re-incarnation of Valen, and being involved with a part human Delenn (as would probably have been the case), it would have led to an interesting dynamic. Having 'The Scarecrow' dropped in at the beginning of Series 2, even if he was introduced as the 'Starkiller', lost some of the world-weary ordinariness (quite remarkable for a SF series set in the future) that Series 1 had.
Series 1 did not really start the main story arc (although throughout there were plenty of forward references that only became important later on, such as B2); it set the back-story for the way Babylon 5 operated that was necessary in the later stories. None of the ST franchises managed to achieve the same level of detail, although DS9 probably came closest.
I really would like to have seen how Series 4 would have turned out if JMS had not had to shoehorn in the Shadow Wars conclusion and compress the Earth liberation storyline into the same series. The Telepath Wars storyline for Series 5 was too weak (especially after seeing what happened in 'Endgame' in Series 4), and the loss of Commander Ivanova and Marcus, together with the changed role for Michael Garibaldi meant that there was too little continuation in the last series.
I must admit that I was a bit tearful the first time I saw the final episode "Sleeping in Light", especially seeing B5 finally destroyed, and again when doing a frame-by-frame on the easter-egg cast and crew video at the end of the closing credits. Makes me a bit of a sad geek really.
One last question. Whatever happened to Lennier? (I know, I've read what the Lurker's Guide and Wikipedia have to say.) I'm sure there is an interesting story in there somewhere.
Re: Let me get this straight... @Neill Mitchell
Point well made, but I'm sure that Xerox (Star), MIT (X Windows), Sun (SunTools), Digital Research (GEM) and even Apple (Lisa) were using the term "Window" and its plural form in relation to computer systems a long time before MS Windows version 1 went to market.
Ah, but here's the benefit
Unlike us (because of the NDA that is part of the settlement), Samsung know which patents Microsoft have hit them with over Android in the past. If they avoid those patents, they may be able to avoid having to pay the license fee, which may save them dollars per phone. They will also have some control in order to avoid the Apple ones as well.
I suspect that the main ones MS roll out frequently are the FAT patents, some of which will expire shortly, but I believe we've never found out the full set.
Re: The reality is all too real @RICHTO
WRT the FUD claim and the links to URLs that you claim will affect iOS and Android.
Question. Do you understand the application deployment model in either iOS or Android?
In both cases, the way applications run is handled by a layer ABOVE the OS. So when you talk about it 'rooting' the OS, that is almost certainly not the correct terminology. Rooting by definition means getting access to the root account on UNIX-like OSs.
What has been compromised here is the application framework, *NOT* the underlying OS. In both cases, the underlying OS will be untouched. In terms of what a user sees, the result may appear to be superficially the same, but if you are going to make such claims, it is vitally important that you understand what you are talking about. Anything else is FUD, especially if you are spreading fear as a result of your uncertainty and doubt.
These specific issues are rather analogous to a Facebook application or account being hacked or a vulnerability in IE or other browser, while the underlying OS, whatever that is, remains untouched (unless, you run the browser from an admin account of course, in which case all bets are off).
This is one of the historical differences between UNIX-based OSs and Windows. Unless you take specific actions, you will *NOT* be running applications as a privileged user on UNIX, BSD or Linux. This was not the case on Windows before Vista, where many people's normal accounts had full Administrator privilege. This has changed, for which I say hurray! But it took a long time for MS to recognise it (even though NT was designed with a good security model from the ground up, that model was rarely used to its full potential).
Re: The reality is all too real @Danny 14 @RICHTO
I say again, this time to RICHTO. Read the article you link to.
This statistic is for defaced websites, not OS vulnerabilities. If you don't know the difference, then you should probably not be taking part in these discussions.
I'm also not sure about the data from Zone-H. The stats you point to are for 2010, and looking at the dates on the news pages (latest, September 2012; total news items posted in 2012, 2; total posted in 2011, 5), it looks like a site in decline.
Re: The reality is all too real @Danny 14
Read and comprehend the article you point to.
It is talking about what the rootkit does once it is installed, and you are right, it does look quite sophisticated, and unpleasant.
But there is nothing in the article about how the rootkit gets onto the server, and this is where the strength of the OS security model comes into play.
As long as an OS has some privileged mode that allows the OS to be changed, it can be compromised. This is true of all currently deployed OSs, and it is necessary in order to be able to install patches. Looked at from another angle, there is little difference between a rootkit and an OS patch, apart from the fact that one is supposed to improve the system and the other is not.
If you were to look at compromised Linux systems and work out how they were compromised, I'm certain that most of them will have been initially infected as a result of human error rather than a deficiency in OS security. You know, something like an administrator using the same password or SSH key for multiple accounts, or having trusts set up from untrusted to trusted systems. And I also think that I am on safe ground in saying that if you were to look at the ratio of compromised systems to the total number of systems of a certain type, Windows would show a higher rate of infection than Linux.
It is true that Windows AV solutions are able to detect rootkits and other persistent infections once they are present, but this article is talking about zero day detection rates. I would much prefer to use a system that is less vulnerable but which had poorer detection tools, than one that let malware in but detected most of it sometime after the infection.
It should be seen as axiomatic that AV software is a market that only exists because of poor OS security in the past. There is no market for Linux or OSX AV because there is no history of significant infections on those platforms. If there were, there would be credible AV solutions for them.
What the AV software vendors have to accept is that in an ideal world, their comfortable little niche would disappear as OS security gets tighter. This is why they currently need to spread FUD to protect their income stream, and the tone of some of the comments here adds to it.
Re: Same functionality as Droid and Apple?
I'd noticed that there were videos missing from the Android YouTube app compared to the same search on a desktop. This also appears to be the case with the YouTube support incorporated into Blu-ray players and smart TVs.
I think that it is the case that if the YouTube app does not think the correct container or codec is installed on the device, it won't display the video in the search.
I think it is possible to get YouTube in a browser to tell you the format of any video, but I can't remember how, and I can't check as YouTube is blocked/filtered at work.
Re: Winding watches daily
I remember when I got my first self-winding watch about 40 years ago (it was also a cheap bit of crap, made by Timex), and I remember thinking how bulky it was compared to the cheap Ingersoll I had before it.
Some people just can't wear heavy watches.
I still prefer slimmer watches, even though I'm now wearing a lump of stainless steel that must weigh close to 100g, because I cannot find something durable and lighter that does everything I think I want (although the last time I used the stopwatch was months ago) at a price I'm prepared to pay (when did watches become so expensive?).
There is a school of thought that suggests that some errors are introduced intentionally by the publishers, and are used to identify the original source of copies of printed works.
This is particularly said of music manuscripts, so that if someone copies sheet music still in copyright into Sibelius or Rosegarden by hand to produce 'clean' copies, supposedly free of copyright, the publishers can still identify the source and take appropriate action.
I keep reminding members of the choir I sing with what they can't do when it comes to music copyright. All I can say is thank heaven for the library service in the UK, who can loan/rent out multiple copies of music to choirs and orchestras at reasonable rates to reduce the temptation to buy one copy and just photocopy it.
Re: all these satellites
Do you really believe that surveillance satellites work like those in films such as "Behind Enemy Lines"?
Put a Linux-based firewall running on anything with a Pentium 3 or later as a boundary firewall. Almost free, and perfectly capable of doing this. Use something like Smoothwall with some of the community mods.
How do the ISP know there are children in the house?
"In other words, parents living with their children must remember to click on "no thanks" to filtering, otherwise their internet access will be restricted accordingly to block supposedly harmful material."
So. Am I going to get a mail from my ISP asking whether I have children living with me? Or are they going to look to see whether anybody is visiting moshimonsters? And how are they going to contact me? (I rarely use the mailbox provided by my ISP because the mail name is crap).
I really think that all MPs should be made to attend compulsory "How the Web works" training, sit an exam to show they've understood, and if they fail it, be barred from taking part in debate or votes on laws affecting Internet access. Internet access is becoming so essential to daily life that the people agreeing legislation have to understand enough to stop suggesting stupid, unworkable laws.
Ultimately I agree
This piece is about private cloud, so data security is still your/their responsibility.
I don't get it (at least not all of it)
I do not understand why companies are prepared to put their data (which, for many companies, defines them) onto a public infrastructure where they have no control over who can access it, and in many cases cannot even tell which geographical location it resides in (which can be important if they don't want the FBI and DHS trawling through it).
It's all very well saying that the cloud providers will ensure that your data is secure, but that is about as trustworthy as a bank saying that its traders do not try to influence LIBOR. They may not even know what individual employees are doing. At least if one of YOUR employees leaks data, you can take appropriate action without a commercial contract standing between you and the guilty party.
I know that you could put cryptography in place to make the stored data not useful to a third party, but that may not give you the security that you expect if your data is stored in certain territories which require key escrow or disclosure.
I can see that some services may be suitable for deployment in a public cloud, but there are, and will remain, many that are only suitable for a private cloud or within controlled boundaries, requiring physical data centres.
IMAP is a protocol, and does not impose any structure on the way mailboxes (folders) are set up and named.
The problem is that now that people use a hybrid of reading their mail on a web-enabled mail server and downloading mails to a local mail client, you need some structure on the server, which is something IMAP was never explicitly written for. There is code to handle it, mainly by treating folders as separate mailboxes, but there is no standard structure defined, and nor should there be in a protocol standard.
As Gmail does not really support folders (from what I remember, one of the design criteria was that it would not use folders; anything that looks like a folder is really a set of mails indexed using tags), this probably adds difficulty to communication with another mail server that does use folders. Add this to a protocol that does not embrace folders in the first place, and it is clear that it will never be smooth; how well it works is probably more to do with the mail server and mail clients than with IMAP.
I would say that typing them in from a magazine was damn good practice. You either learned to debug other people's code (and your typos) and could become a software engineer, or gave up and became merely a user.
Re: Here's a thought...
If the administrators had no way of making sure they got paid (i.e. by being a preferential creditor), then they would not do it unless forced to by legislation.
Re: Time Legacy. @Spaniel
...then why get clocks that auto adjust!
If you are talking about computers, both UNIX and Windows can be told not to adjust for DST, so I don't see your problem.
But be careful what you ask for. DST is applicable in summer, not the winter (which is what most people assume). We are lucky in the UK, because normal time (GMT) ~= UTC, so it is very clear to us which should be 'normal' time, and which is DST, but it is not so clear cut for any other timezone.
I would not object to losing DST, but only if time were set such that the sun was at its highest in the sky at noon.
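For what it's worth, the summer-not-winter point can be checked on any POSIX system, where the DST rule is just a TZ string. A small sketch using the UK rule (GMT in winter, BST from the last Sunday of March to the last Sunday of October; the timestamps exercised below are arbitrary summer and winter instants):

```c
#include <stdlib.h>
#include <time.h>

/* Report whether a given UTC epoch falls within UK daylight saving
   time, by asking libc to convert it under the UK's POSIX TZ rule. */
int uk_is_dst(time_t when) {
    setenv("TZ", "GMT0BST,M3.5.0/1,M10.5.0", 1);  /* GMT normally, BST in summer */
    tzset();
    struct tm local;
    localtime_r(&when, &local);
    return local.tm_isdst > 0;
}
```

A July timestamp comes back with tm_isdst set and a January one does not, which is the sense in which GMT is the 'normal' time and BST the exception.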
This fills me with despair
This is one of the security 101 things to check on any UNIX-like OS. The fact that it was allowed to happen indicates that there are too many people creating these systems without the requisite knowledge and/or experience.
It is not uncommon to come across UNIX or Linux software that creates world-writable files, but that does not excuse such stupidity. What makes this worse is that it appears to be the primary interface to the memory system, which will negate all other security measures.
Re: How did they tell their customers?
I'm sure that somewhere in the package documentation you got there was an alternative dial-up service that you can use when ADSL is not working.
What do you mean! You no longer have a V90 modem?
On my way home this evening
I stopped the car at the darkest point on the journey (Exmoor can be really dark), and got out of the car.
The sky was a jewelled spectacle, and I said a farewell to Patrick with a heart both sad and joyous at the same time.
Re: Could be interesting
Google Maps and Navigation are only any good if you get a data service.
I recently had to go a long way out of my way to get home from work because of a combination of weather and several accidents. I turned on the data service on my phone and got.... zilch. And, of course, I had not maintained a paper map book in the car. As it turns out, the switch from Orange to EE was not as smooth as it was supposed to have been.
I reckon that I probably drove at least 10 miles further than I needed because of the stupid road signs that I had to rely on to get me back to somewhere I knew (this was in Devon, UK, where even major roads can be quite small, poorly lit and badly signposted), and I've vowed to never rely solely on Google Navigation again.
BBC R4 Today program carried an article on this
In that, a representative of the NHS (I can't remember who, and the summary transcript is not on the BBC Web site yet) stated that the genome of (specifically) cancer sufferers would be taken if the patient consented, with a view to identifying what factors in a person's DNA make-up control how a cancer develops once they have the condition. The data would be anonymised, so that the summary data released to research organisations would not contain information able to identify individuals. The fact that it was going to be restricted to people who already have a cancer diagnosis makes the information less useful to the insurance industry.
I know that collecting the data at all (and building the "data infrastructure" to hold it) could only be the tip of the iceberg, but it certainly did not sound like a wholesale sequencing of the entire population. I am as worried about this type of information becoming available to other parties as the next person who gives-a-damn, but from what I heard, it should not yet ring the alarm bells.
When you consider it, it would be perfectly possible for the NHS to sequence the DNA of any patient who gave any form of blood or tissue sample, but that is not what they were talking about. I'm not even sure whether that would be illegal, because personal medical notes probably already contain blood sub-grouping and other information that could be used to identify an individual or their susceptibility to certain conditions.
Re: Why bother? Because of graft. @AC 15:45
You know, being a sysadmin can be seductive. When I was faced with remaining a techie, or crossing the divide to become something else, I decided........ to go contract.
I've been calling myself a system administrator/system integration/support specialist (there is really not much difference if you are good at it) for 30+ years, and I still enjoy it.
What is one such as I to do? Where do I go and still expect to enjoy working? Certainly not into a supervisory or management role. I possibly could have become a system architect, but the opportunity did not present itself.
I cannot see myself changing what I do before I retire, unless I have to.
Well, no. I don't think that IBM regard a Power 795, the largest single Power system, as an HPC. They might regard several of them as such, but in IBM terms a Power 775 cluster, a BlueGene cluster, an iDataPlex cluster or a Power 755 cluster is a supercomputer.
I work with a couple of Power 775 clusters, and I can tell you that each is a cluster-in-a-box (or, in fact, in several cabinets).
But you are right. They don't run Windows.
The report quoted somewhere else in these comments refers to a one-off Windows cluster that was put together by Microsoft. It caused a few ripples, but none of them lasted, and I've not heard of another Top 500 supercomputer running Windows since. There is only one Windows system in November's Top 500. Not really credible as an HPC OS.
Oops. I'm going senile, and I admit it. Thanks for the correction.
Re: Licencing hell @RICHTO
I'll give you a clue back.
The PowerVM hypervisor that sits in IBM Power 795 systems (or, in fact, all Power systems since POWER4) is Linux. And I'm damn certain that they can do the level of IOPS that you are asking, although I suspect that comparing the I/O rate with Windows is a bit like comparing apples with oranges, and the comparison would be of little use.
As all Power systems use this hypervisor, even if they are configured as single system images, any I/O benchmark run on those systems that can perform at that speed will use the hypervisor in one way or another.
But I'm sure that you will come back with a 'that's not on Intel' to justify your claim.
Re: Some Obvious Reasons..... @RICHTO
Proprietary UNIX has had filesystem ACLs of the type you are talking about since at least 1990. I am most familiar with AIX, and this was a major enhancement when the RISC System/6000 was launched in 1990 with AIX 3.1.
The POSIX.1 filesystem permissions were a description of the original UNIX permissions model invented back in the 1970s, before Microsoft even existed. At that time, the most sophisticated security model around was that proposed for Multics, many features of which made it into both VMS and PRIMOS (and it is worth remembering that Dave Cutler had some responsibility for VMS).
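That 1970s model is small enough to sketch in full; this hypothetical helper is essentially the whole pre-ACL decision procedure (owner, then group, then other, first match wins), which is exactly what ACLs later generalised:

```c
#include <sys/stat.h>
#include <sys/types.h>

/* Classic UNIX write-permission check: the process matches exactly one
   of the owner/group/other triads, and only that triad's bit counts.
   (Real kernels also consider supplementary groups and root's
   override; this is the textbook core of the model.) */
int may_write(mode_t mode, uid_t fuid, gid_t fgid, uid_t uid, gid_t gid) {
    if (uid == fuid) return (mode & S_IWUSR) != 0;  /* owner triad */
    if (gid == fgid) return (mode & S_IWGRP) != 0;  /* group triad */
    return (mode & S_IWOTH) != 0;                   /* everyone else */
}
```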
This is for a filesystem, I admit, but the basis of Role Based Access Control (acquired credentials used to control running processes and services) was introduced in AIX 4.3.3, which IIRC was around 1998.
If you look outside of core UNIX, then DCE/DFS, a standards-based enhancement which sat above the OS and worked on various UNIX OSes, OS/2 and even Windows NT, provided ACLs for processes and file objects around 1994. This was based on the Andrew File System (AFS) and Apollo's NCS, which were earlier still. AFS and DCE/DFS allowed credential management using Kerberos a long time before that support was integrated into Windows, and it was provided by the OS vendors in most cases. AIX could build in a Kerberos-based user authentication system from about AIX 4.2 in 1995.
I'm fairly sure that those people who were familiar with Veritas will also have something to say.
In terms of NFSv4, the Linux support may be experimental (which probably reflects more on the people doing the work than on NFSv4 itself), but it has been part of the core facilities provided by at least Solaris and AIX for quite some time (I'd have to look up when it was introduced, but I remember reading up on it in 2005). Definitely not experimental on those platforms.
Having got that off my chest, it is clear that these arguments are pointless. Although I have a good knowledge of AIX and traditional UNIX, my knowledge of Windows is incomplete, so I do not make direct comparisons of capabilities. I suspect that there are actually very few people able to make a dispassionate comparison of these features between OSes, so willy-waving competitions in forums such as this one achieve very little.
That said, I do like the idea of a Windows Server that allows you to strip down the basic install to the minimum necessary to run an application. Seems consistent with KISS, one of the primary requirements to make any service functional and secure.
There is no point having more features than you need on a server with a specific, defined function; they only open up security and performance issues. This is where heavily (de-)configured Linux distributions have had a real advantage in the server space for years, because you could strip them down relatively easily to the bare minimum. It looks like Microsoft have finally learned.
Re: Ironic but many Security “consultant” prefer NTLM + SSL over Kerberos + IPSec
Your point is quite well made, and I agree that in isolation, there should be no problem deciding which is most secure, but quite often there are other constraints.
I suspect that many of these security consultants may have to come up with solutions that are 'good enough' while not adding significantly to the cost and complexity of the solution.
When all is said and done, the security of any environment is a compromise between risk, cost and strength, and always will be until the strongest security is also the cheapest.
Of course, if the consultants you've known only suggest NTLM+SSL, then your scorn is probably deserved.
I know I keep banging on about this
but I had TomTom running on my Palm Treo 650 with a BlueTooth GPS about seven years ago. I'd probably still be using it as a satnav now if TomTom hadn't retired the database format needed by Navigator 6. And out-of-date databases are a real pain in the neck.
I may try this. £31 is not so much to lose, and I really can't get on with Google navigation needing a data link any time it needs to re-route, and giving me directions just-too-late to get into the correct lane.
Amazon decided that they could get a better deal
than Royal Mail offered and switched to whatever courier they are now using. I think they made the decision during one of the postal disputes a few years ago.
My worst experience of Royal Mail delivery was when the postman decided that the refuse bin was a good place to leave a parcel, with no card saying where he had left it. It was a sheer fluke that it did not go out with the rubbish.
I have been sitting in a room next to the front door for a whole morning, only to find a card saying that they'd attempted to deliver a package and could not get an answer. I'm surprised that the postman was even able to put the card through the door without me hearing, let alone ring the door bell. And the dogs didn't hear it either!
This normally happens about 11:15 on a Saturday, with the parcel office closing at 12:00, and the card saying to allow at least an hour before going to collect the parcel. It really gets my back up.
I did drive up to my house one weekend to see the postman filling out the card before he even walked up the short path to attempt to deliver a package. He did not get a Christmas tip that year.
Re: Added value : more than downloadable apps.
Congrats on your 22nd landing.
I agree, but only up to a point. I drive somewhere hilly and through small villages where there are speed restrictions (rural England is like that), so momentum in a light car (a European Fiesta is what I believe you call a sub-compact in the US) cannot be maintained, and raw acceleration is what matters. My Fiesta is an old one, and does 0-60....... eventually (I think it's about 17.6 seconds according to the Ford stats). I know that on one part of my journey, I regularly get overtaken after a standing start by cars like Land Rovers. There are plenty of BMWs, VW Golf GTIs etc. that drive at the same time as me that are much nippier, and get right up my bumper!
So when going up a hill, or when leaving a speed-restricted area, momentum does not help.
Re: Added value : more than downloadable apps.
Speaking as one who is currently driving a Fiesta (one of my kids needs a vehicle to get some driving practice while they learn to drive, and I'm not paying to keep 3 vehicles on the road!) I will say that almost *ANYTHING* is better than a small-engined Fiesta for anything other than local journeys.
I am getting to work later (it is not fast enough and falls behind other traffic, mainly due to a lack of acceleration), and it is significantly more fatiguing to drive than a larger car; I certainly would not want to do long business trips in it. All of these issues are arguments for getting something a little better, merely to improve the ability of the driver to work at the end of the journey.
If you had chosen a Focus as the comparison, then I may not have bothered to reply, but a Fiesta is really too small.
I think that what I am trying to say is that there is a range of options between cheapest and most expensive, providing something that is good enough without having to go up to the premium end of the product spectrum.
Re: The "Magi" of NERV
Neon Genesis Evangelion was a quite seriously messed up anime, on several levels.
The implication was that the Magi and the Evangelions themselves were, or contained, the consciousness and/or the brain of various family members of the main characters. In the case of the EVAs, it was necessary to allow the pilots to synchronise with them.
The only thing that I never understood was where Rei had come from. Shinji's mother was in EVA01, and Asuka's mother was in EVA02 (the scene where Asuka comes across her mother, who had hanged herself, suggests that a trauma was also required, which is also distressing). I know that Rei was the prototype for the dummy plug (as shown in one of the last few episodes, where we got to see parts of Rei in Terminal Dogma), but there seems to be no template for her personality, not that she had a lot.
But the role of the Magi was never explained, and I definitely don't think that they qualify as 'badass'.
Re: vi on unix, teletype support...
ed was the primary editor on UNIX until ex and vi came in from BSD.
When I got my BSD 2.3 software tape in 1982 (we wanted to run Ingres on UNIX V6 and V7), I found that I could not compile vi up on my PDP11/34E because it (vi) was too large for a non-separate I&D PDP11. Instead we used a screen editor that was written for small UNIX systems by the Newcastle University Computing Department.
Later versions of vi used an overlay loader that may or may not have been related to the Keele overlay modifications for UNIX, but Berkeley dropped support for such small PDP11s by about BSD 2.6 (after all, the PDP11/44 was a much better machine, and it and everything after it had the separate I&D feature).