As far as I am aware, all change control systems work like this, and have done for decades. I used SCCS on Bell Labs UNIX 25 years ago, and it was the case then, and Wikipedia states that SCCS dates from 1972.
"If you want a better phone for actually doing phoney things..."
...then my advice is to get a £25 Nokia or Samsung dumb phone. Tried and trusted technology, long standby and talk time, robust as hell, easy to use (as a phone), and it won't get you mugged or be particularly painful when you drop it down the loo or lose it when out on a bender.
Of course, if you want to run Angry Birds, then you really want more than a phone!
And changing passwords on privileged accounts every 30 days or less.
Not sure that many companies I've worked for actually abide by their own rules, however. For most companies, it looked like this was in the policies merely to satisfy an audit requirement.
Imagine having to change all the passwords on all your routers, intelligent switches, management consoles, data appliances - well anything that has a password that protects a configuration basically! I'm sure that most companies don't really know the scope of the problem.
Micrografx Picture Publisher used to be my go-to software for image manipulation on Windows (back when getting scanners working on Linux through the parallel port pre-USB was a real pain). They used to allow personal use of their old versions of software, and it often came as part of the software bundle that came with a scanner as well.
Although I don't have a lot of Windows boxes around now, those that are in my control generally have MPP installed on them still.
Well, they make white cider by passing diluted grain alcohol over the remains of the apple pulp after it's been squeezed dry of real juice. Probably imparts as much flavour as using the wood.
What the hell is going on here with the "Add an icon" tab? What was wrong with having the icons below the post? Has somebody at El Reg been drinking the Metro/Unity Kool-Aid?
Me? I don't take my watch off, even in the bath.
Only time it comes off is to replace the battery, and so as not to scratch the wife when, um...
Oh, you know what I mean.
Not sure which of those two events happens more frequently nowadays!
"esethay arentay otntay hetnay igeonpays ouyay reajay ookinglay orfay"
"these are not the pigeons you are looking for"
Should you not have used/invented Pigeon Latin rather than Pig Latin?
Indeed, the sentence that reads "people are not willing to spend big money on mainframes – the Unix big guns" indicates that Dennis Yip has only a limited grasp of what he is talking about.
From a cursory examination, I would suggest that in his world there is just Wintel, and a single category of "everything else".
Second Impact was supposed to be in Antarctica!
"Sounds like is more a matter of having to page different regions then and iterate over them. Shades of paged memory again"
Except that the remote machine cannot manipulate the mapping registers that are used to control the window. If you can't manipulate these, then you cannot see memory outside of the region. And even the process on the local machine, which is running in non-privileged mode, cannot alter the mapping registers. This is a function of the OS, and is controlled by system call, with appropriate security.
If you think that paged memory is a bad thing, then you must realise that you are condemning all current machine implementations that use virtual address spaces. All virtual address space machines use memory-mapped pages to present a linear virtual address space on top of physical memory, and also to allow memory overcommitment (i.e. using a paging space on disk).
There are NO architectures currently that I know about that allow multiple processes to share a single, strictly linear global address space with the OS. I don't know whether you work with some that I don't know about, or if you don't realise how modern systems work.
I think it is safe to assume that for any architecture that implements RDMA, an IOMMU is essential. Anybody designing an architecture/system without one in this day and age would have to be regarded as terminally naive.
I'm as into Microsoft bashing, where it is deserved, as anybody else, but in this case, just because it is Microsoft does not mean there is a problem. You could say that pretty much every OS has buffer overflow exploits (it is one of the problems of writing in C, after all, and that is a language that has been used extensively for OSs), so it's not only Microsoft's problem.
In terms of the OS setting the window, it is really set up so that the hardware MMU does the guarding, not the OS. The OS sets up the window, and then the MMU enforces it. The OS may set it up wrongly, but assuming that it is correct and the MMU is bug free, it becomes impossible for the remote machine to move outside of the constraints of the window. This is not an OS issue.
If you distrust the MMU, then you must distrust all architectures that use virtual address spaces for processes as well, as they all rely on a hardware MMU to enforce the same thing. It's been that way for multi-tasking OSes with virtual address spaces for decades, and Wintel since NT is no exception.
Obviously some people hiding behind the AC banner have forgotten their system architecture courses, or are just trolling because they don't understand in the first place.
RDMA is secure. It does not give access to the whole of the address space of a system (or at least the implementations I have used don't).
What happens is that the OS on the controlling side of the RDMA transaction sets up a limited window to a region of memory that the remote machine can see. The remote machine has no access to memory outside of this window.
This makes the access as secure as the OS on the controlling machine which set up the window.
In the implementations I have seen, there is an architectural limit on the size of the memory region in the window. This means that it is simply not possible to expose the entire address space of a machine using RDMA.
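To make that concrete, here is a rough sketch (not any vendor's exact code) of how such a window gets set up with the InfiniBand verbs API, which is the sort of implementation I have used. The buffer size and function name are invented for illustration, and error handling is omitted:

```c
/* Sketch only: expose a single bounded buffer for remote access via
 * the ibverbs API. Nothing outside this registration is visible to
 * the peer; the adapter's protection tables enforce the bounds.
 * WINDOW_SIZE and expose_window() are made-up names. */
#include <stdlib.h>
#include <infiniband/verbs.h>

#define WINDOW_SIZE (1 << 20)          /* 1 MiB window, arbitrary example */

struct ibv_mr *expose_window(struct ibv_pd *pd, void **buf_out)
{
    void *buf = malloc(WINDOW_SIZE);   /* the only region to be exposed  */

    /* Register just this buffer; the returned keys only ever refer to
     * [buf, buf + WINDOW_SIZE), so the peer cannot reach anything else. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, WINDOW_SIZE,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    *buf_out = buf;
    return mr;    /* hand mr->rkey and the buffer address to the peer */
}
```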
Windows users are now expecting to cut (or copy) and paste large objects.
I recently used a product on Windows (I can't remember what it was) where drag and drop copy between folders did not work. What they expected you to do was select a group of files, right-click-copy (or ^C), move pointer, click, and then either right-click-paste or ^V.
In this way, you could be copying multi-megabyte files. This will not work with a traditional X-Windows style cut buffer. I think this is what they are talking about when they are talking about cut-and-paste between VMs.
Even though I deal with HPC machines every day which do RDMA with the fastest interconnect in the market at the moment (at least I believe it is), I cannot understand how you can have remote access to another system's memory via RDMA faster than access to local memory. What this would imply is that the RDMA device (probably attached via PCIe) had access to the memory on a system faster than the processor. As the memory controllers of all current high performance machines are integrated on the processor die, I cannot see how it is possible for PCIe adapters to have faster access. If he is talking about VMs on the same machine, and using RDMA to access memory physically in the same system but in a different VM, the best he can do is have it running the same speed as 'local' NUMA memory.
I know they are talking about NUMA machines here, and that a processor may have to go across the internal system bus to talk to memory controllers attached to a processor on another die, but that across-the-bus access is not likely to be any slower than access from an RDMA device sitting in a similar position on the bus. Not unless the internal bus architecture of the system is seriously screwed up.
The only thing that I can imagine he is talking about is having the memory 'mapped' and available via RDMA between two VMs on the same system faster than it could be copied across the interconnect from a remote (i.e. physically separate) system. That seems eminently possible, but it should not be a surprise.
It would be interesting to see whether the systems he has seen in the lab are the ones using PCIe4 as an interconnect.
I saw an interesting documentary on Hershey that claimed the reason why American chocolate is so nasty is historical.
In the 19th century, Europe had a chocolate fad. East coast America picked this up, as Europe was trendy to Americans who lacked a long history and looked across the Atlantic for trends and fashions. East coast America is fairly cool and the major cities were fairly close together, so chocolate could be made and moved quite easily.
When the frontiers started getting civilised (mainly with the railroads), high quality goods from the east coast were sent around the country because those people wanted to be seen as sophisticated, and chocolate was a small luxury that could be afforded in small quantities. Unfortunately, America is a big place, and often quite hot.
This meant that chocolate arriving on the West coast or in the South had often spent days in box cars in hot conditions, and quite naturally went off. The milk in the chocolate became rancid. The people receiving such luxury items did not know it was off because they had nothing to compare it to; they only knew it was the trendy thing to eat, accepted that this was how it should taste, and got used to it.
When Hershey, Pa (aka Chocolatetown, USA) was built, and they commissioned refrigerated box cars on the railways to distribute the product in ideal conditions, the outcry about the change of taste from the rest of the US was so great that Hershey changed the process to re-create the taste of gone-off chocolate that Americans now favoured. And the rest is history.
Mind you, I understand that Americans now go wild for the taste of Cadbury's chocolate (and have bought the company, unfortunately), so maybe they are beginning to see sense. Having said that, I guess that people from places like Belgium and France probably throw their hands up in the air at British chocolate, which substitutes vegetable fat for milk fat.
But I prefer it that way, and that's what matters to me.
It's Scientifiction, I insist.
As a student, I bought a JVC KD-720 deck to go with my budget Hi-Fi. Now, 35 years later, it is the last original component of my system, and still works pretty well.
It was at the time the "Best Buy" budget (~£100) cassette deck, and (IMHO) did a thoroughly good job of recording and replaying music. The main problem was that it did not have a 'proper' FeCr setting, and pre-dated metal tapes, so could not use them (the CrO2 setting was also not really calibrated for pseudo-chrome tapes like the TDK SA). It also only had a 'normal' tape head (not a Sen-Alloy one), so I was always expecting the head to wear, but even now, it's not too bad (apparently, early generation chrome and metal tapes were very abrasive, so I guess that missing out on these tapes prolonged the life of my deck).
Even this budget deck had a wow and flutter of 0.2% DIN, and a frequency response of 40-14000Hz with ferric tapes, and 40-15000Hz on chrome dioxide tapes.
I nearly always used TDK AD-90 ferric tapes to record, because they were the sweet spot of the TDK range, providing good frequency balance (although slightly bright) and good frequency extension (the brightness extended the upper ranges while minimising tape hiss) at an affordable price. They also used to be about 47 minutes long (useful for both sides of a long album) and very well made, with friction-reducing embossed teflon sheets either side of the spools. The Audio Society used to buy them by the case, and sell them at near cost to the members. TDK SA tapes were better but much more expensive, although I believe that the calibrated tape brand for JVC decks was supposed to be Maxell.
Anyway, my deck always gave exemplary performance. You could tell it was a tape from the hiss if you listened hard, but otherwise it sounded very good, and I had very few tape jams or mangled tapes in this deck. The sound of it persuaded several of my friends who had the "cassette tape is no good" attitude to change their minds and invest in a decent deck. It also helped having a reasonable music source to record from (a turntable in those days).
To prevent jams, always wind the tape from end-to-end in a single operation and leave them on the leader before storing them. This prevents 'ridges' which increase the drag, especially on the take-up spool. If the take-up spool cannot be turned by the motor (remember it has a slip clutch so that it can cope with the varying diameter of the spool as the tape is wound on), then the capstan and roller will continue to feed tape in until it loops back on itself, and gets mangled. This was a common problem on C120 cassettes, because the nearly full spool was very heavy.
If it does look like this is happening, immediately re-wind the tape as best as you can before attempting to eject the tape. If you just eject the tape, you are absolutely guaranteed to break or stretch it, and will end up fishing out short lengths of broken tape forever.
I still keep isopropyl alcohol around as a general cleaning fluid, after using it for so long to clean tape heads with cotton buds.
I used to read New Scientist when I was doing my O and A levels at school in the 1970s (it was an accepted use of free study time in the Library), and besides ogling the calculator adverts, I found it a good counterpoint to the taught science. It was sufficiently detailed to engage the mind of a scientifically curious teenager and to be seen as useful knowledge, and would in my opinion be graded in content and detail above Horizon (which I used to watch regularly as well) and other science and philosophical programmes such as The Body in Question and The Ascent of Man.
I continued to read it on and off through University (it was one of the popular college periodicals in the Junior Common Room at my college).
Once I left University, I stopped reading it.
Recently I noticed that supermarkets have started putting it on their magazine racks, so I thought I would pick up an issue or two, with the intention of getting my late teen kids interested.
I was shocked! I thought that, as I had not studied science in the last 30 years, I would struggle a bit with some of the developments and maybe some of the maths, but no.
The content was trivial, and the author(s) clearly felt that they had to explain everything from first principles. I found almost nothing taxing, and was able to get through almost all of the articles by skim-reading them, skipping the 70% or so of the content that I already knew. This was not the mind-expanding New Scientist that I had read thirty years ago.
I gave them to my 17 year old son, and he found nothing in them that interested him enough to justify me regularly buying it (although he is a knowledge sponge, and reads a lot online).
It's a terrible shame that one of the most approachable science publications appears to have done what I can only describe as dumbing down. But maybe that is just because they are following their audience, rather than leading it.
Not sure about the real one, but I'm sure you could find a lookalike who would do most things you want for cash, without needing any special technology.
Probably not a legal transaction in many places you would want to go to for a holiday, though.
I will support the views about inappropriate clothing, but mainly on the practical side.
If I had my way, I would ban anything approaching high heels in a machine room. This would include built-up boots, such as cowboy boots. The reason? Well, suspended floors are never completely even, and if you put a heel over the edge of a raised or removed tile, accidents will follow (I believe you are more likely to maintain your balance in 'normal' shoes). Similarly, I would not want sandals or strappy shoes worn anywhere heavy equipment is being moved.
I also wonder how someone in a skirt would react to crawling under or around restricted spaces while trying to maintain their dignity, and if they did not care about their dignity, how much of a distraction that would be to their colleagues.
The comparison with suits that ShelLuser quotes is not relevant, because it is perfectly possible to take a jacket and tie off, roll up your sleeves, and be no different, mobility-wise, from someone wearing cargo pants and a tee shirt. Indeed, IBM hardware engineers used to have to wear suits; it's what was expected. It's not possible to do this if you are wearing a dress.
In terms of appearance, I would say that standing out is not always a bad thing (I wear collared shirts and a tie even though those around me are much more casual). But do it for the right reasons, not the wrong ones.
"...almost, but not quite, entirely unlike tea"
My experience with the IBM Security Module (TPM 1.0) on Thinkpads where it was optional was never good.
Once a Thinkpad had been booted once with the security module installed and turned on, removing it stopped the machine from working. Completely. It just sat and beeped at you 16 times when you powered it on.
The only way to revive it was to plug the removed module back in. It had to be the one that was removed. One from another machine of exactly the same type did not work. You could not even disable the security module and then remove it. The IBM maintenance manual stated that if you got a machine to repair in this state, without the original security module, the fix was to replace the motherboard.
More modern Thinkpads, like most machines, have it in the supporting chipset, so it is impossible to remove.
Just don't use your disassembler in the US. There are clauses in the DMCA specifically banning reverse engineering.
I'm also interested in your disassembler. Are you that fluent in x86 machine code? Can you really glean from the executed code what the software writer was trying to achieve without access to the meaningful variable names, structure definitions, function names (missing if the symbol table has been stripped), argument types and comments? If so, you are in a class of your own, so much better than anybody else in the world (and yes, in my time I have tried to do exactly what you suggested, and even had some limited success).
In order to get access to what Paul Crawford wants, you would need a de-compiler which was able to reconstruct the C, C++ or whatever language the various parts of Windows are now written in, including removing all the optimisations that not only change the generated machine-code, but in some cases completely eliminate sections of code. And you had better hope there is no self-modifying code in there anywhere!
If you have such a de-compiler, you should be immensely wealthy, because what you would have would be tantamount to magic.
and then review your comment.
It's now 10 years old, but lays down what Trusted Computing means to Microsoft and other vendors.
IBM never nearly went bankrupt. They had plenty of money washing around even in their darkest days in the mid '90s.
It suited them to have a few bad years' results so that they could take some large tax write-downs and not get hammered for their extensive employee shedding exercises.
If you go back to the published results from the mid-90s, and read between the lines, there was still profit being made, money and assets in the bank. It's just that their transformational restructuring costs could be counted as a loss as far as the bottom line was concerned. IBM has some very skilled financial engineers.
Anybody who thinks that IBM is anything other than a quiet giant should look at their share price. Investors are confident, and IBM has irons in more fires than any other IT company. It's just that companies like Microsoft and Apple in particular have inflated value because of various flavour-of-the-month products. I would trust IBM to still be around when Apple have run out of ideas and the PC wave is over and Microsoft are just a patent troll.
And to think I used to think that IBM were the enemy! Either they've got better, or I've just mellowed.
Indeed it is, but it is not the EFF that is at fault.
Liberation Music is clearly trying to nullify the fair use clauses in US copyright legislation, and that is a clear violation of the rights permitted by that legislation.
People who think that all use of copyrighted media should be licensed and paid for do not understand the benefit to the copyright owners that such limited fair use, as long as it is not abused, can grant to a work.
Many tunes that would have sunk into obscurity have had life breathed into them by being included in a viral video. The fact that someone can hear something, like it, and immediately buy it from iTunes, Amazon or such is of huge benefit. So what you want is your music included at low enough quality, incomplete, or with other noise over the top so that it cannot be ripped from the video, but recognisable enough to be identified.
Such was the use in this video. Liberation Music are harming the work and copyright owners, not helping them.
I think the post by vagabondo above sums it up very well. Go and read it.
Although Lawrence Lessig is a Harvard professor, he's probably not in a position to defend a copyright infringement claim without somebody backing him up financially (American law is very expensive, and although he is a Professor of Law, he probably can't defend himself in court without professional representation). If this had been a free lecture for Harvard, then the University might have done so, but this was for Creative Commons, and they are probably not cash-rich enough to assist.
What gets me is the fact that Liberation Music believe they have any chance of winning. If anybody understands the DMCA and copyright law in America, it has to be Professor Lessig. If I were LM, I would be running away very fast and trying to settle as quickly as possible, not that Professor Lessig will allow that now he has decided to get a precedent set.
My guess is that LM will lose, pull up the drawbridge to the US, and never pay the damages. I only hope that if this happens, the US court will attempt to extradite the board of LM to the US. If that happened, a little bit of my rapidly diminishing respect for the US court system would be restored.
The basic limitation of the BBC Micro was the way that the memory map was laid out. There was 32KB of the address space reserved for ROMs, normally 16KB for the OS and 16KB for the Basic, or whatever sideways ROM you were using. This was at a time when Sinclair had all of their OS and Basic in 16KB. This left only 32KB for RAM without some address trickery.
The segregation of the OS and sideways ROMs was a great feature for speed, and allowing separation of the OS and other packages, and really allowed you to do a great deal. The architecture allowed you to have 'service ROMs', essentially add-ons to the OS to handle interrupt driven hardware (the OS could bank-switch the ROMs to handle interrupts), which meant that you could add things like floppy disk drives, mice, teletext adapters, software sprites (Acorn's Advanced Graphics ROM), sophisticated music hardware and even networks and hard disks relatively easily.
With one of the ROM positions populated by static RAM (there were several sideways RAM boards; mine is an ATPL board with a write-protect switch) you could even (dare I say it) load ROMs from disk. I got the Acorn ISO Pascal Compiler (two ROMs, one an editor and runtime, and the other the compiler) running in a single 16KB bank of RAM by re-vectoring the OSCLI ROM bank switch vector, and loading the compiler from floppy or Econet and then swapping back at the end of the compile.
The BBC OS was a masterpiece of good software engineering, and with the associated Advanced User Guide, which mapped the OS and rest of the system out like a blueprint (and even contained a board schematic), enabled magical things to be done.
When the B+ came along, Acorn copied what Solidisk and Watford had done as add-ons, and moved the 20K graphics screen and some of the low memory pages used by the sound, floppy disk, and other queues into "shadow" memory, bank-switched into the address space normally occupied by the ROM and OS. This allowed the low 32KB above 0x700 (I believe; it was 0xE00 on a normal model B without additional filesystems) to be used for programs. The Master 128 took this even further by adding bank-switched ROM images as standard. "Shadow" screen memory generally broke programs that directly manipulated the screen bitmap without using the OS.
Of course, if you wanted the full 64K of memory, then you could have bought the 6502 Second Processor, which not only gave you a lot of mode-independent memory, but ran at a screaming 3MHz. Playing Elite on a BEEB with a second processor and a Bit-Stick attached gave you smoother full mode-1 four colour graphics (without the screen-tearing divide between the two colour mode 4 and four colour mode 5, something the Electron version could not do because it was missing the interrupt timer used to switch modes at the appropriate position), but also gave you incredible control of the ship!
I always thought that the 64KB claim of the Commodore 64 was a swizz, because the OS and Basic ROMs sat over part of the RAM in the memory map, effectively leaving you with only about 38KB (if I remember properly) for any programs, and it did not have the high resolution modes (640 pixels wide) that allowed you to do 80 column text, which enabled us to use the BBC as a terminal to the minicomputers at the Polytechnic where I worked at the time. On the C64, you could use something approaching the full 64KB, but only if you wrote the whole thing in machine code and disabled Basic.
The BBC Micro also had a ULA, so Acorn were not treading new ground. It appeared to be a troublesome technology, because as far as I am aware, everybody who used them had production problems.
The ULA on my issue 3 BEEB always overheated on warm days (cue the freezer spray), and I noted that on issue 4 and onward, passive heat-sinks started appearing on both the ULA and the Teletext chip.
It seems strange nowadays to have a system that did not have a single fan in the case, and as a consequence would have been silent if it had not been for the incessant buzz of crosstalk interference from the speaker. I suppose the silent end of computing has gone to tablets. At least they owe a debt to these machines.
The point here is that the subsidiary is subject to the law of the country it is established in, as are all of the employees working for the subsid (even those who may be US citizens, while they are in the country). The US owners may be free from the effects of the local law, but the local employees certainly aren't.
If this wasn't the case, companies like IBM and HP would never get UK Government contracts with organisations like the MoD or GCHQ, yet they obviously do.
Having worked on Government projects with personal data involved, I know that data security is drummed into the local workers, as is the fact that they personally are liable to prosecution should data in their control be leaked. I'm sure as hell I would prefer the wrath of an employer rather than a jail sentence if I were asked to copy data across a national border.
And vacuum and injection moulding machines as well?
It's well within the realm of your average senior school metal/woodworking/craft shop to fabricate some very convincing and well made devices without using a 3D printer. It's just a bit slower, needs some reasonable skill, and you can't distribute the model data over the Internet as data. But you can still send the dimensions and blueprints.
The way that 40 bit addressing works on a 32 bit ARM is by the use of segment registers, allowing the virtual address space for a process to be offset into more than 4GB of memory. It's not new technology, and has been a cornerstone of processor instruction sets since the mid-1970s.
The first architecture I saw address extension done on was the 16-bit PDP-11, which had its address space stretched from 16 to 18 and then to 22 bits in different models. I do not know the ins and outs of Intel's PAE, but I suspect that it is something similar. The Power processor family also does something similar for its virtual address space, although it does not need it to stretch the address space. Most other modern processors (those designed in the last 30 years) do something similar to support virtual addressing (but not necessarily for address extension).
The basic method involves breaking up the virtual address space into chunks called segments, and then adding a real-address offset (normally designated as a page number) to the base address in the address decoding hardware. This allows a process to see a linear address range scattered over a larger, possibly non-contiguous, physical address space. The impact on the code-writer is ZERO. There is nothing that needs to be done for a user-land process to cope with this technique. All multi-tasking OSes have done this for what seems like forever.
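A toy illustration of the arithmetic (deliberately not ARM's actual LPAE descriptor format; the field widths and names are invented for the example) might look like this:

```c
/* Toy model of segment-based address extension: a 32-bit virtual
 * address is split into a segment index and an offset, and a table
 * (maintained by the "OS") supplies a wider physical base for each
 * segment. SEG_BITS, segment_base[] and virt_to_phys() are invented
 * names for illustration only. */
#include <stdint.h>

#define SEG_BITS 8                          /* 256 segments per process */
#define OFF_BITS (32 - SEG_BITS)            /* 16 MiB per segment       */

static uint64_t segment_base[1u << SEG_BITS];  /* filled in by the OS   */

uint64_t virt_to_phys(uint32_t vaddr)
{
    uint32_t seg = vaddr >> OFF_BITS;               /* which segment      */
    uint32_t off = vaddr & ((1u << OFF_BITS) - 1);  /* offset within it   */
    return segment_base[seg] + off;                 /* can exceed 32 bits */
}
```

The process still sees a flat 32-bit address space; only the table entries, which the hardware adds in, point into the larger physical memory.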
It does make the OS do a bit more work every time you start or context switch a process (it has in some way to manipulate the segment registers; it's different in different architectures), but it's well understood what needs to be done, and has been a standard technique. And it is perfectly possible to write the OS itself to work in a virtual linear address space (an example was the 32-bit AIX kernel running on 64-bit RS64 and later Power processors), where the OS is in control of manipulating the segment registers for itself, as well as for all of the other processes. The 32-bit kernel could manage 64-bit processes, with more than 4GB of real memory on the system, which when I explained it used to puzzle people for whom the 32-bit to 64-bit migration in Windows seemed like a huge deal.
The major limitation to this is that although the system may have more memory than a single address can cover, it can only be used in chunks determined by the width of an address. So, for example, an individual process on an ARMv7 with 40 bit LPAE can only address 4GB of the address space, even though the architecture will support 1TB of real memory. But of course, you can have more than one process, allowing you to utilise all the available memory. And as a side effect, you have the ability to share pages across multiple processes for in-core shared libraries, shared memory segments, and memory-mapped files.
This is not even a problem for the OS, because all the writers have to do is to keep at least one segment free, and then manipulate the segment register to allow the OS to see any of the real memory. Of course, it can't see all of memory at the same time, but it can get access to any of the memory.
The issue of whether 64 bit addresses will add any more inefficiency over 32 bit addresses is all to do with whether half-word aligned loads and stores can be done natively. On some architectures, performing a half-word operation (for example a 32 bit load or store on a 64 bit machine) requires loading an entire 64 bit word, and then masking and shifting the required part of the word to obtain the correct half-word value. This may be microcoded, but on some architectures has to be done by the program itself. This is slower, and on some architectures, the decision about whether to 'waste' 32 bits of memory versus the performance cost of half-word operations was a difficult one.
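Something like this (a deliberately naive C illustration, not any particular architecture's microcode) is what that mask-and-shift dance amounts to:

```c
/* Illustration of emulating a 32-bit load on a machine that can only
 * load full 64-bit doublewords: fetch the containing doubleword, then
 * shift and mask out the wanted half. Assumes a little-endian layout
 * and a 4-byte-aligned offset; load32_via_64() is an invented name. */
#include <stddef.h>
#include <stdint.h>

uint32_t load32_via_64(const uint64_t *mem, size_t byte_offset)
{
    uint64_t dword = mem[byte_offset / 8];       /* full 64-bit load       */
    unsigned shift = (byte_offset % 8) * 8;      /* bit position of half   */
    return (uint32_t)((dword >> shift) & 0xFFFFFFFFu);  /* mask and shift  */
}
```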
I would have to research the ARMv7 and ARMv8 ISA to know whether this is the case, although I would welcome someone in the know to provide an answer.
Whether floating point load or store operations can be done in units other than the word-length is different from architecture to architecture. For example in Power 6, it was necessary to load a floating point value through a GP register (or two in the case of a double-word FP value), and then move it to a floating point register. For Power6+ and Power7, it is possible to directly load from memory to a floating-point register, allowing you to do double-word FP loads (128 bits) in a single load operation. This decouples the FP processor from the natural word size of the CPU.
Cadbury used to produce a bar called "Bar 6", which was a similar confection but with 6 "bars" rather than fingers. Terry's also produced a two-fingered chocolate-covered wafer bar called Riva.
There have also been numerous supermarket lookalikes for ages, of both the 2 and 4 fingered variety.
I was sad when the writing on the top of each finger changed from Rowntree's to KitKat, although recently I was happy that Cadbury returned the Chocolate Cream confection to the Fry's banner again. Just waiting for the same to happen to the Crunchie.
Was the recent limited edition 5 fingered KitKat an attempt at a trademark landgrab, I wonder?
Tell me, how do you attach a ST-506 or ST-412 drive to a modern machine? You can't even plug in the ISA controller card into any machine built in the last 10 years or so.
I mean, even EIDE and SCSI are disappearing rapidly.
So you are saying that the physics of aerodynamic flight is not natural?
If you ignore the lack of a vertical tail fin, the configuration looks uncannily like Zero X, or maybe a little like Fireball XL5.
Were the model designers at APF prescient, or knowledgeable beyond their time, or is there some plagiarism involved? Or maybe there is just a natural way of doing these things.
Just put a boundary network device with a sniffer, something like a Linux firewall. Allows you to record all IP traffic flowing through it using something like tcpdump. Doesn't need any antenna.
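If you don't fancy leaving tcpdump running by hand, the same job can be done in a few lines of libpcap; this is only a rough sketch (the interface and file names are just examples, and error reporting is omitted):

```c
/* Rough libpcap sketch of the boundary-box sniffer: capture every
 * packet crossing one interface and write it to a pcap file, much as
 * `tcpdump -i eth0 -w lan.pcap` would. "eth0" and "lan.pcap" are
 * examples only; run it with enough privilege to open the interface. */
#include <pcap/pcap.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* 65535-byte snaplen, promiscuous mode, 1 second read timeout */
    pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (handle == NULL)
        return 1;

    pcap_dumper_t *dump = pcap_dump_open(handle, "lan.pcap");
    if (dump == NULL)
        return 1;

    /* -1 == keep capturing until interrupted; pcap_dump() writes each
     * packet to the file as it arrives. */
    pcap_loop(handle, -1, pcap_dump, (unsigned char *)dump);

    pcap_dump_close(dump);
    pcap_close(handle);
    return 0;
}
```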
I kept reminding my kids that I could, if I wanted, see the URL of all web pages they visited and a lot of their other traffic over the household LAN (can't do much about 3G, but that's another story). Made them much more 'net aware. I know that they could obscure the data using encryption or a VPN, but this would actually achieve part of what I want them to do, and that is to understand what it is to be on-line safely.
For the record, although a lot of the URL and connection data is kept on the firewall for months, I've never felt it necessary to snoop on them (although I have used the data to prove I could, and also to resolve bandwidth contention between them). It's amazing what being open about what you could do can achieve, without actually doing it.
My phones all filter down to the kids as I upgrade, although I gave each of the kids their first phone at whatever age they started spending significant non-school time out of the house, normally their early-to-mid teens. The first phone was always a low-end, low-value one, often a hand-me-down on PAYG, given as a means of calling home, never as a means of keeping tabs on them.
Currently my daughter is waiting for me to replace my Sony Xperia Neo so she can have it, and her Samsung Galaxy (one of the low end ones which was my first Android phone) will then go down to my youngest son, which will replace his Nokia clamshell. This will mean that everybody in the household except my wife will have smartphones, and she just doesn't want a mobile at all.
Eventually, the low end phones just end up sitting in a drawer as 'spares' (like my old Nokia 7110, now really only kept as a curio). The exception is my Palm Treo 650, which I am keeping as my active spare (with a PAYG SIM in it), because it's not that desirable to anybody who was not a Palm user, and I like it too much.
The only real thing that bugs me is how soon service-provider-locked Android phones stop being updated by the service providers. My oldest son noticed this, and as a result always buys SIM-free phones (he's old enough to have his own money to spend) that get updates direct from the phone manufacturer or Google, without waiting to see whether the service provider is prepared to package the updates. Maybe SPs should be forced to admit that they will never update old phones, and allow them to be un-branded so stock ROMs can be installed on the phones without hacking them.
Maybe I have a pedantic mind, but when I read Monty's comment, I immediately thought US protectionism, and had to read it carefully in order to get any other meaning. So, no, I don't think it was obvious what he meant, let alone what he implied.
The full context in the original comment is "and this could have been a real kick in the nuts for Apple that possibly could have costs jobs and affected real people"
There is an implication here that the subject of the potential kick would be Apple, and by association that the jobs and real people that would be affected would similarly be associated with Apple. I agree that this could be the S. Korean and Chinese workers, but any displacement of product would probably have meant that another brand made in South East Asia benefited, possibly more than if the Apple product had been sold. So maybe a blow to some workers, but a benefit to others.
I don't feel at all guilty not worrying about US jobs at the moment, as I believe that most US based multinationals are currently screwing over their non-US subsidiaries for jobs and profit, and I'm not in the US.
Is the implication of your statement "affected real people" that Samsung employees in S. Korea or even China don't count as "real people"?
...tend to be informed and technoliterate.
Older members of my extended family still watch mostly the first 5 channels of Freeview, because that is what they know, and they know where they are with "1", "2", "3", "4", and "5" on the remote. Channels under 10 do have a real premium when it comes to people who are used to press just one button per channel.
I've tried and tried to make them more aware of the +1 variants of ITV and Channel 5, to no avail. I just have to assume that they are too set in their ways to change, or maybe that they cannot read the programme guide on the screen!
It's a while since I did any studying of power factor, but it is quite clear from the reading I've been doing over the last few weeks that the whole power factor issue is much more complex now than it used to be.
Back in the days of inductive loads, the power factor was mainly due to a phase shift caused by the load (and in fact, devices that use significant amounts of power nowadays have to have additional components to bring the power factor back up close to unity before they can get a CE mark in Europe).
But since such simple times, the increased use of switched-mode power supplies, used because they are much smaller and more efficient, has led to the current waveform being not only phase-shifted, but distorted so that it no longer resembles a sine wave. I still cannot get my head around what is needed to work out the real power use in this case. I'm sure it is all factored in, but without further research, it's beyond me.
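As far as I can recall, the textbook definition still holds even for distorted waveforms: real power is the average of the instantaneous product of voltage and current over a mains cycle, and the power factor is that divided by the product of the RMS values. It's applying this to a mangled waveform with a cheap meter that defeats me:

$$P = \frac{1}{T}\int_0^T v(t)\,i(t)\,dt, \qquad \mathrm{PF} = \frac{P}{V_{\mathrm{rms}}\, I_{\mathrm{rms}}}$$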
All I know is that the two clamp-on power meters I have rarely agree on how much power is being used in the house, but they still are a good indicator of when the consumption goes up and down.
Fortunately, the smart meter will not be able to power individual appliances down until either smart sockets are installed, or the appliances start implementing remote control. Both features are in the pipeline, but not generally here yet.
As I understand it, when you do get remote control from the meter, you will be able to assign certain devices (like fridges or freezers) a higher priority, so that other devices will be powered down first.
The savings I would get would be minimal, because I am already using a power consumption monitor on the house as a whole, and a plug in consumption meter to measure the power of individual devices.
"Never underestimate the bandwidth of a truck full of tapes".
Over the net is fine as far as it goes, but it does not have to be the only mechanism used. That's why most large datacentres use tape with offsite storage pools for their DR plan.
The CentOS version problem and not storing the VM definitions in both sites should not have happened, but I would not beat yourself up over the sendmail config.
Sometimes it is not enough to do a restoration test. For some services, it's necessary to actually run for a period of time in your alternate location. I suspect that any number of 99% tested DR plans harbour something like your sendmail problem.
This is normally because of the high cost of a full DR test. As a result, 5 minutes after the last DR test has been concluded successfully, an apparently minor change somewhere in the depths of the environment may invalidate it!
Of course, if you do run from your alternate location for enough time to make sure that you've got most of the bugs, it introduces another problem, that of fail-back. This is something that many, many administrators just do not think about. If you run from your alternate location for any length of time (to rattle any connectivity problems out), you have to have a procedure to revert back to your primary site. And it's not always a reverse of the DR plans, because these are often asymmetric.
The background to this is that most businesses don't think beyond restoring the service. One bank I worked for acknowledged (or at least their DR architect did) that it would be almost impossible to revert back to the primary site if they invoked their full site disaster plan for their main data centre. The services would be back up, but vulnerable to another failure.
Is this because they stuff everything else they keep in their pockets into the one they don't put the iPhone in, just so that it doesn't scratch or mark the phone?
I read the first sentence, and was preparing to flame, until I realised that you were being ironic!
You may log on to a system, but there is a HUGE difference between a system and the network, and I say again that if you do not understand the difference, you should not be commenting on stories like this.
You really don't log in to a home network, not unless you have implemented domain level accounts and an authentication server, in which case you are really logging into the domain. I strongly suspect that you haven't, although I do admit the possibility.
On all Windows systems I've administered outside a company environment, the network settings are set up on a per-system basis, not a per-account basis. This means that once you are logged in to a system with any account, all network access is the same. And it is normally not possible for a web site to know what user account is in use on a particular PC (that's why they go to so much trouble putting cookies in your cache, so they can track who you are). So to the ISP's web site that the popup comes from, there is no way of knowing whether the account is Tarquin's or Dad's. That level of information is just not available to the web site.
What the ISPs may end up doing is directing you to a site where you have to log in using credentials that were created when the broadband account was set up. This would do what they need, but would render the entire home network unusable until the account owner was available. And I suspect that many users (like me) do not use that account, so may not remember the user id and password for that site.
I suspect that I have been locking down my Windows PCs so that most users are not using Admin for longer than you. My background is 30+ years of administering UNIX systems, so privilege separation is ingrained in my psyche, and I learned how to do it for my PCs (together with a mechanism for relaxing it for those STUPID programs that need admin rights) almost as soon as I got an NT-based system in the house, which was after I started putting Linux on all my PCs.
I was not advocating it. I was just suggesting it as an alternative to DPI or a simple DNS lookup which are either too complex or too naive to be considered.
And as I said, I am not claiming to be any wizard, although I do believe I have a working knowledge of DNS and IP. I'm sure that the ISPs will do something much more complex.
I understand about shared servers serving many sites. I must admit that I had not fully considered this while drinking my tea, but were I really designing this, I would have spotted it, I'm fairly certain. But the majority of site visits are probably to servers that do not serve more than one service (Google, Facebook, YouTube, Ebay, Twitter, the TV channels), or if they do, the sites are closely related, so it would work for a sizeable proportion of users.
Anyway, my point was that it does not have to be DPI, and in fact DPI is probably exactly the wrong way of trying to block porn, as you would have to assemble a complete picture or frame of a video, and then subject it to image analysis to try to determine what the image was. This is clearly more than the ISPs will be prepared to do.
It does not have to be DPI. All they have to do is reverse lookup the IP addresses of the initial TCP session setup packets, then see whether the name or domain is on the blacklist. For UDP services (which do not include web browsing) you may need to look up every packet.
And if the lookup does not return a FQDN at all, then they block it anyway as a precaution. It could be a dark network!
This gets around all of the alternate DNS workarounds, but would not stop proxies via systems that are not blacklisted.
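As a back-of-an-envelope sketch of the idea (the blacklist contents, the function name, and the simple suffix match are all just for illustration; a real ISP box would cache lookups and work on intercepted SYNs rather than a connected socket):

```c
/* Sketch of "reverse-lookup the peer, then check a blacklist".
 * should_block() and the blacklist entries are invented for this
 * example; getnameinfo() does the PTR lookup. */
#include <string.h>
#include <sys/socket.h>
#include <netdb.h>

static const char *blacklist[] = { "example-blocked-site.com", NULL };

int should_block(const struct sockaddr *peer, socklen_t peerlen)
{
    char host[NI_MAXHOST];

    /* No reverse DNS at all? Block as a precaution, as suggested above. */
    if (getnameinfo(peer, peerlen, host, sizeof host,
                    NULL, 0, NI_NAMEREQD) != 0)
        return 1;

    /* Block if the resolved name ends with a blacklisted domain. */
    for (int i = 0; blacklist[i] != NULL; i++) {
        size_t hl = strlen(host), bl = strlen(blacklist[i]);
        if (hl >= bl && strcmp(host + hl - bl, blacklist[i]) == 0)
            return 1;
    }
    return 0;
}
```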
I've thought this up over a cup of tea. I'm sure that people much better than I can think of even better ways of implementing this!