Re: Here's one someone made earlier, out of Lego
2924 posts • joined 15 Jun 2007
All you need is some way of changing the Strowger selector to a particular position, and then rotating the split-flap character to the new character.
I'm sure that there is a 'return to space' operation that can be applied to all character positions at the same time to clear the display.
I thought this would be difficult, until I realised that you could use a pulse-coded dial (like an old-fashioned telephone dial), linked up to something similar to a Strowger exchange and a Solari split-flap display.
I know I'm cheating a bit using pulse-encoders and electric motors, but I'm sure that you could use ratchets, rotating shafts and slip-clutches everywhere that a modern display uses electric motors.
If we take 7-bit ASCII as the character set, that would mean 96 different displayable characters, which include all upper and lower case English characters and numbers plus sufficient punctuation. This could be encoded using a 32-place dial like a rotary telephone dial, together with two shift keys shifting to different ratchets to generate upper and lower case, numbers and punctuation. These work well with Strowger-type gear, and all you would need to do would be to pulse each successive position in the split-flap.
The difference in the sandbox approach is that it denies access to resources by checking what they are doing at the API boundary of the sandbox, rather than allowing the underlying OS to control access.
Any suitably designed OS should already have controls to contain rogue actions (like the permissions system on the filesystem and IPC resources, and Role Based Access Control), and many do. But things like Windows up to XP, whilst they had the underlying technology, were so compromised by the way the systems were implemented (users running as an Administrator by default, and too many critical directories granting write access to non-administrator accounts) that it became necessary to add the extra 'sandbox' to protect the OS!
Unfortunately, the way that OSX deploys applications is fundamentally flawed (they've added an application deployment framework into user-space so that you don't need to be root to install an application, or at least it was this way the last time I looked at OSX), and this unfortunately opens it up to applications being altered by other applications without requiring additional privilege. The OS remains protected, but the applications are vulnerable. This is the reason for implementing a sandbox.
Anyway, sandboxes are not new. On UNIX systems since seemingly forever (certainly since Version 7 in 1978), you've had chrooted environments that you can use to fence particular processes into controlled sub-sets of the filesystem.
I think it is that when Orange and T-Mobile merged, they spun off the business of operating the infrastructure to a separate legal entity that is EE. Orange and T-Mobile manage the customers, and 'rent' access to the infrastructure from EE.
It is not clear to me whether Orange/T-Mobile own EE, whether EE is now also a holding company with Orange and T-Mobile as subsidiaries, or whether they are completely separate companies.
I'm sure that it makes sense to somebody, but I'll bet there's some international shenanigans about where the profit is declared!
At the time, the 480Z with the network and file/print server option appeared good (though expensive), because it ran CP/NOS (a network-capable CP/M-compatible OS; CP/M was the industry standard when the 480Z came out), and allowed files to be stored centrally so students did not need personal media or to work on a specific machine all the time.
Unfortunately, CP/M completely dropped out of favour when the IBM PC was launched.
I actually preferred the BBC Micro with Econet and an Econet Level 2 hard-disk server. Back in about 1983, the Polytechnic I worked at built 2 similar computing labs, one by the Computer Unit, and one by the academic Computing School. Both had similar budgets, and both installed 16 seats, networked with a file-server and printer.
The 480Z lab (Computer Unit) had 16 computers, with screens, a fileserver and printer, and a basic productivity package. And that was pretty much it.
The BBC Micro lab (mine) had 16 computers with screens, a fileserver and printer. It also had basic productivity packages, but it also had light pens for all computers, and a selection of other hardware items including CAD software and hardware (BitStik and 2 different digitizers), teletext and speech synthesis hardware, speech recognition hardware, 2 types of digital camera, robot arms, touch screens, and a pen-plotter. And on the software side, it had a full ISO Pascal compiler for all of the systems, together with a selection of other languages including Forth and Lisp.
My BBC lab was built to teach people who did not know what a computer was the vast range of things they were capable of, in an affordable way. It could also be used to teach the computing students programming and networking (sticking an oscilloscope onto the Econet was a great way of demonstrating what a network was), and it was great fun building it.
Good luck with your promotion prospects! It seems at the moment that only those who are prepared to go the extra yard are even considered for advancements.
Whilst I agree that it should not be this way, I am increasingly upset by the divisive nature of the appraisal system that most companies use now to measure performance. It now appears to be used as a tool to get high-skilled, expensive people out of the door because of arbitrary 'poor performance', rather than a mechanism to reward people good at their jobs.
"like Microsoft Office ...... allowing people to work at much faster speeds"
I find Microsoft Office has always slowed me down compared to other, pre-WYSIWYG tools! The use of inappropriate tools (like Excel rather than a database for storing and parsing data, or a word processor rather than a proper document preparation system for technical reports) is IMHO one of the biggest productivity blocks around!
This bothered me. I have an original EeePC 701 4G, ~700MHz single core Celeron. It's slow, but not remarkably slow. It runs Linux at least as well as low end Pentium 4 systems with the same OS, and can do basic browsing (with FlashBlock and NoScript), play streaming and local media well enough to be useful even now (although 4GB is a major hurdle).
I've played with Intel Atom netbooks, and they are not hugely better, although the battery life is generally longer.
I'm sure that using benchmark comparisons, the difference is evident, but from a usability basis, doing what netbooks were originally designed to do, the original concept was sound (only damaged by the stupid Linux distributions that were originally shipped, allowing Microsoft a chance to ruin [IMHO] the concept with a cut-down WinXP).
So I wonder whether the Atom based devices in the £400/$500 mark were actually a real improvement, or more a means of the manufacturers being able to sell more expensive kit.
OK, so if a device has no user recovery path, but the manufacturer can re-flash the internal memory using the initial manufacturing process, is that not bricked?
Or how about they can rescue your data, and replace the main system board, but use the same case, serial number etc. Is that not bricked?
I agree with the sentiment of what you are saying, and in this case the term is not justified, but I don't believe that there is really any hard-and-fast definition of 'bricked'; whether a device is bricked or not depends on the device state and the resources available to the user.
If you were on a desert island, with a satellite Internet feed and no other computing devices, unpacking a zip file onto a USB stick to boot from would not be possible, so this would effectively be a bricked device.
The answer to that question should be easy, but try finding out from the www.ubuntu.com web site.
I was watching the Nixie Pixel channel on YouTube (no, not just for the eye-candy factor, she has an interesting perspective on OpenSource as well as other notable points... ummm, oops - maybe it is the eye-candy after all), and one of her recent videos pointed out that "Linux" has been expunged from the Ubuntu website, at least from all of the first level pages and those they link to.
I've said it here before, and I'll say it again, I think that Shuttleworth is trying to set Ubuntu up as a competitor to OSX, something based on open source, but differentiated and at a distance from Linux. If this turns out to be true, and Ubuntu gets divorced from the perception that it is a Linux distro, then I hope that he is prepared for many of the long-term Ubuntu advocates jumping ship.
My wife was looking at self-publishing on Kindle, and all of the tutorials assume that you use Microsoft Word as one of the last steps in the formatting process (just prior to submitting the resultant file for final conversion by Amazon).
When I pointed out that LibreOffice could write .doc and .docx files, she showed me various reports that somehow the resultant files would be unreadable on a Kindle, all written by people who, from what I read, appeared to know very little about fonts, typefaces and styles, so I don't totally believe that you can't use other tools. Seems like there is either some (unspecified) fundamental lock-in, or a lot of misinformation washing around, which creates a feeling of FUD amongst writers.
Eventually I ended up putting one of our limited number of Home and Student 2007 licences onto her machine (a licence originally obtained to satisfy our local secondary school, for whom using Impress to create assessed presentations would result in the work not being marked!) just to stop her complaining.
Maybe I ought to write something myself and see what the process really is. If I can get it working and document it, it may enable self-published ebook writers to escape from the Microsoft hegemony.
Yes, but not sleeping in the same bed means that you effectively have to make an appointment for sex. It also removes the opportunity for a quickie when neither of you can sleep, or when you both wake up early.
I certainly don't get nearly as much since my wife decided she could not stand being disturbed by my early starts to get to work. At least that's the reason she gave for wanting to sleep in another room....
There are lots of TV channels that offer live streams of their broadcast programs from their web sites. This includes the BBC.
"Live TV" means that it is possible to watch it over the Internet while it is still being broadcast. An example of "Live" would be a 30 minute sit-com that is available over the internet 20 minutes after it starts on a broadcast medium (overlapping by 10 minutes). For things like football matches, this means that it must not be available on-line before the broadcast program finishes, i.e. at least 1:45 (and maybe longer) after the match starts.
The distinction about whether you watch live TV online or not is very blurred. iPlayer, for example, allows both archived (catch-up) and live viewing, so how do you prove that your use of iPlayer is only for catch-up? It all looks a bit murky to me.
I wonder if the TV licensing people are able to get a court order to request your browsing history from your ISP?
I still wonder whether an analogue TV without a digital tuner still counts as TV receiving equipment. In my view, it shouldn't.
Are you sure about "you can generally interface a disk drive to a machine for a long long time"? Have you tried to connect a SCSI disk (anything earlier than UltraSCSI, but try a SCSI-1 SE disk for a real challenge) to a modern server? And older interfaces like ESDI, ST506, MASSBUS or ESMD are long dead.
Even more modern technologies like IBM SSA are now dead. IDE and EIDE interfaces no longer appear on modern motherboards. Even when older HBAs are still available, they will be PCI adapters, and these are being eliminated in newer systems.
I believe that the SAS technologies are expected to be N+2 compatible, i.e. a first generation SAS disk will work with a SAS 3 adapter, but there is no guarantee that it will work with later adapters. Given the speed of evolution of such things, I expect disks to remain portable to current machines for 5 or so years after manufacture; after that, you will have to rely on legacy hardware to be able to read them. That does not encourage me to use disk as a long-term archive medium.
This is partly by design, as the disk and system manufacturers want to continue selling systems, and they have built-in obsolescence.
My personal thought is that you would have more success reading a 2400' 1/2" NRZI mag. tape at 800bpi recorded using ar or tar from 30 years ago than you would an early SCSI disk, especially if it came from Netware, VAX, PrimeOS or another proprietary OS.
Has anybody published figures about how long an SSD will hold data in a cold state? I get the impression that most rely on the wear leveling capability of the drives to re-compute damaged data from checksum information during the re-write of the data. And this requires the drive to be powered.
I'm really not too confident about putting a flash-memory (or any other static electronic) device on the shelf, coming back to it in a few years' time and expecting the data to still be there. I would be confident with a tape.
Memory technologies are moving along so fast that there is no chance to time-test any of them before they are obsolete. And accelerated environment testing that vendors claim to have done is not really a good indication of the data retention capability of a medium.
But then I would probably recommend carving the data on granite tablets if you want it to last millennia.
If you remember, the swimming pool moved under the patio when TB1 took off or landed, so he did not have that option.
The torus shaped building was called 'the roundhouse'. Quite what you use such an unusually shaped building for, I don't know.
I'm sure that the few times they show the clip of TB3 landing, they've even managed to collect the smoke again. That's really environmentally aware!
It's the flame licking back up the lower part of the rocket that makes me wonder whether Derek Meddings was prescient or had a way of looking into the future (he was a model making technological genius after all).
I always thought 'it would never look like that' when I watched TB1 and TB2 land, but I was so obviously wrong!
Although I can say that I handed in my first year University computing project written using roff and printed on a DECprinter (a faster DECwriter II without a keyboard).
I once wrote a document handling system in make, troff, tbl, pic, grap, and to top it off, integrated SCCS to do the versioning, including the version history dynamically added to the document at extract time.
From your figures, it looks like the estate you are using is 3000 seats. So. $3,700,000/3000 gives us, um, $1,233 (rounded) per seat. You really think this is not a lot?
Even if you do have a payment plan (and I'm betting that Microsoft would prefer a subscription plan rather than a deferred payment plan), that is still loading the business with costs that they may not have if they opted to stick with XP.
And the majority of those costs are in license fees, which you may not have if you can find an open-source solution that is adequate.
You've also not factored in any testing, specific business-related software costs, or loss of productivity or training costs. If you are doing 3000 seats over a 6 month period, that's 500 a month, or about 25 a day (assuming that you're doing most of the estate during the working week). That's a tall order for 1-3 admins, even assuming you do across-the-network upgrades in place (which is disruptive to the users). Of course, if you have a homogeneous estate, you could do a replace, upgrade, replace rolling operation which is less disruptive to users, but you will need spare kit to do that, and will need the time to physically move the kit around.
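For what it's worth, the arithmetic above checks out. A quick back-of-envelope (assuming 3000 seats, $3.7M total, a 6 month rollout, and roughly 21 working days per month):

```python
# Back-of-envelope check of the rollout figures quoted above.
# Assumed inputs: 3000 seats, $3.7M total, 6 months, ~21 working days/month.
seats = 3000
total_cost = 3_700_000
per_seat = total_cost / seats    # cost per seat
per_month = seats / 6            # seats migrated per month
per_day = per_month / 21         # machines per working day
print(round(per_seat), int(per_month), round(per_day))   # 1233 500 24
```

Roughly $1,233 per seat and 24-25 machines a working day, which is the pace a small admin team would have to sustain for six months straight.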
Your earlier comment about a dual-core system with 4GB of memory is interesting. I'm sure that many, many business users of XP will have the majority of their estate on P4 systems with <2GB of memory. Places like call-centres do not regularly replace working systems, and the demands of filling in screen forms are such that you don't need much oomph.
For those users, dropping new kit in may not only be essential, but possibly cheaper as well.
These are all pretty old devices. I have a DI-524 on my network (and a DI-604 lying around somewhere spare), and they both have firmware issues that really mean that anybody still using them must have a masochistic streak, or not care (which may include the majority of users, unfortunately).
There has not been a firmware update for something like 8 or 9 years, and it is not possible to set the date on them (either manually or by pointing it at an SNTP server) to any date after December 2008 if I remember correctly. I would expect that most people would have tossed theirs whenever they updated their broadband package.
Just in case anybody was tempted to try hacking into mine, I'm not using the WAN side at all, merely using it as a WiFi router on one of my wireless zones behind my Linux firewall.
Disks have more moving parts than tape, and tape has no electronics at all (with one or two exceptions), so tapes should be more durable. True, you do get tape failures, with the tape sometimes breaking, but as you say, you have at least two copies, as you would know if you worked with large backup/archive/DR systems, as some of us do. Tape is also more portable, and probably protects the data better during transport. Removable disk, whilst possible, is probably more complex to manage. There have been attempts at disk libraries in the past, and I believe that the encasing of the disks and the connectors proved problematic.
Don't get me wrong, there is a place for data replication on disk, and systems like HPSS and TSM can use both disk and tape in a hierarchy to achieve high capacity as well as good speed for recently copied data. But I just don't see any serious manager responsible for large amounts of data drinking the disk Kool-Aid and ditching tape any time soon.
I see the biggest problem as connecting the thousands of drives you need to be able to compete with large tape libraries.
Even if you use SAS with multiple levels of expander, the limiting factor will be the number of SAS adapters in your controlling system(s).
With the tape archive and backup systems I have seen and used, it is possible to have multi-petabyte storage libraries with thousands of tapes controlled by a couple of systems (more than one for resilience). I'm currently working on an installation where three systems connected to a library back up ~10TB of data per night.
I also use a high-density disk farm, with ~4000 disks controlled by 20 systems via SAS, and it is the most troublesome component in the environment; but those disks are always on, and are configured for speed rather than capacity (although they deliver that as well). I will say that we get more trouble when the disks spin down and back up, so I would worry that you could have problems with not knowing about drive failures until it's too late, even using RAID. You would probably want to spin the disks up on a regular basis to detect which have failed, so that you can replace them.
Of course there are trade-offs. The speed at which you can restore the data is dependent on the number of tape drives you can access concurrently; for disk it will be greater with multiple systems each holding many disks, but that would be more expensive.
Even when I have put exactly the right money into the coin sorter, or posted a note for more than the requested amount.
The Morrisons fast-pay tills appear to take about 30 seconds to count the coins you've inserted, whilst asking you for more every 10 seconds. All the time, the rest of the people behind you are urging you to put more in to get out of their way.
I'll swear that they employ more people sorting these things out than they've saved in shutting down the normal tills.
It depends on whether you put on all the essential add-ons, like service packs and antivirus.
Take a 2002 XP retail install CD. Install it on anything with at least a PII processor, and look at the memory footprint. It's tiny. Now add SP1, SP2 and SP3 in sequence, and note the memory footprint after each update. It goes up significantly each time. Now put an anti-virus package on.
You've now got to the point where 512MB of memory will allow the system to boot, but not actually run anything at a reasonable speed. If I was cynical, I would say that MS have been deliberately bloating XP in the later service packs to try to get users to upgrade their systems (most people don't know how easy it is to upgrade memory), which will normally equate to a new system with a new install of a recent version of Windows that they can count as a sale.
I still think carefully before putting SP3 on the few XP systems I have, because of the detrimental effect on the performance of the systems (but I have double NAT and a hardware firewall in the environment that the systems sit on, and have told everybody to not use IE to browse). But somehow MS managed to get autoupdate turned on on my father's laptop, so he's ended up with SP3 anyway! His Thinkpad T42 with 512MB of memory is now crippled because of lack of memory but he's continuing to use it, because at 84, he does not think that investing £300 for a new system is justified. (note to self - I must find him a memory upgrade, or a second-hand Win7 system sometime. 2nd note - Must replace my T30 as well).
My first XP system from 2002 had 128MB of memory, and worked well enough at the time with Office 97, Netscape and Lotus SmartSuite, as well as running Counter Strike and other contemporary games quite well.
I would suspect that you have an X Window System client program that is warping the cursor back into its window every time it detects that it is going outside. As for the buttons, it could either be changing the button mappings, or re-parenting the root window to trap the button events. These would not be problems with the X11 server; more likely a poorly written client using the server incorrectly.
I actually wonder whether you are running an older ATI Radeon card (in something like a Thinkpad T30 or T43). If that is the case, it could actually be that you are suffering from a problem with the display driver not being reset correctly after suspend/resume. If so, then disabling KMS may be the solution (google nomodeset or modeset=0). It's a pain, I admit, but the upstream developers regard R100/RV200 and R200 as obsolete nowadays, so are not interested in fixing the problems. The adoption of Unity and now Mir (especially if they completely ditch X11 compatibility) will be close to a deal-breaker for me and Ubuntu. The shiny is just not worth losing X11 for. At least Wayland has an X11 compatibility component.
I've got 12.04 running on my T30, and get strange things happening, but I am working with the pm-suspend tweaks to make the problem less intrusive.
Sure, outages happen. But good design should make such an outage relatively short. Keeping critical systems up even when things are failing should be part of the design and change process.
One day's down time for a regional health authority equates to hundreds of thousands of pounds of wasted resource, even more if you take into account the time of the people who attended their out-patients appointments without being seen.
I sometimes remember when I used to talk to a power engineer in a regional electricity distribution company, where five nines is a way of life, not an aspiration.
Ah, but if the problem is in-house, with your own employees, then you can at least bang some heads together and threaten to kick people out of the door.
If the problem is either in the Cloud, or with some outsourcing outfit, all you can do is threaten to not renew your contract in 4 years, by which time everybody will have forgotten!
Of course, if the contract was negotiated properly, you may get some financial recompense, but I don't trust any of our public organisations to be that good at getting such clauses into a contract....
"Does the EU think that will work over micro USB?"
Why not? My Sony Xperia SP has a single micro USB port that provides MHL output to drive a display over HDMI, the capability to charge, and also to drive a USB Keyboard and/or mouse. All at the same time.
Well. I doubt very strongly that there are any 6150/1 systems running in anger any more, or any PS/2's running AIX PS/2 or mainframes running AIX/ESA, so they really only need to worry about POWER boxes, and they started in 1990. That takes out anything before AIX 3.1.
And since then, the Micro Channel architecture AIX boxes can't run a supported version of AIX, nor can any 32-bit POWER systems or any with RS64 processors, so they really only have to worry about 64-bit CHRP systems that run AIX 6.1 and later. That means that they only have to worry about systems built since about 2000! And as the basic OS running under 6.1 and 7.1 is not really different under the covers, and basically is compile once/run anywhere, it's a much smaller target than Solaris or HP-UX.
So a bit of diversion going on here to detract from the fact that they really must be a bit thick!
I believe that the IBM Enhanced Keyboard was available for the PC/AT as an option from launch in 1984, before the PS/2. Wikipedia states that the manufacture of the Model M keyboard started in 1984.
I'm fairly certain that the first PC/ATs (early 6MHz models) bought by the Polytechnic I worked at in the 80's all came with UK model 102 key keyboards.
There is a trend in the UK for people to install under-floor heating now (either electric or powered by solar/ground heat carried by water), but the huge majority of UK properties use water filled radiators warmed by boilers running on natural gas, oil or solid fuel (wood or coal). Gas is commonly available as a mains service in the majority of the UK, and is cheaper per thermal unit than electricity.
I think that the reason hot-air heating is not used is mainly to do with the fact that it really requires the house to be built around the heating system, as it needs substantial air-ducts to carry the air around the house.
I did live in a house built in the '60s that used hot air as the principal heating, and I have to say there are significant drawbacks in my view. The heating was noisy at the best of times, and deafening when the fan motor bearings wore out (it was a Canadian system running on 120V, not the 230V the UK uses, which made procuring parts very difficult). Also, the kids used the air ducts as an intercom, shouting through them. You always had to be careful about noise in the bedroom, if you know what I mean, because of the way the ducts carried sound.
But the primary problem was that it didn't really heat the fabric of the house or the hot water. If you didn't have an entry lobby acting as an air-lock, it was quite normal for all of the heat to rush out whenever anyone came in or out of the house. And when you get a bird in the ducts, it's a real problem trying to get it out.
Anyway, it never really caught on in new build houses, and water filled radiators were easier to retro-fit into older houses, especially with micro-bore pipes.
Who said anything about IP? It's not the only network layer protocol around, although I admit it is the most common.
They may have a non-IP network that gets locally picked up via some innocuous device (hacked mobile phone maybe) and gated onto an IP network, or they might just have invented a non-IP peer-to-peer supernet mesh completely separate from The Internet.
Use some imagination, please!
Relax. It's AManfromMars 1. He/She/It is a Register regular, and very few of their comments make huge amounts of sense (although they just might!). Click on their name in the comment, and have a read.
Just don't try to understand it too much!
I really expect more from commentards who have been around here for a while!
I think the most common theory around here is that it is written in English, then translated into at least one (and possibly more than one) other language before being translated back into English.
"If you want a better phone for actually doing phoney things..."
...then my advice is to get a £25 Nokia or Samsung dumb phone. Tried and trusted technology, long standby and talk time, robust as hell, easy to use (as a phone), and it won't get you mugged or be particularly painful when you drop it down the loo or lose it when out on a bender.
Of course, if you want to run Angry Birds, then you really want more than a phone!
And passwords on privileged accounts every 30 days or less.
Not sure that many companies I've worked for actually abide by their own rules, however. For most companies, it looked like this was in the policies merely to satisfy an audit requirement.
Imagine having to change all the passwords on all your routers, intelligent switches, management consoles, data appliances - well anything that has a password that protects a configuration basically! I'm sure that most companies don't really know the scope of the problem.
Micrografx Picture Publisher used to be my go-to software for image manipulation on Windows (back when getting scanners working on Linux through the parallel port pre-USB was a real pain). They used to allow personal use of their old versions of software, and it often came as part of the software bundle that came with a scanner as well.
Although I don't have a lot of Windows boxes around now, those that are in my control generally have MPP installed on them still.
Well, they make white cider by passing diluted grain alcohol over the remains of the apple pulp after it's been squeezed dry of real juice. Probably imparts as much flavour as using the wood.
What the hell is going on here with the "Add an icon" tab? What was wrong with having the icons below the post! Has somebody at ElReg been drinking the Metro/Unity Kool-Aid?
Indeed, the sentence that reads "people are not willing to spend big money on mainframes – the Unix big guns" indicates that Dennis Yip has only a limited grasp of what he is talking about.
From a cursory examination, I would suggest that in his world there is just Wintel, and a single category of "everything else".
"Sounds like is more a matter of having to page different regions then and iterate over them. Shades of paged memory again"
Except that the remote machine cannot manipulate the mapping registers that are used to control the window. If you can't manipulate these, then you cannot see memory outside of the region. And even the process on the local machine, which is running in non-privileged mode, cannot alter the mapping registers. This is a function of the OS, and is controlled by system call, with appropriate security.
I must admit that if you think that paged memory is a bad thing, then you must realise that you are condemning all current machine implementations that use virtual address spaces. All virtual address space machines use memory-mapped pages to implement a linear virtual address space within a physical address space, and also allow memory overcommitment (i.e. using a paging space on disk).
There are NO architectures currently that I know about that run multiple processes in a single, strictly linear global address space shared with the OS. I don't know whether you work with some that I don't know about, or whether you don't realise how modern systems work.
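As a concrete (if simplified) illustration of the windowing idea, using nothing beyond the standard mmap interface: a process can map a page-aligned window into a larger object, anything outside that window is simply not addressable through the mapping, and widening it takes another trip through the OS.

```python
import mmap
import os
import tempfile

PAGE = mmap.ALLOCATIONGRANULARITY   # mapping offsets must be multiples of this

# Scratch file standing in for a larger region: page 0 is zeros,
# page 1 is 0xAA bytes.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\x00" * PAGE + b"\xaa" * PAGE)

# Map ONLY the second page. The OS sets up the page tables; the process
# cannot widen the window except by making another system call (a fresh
# mmap), which is exactly the control point discussed above.
with open(path, "rb") as f:
    window = mmap.mmap(f.fileno(), length=PAGE, offset=PAGE,
                       access=mmap.ACCESS_READ)

first_byte = window[0]        # 0xaa: first byte inside the window
try:
    window[PAGE]              # one byte past the mapping
    outside_ok = True
except IndexError:
    outside_ok = False        # outside the window is not addressable

window.close()
os.unlink(path)
print(hex(first_byte), outside_ok)
```

The same mechanism, driven from the other side of the link, is all a remote-window scheme needs; the unprivileged peer never touches the mapping registers directly.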