Re: Overseas firms
There's a YouTube video of one of their senior people explaining how it should be pronounced. I don't have the URL, but it is like you say.
Nowadays, charity shops in the UK normally won't accept electrical items, which means that it gets taken to the tip more often than not.
I'd love to know where to get a pair of Quad 405s in a charity shop or dumped on the street!
All my kit was bought new, with the exception of the NAD 7020, which I bought on eBay.
Pro-Ject Debut II, NAD 7020, Kesonic Kubs, JVC KD720 tape deck, and an anonymous Technics CD player whose model number I can't remember. All budget kit, but still quite acceptable. Mainly used to play vinyl.
(If I got a set of Quads, I'd definitely have to replace the amp!)
Hacking user quotas was old hat even in the '80s.
MTS (the Michigan Terminal System) running on IBM 370 mainframes had a bug: if you allocated temporary disk space for a session and then, instead of allowing it to be freed when you logged out, explicitly freed it yourself, the space would be added to your permanent disk quota.
I also found you could hijack unused accounts relatively easily (computing subsidiary students often left the course before ever logging on, and the admins did not delete the accounts until the end of the year). It was by doing this that I was able to spend enough time to map all the mazes and complete the original Colossal Cave Adventure. I think at one time I had my account, and control of three others.
I still wonder whether it was a coincidence that the day after I got 550/550 (we had the extended cavern), the game was blocked to students.
Ohhhh. That's sneaky. I hadn't spotted that!
As far as I am aware, for imported goods that carry the CE mark, it is the importer's responsibility to confirm that the CE branding is genuine. For major companies that directly import themselves, this means that if they sell something that they've imported, they could be on the hook for damage claims resulting from selling devices that fraudulently carry the CE mark.
This being the case, if you buy your cheap tat from a major retailer like a supermarket or DIY store, or direct from Amazon (i.e. not from one of their associate sellers), you probably can have some confidence that the CE branding is genuine (it's in their interest to make sure that this is the case, because they are financially liable).
If you import direct from China through eBay or another route, then it's YOUR responsibility to make sure that the CE marking is genuine. If you don't, then it's possible that you could invalidate any house or other insurances that may apply where it is operated, and you could be liable for damages if you supply the item on to someone else!
If you buy from an individual or a small seller who has imported the item and sold it through some market or other, then things are a bit more muddy, because although they are technically liable, the chances of them actually being held to account are fairly low, so they may not have confirmed the validity of any of the safety marks (it costs money!).
If you have information that the CE branding on any item is not correct, you should report the supplier to the relevant trading standards organisation.
So my advice would be that if you feel you have to buy cheap tat, get it from a supplier whose reputation would suffer if they did not do due diligence and check that the safety marks were real.
Sounds like a super-capacitor to me.
OK, I take the point that it has no GUI, but the whole point of this certification is that the devices will work out of the box without the admins having to understand how to actually configure and manage the devices.
At least in the past, when admins had to write their own 'scripts', they dug into the device to work out what was necessary. If the scripts are already written, they may never read the manuals!
Do other people worry like me, that tools like this relegate Sysadmins to GUI drivers, with no knowledge of how it all hangs together?
I'm not criticising tools like this, because they are necessary to run large environments like we have now, but making it easier also degrades the required skill. And my constant worry is that when it goes wrong, organisations who have de-skilled their sysadmins will no longer have the skills necessary to diagnose the problems.
Looks to me like we need to re-instate the System Programmer job discipline as the next tier up.
Of course, you could have just lied about the keyboard country. The scan codes for almost all US and European keyboards are the same, and based on the location of the key, not the engraving. Of course, some keys are missing (the key between Z and the left shift is a common problem), and the different shape of the Enter key means that some keys differ around there.
If you're running Linux, you should be able to do this without needing super-user access, as long as the keyboard locale definitions are installed, and even if they are not, you could probably override it using xmodmap (a real blast from the past!).
I'm not familiar with Arabic or far-eastern keyboards, but I know that IBM used to support 106/108 key keyboards with a shortened space bar and extra shift keys for Japanese (and I presume Chinese as well) keyboards.
Your own reference disputes what you are saying. From the referenced page:
"However, newer, partially or fully configured System z machines outperform Hercules by a wide margin"
It is quite clear that Hercules on a moderately powerful Intel based system can outperform a historical 360/370 architecture machine, but that is not a modern 64 bit zSeries system. IBM continues to persuade their customers that this is the case with worked case studies, and if you believe their 50th Anniversary presentations, they are even winning new customers to their mainframe platform.
One of the differences is that a zSeries system is designed to run at 90%+ CPU utilisation all the time, and with a high degree of resilience and exceptionally low downtime. What x86 advocates continually fail to recognise is that such a system will keep doing this while CPUs fail, memory drops out and other hardware events happen. Commodity x86 hardware does not have the Enterprise RAS features to do this, and the Enterprise grade Intel based systems with some of these features (like the remaining Unisys or HP Integrity systems) approach the zSeries in cost, because these features are expensive to add.
There will be a time when x86 based systems will have the types of RAS features that zSeries has had for some time, but I don't see it being now, nor any time in the immediate future.
And anyway, I don't want to see a world where one processor type has a virtual monopoly on all systems sold. IBM with the zSeries and POWER, and Oracle with SPARC-derived processors, are holding out for the moment, and I hope to see 64 bit ARM processors in the market at some time. There has to be some competition against x86, because it has always been a flawed architecture.
Some years ago, the company I was working for took delivery of an IBM 3575 tape library. It was a difficult delivery, because the site was on a hill, and there was no direct delivery bay (it was a site of convenience, used because the company split in two, and this was the only remotely suitable building that the company owned to relocate one half to). All the kit was craned onto a flat roof next to the machine room, and in through a door in the side of the machine room. This was worked out the hard way after the previous delivery of this same order was rejected because the tilt indicators were triggered as it came up the stairs on a powered stair-lifter.
As the pallet was carefully lifted and swung onto the roof, I saw that the packaging was damaged, so raced to grab my digital camera from my bag.
I recorded the state of the packaging, and then the unpacking process being done by a very unhappy IBM engineer. As the exterior cardboard box came off, we could see that the interior top packaging was dented, and that the top of the library (a 1.7 metre tall octagonal shaped prism) was pushed down in a 'V' shape, with the heavy gauge steel bent by several centimetres. The supposedly parallel rails that the tape gripper moved up and down on were bent a bit like (), with the middle being visibly wider than the ends.
Quite what had happened we never found out. My guess is that it was on a fork-lift that was raised too high so that it hit the top of a door or a ceiling beam during the unloading from the plane. It hadn't been tipped or fallen, because despite the damage, the tilt indicators had not been triggered. IBM asked for a copy of the pictures to use as evidence for a claim against the shipping company.
I still have those pictures somewhere, although I've never posted them anywhere.
You know, it's really ironic, but all my IBM Thinkpads before my T30 suspended and resumed really quickly, both using Linux and other operating systems.
But this was all done in the BIOS. It seemed that the first that the OS noticed when resuming was that the clock had jumped.
Since that time, suspend/resume appears to have been handled by the OS, and it's been getting worse. I've always had problems with Linux restoring the state of the ATI graphics adapters on later Thinkpads and kernels. KMS was an absolute disaster in post 8.04 versions of Ubuntu.
It's not that clear-cut.
When EU legislation is enacted, it does not immediately become law in the member states. Each member state is supposed to enact local legislation that covers the EU law, but there are various reasons why this may not happen. A member state may, under certain pre-negotiated circumstances within EU treaties, veto a law (and thus not be bound by it), state a derogation, delay local legislation almost indefinitely, or just ignore it.
Of course, if a country just ignores an EU directive, then the country (in reality, the incumbent government) can be taken to one of the various EU courts, but that is a long and expensive process (the costs of which will normally be borne by the complainant), and even at the end of it, all that is likely to happen is a slap on the wrist and a fine (which can itself be ignored with relative impunity). The ultimate sanction of expelling a country from the EU is extremely unlikely.
This is, of course, a very simplistic view of a very complex process, but one example of where this has hit the news moderately recently is the controversy over prisoner voting rights in the UK last year.
In response to the down-votes to my earlier post, what I was trying to say was that the small <5mW finger sized laser pointers that most people might have picked up as curios over the last couple of decades or so are unlikely to be the devices used here. I admit that it is perfectly possible to obtain lasers with much greater power and better collimation than these.
I personally think that the use of lasers over a certain power should be licensed (I thought it was in the UK, but it appears not). Certainly, some of the YouTube videos of people being able to melt quite significant thicknesses of plastic (one video shows holes melted in CD cases) using lasers in the 100-200mW range are sobering. And the >1W hand-held lasers really ought to be regarded as seriously dangerous.
Looking at the UK Health and Safety legislation, it looks like using any laser above the MPE (Maximum Permissible Exposure) for the type of laser without the appropriate safeguards is illegal. The booklet HSE95 includes a section on "beam projection at roadways, occupied buildings and into aviation airspace" which defines what is acceptable, and what is likely to be acted upon by the authorities.
I must admit that I used to be interested in seeing how far a laser pointer could be seen from, especially when shone onto road signs (which reflect light back in the direction it came from) until I read this booklet!
The distance is the point.
At 3 metres, the spot from a 5mW laser is pretty much still well focused. But solid state lasers do not collimate the light very well. At 20 metres, the spot will be more like a centimetre in size. At 1,500 metres, the 'spot' would be metres across. I'm not sure, but I think that 5mW will be spread across such an area so thinly that you would have difficulty seeing that it was hitting anything, let alone it dazzling a pilot.
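A rough back-of-the-envelope sketch of that spread, assuming generic textbook figures (a 3 mm exit beam and 1 mrad divergence, not measurements of any particular pointer):

```python
import math

# Back-of-the-envelope beam spread. The 3 mm exit diameter and 1 mrad
# divergence are generic textbook figures, not measurements of any
# particular pointer.
P_mW = 5.0      # pointer output power, milliwatts
d0 = 0.003      # exit beam diameter, metres (assumption)
theta = 0.001   # full-angle divergence, radians (assumption)

def spot_diameter(L):
    """Approximate spot diameter in metres at range L metres."""
    return d0 + theta * L

def power_density(L):
    """Power per unit area (mW per square metre) at range L."""
    radius = spot_diameter(L) / 2
    return P_mW / (math.pi * radius ** 2)

for L in (3, 20, 1500):
    print(f"{L:>5} m: spot {spot_diameter(L) * 100:.1f} cm, "
          f"{power_density(L):.3g} mW/m^2")
```

Even with these optimistic numbers, the power density at 1.5 km is tens of thousands of times lower than at arm's length, which is the point being made above.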
Also be aware that you would have to be in front of the plane and on the flight path to actually get it to shine into the pilot's eyes. From below and/or to the side, the best you could do would be to illuminate the roof of the pilot's cabin, and from behind you could not shine it into the cabin at all.
Of course, the hand held lasers they are talking about may well be the high power (up to 2 watts - really scary) ones, and they would be much more likely to cause problems.
I don't see why the specs for this system are seen as a problem.
I've recently put Ubuntu 14.04 on an old Acer Aspire One (it's replaced my EeePC 701 which is finally too small to be useful) with 1GB of memory, a 1.6GHz N270 Atom and 8GB of SSD. It runs fine, especially if you use the Gnome Flashback UI, which is a major concession to traditional users.
OK, I would not use this for photo or video manipulation, but browsing, playing media, terminal sessions to other systems, and email is all easily do-able. It fits in 8GB fine, and I use external flash for anything that doesn't fit.
The specs of this system easily beat my Acer. I can see something like this running Linux as a perfectly usable system. We've all just got so used to an excess of available performance that we've forgotten how little we actually need day-to-day.
Not sure about Chrome. I think I want more of an OS than it provides.
I spent a lot of time trying to keep a system with a Thoroughbred XP2600+ running! The thing would work fine at full speed, then start to crash, but would start working again if I underclocked it. A few weeks later, it would start crashing again, so I'd underclock it a bit more. It was not the memory or the MoBo.
Finally I gave up, scoured eBay for another Thoroughbred, and repeated the cycle. And again. Then I completely gave up on the machine!
If you got good results overclocking a Thoroughbred, then you had better luck than me!
Must finally get round to chucking the thing out. It's still in my not-quite-dead PC stack, kept only because it had a retail (not OEM) XP license on it.
AIX WPARs do some other very useful things. Even though they run on a certain version, they can present to the application the API of an earlier AIX version.
So an AIX 6.1 system can containerise an application designed for AIX 5.3, which is still supported for a little while longer, but also one for AIX 5.2, which is not. This provides a lifeline for companies that have software that won't run on the latest releases (although the excellent backward compatibility of AIX makes that fairly rare), and either cannot, or cannot afford to, update the applications.
AIX 7.1 extends this further, allowing AIX 6.1 WPARs. A side effect of this is that customers can buy current hardware that will not run earlier versions of AIX (although, amazingly, AIX 5.3 can still run on most Power 7 and 7+ kit - we will have to wait to see about Power 8), move their applications into these WPARs, and decommission their older systems.
And I believe that AIX Partition Mobility has now been extended to WPARs, allowing them to be moved to a different system on the fly, provided that the storage has been appropriately configured.
IBM have used their WorkLoad Manager (WLM) functionality to constrain WPARs to a fixed amount of resource, including CPU, memory and I/O, so that a WPAR cannot swamp a host system.
This is all mature function that has been around for a number of years. Nothing new here.
"Battle Beyond the Stars" was deliberately "The Magnificent Seven" in space. And that was "The Seven Samurai" in the Wild West.
Just shows there's nothing new in story telling.
The speed at which computing was moving at that time ensured that nothing would remain the fastest for any length of time. The B+, B+128 and Master 128 and 512 were the follow on products, and it was not speed that was the primary cause for the updates, it was memory.
But some of these later systems were actually faster, due to the change in memory map, the move to faster ROM chips (issue 3 BBC micros actually had to slow down the system bus to read from the ROMs shipped containing the OS and BASIC), BBC BASIC 2 and 4, and later 6502-derived processors (IIRC, the Master used a 65SC12, a re-implemented design that altered the load/store timing, shaved a few T-states off some instructions, and implemented the instructions that didn't work in the NMOS MOS Technology 6502).
We were just beginning to see the move to 16/32 bit computing. I'm not going to argue that the BBC would best a 68000 or a true 16 bit 8086 or later system, but that did not stop them being very useful machines long after the C64, Spectrum and other micros of the time were consigned to storage.
I beg your pardon!
In its time, the BBC micro was pretty much at the top of the Personal Computer World Basic benchmarks for a couple of years.
In the following years, the original IBM PC, which was an (admittedly crippled) 16 bit system running at twice the clock speed, did not manage to better the Beeb (I have the figures in front of me, but I can't be bothered to type them in). And a comparison with the C64, Apple ][, Spectrum et al. had the Beeb running rings around them.
I admit that benchmarking the Basic did not give a true indication of speed, but even if you look at the graphics speed and capability, the Beeb was the fastest and most capable home micro of its time. The major drawback was its relative lack of memory. I was even able to write a full DEC VT52 emulator in Basic that was as fast as the commercial terminals of the time.
Sure it's slow in comparison to machines that came later than it, but that is completely expected. You would not expect a favourable comparison between a Model T and a Mondeo.
What is surprising is why it has taken until now for it to be regarded as enough of a problem for them to do something about it. 5GHz Wifi has been around for quite some time.
Is it just that more people are deploying it? 802.11a was never hugely popular, but I suppose 802.11ac is a current technology that is being deployed now.
Wasn't DejaVu bought by Google when they acquired the Deja News Research Service?
EDT was excellent for sitting a complete computer novice down in front of a DEC compatible terminal with the numeric keypad labelled up with the individual functions (either with one of the latex overlays, or with sticky labels), and get them to enter some text into the computer. I've not come across anything that was picked up quicker.
The only thing that the students with whom I worked had problems with was the fact that you could not move into the 'blank' parts of the screen without adding some spaces at the end of a line. The concept of the 'end of the line' was difficult for them to comprehend. But everything else, including cut and paste, was picked up very quickly.
If you think using a VT100 was difficult, it was luxury compared to using a VT52! Remember the Gold (and Blue - although not used in EDT IIRC) key.
This was when people used to come to higher education having never seen a computer before.
Ah, but if you are using a SVR3 system, you would be using vi, not vim. Vim is over complicated, and that is coming from an Emacs user! Vi IMproved! My ass.
I vote for a return to ed!
"computers designed to last a decade".
No, Microsoft will ensure that the OS is obsolete, unsupported and vulnerable to malware before 10 years is up, and what they replace the OS with will be guaranteed not to work on older systems.
Joke? Maybe not!
I think that you will find that DOS 6 will run on anything from an 8088 up, with ~1MB of RAM or even less. Windows 3.1 needs an 80286 as a minimum and at least 1MB, IIRC.
And with Wordstar 4.0, such a machine would probably still be faster to use than Word 2013 on an i5 at 2.6GHz!
My vote is to write the text in Emacs using Troff and Memorandum Macros.
"What's it like on the eyes?" - It depends what is wrong with your eyes. I've been a glasses wearer since my mid teens, and for the most part, can still get my long-standing myopia, astigmatisms and my more recent presbyopia fixed well enough with glasses to use my Sony Xperia SP. Have you tried seeing your Optometrist or Optician?
Many of the problems people have is that they are trying to display too much on too small a screen, not that their eyesight is degrading. I blame the unnecessary move to the recent 'retina' type resolutions.
My Father is suffering from type 2 diabetes. It was allowed to progress before being adequately treated, to the point where it is now affecting his sight to an alarming extent. With that knowledge, I am keeping a close check on my weight and eyesight in the hope that I don't suffer similar problems. Of course, I may also find that I suffer from cataracts or some other unexpected ailment, but I would expect that medical science and regular checks are better now than they were 30 years ago for the people then of my current age who are now in their 80s.
It's a niche market that will largely (though probably never completely) disappear. People like me, in their 50s, can use smart phones. We will still be trying to use smart phones (or what comes after them) when we retire.
Current oldsters grew up in a time when telephones were big and connected to the wall with a wire. Merely having a phone that you can lose in your coat pocket is still strange to some of them.
It may be that there will be a new technology (maybe wearable phones/display systems like Glass) that my generation won't be comfortable using, and Smart phones will become the new feature phone.
It's not just the UI that is the problem.
My Dad tried to use my Sony Xperia, and could just not get used to virtual buttons on the screen. He wanted something that he could get tactile feedback from (the haptic feedback told him he had pressed a key, but he had no idea whether it was the right key), both to find the key and to know that he had pressed it. He also found that having to hold the thing so that he could see the whole of the top surface meant that he dropped it a lot. Until you see an older person trying to use a phone, you forget how much restricted finger movement, due to any of a number of ailments, impacts their use.
He looked at one of the Doros, and the similar BlueChip phones, but decided on a Samsung flip phone for a similar price. And he actually uses it, although he doesn't get texting at all.
Yes, I found it interesting that it is not the Spanish web site containing the original data that was being asked to delete something, but Google, which just indexed it. Of course, if Google removed its index entry, it would be much more difficult to find the original data.
I suppose that the data could have been removed from the Spanish web site, and Google was slow to update its index and had also cached the original data, but I suspect that this wasn't the case.
I wonder if we will see similar things from the Wayback machine, although I believe that they already have a way of asking for specific data to be removed.
Ah, but it's not any sand. As I understand it, it has to be very pure, and currently places like Spruce Pine, NC provide a lot of the high quality quartz for the production of chips, and the raw cost of this is very high.
As the scale of integration increases, so does the importance of the purity of the wafers.
Most record decks, even manual ones, will not allow the arm to move much further in than the outside edge of the label. This is normally because of the bias counterweight, but also to prevent the stylus being damaged from 'playing the label'.
I'm also surprised about it having a 78 RPM track. Almost no record decks made in the last 30 years can even play at 78 RPM.
My Pro-Ject, and most other belt-drive decks like Linn, Rega et al., have to have the belt manually moved to a different pulley position in order to play 45s. I think that there is a conversion kit consisting of a larger pulley and a longer belt for my Pro-Ject, but I don't intend to fork out for it and then mod my deck just for this record!
I think you are forgetting how 'primitive' most audio systems were. If it were just a signal, then every tape recorder, from the lowest cost to the highest HiFi, would have had to implement such a system.
No. It was more devious and complex than that.
There is a non-linearity in the recording mechanism for audio tape that is fixed by generating a high frequency bias signal in the recording circuit. This bias signal is not actually recorded on the tape, because it is outside the frequency response of tape (this is a gross simplification. See the Wikipedia article on Tape bias for more information).
What they did was put a high frequency subharmonic of the bias frequency as an interference signal on the record that caused 'beat' patterns (the audio version of a moire pattern) that were low enough in frequency that they would be recorded onto the tape, spoiling the recording.
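As a toy illustration of the arithmetic behind those 'beat' patterns (the figures here are pure assumptions - real bias oscillators varied from deck to deck, which is part of why the scheme was so unreliable):

```python
# Illustrative figures only - the actual bias frequency depended on the
# deck and the tape it was optimised for; these numbers are assumptions.
bias_hz = 62_000     # hypothetical cassette-deck bias oscillator
spoiler_hz = 55_000  # hypothetical supersonic tone pressed onto the record

# A non-linear recording chain mixing two tones produces sum and
# difference frequencies; only the difference falls back into the band
# the tape can actually record.
beat_hz = abs(bias_hz - spoiler_hz)
print(beat_hz)  # 7000 - an audible whistle all over the copy
```

Shift either frequency by a few kilohertz (a different deck, a slightly fast turntable) and the difference tone moves or vanishes, which matches the unreliability described below.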
But it never worked properly, for several reasons. Firstly, the frequency of the bias signal generated by the tape recorder was not very accurate, and depended on the type of tape the recorder was optimised to use (cheap analogue electronics of the time being a bit variable, as was the speed of cheap record decks). Secondly, the audio range of record player cartridges varied according to quality - cheap ones generally did not track above about 14 or 15kHz, and thus would not pick up the interference signal unless it was within the audio range. And thirdly, the very best HiFi was perfectly capable of reproducing the interference signal, and some audiophiles claimed that they could hear it (even though it was supposed to be supersonic).
It could also be defeated by a notch or high frequency roll-off filter, and a lot of LoFi (and some HiFi) had these as so-called "scratch" filters.
So it was only marginally effective, easily defeated, and detracted from the listening experience. It was soon dropped.
...and you do realise that opting-in to Care.Data won't help prevent you being given the wrong anaesthetic at all.
As you say, it may help a company develop a new one that won't trigger your problem, but Care.Data does not make your data any more readily available within the NHS than it ever was.
The loading of your data into a Summary Care Record would be something that you would not want to opt-out of, but that's completely separate from Care.Data.
This illustrates how even well informed people can misunderstand the mess that the NHS has got themselves in.
It's funny that they never do factor in the number of expensive minutes lost having skilled people gathering up and throwing away their coffee cups and other rubbish compared with the cheap minutes of the cleaners.
I'm not trying to belittle cleaners, but there is a 3:1 or more ratio in cost of trained and skilled IT professional vs. (often minimum wage) cleaners.
In any case, it would only be GPL-covered code that has been modified that would have to be published.
Most of the application development tools and library runtimes are published under LGPL, so it is perfectly possible to add the controlling layer as an application that sits on top of Linux linking to LGPL code without having to provide the source to anybody, even the people who buy the binaries.
Extending the comment about modified code to the previous comments about stripping Linux down to stop housekeeping: the stuff that is likely to affect performance is all in user space, and can be configured out by modifying the runtime configuration. Similarly, any parts of the kernel that are not required can be stripped out at kernel build time by configuration. The configuration files for the kernel build and runtime daemon configuration are not covered by the GPL, so would not have to be published.
This perception that anything that runs on Linux has to be covered by the full GPL is just crap, and the sooner more people understand this, the more likely it is we will see commercial applications appear to run on Linux, something that is definitely required for Linux to be perceived as a viable full alternative to other operating systems. The opportunity for Linux to take the desktop is past (unless it's Android!), but I'm still hoping that it can achieve sufficient traction that it does not die as a desktop OS.
The U2 was never really a 'stealth' plane. When it was designed, its main benefits were its high operational altitude (higher than the Russians' surface-to-air missiles or fighters could reach), which lulled the Americans into a false sense of its safety, and the high endurance that allowed it to overfly most of the Soviet Union. In the years before surveillance satellites, this was the main method of identifying what the Russians were doing.
That's why Gary Powers being shot down was such a shock!
The SR71 added some stealth features, along with very high speed, which enabled the Americans to continue surveillance operations.
I've been saying for a long time that most users really dislike change for valid reasons.
I know that there have been layout changes, but the Windows interface introduced in Windows 95 is still recognisable in WinXP SP3, and even to a certain extent Win7. This needs to be recognised by the "change for change's sake" people. Whilst they can rationalise the changes themselves, they really should take their target audience's opinion more.
I am finding the same in the most recent re-skinning of Firefox. I'm just waiting for my Father to ask me how to find some of the things that have moved around.
On the subject of URLs and DNS names, it is perfectly normal to configure DNS to resolve a name into a number of IP addresses, in order to spread the load across multiple machines. The DNS server can be configured to rotate around the list of possible systems in a variety of different ways, and there were also ways to set up a dynamic DNS to allow the service state of the accessed systems to be reflected in the returned results.
If you think something like 184.108.40.206 (one of the IP addresses that Google responds on) is a problem, try typing in http://2915189354 as a URL!
I have a copy of the book "A Programming Language" from the 1970s (I bought it second hand in 1978), which defined both the language and its name!
Of course there could have been an "Atlas Programming Language", but that's not the APL we know now.
When there were display ZX81s in WH Smiths, I would type into the first line of a program a REM statement containing (entered using the Sinclair special block characters) a small piece of assembler that put a different value in the Z80's I register (which Sinclair re-purposed to point to the page number of the first page of the character generator table in the ROM), and then call the code at the relevant address. I often added this to the program that was loaded, and then ran the program.
What this resulted in was a screen of garbage. You could see that there was something there, and it would respond to all of the right commands like LIST, but the text was unreadable. If I remember correctly, the funniest thing was to put in a value that was the base page of the character table offset by one. This had the effect of shifting the displayed characters along by a number (32?) of characters, so the result was effectively a block-shift cypher of the program.
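The block-shift effect can be modelled as a simple character-set shift. This toy sketch uses printable ASCII rather than the ZX81's own non-ASCII character set, and the shift of 32 is illustrative, so take it as a demonstration of the cypher, not of the actual ROM layout:

```python
SHIFT = 32  # illustrative offset; the real value depended on the I register

def shifted_display(text: str) -> str:
    # The ZX81 used its own non-ASCII character set; the 95 printable
    # ASCII characters are used here purely to show the visual effect.
    return "".join(chr((ord(c) - 32 + SHIFT) % 95 + 32) for c in text)

print(shifted_display("10 PRINT"))  # QP@print
```

The program still runs and LISTs correctly, because only the mapping from character codes to glyphs has changed, which is exactly what made the trick so confusing on a shop display machine.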
It was probably an Econet network. And the login screen was in BBC Basic anyway. We told users to do a <Ctrl>-<Break> before logging on to prevent this type of thing.
I ran a Level 3 Econet network back in the 1980s. If only the security had been better enforced (the concepts were good, but it was trivial to get around), it would have been a great low-cost network for file and print sharing. But there was no concept of privilege mode in the BBC Micro OS, so it was simplicity itself to set the bit in the Econet page in memory that gave you privilege at the network level. And once you had this, you did not need a user's password to be able to get at their files.
Still, I suppose that you can't have everything in a single-user 8-bit micro. But I agree, the BASIC was good, with the exception of it not having a while-wend construct.
Pascal was created as a teaching language. Its primary goals were to be highly structured and to have a very concise syntax that encouraged students to think in a way that matched the good programming practice of the time (highly structured, functional and procedural programming). It generally succeeded in these aims.
It is quite clear that someone who learned Pascal could convert to other scientific languages (like Fortran or Algol) relatively easily, and I know lots of people who moved to C with little difficulty.
But as a language, it was strongly disliked by students. Because of the precise syntax and strict type checking, it was a very pedantic language to write in. In other languages of the time, you might get a successful compile but have a completely broken program because of a syntax error that slipped through.
Now, Pascal could never force you to write programs that worked, but it would protect you from some of the pitfalls that other languages might allow. Still, the repeated compile/fix cycles without ever getting a run caused many colourful moments in the classes I was involved in. I'm not sure whether that was preferable to a compiler that incorrectly attempts to fix simple errors, like PL/C, the PL/1 subset teaching compiler in which I learned formal programming.
The other drawback of strict Pascal implementations (and here I am explicitly excluding Borland/Zortech and other 'extended' products) was that there was comparatively little support for some operations needed to cope with real-life problems. Files with multiple record types were complex (you had to use a construct like variant records), and the very strong data typing had no equivalent of a cast operation (I'm still talking strict Pascal here), which made some of the tricks you can do in other languages difficult or impossible. There was also no variable-length string construct (only fixed-length character arrays), and as a result almost none of what you would describe as string operations, so you quite often had to code some comparatively simple operations yourself. And there were no form-filling or screen-handling features at all, but at least that was not unique to Pascal: almost none of the high-level languages of the time had that built into the language itself (it was normal for these to be added by library routines, the most obvious example being curses for C).
This meant that kids who learned BASIC on 8-bit micros at home regarded Pascal as a backward language that restricted what they could do, whereas people from a formal teaching environment regarded it as very good language for precisely the same reason!
The other reason kids had difficulty with any compiled language was the fact that it was not interactive. The whole compile thing compared to just running it seemed wrong to them.
The Data Protection Act talks about personally identifiable data, and defines it as being about someone, not belonging to them.
It has always been an exception to the Act that information stripped of the identity of the person it is about is no longer covered.
The problem lies in what counts as identifiable data. Obviously name, address and telephone number all count. But hair colour, height, the route you travel to work, and even things like salary are not actually unique enough to be considered identifying on their own. Where this breaks down is that several pieces of data which individually do not identify a person might, in combination, be enough to provide a key linking all the data in a particular record to an individual.
This is a problem that has come about because of the increase in the power of computers, and the increased sophistication of the analytical software that processes the data. This was the crux of the arguments against care.data: so-called anonymous data is rendered identifiable.
On the subject of ownership: my house has an address. This has personal relevance to me because I currently live there. But the fact that I live there does not mean that the text that makes up the address is itself owned by me. I cannot ask the Royal Mail to remove it from their postcode database. I have no control over it. I do not 'own' the text of the address.
I totally agree with what Jason Bloomberg said in a follow up comment to my original. Jason. Have a thumbs-up from me.
I'm not sure I agree.
Data is data. It may be information about you, but you probably cannot claim to own it. In the case of HMRC, they could be the custodian of the data, but even then the only claim they have to owning it is that they have gone to the effort of collecting it.
But not everything they know about you is provided by you. Your employer is under a legal obligation to provide data to HMRC (as indeed you are). They may also have data about what benefits you have received, and if you have been under any form of tax investigation, they may have been given access to other data kept by other parties about you. I'm not saying that they don't have an obligation to keep the information private, nor am I saying that other people knowing it could not put you at a disadvantage, but don't claim ownership.
The only data that you can truly claim to own is that which you create yourself. If you do something like write, then what you write (assuming that it is not done under any pre-agreed contract) is probably yours, and you can claim ownership. If it is data about you, then you did not explicitly create it, and you cannot claim to own it.
This is my opinion, and not based on any legal knowledge. I would be interested to hear what other people think.
What you call the frame area, which I believe corresponds to what I called the menu bar, is rendered by whatever toolkit you are using from inside the application (and, critically, inside the application's process space). The application can totally control what appears on that bar, although it will normally use standardised toolkit routines to do it.
This does not make it the Window Manager that is providing that menu bar. The Window Manager controls the encapsulating frame, and all of the widgets it uses to do this are outside the application's process space (note, however, that they may not be in the Window Manager's process space either - under the way X11 is structured, it could defer these to other processes).
Do not confuse the Window Manager with the widget runtime shared object/library. They are not the same thing.
I see what you are getting at, however. The toolkit routines that create the menu bar are normally in shared objects/libraries that are dynamically bound in to the executable at run time. By providing a compatible but different set of routines at runtime, I can see that compliant programs could have their behaviour changed by the system, so that it would indeed be possible for the runtime to intercept and alter some of the expected behaviour.
But note that I said compliant programs. What about those that do not use the GTK and Qt runtimes to manage their menu bar? What if they do, but statically linked the routines available at compile time? What if they are so old that they use Xtk, or the Andrew Toolkit, or Motif, or CDE? Or, heaven forbid, coded all of the menu bars themselves?
If the modification is done at the runtime-call level (and this could be the bit I was unable to see when I wrote my earlier posts), it would be necessary for Canonical to patch each and every dynamically bound widget toolkit, and they would totally fail to manage statically bound binaries.
Regarding your comments on Wayland, remember that Canonical is not implementing Wayland. Their alternative to X11 is Mir, but this is not in current releases of Ubuntu. We are not talking about Wayland.
If, however, Wayland is making it the responsibility of the application to draw all window decorations, then I can see problems ahead when applications hang or crash. Having things like the "Close" button handled by another process, so that a misbehaving process can be closed, is such a good idea that I wonder about the sanity of the Wayland developers in throwing this away. I have often wondered whether their drive to eliminate the overheads of X11 will end up throwing the baby out with the bath water. X11 may be old, but the concepts it introduced were mostly very sound, with the possible exception of the poor security model.
Puppet does indeed look interesting, but it is not like AD because it is layered on top of Linux, rather than being a part of the Linux infrastructure in the way that AD is integrated into Windows.
MS chose to use a registry for many or all of the important Windows and application settings, and then plugged AD into this to allow any program which used the registry to instead get the settings from AD. It's elegant and well thought out, something that I don't say about Microsoft very often.
Puppet relies on discrete 'modules' to perform specific functions. This means that every time you need to control a new application, it will probably be necessary to obtain or write a new module. This is very flexible, but ultimately more technically involved.
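As a sketch of what such a module looks like (a generic, hypothetical NTP example; the class, file paths and resource names are illustrative, not from any real deployment):

```puppet
# Hypothetical module: install, configure and run an NTP service.
class ntp {
  package { 'ntp':
    ensure => installed,
  }
  file { '/etc/ntp.conf':
    ensure  => file,
    source  => 'puppet:///modules/ntp/ntp.conf',
    require => Package['ntp'],
  }
  service { 'ntp':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/ntp.conf'],
  }
}
```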
I am not currently running an environment that requires this degree of control (the problems of supercomputers with no system disks are not ones that need this type of solution), but I would certainly look at Puppet if I were in charge of an environment that needed that level of control.
It depends on how you link it. If you resolve all external dependencies and statically link all library routines, and do not rely on any runtime services (like dbus etc), then it is perfectly possible for a binary compiled today to run on any Linux system as long as it is the correct processor type and the kernel API doesn't change.
In fact, looking at it, I would expect that many Linux programs compiled 15 years ago would still run, as many that old may well not have been linked against shared object files, and certainly would not have used dbus, DCOP, Bonobo et al. Possibly more of them than of those compiled 5 years ago.
The dependency on dynamically linked shared objects and runtime services is in my view one of the worst things that ever happened to Linux. It makes building programs that you want to keep working in the future, without having to recompile, more difficult than it needs to be.
Interestingly, but on a different note, I picked a binary of one of my tools off of one of my archives from a 32 bit AIX 4.1.4 system from about 1998, and successfully ran it without re-compiling it on an AIX 7.1 64 bit system.
I have several dozen 5.25" floppy disks that were created on my BBC Micro ~30 years ago, and I am finding that a significant number of them now have difficulty being read. The main problem is that the adhesive holding the oxide to the Mylar disk is breaking down, so each time I read a disk, I have to clean the drive!
The disks I have are mainly BASF, with some Verbatim and Nashua.
This is probably because they are truly 'floppy', and were not protected as well as the 3.5" hard-cased disks that the Amiga used.
I tried to embark on a process of capturing the disk images, but stopped when I had difficulty finding any new blank double sided double density floppies.
I now need to look at reading them on a Beeb and squirting them over an RS-232 link (I think I have a PC with one of those left, and I came across the strange 5-pin DIN to 25-way D cable that I used to use, although I'll probably have to find a 9-to-25-way converter and a null modem).
The alternative is finding a 5.25" DSDD floppy disk drive for a PC!
You also have to factor in that IBM develops POWER and Z series processors in parallel. Much of the technology in chip design (and quite a bit more under the covers) is common between the two families of processor. So POWER does have a high revenue earning sibling to help it out.
They also have some history in the embedded processor market. POWER chips are not as common as ARM, but they did get some traction in NAS and set-top boxes, and although they lost out in the most recent generations of consoles, the Xbox 360, Playstation 3, and Wii all used PowerPC processors, and the WiiU still does.
Biting the hand that feeds IT © 1998–2017