1734 posts • joined 15 Jun 2007
Galactic Patrol would be great, and contains enough action to satisfy Hollywood's CGI lust. Imagine how cool you could make Worsel the Velantian!
That means that I am definitely one then, because I recognise almost all of them, and have read more than half!
Foundation was a trilogy
for many, many years, before dear old Isaac (bog rest his overworked imagination) decided to go down the future history route, and tie all of his series together. Hardcore Asimov fans don't think of the later books as canon.
It was interesting to see how he did it while the books were being first published, but in hindsight, I think it would have been better keeping R. Daneel Olivaw out of the Foundation stories. It all feels a bit contrived now.
Still, I think that it could be good, but would end up a bit slow for the Michael Bay and Jerry Bruckheimer generations.
I wish the BBC dramatisation of "Caves of Steel" still existed somewhere. If someone has it, they would be a real hero!
Totally agree. Mind you, I learnt more about Leprosy from the first book than from the previous 15 years of education.
and you know the 'phone number you call is good because...?
allow you to have the benefit of SSL encryption without the need of purchasing a certificate from a CA.
You lose the benefit of a trusted third party vouching for you, but you maintain the security of the encrypted link, so it's not all a waste.
I personally would want to use a trusted partner for my Webmail, but I might be happy with a self-signed certificate for services I expose on the Web for my own use.
Also, the problem of using a CA on a closed Intranet can be a serious issue without either setting up a local CA, or using self-signed certificates.
In either of the last two cases, having Firefox bitch about self-signed certificates is less than helpful.
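For what it's worth, here is a minimal Python sketch of the client-side difference (the helper function is my own illustration, not from any post above): with a self-signed certificate you give up verification but keep the encrypted channel.

```python
import ssl

def make_context(trust_self_signed: bool) -> ssl.SSLContext:
    """Build a client TLS context; optionally accept self-signed certs."""
    ctx = ssl.create_default_context()
    if trust_self_signed:
        # No third party vouches for the peer any more, but the
        # traffic is still encrypted over the TLS channel.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx

strict = make_context(False)
lax = make_context(True)
print(strict.verify_mode == ssl.CERT_REQUIRED)  # True: default CA-verified
print(lax.verify_mode == ssl.CERT_NONE)         # True: encrypted, unverified
```

The point being: `CERT_NONE` is exactly the "no trusted third party, still encrypted" trade-off described above.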
@Wider Web - PC security tools
which is, of course, only available on Windows*. Big fail.
*Disclaimer - Windows is a trademark of the Microsoft Corporation. Other operating systems are available, some at quite reasonable cost.
Because sometimes, after a really hectic day, when I have 30+ windows open (today is a quiet day, I've only 24 windows open) with different terminal sessions (currently 12 different systems on the network), browsers (this one, and a separate window with two Nagios status pages, and two multiple-tabbed HMC windows onto 16 different HMCs in the environment - it's a big environment), document readers, configuration windows, notification windows, mail clients, it's nice (especially with a "Minimise All" button) to clear the desktop without having to worry about losing your carefully arranged window positions.
I appreciate alt-tab, but sometimes it takes too long to work out which xterm is which.
And yes, I use multiple desktops to reduce the clutter, and yes I have automatic setup of windows when I log in.
Also, in your case, sometimes window B is completely obscured, so how do you click on it?
As a long-term UNIX user and more recently (13 years seems recent to me) Linux user, and having been taken through Sun View, OpenLook, twm, vtwm, Motif, CDE, fvwm, and various releases of KDE and Gnome, as well as many different experiments with the less well known desktop managers like Afterstep and Enlightenment, I'm finding The World moving further and further away from what I want to use.
All I want is multiple overlapping windows, with a focus policy that I can change to what I want, and a quick way of starting any of the applications I use in a constant and consistent manner that does not conflict with selecting already opened windows. Multiple desktops are nice, and starting up a walking menu, either from a fixed point on the screen, or from a button press over the desktop is all I need. I can cope without drag-and-drop between folders and onto applications, and I can live without using the 'desktop' as a drop area to hold files (all this does is make you messy and uncaring about where on the system your files actually are).
I'm thinking of giving up completely on computers, grabbing a broom, and applying for a street sweeper's job.
@Cam 2: At the risk of re-starting the Editor wars...
...there speaks a real emacs user!
ChiPod mp4 player
Mine was physically labelled as a 16GB unit and reported 16GB in the FAT, but was actually only 8GB minus a bit (presumably the firmware). But I was able to format it to the correct size, and it worked quite well until the NAND memory wore out. This was about 15 months of quite hard use.
I informed the seller, and was offered a refund, but I thought that even at 8GB, it was worth the ludicrously low price.
Who says it works
A lot of the Pulseaudio problems at 8.04 were a result of the defaults changing for PA between 7.10 and 8.04, leading to people who had upgraded from earlier versions being left with an unworkable configuration. However, I do have an outstanding problem with Pulseaudio on my T30 thinkpad that has been documented, but never fixed.
It would appear that after suspend/resume, the re-sampling rate changes by about 5%. This results in the pitch of the audio changing, raising it by just less than a semi-tone. As it is playing the sound faster, it also leads to gaps of around 1/10 sec. every 2 seconds in the audio with gstreamer based programs when there is insufficient data to play. I have not found anybody who has been able to permanently fix this, and it is marked as unlikely to be fixed in the 8.04 defect database. I have a local workaround for 8.04, but I've not done the work to do the same on 10.04 (the whole suspend/resume system has been knackered by the introduction of KMS).
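As a sanity check on the numbers above (my own back-of-envelope sketch, not from the bug report): a 5% resampling error really does come out at just under a semitone, and at about a tenth of a second of starved buffer every two seconds.

```python
import math

# A semitone is a frequency ratio of 2**(1/12) (~1.0595), so a 5%
# playback speed-up raises the pitch by a bit under one semitone.
rate_change = 1.05
semitones = 12 * math.log2(rate_change)

# Playing 5% too fast starves the buffer: every 2 s of wall-clock
# time consumes 2 * 0.05 = 0.1 s of extra samples.
gap_per_2s = 2.0 * (rate_change - 1.0)

print(round(semitones, 2))   # ~0.84 of a semitone
print(round(gap_per_2s, 2))  # ~0.1 s of missing audio per 2 s
```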
It's still there in 10.04, and this, along with a problem with older ATI graphics cards not being re-configured (again by KMS) correctly after suspend/resume (new defect in 10.04 - it's to do with the way KMS and udev tread on each other's toes during a resume), which has left me running 8.04 on my main laptop.
And before anybody says I should get a more modern laptop, my T30 works just swell - why should I change!
I think we all have to agree...
Here are my cards. I've used Unity on the 10.10 netbook release, and IMHO it sucks big time. I've not tried 11.04 (yet) and admit that there is a good chance that it will work better but I don't normally run Betas, as I have enough to do in my life without having to fight bugs that other people take it upon themselves to find.
I am in "the desk top is there to control as many windows as I need" camp, and as a result, Unity is completely against the way I work.
But, and this is a big but, I am not a typical modern non-technical user. I work everyday with people who maximise whatever they are doing on their 22" 16x9 screen, work with tabbed terminal sessions in Konsole rather than multiple windows, and generally do one thing at a time. These are either OSX and iPad generation, or (bizarrely) people who have come from the green-screen single session terminal age and who have never felt comfortable in a windowed environment.
Having seen both of the above struggle to switch from one application to another, I believe that my multiple windows spread in a consistent manner across several desktops is better. But that's my opinion. I believe that I work faster than them, but again, that's my opinion.
I think that it's fair to say that Unity works well for those people who work in the way that Unity works, and it does not work for those who don't. But hey! This is a big world, and not everyone is the same.
I think it is necessary to explore new interfaces, as I am sure that I would not necessarily want Gnome or KDE as they are now on a touch-screen device. But having also been an Android user for around 6 months now, I'm not sure I would be happy with that on a tablet either, at least not on one that is more powerful than my netbook.
So I am happy that there is a new interface being tried, and even that it is the default (as long as they get the bugs out and make the alternative easy to choose). I just don't want to be forced to use it without jumping through hoops of fire if it doesn't suit me, and I would be grateful if others accepted that this is a perfectly valid viewpoint.
It's not that it's purple that bugs me...
...it's the blotches of other colours that make me want to reach for a screen degausser, until I remember that that has not been a problem for 20 years on CRT screens, and has never been a problem on flat-panel monitors.
And at least on the login screen, it is not obvious how to change it.
Yes, I've done it now, so I don't need anyone to tell me, but it should not be difficult on an OS aimed at ordinary users.
Mind you, shops deliberately confuse
In supermarkets, there is supposed to be a representative price on the tickets to allow easy comparison, like so many pence per amount of weight.
Unfortunately, my local T***o appears to deliberately compare different weights on the tickets, so one item will be priced per 250g, and another will be priced per 100g (and I have seen worse ones involving amounts like 330g, 350g).
There is only one purpose in this, and that is to defeat the measures introduced to allow product comparison.
I'm pretty good at mental arithmetic still, but sometimes I have to think for a few seconds before deciding which item is the best value. Other people wanting best value tend to just go for the supermarket brand, believing it will always be cheaper. More often than you may think, they are wrong.
It may well be a poor reflection on the education system nowadays, but it is clear that shops make it deliberately difficult.
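Normalising the tickets yourself is the only real defence; a trivial sketch (the prices and pack sizes here are made up for illustration):

```python
def pence_per_100g(price_pence: float, pack_grams: float) -> float:
    """Normalise a shelf-ticket price to pence per 100 g."""
    return price_pence * 100.0 / pack_grams

# Hypothetical tickets: one priced per 250 g, one per an awkward 330 g.
branded = pence_per_100g(62, 250)   # 24.8 p per 100 g
own_brand = pence_per_100g(79, 330) # ~23.9 p per 100 g
print(branded > own_brand)  # True: the awkward pack is better value here
```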
No, it's not magic
Apart from the stored heat, there is also tidal heating in the mantle, caused mostly by the gravitational drag of the Moon.
There are also some people who think that there are fission reactions happening at various points in the mantle and core generating heat.
Not current accounts. I doubt they are *that* stupid.
What makes you think that multiple heads per platter is new?
I saw a magnetic drum device (swap on an IBM 360-65) which had something like 8 rows of heads spaced around the drum to increase speed.
Deskstar and Ultrastar
I don't know whether this is relevant, but I have used IBM and Hitachi Ultrastars (the enterprise version, which actually shares many of the mechanical components), and generally they have been quite reliable. I have 3 in an IBM RS6000 44P-170.
Thanks for the suggestion. Unfortunately, most people who end up using powerline ethernet have already tried WiFi, and have moved on because WiFi could not cut the mustard, for reasons including thick walls, the inconvenient location of the ADSL router that the WiFi plugs into, and congestion.
Am I ever glad about the decision
I've always said that cached thumbnails were a potential danger to anyone browsing the Internet or using a web-based mail client. I hope that this will set precedent.
I agree that the victim has been awarded a pitiful settlement, and I would be very interested to find out in 6 months' time what a CRB check shows for him. I think that if I were his lawyer, I would go for interim damages pending my client's reputation being restored. If this appears in an adverse way on his record at some time in the future, then they would be able to go for lifetime damages that should be enough to support him for the rest of his natural life.
And it would be even better if the claim would have to be contributed to by all of the members of the prosecution rather than the tax payers.
Yes, but there are legal streaming/download services available, so it is possible to use all that capacity without breaching copyright.
While it is technically possible that you may own the copyright to some CDs, this would have to be by contract, by agreement, or by actually being the person who recorded the material. As the CD is only about 30 years old as a medium, it is a fairly safe bet that nearly all material on any CD is still in copyright.
OK, ok, I know that there are compilation CDs out there that you can buy that contain already-copyright-expired material (almost any song recorded in the 1950's or earlier will fall into this category), and I know that EMI are getting twitchy about the Beatles back catalogue, but at present this is a minority of CDs.
One thing I want to throw out there is whether a re-master of a work recorded more than 50 years ago (in the UK) resets the copyright date. In theory, it could count as new material, but as IANAL, I do not know whether that is the case.
Any thoughts, anyone?
For those who don't recognise it
Take the flute riff. Work out where the bars are (clap 4 beats to the bar, along with the rhythm) such that you get 8 bars or 32 beats for the whole riff (it's a middle 8, after all), and it is the third and fourth bars of the flute riff that match the first line of the kookaburra song. It is pretty much note for note, with the same rhythm as well.
It's quite clear that it is the same, but I ask you, how can 2 bars, 11 notes in total, be worth 5% of the revenue of the song? Especially when the original song is, I'm sure, meant to be onomatopoeic. At least it's the writers' revenue they are going after. That should limit the payout significantly.
I'm absolutely certain that this ruling will lead to Australian lawyers finding two bars of matching melody in different songs, and then using this ruling as the basis for a copyright infringement claim. I'm sure that it could be done by computer if you had access to the right MIDI files.
There is at least one song in the UK chart at the moment, which every time I hear it, I say to myself "That's a song already", but I just can't put a name to it. I must try to remember to put more effort into working it out.
I feel that this should be dismissed exactly as EMI put it, as an unconscious tribute to the original song. Otherwise we will have to have melody searches against all songs currently in copyright before a new tune is published, in the same way as we have to have patent searches now. This will stop amateur writers dead in their tracks in the same way as patent searches stop small inventors now!
Re. MS, you also forgot
FUD campaigns, illegal (or at least immoral) discounting deals to PC manufacturers if competitor products were not installed, pricing models that penalised OEMs if they offered systems without an OS or with an alternate OS installed, including open-sourced code in products without acknowledging its origin, random buy-up and shutdown acquisitions of competitor products, committee stuffing to undermine genuine open standards, participating in what amounts to patent cartels, unnecessary license purchases/financial loans/bailouts with strings attached to control struggling competitors and ... oh well, I could go on, but it's all a matter of history.
One wonders what would have happened if Apple had not won its case against Digital Research and GEM, which ensured that MS had a significant clear run at a graphical user interface in the Intel PC market. In my view GEM was significantly better than the versions of MS Windows that were available in the same time-frame.
I think you meant
Windows 98 SE and Windows ME, not Windows 98 ME. They were two different versions (ME following 98 SE), and while 98 SE was seen as good, ME was regarded as pants by pretty much everybody.
Didn't Apple say something about OSX servers recently....
Oh yes. They're stopping making them.
So we'll have racks full of Mac Minis then.
I'd just like to point out
that AT&T, up until the late '80s, used to run a large part of their environment on mainframes, many running UNIX! And you probably ought to look up other non-IBM OS's for 370 architecture systems as well. One of my personal favourites was MTS. I saw a demonstration of access to ARPANET (you know, a forerunner of the Internet) from this OS in the very early '80s. Also, for all its problems, the influential OS Multics was a mainframe OS, and it established features that would later appear in UNIX, VMS and a host of other OS's long forgotten.
I was involved with installing and running a channel-based Ethernet device running TCP/IP on a mainframe linking it to Sun and VAX systems in the later '80s (again, under UNIX).
I think that one needs to separate the hardware from the software, as there is a significant difference.
Mind you, if you look at some of the innovations, such as virtual addressing, virtualised systems, key-based page-level memory protection, I/O offload, multi-processor systems, distributed processing, hierarchical storage controllers, DMA, memory cache, multi-user and multi-tasking, use of ASCII (one of your benchmarks; ASCII was mandated by US government contracts in 1968, and before this was a COMMUNICATION standard, not a COMPUTING one), microcode, solid-state electronics and a host of more minor things, mainframe was often one of the first systems to implement them (often because the features were so expensive to implement, only mainframe-class machines could benefit).
Whilst many of these were not invented on the 360/370....zSeries systems (now the only real mainframe architecture remaining), they were almost all pioneered on mainframe-class systems like Atlas, KDF/9, Cyber/CDC, UNIVAC and others.
Damn. Forgot that the Reg. stripped multiple spaces out of comments. All of the note changes happen on syllables.
@AC re Calculators
I remembered having to work in base 12, and base 20 for £sd and base 16 and 14 for imperial weight as I hit submit.
Even though I was well versed in Maths at the age of 10/11, decimalisation still caused me problems when my one shilling of pocket money became 5 new pence.
"Decimalisation, decimalise, decimalisation will change your lives"
g a b cb g g a b c f g a a#af f f f g
Key might be wrong, and the a# should probably be written as a b flat, but I can't seem to see a flat symbol on the keyboard!
@AC re Joke - again
And you can't convert decimal integers to octal either!
99 (decimal) actually equals 143 (octal)!
For this one, you're getting the pedantic Maths teacher!
@AC re. Joke
No. I was serious and yes, I know that 8 is 010 and 9 is 011.
I was presuming that the person was using a calculator which worked in octal and decimal (and thus had 8, 9 and point keys) but which was in octal mode, so that when they were typing in something like 18.49, the calculator actually registered 14 (neither the 8, the 9 nor the decimal point would have registered). That would get the sums very wrong.
If you had actually bothered to think of the mechanics of it, you would have understood.
By the way. I think that your floating point octal to decimal is incorrect.
When writing non integer octals to one significant digit, the numbering would be
0.1 octal, which is 1/8 (0.125 decimal)
0.2 octal, which is 2/8 (0.25 decimal)
0.3 octal, which is 3/8 (0.375 decimal)
0.4 octal, which is 4/8 (0.5 decimal)
0.5 octal, which is 5/8 (0.625 decimal)
0.6 octal, which is 6/8 (0.75 decimal)
0.7 octal, which is 7/8 (0.875 decimal)
1.0 octal, which is 8/8 (1.0 decimal)
So in Octal 0.5 + 0.2 + 0.1 will equal 1.0, which it needs to do in order for arithmetic to work.
The first significant digit after the octal point (geddit?) is 1/8ths, the second is 1/64ths, the third is 1/512ths, and so on.
This means that by casual inspection, 0.44 octal HAS to be larger than 0.5 decimal.
By my calculations 12.44 (octal) is (1x8) plus (2x1) plus (4/8) plus (4/64), which makes it 10.5625 (decimal) or 10.56 rounded to decimal pence.
I can't see how you got 10.14. Even if you had worked in pence, 1244 octal is 676 decimal.
You got the 0.95 decimal correct, however.
You could do the exact arithmetic if you worked in pence or cents. Non-integer arithmetic in any base other than 10 hurts my head.
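For anyone who wants to check the sums above rather than take my word for it, here they are in Python (the helper function is just my own illustration):

```python
# Check the integer conversions quoted above.
assert int('143', 8) == 99        # 99 decimal is 143 octal
assert int('1244', 8) == 676      # 1244 octal is 676 decimal

def octal_fraction(s: str) -> float:
    """Convert a fractional octal string like '12.44' to a decimal value."""
    whole, _, frac = s.partition('.')
    value = float(int(whole, 8)) if whole else 0.0
    for i, digit in enumerate(frac, start=1):
        # Each place after the octal point is worth 1/8, 1/64, 1/512, ...
        value += int(digit, 8) / 8 ** i
    return value

print(octal_fraction('12.44'))  # 10.5625
print(octal_fraction('0.5') + octal_fraction('0.2') + octal_fraction('0.1'))  # 1.0
```

These fractions are exact in binary floating point (they are all powers of two), which is why 0.5 + 0.2 + 0.1 octal really does print a clean 1.0.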
If you look at physical calculators that work in octal (rather than 'soft' calculators on PC's and smartphones that can change the keyboard layout according to the mode), they normally do have an 8 and a 9 key (and a decimal point key), because they normally also work in decimal and hexadecimal.
These keys are normally disabled when the calculator is in octal mode, so you could press them, but I think I would notice that they had not registered. Also, as far as I am aware, nobody has produced a calculator that does non-integer arithmetic in anything other than base 10 (what a mind-bending concept that would be!).
For reference, look up the Texas Instruments Programmer Calculator that was available in the '80s, and any number of modern scientific calculators from makers like TI and Casio that also work in different bases including octal.
BTW. I was working in Octal on systems before PC's were invented, so I do understand it. I learnt clock arithmetic in bases other than 10 when I was about 8 in the 1960's, when they actually taught Maths in junior (primary) school.
Did you realise that Humans were meant to have thirteen fingers?
It's obvious, because in the HHGTTG, the ultimate question and answer is "What do you get when you multiply 6 by 9?" Answer: forty-two.
This is indeed the case if you work in base 13.
The only reason we work in base 10 is because we have 10 fingers. In some instances, it would actually make better sense to work in base 6, because you could then use one hand for 0-5 and the other as a carry. This enables you to count up to 35 with your two hands.
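Both claims check out, for what it's worth:

```python
# 6 x 9 is 54 in decimal, which is written "42" in base 13.
assert 6 * 9 == 54
assert int('42', 13) == 6 * 9     # 4 * 13 + 2 = 54

# Two hands in base 6: one hand counts the units 0-5, the other
# holds the carry, so the largest two-hand number is 55 (base 6).
assert int('55', 6) == 35
print("both check out")
```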
Gawd. I think up such crap!
Octal for checkbook balancing
I find this unlikely, as so many prices have 8's and 9's in them. Think of all the things that are one penny (cent) lower than a round number of pounds, euros or dollars. I guess that it depends on what your calculator does when you hit 8 or 9.
IBM trademarked the name TrackPoint
but other names include TouchStyk. Other names can be found in the relevant Wikipedia article, and many other good information sources.
IBM's name was the original, though, as they developed it to a product even if they did not invent it.
that when you click on a video in YouTube from Firefox on Ubuntu (at least up to 10.04), it pops up a box that says you have to install Flash, and then directs you to an Adobe page that asks which package you want to load. Once decided (and working out which format you need is about the only tricky bit), it downloads it and asks how you want to open it. Selecting the default will open whichever package manager is the default (IIRC it was gdebi last time I did it) and install it on your system.
OK, this will not be the version from the Ubuntu repository, but it should work.
As far as I remember, this is pretty much the same for Firefox on Windows.
Again, I believe that Totem will ask you to enable various options (lame and dvdcss) the first time it encounters a .mp3 or a DVD.
If you had a proper multi-user, multi-access OS with proper security deployed, you could get away from this whole 'personal real/virtual' machine and software deployment model with all of the associated security problems that is blighting us.
Oh, how 20th century of me to bring up diskless shared image and thin client access UNIX systems that were being done in the 1980s and 1990s. It was not without fault, but was far better than using a crowbar to squeeze multiple virtual systems onto a single piece of hardware, with all of the associated duplication and waste that this entails.
One of my mantras has been "There is no place for a personal computer in Business" for a long time, and I believe that it is as true now as it has ever been.
It really makes me want to go back in time and nuke Redmond even more.
I understand making data accessible. And I also understand that having relationships between data items makes a lot of sense. But I really doubt URIs are the way to do it.
My concern is that using URIs (at least the way I understand them to work) will effectively hardcode location and shape information into the datasets in the same way that a schema does in a relational database, but with a fixed location. Unless someone can indicate otherwise, I believe that this makes the data almost completely non-portable.
OK, in a web-centric world this may make sense, but unless someone adds some clever caching technique, it means that you will only be able to use the data when you are connected.
Sometimes you want to take a fixed snapshot, or make sure that the data in your thesis does not change between you writing it, and it being read by your moderator.
I'm all for making data easily usable (god knows I've spent enough time massaging data over the years), but tying it to the Internet should be obviously stupid to anyone who isn't from the facebook generation.
I'm also not certain that it is reasonable to expect the person who originally structures and creates the relations in the data to be able to anticipate how that data will need to be used in the future. Today's data mining systems are all about making associations between data-sets that were never imagined when the data was recorded.
Something like an encapsulated schema in the data set would be a great advantage, but you would have to have some way of normalising not only the data sets, but also the schemas, to allow automated queries.
Probably not a single supercomputer.
That seems like an awful lot of hardware for the budget, and the dispersed nature means that it is far more likely to act as a group of smaller clusters that talk together than a single super-computer.
You will never be able to drive the WAN links at sufficient speed to spread anything other than encapsulated data type problems (like SETI@Home but larger) to the remote sites.
I would be interested in seeing how the power spreads around the six sites, because although the total amount of compute power may seem high, chances are that the power in any single part of the environment will be a fraction significantly under half of the total. That should put it much further down the top 500 than the 'low 30s' quoted in the article.
Also, by the time it is delivered, there will be new systems springing up in China, USA, Germany and even the UK!
Oh well. I have contracted for Fujitsu in South Wales before. Maybe I ought to dust off the old CV. Might be interesting to do some non-AIX work for a while, and I now have Infiniband experience.
Come on. It's really not monotone
I found most of a single scale in the 'song', at least seven notes, although most of the song stays on the three notes of a major chord.
I think many of the readers here ought to listen to the 'singles' chart nowadays, because they will be appalled by what is counted as music by the people who actually buy it in volume. I must admit, however, that I was intrigued yesterday to hear two versions of Adele's Someone like you (the normal version and the Brits version - both head and shoulders better than much of the rest) on the chart at the same time.
Autotune, whether it is required or not, is added to the vocal in its most intrusive, buzzy manner for effect on so many songs now. I hesitate to say this, but JLS, who obviously can sing a bit (no autotune allowed on X-factor live performances, after all), have it on most of their songs now.
I do wonder what a 13-year-old girl will be doing while "gettin it down", and what makes "we so excited" (sic) while "partying", that is actually legal! Sex and drugs and alcohol should all be out.
Anyway, I have to decide whether to listen to Planet Rock or Radio 3 on my way home to flush this meaningless and annoying fluff from my mental musical cache.
Whether the control rods are above or below depends on the design. In BWR designs, the rods are below the core - see the Wikipedia article on boiling water reactors, which actually has a diagram of the Fukushima type of reactor - which is why I used the terms "inserted" and "removed" rather than "raised" and "lowered".
The simple fact is control rods in, reactor slowed. Control rods out, reactor quickened.
There are also different types of rods in some other types of reactor. There are moderator rods, whose purpose is to slow fast neutrons to become slow neutrons, which will actually speed up the reactor, and then there are the control rods, which are intended to quench the neutron flow to stop the reactor.
In a BWR type reactor, the whole core is immersed in water, and the water itself is a neutron moderator. There is only one type of rod, and they are all control rods. This is very different from PWR and AGR type reactors.
Again, this is what I understand from years of casual study, so I am not an expert.
The control rods form a part of the control system. They are not normally either completely in or completely out, they are normally partially inserted to control the speed of the reactor and thus the energy output. Whether they are above or below the core depends on the reactor design. These are apparently BWR (boiling water) reactors, and the rods are below the core, and held against hydraulic pressure by electromagnets or similar, such that should there be an interruption in electrical power, the rods will be automatically inserted by the pressure. This is a fail-safe system.
The rods allow the operators to 'damp down' the reactor (insert the rods) in times of low power demand or maintenance, and open it up (withdraw the rods) during periods of high demand. Under normal operation, you would never completely insert the rods, because that would stop the critical reaction, and effectively stop the reactor.
In the case of a serious event (such as an earthquake), it would be normal to completely insert the rods as a precautionary measure. This would effectively make the reactor subcritical, which will cause it to cool and eventually shut down. This does not make the reactor immediately safe, but will remove any chance of it melting down. Most of the residual energy in the core will come from decay products of the U235 fission reaction that are themselves radioactive with short half-lives, and thus will spontaneously break down, releasing energy in the form of heat. These will break down naturally over a matter of days to the point where the reactor will generate less heat than it will lose through convection or conduction, and thus become 'cold'. This is what I think is meant by 'cooling fuel'.
It is this gradual breakdown of the decay products that requires cooling until a sufficient amount of them have decayed to the point that natural cooling will be greater than the heating effect.
Conversely, during startup, removing the control rods will allow the neutron flow to increase (U235 will always spontaneously decay and produce neutrons even in a non-critical reactor) until the critical point is reached, and the reactor becomes self-sustaining. Looking at sources, it appears that for a completely shutdown or new reactor (one with no uranium decay products in the fuel rods), a source of neutrons can be used as a 'starter' to speed up the build up of the neutron flux to achieve a critical reaction more quickly.
For anybody who is worried by the term 'critical', this is not being used as in 'dangerous', but as in a tipping point, in this case where the nuclear reaction becomes self-sustaining.
If you trust it, there are very good articles on nuclear reactors, BWR type reactors, control rods, and nuclear starters in Wikipedia. These appear quite objective and appear to me to be trustworthy, at least they do not conflict with other sources I have read.
..."It has taken major efforts by humans to keep them from going critical."
Please be careful with your use of 'critical'. As far as operating nuclear reactors are concerned, 'critical' is normal. Misusing the term may cause those who do not understand the terminology to become needlessly alarmed.
I admit that a reactor being shutdown should not be critical once the control rods are inserted, but I seriously doubt that in this case, the cores would have become critical in the nuclear sense even if the cooling had completely failed and they were damaged by heat.
The design is such that if a complete meltdown did occur, the resultant puddle of radioactive mess would be distributed over a large enough area that a critical mass could not pool in any one place and allow an uncontrolled nuclear reaction to happen.
@Steve the Cynic
I'm a UNIX person through and through, and the first time I looked at this was in about 1987, with SVR2, which had code for DST, but had the cut-over dates to and from DST hard-coded in libc.
There is a configuration option that allows you to vary whether the clock is localtime or UTC without having to re-compile the kernel (a bit heavy handed in this day and age). It is based around setting UTC=yes early in the boot process (in /etc/default/rcS). The initial setting for a new install is supposed to be queried during the install process. I suppose I may have set it wrong, but I don't think that I would have made such an error, bearing in mind I was aware of the problem. Maybe I'll install again from the original media I had to see whether there was a flaw in the install process.
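For reference, the relevant fragment looked something like this on a Lucid-era system (the comments here are mine; exact wording varies by release, and later releases moved this setting elsewhere):

```shell
# /etc/default/rcS (Ubuntu 10.04-era)
# UTC=yes : the hardware clock is kept in UTC (the traditional UNIX choice)
# UTC=no  : the hardware clock is kept in localtime, for dual-boot with an
#           OS that insists on altering the underlying clock for DST
UTC=yes
```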
In my (biased, I admit) view, it's wrong to run a system on localtime, but I am in the UK, and winter time is the same as UTC (well, GMT anyway), so I have never had to worry about anything other than the Daylight Saving Time change. I guess that other locations have it harder.
Any reasonable system should run its internal clock on UTC all the time, regardless of timezone and whether DST is in effect, and just alter the presentation of local time according to its location, so 'adjusting' the clock should never be required. The hardware clock should never be changed on a correctly configured UNIX or Linux system, except to account for leap-seconds or clock drift.
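The "keep UTC internally, localise only for presentation" idea can be sketched in a few lines of Python (the zone names are just illustrative; `zoneinfo` needs Python 3.9+ and the system tz database):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+, uses the system tz database

# The 'clock' is always UTC; DST never touches it.
utc_now = datetime(2011, 7, 1, 12, 0, tzinfo=timezone.utc)

# Presentation is derived per location; London is UTC+1 in July (BST),
# New York is UTC-4 (EDT). The underlying instant never changes.
london = utc_now.astimezone(ZoneInfo("Europe/London"))
new_york = utc_now.astimezone(ZoneInfo("America/New_York"))
```

The point is that `london` and `new_york` are the same instant as `utc_now`; only the presentation differs, so no one ever has to "change the clock".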
This has always bugged the hell out of me when using a dual boot Linux/other-OS PC. Linux worked just swell changing the presented time according to DST, and then you boot your other operating system that ALTERS THE FREAKING UNDERLYING CLOCK!!!
When I put Ubuntu Lucid Lynx (10.04) on my Thinkpad last year, I was expecting this, but found that some misguided bright spark has added code to Linux (probably somewhere other than Canonical) that actually expects the underlying clock to be changed incorrectly by the other OS, and then 'breaks' the traditional UNIX/Linux time support to take this incorrect clock setting into account. Talk about working around someone else's errors.
Good for all the people who want it to just work and regularly boot both OS's, bad for anybody who actually understands what should be going on. Drove me nuts for an hour or two.
Hang on a sec. Back to iOS. Isn't it a UNIX/BSD derived OS?......
The trojan is an ELF executable, presumably for whichever processor runs in the D-Link router, but the vector to get it in there appears to be a compromised MS Windows system that then attempts to brute-force access to the router. So there are actually two components: one infects a Windows system, and the second is installed on the router by the first.
Thanks for explaining, although I did post this as a springboard to get replies.
As I have supported diskless UNIX systems for several years in the past (and will be again very shortly), I do understand about sharing a system image (which, incidentally, on Windows breaks a whole host of software unless you jump through hoops to redirect stuff away from the read-only C: drive to somewhere else - personal experience of pain here), and also about identical hardware on the desktop. It's not a new technology, except to Windows shops.
Citrix, VMWare and Microsoft are waaaaaaay behind the curve here compared to UNIX, both in diskless operation and remote display, and I have to feel that bending current Windows to make it fit a diskless/remote-display model is the wrong way to go about it. Better to have made a 'new' Windows with native thin-client support and some compatibility with 'old' Windows than to take a crowbar to the existing models. After all, MS has made a product switch before, with NT. Maybe Longhorn should have been this, but they apparently could not get it to work without ex-DEC system architects and IBM's assistance (WinNT history 101).
And I did talk about de-duplication, which is effectively what shared image is all about, and I did also talk about low-power, diskless desktop display systems, but after a quick search, the only vendor I could find selling them was Wyse, who sell a diskless system running Windows CE for about the same price (once you factor in peripherals) as a basic PC. Many people tried diskless PC's in the past, and almost all of them are now NOT doing it (the earliest I remember was DEC Pathworks, which had diskless DOS systems with a network filesystem).
My closing comments about having been here before with other architectures still stand IMHO. I still think we have been here before, and I also still think that the current in-vogue implementations are flawed and designed to maximise revenue for suppliers rather than provide a good environment for customers.
So let me get this straight
You put all of your PC desktop images into large servers held in the data centre.
You then use something on each desktop to run a virtual session to those large servers.
What are those devices on the desktop? Oh yes, PC's.
I know that the devices on the desktop will be cheap/low power PC's, but bearing in mind how powerful even a basic PC is nowadays, where is the saving?
If you were to sell it to me as an administrative saving, or a deployment cost saving, or even as a data de-duplication saving, then I may be interested. But as a power saving?
Of course, if the desktop devices were diskless, low-power-consumption (ARM-type power) real thin clients then this might make sense, but we've been here before, and commodity PC's always undercut specialist net devices (where are Tektronix, Oracle, NCD et al. with their thin clients, Netstations and X-terminals now? Oh yes, out of that business). The cost ends up being the screen, keyboard and I/O devices, not the PC itself.
Where savings are being made at the moment is that older low-power PC's are being used as the access devices, but this is unlikely to give you a power saving, and is not going to be a model for phase 2 and later roll-outs!
Does not affect LTS
LTS releases will still be 12.04 and probably 14.04. No difference.
6.06, 8.04 and 10.04 (Dapper, Hardy and Lucid) were for all versions (server, desktop, netbook), and that will not change.
@HMB - Word of advice
Check the native resolution of the display panel in the TV (it should be printed on the box or in the instruction manual). Too many TVs on the market today (and not always just the cheap ones) claim to be 1080p and will accept a 1080p signal, but will then downscale it to 1366x768 or 1680x1050 or whatever their native display panel can do.
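The rule of thumb is simple: a panel has to downscale a 1080p signal whenever its native grid is smaller than 1920x1080 in either dimension. A hypothetical helper (not from any standard library) just to make that concrete:

```python
def downscales_1080p(native_w, native_h):
    """True if a panel with this native resolution must downscale a
    1920x1080 input rather than display it pixel-for-pixel."""
    return native_w < 1920 or native_h < 1080

# Typical 'HD ready' panel, often sold as accepting 1080p:
assert downscales_1080p(1366, 768)
# A true 1080p panel:
assert not downscales_1080p(1920, 1080)
```

On Linux, comparing the mode the panel actually reports (e.g. via the EDID, as seen in `xrandr` output) against the signal you are sending is the quickest way to catch this.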
I had a big argument with a major on-line retailer about this when the published resolution for a TV I bought from them was wrong on their website, and they were extremely slow to accept the fault. Even then, I needed to go through their onerous RMA process, which takes about 2 weeks, before they would refund.
I went as far as forcing the driver to override the EDID value read from the TV to prove the case, and at the end of the day concluded that using the VGA port rather than the HDMI or DVI port was far more flexible and gave more control.
I've just re-read my post, and it's odd.
I've written about fictional things, set in the future, but described them in the past tense!
It seemed right at the time, but it feels funny now. I think the common oxymoron is "a future history".