@AC re Joke - again
And you can't convert decimal integers to octal either!
99 (decimal) actually equals 143 (octal)!
For this one, you're getting the pedantic Maths teacher!
2953 posts • joined 15 Jun 2007
No. I was serious and yes, I know that 8 is 010 and 9 is 011.
I was presuming that the person was using a calculator which worked in octal and decimal (and thus had 8, 9 and point keys) but which was in octal mode, so that when they were typing in something like 18.49, the calculator actually registered 14 (neither the 8, the 9 nor the decimal point would have registered). That would get the sums very wrong.
If you had actually bothered to think of the mechanics of it, you would have understood.
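To show the mechanics I mean, here is a quick sketch in Python (the function name and the key-filtering model are mine, a guess at how such a calculator would behave):

```python
# Hypothetical model of a calculator in octal mode: the 8, 9 and
# decimal-point keys are disabled, so pressing them registers nothing.
def octal_mode_keypresses(keys: str) -> str:
    return ''.join(k for k in keys if k in '01234567')

print(octal_mode_keypresses('18.49'))  # '14' - the 8, '.' and 9 silently dropped
```

Typing 18.49 really does come out as 14, which would get the sums very wrong indeed.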
By the way. I think that your floating point octal to decimal is incorrect.
When writing non integer octals to one significant digit, the numbering would be
0.1 octal, which is 1/8 (0.125 decimal)
0.2 octal, which is 2/8 (0.25 decimal)
0.3 octal, which is 3/8 (0.375 decimal)
0.4 octal, which is 4/8 (0.5 decimal)
0.5 octal, which is 5/8 (0.625 decimal)
0.6 octal, which is 6/8 (0.75 decimal)
0.7 octal, which is 7/8 (0.875 decimal)
1.0, which is 8/8 (1.0 decimal)
So in Octal 0.5 + 0.2 + 0.1 will equal 1.0, which it needs to do in order for arithmetic to work.
The first significant digit after the octal point (geddit) is 1/8ths, the second is 1/64ths, the third is 1/512ths and so on.
This means that by casual inspection, 0.44 octal HAS to be larger than 0.5 decimal.
By my calculations 12.44 (octal) is (1x8) plus (2x1) plus (4/8) plus (4/64), which makes it 10.5625 (decimal) or 10.56 rounded to decimal pence.
I can't see how you got 10.14. Even if you had worked in pence, 1244 octal is 676 decimal.
You got the 0.95 decimal correct, however.
You could do the exact arithmetic if you worked in pence or cents. Non-integer arithmetic in any base other than 10 hurts my head.
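For anyone who wants to check the arithmetic above, here is a minimal sketch in Python (the function name is mine):

```python
def octal_fixed_to_decimal(s: str) -> float:
    """Convert a fixed-point octal string like '12.44' to a decimal value.

    Digits after the octal point are worth 1/8, 1/64, 1/512, ...
    """
    int_part, _, frac_part = s.partition('.')
    value = float(int(int_part, 8)) if int_part else 0.0
    for i, digit in enumerate(frac_part, start=1):
        value += int(digit, 8) / 8 ** i
    return value

print(octal_fixed_to_decimal('12.44'))  # 10.5625
print(octal_fixed_to_decimal('0.44'))   # 0.5625, indeed larger than 0.5
print(int('1244', 8))                   # 676
```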
If you look at physical calculators that work in octal (rather than 'soft' calculators on PC's and smartphones that can change the keyboard layout according to the mode), they normally do have an 8 and a 9 key (and a decimal point key), because they normally also work in decimal and hexadecimal.
These keys are normally disabled when the calculator is in octal mode, so you could press them, but I think I would notice that they had not registered. Also, as far as I am aware, nobody has produced a calculator that does non-integer arithmetic in anything other than base 10 (what a mind-bending concept that would be!).
For reference, look up the Texas Instruments Programmer Calculator that was available in the '80s, and any number of modern scientific calculators from makers like TI and Casio that also work in different bases including octal.
BTW. I was working in Octal on systems before PC's were invented, so I do understand it. I learnt clock arithmetic in bases other than 10 when I was about 8 in the 1960's, when they actually taught Maths in junior (primary) school.
Did you realise that Humans were meant to have thirteen fingers?
It's obvious, because in the HHGTTG, the ultimate question and answer is "What do you get when you multiply 6 by 9?". Answer: forty-two.
This is indeed the case if you work in base 13.
The only reason we work in base 10 is because we have 10 fingers. In some instances, it would actually make better sense to work in base 6, because you could then use one hand for 0-5 and the other as a carry. This enables you to count up to 35 with your two hands.
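Both claims are easy to verify with a quick Python check:

```python
# 6 x 9 is 54 in decimal, and '42' read as base 13 is 4*13 + 2 = 54
print(6 * 9 == int('42', 13))  # True

# Base-6 two-hand counting: one hand holds the units (0-5),
# the other holds the sixes (0-5), so the maximum count is 5*6 + 5.
print(5 * 6 + 5)  # 35
```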
Gawd. I think up such crap!
If you had a proper multi-user, multi-access OS with proper security deployed, you could get away from this whole 'personal real/virtual' machine and software deployment model with all of the associated security problems that is blighting us.
Oh, how 20th century of me to bring up diskless shared image and thin client access UNIX systems that were being done in the 1980s and 1990s. It was not without fault, but was far better than using a crowbar to squeeze multiple virtual systems onto a single piece of hardware, with all of the associated duplication and waste that this entails.
One of my mantras has been "There is no place for a personal computer in Business" for a long time, and I believe that it is as true now as it has ever been.
It really makes me want to go back in time and nuke Redmond even more.
I understand making data accessible. And I also understand that having relationships between data items makes a lot of sense. But I really doubt URIs are the way to do it.
My concern is that using URIs (at least the way I understand them to work) will effectively hardcode location and shape information into the datasets in the same way that a schema does in a relational database, but with a fixed location. Unless someone can indicate otherwise, I believe that this makes the data almost completely non-portable.
OK, in a web-centric world this may make sense, but unless someone applies some clever caching technique, it means that you will only be able to use the data when you are connected.
Sometimes you want to take a fixed snapshot, or make sure that the data in your thesis does not change between you writing it, and it being read by your moderator.
I'm all for making data easily usable (god knows I've spent enough time massaging data over the years), but tying it to the Internet should be obviously stupid to anyone who isn't from the facebook generation.
I'm also not certain that it is reasonable for the person who originally structures and creates the relations in the data to be able to anticipate how that data will need to be used in the future. Today's data mining systems are all about making associations between data-sets that were never imagined when the data was recorded.
Something like an encapsulated schema in the data set would be a great advantage, but you would have to have some way of normalising not only the data sets, but the schemas, to allow automated queries.
That seems like an awful lot of hardware for the budget, and the dispersed nature means that it is far more likely to act as a group of smaller clusters that talk together than a single super-computer.
You will never be able to drive the WAN links at sufficient speed to spread anything other than encapsulated data type problems (like SETI@Home but larger) to the remote sites.
I would be interested in seeing how the power spreads around the six sites, because although the total amount of compute power may seem high, chances are that the power in any single part of the environment will be a fraction significantly under half of the total. That should put it much further down the top 500 than the 'low 30s' quoted in the article.
Also, by the time it is delivered, there will be new systems springing up in China, USA, Germany and even the UK!
Oh well. I have contracted for Fujitsu in South Wales before. Maybe I ought to dust off the old CV. Might be interesting to do some non-AIX work for a while, and I now have Infiniband experience.
I found most of a single scale in the 'song', at least seven notes, although most of the song stays on the three notes of a major chord.
I think many of the readers here ought to listen to the 'singles' chart nowadays, because they will be appalled by what is counted as music by the people who actually buy it in volume. I must admit, however, that I was intrigued yesterday to hear two versions of Adele's Someone like you (the normal version and the Brits version - both head and shoulders better than much of the rest) on the chart at the same time.
Autotune, whether it is required or not, is added to the vocal in its most intrusive, buzzy manner for effect on so many songs now. I hesitate to say this, but JLS, who obviously can sing a bit (no autotune allowed on X-factor live performances, after all), have it on most of their songs now.
I do wonder what a 13 year old girl will be doing while "gettin it down" and which makes "we so excited" (sic) while "partying" which is legal! Sex and drugs and alcohol should all be out.
Anyway, I have to decide whether to listen to Planet Rock or Radio 3 on my way home to flush this meaningless and annoying fluff from my mental musical cache.
Whether the control rods are above or below depends on the design. In BWR instances, the rods are below the core - see the wikipedia article on boiling water reactors, which has a diagram of the Fukushima type reactor - which is why I used the terms "inserted" and "removed" rather than "raised" and "lowered".
The simple fact is control rods in, reactor slowed. Control rods out, reactor quickened.
There are also different types of rods in some other types of reactor. There are moderator rods, whose purpose is to slow fast neutrons so that they become slow neutrons, which will actually speed up the reactor, and then there are the control rods, which are intended to quench the neutron flow to stop the reactor.
In a BWR type reactor, the whole core is immersed in water, and the water itself is a neutron moderator. There is only one type of rod, and these are all control rods. This is very different from PWR and AGR type reactors.
Again, this is what I understand from years of casual study, so I am not an expert.
The control rods form a part of the control system. They are not normally either completely in or completely out, they are normally partially inserted to control the speed of the reactor and thus the energy output. Whether they are above or below the core depends on the reactor design. These are apparently BWR (boiling water) reactors, and the rods are below the core, and held against hydraulic pressure by electromagnets or similar, such that should there be an interruption in electrical power, the rods will be automatically inserted by the pressure. This is a fail-safe system.
The rods allow the operators to 'damp down' (insert the rods) the reactor in times of low power demand or maintenance, and open it up (withdraw the rods) during periods of high demand. Under normal operation, you would never completely insert the rods, because that would stop the critical reaction, and effectively stop the reactor.
In the case of a serious event (such as an earthquake), it would be normal to completely insert the rods as a precautionary measure. This would effectively make the reactor subcritical, which will cause it to cool and eventually shut down. This does not make the reactor immediately safe, but will remove any chance of it melting down. Most of the residual energy in the core will come from decay products of the U235 fission reaction that are themselves radioactive with short half-lives, and thus will spontaneously break down, releasing energy in the form of heat. These will break down naturally over a matter of days to the point where the reactor will generate less heat than it loses through convection or conduction, and thus become 'cold'. This is what I think is meant by 'cooling fuel'.
It is this gradual breakdown of the decay products that requires cooling until a sufficient amount of them have decayed to the point that natural cooling will be greater than the heating effect.
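To illustrate the shape of the cooling curve (and only the shape - the half-life and heat figures here are made up by me, and real decay heat is the sum over many isotopes with different half-lives, not one), a Python sketch:

```python
# Illustrative only: residual heat from a single hypothetical decay
# product. Both constants below are invented for the example.
HALF_LIFE_HOURS = 12.0   # hypothetical half-life
INITIAL_HEAT_MW = 20.0   # hypothetical heat output just after shutdown

def residual_heat_mw(hours_after_shutdown: float) -> float:
    """Exponential decay: the heat halves every HALF_LIFE_HOURS."""
    return INITIAL_HEAT_MW * 0.5 ** (hours_after_shutdown / HALF_LIFE_HOURS)

for days in (0, 1, 3, 7):
    print(days, round(residual_heat_mw(days * 24), 4))
```

Even with these toy numbers you can see why active cooling is needed for days, not hours: after one day the core is still producing a quarter of its initial decay heat.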
Conversely, during startup, removing the control rods will allow the neutron flow to increase (U235 will always spontaneously decay and produce neutrons even in a non-critical reactor) until the critical point is reached, and the reactor becomes self-sustaining. Looking at sources, it appears that for a completely shutdown or new reactor (one with no uranium decay products in the fuel rods), a source of neutrons can be used as a 'starter' to speed up the build up of the neutron flux to achieve a critical reaction more quickly.
For anybody who is worried by the term 'critical', this is not being used as in 'dangerous', but as in a tipping point, in this case where the nuclear reaction becomes self-sustaining.
If you trust it, there are very good articles on nuclear reactors, BWR type reactors, control rods, and nuclear starters in Wikipedia. These appear quite objective and appear to me to be trustworthy, at least they do not conflict with other sources I have read.
..."It has taken major efforts by humans to keep them from going critical."
Please be careful with your use of 'critical'. As far as operating nuclear reactors are concerned, 'critical' is normal. Misusing the term may cause those who do not understand the terminology to become needlessly alarmed.
I admit that a reactor being shutdown should not be critical once the control rods are inserted, but I seriously doubt that in this case, the cores would have become critical in the nuclear sense even if the cooling had completely failed and they were damaged by heat.
The design is such that if a complete meltdown were to occur, the resultant puddle of radioactive mess would be distributed over a large enough area that a critical mass would not pool in any one place to allow an uncontrolled nuclear reaction to happen.
I'm a UNIX person through and through, and the first time I looked at this was in about 1987, with SVR2, which had code for DST, but had the cut-over dates to and from DST hard-coded in libc.
There is a configuration option that allows you to vary whether the clock is localtime or UTC without having to re-compile the kernel (a bit heavy handed in this day and age). It is based around setting UTC=yes early in the boot process (in /etc/default/rcS). The initial setting for a new install is supposed to be queried during the install process. I suppose I may have set it wrong, but I don't think that I would have made such an error, bearing in mind I was aware of the problem. Maybe I'll install again from the original media I had to see whether there was a flaw in the install process.
In my (biased, I admit) view, it's wrong to run a system on localtime, but I am in the UK, and winter time is the same as UTC (well, GMT anyway), so I have never had to worry about anything other than the Daylight Saving Time change. I guess that other locations have it harder.
Any reasonable system should run its internal clock on UTC all the time, regardless of timezone and whether DST is in effect, and just alter the presentation of local time according to its location, so 'adjusting' the clock should never be required. You should never have to change the hardware clock on a correctly configured UNIX or Linux system, except to take into account leap-seconds or clock-drift.
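The principle is easy to demonstrate with Python's datetime module: the underlying instant never changes, only the presentation does.

```python
from datetime import datetime, timezone, timedelta

# One instant, kept internally in UTC.
instant = datetime(2011, 3, 27, 1, 30, tzinfo=timezone.utc)

# Presented as UK summer time (UTC+1): the wall-clock reading differs...
bst = instant.astimezone(timezone(timedelta(hours=1), 'BST'))
print(bst.hour, bst.minute)   # 2 30

# ...but the instant itself is identical, so nothing needed 'adjusting'.
print(instant == bst)         # True
```

This is exactly what a correctly configured UNIX box does: the kernel clock stays on UTC and only the display changes with the timezone rules.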
This has always bugged the hell out of me when using a dual boot Linux/other-OS PC. Linux worked just swell changing the presented time according to DST, and then you boot your other operating system that ALTERS THE FREAKING UNDERLYING CLOCK!!!
When I put Ubuntu Lucid Lynx (10.04) on my Thinkpad last year, I was expecting this, but found that some misguided bright spark has added code to Linux (probably somewhere other than Canonical) that actually expects the underlying clock to be changed incorrectly by the other-OS, and then 'breaks' the working traditional UNIX/Linux time support to take this incorrect clock setting into account. Talk about working around someone else's errors.
Good for all the people who want it to just work and regularly boot both OS's, bad for anybody who actually understands what should be going on. Drove me nuts for an hour or two.
Hang on a sec. Back to iOS. Isn't it a UNIX/BSD derived OS?......
The trojan is an ELF executable, presumably for whichever processor runs in the Dlink router, but the vector to get it in there would appear to be a compromised MS Windows system that then attempts to brute-force access to the router. So there are actually two components, one of which infects a windows system, and the second of which is installed on the router by the first.
Thanks for explaining, although I did post this as a springboard to get replies.
As I have supported diskless UNIX systems for several years in the past (and will be again very shortly), I do understand about sharing a system image (which, incidentally, on Windows breaks a whole host of software unless you jump through hoops to redirect stuff away from the C: drive, which will be read-only, to somewhere else - personal experience of pain here), and also identical hardware on the desktop. It's not a new technology except to Windows shops.
Citrix, VMWare and Microsoft are waaaaaaay behind the curve here compared to UNIX, both in diskless and remote display, and I have to feel that bending current windows to make it fit into a diskless/remote display model is the wrong way to go about it. Better would be to have made a 'new' windows with native thin client support and some compatibility with 'old' windows, than using a crowbar on the existing models. After all, MS did product switch before with NT. Maybe Longhorn should have been this, but they apparently could not get it to work without ex DEC system architects and IBM's assistance (WinNT history 101).
And I did talk about de-duplication, which is effectively what shared image is all about, and I did also talk about low power, diskless desktop display systems, but after a quick search, the only company I could find selling them was Wyse, who sell a diskless system running Windows CE for about the same price (once you factor peripherals in) as a basic PC. Many people in the past tried diskless PC's, and almost all of them are now NOT doing it (the earliest I remember was DEC Pathworks, which had diskless DOS systems with a network filesystem).
My closing comments about having been here before with other architectures still stand IMHO. I still think we have been here before, and I also still think that the current in-vogue implementations are flawed and designed to maximise revenue for suppliers rather than provide a good environment for customers.
You put all of your PC desktop images into large servers held in the data centre.
You then use something on each desktop to run a virtual session to those large servers.
What are those devices on the desktop? Oh yes, PC's.
I know that the devices on the desktop will be cheap/low power PC's, but bearing in mind how powerful even a basic PC is nowadays, where is the saving?
If you were to sell it to me as an administrative saving, or a deployment cost saving, or even as a data de-duplication saving, then I may be interested. But as a power saving?
Of course, if the desktop devices were diskless, low power consumption (ARM type power) real thin clients then this may make sense, but we've been here before, and commodity PC's always undercut specialist net devices (where are Tektronix, Oracle, NCD et al. with their thin-clients, Netstations and X-terminals now? Oh yes, out of that business). The cost ends up being the screen, keyboard and I/O devices, not the PC itself.
Where savings are being made at the moment is that older low-power PC's are being used as the access devices, but this is unlikely to give you a power saving, and is not going to be a model for phase 2 and later roll-outs!
Check the native resolution of the display panel in the TV (it should be printed on the box or in the instruction manual). Too many TV's (and not always just the cheap ones) on the market today claim and will accept a 1080p signal, but will then downscale it to 1366x768 or 1680x1050 or whatever their native display panel can do.
I had a big argument with a major on-line retailer about this when the published resolution for a TV I bought from them was wrong on their website, and they were extremely slow to accept the fault. Even then, I needed to go through their onerous RMA process, which takes about 2 weeks, before they would refund.
I actually went through forcing the driver to override the EDID value read from the TV to prove the case, and at the end of the day concluded that using the VGA port rather than the HDMI or DVI port was far more flexible and gave more control.
...there is no guarantee that the overhaul that Obama wants is the one that most of the commenters here want. He may just make it easier for Big Business to get their patents through. It depends on whether the problem he perceives is that the patents are not working, or that they take too long to grant.
I suspect the latter (him being lobbied by the people with the deep pockets), which will probably make things worse from our perspective, not better.
"I always get the impression the utilities were written by clever people who just happen to be lazy and can't spell properly"
Actually, it was more like the Teletype ASR-33's in common use as terminals back when UNIX was being formed being sooooooooooooo sloooooooow, that people abbreviated commands and flags so that they could work reasonably quickly. It is also the reason why ed is incredibly terse, but ultimately very powerful as a line editor. I recommend to every serious UNIX/Linux user that they learn ed, just so they can use vi and sed effectively.
Another explanation is that they were lazy, a positive attribute for all good system admins (do what's needed efficiently and with the least effort). At least, that's my opinion.
The downside of the VMS DCL command line model was that for every command, you had to add entries to the DCL command dictionary, whatever that was called (it's been a long, long time since I did any DCL configuration, and much of that was on RSX/11M not VMS). All of the command line parsing and in-line help that allowed abbreviations was done by the DCL command processor, although I do believe that it passed any unprocessed arguments through to the program as a last resort.
One of the most useful and irritating features (both at the same time) is that UNIX-like operating systems left that to the commands, although if you look at ancient UNIXes, there were very strong conventions that should have been followed (if you look back at the USENET archives, there are many quite animated discussions about whether the shell should do more command line processing than it did/does). This meant that you could easily add commands to UNIX, and that internal shell commands, functions, aliases and external programs appeared almost seamless.
One of the quirks, however, was that some of the ancient commands, many still used today, never did abide by the conventions (the most obvious examples of this are the "dd" and "find" commands, which have been there almost since the Epoch began, and never did conform).
Over time and UNIX versions, the conventions broke down. One character flags gave way to words, and as soon as that happened, you have to code around whether an argument like "-Fart" is equivalent to "-F -a -r -t", or whether it is a wordy argument which actually tells the program to break wind (this is a quick example off the top of my head from the "ls" command. There may be better ones).
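The classic behaviour is what C's getopt implements, and Python's getopt module mirrors it, so the bundling (and the ambiguity once wordy options arrive) is easy to demonstrate:

```python
import getopt

# shortopts 'Fart' declares four single-character flags: -F -a -r -t.
# The bundled '-Fart' parses as all four.
opts, args = getopt.getopt(['-Fart'], 'Fart')
print(opts)   # [('-F', ''), ('-a', ''), ('-r', ''), ('-t', '')]

# Once a command also accepts long, wordy options, the same letters
# mean something completely different:
opts2, _ = getopt.getopt(['--fart'], '', ['fart'])
print(opts2)  # [('--fart', '')]
```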
Then the BSD/GNU people got involved, and introduced what I regard as the abomination of the -- (double minus) argument, supposedly to allow the syntax to be extended, but actually misused by many post-linux developers to completely replace the original UNIX flag processing implemented by getopts.
Couple this with the fact that often the GNU and GNU-like commands were never documented by proper man pages (or even their supposed native "info" command), or even by informative usage strings, and that there was effectively no controlling influence (a flaw in the Open Source contributory model itself), and you get to today's mess. I can totally sympathise with you about it being difficult and awkward to use (I actually count VAX/VMS as the OS I would prefer to use after UNIX).
I look back fondly to V7 and SVR2 and SVR3 days, when conventions meant something, and people bothered to use them.
My coat is the sad old gabardine raincoat with the Lyons annotated UNIX source in the pocket (which, in case anybody bothers to read to the bottom of my past comments, I lost, but have found again).
I agree completely with your analysis and your sentiments, and even that Ubuntu peaked around Hardy Heron, but what I have difficulty with is whether limiting shareholding to small investors could be achieved, and whether it would work.
I also have a problem in how you would keep the shares with small investors, unless they were bought and sold through a market other than one of the stock markets, or managed completely internally making it a privately held company with an internalised share market (and I don't know how that would be organised or regulated for a large number of investors).
Also, having shares is a method of raising capital to invest. It is not a way of attracting a revenue stream, regardless of how it is implemented.
With shares, there is always the expectation that the shareholder could get their money back. I think that you are alluding to a micro-donation scheme, where you give them money and expect to gain some influence as a result. This is more like a subscription.
I've briefly looked at the charity shares model and I must admit that I don't understand how it works (probably my bad). It looks to me as if you effectively lend your money to a charity for them to use and get financial value from, and you get the satisfaction of knowing that you are helping, receiving value back in the way of ethical satisfaction rather than financial gain. I can't see how that can be a business model for an organisation like Canonical. They still need to get some revenue stream in.
Of course, I may have misunderstood what you are saying (in fact, that is probably quite likely).
"lazily cruise along singing along to Radio 4"
What do you listen to on Radio 4 that you can sing along to?
The only thing I can think of is Desert Island Discs, and you don't even get the whole tracks on that. Or maybe it is the music rounds of "I'm Sorry I Haven't a Clue", such as "Pick Up Song", which I admit is difficult not to sing along with, but which only lasts for a minute or two!
I find (especially listening to the Today programme interviewing politicians) that I get angry, and end up shouting, impotently, at the radio!
Maybe MS are trying to make sure that older OS's are retired as well. I'm sure that IE8 could be made to run on XP, but why give users the choice to not pay MS more money by replacing the OS.
Interestingly, I still run a system that I boot on occasion which is Win 2000 Pro. Firing up IE6 on this (IE 7 cannot be installed) already says "Install another browser", with a link to IE8 which then tells me it can't run on that OS.
I know that Win 2000 is out of support now, so I have no cause for complaint (this system has some legacy payroll software that I can't re-install because of expired license key restrictions that allow me to run it in query mode, but not re-install it on another system - and the supplier no longer exists!), which I have to keep running for 3.5 more years to satisfy HMRC data retention requirements. It gets started about once every 6 months just to prove it still works.
Although I am in danger of resurrecting a long dead argument, I dispute that the ZX81 had more compute power than the BBC micro.
Although the BEEB's CPU only ran at 2MHz, whereas the ZX81 ran at 3.25MHz, the BEEB's CPU was a 6502 that executed most machine instructions in two to four clock ticks, whereas the Z80 in the ZX81 averaged more than four clock ticks per instruction, and some of them required far more.
This was the subject of endless controversy between Sinclair/Amstrad owners and BEEB/Apple/VIC20/C64 owners at the time.
In general, the Z80 instruction set was more advanced than the 6502, containing more instructions, more addressing modes, and even some proto 16 bit arithmetic instructions (by treating pairs of 8 bit registers as a 16 bit register). The 6502 contained enough instructions to do what was required though, and was considerably easier to program (the bible of both processors, written by Rodney Zaks: "Programming the Z80" was at least twice as fat as his "Programming the 6502", and had smaller print to boot). This made the debate a forerunner of the CISC vs. RISC argument, which hinged around similar concepts.
Remember, though, that BBC BASIC was blindingly fast for the time, and it remained the fastest BASIC available well into the advent of 16 bit micros, as documented by PCW's BASIC benchmark that they ran on all the systems that they reviewed.
And the BBC micro had a built in assembler, and a means of passing arguments between BASIC and your machine code, and also documented all of the OS I/O, sound and graphics calls that you could make from your machine code. And of course, the display in the BBC was totally hardware driven, freeing the CPU up to run your programs.
I used to write both Z80 code on a ZX81 and Spectrum, and 6502 on a BBC, and believe me, 6502 was easier, and for basic data manipulation, faster.
Actually, they were not thermal at all. The paper was covered with aluminium, which conducted electricity, and was 'written' by a wire that passed a current through the paper as it whizzed round on a rubber belt. Where the 'spark' hit the paper, the aluminium vaporized, letting the black paper below show through. Crude, noisy, and completely incompatible with listening to the radio. That is why the 'paper' was silver with black print, and also why you got a new high power supply for the ZX81 if you bought the printer.
I believe that they were not allowed to sell the printer in the US, because (surprise, surprise) it contravened the US electrical interference regulations.
This was an example of innovative thinking that made Britain good at creating ideas, but pretty crap at exploiting them!
Although out of the box, the ZX81 did not have sound, there were a number of third party add-ons that gave you sound.
I had a Quicksilver board that had an AY8910 on it, feeding a secondary modulator to add sound to the TV signal.
Quicksilver also had a number of other accessories including a high resolution graphics board and a programmable character board. You needed an interface board that sat between the ZX81 and the RAM pack, which provided two interface slots for the add-ons. Ugly as sin, and made the RAM pack wobble problem different, but as I added an external keyboard to mine, complete with power switch and reset button, I did not have to touch my ZX81 at all.
As I've said before on these forums, my ZX81 actually had 18K of memory, the 16K RAM Pack, the 1K of internal static memory re-mapped to a different address when the RAM Pack was plugged in, and another 1K of static RAM on the ULA side of the data bus isolating resistors to hold programmable characters that were accessible by changing the contents of the Z80's I register, which was used to hold the base address of the character generation table, normally in the ROM. Happy days!
It was not called FPGA, although the technology was similar. The real name was a ULA, or Uncommitted Logic Array, which was a bleeding edge technology in 1980/81, invented by Ferranti, a British company.
The ZX81, Spectrum and BBC Micro all had ULA's in them, to consolidate the function of dozens of 7400 TTL chips into a single large chip.
Unfortunately, the technology was still immature, and the production problems that Ferranti had were a large part of the shipping problems for all of the systems mentioned. I waited for nearly 6 months to get my BBC model B that had been ordered as soon as Acorn would take orders.
By the time later systems came along, it was possible to have your own design of chip fabricated moderately cheaply, so the ULA died an ignominious death.
Are you sure it was a MK14? Many universities used a KIM-1 (produced by Commodore, at least at the end), which was a similar product but used a MOS Technology 6502 rather than the Nat. Semi SC/MP which I believe was in the MK14.
I remember having to write a sine-wave generator on a KIM-1, and I managed to get a higher resolution wave than everyone else by having a lookup table with just 1/4 of the whole cycle in the lookup table, whereas everybody else had at least a half cycle (and some of them stupidly coded the whole cycle). Fitting it into 512 BYTES was a real challenge. But then I also managed to write a simple lunar lander on the Sinclair Cambridge programmable calculator in just 32 program steps!
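The quarter-cycle trick works by symmetry; here is a sketch of the idea in Python (the original would of course have been 6502 assembler with an integer table, so the names and table size here are mine):

```python
import math

N = 64  # table entries span just the first quarter cycle (0 to pi/2)
QUARTER = [math.sin(math.pi / 2 * i / N) for i in range(N + 1)]

def table_sine(i: int) -> float:
    """Full sine cycle (period 4*N steps) from the quarter-wave table."""
    i %= 4 * N
    if i <= N:                    # first quadrant: read directly
        return QUARTER[i]
    if i <= 2 * N:                # second quadrant: mirror in time
        return QUARTER[2 * N - i]
    if i <= 3 * N:                # third quadrant: mirror time and sign
        return -QUARTER[i - 2 * N]
    return -QUARTER[4 * N - i]    # fourth quadrant: mirror sign only

print(table_sine(N))       # 1.0 (peak)
print(table_sine(3 * N))   # -1.0 (trough)
```

Storing only the first quadrant gives four times the resolution for the same table memory, which is exactly why it fit where a full-cycle table would not.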
I'm playing devil's advocate here, because I'm pro-nuclear, but I would suggest that the amount of fissionable material on Earth is limited, just like any other earthbound energy resource.
This means that, in the long term, you cannot use 'sustainable' in connection with nuclear.
The way I look at it is that the Earth has many energy resources, but because of entropy they are all limited to one extent or another. Fossil fuels are stored energy from the Sun; nuclear fission relies on heavy elements (from the death of older stars) acquired during the formation of the Solar system; nuclear fusion uses light elements, probably from the formation of the Sun; wind and ocean currents are driven mainly by solar energy; tide is gravitational (mainly from the Moon, and tidal drag is slowing the Earth's rotation and causing the Moon to recede, so even this is finite); geothermal is (probably) natural nuclear fission (see above) combined with tidal effects from the Moon; biofuels capture energy from the Sun; and direct solar is (obviously) from the Sun.
So if you discount total matter conversion (and boy would that be useful), and fusion of hydrogen electrolysed from water (finite on Earth but lots of it), all energy except direct and indirect solar energy is limited. And the Sun won't last forever!
I have not checked, but I'm fairly certain that the license is valid and legal as long as you own the device, whether it is working or not (otherwise you may not have a valid license if the device breaks and you then repair it).
One thing I am not too certain about is transfer of the license. In the Microsoft and IBM EULAs (mainly for software) that I've read in the past, there are often quite stringent and restrictive rules applied to the transfer of the license. I'm not sure what happens to the license for Sony devices if you give away a device that contains licensed firmware. Does the recipient have a license to use it? Do they have to accept the EULA, and if so, how do they know about it? Can they be held to something if they have no right of return of the original device? All interesting questions. Anybody any ideas?
I'm taking issue with your "if you wanted to run Linux you should have bought a PC" line.
The Cell processor is interesting and different from any Intel/AMD/ARM processor that you will find in commodity PCs and other hardware, and using a PS3 to run Linux was the cheapest and easiest way of getting access to it. It is not for you to say that it has no value; that is down to the person doing it. As an intellectual exercise, being able to program a Cell has serious merit to some people (I know, I have talked to some of them).
It is quite clear that Sony did something that had significant impact on a part of their customer base (even if it was only a small part), and that should be investigated. But two wrongs do not make a right, so publishing details of a hack that includes Sony IP is almost certainly against copyright legislation. But if the hack includes no Sony code or firmware, then I'm not sure whether it is illegal. If all that was published was a technique utilising an API, especially if the API was itself published, then I don't think (and IANAL) that it would actually count as copyright infringement.
What may be an issue is whether identifying the technique is against national implementations of the European Union Copyright Directive. This is actually more restrictive than the DMCA when it comes to breaking a protection, but as it is not direct legislation, you need to see how each country has enacted it into its own lawbook.
I believe that, of all of the European Union countries, Germany is the one that has enacted the EUCD most closely in its national legislation, so he may actually be being accused of an offence against that, rather than straightforward copyright infringement. This means that he may be on very shaky ground.
Silly me. If I had read the referenced article, it is actually clear that this is what is done (and actually says that simple radios would not work). But unfortunately, the Reg. article was trimmed, and the relevant information was one of the bits that was removed. Just proves that it is important not to take Reg. articles at face value, something I should have learned by now.
OK, so you are using one FM frequency per phone? I can't see this scaling well.
I would guess that if you had a number of frequencies corresponding to fixed distances from the stage, you could use a GPS app in the phone to select the correct frequency based on what your phone thinks is the distance from the stage, but you would then have 'banding', where the person immediately in front of you is apparently in a different band and gets the music out of sync with you. I'd have to do some maths to see whether this is likely to be a problem, and also to see how many rings, and thus how many frequencies, you would need for, say, Central Park or one of the big stadiums.
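A back-of-envelope version of that maths (the 30 ms sync tolerance and the venue depth below are my own assumed numbers, purely for illustration): if each ring's radio feed is delayed to match the acoustic travel time to the middle of the ring, the ring can span twice the tolerance in travel time.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def rings_needed(venue_depth_m, sync_tolerance_s=0.030):
    """Number of concentric bands (one FM frequency each) needed so that
    the radio audio is never more than sync_tolerance_s out of step with
    the sound arriving through the air, assuming each band's delay is
    matched at its centre."""
    band_width_m = SPEED_OF_SOUND * 2 * sync_tolerance_s
    return math.ceil(venue_depth_m / band_width_m)
```

For a crowd stretching 300 m back from the stage that comes to about 15 bands, which already looks awkward on a crowded FM dial; tighten the tolerance to 10 ms and it roughly triples.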
Another thought. If they told you the distance from the stage and gave you the correct frequency (maybe on information posts in the venue), why use a smartphone at all? An ordinary FM radio with a digital tuner (or any phone with an FM radio) would suffice. Or maybe repeater speakers as part of the sound rig, using a similar system. That way it could be encrypted, and not give the show away at all.
I do not agree with your comparison between Firewire and Infiniband.
Let me say that I am not an Infiniband programmer, but I do support HPC systems using IB and RDMA. Having said that, I do, IMHO, understand a bit about how RDMA is implemented, at least on the UNIX systems I support.
RDMA provides (amongst other things) one-sided communication, allowing a system (A) to perform a memory operation in another system's (B) memory space without the involvement of the second system's OS. But that does not mean that system B is completely divorced from the transfer process, nor does it mean that A has full, unrestricted access to B's memory.
Before an RDMA operation can be performed in IB on UNIX, system B has to set up a memory region, and also set up an access window to that region to allow system A to use it. System A is then able to access that region without B's involvement, but cannot (as long as there is no flaw in the HW/FW/SW stack) go outside that region.
This means that it is perfectly possible to have the benefits of RDMA without compromising the security of the entire OS, and if a long-term window is set up (say, for an HPC-type workload that runs for some time and uses the window for many transfers), the involvement of the OS on B is limited to setting up the window at the beginning and breaking it down at the end of its use.
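The registration-plus-window idea can be caricatured in a few lines. This is emphatically not the real InfiniBand verbs API (which registers memory regions with access flags, in C); it is a toy model with names of my own invention, showing why a remote peer cannot roam outside the window the owner set up:

```python
class RegisteredRegion:
    """Toy model of an RDMA window: the remote side ('system A') may only
    touch the bytes the owner ('system B') explicitly registered."""

    def __init__(self, memory, start, length, writable=False):
        self.memory = memory      # owner's buffer (a bytearray)
        self.start = start        # first byte the remote side may touch
        self.length = length      # size of the window
        self.writable = writable  # read-only unless the owner says otherwise

    def _check(self, offset, size):
        if offset < 0 or offset + size > self.length:
            raise PermissionError("access outside registered region")

    def remote_read(self, offset, size):
        self._check(offset, size)
        base = self.start + offset
        return bytes(self.memory[base:base + size])

    def remote_write(self, offset, data):
        if not self.writable:
            raise PermissionError("region registered read-only")
        self._check(offset, len(data))
        base = self.start + offset
        self.memory[base:base + len(data)] = data
```

Once the region exists, reads and writes within it need no further help from the owner's OS, but anything outside the window is refused, which is the distinction the Firewire comparison misses.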
Now I do not know whether Thunderbolt has this ability, or if it has, whether it is configured and used in MacOS, but just because RDMA is available does not mean that the system is completely compromised.
From what I read about Firewire (and this is just from the Web), the default was that RDMA was turned on and unrestricted. This is really the flaw, and it could almost certainly be addressed by careful system administration, but if you don't know what to fix, you won't do it. I have heard other stories from various Web sources that Firewire really did have this flaw, and that it could really be exploited by plug-in hardware. Even if the quoted example illustrates flawed system administration, just think how much useful information can be gleaned from direct access to the memory of a system.
Unfortunately, the good ideas of the hardware engineers do not always match the requirements of real-world environments. But you would have thought that someone in the driver design process would have gone "..Hang on a minute, don't you think that this opens a security hole?..", but then I have seen too little joined-up thinking in large organizations recently. Too many people still think that a PC is personal.
I reserve my judgement on Thunderbolt until there is more information.
but in this case, if the published policy was to use encrypted sticks, the worker was given an encrypted stick and was told to use it, but subsequently didn't merely 'because they had problems', and then did not get those problems addressed, this should be a serious disciplinary issue.
The employee should be reprimanded at the very least, and if the employment policies allow, held up in front of the rest of the work force to illustrate how important these things are. This is especially true if they are in any position of seniority.
If this is not done, the excuse will always be that 'it is an education issue', and we will see these things happening more and more.
Interesting. I went there, and was prompted to download a .EXE file that contained a PDF and an XPS file.
WTF? An EXE file!!! What's wrong with a straightforward download of the PDF? This is MS obfuscation at its worst.
Indeed, on the page for reading the license terms there is the following:
"Supported Operating Systems:Windows 2000;Windows Server 2003;Windows Vista;Windows XP"
JUST TO READ THE TERMS AND CONDITIONS.
As I am using a Linux system at an enlightened organization at the moment, a .EXE file is of little use without some form of MS tie-in. Sort of undermines your comment a bit.
OK. You do that, and you get a list of every style used, including the standard styles which have been slightly modified.
I had to take an MS Word document a few years ago which had been hacked together from several different documents, and do some work on it. When looking at the styles, each time one of the source documents had been modified, or the template changed, and then parts cut out and pasted together, you got another variant of the same style. In the list of styles, I had about 60 listed, many of which, from their name, should have been the same style.
It took me the best part of a week to trawl through the document, eliminating the minor variations, and consolidating the styles together. I got the number of used styles down to about a dozen.
This is not necessarily a problem with Word, but in the fact that most people who use word processors that use styles don't actually know how to use styles properly (I admit that I didn't know before cleaning this mess up).
This reinforces my view that ALL current word processors are too complex, and that all most people need (even those writing quite complex documents) is something only a little beefier than Wordpad.
I admit that I am biased, as I was an n/troff and runoff user before I came across Word 2 on DOS 2.1. I didn't like Word then, and I have to say, I would prefer not to use it now.
When I used to read the Hi-Fi press, music was all about live and recorded and listened to in acoustically favourable surroundings. The nirvana was having an acoustically neutral room built around the sound system.
The music quality was important then, and everybody was striving to have the music sound like it did when it was played.
Nowadays, it is only classical music, and a very small amount of audiophile recordings that have this goal. Almost everything else will be autotuned, compressed, expanded, mixed, spatially processed and otherwise messed around with to death.
As a result, it's pointless paying this much for headphones if you are listening to modern recordings, distributed digitally in a lossy format, in noisy environments. Pay a few tens of pounds for something that is comfortable and produces a sound you like, and buy a few pairs.
My current set for everyday use is a pair of generic Omega-brand ear-canal 'phones (a crap stick-the-name-on-anything brand) that cost 6 quid and that I found by trial and error. The sound is adequate, even with uncompressed audio, they block external noise out well, and they are so cheap that I don't worry about breaking or losing them. And I don't look like a twat wearing them on the tube (not that I ride the tube often). I do have a couple of pairs of in-ear Sennheiser 'phones whose sound I prefer, but I worry about knackering them every time I pull them out of my pocket.
My home-based Hi-Fi, which was always best-of-breed low-to-midrange kit at purchase time (cherry-picked NAD, Project, JVC, Technics and a pair of ancient Keesonic [a niche brand from 25 years ago] speakers), has a pair of mid-range over-ear Sennheisers and a pair of elderly Beyer-Dynamic headphones. I am very happy with the sound, and most people still go Wow! when they compare it to their modern systems. Especially when playing vinyl!
Bring back real music, that's what I say. Especially get rid of autotune. Anybody who can't sing in tune without it should be booed off stage.
Ignoring the image, why 200 bytes? Surely: camera location, <=4 bytes (4 billion locations should be sufficient across the UK); number plate, <=7 characters for UK plates (let's be generous and allow 10); and a timestamp, again let's be generous and allow 8 bytes (UNIX per-second timestamps can still be represented in 4 bytes until 2038).
Total, 22 bytes per record, maybe with some overhead.
Multiply by your 350 million records a day (I'm not sure whether this is actually accurate, as not every vehicle is driven every day, and I want to be shown the 10 ANPR cameras a day that I pass), and you still only come up with around 10GB a day. Deem that data useless after 6 months, and you're talking about 2TB, or one current-generation high-end SATA disk in total.
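The arithmetic checks out (using the record layout and the quoted 350-million-a-day figure assumed above):

```python
BYTES_PER_RECORD = 4 + 10 + 8     # location + plate + timestamp = 22 bytes
RECORDS_PER_DAY = 350_000_000     # daily ANPR reads, as quoted
RETENTION_DAYS = 183              # roughly six months

daily_gb = BYTES_PER_RECORD * RECORDS_PER_DAY / 1e9
total_tb = daily_gb * RETENTION_DAYS / 1000

print(f"{daily_gb:.1f} GB/day, {total_tb:.1f} TB for six months")
# -> 7.7 GB/day, 1.4 TB for six months
```

So "around 10GB a day" and "2TB" are comfortable over-estimates, with headroom left for per-record overhead and indexing.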
The recorded image is only necessary if you intend to use the record for enforcement (for, say, driving without road tax, MOT or insurance), but probably not for locating people by spotting their cars. If you had automatic discard of images of vehicles that were legal and not on a hit-list, then you would not need to keep all of the images anyway.
Bearing in mind that many times that volume of data is transmitted and stored daily in any number of data applications, it does not seem too far-fetched after all.
BTW, I wish to state that I do not condone this information being kept and used for tracking purposes at all. I don't mind it being used to get illegal cars/drivers off the road, but that's it as far as I am concerned.
If I read it correctly, the MPEG-LA consortium want H.264 decoders in the silicon, so that to play a stream you do some setup (window size and position etc.) and then just throw the encoded byte stream directly at the hardware. Almost zero CPU usage.
If this is the case, you may be able to use some of the other accelerated features for WebM, but not the direct decoding.
Again, I am happy to be corrected by someone with more knowledge.
Biting the hand that feeds IT © 1998–2019