The problem with regarding BSD as a Genetic UNIX is that there is no AT&T code left in it, after the huge brouhaha over removing any code covered by the UNIX V7 educational licence that BSD relied on in the 1980s!
A UNIX educational licence specifically prohibits the use of Bell Labs/AT&T UNIX code in a commercial OS offering, or even for teaching purposes (I actually was a Bell Labs V6 and AT&T V7 UNIX licence holder for a number of years), and UNIX System Laboratories took the Regents of the University of California, Berkeley to court to enforce this when they (UCB) started commercialising BSD. BSD did not take out a System III or System V licence to cover any code; they just replaced it, leading to 4.4BSD-Lite and FreeBSD.
My view is at odds with what Wikipedia says about BSD in the main article. In my opinion, for a UNIX to be considered a 'Genetic' UNIX, it must contain actual code, not just design ideas.
Also, in order to use the UNIX trademark, a UNIX-like OS must be subjected to, and pass, the Single UNIX Specification (SUS) verification suite. AIX does, as do Solaris, HP/UX, Tru64 UNIX and SCO UnixWare. Linux and BSD do not, so they cannot legally be called UNIX.
Darwin/Mac OS X falls into the same "not Genetic UNIX" category, even though it qualifies for the UNIX 03 branding (a point I did not realise until I researched it just now).
And Slackware is definitely not derived from any Bell Labs/AT&T code (it's Linux, with GNU's Not UNIX code running on top, like any other Linux).
See http://www.levenez.com/unix, and try to find any feed from an AT&T UNIX into Linux. There are a couple from IRIX, and a few feeds from Plan 9, but I think that these were filesystems, GL and utilities rather than principal parts of the OS.
Don't get me wrong. I have nothing against BSD; it is a family of fine OSes. But it really is UNIX-like rather than UNIX or a Genetic UNIX.
My 'alternative' universe. What's yours like?
I said up front that I make a living supporting AIX. As it happens, I am currently contracting for IBM on a customer site, and have in the past been an IBM employee for a number of years.
But with my 20+ years of AIX (mostly outside of IBM) and over 30 years of other UNIX experience, including 10 years of Linux, in fields such as banking, utilities, engineering, education and government, on systems running from microprocessors through departmental minis to Amdahl mainframes, AIX really has been this easy, at least if sensible design (i.e. what the manuals say plus a bit of common sense) has been followed. And it is still improving! (No, this is not a sales pitch, merely my observations.)
I will stand my UNIX experience up against anybody else's. When I started working with UNIX in 1978, there were about half-a-dozen UNIX systems in the UK, and the total number of people with any experience in the UK probably did not exceed 100. And I have worked almost continuously with UNIX ever since.
Back to AIX: no platform is without warts, and as good as I perceive it to be, sometimes you have problems. But where I am currently, we have in my area of responsibility 300+ AIX systems, being thrashed (literally) 24 hours a day, with tens of TB of data changing on a daily basis, managed by a team of 5 people, some of whom have other responsibilities. On the same site, we have large Linux and Windows deployments, and there is also a mainframe doing critical work.
Our current uptime on the AIX systems is low at around 60 days (having had some global power work done in the last two months), but normally runs into the hundreds of days. In those 60 days, we have had about 8 disk failures out of an estate of about 4,000, all of which were handled without any outage (including system disks). In the past, we have had memory failures, with the systems continuing to run until a convenient time to move the workload, and CPUs taken out of service in the same manner. We've also replaced complete RAID adapters (in an HA RAID environment), power supplies and cooling components without losing service. This is, BTW, a clustered environment.
We are just about to embark on replacing hundreds of RAID adapter cache batteries, and we do not expect to take *any* service impact at all during the work.
I would suggest that if the systems you 'have been forced' to use have been a bad experience, either you are not giving the whole picture (like if you think that you need the latest and greatest Open Source products - which would really be an application problem, not a deficiency of AIX or POWER platform), or there has not been due diligence in setting them up. Get someone who knows what they are doing in on the installation!
I have often found that sites tend to be partisan. Solaris or HP/UX sites often do not embrace AIX enough to understand how to run it properly, and vice versa. But I do try to keep an open mind, and I appreciate that I am not as knowledgeable about recent Solaris or HP/UX systems as I am about AIX. But in recent years, I have perceived them to be less innovative than the IBM offering, and when I last had serious work to do on them, they just felt like they had been left in the last century when it comes to RAS and sysadmin tasks. But that's my opinion. I'm sure there are other opinions out there.
But I would say that AIX looks destined to be the last Genetic UNIX standing, given HP's and Oracle's current attitudes towards their products, and Linux still has a way to go in enterprise environments to replace it. I hope so, anyway, as I would like to reach retirement age without losing my career!
The problem is....
that even though Linux provides a UNIX-like programming and application environment, when it comes to enterprise features, even the best Linux distro is not as easy to keep running as the best of the UNIX platforms.
I'm biased, I admit. I earn my living supporting AIX. But if there is a problem on one of 'my' AIX systems, it reports it to me, gathers the debug information, and on the ones so configured will even call the problem in to IBM. Often, if it is a duplexed part like a power supply, fan or disk, the part can be replaced without taking the service down, and even PCI cards can be hot-swapped on many models. CPU and memory failure can even happen and the system can continue running. It's not quite Non-Stop but...
If mission criticality is an issue, it is possible to configure a system such that a partition can be migrated on the fly to another suitable system. AIX has been able to do live partition migration for a few years now.
It is just easier using AIX than trying to patch together something similar with ESX or other virtualisation technology. This may change over time, but it has not yet, and I cannot see any real evidence that any of the large distro providers are doing anything to address it.
The standard complaint I hear is that some people regard UNIX as 'backward' compared to Linux, but that is the price of stability, and I'm sure that BSD users will say the same. I would say that Linux runs the risk of stumbling while it is running forward.
I do also support SuSE systems, and run Ubuntu on my own systems, and there is no doubt in my mind that if asked (and there were no real financial hurdle), I would recommend an AIX system over a Linux one (but, of course, Linux over Windows).
When I talk to people who have grown up with Linux without having used UNIX, it is clear that without that perspective, they just cannot appreciate the difference, and simply regard Linux as UNIX on the cheap.
Moving parts misnomer
I think that what was meant was "discrete components" rather than moving parts.
If you go back to the '60s, a laser was made up of several components, including an exciter, a lasing element and a collimator. They tended to be about the same size as a brick, very power-inefficient, and cost thousands of pounds.
They also had quite short operational lifetimes.
You can still buy lasers like this, but they are mainly used for high power applications.
Solid-state lasers changed all of this. We would not have CD/DVD/Blu-ray, optical communications, laser pointers, or a whole raft of gadgets and toys if they had not been invented.
Not bad for a "solution looking for a problem to solve".
Round and round we go, where we stop, nobody knows!
Aren't we at the Itanium/x86_64 point again?
Surely the problem with all of these APU or GPGPUs is that suddenly we will have processors that are no longer fully compatible, and may run code destined for the other badly, or possibly not at all!
The only thing that x86 related architectures have really had going for them was the compatibility and commodity status of the architecture. For a long time, things like Power, PA, Alpha, MIPS, Motorola and even ARM processors were better and more capable than their Intel/AMD/Cyrix counterparts of the same generation, but could not run the same software as each other and thus never hit the big time.
Are we really going to see x86+ diverging until either AMD or Intel blink again?
Zippy the Pinhead Re: methane
Bearing in mind how much of a greenhouse gas methane actually is, it would be better to put the organics into a digester, extract the methane, and burn it as a fuel. It would then be the less damaging CO2 and water, and we would have gotten some useful energy from it, and what eventually goes into the landfill would be less of a hazard.
I heard Peter Mills of New Earth Solutions on Radio 4 who suggested that we should mine the plastics from landfill sites, if only to use them as a fuel, although he actually suggested re-using them, and only burning them when they could no longer be recycled.
I think that we need to examine how disadvantaged people in developing countries pick over their landfill sites to get every bit of useful material, down to the tins, bottles and plastic bags. It's not nice, but it gives these people a way of generating some money out of nothing, while reducing what is in the landfill to just the worthless waste.
I'm not suggesting that we should force people into a scavenger class (although bog knows, making the long term unemployed do this once in a while might teach them something valuable about their benefits), but it is clear that there are lessons that we 'superior western' countries could learn from our less fortunate cousins.
@Peter Simpson 1
Unfortunately, as matters have panned out, Sarah could and did quit the game!
I shall miss her.
Just try posting something that breaks the rules, and see whether it actually appears.
Sometimes things get through, and you see a "Rejected by moderator" on the thread, but normally they just don't get through.
It just shows that the Register has dedicated moderators.
My bugbear is that sometimes, when I post something that I don't think breaks the rules, I still get the post rejected, and I cannot find out which of the rules the moderator thinks I broke. I know it is down to the moderator and their decision is final, but just a single "Rejected because of rule X" would be useful. I had a public exchange with Sarah about this on the comments thread of the news item announcing the rules.
And I have one recent post (which was critical of the Reg. using an inappropriate stock picture on the revolving marquee headline) that did not appear, and was eventually rejected, but it took two weeks for it to be rejected. Strangely, for that two-week period, its status was neither accepted nor rejected, nor was it in 'limbo' (no status). It actually said "Updated on...." This was a new status to me!
Apple use HFS+ already
but only on devices that attach to a Mac.
It used to be that the first time you attached an iPod to a computer with iTunes installed, it would check what the computer was, and if a Mac, format the iPod with HFS+, and if a Windows system, use FAT32.
I found this out when I inherited a nearly-but-not-quite-broken iPod from my daughter after the dog chewed it, and had to install HFS+ support onto my Linux laptop to use it.
I soon worked out how to swap to FAT32 (what's the choice when considering two equally patent-encumbered filesystems?), even keeping the music loaded (ain't tar wonderful)!
Hmmmm. Forgot about the driver signing process.
I just don't use Windows enough for that to have been immediately apparent.
However ext2 IFS (http://www.fs-driver.org/) appears to be signed already, at least for Windows Vista. I know that Microsoft could withdraw the signing certificate, but...
We desperately need
someone to leak exactly which patents Microsoft are using as the tip of the wedge.
Whilst I believe they should be challenged, the likely ones are the FAT32 patents that are often quoted, #5,579,517 and #5,758,352. Unfortunately, these look like they still have 5 and 7 years respectively to run.
Maybe Microsoft are trying to make sure they get maximum value from these by building up a long list of licensees before the patents become useless for trolling.
Now, to reformat the microSD card used in my 'Phone to ext2 or journal-less ext4. I don't need no steenkin' Windows compatibility to attach to my Linux systems!
Actually, an interesting point: why don't companies making Android devices ship an ext2 driver for Windows as part of the application suite for their devices, and remove FAT support? After all, most users are used to putting buckets of crap on their Windows systems as soon as they get a new device, so why not a new filesystem? I know that there would be problems using cards from other devices, but how often do most people do that? Most people use the microSD card as fixed memory, and I'm sure that many would have to think hard about where the microSD card actually is.
I was going to say
exactly the same.
"obtained through their employer"
Thinkpads have a longevity in line with their robustness, and are very popular second-user systems. If you spot someone with a T30, or a T40 through T43 (and the odd T60 as well), chances are it's an ex-corporate machine doing sterling service for value- and quality-conscious individuals. Just look on eBay to gauge this popularity. A T43 will still do everything most people want to do on the move, especially if loaded with Linux.
I'm glad I agree with Andrew on something, even if it is something as mundane as a choice of laptop!
"(Perfectly legal if the last computer it was used on has been retired.)"
This really depends on the type of Windows licence provided with the old computer. If it's a full retail version, you are completely correct. If it's an OEM version, then the licence restricts you to the system that it was purchased on, and some OEM licence keys cannot be used for hardware from a different manufacturer (the installation process can check the BIOS identification string to check that the machine was made by the manufacturer who bought the OEM license).
MS will sometimes grant an activation string if you have to replace the motherboard as a result of a system failure, but I've found that recovery CDs in this scenario do not always work with different motherboards, at least for systems from large suppliers who use custom BIOSes. Simple answer is, if you can get a copy of a retail disk, guard it like it is gold.
I recently found this out when trying to license XP for a VirtualBox on my laptop, which runs Ubuntu (VirtualBox loads a specific BIOS in the VM which is completely unrelated to the actual system BIOS). I could not get it to accept the IBM OEM WinXP Pro key printed on the COA on the bottom of the machine until I cloned the BIOS identification strings in VirtualBox.
Of course, for a system integrator, providing a full retail licence will cost either them or their customer a lot more money than the heavily discounted OEM licence that Microsoft will sell them. This would put the supplier at a significant competitive disadvantage (I believe in the UK it is in the order of £50 per system) compared to their competitors who just use OEM licences, and, as a side effect, it ties them almost irrevocably to Microsoft, who will threaten to withdraw the OEM licence if they do anything that Microsoft doesn't like (like pre-installing Netscape Navigator or Lotus Notes/Symphony [old Symphony, not current], or shipping systems without an OS, or even with Linux pre-installed).
And of course, this also means that MS have a continual revenue stream as people replace their PC, and MS counts another Windows sale, even if it is an OEM one.
"You drink it, you piss it out, they collect it and serve it to someone else"
I think you're confusing proper beer with that fizzy cold stuff that appears to have almost displaced ale in too many pubs.
Funny, the taste of lager when warm and flat, together with its colour, does remind me of something along the lines of your comment!
re: bork bork bork
The data capture system was on the Internet, but it does not follow that the main DB server is. They could have (although probably didn't) written each census record to tape, and then bulk-loaded it into a completely standalone database system.
Most internet facing systems are a combination of an internet attached web server of some form, with only enough storage to hold transient data, together with a significant number of security layers, some of which may take part in the transaction, and one or more database servers.
Thus, the database system is only indirectly attached to the Internet, and cannot be directly attacked. One bank I worked at had more than 10 different security zones between the front-end web servers and the systems holding the databases.
The internet facing web server gathers your data, then commits it through secure protocols and intermediate systems to the backend, and then deletes the transient copy.
Normally, the gathering system has no way of bulk-loading data back from the database machine. It may be able to get individual forms back (in order to allow you to edit them), but this has to be done on an individual basis, and often the security checking is done off the internet-facing box.
This means that even if the Web facing system is hacked, without some authentication information for each address, it will not be able to load data from the database.
This is large web application design 101.
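That pattern — individual retrieval only, gated by per-address authentication, with no bulk export path — can be sketched roughly as follows. All the names and the access-code scheme here are illustrative guesses, not the actual census design:

```python
# Hypothetical sketch of "individual forms only, with per-address
# authentication". Illustrative, not the real census architecture.

FORMS = {}   # stands in for the back-end database
CODES = {}   # per-address access codes (e.g. printed on the paper form)

def store_form(address, access_code, form):
    FORMS[address] = form
    CODES[address] = access_code

def fetch_form(address, access_code):
    """Return a single household's form, and only with its own access
    code. Note there is deliberately no function that iterates over
    FORMS -- a compromised front end cannot bulk-load the data."""
    if CODES.get(address) != access_code:
        raise PermissionError("authentication failed for this address")
    return FORMS[address]

store_form("1 Acacia Avenue", "Z9X8C7", {"residents": 3})
print(fetch_form("1 Acacia Avenue", "Z9X8C7"))   # {'residents': 3}
```

The point of the design is the missing bulk interface: an attacker who owns the web tier still needs one valid code per address to extract anything.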
It is normal for there to be multiple security zones, such that at each boundary it is not possible to use any protocol other than the allowed one to get further into the network (implicit deny, explicit allow).
Much more likely is that if there really was a breach, it would have been one of the routes that are used for remote system administration, and once in, a path to export the data was constructed, although even this has problems.
As far as I can tell, there are around 25,000,000 residential addresses in the UK. If the census form could be encoded in 8KB, this would make the approximate size of the raw data around 200GB. This is not a huge amount of data as things stand today, but I would not want to squirt it through an SSH tunnel over the Internet!
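As a sanity check on those figures (both the 25,000,000 addresses and the 8KB per form are my own rough assumptions), the arithmetic is trivial:

```python
def census_raw_bytes(addresses: int, bytes_per_form: int) -> int:
    """Back-of-envelope size of the raw census data set, in bytes."""
    return addresses * bytes_per_form

# ~25 million residential addresses, ~8 KB per encoded form (guesses)
total = census_raw_bytes(25_000_000, 8 * 1024)
print(total / 10**9)   # 204.8 (GB) -- i.e. "around 200GB"
```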
I think that all of the posters who take this statement at face value ought to read some of the UK government security standards. These definitely exist, and they were not written by people who are security illiterate. See http://www.cesg.gov.uk
The problem is that they are difficult to interpret, and are couched in terms that many IT people don't understand (they talk a lot about data crossing security zones rather than about data being securely stored), and sometimes it seems like there is no real-world help in ensuring that a particular application or solution meets the requirements (government security auditors will often tell you that something is not compliant, but will not offer any advice on how to make it so, nor suggest security mechanisms during system design). Thus implementing a security solution often becomes an iterative process of attrition with the security people.
When I was last involved, it was even the case that some of the Infosec documentation describing what has to be done is classified as RESTRICTED, which does not help trying to implement what they say.
Generally, it is not a lack of standards that causes this type of data breach; it is implementation (often by companies contracted to supply services), or ignorance of the standards by individuals working on such data. Although there should be safeguards, it often only takes one person making a mistake to put complete datasets at risk, especially if there is any external route into the systems implementing the solutions.
forced - by law
In case you had not noticed, it is a criminal offence to not fill in a census form when requested, backed up by fines and a criminal record. Is that forced enough for you?
I was questioning the claim that the mainframe was never hacked, not the comment. Should have made myself more clear!
The problem is that the term 'mainframe' does not actually describe either a computer or an operating system.
The IBM 9370 running AIX/370 that sat under a desk at one of my previous jobs was a (baby) 'mainframe'. The 3090s running VM/CMS and RETAIN (an OS in itself) that I used when in IBM were 'mainframes'. The Amdahl 5890E running UTS and AT&T R&D UNIX was a 'mainframe'. The Honeywell 6180 running MULTICS was a 'mainframe'. LEO was a 'mainframe'. The IBM 370/168 running MTS I used at university was a 'mainframe'. The ICL 1904 and 2904 running George that many universities had were 'mainframes'. The DEC Systems 10 and 20 running TOPS were 'mainframes'. I could dig around and find a lot more 'mainframe' systems.
Now. Were none of these hacked? I can tell you for a fact that I hacked an Amdahl running R&D UNIX as part of my job more than once, and I must admit to breaking into accounts on MTS on the 370/168 while at University to get more computing budget to play the original Adventure (come on, it was 30 years ago. There must be a statute of limitations on this, surely!).
This article probably means an IBM mainframe running z/OS or its ancestors, probably using RACF. Even these, I'm sure, cannot claim never to have been hacked! I have just found this: http://www.os390-mvs.freesurf.fr/tenflaws.htm, in which item 9 clearly states that the author gained key 0 protection from a non-supervisor account on MVS. Sounds like hacking to me.
I will freely admit that current mainframes running z/OS are incredibly secure, but I ask again: where are the references that state a mainframe has never been hacked?
I would like to know
where the references to back this claim up are!
What you need is inertial navigation.
Submarines have used it for years when underwater, and surface ships and missiles used to use it before GPS satellites existed.
In fact, I seem to remember that German V1 and V2 missiles used a very primitive form of this for navigation. A documented way of crashing a V1 was to tip the giros by flipping it over wing-to-wing using a late mark Spitfire, Mosquito, Tempest or Mustang, all of which were fast enough to catch a V1.
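At its heart, inertial navigation is just integrating accelerometer readings twice: once to get velocity, again to get position, with no external fix required. A minimal 1-D sketch (all numbers illustrative):

```python
def dead_reckon(accels, dt, v0=0.0, x0=0.0):
    """1-D inertial dead reckoning: integrate accelerometer samples
    twice (acceleration -> velocity -> position)."""
    v, x = v0, x0
    for a in accels:
        v += a * dt    # first integration: velocity
        x += v * dt    # second integration: position
    return x, v

# A constant 2 m/s^2 acceleration sampled once a second for 10 s
x, v = dead_reckon([2.0] * 10, dt=1.0)
print(x, v)   # 110.0 20.0 (the crude rectangle rule overshoots the exact 100 m)
```

It also shows why inertial systems drift: any sensor bias gets integrated twice, so position error grows with the square of time.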
It goes back beyond the golden age, and pre-dates the term Science Fiction. I seem to remember Isaac Asimov commenting, in the foreword to one of his short stories, on the argument between the use of the two terms when Astounding Stories was being published (it's even older than Isaac (RIP), but he was representing the view of Hugo Gernsback, the founding editor).
Haven't heard that term in a long time!
It's not even mid-engined!
I was going to mention transputers in my last post
but I decided that it was long enough already!
This is completely wasted on ~100% of commercial software
In that part of the software market, it's all about rapid application development, and sod the efficiency. They rely on Moore's Law to make sure that by the time their software hits customer systems, the computers are powerful enough to cope.
So MIC processors will be completely wasted on commercial boxes, which is where the majority of the systems will be sold.
Even if someone (extremely cleverly) produces an IDE that can generate parallel code to make good use of many-cores, much of the workload that is done is not suited to run in a parallel manner anyway.
Apologies in advance to those that do, but most new programmers nowadays are never taught about registers, how cache works, the actual instruction set that machines use, and I'm sure that there are a lot of people reading even on this site who do not really understand what a coherent cache actually is.
I work with people who are trying to make certain large computer models more parallel, and they are very aware that communication and memory bandwidth is the key. Code that is already parallel tops out at a much smaller number of cores than the current systems that they have available can provide. And the next generation system, which will have still more cores, may not actually run their code much faster than the current one.
But even these people, many who have dedicated their working lives to making large computational models work on top 500 supercomputers, don't really want to have to worry about this level. They rely on the compilers and runtimes to make sensible decisions about how variables are stored, arguments are passed, and inter-thread communication is handled.
And when these decisions are wrong, things get complex. We found recently that a particular vendor's optimised matrix multiplication stomped all over carefully written code by generating threads for all cores in the system, ignoring the fact that all the cores were already occupied running the code's own threads. We ended up with each lock-stepped thread generating many times more threads during the matmul than there were cores, completely trashing the cache and causing multiple thread context switches. It actually slowed the code down compared to running the non-threaded version of the same routine.
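The blow-up involved is simple arithmetic; here is a sketch with illustrative numbers (not our actual core counts). For an OpenMP-based library, the usual cure is to cap the library's own thread team, e.g. by setting OMP_NUM_THREADS=1 before the run:

```python
def oversubscription(cores, outer_threads, inner_threads):
    """Ratio of runnable threads to cores when each worker thread
    calls a library routine that spawns its own thread team."""
    return outer_threads * inner_threads / cores

# Illustrative: 32 cores, one worker thread per core, and a vendor
# matmul that also spawns one thread per core inside *each* worker.
print(oversubscription(32, 32, 32))   # 32.0 -- 32x more threads than cores
print(oversubscription(32, 32, 1))    # 1.0  -- cap the library to 1 thread
```

With the library capped, the thread count matches the cores again, and the cache thrashing and context-switch storm go away.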
It will be a whole new ball game even for these people who do understand it if they have to start thinking still more about localization of memory, and if they will have difficulty, the average commercial programmer writing in Java or C# won't have a clue!
what the advantage of a MIPS processor over ARM is.
ARMs are already cheap-as-chips, low power, and easy to license. Several Chinese companies are already making SoC implementations, with graphic assists on the silicon, including Rockchip, who seem to produce millions of the things to go in chipod and apad type devices.
They need to at least recover their complete court costs in a timely manner. Otherwise, Lockheed et al. and their proxies will just tie SpaceX up in court until their budget is exhausted.
This is the problem with the US (and increasingly European) legal systems.
Anyway, I'm hoping that they successfully defend their reputation.
BBC iPlayer - another brick in the wall.
I'm cross, not because AIR is going, but because it proves the trend that is making Linux a less suitable OS for ordinary users.
BBC iPlayer was one of the few platforms for content delivery with content expiry that actually worked reasonably well.
The reason why this is important revolves around the perfectly understandable attitude of the content owners wanting to protect their content, and thus their existence.
Like it or not, free content is not the way that the world is going, and the large production companies investing millions in current TV series and films will not license their content for delivery channels unless those channels at least make it difficult to capture and re-distribute it. And strictly speaking, get_iplayer accesses the content in a manner against the terms and conditions for iPlayer.
This means some form of DRM. Without a trusted DRM mechanism, you won't get _legal_ streams or downloads of new content playable on Linux. Without big-name current media, those enlightened ordinary users who try Linux will give up. So goodbye to Linux as a credible Windows alternative.
One of the fears that the content owners have of Open Source platforms (and this includes Open DRM and content delivery platforms, not just the OS) is that someone can take the source and hack it to allow data capture. They will never trust it, so unless AIR remains closed-source (which is perfectly allowable under GPL/LGPL provided it is written correctly), it will become untrustworthy, at least to the content owners.
Whether a closed solution is actually any more secure is an interesting question, but that is a matter of perception and contract law (if you provide some software for a fee, and it fails to do what it is meant to, leading to a financial loss, then it does not matter what the License Agreement says, there may well be legal redress against the provider).
Open source makes no promises, has no contract, and thus has no legal redress.
Sadly, despite efforts from people like Red Hat and Canonical, I think Desktop Linux has now missed the boat. It is clear that the world will/is moving on to tablet and mobile based devices which include some form of content delivery and control system built in from the very beginning. These may be Linux/UNIX based, but they aren't what I call a general purpose Linux device, which is what I want.
But you forgot that it is not an infinite-resolution camera! They use "image enhancement" to sharpen the image. That's the magic!
I keep asking why, when matching fingerprints, the computer shows each record on the screen. Just think how much faster it would be if it didn't have to do that, and say, just did a relational database search on a hash of the loci!
Only topped by the real-time IR satellite images down to a resolution of about 5cm that appears in Behind Enemy Lines. I'll also swear that the first missile fired at the F/A 18 is in the air for nearly two minutes, whilst following highly evasive manoeuvres.
Maybe I'm showing my age, but I used card punch time clocks (which normally are referred to as "time clocks") in one of my early jobs.
Might I suggest that you watch the Warner Brothers cartoons of Ralph E. Wolf and Sam Sheepdog. They always clock in at the beginning of the cartoon, and out at the end. That's a time clock.
1. They might have access to leaked phone number lists, or they may have a copy of a Directory Enquiries CD set from BT, or they might just make them up!
2. They probably don't. It's just a line dangled to make them appear more plausible. Alternatively, they may have some leaked information from BT or your ISP, because it is certain that at a known time, those organisations know which IP address is allocated equipment on which phone line.
3. Windows is ubiquitous. For home systems, chances are that at least 90% of homes with a computer have a Windows variant rather than a Mac, Linux or other system. And even those with Linux probably have Windows installed somewhere as a dual boot. The Reg. readership is not typical. My house has all three families (Windows 2000, XP and 7; OS X; and Linux), as well as an AIX box.
I suppose that there will be an increasing number of houses that have broadband for just their TV, gaming console, iPad or Android pad. I wonder how the ISPs will cope with supporting such customers? At the moment they all appear to be geared around having a Windows box around.
An understanding of evolution was not essential to the creation of the smallpox vaccine. This was developed by observation, hypothesis, prediction, experimentation and conclusion, exactly as the Scientific Method dictates.
Your example of a flu vaccine is not a good one, either. Most flu outbreaks are of known strains, of which there are many. Each vaccine developed is a mix (normally of three strains), and is only effective against a small number of these strains (sometimes more than the three target strains); it is the job of the vaccine producers to make an informed guess about which will be the main threats each year. They then prime the process (the vaccines are grown in chicken eggs) to produce the vaccine for that year. This process takes weeks to months to yield enough doses for a large population. If they select the wrong strains, the vaccine could fail to protect at all.
What gets the medical profession worried is new mutated strains of 'flu, for which they don't yet have a vaccine. It is necessary to isolate the virus in order to culture it to produce the vaccine. By the time a vaccine for a new variant is produced, it may be that a sizeable part of the world's population has already been exposed, reducing the value of the vaccine.
And why do you think that you can trust what a half-life means? And how do you know what radioactive decay is? And how do you know how much of the original sample remains? And how do you know you can trust the mass spectrometer? And... and... and ad nauseam.
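For what it's worth, the arithmetic behind a half-life is the uncontroversial bit; a quick sketch (plain Python, with carbon-14's half-life as an illustrative number) of what "half-life" actually claims:

```python
def remaining(n0, half_life, elapsed):
    """Amount of a radioactive sample left after `elapsed` time units,
    given its half-life in the same units: N = N0 * (1/2)^(t/T)."""
    return n0 * 0.5 ** (elapsed / half_life)

# With carbon-14 (half-life roughly 5,730 years): after one half-life,
# half of a 1000-unit sample remains; after two half-lives, a quarter.
one = remaining(1000.0, 5730.0, 5730.0)
two = remaining(1000.0, 5730.0, 11460.0)
```

The hard part of the argument is never this formula; it is persuading someone to trust the measurements and decay constants that feed into it.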
Until you think about it, most people regard experimentally confirmed hypotheses as truths. Unfortunately, science does not really deal in truths, but in not-yet-disproved hypotheses. This is a fair point if you accept the scientific method, but becomes hard to justify to someone who won't acknowledge it.
You just have to try arguing this with one of these people who are good at it to understand what it is like. They effectively argue that you have to justify the entirety of known science in order to trust it, and most people get too cross after a while to argue effectively. I just refused to continue once I realised what their tactic was.
Creationists do not dispute extinctions. They just don't believe the time scales over which they happened.
I've whiled away many hours arguing about ID and creationism with some otherwise completely rational people, and the most skilled of them have convincing-sounding answers to almost every question you could ask!
Firstly, they argue that the dating techniques are not accurate: since nobody understands all of the hypotheses they are based on, you have to take the whole chain of scientific proof on 'faith', and thus their single faith belief (in the Bible) is more trustworthy than the many beliefs that previous hypotheses were correct.
Then they will ask how, if dating cannot be relied upon, we know that the Earth is older than 6,000 years (I don't know where 10,000 years came from; my friends were certain it was only 6,000).
Then they will argue flood.
Then they will argue 'test of faith' of the believers.
The most recent discussions I had with one of them even allowed for micro-evolution (change of colour, eating habits etc) as a result of environment.
It's all highly amusing, and I still count several of them as friends. But that does not stop me thinking that, at least in their beliefs, they are a bit crazy. But it livens up a beer or five!
Ahhh beery crazy discussions!
This is Apple
with big pockets. I would imagine that Dell, HP, IBM or any of the white box manufacturers would have been quite happy to flash different bootstrap code from normal to allow OSX to boot, considering the number of servers they would sell. Would probably also still support them as well, if asked.
This is the "Rules of Engagement"
The Royal Navy are quite capable of preventing a lot of the dhows and speedboats from causing bother to the tankers. After all, even a 30mm cannon can do serious damage to an unarmoured wooden ship, and helicopters can react very rapidly over quite large distances.
Unfortunately, the Rules of Engagement state that they have to have a reason for stopping or boarding the dhows, and also that there has to be evidence of hostile action before the RN can fire on ships in the Indian Ocean and Arabian Gulf.
Besides the pirates, there is a large amount of quite legitimate sea travel in these seas, so the standard tactic of the pirates is to look as innocent as possible until they are within a few hundred metres of their target, and then move fast. Once on board, they have hostages to hold the navies of the world to account to prevent any action.
Because the military lawyers advise against possible harm to civilians, especially the hostage crew once a ship is taken, it is almost impossible for anybody to take it back without collateral damage, no matter how well trained or armed they are. This is compounded by the unprecedented access the media has to publicise what has happened, and focus the World's scrutiny.
This is not just an RN problem, but one that affects all countries' naval ships in the area.
I think that all of the examples you quoted show Apple refining other people's ideas.
The iPhone, slick though it is, is just a smart phone, and people like Compaq/HP and Palm were selling smart phones with touch screens long before Apple.
The iPad is a touch screen tablet. There were many of these before the iPad, but again the iPad is very well executed.
iMac - Pretty for its time, but in no way was it the first system-in-a-box with a keyboard and mouse. I could point out several CP/M systems from the early '80s with similar form-factors, and the classic Macintosh pre-dated the iMac.
Stylus free touchpads. Goodness. How long have Synaptics been around?
Development and distribution. I know of several websites that will allow purchase and delivery of applications direct to a device, even to smart phones. I'm not too sure about a development environment, I don't know whether this is actually integrated into iTunes, because I do not write such apps.
BTW, you missed out the iPod, or maybe you realised that it was not the first in its field either.
Apple are great at industrial and ergonomic design. No doubt about that. But innovative?
Now, for innovative, try thinking Nintendo with the Gameboy and Wii. It is possible to be first in the field.
This is all fine
as long as the only data you keep is in a form understood by the cloud. I must admit that I have only Google Docs to go on (and I don't use that much), but it appears to me that if you want to keep some data that does not fit with the applications supported, you will struggle.
Of course, as I have often said, I am not a typical user any more, and many people only use data of defined types 'music', 'pictures', 'video', 'documents (embracing email, letters, the odd spreadsheet)', but as long as there is no generic data container (think file), I will not be able to work totally in the cloud, and probably won't at all (damn, wrong already - I've just remembered that I'm using gmail a lot now).
Computers are a generic tool to me. I may use one any time for a purpose I have not yet thought about. I'm regularly throwing gigabytes of data around my home network, and have not got sufficient bandwidth to do that over the 'net.
All of this hype about the 'Cloud' is currently just a wet-dream of the people who want to tie-and-charge consumers (I won't say customers) into their money generating machines. It may change to a benevolent, altruistic model, but I'm not holding my breath.
That ACS:Law £200,000 fine was against a limited company, and UK law says that he is not *personally* liable for the company's losses unless he was a director, and then only if he was negligently running the company (and although he was a con artist, this does not amount to negligence in UK corporate law).
This article says that he has been declared *personally* bankrupt, so the two things are not necessarily linked.
When it comes to personal property, as long as the money used to buy it was extracted from the company in a legal manner, then there ain't much that can be done to link the company losses against him personally. That is what a limited company is all about.
Of course, he could have been stupid, and set it up as a partnership (trading, not legal - although who with is a moot point) or as a sole trader, at which point he would be liable. But he wouldn't be that stu..... Oh, wait. Maybe he would.
Unfortunately, this would be SOOO insecure, as the answer-back string is triggered remotely.
As can (believe it or not) the programmable function keys of a VT220. I'm sure that I spent some time, twenty years or so ago, writing a program that would set a PFK (on the shifted function keys, IIRC) and then trigger it.
All you needed was write access to the device, and you could make the current user apparently run anything you wanted them to! Similar techniques worked for the HP2392 as well.
This was with UNIX, not VMS, so I'm not sure whether it was possible there unless you already were a privileged user (could you do it through Phone, I wonder?).
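From memory (so treat the key number and parameters as assumptions rather than gospel), the VT220 loaded user-defined keys with a DECUDK control string, with the key's payload hex-encoded two digits per character. A sketch of building one:

```python
def decudk(key, text):
    """Build a VT220 DECUDK control string that programs a shifted
    function key with `text`. Key selector 17 is shifted F6 in the
    DECUDK numbering, if memory serves. The payload is hex-encoded,
    two uppercase hex digits per character."""
    hexed = "".join(f"{ord(c):02X}" for c in text)
    # DCS 1;1 | key/hex ST -- clear only this key, leave UDKs unlocked
    return f"\x1bP1;1|{key}/{hexed}\x1b\\"

# The attack was simply writing something like this to the victim's tty:
seq = decudk(17, "ls\r")
```

Getting the terminal to act as if the key had then been pressed was the second half of the trick, and that part I genuinely don't remember well enough to sketch.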
You're assuming a certain type of game.
I would guess that Nintendo are trying for another Wii moment, with completely new types of game offering more interaction than you can get from a traditional controller.
But this is the manufacturers talking
They are not interested in the netbook they sold yesterday. That's history. They are looking at the one they may not sell tomorrow.
I'm still happy with what my EeePC701 can do running Ubuntu. I'm just a bit worried where I can get a replacement battery when it dies!
@Joel 1: He might have been in the audience!
@JEDIDIAH - So has Android...
... but you have to jump through hoops to find them!
top, ps and kill all exist and can be used (at least top and ps) if you can get a shell on the phone. Kill depends on how you get the session.
But then you can also run "Advanced Task Killer", which is in the Android Market, and 2.2 onward has an enhanced Task Manager.
"mirror the mainboard"
I presume that you mean that the PCI cards appear on the 'wrong' end of the board, and also that the case opens on the 'wrong' side. Chances are these were systems with BTX (as opposed to ATX-type) motherboards, that were supposed to mark a new integration of board and case design to allow better cooling. It was an Intel specification. Gateway and Dell produced several systems using them.
Absolute bugger to try and find a replacement, because nobody makes them any more.
At one of my contracts
I spent a lot of time gathering data about systems that needed OS upgrades in a company with a large (more than a thousand system-images) heterogeneous estate. I created it in a relational, normalised manner that allowed complex queries.
When it was decided that the task was too big for one person to actually do all of the upgrade work (duh! hundreds of systems!), I was told to hand my data over to an administrator to manage it, and was relegated to just a technical resource performing some of the upgrade work. The first thing the administrator did was to dump my data into an Excel spreadsheet "so everybody could use it", after which the management of it went to pot. Because of numerous data-loss errors, they eventually surrounded it by scripts to effectively serialise access, not trusting Excel's multi-user protection features (this was some years ago, so things may have got better).
I had actually asked for the data to be stored in a multi-user RDBM (it is a large organisation which employs a dedicated DBA team, so there were plenty of databases around), but I was told that there was not a suitable system around for management tasks, and told to do the best I could with what was available. I did not feel appreciated at the time.
I find it incredibly ironic that an organisation that has bought in to databases, spending millions on Oracle and other DB licences to manage customer data, cannot see the benefit of using such tools for its own management purposes.
Ho hum. I can't see myself working there again! Everything has now been moved to India.
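For the record, the sort of thing I mean costs almost nothing to set up; a toy sketch using Python's built-in sqlite3 (the table and column names are invented for illustration, and a shared server-backed RDBMS would be the real-world target):

```python
import sqlite3

# In-memory database for the sketch; normalised: one row per host,
# one row per outstanding upgrade, linked by a foreign key.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE hosts (
        id INTEGER PRIMARY KEY,
        hostname TEXT UNIQUE NOT NULL,
        os TEXT NOT NULL,
        os_level TEXT NOT NULL
    );
    CREATE TABLE upgrades (
        id INTEGER PRIMARY KEY,
        host_id INTEGER NOT NULL REFERENCES hosts(id),
        target_level TEXT NOT NULL,
        done INTEGER NOT NULL DEFAULT 0
    );
""")
db.executemany("INSERT INTO hosts (hostname, os, os_level) VALUES (?,?,?)",
               [("ax01", "AIX", "4.3"), ("ax02", "AIX", "5.1")])
# Queue an upgrade for every AIX box below the target level.
db.execute("INSERT INTO upgrades (host_id, target_level) "
           "SELECT id, '5.3' FROM hosts WHERE os = 'AIX' AND os_level < '5.3'")
# The kind of cross-table query a flat spreadsheet makes painful:
pending = db.execute("""
    SELECT h.hostname FROM hosts h
    JOIN upgrades u ON u.host_id = h.id
    WHERE u.done = 0 ORDER BY h.hostname
""").fetchall()
```

Even this file-based engine handles concurrent readers sanely; a proper multi-user RDBMS, which the organisation already had in abundance, would have done better still than a script-serialised spreadsheet.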
Not just Apple
almost all the world's consumer electronics, tat and anything else that has fallen in price dramatically over the last 20 years.
Even the stuff made in Korea and Taiwan often contain significant numbers of components sourced in China!
Answered my own question. On 28th September this year, users of Mendip have to re-tune our boxes again(!), presumably to have the channels shuffled down to lower frequencies. Can't find the exact details, but www.digitaluk.co.uk says that this needs to be done.