273 posts • joined Saturday 28th April 2007 14:36 GMT
Re: IT angle?
Upvote from me. Exactly the point. Seems half of the above commentators have trouble with reading comprehension. This isn't exactly helped by the bizarre way the article is written.
And what constitutes electronics? You have a battery and a switch, which is already electrical. Some sort of arbitrary dislike of semiconductors? The item most likely to fail is the battery, followed by mechanical devices freezing.
Going to end badly
I can't see that this is a good idea for either Dell or MS. MS will get a seat on the board, which is enough to have real influence. History has shown that companies where MS gets serious influence make bad decisions on the strength of MS's promises, and get shafted later when those decisions turn out badly. MS have a very bad reputation for keeping faith.
MS can't offer anything to Dell that makes this attractive. Early access to technology for MS isn't exactly going to be of earth shattering importance, MS have done very little new or innovative for a very long time. But such access has the threat of stifling innovation and business agility at Dell. The downside for MS is that they are again getting into the position of starting to compete with their customers. A cosy relationship with Dell will almost certainly add to the perception of all the other PC manufacturers that, whilst they can't do much about the need to purchase Windows for PCs, they probably want to actively consider every possible option for any other product they make. And right now that means Android. Maybe MS thinks that they simply can't fight Android in the mass market and will be content with only a couple of big manufacturers (Dell and Nokia). This puts MS on a trajectory to buy these two and to compete with Apple head to head in this space. This could end as badly as iPod versus Zune, and if I were a betting man, that is where I would put my money.
And this was where? The National Enquirer? World Weekly News?
NASA isn't a person, it doesn't make decisions like this, mostly because it can't. It is a huge, bureaucratic, risk averse, political machine, with 18,000 employees, and up to 300,000 including contractors. (Which would include contractors like United Space Alliance - who were largely responsible for shuttle operations.) You don't keep the lid on a cynical, strange, and stupid idea like this in a structure like NASA.
The rule of conspiracies applies. Never ascribe to conspiracy that which can adequately be explained by incompetence. NASA had more than enough managerial incompetence to cover disaster conspiracies many times over. Sad truth is that they simply didn't have a clue there was a problem. There had been foam strikes before, they had already made a decision long before to degrade the issue to one that was not flight critical, and they thought that given they had seen it all before, and got away with it, things would be no different this time. So they went home for the weekend.
Re: Not just the foam strike
Not that I know of.
Re: In reality nothing could have been done
The investigation panel showed how, if the damage had been taken seriously, it would have been possible to have put an astronaut in a position to see the damage and to assess it. There would have been time to do this. At this point there would have been an unequivocal need for drastic action. They suggested that stuffing the hole with a selection of on-board materials and changing the entry profile may have been enough to save the crew, if not the orbiter.
Whilst there is some truth that NASA is very politically directed, they would know that loss of the orbiter would inevitably lead to a congressional investigation where every email, phone call, and every tiny bit of physical evidence and documentation would be worked through. Once the foam struck, the die was cast. They were going to lose the shuttle programme if they lost the orbiter. Senior managers would have known this. In part, where NASA failed is that senior management didn't know there was even the slightest hint of a problem. The internal culture simply didn't allow for there to be one.
Not just the foam strike
There were a great many lessons in the Columbia disaster. Whilst el Reg provides a nice write-up of the basic cause, taking time to look at why it could happen, as well as what happened, would be worthwhile.
The investigation uncovered a huge number of flaws in management of the shuttle programme. It wasn't just that NASA lost a second shuttle that set in motion the retirement of the fleet, but that NASA manifestly was not able to show that it was up to the task of managing the programme. It was clear that NASA would never be able to get the shuttle programme past losing one in every 50 flights. Some of this stemmed from inherent defects in the shuttle's design, many of which were inflicted on NASA due to the politics and budget cuts in the 70's, but a great deal from issues in NASA's internal culture.
Mission rules required that the ground control team provided constant oversight of the mission. Yet there was so little concern about the state of play that the mission controller gave the team the weekend off. Both violating mission rules, and evidencing the total lack of interest in the foam strike.
Whilst the foam strike was always the prime suspect in the loss of the orbiter, there were other very serious engineering flaws uncovered. The investigation spent some time specifically looking at NASA's processes, and specifically criticised its "broken safety culture." The external tank manufacturer had been so tightened up financially that the position of manager of a particular part of manufacture and the position of safety and quality control for the same part were occupied by the same person. Yet no-one seemed to realise the fundamental conflict and inevitable loss of safety this would bring. Ultimately NASA was shown to have not learnt any lessons from the loss of Challenger. The same hubris, and culture of "we got away with it last time", that doomed that craft also doomed Columbia. The issue of foam strikes was degraded from a flight-critical one - in the original rules for the orbiter this was a non-negotiable flaw that would have led to instant grounding of the fleet until resolved - and let slide to the point that it was considered a regular "problem" that they would ultimately sort out, and not one serious enough to impact flight. An identical mindset to the one they had for the SRB O-ring seals that doomed Challenger.
The report on the disaster is worth reading from cover to cover. Whilst there is a nice story of forensic engineering, the real story is in the surrounding culture, and the question of just how and why it was allowed to happen.
Re: She can promise anything
"possibly the only thing that would get her re-elected is....actually I can't think of anything."
I can. He is called Tony Abbott. We have the absurd situation where the single biggest electoral asset that Julia has is Tony, and his biggest asset is Julia. Both are probably scared stupid that the other party has a leadership spill. Perhaps we get the politicians we deserve, but what desperate mortal sin are we guilty of to deserve this mob of idiots?
Re: You'd think there would be a vegetation free zone around this expensive sensitive equipment
If you look at the satellite pics there is a reasonable amount of space around most of the instruments, all except the AAT which does get a bit close to the trees. However a lot of the support buildings do not have much of a gap, and it seems from the news that they have lost a lot of these buildings.
I tend to support the original sentiment - there wasn't enough clear ground. The site is right on the top of the hill, so a clear area that doesn't bring the fire front right to the doorstep of the facility is possible. And for the main cluster of instruments this was both done, and it worked. It is the fire front that does the damage. There is always the risk that blown embers will ignite a building, but the telescope facilities are mostly metal domes, and won't catch fire easily. A full force fire front however will melt them in place. In this respect distance is the only hope. It is quite possible that it was a simple ember that took out the support buildings anyway. It is a very common way to lose a building, and will sometimes take out a building well after the fire front has passed.
I have both been to Siding Springs and experienced Oz bushfires first hand. When I visited it was very different; I have a pic of the UK Schmidt telescope dome enveloped in cloud.
The Microsoft guys negotiating the deal really don't care in the slightest about the level of discount that can be calculated against a per seat price. It was a given that they were going to sell an all of department license. The only question was what the maximum amount of money they could extract from the DoD was. That number was probably not too hard to discover. Then all they do is work on convincing the DoD to hand it over.
The DoD's job is to muddy the waters and convince MS that the DoD really have much less money to spend, and get MS to latch onto a goal price that is lower than it actually is. Given the number of ex-DoD consultants that MS could engage to help, I suspect the whip hand is actually Microsoft's, and not the DoD's. But it is always good to let the loser save face. A press release from the DoD making themselves look good is a small price for MS to pay for extracting that last 100 million from the DoD.
Predictable but important
If this was a few years ago and the university announced laptops running Windows all round, there would have been hardly an eyeblink. For better or worse, the default platform for tablets is iOS. Perhaps hard for the hardcore Apple fanboi and anti-fanboi equally to stomach, but Apple/iOS is the Microsoft/Windows of the tablet world. TCO of a single OS, single hardware platform, plus the existing tools for content creation (nobody mentioned iBooks Author, yet it is certain to be a key part of the case for iPad) is going to make a very compelling case for rolling out iPad. The real questions are much much harder than deciding to go with iPad.
There is a very clear tsunami rolling across the oceans of higher education right now. Most universities know it is coming, but I doubt anyone actually understands what will really happen, or what the right answer is. But the traditional university teaching modes of lectures, tutorials and practicals, plus exams, are obsolete. What nobody knows is what the right replacement is. Access to very high quality teaching material from the likes of MIT, free on the internet, plus access to a wealth of other information that previously would have required hours a day in the library, clearly outpaces the current model. But we should be able to do vastly better than this. Whether this means universities cynically reducing costs whilst maintaining a bare minimum education standard, or driving towards real improvements in outcomes and maintaining the current funding, that is a political matter. But not pushing for change is derelict.
I do however suspect that the UWS rollout is probably ill-conceived. Content creation is not going to happen overnight. Indeed I would consider that there should have been a two year lead time for the training of lecturers in creation, and time to actually create the content, review it, rework it, and only then roll it out to the first wave of students in the third year of the programme. Expecting the academics to be fully embracing the tools in time for a first semester delivery to the students is going to yield nothing more than Powerpoint slides of last year's lectures available on-line. Something that will provide exactly zero improvement on the current regime. Freed of the need to actually listen in lectures, students will spend the hours idly viewing Facebook and messaging their friends across the lecture theatre. It can, and should be, much better; but I bet it won't be.
If it takes a 17 point improvement to win the category, it would seem that Ballmer could win it four times in a row and still not make the top ranking for actually doing a good job, rather than just a better one. That really is starting from a low base.
Re: Total bollocks from el Reg
Sadly true. Seven thumbs down and counting. One assumes that the down-voters also lack the technical ability to understand what I wrote. I have come to the conclusion that there is a core group that will down-vote any comment that does not actually slam Apple; even comments that are neutral to Apple will attract their down-vote. The stream of comments to this article suggesting that many see it as simply a forum for Apple bashing, and nothing more, rather reinforces this view. It is becoming no better than YouTube comments.
Total bollocks from el Reg
Seriously, we get two articles on this patent in one day on el Reg, and it appears that in neither case have the authors actually bothered to read the patent, or if they have, they lack the technical competence to understand it. What we do get is the now very tired Apple bashing fest that is fast making technical commentary from el Reg on anything to do with Apple essentially worthless. This is sad, there was a time where el Reg was actually worth reading for such commentary. It no longer is.
1. The patent does not attempt to patent near field charging. Got that? Really it doesn't. The title alone should be a give away: "Wireless power utilization in a local computing environment" Note the bit about "utilization." It is a patent on how to use wireless power in an innovative manner.
2. The innovative bit about the patent is the re-radiating of power from one device to another, and a protocol for controlling this. Go down to the claims section and have a look. The claims section is where the actual meat of what is patented is. The stuff earlier is explanation; it isn't what is claimed for patent cover. Indeed the rules of patents require that you cover any earlier contributing technology. If you see something in a patent that you have heard of before, it is there not because of some nefarious attempt to re-patent existing technology, but due to a requirement to place the new work in the context of what has gone before. Not doing this can cause the patent application to fail. Note that you can't be expected to cite provisional applications from competitors - they are secret until the patent is approved.
Seriously, this article is so bad it should be deleted. It is an embarrassment to both el Reg and the author.
There are a couple of fundamental errors in the design and assumptions here. Sadly they pretty much nullify what has been done.
There are three sources of heat loss - radiation, conduction, and convection. The design and tests have not addressed these correctly.
Radiative losses are independent of atmosphere - they remain essentially identical in a vacuum or at sea level (for those wavelengths that the atmosphere is transparent to - which are those that matter here.) Heat loss due to radiation won't be noticeably less at altitude.
Heat losses by conduction through air are independent of pressure until the mean free path is longer than the distance between objects. For the dimensions and pressures of this project you can assume that conduction remains about the same. Use of an aerogel insulator would help significantly here.
Convection might matter. Even at 0.01 of sea level pressure, the air can move, and thus can convey heat from the motor to the body of the aircraft and thus to the outside. However the vastly lower pressure reduces the heat capacity of the air equivalently, so the energy moved reduces considerably. You may need to consider how to prevent convection cells of air forming. Making sure the cells of air are small (where small is a few mm) is the way to do this. Aerogel is good here too.
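A rough order-of-magnitude sketch in Python illustrates both points above (the motor's surface area, emissivity and temperatures are assumed, purely illustrative numbers): radiative loss doesn't care about pressure, and at 0.01 atm the mean free path of air is still only microns - far shorter than any gap in the airframe, so gas conduction stays in the continuum regime, much as at sea level.

```python
import math

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_loss(area_m2, emissivity, t_hot_k, t_cold_k):
    """Net radiative power from a surface to its surroundings (W).
    Note: no pressure term - altitude makes no difference."""
    return emissivity * SIGMA * area_m2 * (t_hot_k**4 - t_cold_k**4)

def mean_free_path(pressure_pa, temp_k):
    """Mean free path of air molecules (m), kinetic-theory estimate."""
    d = 3.7e-10           # effective molecular diameter of air, m
    k_b = 1.381e-23       # Boltzmann constant, J/K
    return k_b * temp_k / (math.sqrt(2) * math.pi * d**2 * pressure_pa)

# Assumed numbers: a 0.05 m^2 motor casing at 0 C radiating to
# surroundings at -60 C, emissivity 0.9 (bare anodised aluminium-ish).
p_rad = radiative_loss(0.05, 0.9, 273.0, 213.0)
print(f"radiative loss ~ {p_rad:.1f} W")

# At 0.01 atm and -60 C the mean free path is still micron-scale,
# so conduction through trapped air barely changes with altitude.
mfp = mean_free_path(0.01 * 101325.0, 213.0)
print(f"mean free path ~ {mfp * 1e6:.1f} micrometres")
```

Swap in the real casing area and measured temperatures and the radiative figure gives a first estimate of the heater power you would otherwise need.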
The critical one is radiation. Space blanket only provides useful insulation against radiative losses. It does this in two ways: it reflects radiation back to the source, and being a highly reflective material, it radiates heat very badly, and so does not lose heat by itself radiating energy. In order to work it must not be in contact with anything - it must have a clear space around it. Sandwiching it between two layers nullifies its entire function.
To use space blanket you must wrap the outside of the assembly very loosely - possibly in more than one layer, with a minimum of contact points between the rocket motor and the blanket, and if more than one layer minimal contact points between the layers. If you want an example of how it is done, look at a picture of the Apollo Lunar lander's legs. Indeed check out any picture of spacecraft and observe how the blanket is arrayed. Multiple loose layers of blanket will trap small cells of air, and thus also effect a reduction of convection.
As mentioned above, you won't know how well the system performs until you test it properly, and this means into the baro-chamber and packed with dry ice for an extended soak. It is worth applying a bit of basic physics here too. You know the energy drain of the heater - power = volts times amps. You can work out the heat capacity of the motor (so many grams of aluminium, so many grams of propellant - or use a surrogate of similar known material), thus you know how the temperature of the motor should rise with time when the heater is energised. You can compare the observed temperature with the ideal case, and work out the thermal losses. You may discover that a carefully insulated motor will not require a heater, or if it does, you can work out the minimum heater current required, and appropriately size the power source.
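That energy-balance check can be sketched in a few lines of Python - every number below is a placeholder assumption, to be replaced with the measured motor mass, heater rating, and logged chamber temperatures:

```python
def ideal_temp_rise(power_w, seconds, mass_kg, specific_heat_j_per_kg_k):
    """Temperature rise (K) if none of the heater's energy were lost:
    power = volts * amps; energy = power * time; dT = energy / (m * c)."""
    return power_w * seconds / (mass_kg * specific_heat_j_per_kg_k)

def loss_power(power_w, seconds, mass_kg, c, observed_rise_k):
    """Average thermal loss (W) inferred from the shortfall between
    the ideal and the observed temperature rise."""
    ideal = ideal_temp_rise(power_w, seconds, mass_kg, c)
    return power_w * (1.0 - observed_rise_k / ideal)

# Placeholder example: a 10 W heater (5 V * 2 A) on a 0.5 kg motor,
# treated as aluminium (c ~ 900 J/kg/K), energised for 10 minutes.
p = 5.0 * 2.0
print(f"ideal rise: {ideal_temp_rise(p, 600.0, 0.5, 900.0):.1f} K")

# If the chamber log showed only an 8 K rise, the difference escaped
# through radiation/conduction/convection:
print(f"losses: {loss_power(p, 600.0, 0.5, 900.0, 8.0):.1f} W")
```

From the inferred loss figure you can size the minimum heater current, or conclude no heater is needed at all.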
A layer of aerogel then a couple of loose layers of space blanket and I very much doubt a heater would be needed. Insulating the system batteries would similarly benefit - enough that with internal losses naturally heating the batteries you may obviate any thermal problems in the batteries.
Missing the point
The idea that Apple must find a Steve clone, and that even an arsehole clone is better than no clone is fundamentally flawed. Steve is gone. There is nothing that can be done to change that. All great companies have great leadership, but the nature of that leadership changes fundamentally with changes in the leader. An attempt to maintain the past by emulating a few external attributes of the now gone leader is no better than a cargo cult. Building replicas of planes out of straw does not make the real planes arrive, and being an arsehole does not a visionary make.
Apple's senior management have a seriously difficult task ahead of them. There will never be another Steve. One reason Steve got away with being who he was, was that he was one of the planets most wealthy people. He didn't need to do the job. He wasn't a career executive coveting the CEO position and its pay-cheque. He founded the company, and still owned a goodly slice of it. The only answer is to recognise this. Apple can't succeed by trying to emulate Steve's management. They do need to take serious note of what was good that he brought to Apple, and try to distil it, and ensure it remains in the company DNA, and then find managers that recognise what it is that makes Apple Apple, and who will continue it. The huge dangers are that they become paralysed, or succumb to ego driven infighting. Guiding the company down this path is Tim Cook's job. A successful Apple will not be the same Apple as when Steve ran it. It may be better, but it can't ever be the same, and attempts to keep it so will doom it.
Re: Is it just me?
The sticking point seems to be this:
"Adkins had been seconded by Dockwise from another company, Cadenza Management, which was actually his employer"
You can be certain that Cadenza Management had had Adkins sign the usual email clause with them. But that isn't the same as him signing with Fairstar Heavy Transport, even though he was working as their CEO. So Dockwise sue Cadenza Management to get access to the emails of one of Cadenza's employees. That gets pretty weird. If you are a contractor for a company, they don't automatically get access to your email account.
Could be interesting
What Apple could, and IMHO should do is explore much more interesting possibilities in processor design. The x86 chip is fine so far as it goes, and the ARM all well and good, and nice for low power, but neither are exactly anything more than the most boring and basic functionality. Computer architecture has gone backwards for decades. Right now, raw speed of an individual CPU is no longer the prime issue. Ever since Apple bought PA Semi I have wondered if they might do something really interesting. Where interesting involves taking some advanced architecture ideas and running with them. The one that I would love to see - tagged memory. Adding tags to memory can be used to provide hardware differentiation of addresses and data. Instant pointer management, and with it a major step towards secure systems. Also add a full/empty tag, which provides for intrinsic synchronisation in memory, and with it support for fine grained concurrency. These are not new ideas - look back to the Tera MTA for one example. But you could go a very long way back to see lots of additions that can provide for secure systems, and parallel code support.
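To make the tagged-memory idea concrete, here is a toy sketch in Python - entirely hypothetical, an illustration of the concept only, not of anything Apple or the Tera MTA actually built. Each word carries a type tag (pointer vs data) and a full/empty bit; mis-tagged use traps, and reading an empty word stalls the reader:

```python
class TaggedWord:
    """One memory word with a type tag and a full/empty bit.
    Toy model of a tagged architecture, not a real design."""
    POINTER, DATA = "pointer", "data"

    def __init__(self):
        self.tag, self.value, self.full = TaggedWord.DATA, 0, False

    def write(self, value, tag):
        self.value, self.tag, self.full = value, tag, True  # sets full

    def read(self, expect_tag):
        if not self.full:
            # full/empty bit: a real machine would stall the thread here,
            # giving fine-grained producer/consumer synchronisation.
            raise BlockingIOError("empty word: reader would stall")
        if self.tag != expect_tag:
            # tag check: hardware trap on data used as pointer (or vice versa)
            raise TypeError(f"tag trap: {self.tag} word used as {expect_tag}")
        return self.value

mem = [TaggedWord() for _ in range(4)]
mem[0].write(42, TaggedWord.DATA)
mem[1].write(0, TaggedWord.POINTER)       # word 1 points at word 0

addr = mem[1].read(TaggedWord.POINTER)    # legal dereference
print(mem[addr].read(TaggedWord.DATA))    # prints 42

try:
    mem[0].read(TaggedWord.POINTER)       # data word abused as a pointer
except TypeError as e:
    print("trapped:", e)
```

The point of doing this in silicon rather than software is that the checks cost nothing per access and cannot be bypassed by buggy or hostile code.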
It would be a brave move, but if you look across the industry, the only company in a position to make a break from the ossified architectures we currently use is Apple. Worrying about Windows compatibility just repeats the mistakes that keep things bogged down. Linux is so conservative in its internals (no bad thing, it is just important to understand this) that it won't be able to aid any such progress.
What goes around
Water/liquid cooling always used to add quite significantly to the cost - almost doubling the cost of a machine. Cray used to do the T3E in both water and air cooled versions for this reason. The water cooled version packed a lot more processors into the same cabinet footprint, and was the only way to get really big configurations. At the same time Thinking Machines made a big thing about the cost effectiveness of their air cooled designs. Considering the prevalence of water cooling for gaming machines, at least some of the components should be pretty cheap now. But anything custom or low volume is going to be a problem.
The Cray 1 was also water cooled, but conventional cold plate. The Cray 2 was famous for being liquid immersion (in Freon) cooled. You had to drain the Freon out of the cabinet to perform any work on the machine. Cray's factory cooled a large pond outside the building with excess heat.
Not really unusual
I have seen similar things happen with other service providers. The root cause was a cancelled credit card which triggered an automatic flagging of the account as being used fraudulently. The card was actually legit, but had been cancelled for other reasons - however it was pretty easy to see why the provider would put two and two together and assume that the card had been stolen and later cancelled by the rightful owner. The next step was less sensible. They then looked in their database and decided that some other accounts that were apparently linked (by IP address) were also therefore fraudulent, and cancelled the lot. Took ages to sort out.
If it is something like this, Amazon will not be forthcoming with an explanation as it might reveal something about their internal fraud detection policies. Even if the rules are stupid, they won't reveal them.
Re: and Apple? - how about an edit botton????
The usual mechanism is to disable edits as soon as anyone replies to the post, or after a short timeout. It isn't as if this is a new problem or hasn't been solved many times before.
bUtton that was.
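As a hypothetical sketch of that mechanism in Python - the function names and the 10-minute window are invented for illustration, not el Reg's actual policy:

```python
from datetime import datetime, timedelta

EDIT_WINDOW = timedelta(minutes=10)   # assumed timeout, purely illustrative

def can_edit(posted_at, now, reply_count):
    """A post stays editable only while nobody has replied AND the
    timeout has not expired - the usual forum compromise between
    letting authors fix typos and not letting them rewrite history."""
    if reply_count > 0:
        return False                  # a reply freezes the post
    return now - posted_at <= EDIT_WINDOW

posted = datetime(2012, 9, 14, 12, 0)
print(can_edit(posted, posted + timedelta(minutes=5), reply_count=0))   # True
print(can_edit(posted, posted + timedelta(minutes=5), reply_count=1))   # False
print(can_edit(posted, posted + timedelta(minutes=15), reply_count=0))  # False
```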
Owning such a company isn't something we would normally associate with Microsoft, but that they do, and that they use the technology for their mapping application, does bring into contrast the differing attitudes of our big technical companies.
IBM famously runs leading edge research, and even boasts the odd Nobel prize. Google isn't so focussed, but isn't averse to bleeding edge and oddball efforts, including hardware, and spends pretty big. Microsoft spends pretty big, although seems to have an inbuilt ability to ignore most home-grown good stuff (rather like Xerox). Which leaves us with Apple. A point that rankles with this self-confessed fanboi (and owner of many an iProduct): Apple underspend on research, spending much less than the industry average, and vastly under the levels of the preceding companies. One might guess why they have been manifestly unable to get their heads around new - previously non-core - product technologies like maps.
Microsoft and Apple are well known for buying in technology, MS more than anyone. Which works fine if it is software, and the technology is thus of the same flavour as your core capabilities. But it takes something more if you are building a major new capability. This is what is worrying about Apple's Maps. Until we see the same sort of innovation as we see with Google and MS - actually doing research and pushing the edge in more than just software - we are not going to see a product capable of competing.
Not useful for a head up display
Whenever a display technology that can layer on glass is mentioned, the idea that it could be used for a head up display comes soon after. But it won't work. The key part of a head up display isn't the display technology - it is the optics that allow the display to appear in your field of view in focus. Try driving down the road whilst your eyes are focussed on a streak of grime on the windscreen. The road is out of focus. You can't focus on a display laid over the windscreen and also drive. A head up display uses a set of lenses to focus the image of the display device at infinity, so it appears in focus whilst your eyes focus on the road.
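The optics can be stated with the thin-lens equation - a standard result, sketched here from memory: placing the display at the focal plane of the collimating optics throws its virtual image to infinity, so the symbology shares the driver's focus with the road.

```latex
% Thin lens (sign conventions vary; u = object distance, v = image distance):
\frac{1}{v} = \frac{1}{f} - \frac{1}{u}
% Put the display exactly one focal length from the optics, u = f:
\frac{1}{v} = \frac{1}{f} - \frac{1}{f} = 0
\quad\Longrightarrow\quad v \to \infty
% The image is collimated: the eye stays focused at infinity for both
% the display and the road - unlike a smudge on the windscreen itself.
```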
Re: The problem is ALWAYS the PEOPLE.
I have the feeling that Apple's map problems have nothing to do with software, and everything to do with data curation. If Google have 7000 people working on maps, you can bet that 95% of them are doing nothing but staring at GIS data applications and hand tweaking the map data. Sadly this isn't work fit for human beings, and will also pay about as well. The best quality software on the planet won't replace low resolution satellite images with high resolution photographs taken from airplanes, or magically fix out of date business information. The only way to sort this out takes lots of effort, money, and time.
See my post above. I thought this too for a while, but a little digging and you find that "iPod Out" is not the analog audio output. It is the special iPod emulation mode iPhones have. I'm reasonably sure that iPod Out requires video out - this is how it displays a virtual iPod on a car's touch screen. (Which a car that supports iPod Out has. iPod Out having been developed in conjunction with BMW.) Apple really should be much more clear on their web page.
Maybe much more complex
It looks a bit more complex than Apple have actually said, and Apple have not been smart in dispelling the confusion.
There is a lot of talk about the loss of audio line level out, and the Apple web page says that iPod Out is not supported on the adaptors, leading many to assume they mean no line level audio out. Which it seems isn't the case. iPod Out is a mode where an iPhone or iPod Touch will emulate an iPod in a manner that allows really nice integration into car audio systems, where it actually displays an iPod control screen, complete with album art, on the in-car system. It is this that doesn't work. Assuming they are actually supporting line level audio out in the adaptor, the adaptors at least include a DAC, so it isn't just a connector. The adaptor probably contains more than this too.
The iDevices have never supported S/PDIF, and I very much doubt they will start now. Those docks that do support it have licensed a special USB chip from Apple that allows access to the internal digital audio stream. I doubt Apple will be giving up that control.
I suspect we are going to see some later technical descriptions about the Lightning interface, but Apple have let slip a few things, and a look at some of the issues with USB makes these make more sense. Apple say it is an 8 signal interface. Which is already interesting. USB 2 is two signal (+ and - signal) and USB 3 adds four more (superspeed TX +/- and RX +/-). The remaining two signals may be Apple simply keeping the old serial interface, or they may have done something much more interesting and the Lightning interface may not be USB at all, and the adaptor contains a USB interface chip as well as a DAC.
The plug is double sided, and I think everyone has assumed that because it can be inserted either way up it means that although it has 16 physical pins, they are simply 8 electrical pins duplicated. This may not be true. If the socket has only 8 pins, sure, but if the socket has 16 pins we may see some slick use of differential signalling and symmetry allowing four pairs of differential signalling pins, plus the power, ground, and maybe power output for accessories. Apple have explicitly said 8 signal pins - so the question of where power and ground come from needs answering anyway.
Apple will want to future proof this for some time, so a range of things are possible. In a decade's time our expectations of what can be done on the connection interface, and indeed what we expect from our smart pocket device may be significantly more extended than we imagine now. Indeed, have a look at Thunderbolt. Cut out the two low speed signalling lines and a few redundant ground pins and it would fit. Who knows? The name is tantalising.
Yes, I expect that the vast majority of commentators on this will make the mistake of claiming that the uncertainty principle is in doubt, and totally miss what has actually been claimed.
From the first linked article:
"It is often assumed that Heisenberg's uncertainty principle applies to both the intrinsic uncertainty that a quantum system must possess, as well as to measurements. These results show that this is not the case and demonstrate the degree of precision that can be achieved with weak-measurement techniques."
The experiment addresses the phrase: " as well as to measurements." The intrinsic uncertainty remains.
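In symbols - a hedged sketch from memory; the error-disturbance relation the weak-measurement experiments bear on is, as far as I recall, Ozawa's form:

```latex
% Intrinsic spread of the quantum state - untouched by these results:
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}
% What the experiments probe is measurement error \epsilon and
% disturbance \eta. The naive product form
%   \epsilon(x)\,\eta(p) \ge \hbar/2
% can be beaten, while Ozawa's weaker inequality still holds:
\epsilon(x)\,\eta(p) \;+\; \epsilon(x)\,\sigma_p \;+\; \sigma_x\,\eta(p)
  \;\ge\; \frac{\hbar}{2}
```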
Business as usual
Samsung are a seriously big company. The executives that sell fab services, those that sell memory, and those that run the phone division probably never see one another from one month to the next. Their individual jobs are to make money with their divisions. If one of the other divisions is lawyering up with one of their customers, all they will care is that that customer continues to buy their product or service. This is simply how large companies operate. Nothing is ever personal, it is always just business.
3.5mm != line level audio out
In the past iPods used really quite good (Wolfson) DACs, and the sound quality from the line level output was significantly better than that from the 3.5mm jack (which passed through a necessarily mediocre quality headphone driver amplifier). So loss of the line level outputs is potentially quite annoying.
On the other hand, Apple introduced access to the digital audio stream via USB - so long as you licensed it - and thus got access to the restricted interface chip needed. Which is how the high-end docks provide S/PDIF output. So Apple may have decided that mediocre audio via the 3.5mm headphone jack is OK for those that use cheap docks, and if you want any sort of quality, you have to buy a USB audio enabled dock. Which seem to start at about $150. They have not been using the same quality of internal DACs as before, so perhaps it doesn't make the difference it once did.
It will be interesting to see what they do use the available pins for. They may decide to output video, although analogue video would make less sense now than ever. HDMI needs many more pins than they have, and by the time you get there, AirPlay is indeed the right answer. An Apple TV is cheaper than a USB enabled dock. It has S/PDIF output too.
Count the pins - 9 is fine.
Go have a look at the pinout for the 30 pin connector.
There are 8 pins devoted to the now deleted Firewire connection.
There are four more ground pins - so up to three are redundant.
There are three pins for video (composite and s-video) output.
There are two pins of either reserved or unknown function.
There are two pins for audio input.
There are two pins for serial IO.
There are two pins used for primitive control of iPod functions (one to click sound output, one to control charging).
Pretty much none of the above are needed. USB control subsumes the serial IO, Firewire is dead (sadly), and no one is recording with their iDevice. That is 23 pins that could be deleted without affecting the provision of audio output.
We only need to find 21.
Pins that will be needed are USB signal and power (4 pins), a 3.3 volt power output pin to power things like the camera adaptor, and maybe a separate ground from the USB one - which is six pins. Add two for audio out, and you still have a pin left. Which I suspect will be a new in-dock mode control pin that signals to the iDevice what sort of dock it is connected to. It may actually be an upgrade of the accessory indicator (pin 21) function.
Re: Get him out!
Probably a number of reasons.
The Swedes cannot provide a blanket exemption against extradition. It is illegal for them to do so. A lot of people forget that in most countries there is an explicit divide between the government and law enforcement. It is not possible for the government of either Sweden or the UK to step into the legal process and pervert it. This is for very good reason. Neither the Swedish nor the UK government (as in the elected governments) is involved in the Assange legal process. The law courts are, but the whole point of the courts' independent standing is that they are not under the influence of the elected government. They enforce the law.
The Swedes could make a statement that there is currently no warrant or request for extradition from the US for Assange. Doing so of course would be worthless. No-one who believes the conspiracy theory would believe them. To make a statement that the Swedish government would ignore any such extradition request from the US - before it had been made - would probably put them in breach of a number of treaties.
If any such extradition was requested by the US from Sweden, the UK must also give permission. So any idea that this is some weird conspiracy to allow the UK government to claim clean hands on the deal doesn't wash. Since the UK must give permission anyway, the obvious point is simply this: why didn't the US simply ask the UK to hand him over? If you wanted to pick the country in Europe most likely to kowtow to some notional US imperialist line and ship Assange over, you would put the UK at the head of the list, and Sweden pretty close to the back.
The entire furore is over a total minnow in the grand scheme of things. It certainly feeds Assange's ego. He clearly believes he is someone of international importance, a man who can bring entire superpowers to their knees. He isn't. He is a footnote. People forget: it wasn't him who leaked anything. He simply provided a forum for leaks. And he didn't do it alone; without him Wikileaks continues anyway. Assange has a very heightened idea of his own importance, and seems to act on this ego.
The most likely answer is that he will eventually get shipped back to Sweden, where the charges will either be dropped, or fail in court, probably because the women that complained get cold feet due to the publicity. He will then go free. End of story. Now that probably strikes more fear into Assange's heart than anything else. For it will prove that he actually isn't important enough.
Re: Security checks and diplomatic bags
Diplomatic bags are exempt from any search. X-raying them probably comes under this. What is a bit more interesting is that the treaty is explicit in that the bags contain documents. A literal reading would suggest that if it were obvious, without a search, that the container held a person, then that container would, by definition, not be a diplomatic bag.
Personally I would simply load the "documents only" bag into the unpressurised hold of the next plane to Ecuador.
Re: "very serious charges"?
You can't be charged unless it is done in person. The entire point of extraditing him to Sweden is to allow charges to be laid. So long as he stays out he cannot be legally charged. He knows this. The European Arrest Warrant was issued by Sweden in order to get him back there to allow him to be charged. Arguing that because he hasn't been charged he must be innocent is simple ignorance about the manner in which the process happens.
The way Swedish law works is different to the UK. Once he is charged he is required to face trial within two weeks. This is one reason why the charging process happens later than you might be used to. In Sweden, the process requires a "second interview" during which charges are laid. It is for this interview that Assange's arrest warrant was issued. There is a lot of misinformation about the process, which seems to wilfully ignore the nature of the legal process in Sweden and re-interpret the names used for the stages (which will be in Swedish) in a manner that suggests a far less serious level of intent.
Everything that all the Assange supporters complain about hinges on one wild assertion: that that bastion of conservative politics, the well-known lapdog of US imperialism, Sweden, has already agreed with the US to ship him over to the US once he lands in Sweden. This isn't credible. If you drop that one assertion, the rest falls apart.
What about the rest of the planet?
The popularity of BBM with the UK kids seems assured. But what do the kids in the rest of the world like to use? Being one of those that has no idea what the yoof of today are up to, I have no idea about BBM's ubiquity, but do feel a little sceptical that RIM have the market sewn up in every country in which their rivals operate.
Re: I am DEFINITELY no rocket specialist/enthusiast...
Like all comedy, the answer is in the timing. Much better to use one motor that is three times more powerful than to mess about with worrying that the three don't ignite at exactly the same time (or worse, that one fails to ignite) and then have to cope with asymmetric thrust just as it tries to launch. Failure of one SRB to ignite on the Shuttle was one of the unsurvivable (and possibly most spectacular) accidents possible.
On the other hand
Following up from above, the problem with the test chamber pressure probably remains. Even if the igniter doesn't create much gas, the motor grain is designed to create a great deal of it, so a sputtering grain that is doomed to fail in a real launch might manage to create enough pressure in the REHAB chamber to cause itself to light up. This presents a difficult problem. It suggests that a completely valid test does need a vacuum chamber large enough to cope with a significant amount of gas production.
One is reminded that some tests of real rocket motors use basically a very large water jet ejector that can sustain a vacuum even after the motor starts running. I wonder if something designed to fit on a fire hose would work? (Only partly in jest here.) An industrial size water jet ejector would probably work if you could find one.
A bit of searching around brings a few facts up.
PIC uses lead dioxide and silicon, and yields lead and silicon dioxide as reaction products, neither of which are gases at low temperatures. That and any vaporised plastic. The electric match uses antimony trisulphide and potassium chlorate, which yields only some sulphur dioxide as a gaseous product, the potassium chloride and antimony oxide not mattering. Thus the igniter may well not be raising the pressure in REHAB all that much. Clearly the easy test is as suggested above - just ignite one inside the test chamber and see what happens to the pressure. That alone should settle any question of whether additional hardware is needed. There are two places where the pressure matters: in the test vessel (REHAB), where you want it to remain low, and in the motor chamber, where you simply need it to be accurate. In the motor chamber the temperatures may be much higher, and the reaction products may stay gaseous for a tiny bit longer, but this is what they will do in the actual launch, so this remains accurate. Outside the motor chamber everything will be cold, and I suspect that little of the reaction products will remain gaseous for any meaningful time except the sulphur dioxide.
Next, the equation for the burn rate of the motor is Rate = constant x pressure^n, where n depends upon the propellant composition and seems to vary between 0.2 and 0.5. Basically the burn rate depends only upon the chamber pressure. Whilst this was known in the abstract, just how critical it is is perhaps a surprise.
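To get a feel for the power law above, here is a quick back-of-the-envelope sketch. The 0.2-0.5 exponent range is as quoted above; the constant and the pressures are purely illustrative values, not real motor data:

```python
# Burn rate power law: rate = a * pressure**n.
# 'a', 'n' and the pressures below are illustrative, not measured motor data.
def burn_rate(pressure, a=1.0, n=0.35):
    """Linear burn rate as a function of chamber pressure (arbitrary units)."""
    return a * pressure ** n

# With n = 0.35, halving the chamber pressure only cuts the burn rate by
# about a fifth, but a sputtering grain at a few percent of design pressure
# burns at a small fraction of the design rate - too slow to build pressure.
full = burn_rate(1.0)
weak = burn_rate(0.05)
print(f"rate at 5% of design pressure: {weak / full:.0%} of design rate")
```

Which is the vicious circle: the burn rate needs pressure, and the pressure needs the burn rate, so a grain that starts below the threshold never climbs out.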
The point of the PIC is to slam heavy hot particles into the rocket grain, obviating the need for heat transfer by conduction through hot gas. But the grain won't stay burning unless it is subject to sufficient pressure. If it doesn't start burning fast enough to build chamber pressure, it seems it fizzles out, probably as soon as the PIC is depleted. So it may be argued that a problem with the PIC is that it may not be producing enough hot gas. Given the above this is perhaps not a surprise. A reliable ignition might be achieved by a modification that simply adds something that produces hot gas as well as hot sparks in the motor chamber. This could be much more reliable and less subject to catastrophe than a burst plug. However it may simply be that the difference in composition between the different manufacturers' motor grains is enough to bridge the gap. The burn time of the motors might provide some clue as to this, since the motors are much the same weight.
Re: Not exactly enterprise
Exactly. It is difficult to imagine how dedupe would make anything more than a trivial difference to a personal computer's persistent storage use. Personal computer file system use is dominated by pictures, audio, movies. All three of these are already compressed. Dedupe makes little to no sense. Indeed it would probably just slow everything down and wear out the flash faster.
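A toy illustration of the point, assuming block-level dedupe with fixed-size chunks and SHA-256 fingerprints (the chunk size and data here are made up for the sketch; pseudo-random bytes stand in for compressed media, which is statistically close to random):

```python
import hashlib
import random

def dedupe_ratio(data, chunk=4096):
    """Fraction of fixed-size chunks that duplicate an earlier chunk."""
    seen, dups, total = set(), 0, 0
    for i in range(0, len(data), chunk):
        h = hashlib.sha256(data[i:i + chunk]).digest()
        total += 1
        if h in seen:
            dups += 1
        else:
            seen.add(h)
    return dups / total

# Repetitive "office document" data dedupes very well...
repetitive = b"the same boring paragraph " * 40000
# ...but compressed media payloads (JPEG/MP3/H.264) look like random bytes,
# so identical chunks essentially never occur and dedupe finds nothing.
media_like = random.Random(0).randbytes(1_000_000)

print(f"repetitive: {dedupe_ratio(repetitive):.0%} duplicate chunks")
print(f"media-like: {dedupe_ratio(media_like):.0%} duplicate chunks")
```

You still pay the hashing and lookup cost on every write, which is the "just slows everything down" part.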
This really is a big thing. I remember arguing for this access 15 years ago, but at the time the government agencies were required to make a token profit, and charged for anything they did that was actually useful. This initiative will make a huge difference as it becomes possible to perform useful analysis on data, and build a business on doing this analysis, without the impost of what were quite significant charges for access to the data. Access for fundamental research will be similarly enhanced.
I'm going to bet that the owner and staff had concluded that he was engaged in creating a clandestine video of goings on at their restaurant. This suggests that they actually had something to hide, and that they were worried. So rather than just address the attack, either McDonalds, or as likely, the French tax office, might like to give the place's operations a thorough audit.
Wrong device at the wrong price
Sony seem to have simply got the thing wrong. It is cheap, looks naff, and isn't waterproof. It clearly has never had anyone who has designed a watch before near it. The software will probably be rubbish too.
What it should have been is more expensive. And better.
Sony should have swallowed some of their usual not-invented-here pride and gone to talk to Seiko about the entire project. Sony can supply the internals, and a proper watch maker that understands how to make things that actually work, look good, and stay working on one's wrist should make the case. And make more than one case.
As above, when talking about nice watches - they are jewellery. People like to have nice things, and depending upon taste, clearly very nice, and very expensive things, on their wrists. Not something that looks like it was designed as a high school project.
It needs to be something that people will desire. Heck, make it £500 and as lovely as anything from the Swiss. Make it beautiful, and make it work really well. Sell it with a contactless charging pad (just leave it on the pad overnight and it charges). Make it in a range of styles, finishes, bands. Ensure that the buttons are the sort of thing one expects on an expensive watch, not a bit of cheap consumer electronics tat. And so on. Do that and they would sell more, even at £500, than they will sell of this bit of tat at £100 odd.
This will sell to the odd geek, who will later discard it. Sony need to get back a bit of the mojo of old, and make something that people actually desire to own. Once they have sold the £500 one for a while they can introduce the diffusion brand version for the ordinary folk. Oh, and give it a recognisable name, so that people can ask "oooh is that a Sony XXX?" (Where XXX does not equal "SmartWatch Android". XXX could have equalled iWatch - but that one is taken.)
Re: Haase and Bennett
Still proves the point, no matter how duplicitously obtained. If a pair of convicted drug dealers can get the "exercise the Royal Prerogative of mercy" call, and be pardoned of a crime, there is no logical argument the British government can reasonably stand on to deny Turing. The process clearly exists, and has been used recently.
Re: Not really a LEGO Turing Machine!
Different infinities. Try Cantor's diagonalisation argument to see that the Turing machine can still cover things. So long as there are aleph-null places on the tape, it is big enough.
Re: Not really a LEGO Turing Machine!
There is nothing stopping you writing a state control table that implements a program that can use the tape to hold both data and program code for a new automaton implemented by the Turing machine. That is after all just unified code and data. And a nice exercise in showing how a Turing machine encompasses all other automata. In principle you could code an x86 interpreter on a Turing machine. All the CPU state still lives on the tape, and you could put x86 code and ordinary data elsewhere on the tape. What matters is that the ONLY mutable data lives on the tape. The table is fixed.
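A minimal sketch of the distinction, as a hypothetical Python rendering (not anyone's actual implementation): the transition table is the fixed "programme", and the only mutable state is the tape and head position. The example programme is a binary incrementer:

```python
# Minimal Turing machine: the transition table is immutable; all mutable
# state lives on the tape (plus head position and current state).
# Example programme: binary increment, head starting on the rightmost bit.
TABLE = {
    # (state, symbol) -> (symbol_to_write, head_move, next_state)
    ("inc", "1"): ("0", -1, "inc"),   # 1 + carry = 0, carry moves left
    ("inc", "0"): ("1", 0, "halt"),   # 0 + carry = 1, done
    ("inc", "_"): ("1", 0, "halt"),   # ran off the left edge: extend number
}

def run(tape, head, state="inc"):
    """Run the fixed TABLE against a mutable tape; return the final tape."""
    cells = dict(enumerate(tape))     # sparse tape, "_" is the blank symbol
    while state != "halt":
        symbol, move, state = TABLE[(state, cells.get(head, "_"))]
        cells[head] = symbol
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, "_") for i in range(lo, hi + 1))

print(run("1011", head=3))  # 1011 + 1 = 1100
```

The Lego model fails this test because the 2+2 happened inside the driving engine, not via a table like the above acting on the tape.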
Re: Two points of view
Metadata. Ah yes. Good point about the metadata. That is going to be one place where flash might help. However it may start to be better to just cache it directly in system memory. There might simply not be enough metadata to warrant any optimised secondary storage for it at all. The breakpoint is likely a moving target, and reliability questions make things a little less clear cut.
Re: Not really a LEGO Turing Machine!
Yup. It isn't a Turing machine at all. Sorry, but it isn't.
A Turing machine only holds machine state on the tape. The programme, that is, the state transition table, is fixed and cannot hold mutable state. That is the definition. This model device only used the tape to provide I/O. There was mutable state inside the simulating engine - the engine internally did the calculation 2+2=4. It was not calculated on the tape, which is what a real Turing machine must do.
So, sorry, nice bit of Lego, but it is NOT a Turing Machine.
HPC has always been thus. I/O isn't random access to lots of little bits of data, it is massive broadside access to very very large lumps of ordered data. Latency of access is swamped by the transmission time. Optimising for access is simply missing Amdahl's Law. Most caching strategies don't apply. Data is very often only read once. Optimising data layout, order, prefetch, matching bandwidths - this is where you win.
Same for deduplication. Science data is inherently not internally correlated. Except for the case where finding hidden correlations is the entire point of the computation in the first place. Enterprise level dedupe doesn't get any traction at all, just slows things down and costs money.
Just the nature of the game.