No, I think it's to protect at the other end...
i.e. Motorola and Samsung versus the no-name, very low specification, resistive tablets that are threatening to give Android an unfair bad name. I also think that may be why in 'Honeycomb' they've picked a codename that sounds good and is being pushed as part of the branding, and why they seem to be retaining it for the next minor version.
Either that, or it really is just that the code doesn't look very nice. Not everything is a conspiracy and companies do sometimes tell the truth.
@AC "pad size"
I thought it was more that 10" screens are quite close to the size that both the international community and the Americans separately have settled on as being good for a piece of paper, so the thing ends up a natural size for web pages, PDFs, magazines, etc. The 7" screen is conveniently like a paperback, but less suited to the web. And that's before you throw in the media centre component.
I think you're right; with Clang now fully capable of C++ and Objective-C++, they've switched to a Clang/LLVM pair for Xcode 4, to power not just the compilation stage but the static analyser, the as-you-type error highlighting, and a bunch of other workspace things.
At present they're pushing all the way to a native binary, but it feels like it'd be a relatively trivial step from here to build LLVM into the OS and move towards LLVM bytecode generation being the default in the developer tools.
No, no good guy, but I'd rate Google as the better guy. They've done quite a lot to lead various markets in great improvements, be it in search, maps, email or whatever. They tend to be a bit lax on copyright in places like YouTube and Google Books but I doubt they'd predicate an entire avenue of their business on copyright theft.
Conversely, Oracle's actions with all of the Sun acquisitions are almost a study in trying to use legalistic means to quickly consume whatever value is left as a precursor to dumping the lot.
Could be more telltale than you think
Even if this were an official announcement from Apple, it'd have a little of the 'me too' to it, given that Microsoft has already announced an ARM port of Windows 8. Obviously the difference is that if Apple decide they want ARM then you stop being able to buy an Intel Mac anywhere, but supposing Apple were to switch and to demonstrate gains in doing so then the door will be completely open for companies that ship Windows machines to introduce competing devices into their ranges.
So: Apple's move could start a trend, or at least have more of an impact than just on the tiny OS X audience. Though you'd have to buy into the version of events where Apple are highly influential in everything they do rather than just occasionally influential in some areas; assuming genuine benefits do appear from ARM laptops then I'd expect Windows manufacturers to offer devices anyway, and quite possibly sooner.
While the Mac App Store almost surprises by being quick and snappy (though it shouldn't; this is just comparing it to iTunes), it is, as the name suggests, Mac only. So the Windows people, for whom iTunes performs about a thousand times worse still, would be left out.
That said, the handset itself has a pretty good client — so maybe leave iTunes to handle backups and possibly organisation, but otherwise keep it out of the software side of any of Apple's network connected devices? If ever an application could do with having its functionality pruned, it's iTunes.
Your comment is disproved by trivial research. From the Apple-specific websites that have reported the rumour:
9to5mac, from where the story originates: "Apple has long used the proxy of iTunes to push updates to its iOS devices ... Smartphone competitors have long offered a different, more direct method for software updates that happens over-the-air."
Mac Rumors headline: "iOS 5 to Finally Deliver Over-The-Air Updates?"
MacWorld: "Other smartphone operating systems such as Android can be updated over-the-air,"
GigaOm: "Smaller, incremental updates like those served to Android might be the way to go, but that would require a significant change in the way Apple approaches updates "
Barnes & Noble operate in the US only, whereas Amazon operate internationally and (finally) launched the Kindle outside of the US last year. So, while it was an oversight not to mention the Nook, it's an understandable one in an article written in a territory where the thing isn't available.
On the contrary
You're assuming it wasn't a bug, even though it (i) was a 'feature' that explicitly wasn't used — the problematic data wasn't posted to anybody or harvested in any way; and (ii) turned up for the first time in a major OS version revision.
Fine, don't trust Apple, but the evidence that this was deliberate is relatively flimsy, and any harm it has done to others will have required malicious third parties. I'd therefore rate it alongside any other security bug, such as the dozens that crop up in all of the major operating systems and typically lead to malicious code execution, privilege escalation, etc. So it's something that should have been caught, and it may affect your opinion of Apple's competence, but it was quite probably just an oversight.
It's what the companies that receive those locations across the network in real time (I think all three of Apple, Google and Microsoft?) are doing with the data that's more scary.
Proposed Kindle changes undesirable
If the new Kindle screen is literally like the Sony ones then it'll be a step backwards because the Sony screens have a noticeably lower contrast, which people tend to attribute to the way the touch stuff is bonded on top. If it's a tablet-style form factor with no buttons whatsoever then it'll be a step backward in usability since you'll no longer be able to hold it in one hand and turn pages, e.g. while using the other hand to grip some part of the tube/bus/train that you're commuting on but which has no free seats.
On the concept of a separate tablet, I see Amazon not just as the only company with a serious chance to challenge Apple at their own integrated market game, but quite possibly the only company that could displace Apple and become the dominant single player — by simultaneously having a ready-made broad retail audience, a strong content offering and, if Android based, being able to get the bloggers on board from day one.
It has become rather diffuse, hasn't it?
At least on Windows, it didn't perform very well even back when it just managed music but now that it also does movies, apps, photos, podcasts, etc, etc, etc, it really does feel a lot worse. I'm always optimistic that the problem is just the app's heritage (it was an OS 9 app originally) and that the encouraging signs Apple have made by rewriting QuickTime and fixing the Windows port of Safari (for performance and system integration; forget what it does versus your favourite browser) mean that the iTunes problem could be fixed, possibly even in the not-too-distant future. But there's no reason to believe it'll actually happen.
To quote LBJ:
"You do not wipe away the scars of centuries by saying: Now you are free to go where you want, and do as you desire, and choose the leaders you please. You do not take a person who, for years, has been hobbled by chains and liberate him, bring him up to the starting line of a race and then say, 'you are free to compete with all the others,' and still justly believe that you have been completely fair"
It's a question of measure and degree. The iPhone libraries meet probably 80% of the OpenStep spec, which was a pure API effort and was explicitly meant to be vendor neutral. So it's very closely OpenStep related. And OpenStep being explicitly for multiple-vendor implementation, provenance isn't relevant while you're willing to conflate OpenStep and NextStep. Which also nullifies your FIAT/Ferrari comment. OpenStep is a framework, not a company.
Android phones, like iPhones and others, implement probably 40% of SGI's OpenGL (since ES 1.0 cuts a very large amount of extraneous stuff and 2.0 culls almost the entire fixed pipeline) and almost none of the rest of the old SGI APIs, along with none of the design patterns.
If you look inside OpenStep source, you'll see NSArrays, NSDictionaries, NSNumbers, target/action patterns, delegation, key-value observing, the same protocols (in the NSCoding, NSObject sense, albeit largely informal), notification centres, a run loop, selectors and fully dynamic dispatch. If you look inside iPhone source, you'll see all of those same things. So it's the same fundamental base objects, the same fundamental design patterns and mostly the same higher up objects.
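To make that concrete, here's a hypothetical fragment (my own, for illustration, not from either codebase) that would read the same against the OpenStep-era Foundation as against the iPhone libraries: same base classes, same notification centre, same target/action signature:

    #import <Foundation/Foundation.h>

    @interface Controller : NSObject
    - (void)buttonPressed:(id)sender; // classic target/action signature
    @end

    @implementation Controller
    - (void)buttonPressed:(id)sender {
        // Same fundamental base objects on both platforms
        NSArray *items = [NSArray arrayWithObjects:@"a", @"b", nil];
        NSDictionary *info = [NSDictionary dictionaryWithObject:items
                                                         forKey:@"items"];
        // Same notification-centre pattern
        [[NSNotificationCenter defaultCenter]
            postNotificationName:@"SomethingHappened"
                          object:self
                        userInfo:info];
    }
    @end

About the only thing that differs is how you wire the control to it: -setTarget:/-setAction: on the AppKit side, -addTarget:action:forControlEvents: on UIKit.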
Summary: your "some parts" is a massive understatement; suggesting a single lineage is not so incorrect that the article's author can fairly be called wrong. It's not just that the odd API has survived, and it's nothing to do with the legal name of the company involved.
Suggest you do your research. Or, possibly, stop using 'shill' as a generic smear when you've nothing intellectual to say.
Me, on "Kindle beats Apple's closed book on choice": "I have a Kindle and an iPad, and tend to buy a few books a month on the Kindle but have never bought any at all on the iPad"
Me, on "The Microsoft mobile reboot needs rebooting": "Google would argue they're only temporarily closed source, but even if the Android source code is never published again, it'd still be the only one of the offerings from Microsoft, Apple and Google to allow anyone to install any application from any source."
Me, on "Android, Steve Jobs, and Apple's '90%' tablet share": "Android dominates the phone market ... by being on a lot of different handsets, relatively cheaply. It ticks all the boxes that a large proportion of the market care about, which is enough."
Me, on "Steve Jobs vindicated: Android is not open": "Whether an OS is open source is a completely unrelated issue to whether it has an open market in applications, Microsoft Windows being the obvious evidence — it has the most diverse market possible and not one jot of it is open."
Me, on "Tesco heralds 2011 as YEAR OF ANDROID": "I was under the impression that 2010 had been the year of Android and that, across the whole market, Android phones were outselling iPhones already. So it's a bit surprising that Tesco have only just caught up."
Me, on "Apple 'greed' tax spreads beyond music, movies, magazines", when in a particularly bad mood: "Apple have now made it impossible for others to defend them convincingly. The new charge on subscription services means that the costs of being in the ecosystem now outweigh the benefits for many major companies that you've heard of."
"Fanboys all a-quiver"
Possibly I'm being dense; does this article's subtitle contain the usual attempt at humour or is it just plain trolling?
Re: the actual story; I can't imagine what took so long, and agree with Ben that the device doesn't even look very nice.
@Davidoff re: "not a 'NeXT handheld computer'"
It has an updated version of the language runtime (per the move towards formal protocols and the addition of closures), all the old Foundation classes (with additions), much the same conventions and patterns (target/action, delegation), and the kernel is a much updated version of that which was part of NextStep. Only the user interface library is all new, per the new user interface paradigm — multiple touches and direct manipulation are in; at the C level DisplayPostscript is out due to licensing costs and a PDF-derived alternative (so, same primitives but no language) is in.
I'd say it is quite closely related to NextStep.
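As a sketch of the runtime updates I mean (not Apple's code, just the shape of it): a formal protocol with the @required/@optional split, plus a block, both sitting on top of the same old Foundation:

    #import <Foundation/Foundation.h>

    // A formal protocol, per the move away from the old informal ones
    @protocol Renderer <NSObject>
    @required
    - (void)render;
    @optional
    - (void)prepare;
    @end

    int main(void) {
        // A block: the closures added alongside the updated runtime
        void (^logger)(NSString *) = ^(NSString *message) {
            NSLog(@"%@", message);
        };
        logger(@"still Foundation underneath");
        return 0;
    }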
I'm with mucksie2676 and the other posters on this: correlation is not causation and, even if it were, taking the one difference in the abstract is massively overreaching.
From a completely subjective point of view, I live in London, in a typical London sized property. When I bought books, I tended to read them once or twice, maybe keep them on a shelf for a while but eventually give them away — either to a friend or via those handy Oxfam collection bins. I don't have the space to keep every book I've ever read and I don't really have the desire to. I have a bunch of reference books that I'll probably keep forever but they're in the minority.
Hence, I really don't care about portability of my e-book purchases. I have a Kindle and an iPad, and tend to buy a few books a month on the Kindle but have never bought any at all on the iPad despite downloading the iBooks app and trying the sample copy of Alice in Wonderland. I have the Kindle app for the iPad too, but doubt I've opened it more than about twice.
It's simply that the Kindle is by far the better reading device. It's a much more convenient size for public transport, you quickly forget that it's a screen you're looking at because the type looks a lot better (ie, you can't see the pixels), and the screen is visible in direct sunlight and doesn't attempt to blind me when I'm reading at night. So, it's more convenient: (i) on public transport; (ii) in the park; and (iii) in bed. Which is a clean sweep of my normal reading environments.
I guess it's nice to know that my purchases will someday be portable should a better device come along, but as I've yet to read any of them more than once I'm not really that bothered. It's actually much more bothersome that I can't pass them on having now finished with them.
On the contrary
The primary thing that dictates the best screen size is the size of an average human hand. The bigger-necessarily-equals-better crowd are primarily those that have decided they want to criticise and have worked backwards to find any distinguishing feature. I'd rate it about equal with "the Android user interface is worse because it doesn't have CoverFlow" on the scale of valid criticisms.
Of course, that doesn't mean Apple won't ship a bigger screen. At this point, distinction from the previous year's model seems to be getting ever more slender across the industry, so giving people anything on which they can try to rationalise an upgrade is advantageous.
According to my maths...
1080p has a diagonal of almost 2203 pixels, so on a screen with a 50 inch (or 127cm) diagonal that's 44 PPI (or, the other way around, each pixel is about 0.058cm across).
So if you sit 2m away, each pixel subtends an angle of about 0.02 degrees on the retina, or very very close to 1 arc minute. A human eye in optimal illumination conditions can distinguish two lines if they're 0.6 arc minutes apart, so technically a higher resolution could be beneficial. But I imagine not really in any of the scenes you're likely to see on TV, which tend to be moving scenes featuring large objects rather than perfectly static shots of typography or, ummm, rakes at a distance. And probably not at all if, like most of the people here, you've spent most of your life staring at a screen that's maybe 45cm from your face.
It's probably worth someone else checking my maths before you take this as a definitive answer...
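If anyone fancies doing that, here's the same arithmetic as a quick sketch (same numbers as above, nothing clever):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double diag_px = sqrt(1920.0 * 1920.0 + 1080.0 * 1080.0); // ~2203
        double diag_cm = 50.0 * 2.54;     // 50in diagonal, in cm
        double px_cm = diag_cm / diag_px; // ~0.058cm per pixel
        // Angle one pixel subtends when viewed from 2m, in arc minutes
        double arcmin = atan(px_cm / 200.0) * (180.0 / M_PI) * 60.0;
        printf("%.3fcm per pixel, %.2f arc minutes\n", px_cm, arcmin);
        return 0; // prints roughly 0.058cm and 0.99 arc minutes
    }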
There's some sort of refund mechanism, but I've never been clear exactly how it works. There was a minor uproar amongst developers when it was first announced since Apple were taking 30% of the original sale, but then supplying 100% of the refund from the relevant developer's account. I suspect they may just have decided to make the feature a little obscure rather than deal with the matter properly.
@revisionists both ways
What the iPhone had was a multitouch, direct manipulation API. So, in the browser you didn't use a little joystick to step down the page one notch at a time and you didn't go to a little submenu somewhere to decide whether you wanted 50%, 75% or 100% zoom. And it shipped with industry standard fonts like Times and Helvetica, using Apple's preferred typographically-accurate rendering, on a screen with a then high pixel density. It was also the first phone to ship with a GPU and with an OS built around the presumption of one.
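For an idea of what "direct manipulation API" means in practice, here's a hypothetical UIKit fragment (mine, purely for illustration), using the standard responder methods: a view whose content tracks the finger, with no joystick stepping or zoom submenu in sight:

    #import <UIKit/UIKit.h>

    // A view you can drag around directly with a finger
    @interface DraggableView : UIView
    @end

    @implementation DraggableView
    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        UITouch *touch = [touches anyObject];
        // The content follows the touch, one-to-one
        self.center = [touch locationInView:self.superview];
    }
    @end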
When you bought an iPhone you also got an unlimited data plan — at the time almost unheard of — and no carrier interference.
Most competitors' phones then rarely shipped without being crippled by the carrier, required a BSc to operate fully and were proud to include exactly one font. I was at a Nokia presentation shortly before the Microsoft announcement and one of the invited speakers opined that it was a big joke that anyone should care how text on screen looks, as long as it's there.
So, limited innovations if you're going to boil them down to numbers. But that's only if you're the sort of person that genuinely thinks the correct way to compare devices is a simple present-or-absent feature checklist, and that direct manipulation and physical navigation metaphors count only as "a pretty UI".
The Asus Eee Pad Transformer doesn't have the same specs as the iPad. It has a slower CPU and GPU (see, e.g. http://www.glbenchmark.com/compare.jsp?benchmark=glpro20&showhide=true&certified_only=1&D1=Apple%20iPad%202&D2=Motorola%20Xoom&D3=Apple%20iPad&D4=Samsung%20GT-P1000%20Galaxy%20Tab), being based on an older chip, which it counterbalances with a better camera and slightly larger, slightly higher resolution screen. And it's just £20 cheaper, so all it really evidences is that other manufacturers are able to hit much the same price points for much the same technology.
No innovation required
Android dominates the phone market not by being on any innovative devices — in truth nobody has innovated for years — just by being on a lot of different handsets, relatively cheaply. It ticks all the boxes that a large proportion of the market care about, which is enough.
Google would argue they're only temporarily closed source, but even if the Android source code is never published again, it'd still be the only one of the offerings from Microsoft, Apple and Google to allow anyone to install any application from any source. They also score points against Microsoft for the breadth of the SDK - Java in the VM or C (or anything else GCC can do) directly on the processor. Microsoft are allowing .NET managed code only. Don't expect Angry Birds too quickly and probably don't expect a port of the Unreal Engine at all.
Not sure what you mean about GPU architecture
"It just boggles the mind that GPU architecture is still in the sub 1 Ghz, single core days of yore."
I'm not sure where you're getting your information from — to the extent that anybody cares about clock speeds, GPUs exceeded 1GHz a long time ago, and they're all in the hundreds of cores nowadays. The top of the range workstation GPUs from NVidia (such as the Tesla C2070) have 448 cores, and those, like those on the consumer cards, are fully programmable in various C-like languages such as CUDA and OpenCL.
So I genuinely think I must have misunderstood your comment.
"If you write efficient code, then it'll run pretty much just as fast on .NET as native, because after JIT compilation kicks in the first time it'll be compiled to native code and optimised just as efficiently as a native compilter anyway."
An ahead-of-time compiler can take as long as it wants to compile code. So it can apply some very computationally expensive algorithms to try to seek out optimisations. It can analyse as much or as little of the program as it wants to make optimisations.
A just-in-time compiler is designed to do a lot better than interpreting, but doing things quickly is still a concern. So they tend to apply only simple-to-compute optimisations, and don't generally get an opportunity to look at the whole program in overview.
In practical terms, just-in-time compiled stuff usually ends up being in broadly the same ballpark as the precompiled stuff (both being several orders of magnitude better than anything interpreted), but it's false to label it "just as fast". As noted by other commenters, if just-in-time costs you even just 5% of performance then that can amount to hundreds of millions of dollars in a year for an operation the size of Facebook.
You don't think WP7 is a result?
While much of the onerous stuff resulting from the judgment vanished almost immediately upon the change of US president, I tend to think that making it much harder to deny that Microsoft had damaged the industry set the stage for their decade of failure to succeed in new markets (the Xbox being the only exception). It's not just because of products like the Zune, out of date almost immediately upon release (as a regular iPod clone just before the iPod Touch came out), but also Windows Mobile - which was around a long time before the iPhone-inspired smartphone boom but failed to gain any traction. I think a contributor to that was an industry consciously resisting a manufacturer with an antitrust track record and consumers being similarly wary of a tarnished brand.
Ummm, not quite right
The main groups that have continuously shouted about Android being open and that being one of the reasons why it is better are (i) Google; and (ii) a particular, vocal segment of Android users. It's those that support Android that have conflated openness and a bunch of other issues, not those that seek to detract from it.
I agree that the conflation is unhelpful and often misguided. I strongly disagree that it is mainly the product of "those with an axe to grind".
So that leaves just me...
... who is perfectly willing to believe that the presenters of Top Gear may have libelled Tesla (though, equally, they may not have done, I don't know) and willing to admit that he doesn't find the programme entertaining at all? I always find their little conversations to be laboriously over-rehearsed. I mean, not on the Master Chef level — they're well above that sort of stuff — but so as to make the 'wit' very hard to enjoy.
Thus concludes my highly irrelevant and completely personal opinion.
Come on, be fair
The piece clearly isn't an op-ed, and gives both sides of the story. All of the reasons you give for it being a spent story are within the article - there's no convenient ignorance of facts and no endorsement of either side of the argument. The reporter has even put allegations of bias to one of the main actors, Nimmer, and gone with a headline that suggests the story is no big deal.
It's not bad reporting to say "this person prominently says this, the facts are these" just because the facts are more subtle than the person contends.
@AC 04:45 GMT, AC 13:22 GMT
@04:45: I'm not sure you've understood my point. Or, more probably, I haven't understood yours. I was trying to make it clear how little most people care whether Android is open source. Whether an OS is open source is a completely unrelated issue to whether it has an open market in applications, Microsoft Windows being the obvious evidence — it has the most diverse market possible and not one jot of it is open.
@13:22: your post has no basis in reality. It's a simple troll. Nobody is bickering about Android 3.0 not yet being open source, it's a simple fact. Quite a lot of people, like me, are pointing out that it doesn't matter in the slightest. You're also wrong to state that Apple's market share is shrinking, as it's still growing, and growing faster than the market as a whole. However, it's growing substantially less quickly than Android definitely did during 2010 and probably still is, and Android shipments were ahead of iOS shipments if you restrict numbers to phones only.
Again, all facts. But this is the Internet, so I'm sure you can find someone who will take your bait.
It's nothing like a poor troll. The point isn't that Android is worse than iOS because it's not open, it's that Google are wrong to say it's open because it's demonstrably not according to their own test. And promising it will be again in the future isn't the same.
To be honest, I think that anybody that relies on "it's open" as the cornerstone of their advocacy for a consumer-facing embedded operating system has already lost the argument. To advocate Android you should focus on the free market in applications, the price and the diversity of devices, none of which this article disputes and none of which are affected by the news it covers.
Could be because it's not important to the story
The story: a sea change in the OS offering from Apple, and all the user-facing changes it brought. Your allegation: it doesn't go into the internals, so obviously it's delusional. Have you ever heard of confirmation bias?
There are a bunch of different bits of underlying technology in OS X. Some of them are Apple originated, some of them aren't. None of them are particularly relevant.
I don't see the problem
It's just competition in the marketplace, and DLNA isn't actually supported by very many devices. Competition is how we run our capitalist societies. I vote: let Apple license their proprietary solution, let the market decide. If, as you seem to imply, it has already decided then there seems to be little damage that Apple can possibly do.
That's not my interpretation
It sounds to me like all they want is more straightforward legislation. I guess the problem is that English courts tend to give exact literal meanings to legislation and will apply a purposive reading only in extreme cases. Though that's quite a good thing if you're trying to avoid politicising the judiciary, so it cuts both ways.
You've obviously not seen the modern style of documentary film
It'll be 30 minutes of someone looking solemn, while walking in crowds, telling us we're about to see the recreated flight and that seeing it will be the single most important thing that happens to us in our entire lives, then some out of focus shots of someone that looks a bit like Gagarin sitting on a bed or looking in a book or something like that, then some talking heads from the world of light entertainment to tell us why the space flight was so incredibly important and — most significantly — how it affected pop culture, then some out of focus shots of someone that looks a bit like JFK looking sad, intercut with someone that looks a little like Khrushchev beating his shoe on a table, maybe 30 seconds of the recreated flight cut to new age music which primarily focuses on the man who looks a bit like Gagarin doing out of focus reaction shots, then 30 minutes being told that seeing that was the single most important thing that will happen to us in our entire lives.
Nope, it's Intel only
I can't imagine why, given that the codebase doesn't otherwise have Intel dependencies per the evidence of Linux support, but according to http://www.mozilla.com/en-US/firefox/system-requirements.html it's 10.5 or 10.6, Intel only.
I think you're wrong about OpenGL, and WebGL.
The industry remains unable to standardise on a 3d file format for a variety of technical reasons. So standards usually fail to gain momentum. The industry standardised a long time ago on the best ways to talk to a GPU, and building it on top of canvas means they're extending something that has already found traction rather than trying to do a completely new thing.
It is also false to argue that OpenGL has been languishing for some time. DirectX is more popular because driver support is better on Windows (partly because drivers are easier to write; it's only very recently that they've been willing to deprecate anything in the OpenGL spec) and it makes it trivial to port to Xbox. It's purely a money decision. Both APIs fully expose everything the GPU can do and the GPU is fully programmable, so they're basically equivalent.
WebGL actually descends from GL ES 2.0, which cuts out all of the fixed pipeline legacy stuff, explicitly to be simple, straightforward and modern. GL ES 2.0 is a total of 109 function calls, using which you can upload any data you want to the GPU and supply vertex and fragment shaders to do whatever you want. As it's for embedded systems (where no GPU yet supports them), there are no geometry shaders. Go to Google Images and check out Epic Citadel and/or Infinity Blade to see what real products are being released based upon it.
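For a flavour of how stripped back that is, here's a minimal shader pair of the sort GL ES 2.0 obliges you to supply (my own sketch, embedded as C strings in the usual style):

    // Vertex shader: passes positions straight through
    static const char *vertexShader =
        "attribute vec4 position;      \n"
        "void main() {                 \n"
        "    gl_Position = position;   \n"
        "}                             \n";

    // Fragment shader: paints every fragment a flat orange
    static const char *fragmentShader =
        "precision mediump float;                      \n"
        "void main() {                                 \n"
        "    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);  \n"
        "}                                             \n";

    // Compiled with glCreateShader/glShaderSource/glCompileShader,
    // then linked into a program via glCreateProgram/glLinkProgram.

There's no lighting, no matrix stack, no fixed anything: if you want it, you write it.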
To be honest though, graphics APIs as the route to hardware are probably just about done. It'll be GPGPU where major advances are made, which means OpenCL (not part of OpenGL, but designed to work well with it) on the Khronos side, DirectCompute (part of DirectX) on the Microsoft side.
I did just say that I think it's a matter of what you're accustomed to. So that's at least one person who likes anti-aliased, loosely or not-at-all hinted subpixel rendering acknowledging that others may not.
I think the wider attitude comes from the historical perspective. The unique corner that Microsoft have boxed themselves into on font hinting was a result of wanting to make vector fonts look like bitmap fonts at low pixel densities. So it's a historical anomaly. Pixel densities are much higher now than they were when Microsoft made that decision and are getting higher every year. So in a few years, everyone's going to be happy anyway — when you can no longer see the anti-aliasing you'll be happy, right?
Supporting aggressive hinting has the technical problem of introducing unpredictable text lengths. For example, the letter 'n' might end up 4.5 pixels wide on your screen. If you aggressively hint that, it turns into either 4 or 5 pixels. Print, say, 80 of them (ie, a normal sort of length of text) and you've introduced 40 pixels of error, one way or the other. But that's only on your screen; if the option is present and somebody else has set theirs to respect font shapes, they don't get the 40 pixels of error. And if the GUI is scalable and someone else has aggressive font hinting on, they can get anything from 0 to 40 pixels of error.
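In code form, that rounding error looks like this (a toy sketch, obviously, using the same numbers as above):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double advance = 4.5; // unhinted width of 'n', in pixels
        int letters = 80;     // a normal sort of length of text
        double unhinted = letters * advance;       // 360 pixels
        double hinted = letters * round(advance);  // snaps to 400 here
        printf("%.0f pixels of error\n", hinted - unhinted); // 40
        return 0;
    }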
As a result of that, it becomes very difficult to lay out your GUI in a designed fashion, using the same concepts and tools as designers like to use for the page or within Photoshop or anywhere else. So designers don't like it and tend to produce worse results. And in an increasingly design-oriented industry, you need to keep your designers happy.
Therefore, the real logic is push subpixel now, significantly ease design tasks (especially when specifying new layout technologies, like WPF and I'll wager Direct2D), in a few years everyone will be happy in any case. To extend that to alleging that anybody who likes aggressive hinting is obviously a kook is obviously an unsafe way to proceed, but I think it explains why companies like Microsoft can't just offer the option, and why the default position is to find some reason not to offer support.
I don't think that's ClearType
ClearType still aggressively hints, giving that uniquely Microsoft look of primarily spindly horizontals and verticals, with some 45 degree diagonals. The way they display fonts normally is so far from how they print that they actually commissioned special fonts to solve the problem (see, e.g. Calibri, which is the default in Word as of Office 2007).
For the new browser they appear to have taken a step towards matching the way a font appears on paper, on signs, on computers from other manufacturers, etc, and away from avoiding all subpixel considerations at all costs. Which I think is the direction WPF went in quite a while ago, and is something they've received a lot of praise for in Windows Phone 7.
I'm a fan, but then I normally use one of those computers from another manufacturer and I think a lot of it is just what you're accustomed to expect.
Expect the same from a hundred more: Apple hasn't disabled JIT for anything, as it was never available in the first place. It simply failed to make it work across the entire operating system in a release that had to be ready for a highly visible hardware launch.
That aside, Microsoft's conversion is difficult to take at face value because of their history, but hard to find fault with. My feeling is that they want to be able to compete away from the desktop, so needed a reasonably modern browser they could port to everywhere. It's also probably safe to assume at this point that they'll retain the desktop for as long as it persists, so that's not so much of a concern. This is Microsoft finally looking to the future.
It's because all business has risks
The simple unequivocal existence of a risk doesn't in itself justify withdrawing from a market. Decisions are made by looking at the expected gain, which includes allowances for all the risks.
So: it's a straightforward decision for someone like Amazon; probable gain is great and they've the clout to minimise the risk of rejections. It's also quite simple for amateur games writers, since the risk of rejection is tiny and the potential gain is great, even if extremely unlikely.
In practice, the probability of being rejected adds only very slightly to the probability of your work making no income. It's just that "Apple rejects perfectly reasonable app; aren't they a bit controlling?" is much more interesting than "App is released; nobody uses it".
I believe the US Department of Justice are already looking into the area. There's some suggestion from the usual quarters that connections between Google and the current administration may have helped expedite the issue, but absolutely no suggestion that the investigations will be influenced in any way.
The problem is that they wouldn't actually be leveraging their iTunes monopoly. To find competition infringements, authorities will need to establish that Apple have and are using a dominant position in one market to distort competition in another. In this case that'd be a dominant position in app vending to distort various subscription models. I actually think there's a good chance: Google aren't the only app vendor for Android, so pure Android sales numbers aren't the question, and iOS people tend to spend more on apps anyway. Apple's subscription rule limits what those offering subscriptions can do about passing Apple's 30% on to the consumer, and thereby seems to distort the market.
That all being said, I maintain that Apple think they can use their current position — for as long as it lasts — to gouge subscription vendors, not that they're intending to keep down competitors.
Yes, everything other than Safari is crippled in the sense that under iOS 4.3 it runs only exactly as fast as it did under iOS 4.2.
It's also nothing to do with http versus file. If you read the article, you'll see that web apps no longer appear to be cached locally. So both routes are http. The distinction seems to be between UIWebViews, used everywhere except Safari, and Safari. Their browser is a lot faster than web content displayed everywhere else.
It's more than possible that they want to get a significant amount of field testing done for free by incorporating the new engine into Safari. When they're sure it works properly, they'll put it into the standard web component used throughout the OS.
Apple have never denied they were working on a tablet or slate. They never confirm or deny their plans in advance.
As a result of that, I agree that they very often have plans they haven't discussed and that we can't conclude with certainty that the SDK wasn't a plan at initial launch. But it's wrong to impute dishonesty.
Conversely, in response to the original poster, half a decade of maintaining a bridge between their native APIs and Java and citing Java as an on-the-box feature, then half a decade of maintaining it internally at their own cost, then a collaborative effort to transfer maintenance to the same people that maintain Java on Windows — including providing source code and documentation — suggests they probably don't despise Java.
Are you some sort of robot, posting your identikit comment to every Apple story that appears?
There are no grounds whatsoever from which you can conclude that Apple's 30% charge on subscriptions is an attempt to deceive customers. You've taken one wrong and used it to allege a completely unrelated offence.