That is all...
More detailed rumors about the iPhone 5S that's reportedly being readied for a September 10 unveiling include a 128GB option, improved low-light camera performance due to a dual-LED flash and an ƒ/2.0 lens, a new 64-bit A7 SoC based on the ARMv8 architecture, and a step up in memory bandwidth from LPDDR2 to LPDDR3 RAM. Longtime …
That is all...
I bet you won't yawn when the new iPhone 5S turns out to even work when you're holding it wrong! I know, that wasn't mentioned in the list, but maybe...
Apple are very worried about Windows Phone - and especially Nokia's high-end devices. WP has already hit ~10% market share in a number of countries, including the UK. Apparently Apple are considering competing on price! They must be really panicking if that's true!
Blimey, I reckon a 24-month contract will be £1M a month to pay for that baby. I assume you can make calls on it?
3 up votes and 5 down votes. Hmmm?
Alrighty then. The iPhone is the best phone on planet Earth. Samsung are crap and deserve all the lawsuits they get.
I'll come back and check the up votes later.
Ta ta for now
There you go.
Shouldn't REALLY surprise you that a story about new iPhone rumours will attract a fair number of iPhone fans. :o)
The best smartphones are now made by Sony: Xperia Z, Z Ultra, SP and the coming Honami in September.
Check this out: https://www.facebook.com/sonymobile
"The best smartphones are now made by Sony:"
Oh please.... Stop smoking that rotten crack.
Sony stopped making good smartphones after the P910i and stopped making good camera-phones after the K850i.
And I stopped caring about them ages ago.
If you stopped caring about them, you wouldn't know how good they are or not, surely.
And before any gimp gets excited about that comment, that could apply to anyone making that comment that brand x are crap, I stopped caring about x years ago.
It's called ignorance, and there is no excuse for it.
If they'll be shipping it with only 1GB of RAM, and given that iOS doesn't use virtual memory for process storage, why would Apple want to transition? The only use under iOS as currently designed would be to allow larger sections of the disk to be memory mapped (a virtual memory use Apple does permit), which is not exactly a limitation developers often run up against.
I could understand it if the risk were ignoring the next step until it's too late but the 64-bit ARM architecture is ready to go and Apple controls both the tools and the channel of distribution so it can force very quick changes in those.
As Apple doesn't usually implement technology until there's a pressing business need to do so, this rumour doesn't sound all that likely to me.
Don't forget that you can process 64 bit values a hell of a lot quicker on a 64 bit system than a 32 bit system. On a 32 bit system, there's a speed penalty for using doubles and long ints (or however else you want to describe a 64 bit float/integer). Less so on a 64 bit system.
I agree that a 64 bit CPU in a phone is silly, though it is possible that architectural cleanups in ARM64 (removing outdated stuff, adding more registers) might allow making a CPU that runs 64 bit code faster (or using less power) compared to 32 bit code - basically, whether it can overcome the disadvantage of pointers taking up twice as much on-chip cache. We'll have to see the first few real-world 64 bit ARM SoCs before we know. I suspect there's really no difference, and the only advantage of 64 bit today would be the same as that of a quad core CPU... marketing!
I actually could see Apple making the A7 64 bits but leaving iOS 7 running 32 bit. The reason for this would be to allow developers to run iOS 8 betas in 64 bit next summer and prepare their apps to run in 64 bit, if they plan to make iOS 8 support 64 bit. They wouldn't need it for the iPhone, but by then they might want to ship a 4GB iPad. While there are ways to have more than 4GB and run a 32 bit OS, they are ugly (PAE).
If Apple ever wants to let people use their phone/tablet as a desktop computer, by having an OS X "app" that runs when it is docked to a monitor and allows use of a Bluetooth keyboard/mouse, you'd get a fairly normal desktop computing experience for those tasks a phone or tablet sucks at, such as anything involving a lot of typing, or the type of FPS games that don't really translate well to a touchscreen. In that case they'd definitely want a 64 bit CPU (and yes, maybe even quad cores). But perhaps I'm the only one who thinks this is a good idea...
The only real difference for 64 bit CPUs is the ability to address more than 4GB of RAM. 32 bit ARM SoCs already support floating point values using 64 bit FP registers, so a 64 bit CPU is not going to speed up floating point. There probably aren't any apps that would get a noticeable boost in speed because they are performing integer operations on values > 2^32.
Yeah, but ARM is awful for floating point, so you want to avoid it if at all possible.
Is there a benefit in terms of the number of registers you get, or is it like PowerPC or SPARC, where the only difference is whether you use 64 bit instructions or not?
(There shouldn't be any floating point in the kernel, and I can't see them needing more than 4GB of RAM for the moment.)
The only thing I have noticed that matters on Solaris, when I tested a fair few things on SPARC (using -m32 or -m64 with suncc), is OpenSSL. (x86 isn't the same, because you get more registers in 64 bit mode, which makes it always worth it.)
Actually 64-bit gives you two things, 64-bit addressing (address lots of RAM, useless on a phone with fixed RAM) and 64-bit numbers (most useful for floating point), which are also pretty useless on a phone for the most part. The increased size of memory addresses tends to slow things down or use more memory, so Java for instance compresses such things.
Classic mistake: 64-bit is not always faster; it depends on what your application does.
The question you should be asking is, why not? 64 bit doesn't increase the CPU die size very much (maybe a few percent) and it doesn't make anything run much slower.
It's really not that big a deal. Remember, Nintendo 64s from the mid 90s (with 4 or 8MB of RAM) had 64 bit CPUs. Also, AMD started transitioning desktop CPUs to 64 bit when 256 to 512MB of RAM was commonplace.
It's not all about addressing physical memory. Files have addresses too. Also certain algorithms run faster with native support for 64 bit integers.
(+1 to the other poster who correctly said that "64 bit" has nothing to do with support for 64-bit floating point numbers.)
"Classic mistake: 64-bit is not always faster; it depends on what your application does."
Yup. I'm with the crowd that says 64-bit - huh? In general it's going to slow things down if your instruction bus width has to double in size. I'm not sure how ARM is handling the transition to 64-bit. A new set of 64-bit wide instructions plus legacy (retroactively named Thumb-32?) plus Thumb-16 seems awkward, to say the least.
Besides the inherent disadvantages of 64-bit with respect to increased code size and/or the need for different ISA modes, what advantages would it have? Only scientific applications really need double-precision floats, so that's the preserve of clusters, not phones. And there are precious few other applications that are screaming out for bigger integers that can store values > 4G (or +/- 2G signed). This is especially true when your physical RAM doesn't even extend beyond 1GB (though I guess mmapping a really large file or externally shared memory might be a potential use).
In my opinion, the best way to improve current 32-bit ARM chips would be to increase the number of registers (though it's already pretty decent with 16, and bumping this also means increasing instruction size) and/or improve the range of NEON SIMD instructions (with ability to do things like summing and testing conditions across values and a way to select/shuffle sub-words based on the condition, though again, this is much more useful with 64-bit or better registers). So going 64-bit for its own sake is a terrible idea, but if it's just a side effect of implementing a richer set of features, it's OK I guess.
Damn... I meant to add: 4GB of RAM should be enough for anybody!
The only thing I can possibly think of, without reading up on how their implementation works, is the fingerprint recognition in real time at any orientation. Even then it seems a bit much perhaps.
Why 64 bit on a phone?
These are smartphones. Smartphones play games. If you think games (and pretty much anything else with large 3D scenes) won't benefit from 64 bit accuracy, where have you been for the last 20 years?
I know it's not a phone game, but Kerbal Space Program is one example of how limited float accuracy can cause all kinds of weirdness, like watching your aerobraking apoapsis vary between "completely miss the atmosphere" and "make a huge crater in a lithobraking manoeuvre" until you get closer to the target planet.
Being able to grab huge numbers in and crunch on them in the minimum number of clock cycles is always going to be an advantage.
64 bit will also look good for marketing because 64 must be twice as good as 32 for anyone without a technical understanding of the issues.
Which is probably most of the iPhone buyers. No class or demographic barrier on the iPhone - not going by the range of iPhone cases that can be bought in my local Tesco.
The only real difference for 64 bit CPUs is the ability to address more than 4GB of RAM.
You do realise the ARMv7 architecture has support for 40-bit memory addressing, meaning 4GB has never been a limit (it's closer to 1TB).
What? Tesco's and the iPhone? Shirley this can't be true?
Wow. According to Fandroids, you have to be a shiny-shiny hipster to want/own an iPhone. I can't think of a place less likely to be used by hipsters than Tesco's, apart from Asda.
Not that I own an iPhone or frequent Tesco's or Asda on a regular basis.
Paris will be crying into her Jimmy Choo's at the thought of buying her iPhone from Tesco's....
"A new set of 64-bit wide instructions plus legacy (retroactively named Thumb-32?) plus Thumb-16 seems awkward, to say the least."
64 bit chips don't have 64 bit wide instructions. The "bitness" of a chip refers to how wide its [integer] registers and ALUs are. Nothing else. Re: instructions: it takes just as many bits to say e.g. "add register #2 to register #3" regardless of how big those registers are... 32 bit, 64... 8... 256... etc.
Also, 64 bit software doesn't use that much memory, usually. The default integer size is still typically 32 bits. It's just the size of pointers that changes from 32 to 64 bits.
"If you think games (and pretty much anything else with large 3D scenes) won't benefit from 64 bit accuracy, where have you been for the last 20 years?"
Even the original iPhone had a floating point unit with hardware support for double precision, i.e., 64 bit. Again, the bitness of a CPU doesn't refer to how wide the floating point units are.
"You do realise the ARMv7 architecture has support for 40-bit memory addressing, meaning 4GB has never been a limit (it's closer to 1TB)."
And 32 bit Intel chips since the Pentium Pro can address 36 bits of physical memory (64GB) via PAE. But it's a million times nicer to be able to specify addresses in the CPU's native integer width.
I'm amazed there's so much pushback to switching phones to 64 bit when there's basically no disadvantage to doing so. And besides, many Android phones now have 2GB of memory. Typically, 32 bit OSes can't make full use of 4GB of memory because some of the address space is reserved for DMA transfers and whatnot. Which means that when phones start shipping with 4GB of RAM in the next year or two, we will need 64 bit processors and OSes to take full advantage of them.
I do wonder if the people complaining about 64 bit are just Fandroids and this is a knee-jerk reaction to Apple doing anything. If a Google phone came out with a 64 bit CPU, what are the odds that message boards would be flooded with Fandroids using it as an example of how Apple just makes shiny, technically inferior products with no focus on engineering?
64-bit apps will generate 60% of the app revenue for iTunes, whereas 32-bit apps now only generate 30%. I'm sure that's enough "pressing business need".
Actually andreas has about the only sensible reason for such a move - they could claim incompatibility of the shiny new 64-bit iPhone with the old 32-bit one, and say you have to buy new 64-bit versions of your apps.
Which puts more money in Apple's bank account.
I'd be doing it just to get some 64-bit arm experience under my belt. (I'm not sure that came out right...)
Run iOS apps on a Mac and keep that Haswell sleeping.
to TomH: propaganda.
Maybe, just maybe, Apple is preparing the ground for the next generation of phones with a unified address space - no more distinction between volatile and persistent storage.
"I do wonder if the people complaining about 64 bit are just Fandroids and this is a knee-jerk reaction to Apple doing anything."
No, that's called a persecution complex and can become a debilitating illness if left unchecked.
The way that the 40 bit addressing works on a 32 bit ARM is by the use of segment registers, allowing you to offset the virtual address space for a process into more than 4GB of memory. It's not new technology, and has been a cornerstone of processor instruction sets since the mid-1970s.
The first architecture I saw address extension done on was the 16-bit PDP-11, which had its address space stretched from 16 to 18 and then to 22 bits in different models. I do not know the ins and outs of Intel's PAE, but I suspect that it is something similar. The Power processor family also does something similar for its virtual address space, although it does not need it to stretch the address space. Most other modern processors (those designed in the last 30 years) do something similar to support virtual addressing (but not necessarily for address extension).
The basic method involves breaking up the virtual address space into chunks called segments, and then adding a real-address offset to the base address (normally designated as a page number) in the address decoding hardware. This allows a process to see a linear address range scattered over a larger, possibly non-contiguous address space. The impact on the code-writer is ZERO. There is nothing that needs to be done for a user-land process to cope with this technique. All multi-tasking OSes have done this for what seems like forever.
It does make the OS do a bit more work every time you start or context-switch a process (it has in some way to manipulate the segment registers - it's different in different architectures), but it's well understood what needs to be done, and has been a standard technique. And it is perfectly possible to write the OS itself to work in a virtual linear address space (an example was the 32-bit AIX kernel running on 64-bit RS64 and later Power processors), where the OS is in control of manipulating the segment registers for itself, as well as for all of the other processes. The 32-bit kernel could manage 64 bit processes, with more than 4GB of real memory on the system, which when I explained it used to puzzle people for whom the 32-bit to 64-bit migration in Windows seemed like a huge deal.
The major limitation to this is although the system may have more memory than the size of an address, it can only be used in chunks determined by the width of an address. So for example, an individual process in an ARMv7 with 40 bit LPAE can only address 4GB of the address space, even though the architecture will support 1TB of real memory. But of course, you can have more than one process, allowing you to utilise all the available memory. And as a side effect, you have the ability to share pages across multiple processes for in-core shared libraries, shared memory segments, and memory mapped-files.
This is not even a problem for the OS, because all the writers have to do is to keep at least one segment free, and then manipulate the segment register to allow the OS to see any of the real memory. Of course, it can't see all of memory at the same time, but it can get access to any of the memory.
The issue of whether 64 bit addresses will add any more inefficiency over 32 bit addresses is all to do with whether half-word loads and stores can be done natively. On some architectures, performing a half-word operation (for example a 32 bit load or store on a 64 bit machine) requires loading an entire 64 bit word, and then masking and shifting the required part of the word to obtain the correct half-word value. This may be microcoded, but on some architectures it had to be done by the program itself. This is slower, and on some architectures the decision about whether to 'waste' 32 bits of memory versus the performance cost of half-word operations was a difficult one.
I would have to research the ARMv7 and ARMv8 ISA to know whether this is the case, although I would welcome someone in the know to provide an answer.
Whether floating point load or store operations can be done in units other than the word-length is different from architecture to architecture. For example in Power 6, it was necessary to load a floating point value through a GP register (or two in the case of a double-word FP value), and then move it to a floating point register. For Power6+ and Power7, it is possible to directly load from memory to a floating-point register, allowing you to do double-word FP loads (128 bits) in a single load operation. This decouples the FP processor from the natural word size of the CPU.
Not that I would buy an iPhone, but one in chrome or platinum could look nice. Is gold not just a little trashy these days, though?
More likely to be anodised with something hard and goldie-looking. Titanium Nitride would fit the bill (you'll probably have seen this treatment on some drill bits), except it looks more goldie than gold (and thus trashy). The UK bicycle component company Middleburn used to make chain rings with some tasteful hard-anodised colours, if earthy colours were your thing.
Well, seeing as ifixit seemed to reckon that Apple pay about $20 per 32GB of flash for the iPhone 5 (for which Apple charge $100, e.g. when bumping from the 32GB to the 64GB model), and they reckon there's $440-odd of pure profit in the base 16GB model, I reckon that with another year of price reductions and economies of scale in supply, Apple could manage to sell ONLY a 128GB model at the base price and still make $370 profit per phone.
Although of course that doesn't consider that it looks like Apple make $611 per 64GB iPhone due to the extreme overpricing of a few gigs of extra flash, and of course there are some losses in retail distribution etc. that ifixit don't take into consideration.
Maybe not the Apple way, but perhaps time to sacrifice a little bit of the profit for market share? There are rumblings afoot and maybe this is one way to tackle them.
I think they'd sway some of the market their way, with the dual purpose of not losing the Apple fans who've found they constantly have to administer the space on their device cos they didn't want to pay an extra $100/$200 for $10/$30 worth of flash back when they bought it. I also think it would make the decision a lot easier for those who are updating their phone and considering other options.
Go from 16/32/64 to 32/64/128, and the 5C cheap model would stay at 8GB (assuming they're really trying to cut down the price on it as much as possible to hit half the retail price of the base model of the 5S)
There's always iCloud if you want to store more than your phone can hold. I have the 16GB model and I still have almost half my space left. I don't keep my entire music collection on my phone like some people though.
The IET engineering magazine seems to think they pay $20 for 64GB. (Unless they are using better quality stuff this time, it has always been the same figure, with part numbers at the commercial price - i.e. in bulk but without any discount.) Apple's bill of materials is always exactly the same; they never put anything better in if it goes over the figure. Don't remember what it is.
(Some of the stuff in the magazine really annoys me. There was one thing in particular by Monster headphones about how they engineer them to last only 18 months).
8GB is no longer relevant for apps nowadays, since a lot of iOS games take around 1GB, some more, some less. You're okay with 16GB because you don't store games on your phone like others do. I would say 32GB should be the standard storage nowadays.
So it could be: the i5s at 32/64/128GB ($199/$299/$399), the i5 at 32/64GB ($99/$199) only, and maybe the i5c just available at 16GB (free with subsidy). If Apple does ship a 128GB i5s and keeps the i5 as the mid-range model, the i5 will have similar storage to the 5s, starting from 32GB, letting the low-end i5c take over the 16GB tier.
"The IET engineering magazine seems to think they pay $20 for 64GB. (Unless they are using better quality stuff this time, it has always been the same figure, with part numbers at the commercial price."
I don't know where they're getting their figures from but the flash in iPhones is much more like an SSD than a cheap SD card. Those are the prices that you should be comparing. And you can't get a 64GB SSD for anywhere near $20.
"There's always iCloud if you want to store more than your phone can hold."
Sure, it will definitely help in my area, where the only achievement my local telcos have managed over the past 5 years is bragging about how good their 3G networks are while in reality they sucked even worse than a 70-year-old hooker with false teeth!
""The IET engineering magazine seems to think they pay $20 for 64GB. (Unless they are using better quality stuff this time, it has always been the same figure, with part numbers at the commercial price."
I don't know where they're getting their figures from but the flash in iPhones is much more like an SSD than a cheap SD card. Those are the prices that you should be comparing. And you can't get a 64GB SSD for anywhere near $20."
Well the figures I saw were Apple paying $40 for 64GB flash, not $20, and you can bet that the flash that goes into a 64GB SSD is around that price (probably lower) with retail prices now being $60-70.
Of course you can't *buy* a 64GB SSD for anywhere near that price, same as you can't buy a 64GB iPhone for anything like the BOM price. I'm talking about the hit Apple would take.
64GB phone (e.g., S4) plus 64GB microSD - 128GB phones are already available today. And probably far cheaper than who knows how much Apple will charge for the 128GB option. I imagine 128GB microSD will be available soon too.
"Since Apple is in charge of both hardware and OS design,"
A common claim for their phones and computers, but it doesn't make sense. The hardware is manufactured by companies like Intel and Samsung. True, they have a hand in it, but Samsung also have a hand in their OS design (since Android is Open Source, and they build their own OS around it). Apple may have more control over their OS, but Samsung have more control over their hardware, which they make themselves.
Give me 128GB contiguous over a split memory.
Samsung Galaxy S4 owners with the 16GB model will tell you that no size of SD card will help you download apps if your internal memory is full.
And that's why the first thing you do is APP2SD as much as you can, and always store your media on the SD card.
Imagine if those S4 owners had a 16GB iThing and zero storage upgrade options? I'd rather have a split huge memory than a contiguous tiny one.
NOBODY manufactures all the components in their phones. Not Samsung, not Google, not Nokia. No one even comes close to being able to do so.
Apple is in CHARGE of both software and hardware design != Apple makes everything themselves. It doesn't matter if they don't manufacture all the hardware inside, they choose exactly what hardware is used and they design the SoC themselves to their own needs, even going so far as designing their own CPU core from scratch.
That's a difference between iOS and Android. iOS can be designed knowing exactly what components it will have to work with. Android has to be designed for a huge range, because there are some very low end devices missing a lot of basic features, and high end devices that add crazy features Google never even considered.
I doubt this, but it is worth mentioning.
Since Apple do control the hardware and OS, and have a significant hand in the design of the CPU itself, it isn't impossible for them to start exploring less conventional architectures. Nuking the filesystem and replacing it with a persistent object store that is managed by directly addressing its contents would be a great thing to do. That would require 64 bit addressing now. They did have a system that worked a bit like this once - it was called the Newton.
Like I said, I very much doubt it, but I continue to nurture the hope that with the huge ecosystem of hardware and software design now under the Apple banner, they will start to innovate past the current typical architectures.
"A golden casing" you say; is that because Steve Mobs pees on each and every one of them:
128GB of storage is finally getting to a level that begins to be a little bit more meaningful. 64-bit is a step toward future-proofing, as OSes will all go there, like it or not. Cool changes.
128GB of storage, faster processor, sapphire buttons. Is someone worried about the possibility of the Ubuntu Edge?