That is all...
More detailed rumors about the iPhone 5S, reportedly being readied for a September 10 unveiling, include a 128GB option, improved low-light camera performance thanks to a dual-LED flash and an ƒ/2.0 lens, a new 64-bit A7 SoC based on the ARMv8 architecture, and a step up in memory bandwidth from LPDDR2 to LPDDR3 RAM. Longtime …
If you stopped caring about them, you wouldn't know whether they're any good or not, surely.
And before any gimp gets excited about that comment, it could apply to anyone claiming that brand X is crap; I stopped caring about X years ago.
It's called ignorance, and there is no excuse for it.
If they'll be shipping it with only 1GB of RAM, and given that iOS doesn't use virtual memory for process storage, why would Apple want to transition? The only use under iOS as currently designed would be to allow larger sections of the disk to be memory mapped (a virtual memory use Apple does permit), which is not exactly a limitation developers often run up against.
I could understand it if the risk were ignoring the next step until it's too late, but the 64-bit ARM architecture is ready to go, and Apple controls both the tools and the channel of distribution, so it can force very quick changes in those.
As Apple doesn't usually implement technology until there's a pressing business need to do so, this rumour doesn't sound all that likely to me.
Don't forget that you can process 64 bit values a hell of a lot quicker on a 64 bit system than a 32 bit system. On a 32 bit system, there's a speed penalty for using doubles and long ints (or however else you want to describe a 64 bit float/integer). Less so on a 64 bit system.
The only real difference for 64 bit CPUs is the ability to address more than 4GB of RAM. 32 bit ARM SoCs already support floating point values using 64 bit FP registers, so a 64 bit CPU is not going to speed up floating point. There probably aren't any apps that would get a noticeable boost in speed because they are performing integer operations on values > 2^32.
"A new 64-bit wide instructions plus legacy (retroactively named Thumb-32?) plus Thumb-16 seems awkward, to say the least."
64 bit chips don't have 64 bit wide instructions. The "bittiness" of a chip refers to how wide its [integer] registers and ALUs are. Nothing else. Re: instructions: it takes just as many bits to say e.g. "add register #2 to register #3" regardless of how big those registers are... 32 bit, 64... 8... 256... etc.
Also, 64 bit software doesn't use that much memory, usually. The default integer size is still typically 32 bits. It's just the size of pointers that changes from 32 to 64 bits.
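The point about pointer width versus default integer width is easy to check. A quick sketch using Python's ctypes, which reports the C type sizes of whatever platform it runs on:

```python
import ctypes

# On common 64-bit platforms (LP64 and LLP64 alike), a C "int" stays
# at 4 bytes; it's the pointer-sized types that grow to 8 bytes.
int_size = ctypes.sizeof(ctypes.c_int)      # typically 4 everywhere
ptr_size = ctypes.sizeof(ctypes.c_void_p)   # 4 on 32-bit, 8 on 64-bit
ll_size = ctypes.sizeof(ctypes.c_longlong)  # 8 regardless of bitness

print(f"int={int_size}B pointer={ptr_size}B long long={ll_size}B")
```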
"If you think games (and pretty much anything else with large 3D scenes) won't benefit from 64 bit accuracy, where have you been for the last 20 years?"
Even the original iPhone had a floating point unit with hardware support for double precision, i.e., 64 bit. Again, the bittiness of a CPU doesn't refer to how wide the floating point units are.
"You do realise the ARMv7 architecture has support for 40-bit memory addressing, meaning 4GB has never been a limit (it's closer to 1TB)."
And 32 bit Intel chips since the Pentium Pro can address 36 bits of physical memory (64GB) via segment registers. But it's a million times nicer to be able to specify addresses in the CPU's native integer width.
I'm amazed there's so much pushback to switching phones to 64 bit when there's basically no disadvantage to doing so. And besides, many Android phones now have 2GB of memory. Typically, 32 bit OSs can't make full use of 4GB of memory because some of the address space is reserved for DMA transfers and whatnot. Which means that when phones start shipping with 4GB of RAM in the next year or two, we will need 64 bit processors and OSs to take full advantage of them.
I do wonder if the people complaining about 64 bit are just Fandroids and this is a knee-jerk reaction to Apple doing anything. If a Google phone came out with a 64 bit CPU, what are the odds that message boards would be flooded with Fandroids using it as an example of how Apple just makes shiny, technically inferior products with no focus on engineering?
The way that 40-bit addressing works on a 32-bit ARM is by the use of segment registers, allowing you to offset the virtual address space of a process into more than 4GB of memory. It's not new technology; it has been a cornerstone of processor instruction sets since the mid-1970s.
The first architecture I saw address extension done on was the 16-bit PDP-11, which had its address space stretched from 16 to 18 and then to 22 bits in different models. I don't know the ins and outs of Intel's PAE, but I suspect it is something similar. The Power processor family also does something similar for its virtual address space, although it does not need it to stretch the address space. Most other modern processors (those designed in the last 30 years) do something similar to support virtual addressing (but not necessarily for address extension).
The basic method involves breaking the virtual address space up into chunks called segments, and then adding a real-address offset to the base address (normally designated as a page number) in the address-decoding hardware. This allows a process to see a linear address range scattered over a larger, possibly non-contiguous, address space. The impact on the code writer is ZERO. There is nothing a user-land process needs to do to cope with this technique. All multi-tasking OSes have done this for what seems like forever.
It does make the OS do a bit more work every time you start or context-switch a process (it has to manipulate the segment registers in some way; the details differ between architectures), but what needs to be done is well understood, and it has been a standard technique. And it is perfectly possible to write the OS itself to work in a virtual linear address space (an example was the 32-bit AIX kernel running on 64-bit RS64 and later Power processors), where the OS is in control of manipulating the segment registers for itself, as well as for all of the other processes. The 32-bit kernel could manage 64-bit processes, with more than 4GB of real memory in the system, which, when I explained it, used to puzzle people for whom the 32-bit to 64-bit migration in Windows seemed like a huge deal.
The major limitation to this is that although the system may have more memory than the size of an address, it can only be used in chunks determined by the width of an address. So for example, an individual process on an ARMv7 with 40-bit LPAE can only address 4GB of the address space, even though the architecture will support 1TB of real memory. But of course, you can have more than one process, allowing you to utilise all the available memory. And as a side effect, you have the ability to share pages across multiple processes for in-core shared libraries, shared memory segments, and memory-mapped files.
This is not even a problem for the OS, because all the OS writers have to do is keep at least one segment free, and then manipulate that segment register to let the OS see any of the real memory. Of course, it can't see all of memory at the same time, but it can get access to any of it.
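The scheme described above can be sketched in a few lines. This is a deliberately simplified toy model (the segment size and the translation details are made up for illustration, not taken from any real MMU): the process still uses 32-bit virtual addresses, but each segment's base register holds a wider physical address, so a segment can be mapped anywhere in, say, a 40-bit (1TB) physical space.

```python
# Toy model of segment-based address extension: 32-bit virtual
# addresses, 40-bit physical addresses. The top bits of the virtual
# address select a segment register; the rest is an offset added to
# that segment's (wider) physical base.
SEGMENT_BITS = 28                     # 256MB segments, 16 per process
OFFSET_MASK = (1 << SEGMENT_BITS) - 1

def translate(virtual_addr, segment_bases):
    """Map a 32-bit virtual address to a 40-bit physical address."""
    assert 0 <= virtual_addr < 2**32
    seg = virtual_addr >> SEGMENT_BITS      # which segment register
    base = segment_bases[seg]               # physical base, up to 40 bits
    return base + (virtual_addr & OFFSET_MASK)

# A process whose segment 0 is mapped above the 4GB line:
bases = {0: 0x12_0000_0000}                 # base > 4GB, fits in 40 bits
phys = translate(0x0000_1234, bases)
assert phys > 2**32                         # beyond what 32 bits can name
print(hex(phys))
```

The process never sees the wide address; only the OS touches the segment bases, which is exactly why the impact on user-land code is zero.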
The issue of whether 64-bit addresses will add any more inefficiency over 32-bit addresses comes down to whether half-word-aligned loads and stores can be done natively. On some architectures, performing a half-word operation (for example, a 32-bit load or store on a 64-bit machine) requires loading an entire 64-bit word, and then masking and shifting to obtain the required half-word value. This may be microcoded, but on some architectures it had to be done by the program itself. This is slower, and on some architectures the decision about whether to 'waste' 32 bits of memory versus the performance cost of half-word operations was a difficult one.
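The mask-and-shift dance described above looks like this (a toy model in Python; on real hardware this happens in the load/store unit or in microcode, or is emitted as extra instructions by the compiler):

```python
# Emulating a 32-bit "half-word" load on a machine that can only
# fetch full 64-bit words: load the whole word, then shift and mask
# out the half you actually wanted.
def load32_from_word64(word64, high_half):
    """Extract one aligned 32-bit half of a 64-bit word."""
    assert 0 <= word64 < 2**64
    shift = 32 if high_half else 0
    return (word64 >> shift) & 0xFFFF_FFFF

word = 0xDEAD_BEEF_CAFE_F00D
assert load32_from_word64(word, high_half=True) == 0xDEAD_BEEF
assert load32_from_word64(word, high_half=False) == 0xCAFE_F00D
```

The extra shift and mask are exactly the overhead the poster is describing: two or three operations where a native half-word load would take one.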
I would have to research the ARMv7 and ARMv8 ISA to know whether this is the case, although I would welcome someone in the know to provide an answer.
Whether floating point load or store operations can be done in units other than the word-length is different from architecture to architecture. For example in Power 6, it was necessary to load a floating point value through a GP register (or two in the case of a double-word FP value), and then move it to a floating point register. For Power6+ and Power7, it is possible to directly load from memory to a floating-point register, allowing you to do double-word FP loads (128 bits) in a single load operation. This decouples the FP processor from the natural word size of the CPU.
Yeah, but ARM is awful for floating point, so you want to avoid it if at all possible.
Is there a benefit in terms of the number of registers you get, or is it like PowerPC or SPARC, where the only difference is whether you use 64-bit instructions or not?
(There shouldn't be any floating point in the kernel, and I can't see them needing more than 4GB of RAM for the moment.)
The only thing I've noticed that matters on Solaris, when I tested a fair few things on SPARC (using -m32 or -m64 with suncc), is OpenSSL. (x86 isn't the same, because you get more registers in 64-bit mode, which makes it always worth it.)
Actually, 64-bit gives you two things: 64-bit addressing (address lots of RAM; useless on a phone with fixed RAM) and 64-bit numbers (most useful for floating point), which are also pretty useless on a phone for the most part. The increased size of memory addresses tends to slow things down or use more memory, which is why Java, for instance, compresses pointers where it can.
Classic mistake: 64-bit is not always faster; it depends on what your application does.
Yup. I'm with the crowd that says 64-bit... huh? In general it's going to slow things down if your instruction bus width has to double in size. I'm not sure how ARM is handling the transition to 64-bit. A new 64-bit wide instructions plus legacy (retroactively named Thumb-32?) plus Thumb-16 seems awkward, to say the least.
Besides the inherent disadvantages of 64-bit with respect to increased code size and/or the need for different ISA modes, what advantages would it have? Only scientific applications really need double-precision floats, so that's the preserve of clusters, not phones. And there are precious few other applications screaming out for bigger integers that can store values above 2^32 (or ±2^31 signed). This is especially true when your physical RAM doesn't even extend beyond 1GB (though I guess mmapping a really large file or externally shared memory might be a potential use).
In my opinion, the best way to improve current 32-bit ARM chips would be to increase the number of registers (though it's already pretty decent with 16, and bumping this also means increasing instruction size) and/or improve the range of NEON SIMD instructions (with ability to do things like summing and testing conditions across values and a way to select/shuffle sub-words based on the condition, though again, this is much more useful with 64-bit or better registers). So going 64-bit for its own sake is a terrible idea, but if it's just a side effect of implementing a richer set of features, it's OK I guess.
Why 64 bit on a phone?
These are smartphones. Smartphones play games. If you think games (and pretty much anything else with large 3D scenes) won't benefit from 64 bit accuracy, where have you been for the last 20 years?
I know it's not a phone game, but Kerbal Space Program is one example of how limited float accuracy can cause all kinds of weirdness, like watching your aerobraking apoapsis vary between "completely miss the atmosphere" and "make a huge crater in a lithobraking manoeuvre" until you get closer to the target planet.
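That kind of wobble is easy to reproduce: single precision has about 24 bits of mantissa, so at planetary-scale distances in metres it can't even represent a position to the nearest metre. A quick sketch using Python's struct module to round-trip a value through 32-bit float storage (the distance here is an arbitrary illustrative number, not anything from KSP):

```python
import struct

def as_float32(x):
    """Round-trip a Python float (64-bit) through 32-bit storage."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# ~123,457 km expressed in metres: at this magnitude, float32's
# spacing between representable values is already 8 metres.
apoapsis = 123_456_789.9
stored = as_float32(apoapsis)
error = abs(stored - apoapsis)
print(f"float32 rounds {apoapsis} to {stored} (off by {error:.1f} m)")
assert error > 1.0   # more than a metre of error from storage alone
```

Double precision (the native width of a 64-bit FPU, which, as noted elsewhere in the thread, even 32-bit ARM chips already have) represents the same value exactly to well below a millimetre.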
Being able to grab huge numbers in and crunch on them in the minimum number of clock cycles is always going to be an advantage.
I agree that a 64-bit CPU in a phone is silly, though it is possible that architectural cleanups in ARM64 (removing outdated stuff, adding more registers) might allow making a CPU that runs 64-bit code faster (or using less power) than 32-bit code, provided it can overcome the disadvantage of pointers taking up twice as much on-chip cache. We'll have to see the first few real-world 64-bit ARM SoCs before we know. I suspect there's really no difference, and the only advantage of 64 bit today would be the same as that of a quad-core CPU... marketing!
I actually could see Apple making the A7 64-bit but leaving iOS 7 running 32-bit. The reason would be to allow developers to run iOS 8 betas in 64-bit next summer and prepare their apps, if Apple plans to make iOS 8 support 64-bit. They wouldn't need it for the iPhone, but by then they might want to ship a 4GB iPad. While there are ways to have more than 4GB and run a 32-bit OS, they are ugly (PAE).
If Apple ever wants to let people use their phone or tablet as a desktop computer, with an OS X "app" that runs when it is docked to a monitor and allows use of a Bluetooth keyboard and mouse, you could get a fairly normal desktop computing experience for those tasks a phone or tablet sucks at, such as anything involving a lot of typing, or the type of FPS games that don't really translate well to a touchscreen. For that they'd definitely want a 64-bit CPU (and yes, maybe even quad cores). But perhaps I'm the only one who thinks this is a good idea...
The question you should be asking is, why not? 64 bit doesn't increase the CPU die size very much (maybe a few percent) and it doesn't make anything run much slower.
It's really not that big a deal. Remember, Nintendo 64s from the mid 90s (with 4 or 8MB of RAM) had 64 bit CPUs. Also, AMD started transitioning desktop CPUs to 64 bit when 256 to 512MB of RAM was commonplace.
It's not all about addressing physical memory. Files have addresses too. Also certain algorithms run faster with native support for 64 bit integers.
(+1 to the other poster who correctly said that "64 bit" has nothing to do with support for 64-bit floating point numbers.)
64 bit will also look good for marketing because 64 must be twice as good as 32 for anyone without a technical understanding of the issues.
Which is probably most of the iPhone buyers. No class or demographic barrier on the iPhone, going by the range of iPhone cases that can be bought in my local Tesco.
What? Tesco's and the iPhone? Shirley this can't be true?
Wow. According to Fandroids, you have to be a shiny-shiny hipster to want/own an iPhone. I can't think of a place less likely to be used by hipsters than Tesco's, apart from Asda.
Not that I own an iPhone or frequent Tesco's or Asda on a regular basis.
Paris will be crying into her Jimmy Choo's at the thought of buying her iPhone from Tesco's....
More likely to be anodised with something hard and goldie-looking. Titanium Nitride would fit the bill (you'll probably have seen this treatment on some drill bits), except it looks more goldie than gold (and thus trashy). The UK bicycle component company Middleburn used to make chain rings with some tasteful hard-anodised colours, if earthy colours were your thing.
Well, seeing as iFixit seemed to reckon that Apple pays about $20 per 32GB of flash for the iPhone 5 (for which Apple charges $100, e.g. when bumping from the 32GB to the 64GB model), and they reckon there's $440-odd of pure profit in the base 16GB model, I reckon that with another year of price reductions and economies of scale in supply, Apple could manage to sell ONLY a 128GB model at the base price and still make $370 profit per phone.
Although of course that doesn't consider that Apple apparently makes $611 per 64GB iPhone, due to the extreme overpricing of a few gigs of extra flash, and of course there are some losses in retail distribution etc. that iFixit doesn't take into consideration.
Maybe not the Apple way, but perhaps time to sacrifice a little bit of the profit for market share? There are rumblings afoot and maybe this is one way to tackle them.
I think they'd swing some of the market their way, with the dual benefit of not losing the Apple fans who have found they constantly have to administer the space on their device cos they didn't want to pay an extra $100/$200 for $10/$30 worth of flash back when they bought it. I also think it would make the decision a lot easier for those who are updating their phone and considering other options.
Go from 16/32/64 to 32/64/128, and the 5C cheap model would stay at 8GB (assuming they're really trying to cut down the price on it as much as possible to hit half the retail price of the base model of the 5S)
There's always iCloud if you want to store more than your phone can hold. I have the 16GB model and I still have almost half my space left. I don't keep my entire music collection on my phone like some people though.
8GB is no longer enough for apps nowadays, since a lot of iOS games take around 1GB, some more, some less. You're okay with 16GB because you don't store games on your phone like others do. I would say 32GB should be the standard storage nowadays.
So it could be: i5s at 32/64/128GB ($199/$299/$399), i5 at 32/64GB only ($99/$199), and maybe the i5c just available at 16GB (free with subsidy). If Apple has a 128GB i5s and keeps the i5 as the mid-range model, you'll see the i5 get similar storage to the 5s, starting from 32GB, leaving the low-end i5c to take over 16GB.
"There's always iCloud if you want to store more than your phone can hold."
Sure, it will definitely help in my area, where the only achievement my local telcos have managed over the past 5 years is bragging about how good their 3G networks were, while in reality they sucked even worse than a 70-year-old hooker with false teeth!
The IET engineering magazine seems to think they pay $20 for 64GB. (Unless they are using better-quality parts this time, it has always been the same figure, with part numbers at the commercial price, i.e. in bulk but without any discount.) Apple's bill of materials is always exactly the same; they never put anything better in if it goes over the figure. I don't remember what that figure is.
(Some of the stuff in the magazine really annoys me. There was one piece in particular about Monster headphones and how they engineer them to last only 18 months.)
"The IET engineering magazine seems to think they pay $20 for 64GB (Unless there are using better quality stuff this time it has always been the same figure with part numbers at the commercial price."
I don't know where they're getting their figures from but the flash in iPhones is much more like an SSD than a cheap SD card. Those are the prices that you should be comparing. And you can't get a 64GB SSD for anywhere near $20.
""The IET engineering magazine seems to think they pay $20 for 64GB (Unless there are using better quality stuff this time it has always been the same figure with part numbers at the commercial price."
I don't know where they're getting their figures from but the flash in iPhones is much more like an SSD than a cheap SD card. Those are the prices that you should be comparing. And you can't get a 64GB SSD for anywhere near $20."
Well the figures I saw were Apple paying $40 for 64GB flash, not $20, and you can bet that the flash that goes into a 64GB SSD is around that price (probably lower) with retail prices now being $60-70.
Of course you can't *buy* a 64GB SSD for anywhere near that price, same as you can't buy a 64GB iPhone for anything like the BOM price. I'm talking about the hit Apple would take.
64GB phone (e.g., S4) plus 64GB microSD: 128GB phones are already available today, and probably far cheaper than whatever Apple will charge for the 128GB option. I imagine 128GB microSD cards will be available soon too.
"Since Apple is in charge of both hardware and OS design,"
A common claim for their phones and computers, but it doesn't hold up. The hardware is manufactured by companies like Intel and Samsung. True, Apple have a hand in it, but Samsung also have a hand in their OS design (since Android is open source, and they build their own OS around it). Apple may have more control over their OS, but Samsung have more control over their hardware, which they make themselves.
NOBODY manufactures all the components in their phones. Not Samsung, not Google, not Nokia. No one even comes close to being able to do so.
Apple is in CHARGE of both software and hardware design != Apple makes everything themselves. It doesn't matter if they don't manufacture all the hardware inside, they choose exactly what hardware is used and they design the SoC themselves to their own needs, even going so far as designing their own CPU core from scratch.
That's a difference between iOS and Android. iOS can be designed knowing exactly what components it will have to work with. Android has to be designed for a huge range, because there are some very low end devices missing a lot of basic features, and high end devices that add crazy features Google never even considered.
I doubt this, but it is worth mentioning.
Since Apple do control the hardware and OS, and have a significant hand in the design of the CPU itself, it isn't impossible for them to start exploring less conventional architectures. Nuking the filesystem and replacing it with a persistent object store managed by directly addressing its contents would be a great thing to do. That would require 64-bit addressing now. They did have a system that worked a bit like this once - it was called the Newton.
Like I said, I very much doubt it, but I continue to nurture the hope that with the huge ecosystem of hardware and software design now under the Apple banner, they will start to innovate past the current typical architectures.
This is just like the "...but why would you ever need more than 2gb of ram???" nonsense.
I hear this kind of thinking all the time and what makes it a thousand times worse is that the people saying it are too damned clever to be doing so. Slap yourselves before you lose credibility. Stop being the catholic church of the 21st century - and can't you see you are being USED?
Who has the most to lose from progress in the "mobile" sector? Traditional desktop and laptop manufacturers. You don't see a 64-bit HP ultra-pad because consumers would then no longer buy an ultra-book. Am I making sense? The reason Surface RT failed was that it was supposed to be rubbish, so that people would buy the "proper" Surface Pro. If MS didn't have any competitors this strategy could have worked, but they were simply too arrogant.
I'm hoping a tech firm with zero foothold in traditional desktops/laptops will take us forwards, and Apple could fit that criterion, or be willing to make the potential sacrifice in other areas because they too are arrogant, though hopefully in a manner that could prove beneficial to consumers!
Just my 2p
...compensate for the slower 64-bit access?
It takes the same number of clock cycles, but a CPU with a 64-bit data bus can drag a 64-bit number in all at once, rather than having to drag it in piecemeal. A CPU with 64-bit ALUs and FPUs can crunch 64-bit-accuracy numbers all at once, rather than having to bit-mash two 32-bit chunks of a 64-bit number.
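The "bit-mashing" a 32-bit ALU has to do can be sketched concretely. Here's a toy Python model of a 64-bit multiply built only from 32-bit partial products, the schoolbook way a 32-bit machine (without a widening 64-bit multiply result) would have to compose it:

```python
MASK32 = 0xFFFF_FFFF

def mul64_with_32bit_alu(a, b):
    """Multiply two 64-bit values using only 32x32-bit partial
    products, schoolbook style; result is taken mod 2**64."""
    a_lo, a_hi = a & MASK32, a >> 32
    b_lo, b_hi = b & MASK32, b >> 32
    # Three partial products contribute to the low 64 bits; the
    # a_hi * b_hi term only affects bits 64 and above, so it drops
    # out of a 64-bit result entirely.
    result = (a_lo * b_lo
              + ((a_lo * b_hi) << 32)
              + ((a_hi * b_lo) << 32))
    return result & (2**64 - 1)

a, b = 0x1234_5678_9ABC_DEF0, 0x0FED_CBA9_8765_4321
assert mul64_with_32bit_alu(a, b) == (a * b) & (2**64 - 1)
```

Three multiplies, two shifts, and two adds instead of a single native instruction; that difference is the speed-up the posters above are arguing about.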
Slower? Really, what?
Along with all the hardware updates and things added to the 5S, will the screen still stay the same as the 5's? With newer mobiles hitting the market with at least 5-inch screens, the iPhone 5 was not welcomed with its stretched screen. What Apple must realise is that with so many mobile users wanting a larger fondleslab, marketing the 'newer' iPhone 5S with a now-outdated 4-inch screen just won't cut it with its loyal user base any more.
Could this be the death knell for the iPhone? Has it run its course?
Come back Steve J. Apple needs you.
Not everyone who wants a 'higher end' phone also wants a massive screen.
If you want a phone the size of a tablet, great, go buy a Samsung Mega. Leave those of us who like a phone that (comfortably) fits in a trouser pocket with something less Brobdingnagian.
Higher-end handsets with reasonably sized (read: <5") screens are getting fewer and further between.
Also, a 'loyal user base' is surely more likely to buy whatever they release, regardless of screen size, because they are 'loyal'.
Major architecture change. Am I the only one who sees this as bad news? I.e., forced obsolescence of all existing iPhones, by making all future versions of existing apps, as well as new apps, incompatible with existing devices? And future iPads will also use the same architecture, causing the same thing to happen to all existing iPads?
I feel a major deja-vu coming on...
Will it still be too small? Have an outdated, basic, yet oddly frustrating-to-use UI? Lack any useful settings or configuration options? Lack the ability to easily see which hardware elements are enabled (and are therefore eating battery)?
I bet it will.
You can talk hardware all you like. Put iOS on it and I'll never be interested, unless Apple actually do some innovating for once.
Apple are often berated these days for a lack of thinking differently and not coming up with novel ideas. Additionally, they're repeatedly accused of pinching the ideas of others. However, when they genuinely do consider something a little whacky (i.e. a 64-bit processor in a mobile phone), everyone pipes up questioning the reasons why, claiming there will be little (or no) benefit and that it's overall a bad idea. It is worth remembering that Apple first successfully mainstreamed the idea of a touchscreen-only mobile phone way back when; given the plenty of "me toos" since, we can be pretty certain they got something right. In two or three years' time it's not inconceivable that all new mobile devices will be equipped with a 64-bit CPU, with an OS and apps to support that. It's what's known as evolution. Something I see Apple are pretty good at, and something that many companies are not, preferring just to sit back and be a "me too".
Incidentally, whilst Apple have not increased screen size beyond 4" as it affects usability and convenience to carry, there are plenty of Android devices about for those of you who do want a phablet. Personally, I have an iPhone and an iPad: two well-suited devices that complement one another, rather than a single compromised device that fits neither job particularly well. Of course we mustn't forget that it's the cloud that's the glue keeping stuff in sync, and it's this glue that we need to feel confident in with regard to robust security. Worryingly, I think there's some way to go here.
Yet another substandard interim Apple offering by the looks of it (how little can we get away with giving the fools and still have them pay top dollar? I know, a 128GB version ought to do it. There's no way they have cottoned on to the 800% markup on each flash module yet).
Little or no innovation mentioned so far, not that it was expected on the 'halfway house' release (apart from, of course, the bulging sapphire button rumour with ageing fingerprint-recognition tech within... swoon!!). Oh, and has anyone mentioned the predicted battery life of a 64-bit OS running on this handset, or even the possibility of running a 64-bit OS on previous handsets without the fragmentation that Apple consistently points out in other OSes?
It's so satisfying to see the likes of Apple panicking. You never know, we may end up seeing something new out of them eventually; it's either that or rehash something else not previously nice, shiny and rounded... Or even something that was, I suppose.
Same old shite... gold case... enjoy!
I have an iPhone 5, the only reason I got it is because I dropped my 4S and broke the screen.
The iPhone 5 has a faster processor and more RAM; I just have no idea what that extra stuff is accomplishing.
I have seen no real difference in real life between the 4S and the 5, though. Has anyone?
This new iPhone 5s... what's the point of it?
I'm betting this is so that when Apple finally do release a phone with more than 4GB, they're not stuck developing two branches of the same OS for legacy devices. Lately Apple has been maintaining the two previous hardware versions as lower-cost alternatives, meaning they're providing the new version of iOS to the previous four models. Assuming they'd be willing to cut this down to three, or else put up with a 32-bit legacy branch for one revision, and that they don't make a disproportionate jump in iPad RAM, then we might indeed see 64 bits not brought in until iOS 8 or even 9.

One of Apple's good points is that, despite the limitations they've put in iOS, they at least make sure the last three or so iterations of hardware can run the new version of iOS, albeit with reduced functionality where applicable. To achieve that, they really need to transition the hardware ahead of the software, or else face legacy issues.

Of course, as others have mentioned, it'll also help with software development and testing as far as developers are concerned. If nothing else, Apple could provide a preview version of 64-bit iOS at the appropriate juncture to make sure as many apps as possible are ready to go when 64-bit iOS drops. As we all know, a device without developer support is worthless, and Apple won't want a lag of several months while developers update their apps for 64-bit; many of them take long enough to update for new hardware and software as it is. And Apple certainly doesn't want to be shipping test iPhone units in advance of a public announcement, because someone would be bound to leak it.
Biting the hand that feeds IT © 1998–2019