Re: Wow so he knew about all the chip roadmaps..... (@Snake)
A search for 'genius salesman' (no quotes) returns 'Apple - Jobs at Apple - Retail (us)' as the first hit. So maybe they are looking to get a more direct Steve replacement after all?
With about one sixth as many users as 'does it really qualify for the list?' Safari, Opera is not a major browser (sources: StatCounter, W3Counter, NetApplications). Which is a shame.
Some technical autopsies wouldn't go unappreciated either.
I've now carried out a little research; there was a ROM call to write a single byte to any other RAM page, which is reported to take about 25µs. So slow.
The Z80 has a 17-bit address range if you include port IO; maybe the solution was as simple as being able to IN and OUT to at least some portion of the RAM that wasn't paged in? Then you could set up a landing area for paging and pass packets back and forth?
I'm just guessing, but if the screen display took 32kB of 48kB, maybe the calculation was that there was no point shipping CP/M until they had enough space to give the more normal close-to-64kB of memory to CP/M + application? So it had to wait until the video display could be paged out; the other CP/M micros were using maybe 1kB for a text-mode display, or just shipping all that stuff out to a terminal.
So what we need is to get tough on use of 'vulnerable' and a common sense crackdown on 'tackle'?
While the BlackBerry man obviously has his own commercial interests, I think he's got an indirect point in that the iPhone is no longer particularly exciting. Most of its one-time advantages are now commodity features; an expressive touch interface is the norm and 95% of most people's use is probably texting, using the browser or using apps for services that are also accessible in the browser. They can do that on more or less any handset out there.
Even if you take Apple's case at its strongest — that the iPhone is the best phone in its class — it's now just one in a pretty big class.
I feel like what Samsung is saying is 'this is the flagship phone; we're going to throw every idea we have at it and see what sticks' — some people are going to find some of the new features useful, nobody is going to be at a disadvantage because of them and Samsung can gauge reactions to figure out what to port to next year's midrange handsets.
I guess it's a more market-driven approach to figuring out where to go next, rather than dictating simplicity from the centre.
I guess it's another way of dealing with that tricky œ? Wiktionary has an entry 'manouvre' as 'a common misspelling' so presumably it's an easy one to get wrong.
The difference between the C64 and Spectrum graphics is decent hardwired graphics semantics with a weak CPU versus a cheap-to-manipulate frame buffer with limited colour resolution and a speedy CPU. Trying to reduce it to one being definitively better than the other isn't really constructive. Both have been pushed much further now by enthusiasts without budget limits but for every Creatures there's a Knight Lore, for every Revs there's a Hard Drivin'.
Given that in 1984 (when I could most readily find prices via Google) a C64 cost £195.95 and a Spectrum £129, I don't think it's fair to make the blanket statement that Sir Clive's lot were optimising incorrectly.
If I have a car and one day it breaks down, I'm likely to accept the inconvenience of two or three buses. If there were no such thing as cars then I would have put more effort into arranging my life so that I didn't have to catch two or three buses to get to work.
It's always possible that Google paid for a simple perpetual licence, especially if the patents the MPEG LA found didn't add up to a particularly strong collection. VP8 was bought and open sourced so it doesn't seem too much of a stretch to think Google might just say 'okay, so it'll have cost us slightly more than we thought' .
Alternatively, maybe Google's counter-attack patent collection twisted the MPEG LA's arm? They've got a commercial interest in keeping their H.264 patent pool customers happy.
Bringing VP8 to market years after H.264, without adequate hardware acceleration and with uncertain legal liabilities for implementors, doomed it. Google's only hope is to trump H.265 before it sees wide adoption, though even then the chances are slim: H.265 is almost certain to be used for 4k and 8k transmission standards, so silicon for it will be easy to find.
There's a Nyquist element to it; the lower the resolution, the lower the frequency of signal an image can contain. In layman's terms, lower density = less fine detail. You can antialias so that the pixels aren't obvious but there's a physical limit to the amount of information you can present. When you step up to a display that includes all that extra information you probably still can't see the individual pixels, but you can tell that edges are sharper and more lifelike, on text, on images and everywhere else, and you can then perceive a certain lack of sharp focus when you go back to the old display.
So it's really nothing to do with whether you can see the individual pixels or not, it's about how much information (in the digital signal processing sense) can be packed into an area and therefore how close an approximation a screen can be to actual printed text.
The compiler can be found at https://github.com/mortdeus/legacy-cc ; the oddest bit to my eyes is the apparent need explicitly to declare storage, e.g. as at the bottom of https://github.com/mortdeus/legacy-cc/blob/master/last1120c/c00.c . My experience goes only a little back beyond C89, so it's possible I'm completely misreading what's going on, but it looks like the equivalent of an assembler's defb or similar, with extern being used in functions to import globals (so maybe scope wasn't well established yet?). Can anyone enlighten me?
It has a BSD layer and an optional X server; it's very comfortable for UNIX users. Throw in VMware or Parallels and it's the only type of computer that allows you to run X, Microsoft Windows and OS X applications together on the same desktop.
So, yes, the perfect platform but for the cost, the limited hardware range and, for a lot of people, the company the money goes to.
In terms of points-per-inch* Apple currently supports exactly two sizes — 163 and 132. The iPad Mini is the old iPad resolution at the iPod/iPhone density. So I guess it'd be within the established parameters for a larger iPhone to go the other way; take the current 163 and turn it into a 132 for an almost 25% increase in size, going from a 4" screen to a 5".
There's an argument that there'd be no further fragmentation, as Apple already mostly recommends the same point size of widgets for both devices.
* given that Apple maintain a distinction between points and pixels, so that retina and non-retina code looks identical.
Not to take anything from the joy of the squirming but the assertion that Apple's software is totally unprepared for fragmentation is inaccurate. iOS 6 introduced auto layout — which amounts to setting a bunch of arbitrary constraints on view size and placement relative to anything else you like — thereby fully preparing iOS for fragmentation.
You're yawning inappropriately. I think what you meant to yawn about was another year, another stream of rumours months in advance about a bunch of potential incremental improvements, many of which probably aren't accurate anyway.
You're talking voodoo [economics].
If your phone is locked then you can't join another network, though plenty exist. If your console is locked then you can't buy games from another source, of which none exist. It's cause and effect but a real difference, at least when it comes to temporary purposive exceptions to existing laws.
Re: the PC model; I don't think the 3DO was the same thing at all. It was custom developed hardware with a single stationary target. So lack of subsidies was a real issue — compare and contrast with other consoles that typically cost quite a bit less than the hardware cost at launch, and with the PC that slowly gained steam as a gaming device over more than a decade.
I think that the problem with the PC now is simply that you can buy a very expensive one so some people do buy a very expensive one so that moves the centre of gravity for games. PC gaming is likely to remain more expensive than console gaming because nobody can put a foot down and say 'there is no choice for anybody; all must stick with the older technology in order to reduce cost for new owners and to keep things simple'. For people that game on PCs, that's probably a good thing as they retain the freedom to spend more on a better box if they want to.
The difference is as you say — with mobile phones they're usually subsidised but you explicitly pay the subsidy back over the course of your contract. There's also often no technical barrier to using them on another network.
Conversely, with consoles you get a subsidy and you can't point at the exact date and payment that paid off the subsidy. There are also no other commercial entities that could offer you an alternative service.
I guess the 3DO is an instructive example of why things work like that: a console that wasn't subsidised, with a specification anyone could implement and no licensing costs for games; net result: a $700 console that nobody bought. The PC model just didn't work. Consumers had a choice and preferred a subsidised console with more expensive games, embracing the PlayStation instead, which was also from a newcomer, so established businesses versus upstarts wasn't really a factor.
Based on the spec at https://wiki.ubuntu.com/MirSpec the objective is to have Mir sitting underneath Qt. So at least KDE should be in.
The implementation will need to be very good.
The central contradiction as I imagine it is that normally you move your eyes to look at something. If moving your eyes also moves the thing then you're getting very unnatural feedback.
If they solve that by having the scrolling occur only when you look at certain areas of the screen — e.g. look at the bottom to scroll down, return your eyes to the main portion to stop — then you have to judge when to stop based on peripheral vision.
Supposing the people at Samsung have solved these problems, they deserve to be number one.
I can explain the theory. By reducing what the connector does to 'a serial bidirectional stream of data' you turn all possible external connectors into mere software extensions. Whatever you want to output must already be data within the device, so you export that data over the connector and let the cable worry about reformatting it.
In this case that appears likely to have put some sort of video codec into the loop between device and cable and leaves the cable having to decompress video and run a framebuffer.
So what Apple has done, in contrast to the single-port Android phones, is make no concessions whatsoever to the two or three cables it's pretty obvious most people are going to use in real life right now. I guess the calculation was that the chips that have to go into the cables are going to be very cheap very soon, and that they need a connector they can stick with for ten or more years in order to ensure accessory lock-in. There's probably also an argument that they've overreacted to the old connector having long-disused pins for Firewire, still having what will very soon be obsolete pins for analogue video, etc, etc.
Are you sure you're not being a bit vague about the difference between civil and criminal law? And possibly about the difference between disclosure (ie, when a party becomes aware that evidence exists) and inspection (ie, when you hand over evidence that is already known to exist)?
Chad H and others above are correct. The point of the law is that people can't go to court and say 'I think The Ford Motor Company has been employing assassins to come round my house at night and move my bins; I demand they disclose appropriate paperwork on all contracts paid for in June so that I can check'. Such a system would quickly bring commerce to a halt as a million crackpots come out of the woodwork and insist on expansive document disclosure from every company they dislike.
Apple may be acting disingenuously; it's difficult to know without any idea of the cost of documentary disclosure (eg, maybe the engineers emailed each other about it on and off for months; who's going to trawl through and find that stuff, and redact anything in the same thread that's off-topic but commercially sensitive?). But the principle is solid.
I guess it's a bit like the idea that if the police can't prove you committed a crime then you shouldn't go to jail (although softer, because that's criminal and this is civil); the point is to prevent harassment by people (or authorities) on a fishing trip.
Surely it depends how that SD card slot is wired up internally? It's normal for them to reside on the USB bus (just like the keyboard and trackpad, usually) so if a system is USB 2.0 only then the SD card slot is likely limited to that bandwidth.
In this case Sim City 2000 remains on sale, via gog.com, for a grand total of $6. So regardless of the illegality of abandonware, ripping this game off has no moral justification either.
More likely Samsung see the value in the idea and therefore think it's a useful* thing to add to their handsets. Because all it does is reproduce the well-established historical look of tickets on a digital screen they don't see any IP hurdle to including it.
I don't like the copying debate because it sort of endorses the idea that there's necessarily always something wrong in copying; that if any feature of your product has any close antecedent then you've no right to claim creativity. In this case it's clear that the Apple feature has inspired the Samsung one, but I don't see that there's anything wrong with that, especially if Samsung end up doing it better. On a technical level, Apple's implementation is very lacking as every app that wants to put something into Passbook has to push it there, which in most apps means making your booking then digging through submenus to find the 'put into Passbook' option. If I've booked something in a compatible app, the pass should just be there in Passbook; I shouldn't have to think about whether it's pushed or pulled and I definitely shouldn't have to do anything manually.
* Freudian typo: sueful.
So now I can do it for him.
Firefox already has an established developer community so I don't think there's too much risk of early abandonment. Mozilla has shown no desire to sell parts of their software so I assume the licensing will be exactly like the browser's: install the free package and there are no trade mark issues. That's unlike Android, where you can't provide some components or use the name without paying a fee.
I therefore think the Firefox OS could make a play for the very low end; the area where unlicensed Android currently plays but with the advantage that if nothing underhand is going on then there's an opportunity for more legitimate companies to supply a more visible push.
It's not much of a chance but then I wouldn't have given Mozilla much of a chance in the early 2000s so I think it'd be foolish to write the thing off.
That wasn't a vote, it was a write-in campaign, and I think it was pretty harmless considering the first USS Enterprise dates from 1775 and took part in the War of Independence.
AmigaOS didn't even get a standard widget set until 1990 and never had protected memory. One therefore has to question the designation of 'best in class' when e.g. OS/2 supplied both in 1988.
Android, iOS and other handset OSes are designed so that the user never explicitly closes programs. That's why it isn't particularly intuitive on either of them — whether it's iOS's long press or Android's digging through the system settings. Thinking that you need to close programs is akin to a superstitious belief.
It is sad that people didn't want WebOS but it was pushed very heavily, with TV commercials featuring U2 and deliberate public spats with Apple (ending up in the USB Implementers Forum if memory serves) to get the bloggers on side. People simply didn't want it, because 'proper multitasking' actually isn't a feature they care about. The availability of Angry Birds, Temple Run, etc, is more interesting. Consumers are more interested in the total sum of what they can do with a device than with the technical way in which it is done.
I think you're being seriously paranoid — proprietary toolkits are no more "designed to lock you into a particular platform" than cars are designed to pollute the air or houses are designed to reduce the amount of public space.
I think it's the combination of having $137bn in reserves and shares about a third down from their 52-week high that is probably bringing people out of the woodwork. They probably feel either that they're owed a payout for loyalty or that it's worth chancing it anyway.
You mean in contrast to his love of all West Coasters, like the employees of Microsoft and Google?
SimCity 3000 wouldn't have been so bad but for the forced parades — any time you're doing reasonably well the citizens decide to hold a parade in your honour. In a move of fantastic wisdom the user can't skip the parades; you're forced to sit around at the slowest time scale until they stop. That just kills the whole experience as suddenly you're incentivised not to do too well.
The BBC version even made it onto the Electron where you have to jump through even more hoops than usual not to have the display eat about a third of your available space and the CPU ends up running more slowly due to memory contention. That's where I first played it. Both versions use the same jarring four-colour palette though, if memory serves.
The boring historical version is that PostScript was the standard for high end printers so several vendors built desktop platforms around PostScript as the description language for drawing on-screen rather than rolling their own versions of QuickDraw or GDI or whatever — Sun was one (with NeWS), Next Computer was another. PostScript is a full programming language* and when adapting NextStep into OS X Apple looked at the licensing fees for the implementation NextStep had used and decided instead to keep the same primitive drawing semantics but do away with the language.
Separately, over at Adobe they designed PDF as a record of the output of a PostScript program (so, to spend storage in order to save on complexity, at least initially). So PDF also inherits the same primitive drawing semantics as PostScript.
That made it easy for Apple to add PDF rendering and print to PDF to its operating system and all applications just work. There's no translation layer whatsoever, the drawing operations are just serialised and stored or deserialised and performed. As iOS is a close relative of OS X, with exactly the same graphics operations and frameworks, the same stuff just naturally carried over.
The same is not true of Windows because even once they were looking to do something beyond the GDI Microsoft insisted on inventing its own document format in XPS and tied WPF, its modern drawing framework, around that.
(*) trivia: the original LaserWriter — a key component in the early desktop publishing revolution — had a CPU 50% faster than the Mac it was meant to be attached to because it had to do all that high resolution rasterising.
I can believe they still sell iPod Shuffles but that's about it. It's cheap, it's barely bigger than a button and there are no moving parts or screen to scratch, so you can take it jogging or to the gym without it being much of a disaster if you drop it or lose it. It's more expensive than the competition but that doesn't negate the market segment it's aimed at. And the iPod Touch is good for the kids, letting them have all the latest apps without a mobile contract.
The Nano and the Classic don't seem to have much purpose though.
Microsoft has raised the price of Office for the Mac to be the same as Office for Windows. Mac users aren't paying a premium.
That was my impression too; I was writing in response to Silverburn though I realise I was slightly ambiguous so: it was a MacBook Pro (ie, a 'professional' model) but had no trial versions of anything preinstalled. Not Office, not iWork, not anything.
I tend to prefer Pages over Word because it fits so much better into the OS and hence so much better into normal workflows. Last time I used Word it had not just its own keybindings as referenced by Quxy but its own dictionaries and its own text rendering — which was very heavily hinted and not pair kerned, like Windows XP used to be, so stuck out like a sore thumb. Before Pages I was using IBM Lotus Symphony, which was OpenOffice under a different UI and is now discontinued.
There's nothing missing or wrong with iWork that 90% of users would ever spot. So in practical terms it's 90% as good.
Weirdly it wasn't preinstalled on the Mac I bought recently; I'm not sure if that's because it was refurbished (though iPhoto, GarageBand, etc, were there).
Either the iPhone was revolutionary or augmented reality glasses aren't as e.g. Vuzix will be on the market earlier and practical augmented reality itself is at least a decade old. Similarly either the Mac was revolutionary or self-driving cars aren't as e.g. Mercedes-Benz demonstrated one in the 1980s and even had one drive the normal autobahn from Bavaria to Copenhagen and back in the mid-90s.
I don't see Google attempting to imitate Apple in any sense beyond being in some of the same markets. I consider either both to be revolutionary and transformative or neither. You don't have to pick just one.
That's an advantage; I think the greater advantage if you asked most non-technical people is the user interface. To use an Oyster card or pay by an NFC-enabled debit card I just touch the thing against the sensor. In one fell swoop that identifies that I'm the person making the transaction and that I explicitly wish to proceed.
With anything that deliberately cuts out that need to put the one thing next to the other, you have to start layering on apps and menus and so on. Then it stops being something that 90% of consumers would use themselves, let alone something they're happy about when the rush hour becomes even more congested as people stand around launching their applications to get into the tube.
And that's quite apart from the fact that a properly-implemented NFC solution could work regardless of whether the phone is charged.
You don't think "Actually I'm really good with numbers so I'm forced to assume that you're lying to me. Put Steve on the line." is the normal way to go?
£1800 is £1500 before VAT. Right now £1500 is US$2,330.25. The premium for buying in the UK is therefore only about 6%. It's not really worth applauding, but these sorts of stories usually seem to attract misinformation.
Is it worth getting the machine at all? The extra pixels are actually fantastically useful because if you don't want to run at a pretend 1440x900 you can jack up the desktop resolution to a pretend 1920x1200 and due to the pixel density everything still looks perfectly sharp*. So you can finally fit a desktop worth of stuff onto a laptop screen.
It'd be nice if other manufacturers would follow that sort of lead but I guess we're going to have to wait for Microsoft to ditch the desktop completely (Boot Camp shows the desktop scaling very poorly but Metro-as-was scaling flawlessly) or for Google to make a Chrome move before we get anything usable.
(*) internally that's implemented as rendering the desktop at 3840x2400 and sampling down so the scaler is throwing away information rather than trying to guess it — always a much better position to be in.