57 posts • joined 11 Feb 2010
My opinion based on the Wrist PDA, MetaWatch, and Pebble
The Wrist PDA did some things right, IMO, that the Pebble still doesn't. It was a far more powerful device (as far as software goes - the Pebble probably has more CPU power) that made it a lot easier to avoid a phone pull, and a modern device could do what it did with live sync against the phone's databases. However, it was HEAVY, THICK, and had absolutely atrocious battery life (if you made it past a day without recharging, you were doing really well - unless you did the iPod Shuffle battery mod, which got you 2-4 days). Oh, and they broke if you looked at them funny.
The MetaWatch is really a massively flawed device. Battery life is mediocre, and they went for an extremely low power CPU without much local storage at all. This means that they do almost everything on the phone, rather than on the watch CPU, and the watch acts as a terminal. So, the watch ends up leaning on the Bluetooth interface heavily to keep things going, burning through tons of battery. And, because it's literally getting screen updates over Bluetooth, button presses end up quite laggy, especially if you haven't done anything with it in a while.
The Pebble is an interesting balance. There's some UI concerns caused by the lack of a touchscreen (the buttons are good, but I also want touch/stylus for some applications), but the screen is pretty nice, it has enough local processing power to run software directly on it (meaning that the Bluetooth link is used much less, saving both watch and phone battery), and they seem to have a pretty good idea of what they're doing. Right now, with third-party apps running on the watch, and the latest update that allows Pebble<->Android app communication, I'd say that it blows the MetaWatch out of the water as far as capability.
Oh, and both the MetaWatch and Pebble will sync time once they connect. And I've found the Pebble's low battery warning to be pretty useful on recent firmware - if I get the warning, that means that when I get home, I need to put it on the charger, but I don't need to worry about it before then. The only time I've run out with recent (1.9+) firmware, I forgot to do that and went out a second day with it needing a charge, and it still lasted most of the day.
I could actually believe that they're not losing developers...
....when you only have five developers in the first place, if none of them leave, you're not losing developers.
RIM would have to attract developers before they could lose any.
First, IBM had a 2.88 MB (2880 KiB) floppy standard that many PS/2s used. It didn't take off.
Second, there was always the LS120 and the Caleb 144 MB drive, if you needed a floppy disk form factor. The LS120 was a bit more popular, but not much.
Hey now, some of us prefer the clitmouse.
Re: Going in the wrong direction.
It also lets them get away with stupid scaling tricks (because they're exceeding the limits of 20/20 human vision at reasonable viewing distances) for making things bigger or smaller.
Out of the box, it's shipping in a virtual 1440x900 mode, which is actually rendering at 2880x1800.
In fact, it's only exposing doubled modes out of the box. It scales 1024x640 and 1280x800 doubled modes up to 2880x1800, and scales 1680x1050 and 1920x1200 doubled modes down to 2880x1800 (those will likely hit the GPU hard, especially the 1920x1200 mode, because the GPU is working as hard as if it were driving a 3840x2400 monitor, and then scaling the output down).
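The doubled-mode arithmetic is easy to check; here's a quick sketch (my own numbers, using the modes listed above):

```python
# Back-of-envelope arithmetic: how many pixels the GPU renders for each
# doubled mode, compared to the native 2880x1800 panel.
PANEL_W, PANEL_H = 2880, 1800

def rendered_pixels(logical_w, logical_h, scale=2):
    """Pixels drawn before any scaling to the panel resolution."""
    return (logical_w * scale) * (logical_h * scale)

panel_px = PANEL_W * PANEL_H  # 5,184,000 native pixels

for w, h in [(1024, 640), (1280, 800), (1440, 900), (1680, 1050), (1920, 1200)]:
    px = rendered_pixels(w, h)
    print(f"{w}x{h} doubled: renders {px:,} px ({px / panel_px:.2f}x the panel)")
```

The 1920x1200 doubled mode comes out at 9,216,000 rendered pixels - 1.78x the panel - which is why it hits the GPU like a 3840x2400 monitor would.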
In my case, it's the screen.
My current machine is a ThinkPad T61p 14.1" 4:3 motherboard, in a T60 15.0" 4:3 chassis, with a IDTech IAQX10N 2048x1536 15.0" IPS panel.
I've got it maxed out at a 2.6 GHz Core 2 Duo - slower than today's low-end CPUs, and a high-end quad core will blow it out of the water - and 8 GiB RAM. It has a Quadro FX 570M, which is slower than Sandy Bridge integrated graphics - so in 2012, it's a joke.
This is the most powerful machine that can run this screen, due to chassis changes, too. But, I refuse to downgrade on the screen - it's so much easier to work with stuff when I can have everything on screen at once (I have good eyesight).
Finally, someone came out with something with higher pixel count, reasonable (for me) density, and IPS. It happened to be Apple, and the bastards seriously hurt expandability of the machine, but I'll still be getting one because it meets my needs better than my five year old frankensteined ThinkPad.
Granted, Apple did hide the options to get the desktop area of a 2880x1800 normal Mac, but I'm guessing it won't be long before someone finds the hidden switch, and if nobody does by the time I get one, I'll go looking for it myself, if I decide to run OS X as my main OS. (A Japanese site showed that it can be done in Windows easily (just set the display to 2880x1800), so I'm good to go on Windows.)
Re: Professional video editing... on a laptop?
Cheating a bit, but a used IBM T221 gets close in density, at 204 ppi (resolution for monitors is a linear measurement, not a measurement of pixel area), and it beats the crap out of the MBPR in pixel count (3840x2400, instead of 2880x1800). And you can get one for about a third as much as an entry MBPR, so around $650-800.
If you're just going after pixel count, there are plenty of 2560x1440 and 2560x1600 monitors, at 27" and 30" respectively, that get close to the MBPR, and they're between $300 and $1500 (the $300 ones are reject panels, though, as I understand it, so you wouldn't want to use them for content creation).
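The linear-measurement point is just Pythagoras on the pixel grid; a quick sketch (panel diagonals assumed: 22.2" for the T221, 15.4" for the Retina MBP):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal pixel count over diagonal inches (linear, not area)."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(f"T221: {ppi(3840, 2400, 22.2):.1f} ppi")
print(f"Retina MBP: {ppi(2880, 1800, 15.4):.1f} ppi")
```

That gives roughly 204 ppi for the T221 and a bit over 220 ppi for the MBPR - close, despite the T221 having over twice as many pixels.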
Something odd about the subpixel layout...
Even Sharp's doc is vague on whether that's actually 8.3 million white pixels that are filtered through a lower resolution RGB (of some sort) filter (which would make it, at best, 1280x2160 in reality with RGB stripe, or 1920x1080ish with PenTile RGBG), or something different.
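For what it's worth, the numbers check out as back-of-envelope arithmetic (my own grouping assumptions, not Sharp's spec):

```python
# Take "8.3 million" single-colour elements as a 3840x2160 grid, then regroup
# them into full-colour pixels under each subpixel layout guess.
grid_w, grid_h = 3840, 2160
total = grid_w * grid_h
print(f"{total:,} elements")                 # 8,294,400 - i.e. "8.3 million"

# RGB stripe: 3 side-by-side subpixels per full-colour pixel.
rgb_stripe = (grid_w // 3, grid_h)           # (1280, 2160)

# PenTile RGBG: 4 elements (R, G, B, G) shared per reconstructed pixel.
pentile = (grid_w // 2, grid_h // 2)         # (1920, 1080), "ish"

print("RGB stripe:", rgb_stripe)
print("PenTile RGBG-ish:", pentile)
```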
Re: And just how many laptop computer models can you buy without a forcibly bundled Windows?
Because the manufacturers are paid for the bundled stuff, is my understanding.
I ended up building the laptop I wanted...
...because I wanted something with dual cores, support for 8 GiB RAM, and a 2048x1536 IPS display, and I couldn't buy it.
So, I have a ThinkPad T61p motherboard in a 15" 4:3 T60 chassis, with a display that was used in a certain medical configuration of the ThinkPad R50p.
That solves points 1, some of 3 (and I can dock if I need better port spacing), 5, and 6 (TrackPoint).
Unfortunately, it's not what you want in point 8. ;)
Re: Battery life
The ones with touchscreen color LCDs that are claiming that long battery life are turning the LCD off, so you have to turn it on to check the time.
Re: Metal Bracelet
Then buy a 22 mm metal bracelet, and install it. Problem solved. :)
Re: Not E-ink, but...
Actually, the Pebble's display isn't e-ink, either, it's the second generation of the display that the Meta Watch (which I think is the TI watch you're referring to - either that, or the eZ430 Chronos) uses.
The problems with the Meta Watch are two-fold... the stock firmware is set up as a glorified dumb terminal, and the toolchain that actually works costs $2000 once the trial is up.
(And I'm wearing a Meta Watch right now, FWIW.)
Re: Missed opportunities for watch co's
IIRC, Seiko has done a couple e-ink watches.
Re: Why do they still bother with bitmaps
Worth reading: http://www.pushing-pixels.org/2011/11/04/about-those-vector-icons.html
Cliff's notes: At small pixel sizes, pixel-perfect detail is needed. Vector graphics suck at pixel-perfect detail. Also, at small physical (on-screen) sizes, the detail needs to be different to look good (although with sufficiently high density displays, you could implement that with different vector graphics for each size).
Re: sub editing can be fun
Acorn and Microsoft actually both have prior art on the technique that Apple uses to get higher resolution without changing the physical size of screen elements.
Acorn all the way back to the BBC Micro (for most of it) and RISC OS (for the higher resolution assets), Microsoft in Windows Mobile 5 or so (and they even called it the same thing as Apple, HiDPI).
Sometimes retard is considered offensive in the US, too
There are some circles here in the US where retard is considered a massively offensive word, too.
What's really fun is, I've encountered some of those people on a technically oriented diesel car forum.
Talking about injection timing can be dangerous if you're not careful...
Oh, and I voted for commentard.
Looks like he hit his target just fine, so I'd say he had perfectly good gun control.
I think it was way overreacting, though, and it'll be completely ineffective unless he also homeschools her and imprisons her in the house - because this could easily be the catalyst for running away, and now any adult friends that she may have know to hide her from her parents.
Well, it does give money to someone who has bought DVDs in the past, and may be using that money to buy more DVDs.
Also, by purchasing it, you're increasing demand on the used market, raising its prices, which means that the new market can charge more as well.
...my 15" 2048x1536 LCD isn't impressed by your 22" 2048x1536 CRTs.
Nor is my 22.2" 3840x2400 LCD.
Well, that's not the parent's argument, that it's bad value for money or anything like that.
The parent's argument is that Raspberry Pi is a Broadcom shell company, and Broadcom actually wants Chinese cloners, because they have to buy the Broadcom chip to clone it.
The thing is, unless these things are being sold at a loss, I'm not seeing the problem even if all of that is true. My understanding is that everything important has either been documented, is readable from source code, or lives in a binary blob that's available to ALL OSes (IIRC, the GPU's blob runs on the GPU itself, and the GPU can take OpenGL instructions natively with that blob, so the "driver" is an open source stub that feeds instructions to the GPU side of the device).
The display can be a junk (donated) CRT TV, though, so not exactly expensive.
At least in the US, free or very cheap CRT TVs are a dime a dozen, because of the digital TV mandate here.
Besides, it has composite out for the CRTs, too.
And that ethernet interface is an optional extra (along with 128 MiB of the RAM).
I'd stick with an Arduino...
...at least to my knowledge, the GPIOs aren't buffered on the RasPi, and there's no analog I/O.
And, if you blow up an I/O on the RasPi (which, my understanding is that's easier), you've gotta replace the whole board. Do it on an Arduino, and you need to replace a $4 chip.
But, a RasPi would be a good supplement to an Arduino for additional processing power...
This one is more US-specific, but...
...take a look at "Money as Debt".
Essentially, it discusses how the US economy has a nasty feedback loop - you can make almost infinite money by loaning it, and to pay off the loans, you have to make more money because there isn't actually more inherent value in the economy. (And, any system allowing interest on loans would suffer from that, all wealth migrating to the banksters, or both.)
So, to make more money, you end up trying to make more value... but that means you have to make more. Which means more people, more product, more consumption.
Obviously, this is an unsustainable endless cycle, and eventually it's bound to collapse.
Basically, the current US (and world) economy is a ponzi scheme, and the entire world is wrapped up in it. Such a ponzi scheme would work if you had unlimited land and natural resources, but they're finite... so it's collapsing now, and everyone gets screwed except for those at the top.
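A toy sketch of that feedback loop (purely illustrative, with made-up numbers of my own - not from any economics source): if every unit of money enters circulation as loan principal, the interest owed on top can only come from new loans.

```python
def simulate(years, principal_per_year=100.0, rate=0.05):
    """Money created vs. money owed, if all money is created by lending."""
    money_in_circulation = 0.0
    total_owed = 0.0
    for _ in range(years):
        money_in_circulation += principal_per_year     # principal is created
        total_owed += principal_per_year * (1 + rate)  # principal + interest is owed
    return money_in_circulation, total_owed

money, owed = simulate(30)
# The shortfall can only be covered by borrowing more - the feedback loop.
print(f"created: {money:.0f}, owed: {owed:.0f}, shortfall: {owed - money:.0f}")
```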
The UI is webOS's best asset
So working on the UI makes perfect sense.
And, HP's said almost all along that they're continuing OS updates and such for the devices in the field.
Because they have to draw full resolution
There are legitimate text rendering benefits to a 2048x1536 tablet.
And, if a game developer wants a game to run fast on a 2048x1536 tablet... there's always running it at 1024x768.
Where do I begin?
So, for starters, if they buy the whole webOS GBU from HP, they get Palm's patent portfolio - one that has a few in there that Apple is downright terrified of. Handy, when Apple is pummeling them in the courts.
Second, the ecosystem will take a few years to build. But, pump money into it for a few years, and you get the ecosystem. Or, do regular fire sales, like HP just did...
Third, webOS is no longer dying - it has a strong userbase now. Admittedly, they already have Bada, but who uses Bada? And, Nokia is now effectively the preferred WP7 partner, so it's in a similar situation to Android.
And, finally, have you actually used webOS? It's far LESS like iOS in a lot of ways.
Except, some of the worst open source software I've dealt with...
...has been developed under a bazaar model.
You can have open source under a cathedral model, and in fact, Linux uses such a model. Linus gets the ultimate say over what goes into Linux and what doesn't, ergo, it's cathedral.
The best software tends to be managed pretty strongly by one person or one committee to avoid bloat, and push things in a certain direction, rather than a bunch of devs contributing code that they think would be cool in the program.
32 vs. 64-bit
First off, if they want Win8 to run on a lot of existing Atom devices, they need to support 32-bit, because Intel disabled 64-bit support on the first-gen mobile Atoms.
Second, there's no 64-bit ARM architecture, just the 32/26-bit original one, the 32-bit current one, and the 32/40-bit one that's coming out.
It looks to me like it's more drastic than that.
It looks like they're actually forking the desktop version of Windows off, just keeping the NT kernel, breaking compatibility with everything, and betting the farm on tablets with the UI. Then, on x86 platforms, sticking Virtual PC and a copy of Win7 in there for software that doesn't run on the new UI.
So, this migration is more like Mac OS 9 to Mac OS X, than, say, Mac OS X 10.4 PPC to Mac OS X 10.4 x86 - a complete reboot of the platform, rather than just adding a new architecture.
Stacks of drivers vs. different HALs
It's a combination of the truth and FUD.
On every modern x86 system, things are in consistent places, because they're all following the standard set by the IBM PC AT. ARM systems have no such rules.
However, Windows NT has a way to handle this, the HAL.
IIRC, Windows NT only has one HAL for an x86 PC (and another for an x86-64 PC). That's enough to get far enough along that everything else is a driver.
However, for a DEC Alpha system, it has one HAL for every motherboard. The installer will start in ARC/AlphaBIOS, where the firmware does hardware abstraction, then it'll try to autodetect your motherboard, or it'll ask for a floppy with your HAL if it can't. (This was back when NT4 asked for a floppy for everything.)
There's no reason that Microsoft can't do something similar for ARM - every chipset gets a unique HAL.
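The selection step could look something like this (a hypothetical sketch - the chipset names and HAL filenames below are invented, not real Microsoft ones):

```python
# One HAL per chipset; the installer probes the board and picks the match,
# NT4-on-Alpha style: if it can't, it asks the user for a HAL disk.
HALS = {
    "omap4":   "halomap4.dll",
    "tegra3":  "haltegra3.dll",
    "msm8960": "halmsm8960.dll",
}

def pick_hal(probed_chipset):
    try:
        return HALS[probed_chipset]
    except KeyError:
        raise LookupError(
            f"unknown chipset {probed_chipset!r} - insert a disk with your HAL")

print(pick_hal("tegra3"))
```

Everything above the HAL stays generic; only this one lookup (and the HAL binary it picks) is per-platform.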
They're already doing that...
...at least with the car warning stickers. Big huge warnings on the sun visors, and on my 99.5 (I'm in the US, our 99 was still a Mk3, 99.5 was a Mk4) Golf TDI, there's a small sticker on the windshield, too.
And, RISC isn't the be-all, end-all...
...well, at least if you use the, you know, "reduced instruction set complexity" definition, rather than the "load-store" or "all instructions are the same length" definition.
The fastest "RISC" processors nowadays aren't RISC by any definition relating to how complex the instruction set is.
If you're dispatching micro-ops, your ISA is no longer RISC, in my opinion.
But, that's not a bad thing - an instruction that turns into several micro-ops (assuming that instructions are the same length, which is true on ARM (except for Thumb), or that the longer instruction isn't too much longer, which is almost always true on x86) uses less memory than the same task implemented as multiple instructions that translate directly to micro-ops. Using less memory means that the instruction gets loaded into the caches quicker, and it uses less cache (except for micro-op decode cache).
All of this means that you don't need absurdly fast memory bandwidth to get good performance out of a CPU that uses these techniques, and you can use a little less RAM. This is why x86 machines could be fast in real-world use, despite atrocious synthetic memory benchmarks compared to various absurdly expensive RISC workstations.
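As a quick illustration of the code-density point (instruction sizes are the standard 4-byte ARM encoding; the micro-op split is assumed for illustration):

```python
# One LDM that the core cracks into four load micro-ops, vs. four plain LDRs
# that each map 1:1 to a micro-op. The same work reaches the pipeline either
# way, but the LDM version costs a quarter of the fetch and cache footprint.
ARM_INSTR_BYTES = 4

ldm_bytes = 1 * ARM_INSTR_BYTES   # LDMIA r0!, {r1-r4}  -> 4 bytes in memory
ldr_bytes = 4 * ARM_INSTR_BYTES   # four separate LDRs  -> 16 bytes in memory

print(f"LDM: {ldm_bytes} bytes, LDRs: {ldr_bytes} bytes "
      f"({ldr_bytes // ldm_bytes}x the fetch/cache traffic)")
```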
Fun fact: ARM Cortex-A8 is no longer RISC, by my definition - the ARMv7 ISA includes some multiple load and multiple store instructions that are broken up into individual load/store instructions in the CPU. The micro-op instruction set is still ARM in an ARM CPU that uses micro-ops, and most instructions do still map 1:1 with an internal instruction, but there are a few that are broken up.
(Also, the Thumb decoder dispatches ARM instructions, but I believe every Thumb or Thumb-2 instruction maps 1:1 with an ARM instruction, so it's not really micro-ops, there.)
Because the HAL doesn't abstract the CPU instruction set?
Obviously, Apple will make the migration easy for new software - potentially as easy as a recompile, if there's no inline x86 assembly, but an existing binary can't run on an ARM system without resorting to x86 emulation.
A few things that help...
...first, Mac OS previously ran on big endian CPUs (m68k, PPC) and now runs on a little endian CPU (x86). So, endianness is already dealt with in OS X.
Second, any endianness issues that have crept in since the PPC->x86 transition won't affect ARM - ARM is typically run in little endian mode. (And, ARM can run in a big endian mode, too.)
Finally, Apple started a 64-bit transition not long after starting the x86 transition. (And, IIRC, they did the 64-bit transition twice - once on PPC, then once again on x86.)
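The byte-order difference mentioned above is easy to see in a couple of lines (a minimal sketch; `struct` just packs the same 32-bit value both ways):

```python
import struct

value = 0x01020304
little = struct.pack("<I", value)   # x86 order, and ARM's typical mode
big    = struct.pack(">I", value)   # m68k/PPC order

print(little.hex())   # 04030201
print(big.hex())      # 01020304
```

Code that serializes structs byte-by-byte without going through something like this is where the portability bugs creep in.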
Non-programmable HP scientifics...
...there's always the HP-35 (not the 35S), or the HP-45.
Of course, those are ancient, ancient pieces of hardware...
Letterless Model Ms...
Might talk to Unicomp about that. They at least had blank black keycaps for their keyboards, and Unicomp keyboards are Model Ms.
Samsung allegedly gave misleading information to NetSec's founder.
Microprocessors are a processor on one chip...
...so Itanium, Nehalem, the various SPARCs, etc., etc., are microprocessors.
(Interestingly, many POWER processors are NOT, as some are multi-chip modules - and not just multi-chip for multi-core, but different parts of the cores on different chips.)
A friend of mine actually does use the under 17 thing as an IQ test...
She actually rated one of her apps as 17+, specifically to weed out morons who would install the app, not read the description, be surprised when it does something the description says it does, and then go and rate it 1 star.
Actually, it does make sense
The problem is twofold:
1. The US government is completely incompetent at running many social programs, healthcare for the poor being one of them. Fraud, massive corruption within the organization, and mountains of paperwork that mean that you die before you finish the paperwork to get the healthcare you need mean that our government shouldn't be allowed anywhere near healthcare. Even if government can do healthcare right, OUR government can't.
2. This isn't government healthcare (well, OK, the government will pay for healthcare for lower income citizens). This is the government mandating that everyone buy one of the existing plans from the existing corrupt, abusive companies - not reforming the plans. It does compel insurers to provide coverage, which they didn't have to before, but it has no price controls. Before, insurance companies had to compete with the option of not buying insurance at all. Under this, you have to buy from the oligopoly of insurers, whether you like it or not, and therefore it's far WORSE than before - insurers can jack up their prices FAR higher, because your other option is going to jail.
Re: Re: Re: GAU-8
To be fair, the 5.56mm NATO cartridge is fairly weak, as far as weapon cartridges go - allegedly, it was designed to badly injure, not kill.
An injured soldier takes more of an enemy's resources than a dead soldier, assuming that the enemy soldiers care about their injured fellow soldiers. (And, I've heard that the US military is switching to larger calibers for the Iraq war, because of enemy soldiers NOT saving their wounded, and because 5.56 can't get through walls.)
And I'm American...
...and, while it's not a "Yank accent" at all, it clearly sounds like bark to me.
There's always Revol or Cricket...
Problem with them is extremely limited data coverage (remember about 5-10 years ago, when you couldn't use data while roaming, at least on CDMA? They're that way, combined with not having much of their own coverage), as well as limited VOICE coverage areas.
However, they do offer cheap voice and text, and don't ask many questions when you sign up, as they're no-contract providers. (This makes them rather popular with drug dealers.)
Dogs are a terrible idea...
Here in the US, a dog alerting is considered probable cause for a far more invasive search... so cops have been known to train drug dogs to false alert, just so they can harass people.
You could trade it in on a Apple II Plus with 48K RAM and a Disk II...
Steve Jobs actually tried to recall all of the Apple-1s when the Apple II came out, so that Apple wouldn't have to support them. Anecdotes say that, when the ][+ came out, the trade-in offer had become: trade in an Apple-1 (which would be destroyed) and get a ][+ with 48k RAM (keep in mind that the maximum officially supported RAM on the Apple-1 was 8k) and a Disk ][ for free.
Journalist's Guide to Firearms Identification
Just because they say it's an AK-47 doesn't mean it is one. Or even close to one.
I was under the impression that the z/Arch CPUs were modified POWER CPUs anyway, just with different decode front ends and/or microcode.
So, if they're already using a POWER core but running z/Arch instructions directly in hardware or microcode... that's better than having to translate to POWER instructions, and then to the micro-ops of the real CPU, no?