873 posts • joined Wednesday 7th March 2007 18:42 GMT
But the others were free at the time
...that's the difference. API for map access is not free, not even from GOOGLE, normally.
Google Maps has always been free, to consumers, as an APP, just as Yahoo, MapQuest, and the others have been. But the Google API, for map data access by businesses building their OWN map apps, is not free, not even here in the USA. In the cases where it ever HAS been free, once Google acquired sufficient market penetration they started charging for it. It's a pattern clear as day, and anticompetitive, designed to push competition in that space out, and it is illegal in the entire EU (not just France).
the API is not free
Consumer access to an app is one thing. Business access to map data through Google's (or someone else's) API to make their OWN app, even not a map app itself but anything that displays location information by doing lookups of Google's data through that API, is another, and it is not free. Not in the USA, not for Android device makers. But for France, it WAS made free, to squeeze an existing entrenched company out of that market.
Google did this in more places than just France.
Not with that screen
That screen alone is $800. Take that price out of the deal and settle for a generic 27" non-IPS panel at 1080 instead of 1440 vertical resolution, and the remainder of the components are priced in line with competing SB-based towers.
He stated his had a manufacturing issue, but he used others on the same desk without issue. As with MANY keyboards, short of the feet at the back there are no adjustments, and if your desk is not perfectly flat, or is itself imperfect, MOST keyboards will have a rocking issue, easily solved by adding a foam pad a few mm thick under the keyboard, or a few cheap foam-rubber stick-on feet.
It's a WORKSTATION, not a generic PC, and comes with an $800-1K 1440-line IPS panel. You're paying a $4-500 premium for a screen with high resolution, not just extra inches. Add a similar screen to any generic $1100 SB PC equivalent and it costs more than this iMac. You can barely save more than $200 building this machine from parts on NewEgg, and you sacrifice TB and a slew of options in the process.
I don't own one. I'm a Windows and Unix systems analyst by trade, and have custom-built my own rigs for 20 years. Still, this machine interests me greatly. For the price point, it has everything I want and more (including that screen). If all you want is 1080p content and web browsing, then you want a $600 PC and a generic 24-27" screen, not this machine, so don't buy this machine. I'm looking for something with 16GB RAM (which won't cost $1K extra to reach, since with 4 RAM slots I don't need 8GB chips), with SSD and HDD internally and a slew of high-speed external drive ports. I have my eye on a new storage chassis to replace an aging eSATA multi-tap chassis, and TB fits that bill, and still lets me add my 2 existing 24" displays to the side of that 27" monster. I cannot buy a tower from any major retailer, plus that 27" or similar screen, for less. Factor in that the iMac 27s from 2 years ago still clear $1K easily on eBay, some $1500, and I could replace this iMac every 2 years with a newer, better one for less than I average spending overhauling out-of-warranty custom towers, and without all the hassle. Plus, it will run OS X, Windows 7, and Linux (all at the same time if I want), which cannot be done on any generic PC at all.
USB, even 3.0, does not support the camera control protocols essential to pro-grade video hardware, nor audio gear. USB3 is fast, but not duplexed, and lacks the core protocol stack used by pro equipment. Also, all those legacy FW HDDs that don't have USB3 themselves run a hell of a lot faster over FW than USB2, and FW is fully TB-compatible with adapters.
This is a pro-focused box, and pros demand FW. More so than eSATA.
misrepresented a tiny bit
...and leaving out some other notes as well.
A) Yes, it DOWNLOADS to the Applications folder, but it does not "install." There is no installer at all, in fact; it is simply a single-file executable.
B) It only does this if the user has previously told OS X not to bother them about future files being moved to the applications folder. The default is a prompt anytime anything is placed in this folder, regardless of the source.
C) By downloading to the Applications folder instead of the Downloads folder in the dock, and by not providing a disk image on the desktop to "install" from, many Mac users will be confused. Some might not readily find the app at all after it "installs."
D) As the program was never "installed," running it the first time is a manual action, and it will further prompt a warning about running untrusted applications downloaded from the Internet. If somehow they manage to make it download by itself without a prompt, the user may never know it's there to bother running it, and if they do, they will see this warning and know it to be an app they have never run before that did not come with the Mac or any other trusted application run through a true installer.
E) Since it's not installed, there is no auto-launch, and it will not be running in the background without being manually launched. No dock icon (unless it's running) might be a clue to people who think it a legit app that an AV app that does not launch with OS X is not a real AV app. And unlike a real AV app, even if it were running and generated a pop-up (from the background), it would not produce a system-level alert, but would be forced to dance in the Dock to get the user's attention, another hint that it is not an integrated security application.
F) After all this, they still have to trick the infected user into giving them a credit card. It's not a worm monitoring activity; it can't access protected user data or monitor web activity. It has to actually trick the user, and can only do that when manually run.
Big deal. They have a handy trick to self-download on some Macs, after already tricking a user onto their web portal, but they have crippled its true usefulness as a worm/trojan since it can't auto-run, and removing it is as simple as dragging it to the trash.
could still be blank
It wasn't until XP SP2 (if slipstreamed) that the installer even pressed you to set a password. It did prompt for one, but it was possible to leave it blank with little more than a warning.
You are not moving 15TB of new files over the wire daily or even weekly. You are moving only the new blocks in changed files, plus new files, over the wire each day (as a continual background process).
With a 15TB dataset, and considering likely 10-12TB or less of that is valuable "do replicate" material required offsite, and an average 1-2% daily FILE level change, your delta for block replication will likely be about 0.5-1% of 15TB, or about 150GB. This is HEAVILY compressed using the best methods to streamline transmission (CPU is cheap compared to bandwidth, especially considering hardware compression cards that do it very fast, real time often), so likely you're talking about a payload over the wire of less than 50GB a day.
Your RPO is 24 hours, not 4, unless you're already counting onsite backup storage, so getting yesterday's data offsite is a 24-hour window. If you're backing up DIRECT to offsite systems, you did it wrong. You back up locally and (selectively) sync (some of) that data to an offsite repository at the block level. This does not even account for deduplication of that data in flight.
I have deployed D2D2D multi-point DR systems for 10TB data sets and had successful replication over as little as a T1, if not two of them. I've done 300TB data sets over OC12 easily, with headroom to spare. BTW, OC3 data transfer speed is about 1.5TB per day, roughly 10x what you need to keep 15TB of data synced. (We recommend double the "need" to handle odd load sizes from busy days.)
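A quick back-of-envelope check of the sizing above. All inputs (change rate, compression ratio) are illustrative assumptions, not measured values:

```python
# Rough arithmetic behind the replication sizing above.
dataset_gb = 15_000        # 15 TB total dataset
block_delta_pct = 0.01     # assume ~1% of blocks actually change per day
compression = 3.0          # assumed wire compression ratio (varies widely)

wire_gb = dataset_gb * block_delta_pct / compression
print(f"daily payload over the wire: ~{wire_gb:.0f} GB")

def gb_per_day(mbit_s):
    """Raw line capacity over 24 hours, in GB (decimal units, no framing overhead)."""
    return mbit_s * 1e6 * 86_400 / 8 / 1e9

print(f"T1  (1.544 Mbit/s):  {gb_per_day(1.544):7.1f} GB/day")
print(f"OC3 (155.52 Mbit/s): {gb_per_day(155.52):7.1f} GB/day")
```

An OC3 works out to roughly 1.7 TB/day raw, which squares with the ~1.5 TB/day figure above once framing overhead is subtracted; a T1 at ~17 GB/day explains why a well-deduplicated 10TB set can just squeak through one or two of them.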
Because a cheap tape is used one time for archive, which is nice. However, rotating a tape through daily/weekly backups will kill it in 10-30 jobs, meaning it needs to be replaced. At $30 per tape vs $100-150 for the same capacity in disk, which lasts thousands of re-writes, the rotational media cost (not archive; we still use tape for files moved off-site at month end, one tape set only) is in fact lower for disk than tape even in only 1 year of use, let alone 4-6.
Second, online backups are LOCAL. The only data going offsite is the daily rotation (optionally electronic over a wire, further eliminating man hours and courier costs), and only for compliance reasons. Disk stores weeks if not months of incremental backups in a single disk set, not one backup job per tape catalog, allowing quick and easy recovery of any files, not just last nights. You only need one disk set on-site, and one off-site, not multiple sets in rotation constantly (being replaced every so many rotations as an ongoing cost).
Most of the cost of tape is in the DRIVE. $4-8K per head unit, plus the chassis and robot mechanisms. An 80-slot jukebox can easily be $60K+ if it has 4-8 drive heads for concurrent backups. A D2D system uses cheap, non-RAID (RAID is optional) DASD chassis, costing $1-2K for 16 slots. A typical on-site system comparable to an 80-slot jukebox might have 3-4 hot-swap drive shelves, and cost $10-15K (including the high-performance eSATA cards for the server to support it). It can run 14 or more concurrent jobs, not the 4-8 a tape chassis can handle, further simplifying DR planning and backup windows, and there's always a "slot" open to do a data restore without hard-reserving an expensive drive for it.
To do a normal pyramid or similar backup rotation for 60 days, let's say based on 10TB of data, you need 8 weekly master sets (2 of which are the monthly rotations) and 12 daily incremental tapes. Given 2% daily average incremental change, you need 200GB of daily tape, so just 1 tape a day. Using 1.5TB tapes (compressed capacity), you'll get about 1.3TB per tape on average, assuming masters are in a single set (it will take more if they're in different tape pools, due to wasted unused tape). 8 tapes per master x 8 sets = 64, plus 12 daily tapes. Replacing each tape every 15 uses (not counting what's archived periodically; that applies equally to a D2D2T process), you're talking roughly 70 new tapes every 120 days at $30/tape for 4 years, plus 10 cleaning tapes in that same period. That's about $25,000 in tape costs. Adds up fast, doesn't it?

Now price local 90-day D2D backups, plus 2 spare sets of disks for offsite rotation. Given that D2D deduplicates all those bonus master jobs into a single data set, you need about 50TB online capacity and 20TB (compressed) offline capacity: 4 1.5TB disks per offsite rotation, and 10 more for onsite. A single 16-bay chassis holds all 10 local disks, including RAID 6, and leaves the 4 slots needed for archive. That's 20 total 1.5TB disks, replaced free under warranty for 5 years, at about $150 each for enterprise SATA drives: $3,000 total disk cost.

Given the chassis is about $4K vs $60K, which method is cheaper? And given a 16-bay chassis connects over SCSI or a few bridged eSATA ports, while that tape unit needs at least 2 Fibre controllers (on each end), connecting the tape chassis is about $4K more expensive as well. Each has the exact same server host requirements and licensing.
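As a sanity check, the 4-year media arithmetic above reduces to a few lines (prices and replacement intervals are the assumed figures from the text):

```python
# Tape: ~70 tapes replaced every ~120 days at $30 each, over 4 years.
tape_price = 30
tapes_per_cycle = 70
cycles_in_4_years = 4 * 365 // 120        # ~12 replacement cycles
tape_cost = tape_price * tapes_per_cycle * cycles_in_4_years

# Disk: 20 enterprise 1.5TB drives at ~$150, under warranty for 5 years.
disk_price = 150
disk_count = 20                            # 10 onsite + 2 offsite sets + spares
disk_cost = disk_price * disk_count

print(f"4-year tape media: ${tape_cost:,}")
print(f"4-year disk media: ${disk_cost:,}")
```

That's roughly $25K of consumable tape against $3K of disk, before counting cleaning tapes, drives, or chassis.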
media cost != DR solution cost
You might be able to get 1.5TB tapes for $30, where enterprise 1.5TB Disks cost $150. How much is the fiber connected tape head array you're sticking those tapes in, for an 8 head system with 80-120 tape slots? $60-100,000? How much does it cost to have 32 bays of SATA ports online on a 10G connected storage chassis behind your tape server? About $10K.
Tape may still be the right path for archive, for the small set of data you have to keep more than 3 years after it's backed up. But for short-term, near-term, and especially the current backup data set, you can re-use that DR HDD 10,000 times, while you can reliably re-use that DR tape only a handful of times.
Move your daily backups to a D2D chassis, better still replicated at the block level off-site (often cheaper than paying for 4 years of Iron Mountain tape pickup services). Let that data migrate slowly (using 24-hour windows instead of 6-8) to a low-capacity tape system used only for archive, and snap only weekly or monthly archives as required by your regulations, because your online D2D chassis already has every version of every file created over a 30-90 day window. Even short-term offsite rotations (your daily and weekly sets moving offsite) are cheaper to do on disk than tape, due simply to tape replacement costs over time. So even if you can't afford wired data duplication to your failover site or DR storage provider, it's STILL cheaper to use D2D for backup than tape, EXCLUDING long-term archive requirements.
As for the benefits of disk over tape when it actually comes time to recover? How many man-hours of IT time are spent every year in your organization recovering files that are not on yesterday's onsite tapes still cataloged inside the jukebox, meaning they need to be recalled from offsite, re-imported, and then wait for an available tape bay to spool the tape and find a file or folder? With disk, you can restore any file or folder from often several months ago INSTANTLY. How many times is that tape corrupt? (Per industry averages, 1 in 10 tapes checked has bad sectors or can't be read at all after it has been removed from the site and returned later, if and when the right tape can even be recalled based on accurate data about when a file was backed up.) I've seen cases where it took more than a week to get a valid tape from an offsite depot with a valid copy of the server folder needing to be recovered. How many man-hours does it even take to rotate all those tapes every day, prepare them for shipping, and more? All that time and potential data loss adds up in real dollars. If you're not counting manpower, overhead, etc. in your costs, you have again failed to properly compare the costs of disk and tape.
I worked for a major DR firm, one that pioneered most of what we use in place today, though you have probably never even heard their name. They did not sell disk or tape; they sold DR software and the server units to make DR happen. In 1998 they had already figured out D2D was better. I did over 400 DR plan designs based around their technology, mostly in competition with Symantec, Veritas, and similar tape products. In every single case we were able to deliver a D2D appliance system at lower cost (hardware, disk, licensing, and rotational media), using their older tape drive as the archive footprint (which works in most cases, as what you are required to archive vs back up is usually 25-50% of your total backup size), a complete replacement for their systems, for less TCO. In many cases (almost half) we were able to deliver that system for less than their annual license upgrade price plus a new tape drive, not even considering total competitive system pricing. In every single case, we reduced their backup window by more than half, eliminated issues with multiple concurrent backup conflicts, simplified which systems could back up when, and permitted instant restore of any file for any system. In about 30% of cases, they also added offsite data replication, which ended up costing less than their pre-existing tape rotation contracts (daily pickups became monthly, and the remaining dollars covered the 4-year cost of the offsite system installation or a hosted partition from a provider).
Disk IS cheaper (for most environments) when you have someone who knows the technology design the data plan. It's faster, it's more reliable (offsite rotations can even be RAID sets for added reliability, but disk is in fact more reliable and more durable than tape to start with), and it's easier to meet data compliance and archive requirements with it. It reduces manpower, and it DRAMATICALLY reduces RTO and RPO.
I have an Audio CD from 1993 that has a data partition on it in addition to the tracks, and when placed in a Mac (running System 8), it displays a very DVD-like menu system, including moving images that can be clicked to launch the music video, or even to launch AOL to the artist's custom content page there. This idea was not new or novel when patented, sorry.
Not a SINGLE vote for Otherland? Could have sworn I listed that one. Maybe I noted it would have to be a TV series not a movie and it got discounted. (4 very large books to tell a single story, not going to do well on film, but would make an amazing TV series!)
we'll hear real contrary news later
I expect it will sound like this:
1) you have to accept or in some other way interact with the attachment; it can't auto-execute.
2) gaining shell access is not gaining root access.
3) root access is only gained if previously enabled (not the default, and not commonly done by accident).
4) it only works for people on your friends list
5) some settings in Skype that are not the default may need to be set, turned on/off.
6) since use of Skype essentially requires a current version, by the time an ITW exploit exists, all active Skype users will be patched and this attack vector will be useless.
7) though in some cases (noted) root access can be attained, installation of executable code is still not possible without further user interaction, including entering of the keychain password.
Not ANY time soon
Even with 64-bit ARM chips (next year), including quad cores and higher-end GPUs capable of handling what people expect from Apple (iLife), Apple is not in the business of compromising the performance of their machines to fit a niche market. ARM doesn't support TB, doesn't support DisplayPort, doesn't have a SATA interface, and so much more.
Can they make OS X run on ARM? Yea, iOS is a port of OS X itself... Can they do it in a compelling form factor, with compelling performance, comparable to their MacBook base model or Air? Well, since the price offset between the ARM and the i3 is about $25, and they might remove 10-20% of the battery in the process, saving maybe an additional $10, I don't think they could reasonably shave more than $100 total off the price of the machine, and it would fall far short of full MacBook performance. This might be usable for a dual-boot tablet sometime in late 2012/mid-2013, giving some limited access to basic OS X apps, but again, it might ride that price up enough not to be relevant, especially if still limited to tablet screen resolutions. At the same time, MacBook prices are falling, and it's reasonable we could see a $700 MacBook by that point, if not less.
Yea, they're pursuing it; Apple keeps their options open. We found out at the Intel launch that from DAY ONE they had built and compiled every single piece of code on at least 3 platforms (Power, Intel, AMD, and a hint there were others too), so going strong on ARM is a given. But will it result in a product? Not on the current ARM architecture. When ARM is significantly more powerful than Atom, and can still be cheaper and more power-efficient, it might replace the lowest-end machines, but only if Xcode can cross-compile ANY app with a few clicks; otherwise those using ARM OS X may have a dramatic disadvantage in software availability, and emulating x64 on ARM64 just isn't going to work well.
Lots of people know OS X fell in pwn2own, few know the extent of what that actually means based on the contest rules.
Execution of code was completed, yes, BUT only through MANUAL intervention, and only at user-level code authority. Code was NOT installed, root or other escalated permissions were not attained, a bot/trojan/virus could not be deployed or left behind, and the server receiving the "tricked" connection to a pre-generated web site (a successful phishing attack was required first) required the hacker to be online to accept the incoming attack and directly interact with the pwned machine. They also did not acquire or bypass the keychain (though IF you could get code escalated and running on a Mac (possible directly, but not yet proven remotely), there was an exploit shown (now patched) to do that).
Remote code installation on OS X, with escalated admin permissions and/or the ability to access secured portions of OS X or the keychain, has never once been demonstrated, even using now-patched vulns under the assumption a user had not yet installed the patches. There are proven ways to compromise a Mac, yea, but they are not capable of being automated and cannot self-spread; they all require a central server, making them easy to block and stop, and first the user has to be tricked for any of it to even be possible. Proofs of concept defeating one or more layers of security, under the assumption that other barriers can/will be breached, have been shown, and every individual layer of security has been breached, but no hacker or security team has ever shown a complete path to do it remotely.
More so, if you could get a virus installed on OS X, it would dance in the Dock when running, show up in Activity Monitor, and in general be easy to spot. Really, the only viable ways to get an app in boil down to tricking the user to go to a site, tricking them to download code, tricking them to type their keychain password, even to use the Mac installer itself, and all of this boils down to activity that's damned easy to detect with AV software.
I'm not suggesting this can't be done, that OS X cannot be compromised, but I am suggesting people get a grip, understand just what Pwn2Own is, and that the methods used present very, very low real-world risks, and that simply because a Mac fell does not mean it can be remotely compromised (yet). Also, everything learned for Pwn2Own is handed over, and those vulns patched.
And in this case, it doesn't even appear the Mac cross-code even works... just that yet another coder tried and failed.
no no no
ALL those servers can co-exist, pooled on a single IP. A properly configured front end can route incoming port 80, 8080, and 443 connections based on Host headers as easily as direct IP routing. It's a simple matter of properly configuring a firewall and DNS, or using something like DataPower.
There are a few cases a server and URL must be matched to a specific unique IP on a specific port, but very, very few of them have anything to do with web services.
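For illustration only, here's how small that routing problem actually is. A minimal Python sketch of Host-header ("name-based") dispatch, with made-up hostnames; a real deployment would use a reverse proxy, firewall rules, or an appliance like DataPower as described above:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Several "sites" sharing one IP and one port; dispatch is purely on
# the Host header the client sends. Hostnames are hypothetical.
SITES = {
    "www.example.com":   b"main site",
    "files.example.com": b"file portal",
    "blog.example.com":  b"blog",
}

def resolve(host_header):
    """Map a Host header (optionally with :port) to a site body, or None."""
    return SITES.get(host_header.split(":")[0].lower())

class VHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = resolve(self.headers.get("Host", ""))
        self.send_response(200 if body else 404)
        body = body or b"no such host here"
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve every name on the single shared address:
#   HTTPServer(("0.0.0.0", 80), VHostHandler).serve_forever()
```

Every HTTP/1.1 client sends the Host header, which is exactly why the "one DNS name = one IP" premise is false for web services.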
not just Unis
Pretty much all businesses continue to operate on the completely false premise that each DNS name requires a unique IP address. I've seen companies not only put each top-level domain on its own IP, but multiple sub-domains on their own too, and an FTP server on a separate IP, all fully routable, when they could have co-existed on a single IP.
The only time you need a unique front-facing IP for a site is when you do not want 3rd parties to see that multiple URLs are all backed by a single system, or when more than one has to be capable of responding on the same port while other identifying data (like the URL itself) is not concurrently passed to the server (session-less browsing, telnet, etc). In some cases, applying QoS at the edge is a need, and that makes using a single IP difficult, but really, most people using more than 1 IP could use at least fewer than they do now, and many could in fact use just 1.
Too many businesses simply find it's "easier" to have a new IP than bother with DNS and proper fire-walling to route traffic.
This analyst may be right that a model of the iPhone very similar to the existing one will start manufacture in September. However, we suspect a separate, larger model with a bit more CPU/GPU oomph will begin manufacture very soon, to be on sale in Aug/Sep. This would be consistent with other Apple patterns of releasing a new model now and a lesser model a bit behind it, and it would also end 3GS manufacturing, allowing the Retina panels currently used in iPhone 4 models to be re-allocated to a smaller model 5 as demand for the larger phone takes over from the older one (the smaller model would not be available until the larger had shipped).
some I didn't see here, and some I'll second
Missing: (I did not read every page, props to the few who mentioned these)
Otherland (Tad Williams)
The Mote in God's Eye (and its sequel, Niven + Pournelle)
Ender's Game (actually about to enter production)
Neuromancer (also in production)
Out of the Silent Planet (C S Lewis)
Ones noted that I'm adding a vote to:
Foundation saga (and prequels too; really, about anything from the future-history cycle)
Snow Crash (though that could go badly/over-the-top cheesy)
Ones I do not agree with:
Ringworld. Might make it on TV as a series, but there's not enough action/plot for a movie, and/or it takes too long to get there. This was for FILMS. I would vote for it being a TV show in a second, but it's not good movie material; this has been discussed a long time.
I'm not even getting started on Fantasy... If we stopped writing now we'd still have a century of blockbuster releases to hit screens... BTW: George R R Martin "Game Of Thrones" Episode one THIS SUNDAY on HBO, SO PSYCHED! 15min promo brought me right back to the prologue of the book, I knew exactly who each character was on SIGHT. Super well done.
but only at 24Hz
HDMI 1.4a supports 4Kx2K, but not at 60Hz, not even at 30, only at 24Hz at 36bpp color. 3D support is limited to 1080p/24 as well (full-resolution double frames) or 1080i/60.
In contrast, DisplayPort supports higher resolution (limited only by bandwidth, which is 17Gbit/s vs 1.4a's ~10Gbit/s), more color spaces, and a faster Ethernet channel (and/or a USB passthrough). It also daisy-chains up to 4 displays (at lower resolutions). It's also royalty-free.
1.4 added a data channel, passthrough audio, and some additional format support, but did not increase the link bandwidth. We need HDMI 1.5/2.0 to do that, to support not just 4K but 60fps in full color on it, let alone 3D.
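The raw pixel-clock arithmetic shows why. This sketch assumes the 3840x2160 UHD variant of "4K" and ignores blanking intervals, so real link requirements run somewhat higher than these figures; HDMI 1.4a carries roughly 8.16 Gbit/s of video payload (10.2 Gbit/s raw TMDS minus 8b/10b coding overhead):

```python
def video_gbit_s(width, height, hz, bpp):
    """Raw (unblanked) video payload for a given mode, in Gbit/s."""
    return width * height * hz * bpp / 1e9

# Against HDMI 1.4a's ~8.16 Gbit/s of usable video bandwidth:
print(f"4K/24 @ 36bpp: {video_gbit_s(3840, 2160, 24, 36):.1f} Gbit/s")  # fits
print(f"4K/60 @ 24bpp: {video_gbit_s(3840, 2160, 60, 24):.1f} Gbit/s")  # does not fit
print(f"4K/60 @ 36bpp: {video_gbit_s(3840, 2160, 60, 36):.1f} Gbit/s")  # nowhere close
```

4K at 24Hz and 36bpp lands around 7.2 Gbit/s, just under the wire; anything at 60Hz blows well past it, which is exactly why a bandwidth bump is required.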
DP is the better system, especially moving into these higher resolutions. This is confirmed by the inclusion of DP connectors on the 4K TVs shipping today. We either need a new HDMI (likely with yet ANOTHER damned cable end), or we should just move to the royalty-free standard and be done with it.
not all of it
More than half of active virus activity can be laid at either Sun's or Adobe's feet. MS might theoretically be able to secure their own house, but 3rd parties cannot be (until such time as we all agree the current security model sucks, agree to lose backward compatibility, and start all over).
and that's why it will fail
The simple fact that you can do that, without software, means there's no encryption. No encryption = no business/government use. No government market = game over.
iOS is VERY close to surpassing RIM on the DoD STIG checklist; it's the most secure mobile platform on its own. RIM only beats it when combined with an expensive BES server (which iOS does not need to meet the same STIG compliance goals). It's already acceptable for use in many other less-secure environments because it has full-disk encryption and remote-wipe abilities native to a simple Exchange server. It's being adopted heavily.
Android meets virtually none of the audit security requirements. To do that means disk encryption, central management, blocking side-loaded apps, centrally controlled user access, full Exchange support, and requiring apps to use and encrypt removable media.
The very things Fandroids tout as their platform advantages are the things keeping them out of businesses and government, and they will eventually lead to poor adoption later.
Utter analysis fail
Android is shipping more units, but ONLY because it is in TWICE the market space, serving more than twice the population. In markets where it directly competes with iOS, Android is the 2:1 second place. Android had 615% market growth, but they added 700% more market, and the market space itself more than doubled, which means that of each new person CAPABLE of buying an Android phone (one who chose to buy any smartphone in that period), they only gained about a quarter, and most of that in places iOS was not sold at all. As Apple continues their relentless expansion into more nations, Android is the only one standing there to lose market share.
As for tablets, Apple has massive cost efficiency from 1st-party control of the OS, a symbiotic product line, and 200m-units-shipped high-volume manufacturing. They have made direct investments in facilities, enabling them to corner parts markets and buy at UNDER wholesale, sometimes even taking back-end profits when those facilities sell parts to their own competition. They're clearing near $200 in profit on each iPad, and they have lower R&D than any single competitor in the space. With the competition having 1-2m-unit model runs, splitting high R&D costs across them, and taking a back seat in the OS development process (reactive, not proactive, development costs a LOT more), they simply cannot produce a comparable platform at a lower price, and if they could, Apple can easily come down.
...and Apple has the back-end profits too. Samsung never sees a cent after shipping a Tab; in fact, ongoing support drains what profit they make. GOOGLE makes all the real money here. Until that shifts, until costs come down, until several players simply leave the market allowing fewer to share the consumer base, they cannot directly compete, and they know it.
Yea, Android's going to take some market share. It is, however, essentially banned from business, and that's a BIG deal too. Banks, hospitals, government contractors, local governments: their security audits, STIG, HIPAA, SOX, etc. make it clear Android cannot be allowed in. Without business, they lose. 80% of the Fortune 1,000 are in the process of testing and/or deploying iPads. Schools are adopting them at the district level. Android and iOS have mostly, until very recently, only competed in retail, but Apple owns the business market, and that's a bigger and higher-profit market. They learned that lesson from Microsoft. Google missed that day in school.
The drive is already SATA. Why go SATA (on the bus) to USB host, to USB node, and BACK to SATA, wasting all that resource and overhead, and losing duplexed communication and committed writes in the process, when you could just have used the native SATA connection already present in the drive?
External drives were more expensive when they were IDE inside the caddy, but now that all drives are SATA native, USB is actually more complex and more expensive, as well as slower and more limited...
Better still, TB carries SATA over PCIe, so an external TB device is seen as an internal hot-swappable disk by default, and another drive or a display can be chained off it without requiring a hub, at 20Gbit (duplexed speed) instead of 5, and at 10W instead of 5W power.
USB3 came WAY too late to catch on. It got replaced before it was ever widely available.
FW has protocols neither SATA nor USB have, including ones critical to high-end camcorder operation and HiFi audio systems. It's a niche port; most people do not need it, but we cannot eliminate it. For these people, including a TB port is just as good as a native FW port, since it can be easily used with an adapter, something not possible on notebooks that do not already include FW ports natively and/or an ExpressCard slot.
eSATA2 in practice is faster than USB3. USB is not full duplex, and that alone is enough, not to mention the CPU latency inherent to USB that SATA does not impose. Also, lacking NCQ, committed disk writes, and more, USB3 devices still cannot be used in RAID, can't host VMs reliably (in some emulators, at all), and will be very poor for heavy random read/write operations like databases. USB3 is good for little more than bulk transfer and/or backup, but if we need eSATA for everything else, we might as well just USE eSATA (since the drive controller supports it natively without ADDING a USB host on top, making it faster, cheaper, and a simpler architecture). eSATA6, now present in every current-generation chipset (it already GOT its speed bump a few years ago and is now COMMONLY deployed), is more than twice USB3's speed in practice, and 6Gbit (duplexed) vs 5Gbit (not duplexed) theoretical.
USB3 costs more (for storage), is more complex, imparts more CPU load, has higher read/write latency, lacks critical protocols, and in every other case is fully redundant to all other EXISTING technologies. The ONLY thing USB3 brings is a combination of bulk transfer speed with legacy device support in the same port. However, TB brings that and SO much more, also without adding any new ports (or cables for that matter, using common DP-mini to DP cables).
can we grow up?
Why is FoxConn automatically associated with Apple? Apple is not even its biggest client, and nearly every one of the major retail tech firms has products being made there. Of the suicides and other issues there, few if any have even happened to folks assigned to Apple lines or in areas Apple uses. Apple has some of the tightest controls and standards in place at FoxConn of anyone FoxConn makes stuff for, and they're one of only a handful of companies that do routine on-site inspections of those standards, including record reviews, and go further with routine "surprise" inspections.
FoxConn has more employees at their China facility than the capital city of my state has total residents. Less than 10% of those people work on Apple gear. Stop associating the two, please.
They made it faster, but it's still not full duplex, still lacks core protocols needed for high performance disk use (other than bulk transfer), can't be used in a RAID set, and more. If you already have SATA onboard, and in those use cases it's faster and has a superior protocol set, why not use it and forget USB3, especially when every current chipset ships with SATA6, which is even faster, and when eSATAp carries 5v on laptops and 12v on desktops at 2A (more than double USB3). TB makes it even less relevant.
USB3 is too little, too late. It has no single use case not already met by eSATA + USB2. Going to USB3 requires new cables, more complicated devices, and more cost. Using SATA for all storage universally makes everything simpler, and there are no other "high speed" peripherals out there that can be used over USB (even USB3) due to latency and CPU load issues (they require FW-specific protocols, or low PCI latency USB can not meet).
SATA is hot-plug by default. In fact, it's superior to USB's software implementation of it. Also, there are eSATA thumb drives, and eSATAp is a 5v 2A output with more power to drive devices (like BR burners and multi-drive caddies) than even the new USB3 supports.
Now that the Tegra2 competition is rolling out in tablets and soon phones, Tegra2 will go away. nVidia cut a few corners on the platform and made it just slightly less potent in video, which was not a big deal, but cutting hardware decryption, THAT was a big deal, and that means the HDCP protection chain is broken, and the MPAA and most other licence management partners will not authorize their content for playback on those devices. In simpler terms, Tegra2 is not getting Hulu, might get Netflix but only with a limited selection of content and only if Netflix fights hard for it, and most online streaming services pushing licensed content also won't be releasing apps for it. No content = no sale for a media device.
nVidia should release a Tegra2.1 platform ASAP, reinstating the hardware decoder, or they're going to fade away.
iOS way ahead
I work with DOD STIGs all the time. BB only has a grace in there because Obama essentially "made" them once he was chief, but even that has strict limits on what he can and cannot use the device for. Only WinMo 6.5, when backed by onboard 3rd party apps and 3rd party servers, is sufficient to meet DOD STIG (and then only when the company owns the phone itself, not a user device, and the issuance of each device is audited and manually signed off on by a government representative, per device).
iOS is VERY close to meeting STIG, and without requiring an additional audit server in addition to Microsoft Exchange. There are only a few things on the iPhone 4 and iOS 4.2 not yet secured by the new encryption APIs, and only 2 tick marks on the STIG checklist for remote management and authoritative control left to tick off. iOS 5 could meet DOD STIG completely with a few simple changes, a web server, and Exchange 2010 deployed (itself to STIG standards), and an optional server to push iOS apps internally to registered devices (bypassing the App Store). Then it's just a matter of the gov't placing a big order with Apple and outfitting its users with devices more secure (and cheaper to manage) than BB or WM6.5.
Android, I'm afraid, is last in class (behind WinMo 7; sad to see they missed so much, being the only "true" STIG-approved device). Even Symbian ticks more boxes off. For Android to meet STIG, it needs to lose the USB and SD ports except through internal API controls (moving files in/out, including enforcing that such media be encrypted, is OK, but it can't be used as expanded native storage if it's removable). The file system needs full encryption, accessible only through API. All apps have to be signed. The device needs a centrally managed remote wipe service and native Exchange support. A remote policy system has to be implemented for enterprise audit and management. And lastly, no side-loading. Most of the things Android users want have to go to meet STIG. I don't see that happening....
In fact, they'll be moving away from the .pst itself on PC sooner rather than later. Why? Because every time you change 1 item in the PST, the whole frelling multi-GB file shows up in your nightly backup. Worse, when open, the file is locked and cannot be backed up on most systems. Time Machine can't support that, Windows image backup chokes on it, and when storing PSTs remotely on servers, their nightly backup is either massive, or admins ignore the PST file entirely in backup.
Also, HFS+ is much more efficient than NTFS at dealing with this type of data. NTFS handles massive directories of thousands (or millions) of files very poorly, as would be the case in a long e-mail history. Linux systems can use library files to mask this data-set from the larger file system, and OS X ties these libraries further into extensions of the file system, allowing other apps to see them as a single file, a directory, or both. WinFS was reported to introduce such features, but it has yet to materialize.
As for mounting a PST on a Mac, there are conversion tools. If it's an archive, simply create a new container, import the PST into it, then close it and only use it when you need to (you do not have to merge the PST into your current mail account).
but Microsoft, after more than a year of notice (publicly, who knows how long the Mac BU knew about it), failed to implement it. Microsoft does not want to support CalDAV as it is one more thing others can do without Exchange, and Exchange is the crack that hooks organizations to Microsoft.
Apple Mail talks to Exchange just fine, http://www.apple.com/macosx/exchange/. 2007 or higher. Perhaps your Exchange is still on 2003 (which was dropped)?
This is about Microsoft's product that doesn't support CalDAV, the de facto STANDARD, and lacking that support it can't talk to Mobile.me (and a LOT of other calendar services). It can still talk locally to Apple sync services, which will sync Outlook to Mail and Calendar, and then THEY will sync to Mobile.me, and Outlook still talks mail and contacts to Mobile.me just fine, but calendar is out because MICROSOFT failed to add support for the industry standard, and Apple's old way has to go for security reasons.
I could go into lots of detail, but I'll try to keep this short (not that it's brief).
1) If you drive an average 250mi/wk (12K per year), a common hybrid or micro diesel pushes that on about 5 gallons (US) of gas, or about $18-20US per week. The Leaf uses 30-32kWh to charge from dead empty (but it never really can be, now can it), and given the last 10% of the charge is hardest, you'll get about 33-35kWh average per 100 mile equivalent charge, so ~100kWh/week, ballpark US at $0.10/kWh = ~$10/week, or about half the price of gas in a comparable small hybrid car assuming a 12K mile annual driving habit. Because of summer and winter (mostly winter) inefficiencies, you can reliably trust the car only 70-80 miles a day unless you can charge while at work, so keep that in mind.
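The arithmetic in point 1 works out as a quick back-of-the-envelope script. All figures are the round numbers assumed above, not measured data; the ~$3.80/gallon petrol price is my own assumed mid-range value:

```python
# Weekly fuel-cost comparison, using the assumptions in point 1.
miles_per_week = 250

# Hybrid / micro-diesel side: ~50 mpg gets 250 miles on ~5 US gallons.
mpg = 50
price_per_gallon = 3.80                        # assumed petrol price
gallons = miles_per_week / mpg                 # 5.0 gallons
petrol_cost = gallons * price_per_gallon       # ~$19/week

# Leaf side: ~33-35 kWh from the wall per 100-mile charge (charger
# losses and the hard last 10% included), at $0.10/kWh.
kwh_per_100mi = 34
price_per_kwh = 0.10
ev_kwh = miles_per_week / 100 * kwh_per_100mi  # 85 kWh (rounded up to ~100 above)
ev_cost = ev_kwh * price_per_kwh               # ~$8.50/week

print(f"petrol: ${petrol_cost:.2f}/wk, electric: ${ev_cost:.2f}/wk")
```

Even before rounding the weekly charge up to ~100 kWh, the electric cost lands at roughly half the petrol bill, as claimed.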
2) CO2. Pretty much the simple answer is: unless more than 50% of your power comes from renewable sources, it is NOT greener. Wells-to-wheels figures put oil at about 23lbs CO2 per gallon (19lbs is from burning it; the rest, refining, mining, and transport, is a small part of the total). Coal is 2.1lbs/kWh just burned. Mining, storage, and transport are actually WORSE than for oil (oil is mostly pipe-lined, coal has to be manually moved in fuel-burning vehicles, and drilling vs mining, mining is also worse). Natural gas makes 1.9lbs CO2/kWh. At 56% power from coal and 20% from natural gas, just combustion, you're at about 1.5lbs/kWh. 150lbs a week, give or take your region. This does NOT include transmission losses (but the roughly 33kWh/100 miles figure above does), and does not include mining/refining. 5 gallons of gas = 100lbs CO2 (+ mining/refining) and electricity = 150 (plus mining and transport). That's just CO2... Sulfur, mercury, and other hazardous EPA-managed output from fossil fuel electric generation is about 3-6 TIMES as high for an EV as for burning even diesel fuel.
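Point 2's CO2 tally reduces to two multiplications, using the combustion-only figures above (the 56% coal / 20% gas grid mix is an assumption about the average US grid, and per-region numbers will differ):

```python
# Weekly CO2 comparison from the figures in point 2.
gallons_per_week = 5
petrol_lb_per_gallon = 20       # ~19 lb combustion plus a little overhead
petrol_co2 = gallons_per_week * petrol_lb_per_gallon   # 100 lb/week

kwh_per_week = 100              # weekly charge figure from point 1
grid_lb_per_kwh = 1.5           # blended coal/gas combustion-only figure
ev_co2 = kwh_per_week * grid_lb_per_kwh                # 150 lb/week

print(f"petrol: {petrol_co2} lb CO2/wk, grid-charged EV: {ev_co2} lb CO2/wk")
```

On that assumed mix, the grid-charged EV comes out ~50% higher on combustion CO2 alone, which is the whole point about the >50% renewables threshold.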
The environmental impact of building an EV is much heavier than a standard car. By some estimates, building a car accounts for half or more of all the CO2 output it will ever have. I don't know if I buy it, but an EV has more impact, as making those batteries, and the rare earths in the motors, costs more environmentally than a block of iron for an ICE. By how much? VERY hard to tell.
A comparable micro-diesel, with similar interior room and driving performance, is about $7-15K less than an EV (depending on subsidy, and considering some micro-diesels and hybrids get their own subsidies too). Odds are, at even twice the driving range (pushing an EV to its theoretical max per week given charging times), gas prices have to go over $6 within your first 3 years of ownership, and continue to rise to $8/gallon inside 7 years, while energy prices stay the same (unlikely), in order to break even. This does not take into account higher insurance on EVs, higher repair bills, or that battery swap every 10 years... They will also likely depreciate in value VERY fast, as newer battery tech and better systems are coming out rapidly (current EVs will be obsolete in 5-7 years, replaced by MUCH better ones, and battery prices are falling by as much as 25% a year). Buying an EV for economic reasons is not sound. Buying an EV for environmental reasons is not sound (unless you live where more than 50% of power is from solar/wind/water).
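The break-even claim in point 3 can be sanity-checked with one function. This is a sketch with assumed round numbers (a $10K mid-range premium, 5 gallons/week, ~$10/week of charging, all prices held flat), not a full cost model:

```python
PREMIUM = 10_000.0           # assumed EV price premium over a micro-diesel
GALLONS_PER_YEAR = 260       # ~5 gallons/week
ELECTRICITY_PER_YEAR = 520.0 # ~$10/week of charging

def years_to_break_even(gas_price: float) -> float:
    """Years of fuel savings needed to recover the purchase premium."""
    annual_saving = GALLONS_PER_YEAR * gas_price - ELECTRICITY_PER_YEAR
    return float("inf") if annual_saving <= 0 else PREMIUM / annual_saving

print(years_to_break_even(4.0))  # ~19 years at $4/gallon
print(years_to_break_even(8.0))  # ~6.4 years even at $8/gallon
```

At today's prices the payback never happens inside the life of the battery; petrol has to sit near $8/gallon for the whole ownership period just to get under 7 years, which matches the argument above.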
There's a lot of fuzzy math, and a lot of FUD, out there. One thing to keep in mind: that EV uses as much power as half a common house every month (or more), per car. We can afford a 1-2% increase in total grid output in the USA right now, and power capacity is not expected to dramatically improve for 5-10 years. That means about 1 in 75 houses can have an EV, and even this is limited to areas not already strained by underdeveloped power (California, the southeast USA, etc.).
They ARE the future, but we're not in the future yet. EV development has to continue, and to do that they need SOME of the cars to sell, a small number, government subsidized. In 20 years we might be able to start selling them en masse, but for now only confused greens and misled people believe they're doing something good by buying one (other than supporting a growing economy).
Is there a fog in here?
What part of enforcing open competition, little different from FRAND legislation on patent licensing, is supposed to correlate to socialism?
Does the government own the network and give it to the people? No. Regulation is FAR from socialism.
The data roaming 24MB cap applies to feature phones, not data plans. This is the data plan component of phones that have general data allotments. (camera phones etc).
AT&T clarified to me, formally, there are no caps or limits on iPhone, Android, or Data Connect off-network roaming, there are no overage fees, and unlike the wording of the default wireless terms I'm linking to below, they will not change or cancel your policy for exceeding any limits. It is possible the 3rd party carrier might throttle or disconnect your usage if you abuse the roaming partner's ToS, but AT&T will not do so.
As for tethering, there is a simple explanation most people do not recognize or understand. AT&T could really give two shits about DATA load (they're an L3 backbone provider; they have the bandwidth...). What they care about is AIRTIME. Cell phones are specifically designed and programmed to hold a static IP yet use very little (near zero) airtime on a channel when idle, and have short time-outs to release the channel, allowing a tower to "push" data back on a separate one on demand to that static session IP, and of course, so long as the channel is available, the phone can use it when it needs to. It attempts to "reserve" the channel, but the channel is mostly not in use unless you are actively sending or receiving data, or have just recently done so.
However, PC operating systems have no such understanding, and very little PC software operates on push technology. Windows, even sitting idle with mail open and maybe a browser, is near constantly pinging away at systems and services. This creates enough load that even when using "no data" sitting idle, your phone's connection to the network cannot be idled, and thus you are using airtime with no data load. This is also why they dislike streaming and torrent loads (they're 100% on, but using limited or metered bandwidth; it's better for the carrier if you download the whole movie all at once and get OFF the air to watch it, as opposed to streaming). This gets even worse when your phone is a hotspot and lets more than 1 PC connect...
Bandwidth is very cheap. Airspace is very expensive. They would rather slam data into your mobile at 14mbps and get you off the air than let you idly ping services, holding that channel in permanent use. THIS is why they charge more, and many don't even give you extra data with that fee (AT&T does). You're likely to use significantly more airtime without actually placing significantly more network load, and it's the airtime they want you to pay for.
This measures individual wind farm output. Bad move. Wind farms need to be considered as a whole, cross connected, providing a balanced load to the general grid. Studies showed that a grid spread over 500-700 miles can have a less than 2% output variance, where a single farm can vary by more than 80%.
Yes, the data is relevant to the owner of a single farm that is directly connected to a LOCAL grid, or where few other farms share loads, but if the grid is properly built out, better yet using superconducting trunk lines (like those deployed in many places in Europe, Long Island, NY, and more), then the load can be normalized.
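The smoothing effect is just the statistics of averaging many weakly correlated sources: the swing of a mean of N independent outputs shrinks roughly as 1/sqrt(N). A toy simulation with made-up uniform outputs (not real wind data, which is partially correlated over short distances) illustrates the idea:

```python
import random
import statistics

random.seed(42)

def farm_output() -> float:
    # One sample of a single farm's output, wildly variable (0-100%).
    return random.uniform(0, 100)

def grid_output(n_farms: int) -> float:
    # Pooled output of n independent farms, as a grid-wide average.
    return statistics.mean(farm_output() for _ in range(n_farms))

single = [farm_output() for _ in range(1000)]
pooled = [grid_output(100) for _ in range(1000)]

# With 100 independent farms, the swings are roughly sqrt(100) = 10x smaller.
print(statistics.stdev(single), statistics.stdev(pooled))
```

Real farms a few miles apart see the same weather, so you only get this benefit once the grid spans distances (hundreds of miles) over which wind patterns decorrelate, which is why the 500-700 mile figure above matters.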
Also, there are uses for the overproduction and off-peak wind: electrolysis... Convert water to H2 to feed Fischer-Tropsch processors to make petrol. The H2 production can be scaled up and down in fractions of a second, and in a few hours can make enough H2 to run a synthesizer for a day. H2 production is independent and can be used to balance the grid. It can also be run the other way to generate heat to run a turbine at peak load times. The synthesizer runs at a predictable pace and uses what H2 it has (plus coal, CO2 waste, or natural gas) to make petrol, greases, oils, and other high alcohols. There's great research out there on this. Fuel manufacture and off-peak wind are symbiotic industries.
Apps in the store CAN download content, but NOT executable code from 3rd party servers.
Going to a page and running JavaScript there is a cached (and sandboxed) activity inside Safari. This browser app can permanently download (and run when offline) code inside of a 3rd party app, which may be capable of allowing that app to bypass other Apple rules about modifying an app outside of Apple's approval process, enabling unsupported features, or enabling that code to interfere with other apps, files, or device security.
In simple terms, this app is capable of modifying its own code and capabilities post-download, without Apple's oversight. 3rd party content is OK, but not 3rd party code. Want 3rd party code? It has to be run inside Safari, where Apple can control what it does.
F* the carriers, because this rule has more to do with them than Apple. This is about letting unapproved 3rd party executable code be used inside a 3rd party app. It's a security risk and a policy violation.
The only reason Google gets away with this is simple. Apple initially conquered the market, but through exclusivity deals. The OTHER carriers wanted next gen phones too, and when Pre dropped the ball, WP7 got ridiculously delayed, and RIM failed to even do anything, they turned to Google. Since Google was the only option, Google got to set the rules, or those carriers would all have lost to those carrying iPhones. Once it was popular, they could not go back on that deal, and even AT&T later accepted it so as not to lose out on the Android business (30% of users is a big number).
get it right
The VLC group itself ordered it pulled. Not because Apple does not support the GPL, but because the GPL does not support ANY form of "app store" model. Apple has nothing to do with it. A VLC app on Android could equally be ordered pulled. The remote is fine; the VLC player is not.
device vs apps
The iPad does support 1080p, and when mirroring, the image is unscaled at 1080 vertical lines (just in 4:3 format). It is the video API that is currently limited to 720p. Why, you ask? Because big content asked for it to be, until such a time as Apple actually licenses and distributes 1080p content, or until Netflix or another major online provider does. With no native (legal) 1080p content, enabling 1080p output is basically an admission by Apple that they know you're using pirated wares (since we all know the count of people with 1080p home video who still can't get it onto their TV is very small).
current version support
The A5 does do 1080p; the SOFTWARE ON IT does not yet. This is a limit imposed by licensing (mostly), since neither Apple nor any streaming video app for the platform yet supports 1080p, so why should the video player API... Games coming out will use 1080p, just not the video app, not until Apple, Netflix, or some other big player has an iPad app and online 1080p streaming of legally licensed MPAA content. Anything less is an admission by Apple that they know you have pirated wares and they're OK with that. As a licensee themselves (and barely holding onto that status at this price point), they're puppets of the MPAA. Put your blame where it should lie.
1st vs 3rd party
Apple has numerous 3rd party accessory partners. They ALWAYS price their first party parts high, to give the 3rd parties room to undercut them. If they put the price too low, there would not be a secondary market, and then the argument that Apple is the only one making all this proprietary crap would actually be true.
There are already $7 HDMI adapters available for the iPad 2 and iPhone 4... There will be $14 magnetic lids soon too, I'm sure (fewer people saw that coming and are a bit slow to react...).
Keep in mind, Apple has to develop the part internally. Then they license the API out and 3rd parties make the stuff. Apple doesn't WANT to make this stuff; they want the 3rd parties doing it, but they can't very well have a 3rd party on the inside during development, as that leads to leaks, and they have to have an adapter out on day 1 somehow, and that means 1st party. Little gets the 3rd party guys jumping faster than releasing a proprietary part at a high price. They go "shit, I could make that for $5, and if I do, and I'm first to sell it, I could sell it for $30 vs their $40 and make a killing," and then 15 guys all release parts a few weeks later and drive the price down to about $15, and some knockoffs sell through Monoprice for $6.
Is it really hard to see the logic in this model?
Sony, especially the Bravia series, have had numerous issues and support cases related to a poor implementation of HDCP and some HDMI handshake protocols in their TVs. Often, when there's a new HDMI anything, Sony systems come up first in the complaints. They have layered a proprietary Sony-to-Sony control protocol on top of HDMI, allowing one device to control another, but without using the actual HDMI specs for such a thing, and this causes major issues with any device supporting more than just the basic video feed protocols.
a) They don't include it because that would eliminate many of their 3rd party accessory makers.
b) If no one made it, it really would be a proprietary PoS connector.
c) It's not included because very few people use it.
Apple HAS included a number of optional connectors in the past, most notably mini-DVI connectors. They ALWAYS price up new connectors, to drive others to release that part, and hopefully for those makers to convince OTHER companies to ALSO use those parts, making the connector more universal. They did that successfully with both USB and FireWire, and now DisplayPort is getting a lot of draw, and I expect Thunderbolt next. The 30-pin connector was kind of a solution to a problem at the time, but now so many things use it, they don't want to break compatibility and abandon it, which I personally appreciate. If an iPhone 5 or a new iPod doesn't work in my shop radio I'll be VERY upset...