123 posts • joined 28 Sep 2009
Re: It doesn't matter how good the display is if there's nothing to display
The BBC some time ago (at least) transmitted BBC 1 HD with an output that swapped between interlaced and progressive at GOP boundaries, depending on which gave better compression.
This has now been rolled out to all services on the PSB3 Freeview HD multiplex (BBC One/Two/Three HD, ITV HD, 4hd) and I believe it is also used on COM7 (CBeebies/BBC Four, Channel 4+1, 4seven, Al Jazeera HD).
I don't think it is used on satellite - changing the interlacing mode on a GOP basis was not part of the Freesat or Sky specifications. Doing this caused a problem on early Freeview HD units, and in some cases on TVs using external Freeview HD boxes (it depended on whether the box passed the 1080p25 GOPs through or converted them to 1080i50). There tended to be brief switches to black and audio glitches on mode switches - annoying but bearable on programme transitions, not acceptable when it could switch more than once per second (a GOP is usually shorter than 25 frames).
UPCs are incredibly cheap
Membership of your national GS1 subsidiary costs a couple of hundred to a couple of thousand dollars depending on your company turnover. GS1 UK charge a £107 joining fee and £117 annual membership if your turnover is under £500k, which entitles you to codes for 1,000 distinct products. There is no per-product fee. You just have to include the barcode in the label you were going to print anyway. It literally costs nothing beyond ensuring that the printed label is in spec.
For turnover of £1bn or higher, the joining fee is £327 and annual fee is £2,602, which gets you a prefix valid for 100,000 product codes.
A Global Trade Item Number (UPC is a subset) describes one product. Not a family. In the milk example, skimmed milk will have a different code from semi-skimmed. A 2pt container will have a different code from 1pt. Organic gets a different code from regular, and from the value range. Order the same code and you'll get the same product back.
RFID tags contain the GTIN as one of the data components, so you don't make any saving compared to a paper barcode - you still have to be a member of GS1 if you want to sell your products at any retailer. If you just want to sell your products in-house, there's a range of GTIN codes reserved for private use.
If you want fewer than 1,000 codes, you can go to a reseller who will register your product under one of their prefixes. They can be a lot more expensive per code. You still only pay once to register the product, every use of that code is free.
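As an aside, the one bit of arithmetic in a GTIN is the final check digit, computed from the other digits with alternating weights of 1 and 3. A minimal Python sketch (the sample number is just a commonly used illustrative EAN-13, not a real product registration):

```python
def gtin13_check_digit(first12: str) -> int:
    """Compute the GTIN-13/EAN-13 check digit from the first 12 digits.
    Digits in odd positions (1st, 3rd, ...) get weight 1, even positions weight 3;
    the check digit brings the weighted sum up to a multiple of 10."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

# Illustrative EAN-13: 4006381333931 - the final '1' is the check digit
assert gtin13_check_digit("400638133393") == 1
```

The scanner recomputes this on every read, which is how misprinted or damaged barcodes get rejected rather than ringing up the wrong product.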
11 SP1 by another name
The problem here is that Microsoft refused to call their April update - corresponding to Windows 8.1 Update 1 - by a new name. So they have to go around calling it 'with the 2929437 update installed'.
If they had actually called it by its true name - Service Pack 1 - it would be clear that they are breaking their own Service Pack and Security Update policies (and the same goes for 8.1 Service Pack 1). The Service Pack Policy says that they will support service packs for Windows (and Windows components such as IE) for 24 months after the release of the following service pack. The Security Update Policy says:
"Microsoft will provide security update support for a minimum of 10 years (through the Extended Support phase) for Business, Developer and Desktop Operating System products. The security updates will apply only to the supported service pack level for these products.
"Both the Mainstream Support and the Extended Support phases require that the product’s supported service pack level be installed to continue to receive and install security updates.
"Security updates will be available from Windows Update during the Mainstream Support phase, and the Extended Support phase (if available)."
Since there is officially no service pack for Windows 8.1 or for IE 11, security updates should be on Windows Update for the original release, regardless of whether another update has already been installed. Alternatively, if we count Update 1/2929437 as being Service Pack 1, they have still withdrawn support for the original release nearly two years before they should have.
Memory inside microcontrollers?
Current generation microcontrollers have far more memory than needed to contain the very simple program for a keyboard. You could program the keyboard controller with a document, then use some special switch or key sequence to have it type out that document on demand.
We know that "security" services sometimes engage in physical hacks, breaking in at night and replacing the keyboard with one programmed to record your keystrokes. Later they can break in again and collect the recorded data from the logging keyboard. It's not a stretch to think that journalists could use a similar approach to hide copies of documents - or at least that the goons would think that.
I doubt there's a microcontroller in the power supply unit, though!
Too many devices
The dirty secret is that a base station (Node B, eNodeB) can only handle about 1,000 devices - or so I was told by a network engineer about a year ago. More than that, and the control channel is completely swamped with handshaking data.
Sector antennas are used to split up the area surrounding the mast into multiple cells - each antenna covering anywhere from 5° to 180° of the circle surrounding the station - with one base station handling each antenna. It's likely that only one or two base stations actually cover the stadium (obviously depending on the siting of the masts relative to the seating) meaning there are probably at least 10x as many devices in range as the station can actually handle.
Conversely, at the British Grand Prix a couple of years ago, I rented a Fanvision device. Antennas around the circuit broadcast on VHF frequencies, using the DVB-H system. You had a choice of three video feeds, two different commentary feeds, timing information (last and best lap for each driver plus sector times) and news headlines from Autosport. Wasn't cheap though, think it was around £100 rental for the weekend. Sadly F1 no longer allow Fanvision to operate at their events - presumably Bernie wanted too much money. They do still operate at Nascar events and the Indy 500.
LTE Broadcast is exactly what it says, however - broadcast. It would behave just like Fanvision. Specific video streams would be broadcast without reference to whether anyone was interested in receiving them. You might be able to get a Scores app that understood how to interpret score data being broadcast, but for Twitter and other websites/web services you'd still be stuck with the overloaded ordinary network. That general purpose network might even be worse if the carrier has reassigned a frequency from the main network to the Broadcast system, reducing the spectrum available to the regular unicast network.
"By 2018 he envisages data usage as twelve times that of 2012, and as Vodafone has recently said this is heavily driven by the adoption of 4G."
I don't agree. My view is that as users step up from feature phones and older 'smart'phones to current-generation smartphones, they go from hardly using data at all to using hundreds of MB per month. However, they pretty soon plateau at a level of mobile data usage that they're happy with. As the rate of adoption of smartphones starts to slow down - Ofcom's most recent Technology Tracker survey puts it at 65% of the population, up from 63% the previous quarter - I expect the rate of increase in mobile data to slow down too. Most projections uncritically take the initial exponential growth in data usage and extrapolate that exponential growth into the future.
It's true that browsing on phones and tablets is displacing browsing on home PCs. However, at least some of that browsing is happening on Wi-Fi - in the same survey, 73% said they browsed on their phones using their mobile data connection, 69% said they used home Wi-Fi and 32% used Wi-Fi elsewhere. In the previous quarter that was 72% on mobile (+1%), 64% on home Wi-Fi (+5%) and 30% on hotspots (+2%). The survey doesn't say what proportion of browsing was done on each, but I would expect that where home Wi-Fi is available it would be largely used in preference to mobile data.
"The WP team has released three updates since 2012"
Nope. The Windows Phone team has only worked on Windows Phone 8.1 since the release of Windows Phone 8.
All the updates to Windows Phone 8 have come out of the Windows Sustained Engineering team, which is part of Product Support Services. That's why they have only done work that OEMs have requested (plus fixes for egregious bugs) rather than advancing the product.
The reason for the long wait for WP8.1 is the heavy engineering to complete the port of Windows Runtime from Windows 8.x. I'm pleasantly surprised to see that they have actually implemented a pretty good slate of new features as well as that, I had feared that the updated developer platform would be all we'd get.
Why do I call it "heavy engineering"? Windows Runtime on Windows 8.x is implemented using Win32 GDI, USER, Direct2D, DirectWrite and numerous COM components. The dependency chain is a nightmare, with seemingly every component having cross-references into some other component, and breaking if those other components are missing. It's taken them years to decouple the lower layers to the extent that Windows Phone 8 is even possible, so that it's not expecting to find a full copy of GDI in there. (The MinWin project reportedly started in 2003.) Windows Phone has to fit into a relatively small footprint (my Lumia 820 has 8GB of storage and indicates that 1.8 GB is used by the system). Taking the Win32 dependencies as a whole is not an option - it's too big. So, to port it to Windows Phone, they've had to either follow the dependency chain and work out where to cut it, or reimplement the feature without taking the dependencies.
It won't be complete in Windows Phone 8.1 - there will still be APIs available on Windows 8.1 not in WP8.1, and possibly vice versa - but it will be possible to have common UI code, which *wasn't* possible in WP8/W8. I hope that after this, the Windows code will be changed to the WP8.1 version, and the Windows team will write any new APIs cleanly so that it can go into both products without causing huge lag. If the aim is to change the 'Windows RT' SKU to use the Windows Phone codebase rather than the Windows 8 codebase, this will have to happen.
Re: XP and Exchange 2013 - already stuffed.
Exchange Server 2013 is supported with Outlook 2013, 2010 SP1 (with an update) and 2007 SP3 (with an update). I'm not aware of a dependency on the client OS version. Outlook 2013 does require Windows 7 at minimum.
Source: http://social.technet.microsoft.com/wiki/contents/articles/845.outlook-versions-supported-by-exchange-200720102013online.aspx for Outlook versions supported.
If you were hosting your own Exchange Server, you'd need a 64-bit install of Windows 7 SP1, at minimum, to run the 2013 Management Tools remotely.
Usage stats, not purchase
The source for the data is the User-Agent string detected by the analytics scripts running in the web browser, for web sites using Net Applications for their analytics. It's only capable of detecting the currently-running operating system (assuming the browser isn't lying), it's not possible to tell that a given product key would be valid for a later version.
Re: Flashing news!
I suspect they're sniffing the browser agent string and sending different content to different browsers.
Indeed - and IE11 is no longer detected as IE-family; the sniffer doesn't know what the heck it is, so it gets dumped in 'must be some previously unknown variant of Netscape 2.0'. Which is exactly what ASP.NET's default browser caps do up to .NET 4.5. There are hotfixes available for .NET 4.0 and 2.0-3.5, but I'm not entirely sure whether they just fixed the detection files, removed the detection feature, or defaulted to assuming max capabilities rather than fewest.
IE11's User-Agent string is substantially changed from older versions of IE, *because* it is a much more compliant browser and newer websites were sending incompatible, or fallback, content. The change makes it look a lot like Chrome, which means most sites will send it their latest content version, which should be mostly compatible in IE11.
Microsoft Dynamics will have to come up with an update that actually detects IE11 as a 'capable' browser before it will work without selecting Compatibility Mode.
The Compatibility Mode button - which is only available on the desktop browser, it's not in the 'Metro' version - tells IE to send an IE7 User-Agent string to the server (nearly - it sends ';Trident/7.0' in the string as a tell-tale that this is really IE11, not 7). The browser then defaults to its IE7 rendering mode, unless the site sends an X-UA-Compatible HTTP header (or META tag) telling it to use a newer mode.
If IE decides that the server you're connecting to is on your Intranet, it will use the Intranet Zone settings. The default for the Intranet Zone is to always pretend to be IE7. This can of course cause problems for applications developed for IE8 and up. The Intranet Zone is, by default, only enabled for domain-joined computers, and the default detection rule is basically 'if the hostname in the URL doesn't contain any dots, it's Intranet'. The Intranet Zone rules are configured on the Security tab of Internet Option - click the Local Intranet icon, then click Sites to set up the rules for what is considered Intranet. To disable compatibility for intranet sites, press Alt+T to get the old Tools menu, and select Compatibility View Settings. Then uncheck "Display intranet sites in Compatibility View". These settings can be set through Group Policy.
Microsoft's Compatibility View List also gives them the ability to send a custom User-Agent for specific domains. This is what went wrong with IE11 against Google's websites when it was first released: Google's code didn't work with IE11 originally, so Microsoft added their domains to the compat view list indicating IE10 (but using the Trident/7.0 token rather than the Trident/6.0 that IE10 would send). Then, just around the time that IE11 was released, Google fixed their code to work with IE11's real User-Agent string and with IE10's real User-Agent string - but it broke when IE11 sent its pseudo-IE10 string. MS then took Google domains out of the Compatibility View list, but it takes a little while for the browser to download a new list.
As I recall, there was never a public version of 64-bit Windows (beta or Gold) for Alpha. NT 4.0 supported Alpha, using the 32-bit instruction set, and Windows 2000 supported it right up to release candidate 1. Then Compaq pulled the plug on support. MS press release: http://web.archive.org/web/19991012214337/http://microsoft.com/NTServer/nts/news/msnw/compaq.asp
Why did it matter for Compaq to support it? Windows on Alpha was never a retail product, only available with a new Alpha-based system (OEM product), and MS require the OEM to provide front-line support for OEM Windows. (I think they'd do better by standing behind their product, regardless of how acquired, but it's their decision, and a large part of why OEM Windows is substantially cheaper than Retail editions.)
I believe MS continued to work on 64-bit Windows using Alpha hardware until IA-64 hardware became available in moderate volume. WOW64's origins - of running 32-bit x86 Windows programs on 64-bit Alpha 'native' operating system - explain a lot of the oddities in the handling of 32-bit programs on x86-64, such as dual views of the registry, inability to load 32-bit code in a 64-bit process, completely separate 32- and 64-bit copies of most libraries, segregated Program Files folders, etc.
Re: IE 11 User-Agent string
Yes, it is:
Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko
And Google didn't work with that, so Microsoft set up the compatibility view list so that IE sent a very-nearly IE10 string to it:
Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.3; Trident/7.0)
The real IE10 sends:
Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)
At some point - presumably in the last week - Google changed their code. Now the real IE11 User-Agent string works, and the real IE10 User-Agent string works. The faux-IE10 string set by the Compatibility View list, however, doesn't. (I've just tested this out with IE10 on Windows 7, using Fiddler to change the requests before sending - sending 'Trident/7.0' causes it to break in exactly the way described.)
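You can see why both breakages happen with a naive sniffer that just matches on an 'MSIE n.n' token - a hypothetical sketch, not Google's actual code, but the classic pattern:

```python
import re

def naive_ie_version(ua: str):
    """Old-style sniffing: look for an 'MSIE n.n' token; None means 'not IE'."""
    m = re.search(r"MSIE (\d+)\.\d+", ua)
    return int(m.group(1)) if m else None

ie11   = "Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko"
faux10 = "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.3; Trident/7.0)"
real10 = "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)"

assert naive_ie_version(ie11) is None   # IE11 isn't recognised as IE at all
assert naive_ie_version(faux10) == 10   # the faux string sails through as 'IE10'...
assert naive_ie_version(real10) == 10   # ...indistinguishable from real IE10 unless
                                        # you also check for the Trident/7.0 token
```

So code that special-cases 'IE10' behaviour, but trips over something IE10-era never sent (like Trident/7.0), breaks on the faux string while working on both real browsers - exactly the failure mode above.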
So now, Microsoft have changed the Compatibility View list so that IE11 sends its native User-Agent string.
Microsoft are warning that they intend to remove the feature in future versions:
"Starting with IE11, document modes are deprecated and should no longer be used, except on a temporary basis. Make sure to update sites that rely on legacy features and document modes to reflect modern standards.
"If you must target a specific document mode so that your site functions while you rework it to support modern standards and features, be aware that you're using a transitional feature, one that may not be available in future versions."
Re: Doesn't necessarily mean it's Microsoft's fault
I believe it was a combination of the two:
- Google's code in March didn't detect the new IE User-Agent string properly...
- ...so Microsoft added google.com (etc) to the IE compatibility view list, telling IE11 to pretend to be IE10...
- ...then Google changed their code last week to work properly with IE11's correct User-Agent string, but break with the IE10 string (only when the IE11-specific 'Trident/7.0' appears, and therefore doesn't break in actual IE10)...
- ...now Microsoft have removed the CV-list entry so IE11 reports as itself
The current 'ttl' element in the CV-list is set to 1, presumably meaning cache it for one day before checking again.
Information on IE's User-Agent string and Compatibility View list can be found at http://blogs.msdn.com/b/ieinternals/archive/2013/09/21/internet-explorer-11-user-agent-string-ua-string-sniffing-compatibility-with-gecko-webkit.aspx
@ Tom 13
The problem is, or was, Google not sending standards-compliant code to IE11, when IE11 sends its latest User-Agent string. Therefore Microsoft added Google's domains to its Compatibility View list.
This list does not necessarily do the same as clicking the Compatibility View button. The button forces IE to emulate IE 7 (which is useless, in my opinion - it should emulate IE 6). The Compatibility View list can cause a custom User-Agent string to be selected for a given site, it can turn other features on or off such as back-forward caching, it also lists domains that are known to require ActiveX controls (and therefore have to load in the desktop browser rather than the 'immersive' mode), and which GPUs and drivers are known to have problems with hardware acceleration.
IE11's User-Agent string is deliberately very different from IE10's, in order to cause more sites to send it standards-compliant code rather than code designed for IE 6. Google's code must have been detecting it incorrectly. In the current version of the compatibility list that I just retrieved, the only feature disabled for Google's domains is the back-forward cache. It's also disabled for microsoft.com.
Switchover done, not tablets
In my view, the last decade's rise in TV sales was for two reasons:
1. Thin, light, and slightly less power-hungry flat-screen HD TVs (plasma, LCD, LED backlight) became sufficiently affordable to replace bulky, heavy, very power-hungry SD CRTs;
2. Practically the entire world went through a digital switchover.
Both of the above reasons fed on one another and led to a boom in TV sales.
Both reasons will have petered out. The digital switchover is complete in the major economies and well underway in the rest of the world, with deadlines in the next few years. Those people who were going to replace their TV with one that has an integrated digital tuner have done so; those who added an external box are no more likely to replace their TV than they were before switchover.
The trouble for TV manufacturers is that they really haven't come up with a new must-have beyond HD, which for many viewers is still a marginal benefit. People might be buying TVs with 3D, they might be 'Smart' TVs, they might even have 4K resolution, but in most cases that's simply because those features were bundled with a TV that had the desired size, picture and sound quality on normal 2D, 1080p and SD broadcasts.
Has anyone done an analysis of the relative sales of *real* PCs over the last five years? Those that are actually powerful enough to do more than web browsing on?
Asus and Acer were predominantly netbook manufacturers. I wouldn't be surprised to find that the market for netbooks running Windows has been essentially replaced by iOS, Android and (to an extent) Windows tablets. But the question is, is the *rest* of the PC market actually holding up beyond a brief fad for netbooks?
X-UA-Compatible not a long-term solution
Microsoft are planning to withdraw compatibility modes from IE in future versions:
"Starting with IE11 Preview, document modes are deprecated and should no longer be used, except on a temporary basis. Make sure to update sites that rely on legacy features and document modes to reflect modern standards.
"If you must target a specific document mode so that your site functions while you rework it to support modern standards and features, be aware that you're using a transitional feature, one that may not be available in future versions."
Re: Obsessed with consumers
iPhone and Android devices support (or at least support*ed*, in the case of Android) Microsoft's Exchange ActiveSync protocol, which does email 'push' over HTTP/S - so you can hook them straight up to an Exchange server (2003 SP2 or higher) or any other server that implements EAS, and you just need a normal data plan rather than Blackberry-specific plans.
(Technically, EAS Direct Push is actually client pull - the server just doesn't reply to the client's request until it has something to send or the connection is about to time out.)
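That 'direct push' pattern is just long polling over HTTP. A toy sketch of the server side - the names and the heartbeat value are illustrative, not the real EAS protocol:

```python
import queue

PING_TIMEOUT = 15 * 60  # seconds; EAS heartbeats are typically many minutes long

def handle_ping(pending: "queue.Queue") -> str:
    """Hold the client's request open until a change arrives or the heartbeat
    expires. Either way the client immediately re-issues the request - so it's
    really the client pulling, even though it feels like server push."""
    try:
        folder = pending.get(timeout=PING_TIMEOUT)
        return f"changes: {folder}"   # something arrived: tell the client to sync
    except queue.Empty:
        return "no changes"           # heartbeat expired: client just asks again

q = queue.Queue()
q.put("Inbox")                        # simulate new mail arriving
assert handle_ping(q) == "changes: Inbox"
```

The battery win is that the phone's radio and CPU can idle for the whole heartbeat interval instead of polling the server every few seconds.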
The integration between Exchange and the Blackberry Enterprise Server was always one of the big pain points, so I heard. Many Exchange admins would probably be glad to bury BES and BB in a ditch somewhere.
It appears that BB10 synchronizes email using the EAS protocol - this allows a BB10 device to be used for push email without BES. Not clear whether BES10 wraps its tentacles round the heart of the Exchange server when it is installed, though, or merely acts as a man-in-the-middle extending the server's responses.
Module certification, not product
All the FIPS 140-2 certification does is say that, if you use the crypto facilities in Windows (these modules are common across all implementations of the NT kernel), they will implement the approved algorithms properly, and not leak information outside the module to other parts of the application or to other applications. It's not a high bar.
This certification absolutely does not mean that data stored on the device is secure against external attacks.
Earlier versions of Windows and Windows Phone crypto modules were also certified - the Windows Phone 7 ones certified under Windows CE. I'm not sure what the threshold for needing a new certification is, but all that's happened here is that NIST's wheels have finished turning and the new certifications for Windows 8 have been signed off - just in time to start the process all over again for Windows 8.1. If, that is, whatever changed in 8.1 requires a new round of certification rather than just adding the approval for the new version.
Re: Possibly stupid question
Yes. It does mean that. Not because of frequencies, but because the limited spectrum available after release of '700 MHz' won't be enough to reconstruct the services we have now, at their current coverage levels, unless the newer DVB-T2 standard and AVC/H.264 compression are used for many more of the services.
If you have Freeview HD equipment, you'll be fine. Any non-HD gear quite possibly won't work, or won't receive all SD services, after this band is released. You should check that any new equipment has the 'Freeview HD' logo (YouView equipment is fully DVB-T2/AVC compatible, but doesn't fully implement IPTV in the same way as the Freeview HD logo now requires, so can't have the logo).
If the decision to switch to DVB-T2/H.264 isn't taken, then it should just be a case of retuning the box. It'll still scan channels 49 to 69, it just won't find anything up there.
Personally, I don't think the case has been made for release of this band. It seems to suffer from circular reasoning: the predicted demand for bandwidth comes from demand for linear TV on mobile devices, so we have to turn off the current linear TV broadcast in order to make space to send it over an inferior protocol?
Re: It's not about addressable memory
*Microsoft* implemented PAE just fine, from Windows 2000 onwards. Manufacturers of commodity hardware didn't - most hardware and drivers in the PC world could not handle being presented with 64-bit physical addresses. So when introducing Execute Disable/No Execute in Windows XP SP2 - which requires PAE to be turned on, on x86 processors - Microsoft deliberately capped the physical address space at 4 GB for compatibility with the cheap hardware and bad drivers.
Server editions of Windows on 32-bit processors, both before and after XP SP2 / Windows Server 2003 SP1, can access however much RAM is fitted, up to whatever the limit is for that edition of Windows. Windows Server 2003 and 2008 Standard Editions are also limited to 4 GB, but for market segmentation reasons, not technical ones (i.e. want access to more than 4 GB of RAM? Pay more).
Re: Pesky paper trails
There is no proof that what the computer has recorded internally is the same as what the voter selected, and what is printed on the audit trail. That fundamental lack of ability to see how the machine is operating means that it cannot ever be trustworthy.
Re: Modern MIPS isn't as RISC as it used to be...
ARM doesn't have a delay slot, but if you perform any computations using the Program Counter register (e.g. retrieving literal pool data - immediate data that's too big/complex to go in the immediate part of a MOV instruction) you find that it's actually pointing two instructions (8 bytes) beyond the instruction that does the computation. That's a bit mind-bending for anyone who grew up on a CISC processor.
Refarming permission already granted
Ofcom granted permission for O2, Vodafone and EE to refarm their 2G spectrum for first 3G and then 4G services. http://www.telecoms.com/161582/ofcom-approves-2g-and-3g-spectrum-refarming/
Right now, you can't make phone calls on LTE - we're still waiting for the networks to implement Voice-over-LTE. The phone falls back to 2G or 3G to make phone calls. That means 2G can't be completely switched off yet, as even 3G coverage isn't up to it.
Frankly, I think the telcos should be required to sort out their coverage, deploy VoLTE, and shut down 2G before they get any more spectrum. We're more than 13 years on from the 3G auction and coverage is still pretty atrocious. I particularly object to the idea that broadcast TV would have to go through yet another technology upgrade in order to keep its current coverage level and range of content, to make space for telcos to continue to run three generations of incompatible networks, the oldest of which was obsolete more than 10 years ago. A sunset date for 2G would *make* the telcos improve 3G coverage.
Stuck on WP 7.x
I suspect the OEMs weren't willing to put in the effort - if it was even possible - to bring up WP8 on their old hardware.
Windows CE does not have a standard boot loader. It is up to the OEM to write their own boot loader, which calls directly into the OS 'Startup' function once it has located the image to run. The kernel is mostly supplied as shared source code and Platform Builder will build your code and link it to produce an image. See http://msdn.microsoft.com/en-us/library/aa446905.aspx for details (that's CE 5.0 rather than 6.0 but it's much the same on 6.0 and later versions). Dealing with interrupts, timers, power management and other basic hardware resources is a job for the OEM Adaptation Layer, often written by the processor manufacturer (as a Board Support Package) but the OEM can customize it. http://msdn.microsoft.com/en-us/library/ee479387(v=winembedded.60).aspx
The Windows 8 kernel expects to run in a PC-like environment. For ARM devices, it uses UEFI to boot and ACPI to describe the system hardware in a way that Windows can use to configure itself to the system. I can't find anything explicitly saying that this is how Windows Phone 8 does it, but the intro for Windows RT is here: http://blogs.msdn.com/b/b8/archive/2012/02/09/building-windows-for-the-arm-processor-architecture.aspx .
I can easily imagine that the UEFI and ACPI implementation is larger than the space available for the CE boot loader. It's likely that the various hardware in the device doesn't conform to the Windows-on-ARM models that would allow generic function drivers supplied by MS to be used, meaning that the OEM would have to write new drivers (the driver models are completely incompatible). It's a vast amount of effort that would mostly be wasted if new devices conformed to the Windows-on-ARM hardware model, and probably running a huge risk of bricking the old phones even if it could be achieved.
I can't see this happening again: I think it is very unlikely that any technical changes will now obsolete Windows Phone 8 hardware. The kernel is the same as on the desktop, the server and on Windows RT devices, and it boots and talks to hardware in the same way. Microsoft don't have a third kernel stream to use (excepting research projects like Singularity, or the .NET Micro Framework which is smaller still than CE). There's a much clearer and cleaner demarcation between MS-supplied code and OEM-supplied. The runtime is the same as the full .NET Framework, with the server pieces removed but otherwise the same.
The main programming difference between Windows Store for desktop/tablet and Windows Phone apps is that Windows Phone Runtime (WinPRT) still wraps up the Silverlight/WP7 UI controls, rather than using the UI controls developed for Windows Runtime. Windows Phone 8.1 'Blue' is basically held up waiting for that. My suspicion is that the Windows Phone 8 SDK was so late because they were trying to get it done for WP8, but couldn't make it work in the space/speed/time available and cut it at the last minute. There's not much point investing heavily in the apps with a shifting base underneath - or maybe all the changes to the apps were already done for proper-WinRT-on-WP and thus can't readily be back-ported to the old UI components?
Non-DRM music stores
iTunes, Amazon, Play.com - that's three to be going on with
Amazon's only 'DRM' is that the MP3 file is watermarked in a way that can tie it back to your account. See http://www.amazon.com/gp/help/customer/display.html/ref=dm_adp_uits?ie=UTF8&nodeId=200422000 (some files don't even have this stamping). I think being able to trace where an unauthorized copy came from is a reasonable step.
The files themselves should play on any conforming MP3 player, so the usual complaints about having to repurchase, etc, simply don't apply.
The movie and TV industries really need to get a clue and follow suit. They're still stuck where the record industry were five years ago. The other thing the movie and TV industries really need to do is stop making exclusives and openly distribute all content through all stores: I'm unsure of which subscription service to join, because I have no guarantee that the content I might want to watch will be available through my choice. They don't have long-term exclusive deals for DVDs, why is downloading or streaming any different?
Time slicing is no problem at all if the majority of the threads on your system are blocked, waiting for something to happen (e.g. user input, a network request to complete). The battery killers are the apps that poll to find out if something's happened, rather than subscribing to an event that tells them something has happened. It's down to the OS to provide such a notification system, and for developers to use it rather than polling (the OS typically has to provide a way for the app to find out information when it starts up, or to make decisions in response to another notification - it can't *just* have events).
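The difference between the two approaches can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names and the queue are invented, not any real phone OS API): a polling thread keeps waking the CPU to check for work, while a subscribing thread blocks inside the OS and costs nothing until the event fires.

```python
import threading
import time

queue = []
arrived = threading.Event()

# Polling: the battery killer. The CPU wakes 100 times a second to check,
# even when nothing has happened.
def poll_for_message():
    while not queue:
        time.sleep(0.01)    # busy-ish wait; the CPU can never fully idle
    return queue.pop(0)

# Subscribing: the thread blocks inside the OS until the event is signalled,
# consuming no CPU time at all while it waits.
def wait_for_message():
    arrived.wait()          # OS keeps the thread suspended until set()
    return queue.pop(0)

def producer():
    time.sleep(0.1)
    queue.append("notification")
    arrived.set()           # wakes exactly the threads that subscribed

threading.Thread(target=producer).start()
print(wait_for_message())   # blocks at ~0% CPU, then prints "notification"
```

The same trade-off applies whatever the platform: the OS has to supply the equivalent of `Event.set()`, and developers have to use it instead of the loop at the top.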
Also, apps should not waste CPU time (hence power) calculating things that the user cannot currently see. On iOS, Windows Phone, and Windows Runtime ('Metro') apps on Windows 8, if an app is not in the foreground, all its threads are suspended. You have to specifically register distinct code to be able to run in the background. The OS only gives these background tasks a limited amount of time to run before killing them, to prevent runaway code killing the battery. Audio players, turn-by-turn navigation or location-tracking apps need to register so they aren't suspended, and Apple and Microsoft check, when verifying apps for store inclusion, that these permissions/capabilities aren't requested by apps that shouldn't have them.
Android allows apps to create as many threads as they like, and doesn't suspend background apps. You might 'need' more cores on Android simply because background apps are unnecessarily wasting CPU time (and battery power).
The rule of thumb is that you need more cores if you constantly see more than 90% usage across all the cores. It's very unlikely that a single active app plus the OS rendering, and a few background tasks (that are throttled anyway) can actually saturate that many cores. Windows Phone and iOS devices top out at dual-core.
Cars in need of extra sponsorship often perform significantly better in pre-season testing than in race trim. In pre-season, they don't need to pass scrutineering, so can be below minimum weight, and don't have to provide a fuel sample after qualifying, so can run on just barely enough fumes to get round a lap.
The licence fee funds the BBC
I'm sorry, but you're wrong: ITV does not receive any part of the licence fee. It should not: it is an entirely commercial organisation.
I am trying to find a documentary source for you, but I'm struggling. The law (Communications Act 2003 section 365) requires the BBC to collect the licence fee, but to pay all money collected (less any refunds to be paid) into the government's main bank account, known as the Consolidated Fund. The government then decide how to allocate whatever they receive.
The Consolidated Fund accounts for 2012-13 show "BBC Licence Fee Revenue" as £3,122m. The BBC's Annual Report shows £3,091.7m income plus £16.8m 'premium' from the quarterly payment scheme.
The government, from 2008 to 2012, did top-slice the licence fee to fund Digital UK and the Switchover Help Scheme. Since the BBC was a large shareholder in Digital UK, its accounts were consolidated in the BBC's accounts, and SHS was also arranged under the BBC. Now that switchover is complete, that money is heading to the government's Broadband Delivery UK scheme.
The problem with real-world currencies is not governments
It's banks. Banks create money when they create loans, and they now create the vast majority of money (estimates range from 95% to 98%). Unless you are prepared to have 100% reserve banking, preventing banks from creating money, you cannot base an economy on Bitcoin without completely debasing the currency. There are many sources for this; here's one: http://www.webofdebt.com/articles/dollar-deception.php
The fact is, fixed currency standards don't work; they cannot scale to the level of growth in the economy without increasingly mining an ever greater amount of whatever commodity you pegged it to, and having to store it. Gold was useful here because it doesn't obviously degrade, and has few uses (its use in electronics, of providing a tarnish-free and reasonably conductive coating to ensure good contact for connections made and broken repeatedly, was decades away when we went off the gold standard). Failing to keep up with economic growth causes deflation, which is generally considered a bad thing: http://krugman.blogs.nytimes.com/2010/08/02/why-is-deflation-bad/
Fiat currency allows the supply of money to approximately match the aggregate demand of the economy, without uselessly mining a resource that you're not going to use. Central banks can wield a few levers to try to keep the supply slightly ahead of demand, in order to get a little inflation, which helps devalue debts as well as savings. The problem we've had for a decade or so is that the economy is very imbalanced, with consumer electronics largely in deflation, cancelling out some very high inflation in house prices (not measured in the favoured 'consumer price inflation' metric) and other commodities.
Really, money is just a medium of exchange: something that has wide acceptance in exchange for other things. It's just our IOUs to each other: I owe you a day of software development, you owe me an Xbox. By assigning numbers to these IOUs, I can transfer your IOUs so that Samsung owe me a TV. You have to think of money's value being in terms of what it can buy. Instead of thinking that a sandwich costs £2.50, you say that a pound is worth 4/10ths of a sandwich.
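The IOU view above can be made concrete with a toy ledger (all names and numbers invented for illustration): paying someone is just transferring numbered IOUs between accounts, and a price is just an exchange ratio.

```python
# Toy model of money as transferable IOUs. A negative balance means you
# owe work/goods to the rest of the system; a positive one means it owes you.
ledger = {"me": 0, "you": 0, "samsung": 0}

def pay(payer, payee, amount):
    """Transfer IOUs: the payer's claim on everyone else shrinks, the payee's grows."""
    ledger[payer] -= amount
    ledger[payee] += amount

pay("you", "me", 500)        # you owe me a day of software development
pay("me", "samsung", 500)    # I pass your IOUs on in exchange for a TV

# Prices are just exchange ratios: if a sandwich costs £2.50,
# then a pound is worth 0.4 sandwiches.
print(1 / 2.50)              # 0.4
```

Total IOUs in the ledger always sum to zero; only the distribution of claims moves around, which is the sense in which money is a medium of exchange rather than a thing of value in itself.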
The major problem for government is that politicians do not understand how money is created, and how differently it behaves under a fiat currency system compared to a pegged system ('gold standard'). Too many economists - and, unfortunately, the ones that the politicians are listening to - still make their predictions based on ideas from the gold standard era - that there is a finite amount of money.
Re: What doesn't help with Adobe..
Apparently there is a way to bundle third-party applications in a way that WSUS can consume: see http://wsuspackagepublisher.codeplex.com/
Microsoft have not bothered to make it possible to update third-party applications through Windows Update because the vendors all want to have control over the updating experience, and won't produce proper MSI installers that actually use Windows Installer properly (rather than just wrapping a script, for example). Windows Update does support driver updates, but when did you last see a timely update for your graphics card on WU? Never, because nVidia and ATI insist on shovelling additional control panels and other shovelware along with the driver, and don't package the install properly.
Adobe get kickbacks from Intel for bundling McAfee AntiVirus with Flash, Oracle get kickbacks from Ask for bundling their toolbar. I'm sure one of them tries to bundle Chrome as well. If Ninite are allowed to install without offering the prompt, Adobe and Oracle don't get their kickbacks.
Re: Thought it said Free Software Foundation on the door
No, it's simpler than that. Any DRM system *must* have the decryption key on the user's system as well as the encrypted content. The only way that the key can be protected is by some form of obfuscation. Even if protected by other system or application keys, the application has to be able to unbundle the key.
Open source software can never be certified for implementing a DRM system, because there is no way to hide the system for hiding the key, without massively obfuscating the code for doing so - something that would simply not get checked into the system. There would somewhere have to be a binary blob implementing the DRM, but that is not compatible with the GPL. It is compatible with *other* open source licences, but the FSF's purpose is to promote GPL.
So we have an impasse. Hollywood won't release its content officially without DRM, but GPL software cannot implement DRM, and it offends the sensibilities of other contributors to the W3C.
Re: RE: thus proving taxation systems are broken
Don't see why corporate taxes should not be assessed on revenues rather than profits. My income tax is assessed on, well, my income, less a personal allowance. In fact my personal allowance is slightly *reduced* because my employer pays for private health insurance - which I don't expect to use, but haven't opted out of.
I'd have no problem with allowing a 'corporate allowance' of something like number of employees registered in PAYE, multiplied by some reasonable wage level, to ensure that the company can always pay its employees.
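As a sketch of how that 'corporate allowance' might be assessed (the rate and wage level here are invented purely for illustration, not a proposal):

```python
# Hypothetical revenue-based corporation tax with a per-employee allowance,
# mirroring how personal income tax works against a personal allowance.
def revenue_tax(revenue, paye_employees,
                allowance_per_employee=30_000,   # assumed 'reasonable wage'
                rate=0.05):                      # assumed revenue-tax rate
    taxable = max(0, revenue - paye_employees * allowance_per_employee)
    return taxable * rate

# A company turning over £10m with 100 PAYE-registered employees:
# allowance = £3m, taxable revenue = £7m, tax due = £350,000.
print(revenue_tax(10_000_000, 100))
```

A company whose turnover barely covers its payroll would pay little or nothing, which is the point of the allowance.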
Re: thus proving taxation systems are broken
The source of these problems is actually very simple: countries have agreed to write their tax systems so that multinational companies are not taxed twice on the same profit - called 'double taxation'. This is supposed to be fairer to the company.
The problem that occurs is small jurisdictions that don't need a lot of revenue - in absolute terms - set their tax rates very low - in percentage terms. (Or places that set corporate taxes to zero for overseas corporations and raise all their revenue from residents.) Through assigning some income to such tax havens, or exaggerating the costs of some required resource, whose supply is routed through the tax haven, the corporation can reduce their tax bill in the high-tax countries that are actually providing the revenue. This is referred to as 'double-non-taxation'.
Google, I believe, has assigned the copyright to its logos to a subsidiary in a tax haven, then that subsidiary charges a ridiculously large amount to each national subsidiary for use of those logos. Starbucks did something similar, and also routed all buying of coffee beans via Switzerland, for which the Swiss subsidiary extracted very high management fees, so each national subsidiary is paying far more than open market price for coffee.
Amazon UK's servers are actually hosted in Luxembourg, and all purchases from amazon.co.uk are therefore reported as being made in Luxembourg, meaning they pay Luxembourg's very low rate of VAT rather than the UK's much higher rate. VAT-bearing goods were formerly routed via Guernsey - as in, shipped from a UK warehouse to a Guernsey subsidiary, and back to the customer in the UK - in order to avoid VAT, but HMRC have closed that one (Low Value Consignment Relief was a special feature for the Channel Islands, intended for small businesses actually based on the Islands selling small amounts of stuff to the UK, but it was abused, and so small Guernsey businesses don't get the relief any more.)
Microsoft have set up their patent licensing subsidiary Microsoft Open Technologies Inc in a tax haven, and Microsoft Corp will pay MOT Inc royalties for use of those patents. (You didn't think it was really about making the interoperability groups arms-length from Redmond, did you?)
The answer is also quite simple. Strike out the double taxation rules. All revenue raised in the country that the end customer lives in is taxed at the prevailing rate in that country. Multinationals are then playing by the same rules as corporations that do business solely in one jurisdiction.
However, that is considered bad for business, so the suggestion from the Tax Justice Network is to employ country-by-country reporting. That is, change the global accounting standards so that multinationals are forced to report accurately how much revenue was raised from each country. The group profits are then apportioned to each country according to the proportion of revenue, and tax assessed in each country according to the corresponding part of the profit.
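The apportionment mechanism above is simple arithmetic. A hypothetical worked example (all revenues, rates and the profit figure are invented):

```python
# Country-by-country apportionment: group profit is split across countries
# in proportion to revenue raised there, and each country taxes its share
# at its own prevailing rate.
def apportion_tax(revenue_by_country, group_profit, rates):
    total_revenue = sum(revenue_by_country.values())
    tax = {}
    for country, revenue in revenue_by_country.items():
        profit_share = group_profit * revenue / total_revenue
        tax[country] = profit_share * rates[country]
    return tax

revenue = {"UK": 600, "Ireland": 300, "Luxembourg": 100}     # £m, invented
rates = {"UK": 0.23, "Ireland": 0.125, "Luxembourg": 0.29}   # illustrative
print(apportion_tax(revenue, group_profit=200, rates=rates))
# UK raised 60% of revenue, so is apportioned £120m of profit,
# taxed at 23% -> £27.6m; routing sales via Luxembourg no longer helps.
```

Under this scheme, shifting the booking of a sale to a low-tax jurisdiction doesn't move the profit, because the apportionment follows where the end customers are.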
Re: "introduction of a new rendering engine can have significant implications for the web"
The problem is that standards are very difficult to specify precisely using English. The specifications for HTML 4, CSS level 1 and CSS level 2 have not changed in 15 years. They were sufficiently ambiguous that even though browser manufacturers were doing their best to test to the specifications - even Microsoft for IE 6.0 - there were incompatibilities between the results. There were no rigorous, shared, conformance tests for any of those until really the last couple of years, so the required behaviour was not nailed down - still isn't, really. Even different versions of WebKit - that is, current builds of Chrome and Safari - could, and do, produce different behaviour.
CSS level 2 was found to be so ambiguous, and have so many underspecified features, that it led to a revision 2.1 which nailed more stuff down and removed a lot of the underspecified stuff.
A lot of the effort in the HTML5 and HTML v.Next, and related, specifications has gone into nailing down precisely what was actually meant in earlier versions. There's now a serious effort to write shared conformance tests, and to actually run them automatically for each browser build, checking for regressions. IE has quite a lead in the official conformance tests, because Microsoft have been submitting the most tests to the suite - not without debate as to whether the test actually tests the behaviour it claims to test, and whether it comes up with the right answer.
Re: Internet Explorer 6 staggers on?
Microsoft's own upgrade-from-IE6 website http://www.ie6countdown.com/ (which uses statistics from http://netmarketshare.com/ ) indicates that the Far East is really the only outpost left where IE6 has significant usage share on the open web. Well, let's be honest: China. In the UK it's well below 1%.
NetMarketShare weight their statistics - gathered from tracking bugs on websites using HitsLink, I believe - by overall internet traffic from each country, to rebalance the distribution of users of their customers' websites. StatCounter do not do this. It does mean there could be big sampling errors if relatively few users from China are browsing sites that use HitsLink.
I'm still not sure how well these companies deal with Network Address Translation, having multiple computers behind a single public IP address. The Far East notoriously also has very few public IPv4 addresses, with NATs being widely deployed. If the counter cannot see through the NAT, it will record a count of 1 for each browser used behind the NAT regardless of whether there is one instance or a million, heavily distorting the results.
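The distortion is easy to demonstrate. Assuming (purely for illustration) that a counter identifies a 'visitor' by the (public IP, user-agent) pair it observes:

```python
# If the counter can't see through NAT, every machine behind one public IP
# running the same browser collapses into a single 'visitor'.
def count_visitors(hits):
    """Each hit is a (public_ip, user_agent) pair; distinct pairs count once."""
    return len(set(hits))

# Ten IE6 machines behind one office NAT, plus two distinct home users:
office = [("203.0.113.1", "MSIE 6.0")] * 10
homes = [("198.51.100.7", "Firefox"), ("198.51.100.8", "Chrome")]
print(count_visitors(office + homes))   # 3 -- twelve users counted as three
```

In a region where thousands of users share each public IPv4 address, browsers common behind NATs would be systematically undercounted relative to browsers used on directly-addressed machines.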
Re: TV is in the way
Apparently I can't subtract today. Three have 2x15 MHz at 2.1 GHz. They would still have to turn off some 3G to get some 4G in this band, if any phones even support LTE in this band.
TV is in the way
'Three' cannot launch their LTE service yet because they only got 2x5 MHz in the auction, and they got the lowest-frequency block, which will still be occupied by TV services in some parts of the country until the end of July. Their only other licensed spectrum is 2x5 MHz in the 2.1 GHz band (well, and 1x5 MHz intended for time-division duplexing, which has never been used). That spectrum is used for their UMTS (3G) services, and UMTS cells would, I think, have to be turned off to repurpose them for LTE.
"I like the free open standard better than the H.264, thanks."
@Mikel: You have that backwards. H.264 is the open standard, developed by the Moving Picture Experts Group under the joint auspices of ISO, IEC and ITU. H.264 is the ITU-T recommendation number - it is also known as MPEG-4 Part 10 Advanced Video Coding, and published as ISO/IEC 14496-10. In order to be published by these organizations, contributors have to sign up to the organizations' patent policy, which says that patents covering the specification must be available on fair, reasonable and non-discriminatory terms - though it does not define what those words actually mean. Due to the wide membership of MPEG and of the standards organizations, it should be less likely that someone who hasn't signed up to FRAND terms later claims that their patent is essential to implementation and holds implementers hostage.
US courts have prevented Qualcomm from blocking Broadcom's use of Qualcomm-patented technology in an implementation of H.264, because Qualcomm signed up to the patent policy.
MPEG LA's role is that some of those patent holders have employed MPEG LA to look after their interests, regarding patents considered essential to various MPEG standards. MPEG LA extracts an administration fee before divvying up the royalties among the various patent holders. MPEG LA would *like* to be a one-stop shop for licensing all patents essential to H.264 (and MPEG-2 Visual, and a number of others) but there is no compulsion for other patent holders to join. When they talked about 'forming a patent pool' they were inviting patent holders to make similar arrangements.
VP8's *reference implementation* is published under an open source licence. The *specification* is published on the WebM project's website, and Google provide a royalty-free licence to all patents that Google owns, or has obtained the authority to sub-licence. Google have recently agreed such authority with MPEG LA for some patents that are part of MPEG LA's other patent pools (and MPEG LA have agreed to stop trying to form a pool for VP8). However, *other* companies could still hold VP8 implementers hostage if they have patents essential to VP8 implementation.
We cannot know whether there are such patents. The national patent offices simply do not organize their patent databases in a way that you can properly search, and there is a disincentive to searching: in the USA, you can get triple damages awarded if you have 'wilfully' infringed, and wilful infringement has been decided if the implementer read the patent and decided that it didn't apply. The exact wording of the patent will only be interpreted in a court case, and courts have frequently applied the widest possible interpretation of the wording. For example, Toyota have to pay Paice Technologies royalties on the Prius and other hybrid cars, even though the patent in question specifically mentions how their implementation is different from the mechanical design used in the Prius, itself taken from an expired TRW patent; the claims were read so widely as to apply to any car that combines a petrol engine and an AC motor, AC provided by inversion from a battery.
However, we do know that Nokia believe they hold such patents, essential to implementing VP8, and therefore the IETF cannot publish the RFC as Nokia refuse to licence them.
I'm not defending patents as they currently stand. I think the issues we see largely represent a failure of imagination of the patent office staff, that they are granting the most obvious patents, combining known techniques in a not-particularly-novel way, and one that would be or was discovered totally independently, with no real exposure to the original implementation. The patent *system* makes it unbelievably difficult to actually find out if the problem you're facing *has* already been solved - if we could look up a solution and know it's going to cost us a dollar per device, rather than spending years on finding a solution, we might pay it. What's galling is when you do spend those years finding the solution, only to have someone say 'no, we invented that - pay $$$ per device' when they actually contributed *nothing* to your solution.
Re: heavily weighted towards Labour MPs
The faces presenting the policies may change with an election, but the people writing the policies don't. "Yes, Minister" is heavily fact-based: ministers 'go native' with alarming speed, though perhaps not surprisingly considering they usually have no knowledge or experience in the portfolio they have been assigned, and also no experience in managing staff.
Microsoft's Support Lifecycle policy for Windows is to support a service pack (or the original release if there has only been one service pack) for two years after the release of the following service pack. The actual end date is aligned to the next Patch Tuesday (second Tuesday of the month), which is 9 April. Future updates will only be installable on Windows 7 SP1 as a baseline.
All this means is that if you reinstall Windows 7 from a disc or image without SP1 applied, Windows Update will first offer all the security and critical updates from RTM to this month, then it will offer SP1, then any updates released after SP1.
Windows 7 *itself* is in mainstream support until 13 January 2015, and extended support until 14 January 2020. In the mainstream support period, you can call up for paid support, you can use any free incidents that you got when buying the product, you can get non-security hotfixes and if you really want to, you can make change requests. In extended support you still get paid support but the free incidents are no longer valid; you still get security hotfixes but other fixes require an extended support contract, which you have to take out within 90 days of the end of mainstream support; warranty claims and design change requests are no longer accepted.
There is no incentive if the technology is mandated
Firstly, there are six national multiplexes, not five. There are five SD multiplexes and one HD. At the moment. Ofcom are running a competitive process to launch two new ones.
The idea of the incentive pricing is to encourage the spectrum to be used efficiently. However, there is no point applying an additional tax if the broadcasters' hands are tied on becoming more 'efficient'. The spectrum plan and technology for Freeview was set in stone by government: the public service broadcasters had to achieve 98.5% population coverage, the BBC had to free up its second multiplex to convert it to HD mode, the majority of viewers had to be able to use existing aerials fitted for analogue reception, and we had to fit into the internationally-co-ordinated frequency plans. That really meant a requirement to use the 64QAM, FEC 2/3, 1/32 guard interval mode that the BBC and ITV/C4 are using. If they change that mode, to get more capacity and become more efficient, coverage will be reduced. The limits of what can be crammed into the 24 Mbps available have been pretty much reached, without reducing quality any further. There are already criticisms from many viewers that many channels are unacceptably low-quality, running 16:9 broadcasts at a resolution intended only for 4:3 pictures (544 x 576 pixels) and at a bitrate too low for the normal smoothing of macroblock edges to work properly.
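The 24 Mbps figure falls straight out of the DVB-T parameters the UK multiplexes use (8K carriers, 64QAM, FEC 2/3, 1/32 guard interval, in an 8 MHz channel), which shows why the capacity is fixed unless the mode - and hence the coverage - changes:

```python
# Net transport-stream bitrate of the UK DVB-T mode (64QAM, 2/3, 1/32 guard).
DATA_CARRIERS = 6048           # of the 6817 carriers in 8K mode, 6048 carry data
BITS_PER_CARRIER = 6           # 64QAM encodes 6 bits per carrier per symbol
SYMBOL_US = 896 * (1 + 1/32)   # 896 us useful symbol plus 1/32 guard = 924 us
FEC = 2/3                      # inner convolutional code rate
RS = 188/204                   # Reed-Solomon outer code overhead

bitrate = DATA_CARRIERS * BITS_PER_CARRIER / (SYMBOL_US * 1e-6) * FEC * RS
print(f"{bitrate/1e6:.2f} Mbps")   # ~24.13 Mbps for the whole multiplex
```

Every knob in that product trades capacity against robustness: a longer guard interval or a stronger FEC rate improves reception at the edge of coverage but cuts the 24 Mbps that all the channels on the multiplex have to share.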
The HD technology - DVB-T2 and AVC/H.264 encoding - can also be used for SD services, but are only viewable on Freeview HD receivers. The majority of viewers don't have one. The two new multiplexes - to run in this mode, and give four or five extra HD channels on each - are intended as an additional incentive for viewers to go and buy a new receiver. If a majority of viewers haven't done that, it won't be politically acceptable to turn off DVB-T/MPEG-2 support, and that will make the release of 700 MHz very difficult as there really isn't space for six national multiplexes in what remains. Viewers will be seriously angry if they lose services due to this - as it is, there are many people upset by the fact that they can't get three of those multiplexes if their local relay is PSB-only.
It won't be politically acceptable as people will expect the government to fund replacement equipment. For switchover, enough people had voluntarily switched that the government could get away with only subsidising equipment for pensioners over 75, the disabled, and other groups on long-term welfare. It was funded by increasing and top-slicing the TV licence fee, but only by a small amount as so few people were covered.
Meanwhile, the mobile phone networks are now running three generations of technology concurrently, with no end date for 2G announced or even considered. Phones still rely heavily on the 2G network for basic communications, as the promises of 3G coverage were broken and eventually the coverage requirements were removed. O2's block of 800 MHz spectrum comes with coverage obligations - 90% of the population, if I recall - but the rest of the recent 4G auction has no obligations attached at all. It's still unclear whether Voice-over-LTE even works, making voice services still dependent on 2G in much of the country.
Re: Whinging Cambridge
Cambridge's local TV service has had a frequency reserved for it which will not be available to white space devices. It is still considered a 'white space' because it isn't used to cover Cambridge from current TV services, but isn't available to run a full-power service as it would interfere. Cambridge is normally covered, for TV services, by the Sandy Heath transmitter in Bedfordshire: the local TV service will come from the Maddingley site formerly used by Channel 5, on UHF Channel 40. This frequency is, or soon will be, used by the Welwyn relay and three relays near High Wycombe, so is unavailable at Sandy Heath.
Regarding white space devices, a BBC/Arqiva joint report for Ofcom basically says that the TV spectrum is so densely used that only about a quarter of UK households could use a white space networking device. This will drop to only 3% if the 700 MHz band is reallocated to mobile phone networks and the TV spectrum is replanned, which Ofcom seem keen on doing in around 2018. See http://stakeholders.ofcom.org.uk/binaries/consultations/uhf-strategy/statement/BBC_Arqiva_preliminary.pdf for the report. I'm counting scenario 3 - where the 600 MHz band is used by two new TV multiplexes from 25 sites - as this is the model Ofcom subsequently chose from that consultation.
Re: Bands not used for existing broadcasts locally?
The BBC were required to get BBC Alba onto Freeview in Scotland, but weren't given any extra money to do so, nor allocated any more spectrum. That meant having to carry it on their SD multiplex, the second multiplex having switched to the incompatible second-generation DVB-T2 standard to make enough space for four or five HD services. (The BBC are required to carry STV HD and 4hd, and it was expected they would have to carry Channel 5 HD as well, until C5 pulled out yet again.)
So the choice was basically make picture quality terrible on all SD services while BBC Alba is running, or turn off the radio stations.
The local TV services have been granted a multiplex of their own, on frequencies that are generally close enough to the existing multiplexes that existing aerials should pick them up. The multiplex has space for the local TV service, and one or two extra slots that will be sold nationally by the multiplex operator Comux.
Re: Why no bigger cities?
Those cities were part of phase 1, a programme supplier has already been selected, and they are due to launch over the next year. This is phase 2.
Re: Yer not strange
You need to get a Windows Phone. Settings > Website preference > desktop version in WP8. That changes the User-Agent from:
Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; <manufacturer>; <model>)
to:
Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0; ARM; Touch; WPDesktop)
The only difference between that and a Windows RT tablet is the 'WPDesktop' token.
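A server wanting to tell these apart therefore has to sniff for the tokens quoted above. A minimal sketch (the function name and categories are mine; the user-agent strings are as quoted):

```python
import re

# Classify IE10-on-ARM user agents: the WPDesktop token is the only thing
# separating a Windows Phone in 'desktop version' mode from a Windows RT tablet.
def classify(user_agent):
    if "IEMobile" in user_agent:
        return "Windows Phone (mobile mode)"
    if "WPDesktop" in user_agent:
        return "Windows Phone (desktop mode)"
    if re.search(r"Windows NT 6\.2;.*ARM", user_agent):
        return "Windows RT"
    return "other"

wp_desktop = ("Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; "
              "Trident/6.0; ARM; Touch; WPDesktop)")
print(classify(wp_desktop))   # Windows Phone (desktop mode)
```

Note the order matters: the desktop-mode string no longer contains IEMobile, so the WPDesktop check has to come before any generic Windows NT 6.2 + ARM match.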
Re: hmmm...what's the real story here
Rubbish. The 4G auction was structured in the same way as the 3G one. It didn't raise as much money as we're in the depths of a recession (not triple-dip, in my book we haven't had enough sustained growth to ever have been considered out of it), rather than at the peak of a tech bubble and with ludicrous expectations of video calling.
The UK auction actually raised 33% less money than the Treasury had put into their books for this financial year, a whole £1.16bn short.