"the technology is not that close to productisation."
So it's basically vapourware, just like those batteries in a lab that re-charge in 30 seconds or whatever. Wake me up when they're shipping a product.
Nexus 7 (2013) with all of the latest Marshmallow updates:
~$ uname -a
Linux localhost 3.4.0-g1fc765b #1 SMP PREEMPT Wed Jun 8 18:49:02 UTC 2016 armv7l
So no worries there - for once being on an ancient kernel is a blessing!
Stacking more than 64 layers is proving troublesome due to alignment issues with the through-silicon vias (TSVs) - each layer in the stack has to be perfectly aligned with the layers above and below, and the more layers, the greater the difficulty of achieving the required TSV alignment. One solution is to combine two 64-layer stacks, creating a pseudo 128-layer stack - this should be relatively easy to achieve and increase yields.
Unless ARM need investment I don't see what Softbank is bringing to the table. The risk is that when one of the many other Softbank business units is haemorrhaging cash and dragging down the bottom line, ARM could be sacrificed.
ARM losing its independence sounds like a very bad trade. I wouldn't be surprised if Apple end up as owners of ARM (again) longer term, and that will be very bad news for everyone. Seriously, seriously bad.
One can't help but wonder when we'll see a monitor with a built-in Pi3 board.
It would be a piece of cake if they added the SODIMM socket for the Compute Module (currently using the RPi1 SoC, but pin-compatible and so upgradeable when the CM3 is launched). Monitors already have pretty much all the IO ports (USB, audio) required for the Compute Module to become a full computer, so the additional cost would be pennies - just an RJ45 port for wired Ethernet and the SODIMM socket itself. The CM could even be optional, as it's as easy to install as a stick of RAM.
Connect a keyboard, mouse, network (a $2 USB dongle would enable WiFi) to the monitor, select the Compute Module as an AV Input and crack on...
If this little puppy can drive a couple of decent sized displays,
I wonder if it would be possible to add additional Citrix/RPi devices for each additional HDMI display, then it would just be a software/configuration issue and you could have as many displays as you need. Keyboard and mouse would be connected to the "master" device with all the additional screens connected to the "slave" (display only) devices.
Sorry, too little, waaaayyy too late. SourceForge became a joke years ago; killing DevShare won't make a blind bit of difference. Any project still on SourceForge tells you all you need to know - either it's no longer maintained, or it's maintained by developers prone to making really bad development choices.
If Qualcomm are guilty - which apparently they are, and have been found to be on numerous occasions.
If Qualcomm don't want their competitors dobbing them in to the competition authorities over anti-competitive licensing, then they shouldn't make their partners sign anti-competitive licences for Qualcomm products.
It's not fcking rocket science.
Never mind, delayed 24 hours - current launch time: Tue, Dec 22 2015 1:34 AM GMT
AMD used to use .NET for their Catalyst drivers (i.e. a full desktop app). They've now switched to Qt, going from an 8-second application start-up to 0.6 seconds, with the added bonus of now being fully cross-platform, even on the desktop. It's not hard to see why they made the switch, and .NET Core won't be of any help.
There are better alternatives to .NET, even if your only platform is Windows.
More like 3 hours than 3 days - already marked as resolved on the BT Service Status page.
The problem actually started around 9pm Monday night, so it took over 15 hours to fix, which is hardly impressive - though better than 3 days, so perhaps they're under-promising and over-delivering.
AMD have recently taken to counting the combined number of CPU and GPU cores as "Compute Cores" when describing their APUs, so for example the A10 PRO-7850B has 4 CPU cores and 8 GPU cores, or 12 "Compute Cores" in total.
Although I'm a little uncomfortable with this marketing-motivated move, I do understand the distinction - I'm just not entirely sure it's necessary or helpful (which is not to suggest that AMD try to hide the number of actual CPU cores; they don't). However, our clueless, dickhead plaintiff would no doubt sue on the basis that he thought he was buying 12 *CPU* cores - after all, he did overhear someone speaking about CPU cores once upon a time.
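For what it's worth, the OS doesn't play along with the marketing anyway - on Linux, something like this reports only the physical CPU cores (hypothetical output from a 4-core A10 box with no SMT):

~$ lscpu | grep '^CPU(s):'
CPU(s):                4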
> But what about memory bandwidth?
At the time of purchase (about 3 years ago) I considered over 14GB/s of DRAM bandwidth to be perfectly adequate, and considering the system consistently outperforms Intel i7 quad-core systems of a similar vintage, the AMD memory bandwidth (or shared FPU) hasn't proved to be a handicap.
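If anyone wants to sanity-check their own box, sysbench includes a memory test - a rough sketch, and the option syntax varies between sysbench versions:

~$ # sequential 1MB writes over 10GB; reports throughput in MB/sec
~$ sysbench --test=memory --memory-block-size=1M --memory-total-size=10G run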
Unlikely because Bulldozer does actually have the physical cores (although as many as half of them may not always be fed with data/instructions, depending on the workload, and depending on who you believe, plaintiff or AMD) whereas hyperthreaded cores are entirely virtual, all of the time.
Indeed, AMD do need to significantly improve their IPC. This is what Zen promises, so let's hope they deliver (and you eat your shorts) as a completely dominant Intel in the x86 space doesn't bear thinking about.
Can't see this case succeeding, nor should it. There is no doubt that Bulldozer has the AMD-stated number of cores, and the fact that some aspects of the design are shared between paired cores is well known. Add to that, if your workload is heavily FPU-based you'd have to be an idiot (or a cheapskate) to choose AMD. I selected an 8-core/4-module FX-8350 specifically for kernel and OS builds, mainly because there is so little FPU action (and there is no doubt it has 8 cores).
Unfortunately the guy bringing this case failed to do his homework and is now bringing a frivolous legal action - I hope he loses and I'd like to think it will cost him a fortune (but it probably won't, which might be part of the problem).
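For the curious, a kernel build is about as integer-heavy and parallel as workloads get - something along these lines keeps all 8 cores busy with barely any FPU involvement:

~$ # -j8 runs 8 parallel compile jobs, one per FX-8350 core
~$ time make -j8 bzImage modules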
> Does anyone really care that much about high-dynamic range?
Apparently HDR really is the mutt's nuts, offering far more noticeable picture-quality improvements than even the jump from 1080p to 4K. From all I've read about HDR, written by people who have seen HDR content with their own eyes, it really is going to be a major step-change - far more so than regular 4K.
Correction: Nexus 7 (2013), not Nexus 5...
It's all very well Google promising to push out monthly security updates, but the design of the current Android platform ensures that frequent updates will become a major PITA and something I'm sure users will grow weary of pretty quickly.
The problem is that the Android platform takes over 20 minutes - tested on a quad-core Nexus 5 (2013) - to apply even the smallest update. Every application on the device (and I haven't installed many myself, maybe only a dozen, but the number of apps on the device still runs to about 120) has to be (re-)"optimised" - thanks to ART - every time the system is updated. And optimisation is a very, very slow process (I actually wonder if it's only running on a single core, it's _that_ frickin' slow).
1MB update? Boom, 20+ fudging minutes to apply the update.
10MB update? Another 20+ fudging minutes to apply.
200MB update? You get the picture. The size of the update doesn't matter, it's always going to be dwarfed by the colossal time it takes for ART to get its shit together.
It's a horribly flawed process that is going to become a major burden for users if small security updates are pushed out frequently. I can see myself skipping updates just to avoid the inconvenience of the slow update process (although at least they're unlikely to be as bad as Twitter, who seem able/willing to publish new builds of their app on an almost daily basis with no hint of a changelog - it does make you wonder how crap their developers are).
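If you want to see the scale of the problem on your own device (assuming USB debugging is enabled), adb gives a rough idea of how many packages ART has to chew through, and logcat shows the dex2oat grind in real time:

~$ # count the packages ART will re-optimise after a system update
~$ adb shell pm list packages | wc -l
~$ # watch the optimiser at work during/after the update
~$ adb logcat | grep dex2oat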
did this in 2011 - it would analyse pictures in the gallery and recognise the faces of your contacts.
Sometime between midnight and 1am.
There never was a public acknowledgment of the service issue in the 48 hours it lasted, resulting in BT customers up and down the country continuing to contact India for "help" only to be told there isn't a problem and it must be their router/landline etc., wasting an hour or more of each customer's valuable time.
I genuinely wonder if the lack of public acknowledgment is because of the new regulation that allows in-contract customers to walk away without paying a penny if they fail to receive an adequate service. I guess we'll know the next time there's a prolonged outage - will BT once again lie to and dick their paying customers around by treating them with contempt, or will they behave like a reputable business? I do hope Ofcom are watching...
Only one question - why are you still with them?
> 50p says that a firmware update has caused this. It's been getting gradually worse as the update is pushed out to more routers.
I use my own router/modem (Netgear DGND4000) and have never connected the supplied HomeHub to my line, so I highly doubt this is a HomeHub-specific issue - the problems are all upstream.
> Hmm. So your decision to not download Windows 10 must have reduced the amount of W10 downloads going on in the UK today by, what, 30%, 40%?
Obviously not, but Windows 10 background downloading is being claimed as a potential cause and I'm just saying that's not the case with me. Other users downloading Windows 10 could be a factor, as in contention issues, but not at 3-4am - and besides, bandwidth isn't even an issue; you could have 30Mb+ of bandwidth and still be unable to connect to a website. It's entirely down to packet loss/routing issues within the BT network.
Oh, and BT have now removed the status announcement from their status page even though the problem is ongoing.
"Wed 29/07/2015 at 10:26
BT customers are having trouble loading webpages"
All other service incidents prior to this are unrelated.
And it's not Windows 10 - I turned that shit off, have not seen any unexpected downloads on my SNMP graphs, and bandwidth is not the issue. It's packet loss or a routing issue (secure/encrypted connections seem to fare the worst - https, rsync over ssh etc. both failing).
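For anyone else wanting evidence before phoning India, mtr makes the packet loss obvious - a sketch, with any well-known host standing in as the target:

~$ # 100 probes per hop; the Loss% column shows where packets are dying
~$ mtr --report --report-cycles 100 bbc.co.uk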
Who'd have guessed BT would have cornered 15% of the US ISP market.
I'm surprised any active projects continue to use it in preference to a superior and adware-free alternative such as GitHub. I can understand projects that are no longer maintained parking their code there, but for active projects to continue using SourceForge just seems bizarre, given how crusty it is - and now all the adware nonsense on top.
If the latest TiVo boxes are anything like the original Series 1 TiVos (mine still going strong after 15 years) it's standard procedure to reboot in the event of an upstream failure - in the case of the S1 it will reboot if it's not receiving a TV signal (the assumption being the encoder, decoder or some other part of the chain has crashed/locked up). It's an extreme, but pragmatic, solution for a problem that happens very rarely.
If some part of the Virgin network or cable system is down this could easily explain the behaviour of the box which will keep rebooting until normal service is restored.
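The watchdog logic is presumably no more complicated than something like this - purely an illustrative sketch, and signal_ok is a made-up stand-in for however the firmware actually tests the tuner/decode chain:

#!/bin/sh
# reboot-on-no-signal watchdog (illustrative only)
while true; do
    signal_ok || reboot    # 'signal_ok' is a hypothetical check of the signal chain
    sleep 60
done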
Can someone explain to me how MU-MIMO "will solve the poor phone performance problem"? It seems to me the phone is still stuck using only one antenna, although the AP will now better utilise its multiple antennas, allowing it to communicate concurrently with devices other than the phone.
However the phone itself will still perform as poorly as it would have done without the AP having MU-MIMO, so how exactly does MU-MIMO "solve the poor phone performance problem"? It doesn't do anything of the sort, at least not based on the description in this article... unless the article meant to say the phone is a performance lead weight for AP throughput?
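To put rough, purely illustrative numbers on it (assuming 802.11ac on 80MHz channels, where a single spatial stream tops out around 433Mbps): a 3-stream AP serving three 1-stream phones under SU-MIMO talks to them one at a time, wasting two of its streams and averaging ~433Mbps aggregate, whereas MU-MIMO lets it transmit to all three at once for ~1.3Gbps aggregate - yet each individual phone still sees the same ~433Mbps it always did.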
This claim seems somewhat unlikely considering it's based on AllWinner hardware - just Google for "AllWinner GPL Violations" to see what kind of attitude AllWinner have towards "open source".
Anyone that ships AllWinner hardware is, by definition, going to be in breach of the GPL too, as AllWinner deliver binary blobs to their hardware customers - making it impossible to release "all software" and be GPL compliant.
I'm aware of hardware companies that have had to move their projects away from AllWinner (to Amlogic, Freescale, HiSilicon etc.) just to avoid the GPL stench that pervades AllWinner.
Although not yet available, a Pi2 Compute Module should be a (relatively cheap, sub-£30) drop-in replacement, although you'll need to reconfigure Kodi and rescan your library, etc. - a minor inconvenience considering the significant performance boost from the quad-core ARMv7 SoC and extra memory.
Death knell for the Lightning Port and cables, good riddance to that money gouging non-standard piece of crap.
Same as the B+.
No, but you posted at 1 minute past midnight, before the product had been officially announced, and with an article riddled with errors and misinformation causing mass confusion - you couldn't even get the basic CPU architecture correct.
I hope the clicks were worth it.
The BCM2836 is using four Cortex-A7 (ARMv7) cores. Default maximum clock speed is 900MHz, but they seem to overclock very nicely (1100MHz is easy to reach, and will probably go higher).
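For anyone wanting to try, overclocking is just a couple of lines in /boot/config.txt - exact stable values vary from board to board, so treat these as a starting point only:

arm_freq=1100
over_voltage=2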
From a software perspective, the Pi2 is 100% backward compatible with the Pi1. All existing Pi1 software runs just fine on the Pi2.
You can take a single Raspbian image (updates will be made available later today) and run it on both the Pi1 and Pi2 - the Pi2 will boot using the kernel7.img while the Pi1 will use the existing kernel.img.
However, recompilation (ARMv6 to ARMv7 with NEON) is recommended for optimal performance, though definitely not essential.
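Something like this is all the recompilation amounts to - myapp is obviously just a placeholder:

~$ # Cortex-A7 supports VFPv4 and NEON, so target those directly
~$ gcc -O2 -march=armv7-a -mfpu=neon-vfpv4 -mfloat-abi=hard -o myapp myapp.c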
Way to go on breaking the embargo, not cool or classy.
It failed because Microsoft hobbled the OS, cutting away useful features in order to justify the markup on full-fat Windows. We now see how that worked out: Microsoft having to give away full-fat Windows to attract hardware manufacturers, RT dead in the water, and Microsoft only able to compete in the tablet space at all thanks to Intel giving away their x86 chips at zero cost (or worse).
Microsoft botched Windows RT right from the get-go. It was a poor cousin to a piss poor OS (Windows 8), and stood very little chance.
A decent version of Windows on ARM could have been a very fine thing, however the fact it wasn't had nothing to do with ARM and everything to do with the crass and arrogant design choices made by Microsoft.
I wonder how much an x86 Windows tablet would actually cost if both Microsoft and Intel weren't giving away their part of the deal...
That rare breed of Microsoft developers developing for a platform with such a small user base and little uptake amongst business and consumers? Hmm...
As per my comment, they have very little choice - good luck going out on a limb by proposing the use of a non-Microsoft language or framework on a Microsoft platform.
The widespread use of something doesn't necessarily equate with it being a "hit", just as death isn't exactly a hit with the living. Some things are just... unavoidable.
The .NET Framework and C# language were a hit with developers,
Let's be honest, it was only ever a "hit" with Microsoft developers who have very few other realistic choices.
Imagination should consider contacting OpenELEC directly, and also the Kodi Foundation, with the intention of donating a few boards for the open source developers to work on. A small financial donation to the Kodi Foundation wouldn't go amiss either.
This is how it works if you want support for your niche hardware; otherwise there's absolutely no reason for anyone to consider spending their valuable time working on this board, which will most likely be a total pain in the arse due to the closed nature of the GPU.
Having one of the Imagination developers attached to work on GPU support in Kodi would also be a very good idea, as this is what has made the Raspberry Pi such a success where XBMC/Kodi is concerned. Developers from Broadcom/Pi Foundation have spent countless hours working on improving Kodi source code and also fixing bugs/adding enhancements in the Broadcom GPU.
It charges my Nexus 7 2013 tablet without any problem - no dimples on the device required for perfect alignment, it finds the device no matter the orientation.
Having used Qi charging for the last year, I'd never buy another mobile product - tablet or smartphone - that doesn't support Qi wireless charging. Connecting a USB cable to charge is so last century.
It's a shame the resonating charger guys are focusing all their efforts on installing chargers in public spaces rather than getting their receivers into devices - presumably they think they're playing the long game but ultimately they're just being complete dicks.
It already does - Sailfish OS includes AlienDalvik from Myriad for running Android apps, and it works very well.
I got the same email for the MeeGo (N900, then N950) contacts I have stored.
I've already switched to a Jolla and transferred (bluetooth'ed) my contacts across from the N950, but just for a laugh I thought I'd give the export option offered by Microsoft a go, as a way of obtaining all your data. What did it give me? An empty csv file (apart from the header row), with none of my 200+ contacts.
My one and only thought: OH JUST FUCK OFF, MICROSOFT.
Banks and payment processors are once again in denial - they said the same about Chip & PIN, even though its flaws are being actively exploited by criminals.
Visa are focusing on the headline 999,999.99 figure in this case and saying their systems will spot it, which spectacularly misses the point: criminals are hardly likely to be so stupid as to go for the jackpot each time when they can take hundreds or maybe thousands at a time without risking detection.
I suppose the next step is a live, public demonstration. Keep up the good work, Newcastle!
Of course it's going to be shit!