Re: Roll up, roll up! You don't even need to study!
Yes. Patching remotely is automated and the device has multiple copies of the firmware so it can't be bricked if an update fails. Next question.
I'm willing to bet there are similar issues in every libc and any of the runtime environments for any "safe" language like Java, .net, python..
Would be nice to see proper discussion of the issue and what people should be doing to not get caught out by it instead of retarded finger pointing and sneering.
But what if a counterfeit Arduino, with a fake chip that is a clone of a real chip that says in its documentation that it's not to be used in critical life support systems, is used to control the machine that stops the 100 kilo lead weights installed above all the patients' heads from dropping and killing them, and it fails because it couldn't handle invalid data due to bad coding practices! *DEEP BREATH*
COME ON MAN! THINK OF THE CHILDREN^H^H^H^H^H^H^H^H PEOPLE THAT RELY ON ARDUINO CLONES FOR THEIR LIFE SUPPORT MACHINERY!!
>proactively avoided anything with an FTDI chip in it.
So you avoid all of the dev kits with FTDI chips as the JTAG interface... which is like 90% of them, because no other vendor makes a chip like that.
>The risk is just too high and counterfeits are all over the supply chain,
>even in heavily controlled sourcing.
If you source from Digikey etc you should be OK. I suspect most of the people that are getting stuff bricked are using parts sourced from Random/Cheapest Parts Dealer in China.
>Imagine the liability if a counterfeit got into a medical device
>and FTDI's driver f*ckups killed somebody.
What if the counterfeit part dies without FTDI's driver fucking it up? Surely a system that is running critical life support services: A) uses only parts that can be traced back to the original vendor; B) doesn't use Windows, or is at least fault tolerant enough that it doesn't rely on Windows being remotely stable; and C) doesn't go updating critical drivers while it's doing a critical life support task?
Have you considered that potentially counterfeits might be busted by the official driver even if it doesn't intentionally try to break them because they aren't 100% compatible?
>Sorry to say, but hopefully FTDI will be out of business before that happens.....
I doubt that will happen. They don't just make this chip, and whatever issues you have with their drivers, the alternatives don't exist or aren't as good. If you just want a decent USB->serial chip the SiLabs CP2102 is good, but if you want a high speed multiprotocol chip like the FT2232 you have less choice.
>It's a shame really as FTDI has been the defacto standard for USB-to-serial for decades,
>way to destroy your business.
Because their chips work, unlike the alternatives (with the exception of the CP2102 I mentioned).
The transport used to get the firmware to the device doesn't matter if the firmware is signed. If they (the vendor) just relied on transport security to stop rogue firmware that would be a problem, but they (the pen testers) didn't show they could change the firmware and make the device download and run it.
All they have done is seen something happen over a clear text protocol and made a noise about it.
This is much like the Barbie hack, where they read out plain text data from the SPI flash by wiring it up to a tool that talks to SPI devices, and did WiFi scans using features that are part of the firmware and used during provisioning.. They "hacked" nothing but made it sound like they did, and sites posted their crap verbatim. Big-headed security researchers and clickbait news sites are a match made in heaven.
>that drivers probably don't belong inside a kernel
Stuff like GPU drivers have a kernel component and a userspace component in a lot of cases so it's not true that "GPU drivers are in the kernel" unless all you are thinking about are dumb framebuffer devices.
malloc either gives you a pointer to some memory of the size you asked for or it doesn't. I'm not seeing why it would be malloc's fault if you write past the memory you asked for.
This is basically like a national insurance or social security number. I suspect the way things work at the moment is a massive pain in the ass to administrate and causes a ton of headache.
The income tax office knows someone with your name and your current (or maybe 10 years previous) address filed, or had filed for them, a tax return at some point. They apparently pass on the income details for people to their city or town office based on the name and rough address, and then that office sends out local tax, health insurance etc details. For me the tax office keeps sending stuff to an address I moved from over a year ago, even after telling them the change of address (good on JP for forwarding everything for so long), but that address is different from the address that I have registered at the town office... I suspect because of my foreign name they can work out how to sort it out, but I imagine they mess stuff up for people that have common names, and the income tax office sends income information to the wrong city or town office fairly often.
TL;DR version - Japan gets NI numbers that make the interlinked public systems a little less fragile. Not the end of the world or some plan to document all the evil foreigners or whatever.
>Therefore you can get operating systems like FreeRTOS/OpenRTOS
>which can run with very few kilobytes of RAM.
All operating systems would only need a few KB of RAM if all they did out of the box was scheduling, some multi-threading primitives and heap management.
If you need TCP/IP, TLS etc you're basically limited to the more expensive end of the microcontroller spectrum.
>For microcontroller programming storage is often measured in kb not meg (and not big numbers).
The sort of microcontrollers that Linux can run on (like the H8300 port that has just been reintroduced) can access megabytes of RAM (and usually have 32bit address spaces) so their storage would be measured in megabytes.
>Some consumer routers also don't come with 80 meg of flash even today.
You don't need to have every driver that Linux has.. you probably can't even select most of the drivers on non-x86 arches.
>I looked and without the modules the OpenWRT kernel itself is only about 1 meg.
>Good enough to boot a router but not much else.
Surely the 1MB kernel is enough to boot the router and load smallish modules like iptables etc to make it function as a router... otherwise you need to tell the openwrt guys they have been wasting their time for however many years they've been working on it.
>Talk to performance gamers.
My recent Nvidia GPU (I forget which..) works fine with whatever nvidia driver is in Debian/Sid and I can play all of the games I have in steam just fine.
>Radeon 6850 should've been near the top of the support list.
One hardware vendor not being very good with their Linux drivers isn't the fault of "the Linux community" or Linux itself. There's nothing in the kernel that stops AMD's stuff from working if they want to support it properly. If they don't, that's their problem. I'll just stick to nvidia stuff.
>Oh? I tried Ubuntu on a old Dell Inspiron. Fell flat because no nVidia driver worked on it.
>Noveaux was too slow and the nVidia blob wouldn't support the chipset.
>Dead end. And this isn't the first time.
I would guess the chipset you were trying to use is either A: not yet supported in the latest and greatest version of the driver, or B: a legacy chipset that requires one of the legacy versions of the drivers. Neither of these issues is due to the kernel being Linux. Nvidia stops supporting older chipsets in their drivers for Windows too. If you have a brand new card and you want to run it with an OS that is used by less than 1% of nvidia's customers, you should expect driver updates that support that card to take a little while. With the nvidia drivers at least, support for their new chipsets does eventually happen.
You can't just stick any old nvidia card in a Windows machine and expect it to just work either so your point is moot from the start.
>But the Linux community, which includes the kernel community,
>should be pushing for most mainstream support, but they're not, so they're in the rut they are now.
How do you push commercial entities that rely on profits from sales to produce products that would certainly lose them money?
I'm not sure why the Linux community or kernel community should be pushing for a bunch of productivity apps that you're interested in when most of us aren't interested in that stuff either way. I use Linux because I'm a developer and I have access to some of the best tools out there, and they are free and open source. The apps you're interested in are all going "cloud" based anyhow, and it won't matter what platform you're on for those in a few years' time.
"Except the desktop will continue to exist for performance applications like gaming.
See the above common beef PC gamers have concerning their video cards."
If demand for Linux drivers for the latest generation of GPUs goes up then driver support for those GPUs will improve.
>If Linux wants to be THE OS for the desktop, it will need several boosts here and there.
I'm not sure Linux (the git repo containing code for a general purpose unix workalike kernel, or a compiled version of said code) wants to be anything. There are developers that want Linux to be the go-to kernel for desktop systems, there are some that are only interested in machines with hundreds of processors, and there are others that are more interested in seeing it run on stupidly underpowered relics (the h8300 port might make a come back with device tree support like ARM :P). The fact that Linux isn't targeted at one particular job is exactly why it's used literally everywhere and why it's interesting to work on.
>Support has improved considerably, yes,
Support for what exactly? Most common PC hardware works out of the box. The kernel side of things needed for The Desktop(TM) are there and have been for ages. If you want a totally integrated experience like Windows or OSX the distribution you're looking for is Ubuntu.
>but it can still have teething issues, particularly where vendors aren't exactly
>forthcoming with hardware support for various reasons such as protection
>of trade secrets.
IMHO hardware support under Linux is far superior to any other OS. People make out like you can just plug any old shit into a Windows machine and have it work, but fail to mention all the time you have to spend hunting for drivers or returning stuff because the vendor doesn't support whatever version of Windows you happen to have.
>And then there's the software selection, particularly for the consumer
>end where people just want to put it in and work.
Linux is a kernel. It's not really the kernel's fault that the current desktop market share is mostly Windows so that's what commercial developers target. There's nothing particular about the Linux kernel that means the type of applications that run on Windows couldn't run on Linux.
>There are native applications that can do a lot such as GIMP and LibreOffice
So you're not talking about Linux. You're talking about common types of applications that desktop users need whether they are running Linux, OSX,... Anyhow this is going to be less and less of a problem now that everyone wants to use more portable development tools etc so they can get their stuff running on the desktop, web, mobile etc.
>but it will always trail the bleeding edge (and that's what killed it for me since I like to game).
Not really the kernel's fault again. There are lots of games on Android so it's not like Linux can't do games.
>You do realize that ARM, for example, is not remotely cross compatible,
>eh? Binary for one ARM isn't always (rarely, IME) going to work on a chip
>from another vendor.
Why Trev? Please come up with a good answer.
Hint: The main driver part of the nvidia solution runs in userland.
>the nVidia driver potentially has access to the entire memory space.
The kernel part of the nvidia drivers has source available so you can compile it against your running kernel. You're free to audit it and tell us about what you find.
>Because of this ANY bug you are experiencing with the kernel
>cannot be ruled out as a nVidia driver problem
>(potentially other software too, but usually it's trying to track down a kernel problem).
The kernel has the "tainted" stuff because of this. But the kernel part of the nvidia drivers has source available, as I mentioned before, so you don't have to resort to disassembling kernel modules to debug it; you just won't get any help from the mainline kernel guys as it's not their code.
>nVidia has shipped buggy drivers
There are buggy drivers in the mainline kernel too. Using staging drivers also taints the kernel IIRC.
>and it's much harder to get dev attention if you're running a tainted kernel for this reason.
The reason for the kernel tainting stuff is so that you remove all of the external drivers etc you have loaded and reproduce your bug before reporting it; otherwise you are potentially reporting a bug to the wrong place and wasting people's time. It's not about dissing people who run code that isn't in the mainline. If that was the case, why provide the infrastructure to build out-of-kernel modules in the first place?
>You'd be right. I'm not a developer, nor do I claim to be.
Ok, so don't try to use someone else's wicked cool skills that you think are massively impressive to size up other people, because you have no idea what you're talking about.
>Linux developer - professional or not - doesn't give him
>standing to diminish someone like Chris. There's a difference.
You're taking offence at something that wasn't written. Most people that develop on Linux, whether at the kernel level, application level or whatever, are in no way affected by the revelation that nvidia has added signing to their GPUs. It affects a tiny minority of developers that are working on open source drivers for nvidia GPUs and almost no one else. I think you're going to massive lengths to make this look like it deeply affects your friend's hobby project, but I don't see how it does.
>No, vendors need to open source their frakking drivers so that the rest of the
>world isn't held up by their internal politics. There's a whole industry that needs
> to be able to move faster than they can.
In a perfect world yes. But this isn't a perfect world. There are lots of vendors that are trying to get all of their stuff mainlined. One example I can think of is Marvell that has been paying an external company to rewrite their drivers so they are acceptable as their previous binary blobs (that everyone who signed the NDA had the source code for) had no chance of getting included. Qualcomm has apparently been assisting the guys writing free drivers for their GPUs... If you look through the LKML though you'll see plenty of times where a vendor has offered their inhouse driver for mainline and it has been rejected because it's poorly written. There might be a few mails back and forth to try and correct the issues but a lot of the time the conversation dies and the driver doesn't go in. Bottom line is that open sourcing stuff that is wrapped around layers and layers of NDAs and licensed IP is not easy and once the open sourcing has happened it doesn't mean it's going in the mainline and it's going to be supported forever and ever. FYI: the kernel part of the nvidia drivers do have source available.
>So do I, and nVidia doesn't release that information with a simple NDA.
> It takes a hell of a lot of lobbying and a lot of money.
So don't use their stuff then.
"Hobbyists have a bit of a problem that they aren't very valuable to big semiconductor companies that need to ship hundreds of thousands of units to make a design profitable."
>Yeah, but fuck 'em, eh? Awesome attitude
Where did I say that, Trevor? You seem to run some sort of consultancy; if every day a bunch of charity cases walk into your office and give you their sob story, will you do work for free or for a rate that means you lose money? You might once or twice out of the goodness of your heart, but you aren't going to do it every day until you go bust, are you? Hobbyists should stick to hobbyist-friendly vendors that release proper documentation for their products and be hard on vendors that don't release documentation. What hobbyists really don't need is people flapping their gums about stuff they don't care about or need.
>Poor support outside of x86.
You keep going on about this horrible non-x86 thing but I don't think you have a clue what you're talking about.
>Reams of WONTFIX bugs and corporate history of simply ignoring bugs
>raised are all good reasons.
The intel GPU drivers have been opensource for a long time. They still crash the whole X server when people do certain actions in kicad with some models of GPU. The bug has been there for about 5 years. Open sourcing the drivers doesn't instantly fix hard to fix bugs.
>. 1) Inability to firmware update cards (nice to have in a lot of ways)
They haven't stopped anyone from updating firmware. They have locked out firmware that isn't signed with their key.
>2) lack of open source drivers that can be recompiled on other platforms (absolute must).
This is about that non-x86 thing again, isn't it? Are you aware that nvidia have a bunch of SoCs with ARM cores and nvidia GPUs? From what I saw at Tokyo MakerFaire (I'm one of those hard done by electronics/computing hobbyists you are so concerned about, BTW) it looks like they even have working CUDA on ARM. It was pretty funny really.. Nvidia had a stall with impressive CUDA and machine vision demos running on their SoCs, and Intel was next door trying to flog their unimpressive buggy pentium class crap.. but that's another story.
>Now some of my clients have a desire to get into the firmware and tweak and tinker,
What exactly are they going to tweak/tinker? I can maybe understand that they might be able to find where values like the different core frequencies are held in the flash and overclock their cards but I very much doubt they are in IDA disassembling the stock firmware, documenting and re-implementing it on a daily basis. Take a read of this article by someone that has been reverse engineering GPU drivers, it might open your eyes a little bit: http://lwn.net/Articles/638908/
>because they need every erg of speed.
So, yeah, poking in a hex editor to tweak the settings of the cards which nvidia doesn't make available.
>But I think there's a much broader need for open source drivers that can be tweaked
What are you going to be tweaking exactly? I'm sure there are things to tweak but I'd like to hear a solid example.
>and recompiled for different architectures,
What architectures do you think could really do with nvidia GPUs but don't have binary drivers? Keep in mind that there are only 3 or 4 current architectures that have PCIe interfaces.
>and where bugs can be fixed that nVidia won't.
Unless the bugs are in the firmware itself, this has no relation to the firmware being signed or closed source. Nvidia could have open source drivers and closed firmware (like 99.9999999% of the stuff in your machine that has a mainlined driver but requires firmware).. would you still be demanding they remove the signing if that was the case?
FYI Trev, from what I can tell the nouveau drivers don't support OpenCL yet (http://nouveau.freedesktop.org/wiki/FeatureMatrix/) and doesn't support CUDA so the use of an nvidia GPU with the nouveau drivers for GPGPU seems to be a nonstarter.
>What have you done that of the same complexity as a "multi-million LOC proper operating
>system kernel like Linux", thus giving you the bragging rights to look down your long
>nose at others, hmm?
Where am I looking down at others exactly? You're the one trying to belittle the OP for mentioning he's some sort of developer by using someone else's apparent skills in an attempt to make him feel small. I have a feeling that the 2 or 3 lines I have in the mainline are more than the sum of *your* input to a serious kernel.
>Graphics cards aren't just for graphics. They are used for processing as well.
Which nvidia supply a public API for and doesn't require running third party firmware on the GPU. You're making out this is like some secure boot system that stops people running their own code on their CPU/GPU when it really isn't.
>part of the frustration is that the lack of open sourced drivers makes doing that integration
>work harder...especially when he's working with non-x86 platforms.
On non-x86 platforms the vendor usually supplies a BSP (board support package). Depending on your agreements with the vendor you might have a bunch of binary blobs or complete access to their live internal git repo. Open source drivers usually make upgrading kernels etc easier but a lot of the time you have to stick with the crappy old kernel and drivers the vendor supply and maintain because of weird issues with the hardware that aren't handled in opensource drivers. It's a bad situation really. Vendors need to be working to get their stuff into the mainline so it doesn't bit rot but the management is usually very much "our precious" so however much developers tell them that they should try to get their stuff mainlined it's hard work to make it happen.
> On behalf of every small business, every startup and ever hobbyist
> in the world: fuck you. In the face.
I work with small startups a lot, bringing up Linux on their hardware. I can't think of a case where we haven't been able to get the complete source for all of the vendor's drivers.
Hobbyists have a bit of a problem that they aren't very valuable to big semiconductor companies that need to ship hundreds of thousands of units to make a design profitable.
You seem to be arguing along the lines of "I know more than you so shut up" and "Won't someone think of the children that for some inexplicable reason need to be able to upload their own firmware to GPUs". Neither is making much sense.
>If it were open source, perhaps someone would have gotten the bugs worked out.
In a perfect world yes. In the real world potentially there are hardware issues that can only be fully understood by looking at the designs for the chip or by lots and lots of guessing.
I would say vendors opening documentation is a lot more important than them providing the source for their (usually horrendous) drivers.
>To start off with, he writes his own kernels.
He seems to have written one kernel of limited complexity. I'm sure a "kernel" is something massively impressive to most people, but a small scheduler-only kernel isn't all that hard to implement once you understand how to do a context switch and how to switch tasks using a timer. There's a reason why there are lots of very simple hobby and RTOS kernels out there and not so many multi-million LOC proper operating system kernels like Linux.
>someone who is directly affected by the lack of open source
>drivers directly from nVidia, and he does that stuff just for fun
I'm not sure how you go from "writing a hobby kernel" to "needs to have custom firmware for a graphics card". I can't even see where his kernel's nvidia graphics driver is.. it seems it has a serial console only. But anyhow, he's free to do what most toy kernels do and use the standard VESA stuff that is compatible with the millions of PCs out there.
>He does, in fact, code that close to the metal.
Not massively impressed really. I know lots of people that look at an instruction sequence and tell you how many clocks it will take and how to reduce the clock count by using some weird trick.
>The lack of open source drivers really, honestly and truly does affect them,
>as there are regularly things they need to be able to change, and they have
>to fight tooth and nail to see them changed.
If the stuff they are working on is so important they should have a contact at nvidia that can help with that. Surely they want someone that has access to the engineers that put the chip together as opposed to stuff that is reverse engineered.
What a lot of people don't realise is that even with proprietary hardware if you have enough cash and sign enough NDAs you can usually get access to all the information and code you would ever need. I have the complete source for the binary drivers for various ARM SoCs sitting on my harddrive.
>I'm glad that you get by just fine on the proprietary drivers.
The proprietary drivers have public specifications, right? For your previous example that should be enough. If they find bugs in the proprietary drivers they should have a contact within nvidia they can go to to get them fixed.
>"the drivers are not open source" is a problem for other people, that actually matters.
Having opensource drivers does matter but not for the reasons you gave.
Nice article. Shame about the clickbait headline though.
>There is no way I would ever consider working on Linux
One suspects that if your skin is that thin, you don't have the skills required to contribute anything worthwhile.
>It is the anti-systemd crowd who has become hysterical.
People really need to stop complaining about ad-hominem attacks on Lennart by using ad-hominem attacks against a massive group of people they don't know.
I don't care about systemd. I have it running on this Debian machine because it was installed after the switch over. It's had its problems, like locking up during boot and shutdown, but I've had similar issues with sysvinit too, so I'm not going to use the "it broke a few times for me so it's bad" argument. Again, I don't care about systemd, but I am anti-the-steamrolling-of-many-fundamental-userland-components-that-everyone-including-systems-that-can't-run-systemd-need.
It's not written in the GPL or whatever, but I think if you take over projects like udev you have a responsibility not to break them for all of the users/use cases that don't fit with your "one daemon to rule them all" philosophy. Greg K-H giving up maintaining udev as an independent project was a massive mistake IMHO. Maybe I should have brought that up when he emailed me to ask how I wanted my commits to the kernel to be counted...
Why do you need a $500 laptop to talk to BASIC running on a microcontroller? You realise that old school BASIC can run entirely on the chip itself, and thus you only need something that can run a serial terminal, right? If you wanted to go crazy you could put a small LCD and a keyboard port on a little microcontroller board. Maybe you could get the advanced kids to do that. They might even learn something!
"they can also learn a bit of Linux admin"
Ah, so you're one of the "I have no idea but let's teach it!" crew.
"BASIC? Hang on, while I fetch the DeLorean. 88MPH. GOTO 1985."
BASIC is perfectly fine for kids to pick up the notion that computers take commands and generally run them in order, but sometimes there are branches and iteration. What better way to learn branching than to have them make their little LEDs do something different depending on whether a button is pressed or not? The reason your sort of people don't understand that is because you have no idea what you actually want kids to learn.
>Another problem on those devices is that you have several instances of "binary blobs",
>code running with very high privileges, facing outside, but having never gone through
>some sort of security audit.
Which binary blobs on Android face outside?
>If you had a simple high speed serial port running a much simpler protocol like PPP,
Why does that help at all? If you can exploit USB drivers to take over the screen why couldn't you exploit the PPP daemon running the link between the application processor and the baseband.
>this becomes so hard it gets implausible.
How? The only way to totally stop the baseband fiddling with the applications processor is to not link them at all... which makes your phone a bit useless. As soon as you link them up you have hardware and software components operating the link that can be exploited. Changing the type of link doesn't change that.
>You could have each function of your mobile phone done by an independent microcontroller.
>The software running on each of those would be simple enough that it would be
>essentially bug free, so it wouldn't need to be updated.
*essentially bug free* .. so not bug free. So still has potential to be exploitable. So back to square one.
>Simple protocols could reduce the attack surface even more.
Even simple protocols go through complex layers of hardware and software.
I find it funny that any thread that even remotely mentions the Israel vs Gaza thing turns into a conflict that is as pointless and retarded as the real thing.
>at kernels used in various Android phones, etc you will see a mix of 2.6.33 and an occasional 3.0.
The versions in use are the versions that the Android patches will apply to. I'm not sure if everything is in the mainline kernel even now.
There are lots of up to date BSPs for ARM SoCs that are still using 2.6 series kernels. Mainly vendors like Marvell that have a bunch of hacked up drivers that were impossible to mainline.
The device tree stuff has also meant that a lot of ARM stuff is still on earlier 3 series kernels because of breakage resulting from the uptake of DT in more recent kernels.
I'm wondering why the fact that some old M68K Palm Pilots should work again now was left out of this article!
It sounds like Samsung think there was some clause in the deal that bars Microsoft from becoming a direct competitor, and since buying Nokia that's what they've become.
> that can be universally patched as needed.
>and still allow the OEMs a-la Samsung to skin-up their GUIs as they see fit.
If you take a look at the AOSP source and maybe try to make it work on some device, it soon becomes apparent why that isn't easy to do. Sure, Samsung etc could just replace the framework graphics with their crappy looking stuff, but they don't want to do that. They want to change the UI enough so that it looks like a Samsung and not something else. So they will tinker around all over the place.
More often than not, vendors will also need to add their own patches to core packages to make Android work on their device. Mix into that some vendor binary blobs, hardware-specific compiler flags that might make binaries incompatible, etc, and it becomes very hard for Google to "universally" patch anything in the OS.
Now this issue is actually a bit different from something like Heartbleed in OpenSSL, which would mean replacing that library in the system partition, which means an OTA update. This is a security issue within Google services that run on top of Android, and as other people have mentioned, it's been fixed.
I'm not sure why any such laws should target just phones and embedded (IoT) stuff.
Surely if you're going to make "laws" it should be along the lines of provide security updates for *any* software as long as possible and at the point that the vendor is no longer able to provide updates (doesn't want to or goes bust) they must release their source code and tools to make it possible for someone else to fix the issue.
I'm not sure it's fair to compare Intel and ARM really.
ARM vs Renesas would be a better comparison, because they both have designs for microcontroller applications through to relatively high performance. I think the fact that Renesas is also shipping ARM chips shows they are doing something right.
That appears to be a problem specific to Sony devices. Not sure how that is Google's fault. It's very possible that something in the Play Services triggers something in Sony devices that cause them to fully wake and that causes extra battery usage. The thing is the play services are used by other apps so it may very well be that one of Sony's shovel-ware apps is what is causing play services to be active... It's sort of like blaming the milk for running out when you open the bottom of the carton.
>Chrome sets up a polling loop with 1ms
I think I misunderstood your post because you have misunderstood what the issue is.
It seems that Chrome plays with the platform timer. That's not a "polling loop".
If a user process playing with that is an issue, it shouldn't be available to user processes unless they run as a superuser.
>An operating system is normally event driven,
Timers generate events.
>ultimately from interrupts that come from external sources such as keyboards,
>network and timers. There should be no polling.
How do you do pre-emptive multitasking if external interrupts are the only way to jump out of the running user task and re-enter the kernel?
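The answer, of course, is that the timer *is* the interrupt: the kernel programs a periodic tick, and when it fires the CPU drops out of whatever user task is running and back into the scheduler. A toy sketch of the idea (assuming a POSIX system; `SIGALRM` stands in for the hardware timer interrupt and the signal handler for the kernel):

```python
import signal

# The "task" below never yields voluntarily, yet the periodic timer
# still yanks control away from it -- which is exactly how a
# pre-emptive scheduler regains the CPU from a busy loop.

preempted = {"count": 0}

def tick(signum, frame):
    # Analogue of the kernel's timer interrupt handler.
    preempted["count"] += 1
    if preempted["count"] >= 3:
        signal.setitimer(signal.ITIMER_REAL, 0, 0)  # cancel further ticks
        raise KeyboardInterrupt  # force the "task" off the CPU

signal.signal(signal.SIGALRM, tick)
signal.setitimer(signal.ITIMER_REAL, 0.01, 0.01)  # 10 ms periodic tick

try:
    while True:   # a task that never yields
        pass
except KeyboardInterrupt:
    pass

print(preempted["count"])  # the timer fired despite the busy loop
```

A real kernel does the same thing in miniature on every tick: save the running task's context, run the scheduler, maybe switch to another task.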
>Perhaps you could start by explaining what Google Play
Perhaps you could explain why you *think* it's Google Play that is using all that power.
Sounds like an issue with how Windows uses the platform timer, as opposed to Chrome causing the battery drain.
Why is something that can cause issues like this available to user applications in the first place?
>And that doesn't look like a very clever idea now, does it. Come to think of it,
So you want apps to be able to mess around in the /system partition and make your device unbootable?
>Google must have looked at Linux, OS X and Windows (to name but a few) with
> their auto updating mechanisms and decided that pushable updates were a bad idea.
Google has moved more of the userland into the playstore so that it can be updated. OTA updates are supported for the OS itself. If vendors don't want to ship OTA updates then I'm not sure what exactly Google is meant to do other than take back control of the OS builds vendors ship (via their Nexus etc devices). When they do that they are accused of being control freaks. They really can't win.
>Or you could do it properly, which is what Microsoft have tried (and mostly succeeded) to do.
I'm not sure Windows update is the pinnacle of OS updating schemes. If you said they should have looked at apt or yum you might have a point.
>Which is, define a hardware architecture to which manufacturers must comply,
>and then MS can push out updates as and when necessary.
Never going to be possible for phones unless you only ship a very restricted set of devices. It's working for Apple but not for MS. Apple uses the same scheme for OS fixes that Android devices should, btw.
>Another option would be for Google to keep a legacy kernel interface and
>driver model in newer versions of Android. One commonly cited reason
>for so many handsets being left on the 2.3 Gingerbread is because so
> many things changed in the kernel in 4.0 Ice Cream.
Linux has backports for things like WiFi drivers, and options in the kernel to make it compatible with older userland utils etc. That won't stop chip vendors being lazy.
>Now, if only Google can learn that lesson too..
Not sure what you're really referring to, but if I had to make a guess I would think this is about Android and vendors not shipping updates? OS-level updates can't be pushed out via Play; the Android system partition is read-only for a good reason. I guess what Google needs to do is either get more vendors shipping vanilla builds that Google will manage the over-the-air updates for, or split the system partition up a bit so that vendors can add their junk in there but Google can offer partial OTA updates for vital security fixes. Kernel updates will be tricky, as SoC vendors are usually very lazy: they'll get some old crap version of Linux working, release that as a BSP and forget about it. So even when fixes for major issues are pushed to the Linux mainline, it may take forever for those fixes to actually appear in the kernels of all the devices out there.
The steps between "git push origin master" and a patch being deployed on millions of devices are more complex than fixing the issue itself in most cases.
The drama level in this article makes it initially seem the PRNG itself is broken... which it isn't.
The logic that tells the PRNG it's in a new process and needs to flush its state being broken is a lot less "oh noes, the sky is falling in! won't somebody think of the security!", I guess.
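For anyone who wants the shape of the bug rather than the drama: a userland PRNG has to notice it's running in a freshly forked child and throw away the inherited state, or parent and child emit the same "random" stream. A minimal sketch of the pattern (this is not LibreSSL's actual code; the generator is a toy xorshift64, and `os.register_at_fork` stands in for LibreSSL's fork-detection logic):

```python
import os

# Sketch of a fork-aware PRNG (toy xorshift64, NOT cryptographic).
# The point: the child must reseed after fork, or it replays the
# parent's stream.

class ForkSafePRNG:
    def __init__(self):
        self._reseed()
        # Have the runtime tell us about forks explicitly, instead of
        # inferring them from getpid() -- which PID wraparound can
        # fool, and that inference failing is the bug being discussed.
        os.register_at_fork(after_in_child=self._reseed)

    def _reseed(self):
        self._state = int.from_bytes(os.urandom(8), "big") or 1

    def next(self):
        x = self._state
        x ^= (x << 13) & 0xFFFFFFFFFFFFFFFF
        x ^= x >> 7
        x ^= (x << 17) & 0xFFFFFFFFFFFFFFFF
        self._state = x
        return x

prng = ForkSafePRNG()
r, w = os.pipe()
pid = os.fork()
if pid == 0:  # child: _reseed() has already run via the fork hook
    os.write(w, prng.next().to_bytes(8, "big"))
    os._exit(0)
os.waitpid(pid, 0)
child_value = int.from_bytes(os.read(r, 8), "big")
parent_value = prng.next()
print(child_value != parent_value)  # the streams diverged
```

Without the fork hook, `child_value` and `parent_value` would be identical, which is exactly the kind of thing that quietly ruins key generation.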
The process has worked: there was an issue, someone spotted it, the LibreSSL guys stated that it wasn't a big deal, but they got on and fixed it. If things continue this way we might actually have a decent open-source TLS library eventually. Slightly off-topic: eglibc (a fork of glibc) has recently been dropped in Debian in favour of going back to glibc. Why? Because forking glibc and fixing a ton of crap that was wrong with it worked. glibc is now open to development, and the stupid historical mess due to one guy having a stranglehold on the project is getting cleaned up. LibreSSL could wipe out OpenSSL, or OpenSSL could consume LibreSSL. Either way we should end up with something less crap and more maintainable.
>I really don't get the IoT Home Automation thing
I have a certain bias in this area (covered by NDA) but even I can see a lot of it is just junk.
But there are some good things out there...
You don't want or need every switch in your house to be internet controlled of course but adding some intelligence to certain systems in your home is a good thing. If you have something in your home that relies on the weather or can be made more efficient by monitoring weather patterns then that makes a good candidate for being made into an internet thing. Those devices wouldn't need to send any data out and could be purely data consumers too.
Maybe you have something in your home that would turn into an "oh shit" moment if it went wrong and you didn't notice while you were at work. Maybe you have a sensitive tropical fish tank whose contents would die if the pump system stopped working for too long. With some simple IoT tech, those systems could tell you when "oh shit" is about to happen while you're not around.
Again, I don't think anyone except people that have to have everything shiny will need every light switch in the house hosting its own embedded system, but I think most people have one or two systems in their home that could be made more efficient or safer with some intelligence built in.
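To make the "tell me before the fish die" idea concrete: the monitoring side can be as dumb as a threshold check with a latch, so it alerts you once per episode instead of spamming while the condition persists. A toy sketch (the sensor values and alert callback are entirely hypothetical):

```python
# Latched threshold alerting: fire once when a reading drops below the
# threshold, re-arm only after the reading recovers.
def watch(readings, threshold, alert):
    alarmed = False
    for flow in readings:
        if flow < threshold and not alarmed:
            alert(f"pump flow low: {flow}")
            alarmed = True
        elif flow >= threshold:
            alarmed = False  # recovered; re-arm the alarm

alerts = []
watch([5.0, 4.8, 1.2, 1.1, 5.1, 0.9], threshold=2.0, alert=alerts.append)
print(alerts)  # two alert episodes, not one message per bad reading
```

In a real device `alert` would be a push notification or an email, and the readings would come off a flow sensor rather than a list, but the latch logic is the whole trick.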
>There are issues with using an IP-based system.
>If you change ISP or router and your home IP addresses change
Surely this is why they want IPv6. It's almost impossible to stop IPv6 from configuring itself.
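For anyone curious what "configuring itself" means here: with SLAAC a host can derive a working link-local address from nothing but its own MAC, by flipping the universal/local bit and inserting ff:fe into the middle (modified EUI-64). A quick sketch (note that modern stacks often prefer privacy or stable-privacy addresses instead of raw EUI-64):

```python
def eui64_link_local(mac: str) -> str:
    """Derive the IPv6 link-local address a SLAAC host would
    auto-configure from its MAC address (modified EUI-64)."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe
    groups = [f"{eui64[i] << 8 | eui64[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

print(eui64_link_local("00:1a:2b:3c:4d:5e"))  # fe80::21a:2bff:fe3c:4d5e
```

That's the whole point: no DHCP server, no config file, no user input; the address exists the moment the interface comes up.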
>Linux 3.0 kernel
I hope that's a typo or misunderstanding.
Same here. I often have to sign up to get datasheets or software for things.
I use one of my set of three crap passwords for it and forget about it. If I ever need to log in again, I know it's one of the crap passwords.
>Your continual anti-Pi stance is tedious in the extrem
To be honest I was being tongue in cheek with my first comment about the broken USB.
The problem is you piboys get all defensive of your little platform and start trying to make excuses or claim there are no problems.
Now you're trying to make out I have some little grudge against the Pi that is as emotionally based as your love affair with a PCB. "Don't say nasty things or my mates will stick the lawyers on you boohoo".
It really is pathetic.
I'm in Japan so you would think they would ship them by registered post to here but they won't unless something has changed recently. $40 DHL shipping isn't really worth it on a single board. If they would stock at Digikey or something that would be great as usually it's free shipping on orders >$70 and that's still lower than the minimum for import duty.
>They were the interface from the ARM to the GPU hardware.
*Sigh* you are reiterating the same drivel that Liz and chums did.
Aside from the issues with the hardware, being stuck with a fudged Debian port, etc, I think the thing that rubs me up the wrong way the most with piboys is this religious parroting of the official dogma.
E.g. I was looking into what it takes to make the Pi run at 3.3 V for doing telemetry on a quadcopter. According to the higher-ups (read: people with badges) on the Pi forums it was impossible... even when someone had already done it and documented it. *facepalm*