Re: Google are switching to OpenJDK...
They are switching to the OpenJDK standard library, not the JVM, AFAIK. The reason is that the current Harmony-based libraries are stuck on Java 5 or 6.
"And, every time you change the kernel, you have to recertify (both time consuming and expensive)."
Can you provide a link that proves that? Please keep in mind that a lot of the radio stuff in Android is in userland and not the kernel itself.
>since the Linux core of Android has essentially forked from Android some time ago
Google maintain a patch set for Linux that they have been gradually getting mainlined.. not really all that different from the vendor-patched kernels that RHEL etc. ship. Hardly a fork.
>LiPo batteries would also be blended, resulting in finely divided particles
>of reactive metal being flung about,
LiPo batteries don't contain sheets of pure lithium metal. While there are videos of people cutting them with metal objects, internally shorting them and causing them to burst into flames, most LiPo explosions happen while energy is actively being pushed into them, not while discharging.
There are also videos of people crushing them in a hydraulic press and nothing happening.
I doubt they pose a massive problem once cut into confetti-sized pieces.
"Except for Android phones and smart TV's (and home routers?). Those will be screwed over."
Most of that stuff is running kernels so old that they don't contain the affected code.
@Brewster's Angle Grinder
Android apps use "shared libraries".. mainly because if you want to add native code to an app written in Java, the native code is loaded in via a shared library.
So long story short: Android has shared libraries, system wide ones and APK wide ones.
Apps shouldn't be allowed to mess with the system-wide ones, as that would be a disaster, and the OS has no hope in hell of patching random .so files shipped with apps. You seem to think that code living in a separate file automatically makes security updates easy, with no knowledge of what the .so actually is.
That will only work for apps developed in Java, as it isn't replacing the system TLS library but the security provider used by the "Java" runtime to create instances of SSLSocket etc.
It doesn't actually update the system wide library that is in the read only system partition...
And this doesn't work for the SDK in question, because it's a platform for writing cross-platform apps, so it's unlikely to be able to call into Java code to set up TLS sockets.
"Where the DLL resides doesn't change the problem."
It does. If a .so is packaged with an Android application, other applications shouldn't be able to load it (each application is sandboxed in its own user on Android), so if there is a problem with that .so, it's limited to that application.
"Either the OS upgrades a library, with all the potential compatibility problems, or the app does it,"
We're not talking about anything special here just because it's a library. As far as the OS is concerned, it's just a file owned by the application. There's no way to track every single piece of code shipped with third-party applications and have the OS go around fixing it. The only realistic option is to contain the damage by sandboxing the applications as much as possible.
Google has options via the Play Store to press the eject button on apps that are really serious problems, and they apparently do scan code (compiled from Java) for common defects (recently they have been sending e-mails out about issues that will stop apps working with N). But if you think they can build something into the OS that can work out where each binary in an application comes from, work out what it was really built from (app-specific patches etc.) and supply automatic replacement binaries without breaking applications, then I want some of what you're smoking.
"in which case you're waiting on the developers."
You're waiting on developers either way. If the fix for this is trivial to implement (updating the SDK and rebuilding the APK), updates will be hitting the Play Store in no time.
The term "shared library" doesn't mean "system library". It means that the library can be dynamically linked at runtime and thus shared between binaries that link against it.. it doesn't have to be visible system-wide to be shared.
Android has shared system libraries and most apps will be using them. There are very good reasons applications can't go messing with system libraries. Take a few moments and think about it...
Now a thought experiment: what do you do if you want to provide some functionality that isn't available in the system libraries? Do you allow random applications to fight over which version of a library gets installed system-wide, or do you just sandbox the application and let it provide whatever libraries it wants for its own use without interfering with the OS and other applications?
>implement best security practices known, at least conceptually, since the 1970s.
Known since the 70s but not implemented in probably the most widely used proper (i.e. not RTOS) kernel in the world? I wonder if there is some level of "easier said than done" to this?
For the microcontroller market, if royalties for ARM Cortex-M designs go up, or SoftBank does something else that pisses off silicon vendors, we will see all of them start pushing their pre-ARM stuff again. Which would be a shame. While I'm not an ARM fanboy, having decent free toolchains, debuggers etc. that work across thousands of different parts from different vendors is very useful.. I don't really want to go back to the days of the 500MB zip full of unworkable crap and the $200 or $300 debug tool per vendor/family.
For the medium-performance stuff like phone and tablet chips, I'm not sure Apple etc. have anywhere else they can go aside from Intel or AMD. SoftBank buying ARM could be just what Intel have been waiting for.
Yes. Patching remotely is automated and the device has multiple copies of the firmware so it can't be bricked if an update fails. Next question.
I'm willing to bet there are similar issues in every libc and in the runtime environments for any "safe" language like Java, .NET, Python..
Would be nice to see proper discussion of the issue and what people should be doing to not get caught out by it, instead of mindless finger-pointing and sneering.
But what if a counterfeit arduino with a fake chip that is a clone of a real chip that says in the documents that it's not to be used in critical life support systems is used to control the machine that stops the 100 kilo lead weights installed above all patients heads from dropping and killing them fails because it couldn't handle invalid data because of bad coding practices! *DEEP BREATH*
COME ON MAN! THINK OF THE CHILDREN^H^H^H^H^H^H^H^H PEOPLE THAT RELY ON ARDUINO CLONES FOR THEIR LIFE SUPPORT MACHINERY!!
>proactively avoided anything with an FTDI chip in it.
So you avoid all of the dev kits with FTDI chips as the JTAG interface.. like 90% of them, because no other vendor makes a chip like that.
>The risk is just too high and counterfeits are all over the supply chain,
>even in heavily controlled sourcing.
If you source from Digikey etc you should be OK. I suspect most of the people that are getting stuff bricked are using parts sourced from Random/Cheapest Parts Dealer in China.
>Imagine the liability if a counterfeit got into a medical device
>and FTDI's driver f*ckups killed somebody.
What if the counterfeit part dies without FTDI's driver fucking it up? Surely a system that is running critical life support services A: uses only parts that can be traced back to the original vendor, B: doesn't use Windows, or is at least fault-tolerant enough that it doesn't rely on Windows being remotely stable, and C: doesn't go updating critical drivers while it's doing a critical life-support task?
Have you considered that potentially counterfeits might be busted by the official driver even if it doesn't intentionally try to break them because they aren't 100% compatible?
>Sorry to say, but hopefully FTDI will be out of business before that happens.....
I doubt that will happen. They don't just make this chip, and whatever issues you have with their drivers, the alternatives either don't exist or aren't as good. If you just want a decent USB->serial chip the Silabs CP2102 is good, but if you want a high-speed multiprotocol chip like the FT2232 you have less choice.
>It's a shame really as FTDI has been the defacto standard for USB-to-serial for decades,
>way to destroy your business.
Because their chips work, unlike the alternatives, with the exception of the CP2102 I mentioned.
The transport used to get the firmware to the device doesn't matter if the firmware is signed. If they (the vendor) just relied on transport security to stop rogue firmware, that would be a problem, but they (the pen testers) didn't show they could change the firmware and make the device download and run it.
All they have done is see something happen over a clear-text protocol and made a noise about it.
This is much like the Barbie hack, where they read out plain-text data from the SPI flash by wiring it up to a tool that talks to SPI devices, and did WiFi scans using features that are part of the firmware and used during provisioning.. They "hacked" nothing but made it sound like they did, and sites posted their crap verbatim. Big-headed security researchers and clickbait news sites are a match made in heaven.
>that drivers probably don't belong inside a kernel
Stuff like GPU drivers has a kernel component and a userspace component in a lot of cases, so it's not true that "GPU drivers are in the kernel" unless all you are thinking about is dumb framebuffer devices.
malloc either gives you a pointer to some memory of the size you asked for or it doesn't.. I'm not seeing why it would be malloc's fault if you write past the memory you asked for.
This is basically like a national insurance or social security number at the moment. I suspect the way things work right now is a massive pain in the ass to administrate and causes a ton of headache.
The income tax office knows that someone with your name and your current (or maybe ten-years-previous) address filed, or had filed for them, a tax return at some point. They apparently pass on people's income details to their city or town office based on name and rough address, and then that office sends out local tax, health insurance etc. details. For me, the tax office keeps sending stuff to an address I moved from over a year ago, even after I told them about the change of address (good on JP for forwarding everything for so long), but that address is different from the address I have registered at the town office... I suspect that because of my foreign name they can work out how to sort it out, but I imagine they mess things up for people with common names, and the income tax office sends income information to the wrong city or town office fairly often.
TL;DR version - Japan gets NI-style numbers that make the interlinked public systems a little less fragile. Not the end of the world or some plan to document all the evil foreigners or whatever.
>Therefore you can get operating systems like FreeRTOS/OpenRTOS
>which can run with very few kilobytes of RAM.
All operating systems would only need a few KB of RAM if all they did out of the box was scheduling, some multi-threading primitives and heap management.
If you need TCP/IP, TLS etc you're basically limited to the more expensive end of the microcontroller spectrum.
>For microcontroller programming storage is often measured in kb not meg (and not big numbers).
The sort of microcontrollers that Linux can run on (like the H8300 port that has just been reintroduced) can access megabytes of RAM (and usually have 32-bit address spaces), so their storage would be measured in megabytes.
>Some consumer routers also don't come with 80 meg of flash even today.
You don't need to have every driver that Linux has.. you probably can't even select most of the drivers on non-x86 arches.
>I looked and without the modules the OpenWRT kernel itself is only about 1 meg.
>Good enough to boot a router but not much else.
Surely the 1MB kernel is enough to boot the router and load smallish modules like iptables etc. to make it function as a router... otherwise you need to tell the OpenWrt guys they have been wasting their time for however many years they've been working on it.
>Talk to performance gamers.
My recent Nvidia GPU (I forget which..) works fine with whatever nvidia driver is in Debian/Sid and I can play all of the games I have in steam just fine.
>Radeon 6850 should've been near the top of the support list.
One hardware vendor not being very good with their Linux drivers isn't the fault of "the Linux community" or Linux itself. There's nothing in the kernel that stops AMD's stuff from working if they want to support it properly. If they don't, that's their problem. I'll just stick to nvidia stuff.
>Oh? I tried Ubuntu on a old Dell Inspiron. Fell flat because no nVidia driver worked on it.
>Noveaux was too slow and the nVidia blob wouldn't support the chipset.
>Dead end. And this isn't the first time.
I would guess the chipset you were trying to use is either A: not yet supported in the latest and greatest version of the driver, or B: a legacy chipset that requires one of the legacy versions of the driver. Neither of these is an issue due to the kernel being Linux. Nvidia stops supporting older chipsets in their drivers for Windows too. If you have a brand new card and you want to run it with an OS that is used by less than 1% of nvidia's customers, you should expect driver updates that support that card to take a little while. With the nvidia drivers, at least, support for new chipsets does eventually happen.
You can't just stick any old nvidia card in a Windows machine and expect it to just work either so your point is moot from the start.
>But the Linux community, which includes the kernel community,
>should be pushing for most mainstream support, but they're not, so they're in the rut they are now.
How do you push commercial entities that rely on profits from sales to produce products that would certainly lose them money?
I'm not sure why the Linux community or kernel community should be pushing for a bunch of productivity apps that you're interested in when most of us aren't interested in that stuff either way. I use Linux because I'm a developer and I have access to some of the best tools out there, and they are free and open source. The apps you're interested in are all going "cloud"-based anyhow, and in a few years' time it won't matter what platform you're on.
"Except the desktop will continue to exist for performance applications like gaming.
See the above common beef PC gamers have concerning their video cards."
If demand for Linux drivers for the latest generation of GPUs goes up then driver support for those GPUs will improve.
>If Linux wants to be THE OS for the desktop, it will need several boosts here and there.
I'm not sure Linux (the git repo containing code for a general-purpose unix-workalike kernel, or a compiled version of said code) wants to be anything. There are developers that want Linux to be the go-to kernel for desktop systems, there are some that are only interested in machines with hundreds of processors, and there are others that are more interested in seeing it run on stupidly underpowered relics (the h8300 port might make a comeback with device tree support, like ARM :P). The fact that Linux isn't targeted at one particular job is exactly why it's used literally everywhere and why it's interesting to work on.
>Support has improved considerably, yes,
Support for what exactly? Most common PC hardware works out of the box. The kernel side of things needed for The Desktop(TM) are there and have been for ages. If you want a totally integrated experience like Windows or OSX the distribution you're looking for is Ubuntu.
>but it can still have teething issues, particularly where vendors aren't exactly
>forthcoming with hardware support for various reasons such as protection
>of trade secrets.
IMHO hardware support under Linux is far superior to any other OS. People make out like you can just plug any old shit into a Windows machine and have it work, but fail to mention all the time you have to spend hunting for drivers or returning stuff because the vendor doesn't support whatever version of Windows you happen to have.
>And then there's the software selection, particularly for the consumer
>end where people just want to put it in and work.
Linux is a kernel. It's not really the kernel's fault that the current desktop market share is mostly Windows so that's what commercial developers target. There's nothing particular about the Linux kernel that means the type of applications that run on Windows couldn't run on Linux.
>There are native applications that can do a lot such as GIMP and LibreOffice
So you're not talking about Linux. You're talking about common types of applications that desktop users need whether they are running Linux, OSX,... Anyhow, this is going to be less and less of a problem now that everyone wants to use more portable development tools etc. so they can get their stuff running on the desktop, web, mobile and so on.
>but it will always trail the bleeding edge (and that's what killed it for me since I like to game).
Not really the kernel's fault again. There are lots of games on Android so it's not like Linux can't do games.
>You do realize that ARM, for example, is not remotely cross compatible,
>eh? Binary for one ARM isn't always (rarely, IME) going to work on a chip
>from another vendor.
Why Trev? Please come up with a good answer.
Hint: The main driver part of the nvidia solution runs in userland.
>the nVidia driver potentially has access to the entire memory space.
The kernel part of the nvidia drivers has source available so you can compile it against your running kernel. You're free to audit it and tell us about what you find.
>Because of this ANY bug you are experiencing with the kernel
>cannot be ruled out as a nVidia driver problem
>(potentially other software too, but usually it's trying to track down a kernel problem).
The kernel has the "tainted" stuff because of this. But the kernel part of the nvidia drivers has source available, as I mentioned before, so you don't have to resort to disassembling kernel modules to debug it; you just won't get any help from the mainline kernel guys, as it's not their code.
>nVidia has shipped buggy drivers
There are buggy drivers in the mainline kernel too. Using staging drivers also taints the kernel IIRC.
>and it's much harder to get dev attention if you're running a tainted kernel for this reason.
The reason for the kernel-tainting stuff is so that you remove all of the external drivers etc. you have loaded and reproduce your bug before reporting it; otherwise you are potentially reporting a bug to the wrong place and wasting people's time. It's not about dissing people for running code that isn't in the mainline. If that was the case, why provide the infrastructure to build out-of-kernel modules in the first place?
>You'd be right. I'm not a developer, nor do I claim to be.
OK, so don't try to use someone else's wicked cool skills, that you think are massively impressive, to size up someone else, because you have no idea what you're talking about.
>Linux developer - professional or not - doesn't give him
>standing to diminish someone like Chris. There's a difference.
You're taking offence at something that wasn't written. Most people that develop on Linux, whether at the kernel level, the application level or whatever, are in no way affected by the revelation that nvidia has added signing to their GPUs. It affects a tiny minority of developers that are working on open source drivers for nvidia GPUs and almost no one else. I think you're going to massive lengths to make this look like it deeply affects your friend's hobby project, but I don't see how it does.
>No, vendors need to open source their frakking drivers so that the rest of the
>world isn't held up by their internal politics. There's a whole industry that needs
> to be able to move faster than they can.
In a perfect world, yes. But this isn't a perfect world. There are lots of vendors that are trying to get all of their stuff mainlined. One example I can think of is Marvell, which has been paying an external company to rewrite their drivers so they are acceptable, as their previous binary blobs (that everyone who signed the NDA had the source code for) had no chance of getting included. Qualcomm has apparently been assisting the guys writing free drivers for their GPUs... If you look through the LKML, though, you'll see plenty of times where a vendor has offered their in-house driver for mainline and it has been rejected because it's poorly written. There might be a few mails back and forth to try and correct the issues, but a lot of the time the conversation dies and the driver doesn't go in. The bottom line is that open sourcing stuff that is wrapped in layers and layers of NDAs and licensed IP is not easy, and even once the open sourcing has happened, it doesn't mean the code is going into the mainline and will be supported forever and ever. FYI: the kernel part of the nvidia drivers does have source available.
>So do I, and nVidia doesn't release that information with a simple NDA.
> It takes a hell of a lot of lobbying and a lot of money.
So don't use their stuff then.
"Hobbyists have a bit of a problem that they aren't very valuable to big semiconductor companies that need to ship hundreds of thousands of units to make a design profitable."
>Yeah, but fuck 'em, eh? Awesome attitude
Where did I say that, Trevor? You seem to run some sort of consultancy; if every day a bunch of charity cases walked into your office and gave you their sob story, would you do work for free, or at a rate that means you lose money? You might once or twice out of the goodness of your heart, but you aren't going to do it every day until you go bust, are you? Hobbyists should stick to hobbyist-friendly vendors that release proper documentation for their products and be hard on vendors that don't. What hobbyists really don't need is people flapping their gums about stuff they don't care about or need.
>Poor support outside of x86.
You keep going on about this horrible non-x86 thing but I don't think you have a clue what you're talking about.
>Reams of WONTFIX bugs and corporate history of simply ignoring bugs
>raised are all good reasons.
The Intel GPU drivers have been open source for a long time. They still crash the whole X server when people do certain actions in KiCad with some models of GPU; the bug has been there for about 5 years. Open sourcing drivers doesn't instantly fix hard-to-fix bugs.
>1) Inability to firmware update cards (nice to have in a lot of ways)
They haven't stopped anyone from updating firmware. They have locked out firmware that isn't signed with their key.
>2) lack of open source drivers that can be recompiled on other platforms (absolute must).
This is about that non-x86 thing again, isn't it? Are you aware that nvidia have a bunch of SoCs with ARM cores and nvidia GPUs? From what I saw at Tokyo Maker Faire (I'm one of those hard-done-by electronics/computing hobbyists you are so concerned about, BTW) it looks like they even have working CUDA on ARM. It was pretty funny really.. Nvidia had a stall with impressive CUDA and machine vision demos running on their SoCs, and Intel was next door trying to flog their unimpressive, buggy, Pentium-class crap.. but that's another story.
>Now some of my clients have a desire to get into the firmware and tweak and tinker,
What exactly are they going to tweak/tinker with? I can maybe understand that they might be able to find where values like the different core frequencies are held in the flash and overclock their cards, but I very much doubt they are in IDA disassembling the stock firmware, documenting it and re-implementing it on a daily basis. Have a read of this article by someone that has been reverse engineering GPU drivers; it might open your eyes a little bit: http://lwn.net/Articles/638908/
>because they need every erg of speed.
So, yeah, poking in a hex editor to tweak the settings of the cards which nvidia doesn't make available.
>But I think there's a much broader need for open source drivers that can be tweaked
What are you going to be tweaking exactly? I'm sure there are things to tweak but I'd like to hear a solid example.
>and recompiled for different architectures,
Which architectures do you think could really do with nvidia GPUs but don't have binary drivers? Keep in mind that there are only 3 or 4 current architectures that have PCIe interfaces.
>and where bugs can be fixed that nVidia won't.
Unless the bugs are in the firmware, that has no relation to the firmware being signed or closed source. Nvidia could have open source drivers and closed firmware (like 99.9999999% of the stuff in your machine that has a mainlined driver but requires firmware).. would you still be demanding they remove the signing if that was the case?
FYI Trev, from what I can tell the nouveau drivers don't support OpenCL yet (http://nouveau.freedesktop.org/wiki/FeatureMatrix/) and don't support CUDA, so using an nvidia GPU with the nouveau drivers for GPGPU seems to be a nonstarter.
>What have you done that of the same complexity as a "multi-million LOC proper operating
>system kernel like Linux", thus giving you the bragging rights to look down your long
>nose at others, hmm?
Where am I looking down on others exactly? You're the one trying to belittle the OP for mentioning he's some sort of developer, by using someone else's apparent skills in an attempt to make him feel small. I have a feeling that the 2 or 3 lines I have in the mainline are more than the sum of *your* input to a serious kernel.
>Graphics cards aren't just for graphics. They are used for processing as well.
Which nvidia supply a public API for, and which doesn't require running third-party firmware on the GPU. You're making out that this is like some secure boot system that stops people running their own code on their CPU/GPU, when it really isn't.
>part of the frustration is that the lack of open sourced drivers makes doing that integration
>work harder...especially when he's working with non-x86 platforms.
On non-x86 platforms the vendor usually supplies a BSP (board support package). Depending on your agreements with the vendor, you might get a bunch of binary blobs or complete access to their live internal git repo. Open source drivers usually make upgrading kernels etc. easier, but a lot of the time you have to stick with the crappy old kernel and drivers the vendor supplies and maintains, because of weird issues with the hardware that aren't handled in open source drivers. It's a bad situation really. Vendors need to be working to get their stuff into the mainline so it doesn't bit-rot, but management is usually very much "our precious", so however much developers tell them they should try to get their stuff mainlined, it's hard work to make it happen.
> On behalf of every small business, every startup and ever hobbyist
> in the world: fuck you. In the face.
I work with small startups a lot, bringing up Linux on their hardware. I can't think of a case where we haven't been able to get the complete source for all of the vendor's drivers.
Hobbyists have a bit of a problem that they aren't very valuable to big semiconductor companies that need to ship hundreds of thousands of units to make a design profitable.
You seem to be arguing along the lines of "I know more than you so shut up" and "Won't someone think of the children that for some inexplicable reason need to be able to upload their own firmware to GPUs". Neither is making much sense.
>If it were open source, perhaps someone would have gotten the bugs worked out.
In a perfect world, yes. In the real world there are potentially hardware issues that can only be fully understood by looking at the designs for the chip, or by lots and lots of guessing.
I would say vendors opening documentation is a lot more important than them providing the source for their (usually horrendous) drivers.
>To start off with, he writes his own kernels.
He seems to have written one kernel of limited complexity. I'm sure a "kernel" is something massively impressive to most people, but a small scheduler-only kernel isn't all that hard to implement once you understand how to do a context switch and how to switch tasks using a timer. There's a reason why there are lots of very simple hobby and RTOS kernels out there and not so many multi-million-LOC proper operating system kernels like Linux.
>someone who is directly affected by the lack of open source
>drivers directly from nVidia, and he does that stuff just for fun
I'm not sure how you go from "writing a hobby kernel" to "needs to have custom firmware for a graphics card". I can't even see where his kernel's nvidia graphics driver is.. it seems to have a serial console only. But anyhow, he's free to do what most toy kernels do and use the standard VESA stuff that is compatible with the millions of PCs out there.
>He does, in fact, code that close to the metal.
Not massively impressed really. I know lots of people that can look at an instruction sequence and tell you how many clocks it will take, and how to reduce the clock count with some weird trick.
>The lack of open source drivers really, honestly and truly does affect them,
>as there are regularly things they need to be able to change, and they have
>to fight tooth and nail to see them changed.
If the stuff they are working on is so important, they should have a contact at nvidia that can help with that. Surely they want someone that has access to the engineers that put the chip together, as opposed to stuff that is reverse engineered.
What a lot of people don't realise is that even with proprietary hardware if you have enough cash and sign enough NDAs you can usually get access to all the information and code you would ever need. I have the complete source for the binary drivers for various ARM SoCs sitting on my harddrive.
>I'm glad that you get by just fine on the proprietary drivers.
The proprietary drivers have public specifications, right? For your previous example that should be enough. If they find bugs in the proprietary drivers, they should have a contact within nvidia they can talk to to get them fixed.
>"the drivers are not open source" is a problem for other people, that actually matters.
Having opensource drivers does matter but not for the reasons you gave.
Nice article. Shame about the clickbait headline though.
>There is no way I would ever consider working on Linux
One suspects that if your skin is that thin, you don't have the skills required to contribute anything worthwhile.
>It is the anti-systemd crowd who has become hysterical.
People really need to stop complaining about ad-hominem attacks on Lennart by making ad-hominem attacks against a massive group of people they don't know.
I don't care about systemd. I have it running on this Debian machine because it was installed after the switch-over. It's had its problems, like locking up during boot and shutdown, but I've had similar issues with sysvinit too, so I'm not going to use the "it broke a few times for me so it's bad" argument. Again, I don't care about systemd, but I am anti-the-steamrolling-of-many-fundamental-userland-components-that-everyone-including-systems-that-can't-run-systemd-need.
It's not written in the GPL or whatever, but I think if you take over projects like udev you have a responsibility not to break them for all of the users/use cases that don't fit your "one daemon to rule them all" philosophy. Greg K-H giving up maintaining udev as an independent project was a massive mistake IMHO. Maybe I should have brought that up when he emailed me to ask how I wanted my commits to the kernel to be counted...
Why do you need a $500 laptop to talk to BASIC on a microcontroller? You realise that old-school BASIC can run entirely on the chip itself, and thus you only need something that can run a serial terminal, right? If you wanted to go crazy you could put a small LCD and a keyboard port on a little microcontroller board. Maybe you could get the advanced kids to do that. They might even learn something!
"they can also learn a bit of Linux admin"
Ah, so you're one of the "I have no idea but let's teach it!" crew.
"BASIC? Hang on, while I fetch the DeLorean. 88MPH. GOTO 1985."
BASIC is perfectly fine for kids to pick up the notion that computers take commands and generally run them in order, but sometimes there are branches and iteration. What better way to learn branching than making them make their little LEDs do something different depending on whether a button is pressed or not? The reason your sort of people don't understand that is because you have no idea what you actually want kids to learn.
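The kind of branching exercise I mean boils down to something this small (sketched in C rather than BASIC; the pin-level read/write calls would be board-specific, so only the decision logic is shown and the names are made up):

```c
#include <stdbool.h>

/* Hypothetical lesson sketch: pick an LED pattern based on whether a
 * button is pressed. The actual pin I/O is board-specific, so the
 * branching decision is modelled as a pure function. */
typedef enum { PATTERN_IDLE, PATTERN_BLINK } pattern_t;

pattern_t choose_pattern(bool button_pressed)
{
    if (button_pressed)
        return PATTERN_BLINK; /* branch taken while the button is held */
    return PATTERN_IDLE;      /* default behaviour otherwise */
}
```

In a classroom the main loop would just call this every tick and drive the LED accordingly; the point is that the kid sees one `if` change observable behaviour.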
>Another problem on those devices is that you have several instances of "binary blobs",
>code running with very high privileges, facing outside, but having never gone through
>some sort of security audit.
Which binary blobs on Android face outside?
>If you had a simple high speed serial port running a much simpler protocol like PPP,
Why does that help at all? If you can exploit USB drivers to take over the screen, why couldn't you exploit the PPP daemon running the link between the application processor and the baseband?
>this becomes so hard it gets implausible.
How? The only way to totally avoid having the baseband fiddle with the application processor is to not link them at all... which makes your phone a bit useless. As soon as you link them up you have hardware and software components operating the link that can be exploited. Changing the type of link doesn't change that.
>You could have each function of your mobile phone done by an independent microcontroller.
>The software running on each of those would be simple enough that it would be
>essentially bug free, so it wouldn't need to be updated.
*essentially bug free* .. so not bug free. So it still has the potential to be exploitable. So back to square one.
>Simple protocols could reduce the attack surface even more.
Even simple protocols go through complex layers of hardware and software.
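To illustrate: even a trivially "simple" length-prefixed frame format already needs careful bounds checking, and that check is exactly what an attacker on the other end of the link will probe. The frame layout here is invented purely for the example:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Made-up frame format for illustration: one length byte followed by
 * that many payload bytes. Even this needs two bounds checks. */
#define MAX_PAYLOAD 64

/* Copies the payload out only if the declared length fits both the
 * received buffer and the destination; returns false otherwise. */
bool parse_frame(const uint8_t *buf, size_t buf_len,
                 uint8_t *out, size_t *out_len)
{
    if (buf_len < 1)
        return false;                /* no length byte at all */
    size_t len = buf[0];             /* attacker-controlled length */
    if (len > MAX_PAYLOAD || len + 1 > buf_len)
        return false;                /* the check exploits go after */
    memcpy(out, buf + 1, len);
    *out_len = len;
    return true;
}
```

Drop either check and you have a classic out-of-bounds copy, in a protocol about as simple as they come.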
I find it funny that any thread that even remotely mentions the Israel vs Gaza thing turns into a conflict that is as pointless and intractable as the real thing.
>at kernels used in various Android phones, etc you will see a mix of 2.6.33 and an occasional 3.0.
The versions in use are the versions that the Android patches will apply to. I'm not sure if everything is in the mainline kernel even now.
There are lots of up-to-date BSPs for ARM SoCs that are still using 2.6-series kernels, mainly from vendors like Marvell that have a bunch of hacked-up drivers that were impossible to mainline.
The device tree stuff has also meant that a lot of ARM stuff is still on early 3.x kernels because of breakage resulting from the uptake of DT in more recent kernels.
I'm wondering why the fact that support for some old M68K Palm Pilots should now work again was left out of this article!
It sounds like Samsung think there was some clause in the deal that bars Microsoft from becoming a direct competitor, and since buying Nokia that's what they've become.
> that can be universally patched as needed.
>and still allow the OEMs a-la Samsung to skin-up their GUIs as they see fit.
If you take a look at the AOSP source and maybe try to make it work on some device, it soon becomes apparent why that isn't easy to do. Sure, Samsung etc. could just replace the framework graphics with their crappy-looking stuff, but they don't want to do that. They want to change the UI enough so that it looks like a Samsung and not something else. So they will tinker around all over the place.
More often than not, vendors will also need to add their own patches to core packages to make them work on their device. Mix into that some vendor binary blobs, hardware-specific compiler flags that might make binaries incompatible, etc., and it becomes very hard for Google to "universally" patch anything in the OS.
Now this issue is actually a bit different from something like Heartbleed in OpenSSL, which would mean replacing that library in the system partition, which means an OTA update. This is a security issue within the Google services that run on top of Android, and as other people have mentioned, it's been fixed.
I'm not sure why any such laws should target just phones and embedded (IoT) stuff.
Surely if you're going to make "laws", they should be along the lines of: provide security updates for *any* software for as long as possible, and at the point that the vendor is no longer able to provide updates (doesn't want to, or goes bust) they must release their source code and tools to make it possible for someone else to fix the issue.
I'm not sure it's fair to compare Intel and ARM really.
ARM vs Renesas would be a better comparison, because they both have designs ranging from microcontroller applications through to relatively high performance. I think the fact that Renesas is also shipping ARM chips shows they are doing something right.
That appears to be a problem specific to Sony devices. Not sure how that is Google's fault. It's very possible that something in the Play Services triggers something in Sony devices that causes them to fully wake, and that causes extra battery usage. The thing is, the Play Services are used by other apps, so it may very well be that one of Sony's shovel-ware apps is what is causing the Play Services to be active... It's sort of like blaming the milk for running out when you open the bottom of the carton.
>Chrome sets up a polling loop with 1ms
I think I misunderstood your post because you have misunderstood what the issue is.
It seems that Chrome plays with the platform timer. That's not a "polling loop".
If a user process playing with that is an issue it shouldn't be available to user processes unless they run as a super user.
>An operating system is normally event driven,
Timers generate events.
>ultimately from interrupts that come from external sources such as keyboards,
>network and timers. There should be no polling.
How do you do pre-emptive multitasking if external interrupts are the only way to jump out of the running user task and re-enter the kernel?
>Perhaps you could start by explaining what Google Play
Perhaps you could explain why you *think* it's Google Play that is using all that power.
Sounds like the battery drain is caused by how Windows handles the platform timer rather than by Chrome itself.
Why is something that can cause issues like this available to user applications in the first place?