Into a world already crowded with big name alternatives to OpenSSL, an indy project could look like “yet another SSL implementation,” but Vulture South suspects there are good reasons to take a close look at the just-launched BearSSL. One is that its author, Thomas Pornin, has ignored the kinds of legacy protocols that occupy …
Excellent idea of throwing out old baggage and starting fresh.
Old baggage = possible vulnerabilities.
Yes and no - the issue with making that work in the real world is that everything has to upgrade along with it. You could argue that has to happen anyway, but if you've ever done a system-wide refresh you know it's not that simple.
That said, I applaud the effort because it ticks all the "sane" boxes, motivation as well as its approach.
Most of that 200MB will be a setup program, .NET frameworks, taskbar utilities, multiple copies of the driver, etc. rather than an actual wifi driver though.
Same with print drivers. The actual printer driver is only 50KB or less, which isn't bad when it's doing things like connecting to network printers, interpreting Postscript, offering booklet and folding, etc.
Bundle it via the HP utils, though, and you're installing 400MB of junk to get it.
That said, when I program I'm always shocked by HOW LITTLE my programs take. On disk. In RAM. Even the processor usage. When I read the articles about how GTA V renders, I'm astounded - things like hundreds or thousands of buffers rendering simultaneously at 120fps to show the final image, it's amazing.
But when I program, I get tiny little compact things which barely approach a couple of meg even if I statically include all the libraries. And then I look in my ProgramFiles folder or my Steam folders and nearly have a heart attack at the sizes in there.
I get data sizes - they can be huge for things like 3D games. But code sizes? What the hell are we doing to make things this big? And the bigger they are, the more there is to go wrong and the slower they operate (or are you saying that that code is just never actually executed? Then it's data; get it out of the program).
"It's been argued repeatedly that 'things' aren't going to get decent security in their own right, because they're small and stupid"
How quickly and frequently we forget that it's possible to write extremely effective and powerful code to run on small, slow CPUs with little memory or storage access. Arguably, the availability of ever-faster CPUs, vast amounts of RAM and colossal storage over the course of four decades has allowed us to become lazier, dumber coders. I still have memories of writing printer drivers for 6502 chips in assembler (because our printer wasn't supported). I'm not the only one here today who will recall slotting a 20MB 'smart card' into a year-old 286 based PC to support work in Ada and C++ ... and less than 30 years later my phone has an *accessory* smaller than my pinky nail that will hold six thousand times as much data. The phone itself is, by those standards, an insanely powerful computer.
In short, I don't believe it's impossible to write solid and secure interfaces for 'small, stupid' devices. In fact, simplicity of devices and of the code that runs on them might well be a security asset, as the article itself frequently alludes to. KISS rules!
mbed TLS requires calloc() and free() for some operations.
Plus it's good to have an alternative. Both are under very permissive licenses (mbed TLS under Apache/GPL; BearSSL under MIT). I'd say mbed TLS appears to have the upper hand in supporting more algorithms (judging from a quick glance over the documentation) and is more mature, but I do admire BearSSL's minimalist approach.
It's been argued repeatedly that “things” aren't going to get decent security in their own right, because they're small and stupid.
There is a special FEMA trailer for people who argue this kind of stuff.
Indeed, it would just mean these devices were trying to occupy an economic niche that isn't acceptable. Same as a factory that can only exist if it can dump its toxic leftovers into the nearest river.
Either legislate this away or fix this.
Since BearSSL has to be small, Pornin has decided to ignore malloc() and dynamic allocation entirely: “the whole of BearSSL requires only memcpy(), memmove(), memcmp() and strlen()” from the underlying C library, Pornin says.
Somebody has taken up lessons from the Misra C manual? GOOD!
I do applaud bringing a little sanity to security libraries, but I couldn't help having a twitch of humorous response to:
----
“the whole of BearSSL requires only memcpy(), memmove(), memcmp() and strlen()” from the underlying C library, Pornin says.
Somebody has taken up lessons from the Misra C manual? GOOD!
----
As I mentioned in another comment yesterday, when I wrote a validation suite for, essentially, the functions declared in string.h, I found errors in one vendor's memcmp() and memmove(), and another's memcpy(). So while I admire the use of the "platonic" functions, I'd suggest that actual implementors "trust, but verify".
(And if constant time is important, the vendor-supplied implementations should be _very_ carefully verified, since all those errors were caused by mistaken "optimizations", that would have altered run times even if they had gotten the right answer)
You want constant time (as in each iteration takes the same amount of time regardless of the input) because you have to consider side channel attacks. For example, if you get a hint that one input takes more or less time than another input, then you can file that datum as a hint on your original input. Side channel attacks can be done in all sorts of physical ways: measuring current draw, CPU temperature, times, etc. It's sorta like reconstructing a crashed airliner: a piece here, a piece there, but you eventually get enough together to get an idea of what happened. Same here.
So, no, you don't want a "done before time x" constraint. You want (and need) a "done IN time x" constraint or you'll be giving away hints.
>>Didn't the IETF bloke just say stop making up new protocols to do the same thing? Same goes for libraries.
No. The protocol is the abstract behaviour e.g.
RFC 793: TRANSMISSION CONTROL PROTOCOL
Only one of those is required (plus multiple revisions!).
The 'library' is the implementation. And there can be thousands of those, depending on detailed platform requirements.
Unfortunately, in the security sphere, both the protocol and the implementation matter, which is why it's hard.
Probably because later versions of TLS basically use the same techniques as 1.0, only in more secure ways: a difference in degree rather than kind, whereas SSL differs in kind. IOW, the code's already there to cover 1.1 and up, so you might as well support 1.0 but put it at the bottom of the list.
It would make more sense to support *ONLY* (draft) TLS 1.3 if a minimal footprint is required.
TLS 1.3 will only allow two symmetric ciphers (initially), and they must be AEAD. The selected ciphers are AES-GCM, and ChaCha20-Poly1305. All the older ciphers are gone.
Limited support for TLS 1.0-1.2 might be acceptable, but only with the allowed TLS 1.3 AEAD set.
Do we really need to keep dragging SHA1 into new systems?