That's why it's vital to be able to run your own firmware on everything you own.
I mean with OpenWRT or something it surely wouldn't have those security holes.
...but that you can do so remotely without having physical access to the device. A simple menu showing you "do you really want to upgrade the firmware yes/no" would have been enough, or alternatively a key combination you need to press during update to put it into firmware update mode.
You don't need dedicated fax lines, as voice lines, even VoIP ones, are of high enough quality for faxes. All you need is a number; small companies got three numbers with their ISDN line, and large companies typically have a full block with more than enough numbers.
In computing, security is mostly about not doing risky things. The problem is that people doing IT seem to have learned their trade from TV shows like "Jackass".
Instead of building simple interfaces everyone can use, they design hugely complex web interfaces everyone hates, with GUI frameworks that keep you from copying and pasting the information you want to transfer. Just think about how much simpler it would be if your users could just ssh into a computer where they can access shell scripts for the things they want to do. For your average user that would still just be "magic", invoked by copying and pasting "magic words" from a text file into their shell, but behind the scenes you'd have a _lot_ less code and a lot fewer things that can go wrong. Plus you can do authentication the sane way (public key) instead of passwords.
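A minimal sketch of such a self-service script; the sub-commands and messages are made-up examples, not from any real system:

```shell
#!/bin/sh
# Sketch of a self-service script a user runs after ssh-ing in.
# The sub-commands ("greet", "disk") are invented for illustration.
set -eu

menu() {
  case "$1" in
    greet) echo "Hello ${USER:-user}, this is the self-service menu." ;;
    disk)  df -h / | tail -n 1 ;;          # show root filesystem usage
    *)     echo "usage: menu {greet|disk}" >&2; return 1 ;;
  esac
}

menu greet
```

The user just pastes `menu greet` (or whatever the instructions say) into their shell: no web stack, no GUI framework, almost nothing that can go wrong.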
Well speech recognition is already common for EC cards in Germany. Here's a news article about this:
I mean you literally have it right in your face. It's on _every_ photograph of your face. You cannot hide it easily in real life. You cannot even change it in case someone got a copy of it.
On the other hand, it's trivial to fake it and fool even the most sophisticated iris scanners. It's an utterly stupid way to authenticate anything...
However, we are talking about payments here. Payment providers are not worried about security. Fraudulent transactions will, at worst, cost them nearly nothing, and at best they can pocket the transaction fee.
Well luckily we are talking about a .com company. In 10 years Facebook probably won't exist anymore. We are already in the phase where more and more people are ashamed of having a Facebook account.
Of course the data will be sold over and over again until it finally reaches the company that can do most evil with it.
I mean, after all, a side effect of avoiding taxes through those "grand gestures" is that you create organisations you control. Power is an important factor here, and even if you don't believe those billionaires to be power hungry, a "non-profit" organisation dedicated to fighting X is still a place you can put your friends and family into so they'll have easy and profitable jobs.
There might also be the belief that somehow "private" organisations are "more efficient" than governmental ones. That's a mantra repeated over and over again by certain people, particularly "Objectivists". However, there has not yet been a grain of evidence supporting it, and there are lots of examples showing the exact opposite.
There is a chance that those billionaires are just "stupid" or at least claim to be stupid in order to have some non-selfish reason to justify their actions.
"I thought the idea behind sandboxes WAS that if malware tried to run it would be contained. Or are you saying as long as malware exists, SOME malware will ALWAYS find a way to escape the sandbox?"
No, my point is that you must always make sure you don't run software from untrustworthy sources. That's why distributions have repositories. The idea behind an "AppStore" is that as long as the contracts are signed, the code gets distributed. There is no check for malware, as everyone believes malware can't be too bad since it's limited by the sandbox.
I think the main factor here is that desktop people do not understand the mobile world. They think Android got popular because of all the complexity and limitations inside it; after all, a typical Android installation doesn't even have a file manager. In reality Android just got popular because it kinda worked, it was cheap, and it was backed by a huge company.
People now see the popularity of Android (and iOS and all the other mobile OSes) and think they need to copy that. They see AppStores and believe that that is the future, completely missing the analogy to 1990s "multimedia CD-ROMs". They even believe illogical things, like that you can rely on sandboxes and therefore run malware inside of them.
It's also all the complexity of a "smartphone-OS" on a workstation where you don't need it. Essentially that gives you a very brittle system that, when it works, does things you don't want it to do, and when it doesn't, is impossible to repair.
Plus it has features nobody asked for, like "Flatpak" or other means of claiming that you can somehow install malware from foreign sources (without source code) without utterly compromising your computer... and instead of acknowledging that malware cannot be contained, they blame every breach out of their sandboxes on the rest of the world.
Someone can just send Microsoft a National Security Letter and they have to comply. It doesn't really matter where the servers are.
Also, Deutsche Telekom works with the BND (German secret service), which works very closely with US services.
Yes and I think there were lots of competitors. I think there even were people trying to use infrared for LAN and proprietary printer connections.
Since browsers have abandoned their download speed indicators, many people resort to speedtest sites to test the speed of their connection... that's why many network providers try to cheat on those, and that is probably the cause of this "glitch".
Organisations can be forced into doing anything by the use of National Security Letters. So having a system that relies on an organisation acting "correctly" is not secure. This obviously includes updates being pushed to you, as well as any non-FOSS and cloud services.
You cannot easily protect data against physical access. To encrypt data you need a secret, which must not be stored on the device itself. A short PIN is easy to brute-force. Hardware claiming to protect you from brute-forcing can be emulated, or simply manipulated with equipment like a focused ion beam workstation.
So what can you do to regain your right to uncompromised data processing:
1. Don't store data on mobile devices without protecting them with a strong passphrase.
2. Don't store data on computers you do not own. Ideally have all the computers you store data on in your own flat.
3. Use systems that are as simple as possible so you have a chance of understanding them and understanding updates. Try to avoid systems with large organisations behind them, use systems that are developed by loose clusters of people. That way in case a National Security Letter arrives only individual people will be informed and those can simply drop out or their code can be refused by the others.
4. Avoid systems that are completely insecure. Most "smart"-phones today have their GSM baseband sitting on the system bus, allowing it to access all your RAM... considering that GSM baseband chips run closed-source, never-audited, and highly complex code, that's just a security disaster waiting to happen.
5. Use tamper-evident designs when you cannot prevent tampering. For example, if you design hardware, allow it to be embedded into transparent plastic, perhaps with some glitter around it. That way you can prevent the firmware from being updated against your will, or the hardware from being manipulated.
I mean that would solve the problems of firmware updates, you simply send in your device to the manufacturer, they will break the seal, change the ROM, seal it again and send it back.
...has a late-1980s digital VCR changer. Since it stores uncompressed video on up to 250 one-hour tapes, the whole thing could have a storage capacity of up to about 3 terabytes.
Sorry, I should have said that I meant 5 different kinds of chips. I thought it was obvious that, given the low density of ECL, you couldn't have things like single-chip microcontrollers in ECL.
For those not fluent in IC designs, ECL essentially works by using transistors not as "fully switched" switches, but as amplifiers. So (in a nutshell) a one is one transistor having a higher output current than another, and a zero is the other transistor having a higher output current. Since both transistors will let current through, those chips burn _lots_ of power. However you can get them running at many gigahertz easily.
Burning that much power is what makes ECL rather useless for general purpose computers. It's hard to get that much heat away from those chips if you pack them densely. However you need to pack them densely to not have long transmission lines in between which introduce a delay into the computation. However there are specialist applications which are not general purpose where you can simply have a "pipeline" of stages processing some data. ECL is rather suited for this if you have a layout with controlled impedance and controlled line lengths.
...since the Cray-1, which essentially consisted of 5 kinds of off-the-shelf ECL logic chips.
I'm sorry, I'm not a native speaker of English, but shouldn't that word be "ambiguous"?
I mean, they are talking about "FTTP". That could be anything from a bunch of dedicated fibres which can be patched at the "central office", to a passive optical network which can barely handle cable television. It could also mean a data network which only allows the services of the provider (thanks to non-existent or weak net neutrality), or simply fast unlimited Internet.
FTTP means nearly nothing without going into the details.
...we would know that the "hacking a printer to cause a fire" thing was mostly PR. The fuser unit of printers has a hardware overheat protection once it gets too hot; essentially there's a little heat-activated fuse. So even if you manage to put new firmware on (which is unfortunately possible without interaction on the printer itself), you can only break it, but not cause any real harm. And fixing printers would be simple: just put an "upgrade firmware" mode into the menu, perhaps allow a PIN to be required, and have it not print anything while that mode is on. If you have a USB interface, you can even upgrade the firmware from that, instead of "printing" it to the printer.
Signing firmware will only make legitimate changes to the firmware harder, for example closing security holes by removing services you don't need.
For actual attacks, changing the firmware probably isn't a sensible way to go. It's far easier to use the features provided in the default firmware. I wouldn't be surprised if there are PostScript engines that allow for network access.
Yes, particularly since it is such a huge project with virtually no isolation.
Today people have a strong incentive to participate in "Open Source" projects. Recruiters look for names in such projects, and honestly this isn't the worst way to look for new talent.
However, there is currently very little public incentive to make sure that code is useful or good. The prime example (because it's so clear) is the OpenSSL "heartbeat" feature. Someone wrote a thesis on that feature, which is of limited use... then he wrote a patch containing a glaring error, and it got accepted.
We need people like Linus Torvalds who question new features. We would need them in projects like Debian or GNOME or Xfce. Unfortunately we have too few of those people.
Since you can connect "anything" to USB, you can also connect things you don't expect, like ethernet cards, mass storage devices or input devices. Previously Windows didn't actually support USB in any meaningful way, but now since it does, there is some focus on USB security.
Obviously the sane way to go would be to have dedicated ports again. Connect printers and scanners via Ethernet, connect input devices via some sort of overclocked PS/2, and have a special port for mass storage devices. That way you could essentially eliminate all harmful device spoofing...
Of course, now some dimwits are saying that "signed USB devices" will save us all. Well, first of all you'd have to confirm that the new USB keyboard you just plugged in is the one you actually want, so its signature can be stored. Secondly, this will probably only be used for vendor lock-in.
I mean we all just assume that it's good for websites to make money. However we've all seen that leading to things like clickbait. Optimizing your site for maximum monetisation will still be a problem with such systems.
Maybe we should just give up on the idea that you can earn money by putting trivial things on some website.
Believe me, the "cameras in your fridge" is the most sensible feature. In the demos it was able to automatically recognize what was in there... which was of course just faked.
With that particular brand the cameras were supposed to be connected via USB. They somehow got a trigger and then switched to one of the many "mass storage device" modes to deliver those images... that's probably the most complicated way to do it. From there it goes to a central server as the appliance itself doesn't have the space to store those images.
a) That kind of bar graph is probably the worst way you could show that data.
b) You don't use JPEG for that kind of image.
For some people, a computer you can lug around seems like a decent idea... though this laptop is far from replacing a desktop. After all, that display has too low a resolution.
For actual portable desktop replacements, there's obviously companies like Ariesys.
They also seem to have models with multiple screens.
Some people now sell touchscreen keyboards or have keyboards on their laptops that fall off when you hold them.
I mean we are talking about crypto here, and cryptography can protect your secret against eavesdropping under certain circumstances...
However, that's not what the FBI claims to want. They claim to want to be able to extract data from telephones. Once you have physical access to the device, you are in a whole different position: you can extract every bit stored in flash... and, with very special hardware, every bit in RAM. Of course you could encrypt that, but for that you'd need to enter a key. If you only have a touchscreen, the best you can get is an 8-digit PIN... which is easy to brute-force.
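To put rough numbers on "easy to brute-force", here is a back-of-envelope calculation; the tries-per-second rate is purely an assumption about an attacker who has bypassed any hardware rate limiting, not a measured figure:

```shell
#!/bin/sh
# 8 decimal digits give 10^8 possible PINs. The rate below is an
# assumed offline guessing speed, invented for illustration.
pins=$((100 * 1000 * 1000))   # 10^8 combinations
rate=10000                    # assumed tries per second
seconds=$((pins / rate))
hours=$((seconds / 3600))
echo "worst case: $seconds seconds (~$hours hours)"
```

Even at a modest assumed rate the whole key space falls in hours, not years; a passphrase with real entropy is a different story entirely.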
Yes, people have had ideas like a special chip which only releases the key when given the right PIN, and yes, those are advertised as having a "wrong tries" counter, but keep in mind that you can erase individual flash cells easily once the chip is decapped, or, with a bit more effort, just read out the chip's internal flash.
Even that is assuming that the rest of the software is flawless. Today we have mobile operating systems which seem like they were deliberately made more complicated to introduce new bugs. Even lock screens can often be bypassed by simple user interaction.
Of course solving those problems is feasible, just make your mobile device a terminal to a server that sits somewhere safe. That would really get the FBI into trouble.... and that's what the device companies won't sell you. So in a way the interests of the FBI and the actions of the device manufacturers already seem to overlap.
So essentially: use ssh (or mosh) over Tor hidden services, authenticate via public key, protect your local key with a moderately strong password (a hardware keyboard helps), and have your server remove the authorized_keys entry once there has been no login for n days. Then you would be moderately safe... if you could trust the operating system on your mobile device.
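The "drop the entry after n days without a login" part could be sketched like this. The marker-file scheme and all paths are made up, and the hook that records each login (and the step that removes the matching authorized_keys line) is not shown:

```shell
#!/bin/sh
# Sketch: expire ssh keys that have not been used for MAX_AGE_DAYS.
# Assumes each login touches a marker file named after the key
# (e.g. from a login hook) -- that wiring is not shown here.
# A real version would also delete the matching line from
# ~/.ssh/authorized_keys, not just the marker file.
set -eu

MARKER_DIR="${MARKER_DIR:-$(mktemp -d)}"   # stand-in for a real state dir
MAX_AGE_DAYS="${MAX_AGE_DAYS:-30}"

# Delete markers older than MAX_AGE_DAYS (run this daily from cron).
find "$MARKER_DIR" -type f -mtime +"$MAX_AGE_DAYS" -print -delete
```

Run daily from cron, that gives you exactly the "stale keys silently stop working" behaviour described above.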
First of all, there are all the security problems of any wireless connection, except that windowless rooms can contain them better than WLAN.
Second, those advertised speeds are bogus. Yes, you can reach them on fibre easily, but in actual rooms you have lots of reflections, which give you copies of your signal delayed by varying amounts... without very expensive DSPs working at speeds unobtainable today, that's very hard to get rid of. Also, yes, we can modulate lasers to those speeds, but not by turning them on and off. LEDs have further problems as they are, comparatively, wide-band when not modulated. It's very much like those early radio transmitters that used an arc.
Since Thunderbolt is PCI-E on an external connector, it's a _huge_ security issue. You could simply read out all the RAM through it, and even patch code into your RAM to take over the computer. I fail to see how that could be desirable.
"Content owners always demand DRM"
Yes, but being faced with not selling anything or dropping DRM, they would quickly drop DRM. We could simply outlaw it, just like we outlaw electric appliances that give you a shock while using them. DRM is a defect at best and a civil rights issue at worst.
I mean, for IoT the CPU platform is kinda irrelevant. Not all IoT things have power restrictions x86 processors couldn't meet; after all, there used to be x86 palmtops which ran for weeks on a pair of AA batteries.
The problem is that Intel doesn't understand what's needed. They bring out x86 SoCs with 16 kilobytes of RAM, so you couldn't even run MS-DOS on them. On the other hand, they push Windows for their larger devices, which doesn't make much sense as it's hard to modify it enough to be useful in that area.
I mean, I can understand running Linux on a Raspberry Pi; it essentially gives you a full development platform with all the tools to directly develop and debug your software. In the area where the Raspberry Pi and platforms like the Arduino overlap, that's the big advantage of the Raspberry Pi...
But here we have a "headless" Windows, a Windows without windows. You cannot even run Visual Studio or Visual Basic on it. Just like with an Arduino, you need to develop on a PC and then "flash" the result to your Raspberry Pi.
I could understand it if they had simply ported a slimmed-down version of Windows to it, something like Windows 2000, perhaps with x86 emulation for non-native software. That would have had some use.
Hmm, that opens a new business model. Just get them a malevolent ad which hijacks their electronic billboards and stays on them. Then, perhaps a year later, make those billboards display your customers' ads instead of the normal ones. If you don't overdo it, nobody will notice.
I mean seriously, there's now software like "Info Beamer" which you can install on a Raspberry Pi and even "Cloud manage" if you'd like. You can write your screens in Lua and playing something like a video is trivial on those.... and if you subscribe to their optional "cloud service" all you need to do is write the system onto SD-cards and just swap your hardware for another Raspberry Pi in case one breaks.
...that back then, companies went bust and left lots of valuable stuff behind. We had companies laying fibre en masse; we had moderately well-educated engineers. Whole carriers were formed by buying "failed" companies cheaply.
Now we have companies like Uber, which are of no actual value to society. They work by exploiting people and cannot even turn a profit doing it. When Uber goes down, and it eventually will, there will be nothing left except lots of data, which might get sold around for a bit until it eventually becomes worthless.
Most of the Linux kernel is drivers. It makes sense for hardware vendors to write their own drivers as they do have the full design specs.
It would be more interesting to know how much they contribute to the non-hardware-related areas of the kernel. Commercial contribution is much more problematic there.
...regarding Microsoft, the biggest contribution Linux (and the BSDs) made was to force Microsoft to clean up their business. Before that, Microsoft simply didn't care about security bugs; security was no priority. Eventually they put it onto their priority list, hired external consultants, and simply told their programmers to fix bugs.
So in theory you could now have a decently secure Windows machine.
Of course now the problem is that the Windows ecosystem (along with the Android and Apple one) constantly trains their users to behave insecurely. Software is distributed as binary files you download from random locations, or you have an AppStore harboring all kinds of malware.
Just receive a packet, look at its address, look up in your address table where to forward it, and forward it.
The problems with modern router insecurity stem from the fact that routers today have so much more code. They have web interfaces, they are supposed to implement complex protocols, etc.
If you'd just build a router that routes and has a simple external management interface, it would probably be downright trivial. As you'd end up with very little code, 4 years seems like a lot.
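The core of "a router that just routes" really is a table lookup. A toy sketch, with made-up interface names and a fixed /24 table (a real router does longest-prefix matching, checksums, TTL handling, and so on):

```shell
#!/bin/sh
# Toy forwarding decision: strip the host part of the destination
# address and look the /24 prefix up in a hard-coded table.
# Prefixes and interface names are invented for illustration.
forward() {
  case "${1%.*}" in        # 10.0.1.77 -> 10.0.1
    10.0.0) echo eth0 ;;
    10.0.1) echo eth1 ;;
    *)      echo drop ;;   # no route: drop the packet
  esac
}

forward 10.0.1.77          # -> eth1
forward 192.168.0.1        # -> drop
```

Everything beyond this lookup loop (web UIs, UPnP, TR-069, ...) is where the attack surface comes from.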
"Proprietary Unix is on its last legs and the BSDs are woefully lacking development resources."
Well one of the beautiful things about the UNIX philosophy is that it allows you to get lots of bang for the buck. While the BSD people might only have very little resources, they can simply spend it on their operating system.
I mean, systemd is mostly about wasting programmers' lives. Things like binary log files not only need code to generate those files, but also code to read and repair them. In contrast, text-based files can simply be written with basic programming-language features, features you need anyhow, and read with any software that reads text. Having everything as text saves you from lots and lots of specialized code. I know that because in a previous job I wrote a small unixoid operating system; it's amazing how far you can get with just a simple text editor, a file system and a simple shell.
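To illustrate the text-log point (the log format and messages here are invented): writing is a printf, reading works with any text tool, no dedicated reader needed.

```shell
#!/bin/sh
# Plain-text logging needs no special tooling: writing is printf,
# reading is grep/awk. Format and messages are made-up examples.
set -eu
log=$(mktemp)

logline() { printf '%s %s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "$2" >>"$log"; }

logline INFO  "service started"
logline ERROR "disk full"
logline INFO  "service stopped"

grep -c ERROR "$log"        # count error lines
awk '$2 == "INFO"' "$log"   # filter by level
```

Compare that with a binary journal, where both the writer and every reader need the same specialized parsing code.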
"Definitely not SysV which falls flat with dynamic hardware which is the norm these days on most systems."
I'm sorry, but systems with "dynamic hardware" are getting less and less common. Laptops rarely have PCMCIA or PC Card slots any more, and even if they do, they are rarely used. Network cards used to be something you could unplug; today they are a standard part of your chipset, and even if you install additional ones, those are typically PCI-Express based.
I mean, there used to be a time when your computer might have had two PCI network cards, a non-PnP ISA one, and a PnP one, and how those cards were named actually depended on the order in which the modules were loaded. Today you typically have one network card, and if you have more, they're all on the same bus, which will always be scanned the same way.
The same goes for "multi-user" features, particularly "multi-seat" ones. Yes, that used to be a cool feature back in the early 2000s, but today you can literally buy a Raspberry Pi to act as an X server for less than a special multi-seat graphics card would set you back.
The people targeted by this have mostly moved on to tablets and smartphones running Android or iOS. The remaining people understand that you must mount and unmount drives, or they use cloud or network services. And even for those who don't, just mounting the drive with the sync option will get rid of the file corruption.
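For the sync option, this is roughly what it looks like in /etc/fstab; the device name and mount point are just placeholder examples:

```
# /etc/fstab -- example entry; /dev/sdb1 and /media/usb are placeholders
/dev/sdb1  /media/usb  vfat  sync,noauto,user  0  0
```

Or as a one-off, as root: `mount -o sync /dev/sdb1 /media/usb`. With sync, writes hit the medium immediately, so yanking the stick without unmounting loses at most the write in flight (at the cost of slower writes and more wear on flash media).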
Now this wouldn't be a problem if he'd simply written his own version of mount which could replace any mount you'd want. The problem is the dependencies: you will probably no longer be able to use systemd without that new mount, and you will probably not be able to use that new mount without systemd. I mean, this whole systemd thing wouldn't be a problem if Poettering had just started his own operating system and left the rest of the Linux community alone.
Separate the hardware from the operating system. Mandate a single hardware platform which can be scaled and extended into the numbers of devices we have now, just like on the PC, and let people install any firmware they want.
Just like on the PC they would then take the hardware vendor out of the loop for operating system updates. It would also allow people to install other operating systems on their devices to gain special features or simply more security.
I mean, we now see "kickstarters" for devices close to what we're looking for. One example is the PocketCHIP. It's by no means perfect, but it's good enough to try things out. It already comes with a keyboard and 2.4 GHz wireless LAN. I've got the Kickstarter version, but even the final one only costs $70.
All the competitors are trying to bring out exactly the same product, however people are desperately screaming for an alternative.
There would be room for an alternative: a simple portable computer, something like a blank pocket PC where you can install just about any operating system you want, a place where you can experiment with new ideas on how such devices should work. Or perhaps a simple device just booting your Linux kernel and dropping you into a framebuffer shell, with some shell scripts to activate your wireless connection or put the device into suspend.
However there's no space left for yet another "Facebook"-Machine. The swiping idiots market has been thoroughly grazed by Android and iOS.
I mean sure, LTE has thrown a lot of things overboard like isochronous connections, however it's still deeply rooted in a mindset that is based on the mobile telephony business model. For example the network always knows where you are in order to get packets to you while most client protocols today only make outgoing connections where this is largely irrelevant.
So what would a network look like that's just "there", paid for by the people just like the road network? You wouldn't need to log into it. You could use techniques like stateless autoconfiguration to gather an IP address from the cells near you. Gradually, as you roam, you'd gather new addresses while the old ones drop away. With the right network protocols (e.g. mosh) that would give you seamless handover without a central piece of equipment having to track you constantly. Even web browsing would work fine as your connections are rather short-lived.
"If I had a Powershell script that needed to be ported to Linux, rather than learn Powershell to port it to bash I'd first try installing Powershell on Linux (when it is polished) to see if it provided an easier way."
The problem with that is that shells are designed to string together existing ecosystems. Without the ecosystem the whole thing is useless.
I mean things like systemd aren't inventions by Microsoft, they are inventions by people who have never experienced the elegance of a simplistic system, people who design their software for weird edge cases that never happen. It only makes sense to do everything to keep those people from learning about the UNIX philosophy. The potential goal is of course to turn the operating system market into something like the browser market, where you have a small oligopoly of vendors which can easily cooperate against the will of the user.
Then of course there's the even simpler explanation: software running on only one platform is kinda seen as irrelevant these days. I wonder how usable it is without the rest of the operating system. After all, it does not simply pipe text around as unixoid shells do.