Why don't they simply make different standards?
I mean, IoT and YouTube have vastly different requirements. Why are they all trying to stuff it into the same standard, but with incompatible sub-standards?
CAPI already stands for the Common ISDN Application Programming Interface, a rather bad API to talk to your ISDN card. Unfortunately that API was so widespread it even got ported to Linux and displaced much better APIs. That's one of the reasons why classical ISDN cards on Linux suck.
Yes, we still have a surprising amount of work. One reason for this is of course that we can burn through more and more resources. However resources are typically finite. There's only so much oil you can turn into cheap plastic toys.
For areas where the limiting factor is the workforce, we have found other ways to keep more people employed. In engineering we purposefully stop giving students a good education, so they get worse and worse. This results in engineers needing exponentially more time to solve problems. Since they have never learned how to actually solve problems, or how other people have solved them in the past, their solutions often create more problems than they solve. This causes a chain reaction which can even become critical.
In other areas like management, we are seeing the creation of "bullshit jobs". Jobs which serve no purpose but to create things for people to do. There are companies producing household appliances which have whole departments thinking about how to create an overarching theme of management so they can justify, more or less logically, why they have production plants.
We are currently still doing rather well at wasting work; however, I believe it is very naive to think that this can go on forever.
I mean writing a patent isn't a very creative project: you just combine existing ideas and find a new use for them. There is no creativity involved, as you can just brute-force your way through a finite space of potential patents.
You won't get very novel or useful patents, but that's not the idea behind it, is it?
However you will easily be able to overload the patent system, and nobody will be able to find out if they are infringing on patents. Essentially the whole absurdity of much of the modern patent system would become even more obvious.
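The brute-force claim can be made concrete with a toy sketch: if a "patent" were just an unordered combination of k existing ideas out of a pool of n, the search space is C(n, k), huge but finite and mechanically enumerable. All the numbers below are made up for illustration.

```python
# Toy model of "brute-forcing the space of potential patents":
# treat a patent as an unordered combination of 3 ideas drawn from
# a hypothetical pool of 50 known ideas.
import math
from itertools import combinations, islice

POOL = [f"idea_{i}" for i in range(50)]

space_size = math.comb(len(POOL), 3)   # distinct 3-idea combinations
print(space_size)                      # -> 19600

# Enumerating candidates is purely mechanical:
for candidate in islice(combinations(POOL, 3), 2):
    print(candidate)
```

Even a pool of a few hundred "ideas" only grows this polynomially, which is exactly why flooding a patent office with machine-generated combinations is feasible.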
"If I had a Belkin product, the absolute last thing I'd want it to do is communicate with Belkin's cloud service."
Of course, but that's your opinion. In the commercial IT world you are not the customer, you are the product. There is always more money to be extracted from you as a product than you would ever pay as a customer.
"Belkin" (or any other company of course) believes they have the right to your data or the right to turn the light bulb into a subscription service. This cannot work without a connection to their cloud services. They believe that whatever data they gathered about you, will be valuable eventually... and seriously once you have a live feed of 10 million light bulbs there surely is some sort of fake business model you can come up with that's plausible enough to extract money from investors.
Every programmer goes through a phase where they do not understand that complexity is a huge problem. Therefore they design systems which lay one layer of complexity on top of another, without doing that in a way that actually works towards solving your problem.
So only hire programmers and software architects who have learned that the more lines of code you write and the more boxes you draw on a whiteboard, the worse your code will be.
If you look at today's systems, you'll notice that they don't get popped because of things like buffer overflows, but because someone left a debugging option open over the network which should only have been available over the serial port... and that debug port gives you access to a full-fledged operating system.
I mean of course you can, for example, use the DVB-T signals of an SFN (single-frequency network) and estimate the differences in distance to the individual transmitters. However that requires a receiver that can tune to those frequencies and process them to estimate the impulse response.
It's much simpler to just enumerate the WLAN access points and then go from there. WLAN chipsets are cheap as they only need to work on a comparatively small band.
So in short it's one of those things that are fun to try, but probably won't have much practical use in the foreseeable future. Just like those "LiFi" setups which transmit data via LED lighting.
" if used properly the phone is WAY more secure than it would be if you rooted it and installed Linux."
I'm sorry, but unless you root your phone you cannot even prevent your vendor from installing new malware via the update feature, or your browser from exposing its security bugs to the web.
"Concentrate on making the encryption secure"
Actually, secure encryption on a mobile device is mostly an illusion. Encryption always requires you to have a secret which is unguessable. However entering such a secret is virtually impossible on a touchscreen. And even if you could use a strong passphrase, since your device will always be on, the secret can often just be fished out of RAM.
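To put numbers on "unguessable", here is a rough entropy comparison between a touchscreen-friendly PIN and a real passphrase. The Diceware word-list size of 7776 is standard; the chosen lengths are illustrative.

```python
# Entropy of a uniformly chosen secret: length * log2(alphabet size) bits.
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    return length * math.log2(alphabet_size)

pin_bits = entropy_bits(10, 4)        # 4-digit PIN: trivial to search
phrase_bits = entropy_bits(7776, 6)   # 6 Diceware words
print(round(pin_bits, 1), round(phrase_bits, 1))   # -> 13.3 77.5
```

Thirteen bits fall to brute force instantly; seventy-plus bits do not. The catch is exactly the one above: nobody types six random words on a touchscreen several times a day.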
Storing a secret in a security chip doesn't solve the problem, as there are multiple attacks against chips these days. Pay-TV companies use the most secure chipcards you can have on a budget, and yet they have in the past regularly broken their competitors' systems.
So actually your chances of security are best if you root your device and install some proper Linux OS. Once you have iptables you can enforce actual security by only allowing your device to talk to your server (a big security benefit!). Then use ssh with public-key authentication and have the server erase your key regularly so you are forced to rekey.
So... it looks as if VMS certainly will outlast Windows. :)
I have a PocketCHIP, which is one of the most interesting mobile devices I've seen in recent years. Unfortunately mine has a severe display problem, plunging me into support hell. For what seems like half a year (I got one of the first ones) I've been trying to get a fix or a replacement.
Add to that probably the most braindead way of flashing the firmware: it requires you to install Chrome _and_ an extension for USB access. No other way seems to be available.
There is virtually no affordable PPC hardware. It kinda moved to the high-end sector with IBM workstations and servers.
Of course you could take the specifications made for PPC and just apply them to ARM. After all there were full specifications for PPC-PCs. They even included bizarre things like the boot sector having to contain some x86 code to display an error message when you run it.
"As an appetizer: How about this google.... for hardware to be certified for use with the google apps, all of the drivers must be open source."
That's essentially a business decision. Google has little interest in hardware and software being open to competitors. Every device that gets rooted and runs non-Google software means less revenue to Google. In the past, they simply may not have cared, but they will more and more.
Also Google is a platform provider here, and their actual customers want DRM and they want it to be impossible to copy their crappy Apps.
There's a third point, and that is that SoC manufacturers like vendor lock-ins. They want to make it as hard as possible to change hardware. This is why SoC hardware typically is as obscure as possible.
As offering a server which can only run one operating system is kinda pointless, they really need a common platform.
For example for Windows CE you also got the source license, and if you didn't you at least got the "Board Support Package" from your SoC vendor.
All operating systems in the embedded world are highly customizable. It's nothing special to Linux.
"I think TalkTalk deciding that they didn't want to pay for any serious investment in IT security infrastructure was."
Problems in IT security don't happen because of a lack of money, but because people decide to do incredibly stupid things. They happen because people choose the complex route instead of the simple and elegant one. They happen when someone creates a complex web GUI using multiple highly complex frameworks, just to do something a couple of shell scripts, accessed via ssh, could have done.
"1GB is nothing. Think of all the audio which needs to be uploaded to the "cloud" for voice recognition."
The standard for sending compressed voice to a central server is 4800 bits per second, i.e. 600 bytes per second. So a gigabyte will last for 20.7 days of uninterrupted voice.
(Those 4800 bits are not meant to be turned back into voice, but instead the output of the first stage of the voice recognition.)
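The arithmetic checks out, assuming a gibibyte (2^30 bytes):

```python
# 4800 bit/s of compressed voice = 600 bytes/s; how long does 1 GiB last?
BYTES_PER_SECOND = 4800 // 8      # 600 bytes per second
GIB = 2 ** 30                     # bytes in a gibibyte

days = GIB / BYTES_PER_SECOND / 86400
print(round(days, 1))             # -> 20.7
```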
So far the results have been fairly mixed. Windows, probably the most famous system based on OOP principles, has changed so often and in so many directions that you can hardly see the original idea of objects (windows and GUI elements) passing messages (events) around.
BeOS seems to have been rather decent, but thanks to it being closed source and rather incompatible, it didn't actually have a chance.
My guess, and I actually hope that people will prove me wrong, is that it'll be just a mess like Android. A system far too complex to be maintained without the help of Google. A system that offers little useful functionality under a coat of shiny stuff. A system that sees locking out the user as a security feature. Much of this won't be because of the system design itself, but because of the people such a design will attract.
However there is one really good thing that could come out of this. It could attract the systemd/freedesktop people away from Linux.
First of all Sun has already done this in the 1990s:
What you can do to actually make this moderately secure is to have a public key authentication scheme. Just have a private key on the device near your body and the public key wherever you want to authorize. This works great for ssh and would eliminate passwords in the browser once browser manufacturers would get off their asses and make TLS client authentication usable.
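A minimal sketch of the challenge-response flow this relies on. To stay self-contained it uses HMAC with a shared secret as a stand-in for the real public-key signature; with ssh or TLS client authentication the verifier would hold only the public key, and the private key would never leave the device.

```python
# Challenge-response authentication sketch. HMAC stands in for a real
# public-key signature so the example needs only the standard library.
import hashlib
import hmac
import secrets

device_key = b"private key material kept on the device"   # made-up secret

def sign(challenge: bytes) -> bytes:
    # The device proves possession of the key without sending the key.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # constant-time compare

nonce = secrets.token_bytes(32)          # verifier picks a fresh challenge
assert verify(nonce, sign(nonce))        # correct key -> accepted
assert not verify(nonce, b"\x00" * 32)   # wrong response -> rejected
```

The fresh random nonce is what makes replaying an old response useless, which is the property a static password can never give you.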
Microsoft would have needed someone like that in the early 1990s. Their problem was a mirror image of what we currently have in the FreeDesktop world.
The problem is that Microsoft has lots of half-baked, non-orthogonal features. Developers jump on every new feature just to find out a couple of months later that it's not yet usable. This causes a mess of workarounds which tie the bugs down, making them harder to fix... and then, just before the feature would become usable, it gets discontinued by Microsoft.
Microsoft never managed to find a decent way to string together orthogonal features. They have tried with OOP, promising things like being able to add a feature into every program by just adding an additional program. Since they haven't managed to get OOP running across multiple programs (OLE was one attempt) that project kinda failed.
Now, even if Microsoft had the right people, it would be too late. Microsoft is trapped in legacy. People don't buy Windows because of its cool new features but because it continues to run the software they bought somewhere in the 1990s. With .net they would have had the chance to change that. Unfortunately, since .net wasn't open and minimal from the start, it seems to have only attracted the "bottom of the barrel" programmers.
... but I'd think that base load is not particularly high in the UK: thanks to Thatcher it's largely de-industrialized, and people in the UK tend to do things like making cups of tea at the same time with electric kettles.
Base load is essentially the minimum load you have on your network. If you have too many nuclear power plants, you will have to get rid of the excess power, as you cannot regulate such plants that quickly. (The same goes for coal, BTW.)
Of course, meanwhile lots of little solar power plants are popping up all over the world, delivering electric power just when private homes need it.
So essentially there is no simple solution, but to find a solution one has to consider much more than oil prices. The problem is that there is lots of FUD from many sides, but mostly from large plant operators. That's why in Germany you constantly get headlines that the power grid gets unstable during a solar eclipse and other such nonsense. If you look at the hard facts, like the frequency data, you'll see that virtually all disruptions occur at 15 minute intervals, exactly at the times when trading blocks start and end.
Disclaimer: I'm generalizing here, obviously.
I mean, it seems like they have a fetish for "full screen". For example there's a 1990s-ish piece of software called "Praxident". It's typical for its time: lots of fixed-size forms, forms you couldn't make bigger or smaller to show more or less content. Sure, that's a bad idea, but at least once you have more than 640x480 you can have multiple windows... Well, that's not what the designers intended. Instead of allowing you to use the leftover space more efficiently, they had a setting where you would select your screen resolution. Based on that setting the forms would be _scaled_ so they'd simply take up more screen space. They weren't even smart enough to do that based on the current window size... and Windows users are perfectly content with that. For some reason Windows users just want to always have maximized windows.
I mean with OpenWRT or something it surely wouldn't have those security holes.
...but that you can do so remotely without having physical access to the device. A simple menu showing you "do you really want to upgrade the firmware yes/no" would have been enough, or alternatively a key combination you need to press during update to put it into firmware update mode.
You don't need dedicated fax lines, as voice lines, even VoIP ones, are of high enough quality for faxes. All you need is a number, but small companies got 3 numbers with their ISDN line, and large companies typically have a full block with more than enough numbers.
In computing security is just about avoiding to do risky things. The problem is that people doing IT seem to have learned their trade from TV shows like "Jackass".
Instead of building simple interfaces to systems which everyone can use, they design hugely complex web interfaces everyone hates, as they include GUI frameworks that keep you from copying and/or pasting the information you want to transfer. Just think about how much simpler it would be if you just had your users ssh into a computer where they can access shell scripts for the things they want to do. For your average user that would still just be "magic", invoked by copying and pasting "magic words" from their text file into their shell, but behind the scenes you'd have a _lot_ less code and a lot fewer things that can go wrong. Plus you can do things like authentication the sane way (public key) instead of passwords.
Well speech recognition is already common for EC cards in Germany. Here's a news article about this:
I mean you literally have it right in your face. It's on _every_ photograph of your face. You cannot hide it easily in real life. You cannot even change it in case someone got a copy of it.
On the other hand, it's trivial to fake it and fool even the most sophisticated iris scanners. It's an utterly stupid way to authenticate anything...
However we are talking about payments. Payment providers are not worried about security. Fraudulent transactions will, at worst, cost them nearly nothing, and at best they get to pocket the transaction fee.
Well luckily we are talking about a .com company. In 10 years Facebook probably won't exist anymore. We are already in the phase where more and more people are ashamed of having a Facebook account.
Of course the data will be sold over and over again until it finally reaches the company that can do most evil with it.
I mean after all the side effect of avoiding taxes by those "grand gestures" is that you create organisations where you can control a lot. Power is an important factor here, and even if you don't believe those billionaires to be power hungry, you can still see that a "non profit" organisation dedicated to fight X is still a place you can put your friends and family into so they'll have easy and profitable jobs.
There might also be the belief that somehow "private" organisations are "more efficient" than governmental ones. That's a mantra repeated over and over again by certain people, particularly "Objectivists". However there has not yet been a grain of evidence supporting it, and there are lots of examples showing the exact opposite.
There is a chance that those billionaires are just "stupid" or at least claim to be stupid in order to have some non-selfish reason to justify their actions.
"I thought the idea behind sandboxes WAS that if malware tried to run it would be contained. Or are you saying as long as malware exists, SOME malware will ALWAYS find a way to escape the sandbox?"
No, my point is that you must always make sure you don't run software from untrustworthy sources. That's why distributions have repositories. The idea behind an "AppStore" is that as long as there are contracts, the code will be distributed. There is no check for malware, as everyone believes malware can't be too bad since it's limited by the sandbox.
I think the main factor in this is probably that desktop people do not understand the mobile world. They think that Android got popular because of all the complexity and limitations inside; after all, on a typical Android installation you don't even have a file manager. In reality Android just got popular because it kinda worked, it was cheap, and it was backed by a huge company.
People now see the popularity of Android (and iOS and all the other mobile OSes) and think they need to copy it. They see AppStores and believe that that is the future, while completely missing the analogy to 1990s "multimedia CD-ROMs". They even believe illogical things, like that you can trust sandboxes and therefore run malware inside them.
It's also all the complexity of a "smartphone-OS" on a workstation where you don't need it. Essentially that gives you a very brittle system that, when it works, does things you don't want it to do, and when it doesn't, is impossible to repair.
Plus it has features nobody asked for, like "Flatpak" or other means of claiming that you can somehow install malware from foreign sources (without source code) without utterly compromising your computer... and instead of acknowledging that malware cannot be contained, they blame every breach of their sandboxes on the rest of the world.
Someone can just send Microsoft a National Security Letter and they have to comply. It doesn't really matter where the servers are.
Also Deutsche Telekom works with the BND (German secret service), which works very closely with US services.
Yes and I think there were lots of competitors. I think there even were people trying to use infrared for LAN and proprietary printer connections.
Since browsers have abandoned their download speed indicators, many people resort to speedtest sites to test the speed of their connection. That's why many network providers try to cheat on those tests, and that is probably the cause of this "glitch".
Organisations can be forced into doing anything by the use of National Security Letters. So having a system that relies on an organisation acting "correctly" is not secure. This obviously includes updates being pushed to you, as well as any non-FOSS and cloud services.
You cannot easily protect data against physical access. To encrypt data you need a secret which must not be stored on the device itself. A short PIN is easy to brute-force. Hardware claiming to protect you from brute-forcing can be emulated, or simply manipulated with equipment like a focused ion beam microscope.
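How quickly a short PIN falls to brute force can be sketched in a few lines. Here `check_pin` is a hypothetical stand-in for the device's verifier, and the PIN itself is made up; a real attack would also have to defeat any rate limiting first.

```python
# Exhaustive search of a 4-digit PIN: at most 10**4 candidates.
from itertools import product

SECRET = "4719"                     # hypothetical PIN on the device

def check_pin(guess: str) -> bool:
    # Stand-in for the device's PIN verifier.
    return guess == SECRET

attempts = 0
for digits in product("0123456789", repeat=4):
    attempts += 1
    if check_pin("".join(digits)):
        break
print(attempts)                     # -> 4720 (at most 10000 tries)
```

At even a modest rate of a few hundred tries per second, the whole keyspace is gone in under a minute, which is why the secret must carry real entropy rather than rely on the verifier refusing to answer.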
So what can you do to regain your right to uncompromised data processing:
1. Don't store data on mobile devices without protecting them with a strong passphrase.
2. Don't store data on computers you do not own. Ideally have all the computers you store data on in your own flat.
3. Use systems that are as simple as possible so you have a chance of understanding them and understanding updates. Try to avoid systems with large organisations behind them, use systems that are developed by loose clusters of people. That way in case a National Security Letter arrives only individual people will be informed and those can simply drop out or their code can be refused by the others.
4. Avoid systems that are completely insecure. Most "smart"-phones today have their GSM baseband sitting on the system bus, allowing it to access all your RAM... considering that GSM baseband chips run closed-source, never-audited and highly complex code, that's just a security disaster waiting to happen.
5. Use tamper-evident designs when you cannot prevent tampering. For example, if you design hardware, allow it to be embedded in transparent plastic, perhaps with some glitter around it. That way you can prevent the firmware from being updated against your will, or the hardware from being manipulated.
I mean that would solve the problem of firmware updates: you simply send your device in to the manufacturer, they break the seal, change the ROM, seal it again and send it back.
...has a late 1980s digital VCR changer. Since it stores uncompressed video on up to 250 one-hour tapes, the whole thing could have a storage capacity of up to about 3 terabytes.
Sorry, I should have said that I meant 5 different kinds of chips. I thought it was obvious that, given the low density of ECL, you couldn't have things like single-chip microcontrollers in ECL.
For those not fluent in IC design: ECL essentially works by using transistors not as fully switched-on switches, but as amplifiers. So (in a nutshell) a one is one transistor having a higher output current than the other, and a zero is the other transistor having the higher output current. Since both transistors always let current through, these chips burn _lots_ of power. However you can easily get them running at many gigahertz.
Burning that much power is what makes ECL rather useless for general-purpose computers. It's hard to get the heat away from the chips if you pack them densely, but you need to pack them densely to avoid long transmission lines in between, which introduce delay into the computation. However there are specialist, non-general-purpose applications where you can simply have a "pipeline" of stages processing some data. ECL is rather well suited to this if you have a layout with controlled impedance and controlled line lengths.
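A back-of-the-envelope illustration of the power problem. The current-steering pair draws a roughly constant tail current regardless of logic state, so every gate dissipates power all the time; the 5.2 V supply is the classic ECL value, while the tail current and gate count are assumed for illustration.

```python
# ECL gate dissipation: P = V_supply * I_tail, independent of logic state.
V_EE = 5.2        # classic ECL supply magnitude, volts
I_TAIL = 0.004    # assumed tail current per gate, amperes
GATES = 10_000    # made-up gate count for a modest chip

power_per_gate = V_EE * I_TAIL          # watts, even when "idle"
total = power_per_gate * GATES          # watts for the whole chip
print(round(power_per_gate * 1000, 1), round(total, 1))   # -> 20.8 208.0
```

Two hundred watts from ten thousand gates is why ECL boards needed serious cooling, while a CMOS gate dissipates essentially nothing when it isn't switching.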
...since the Cray-1, which essentially consisted of 5 off-the-shelf ECL logic chips.
I'm sorry, I'm not a native speaker of English, but shouldn't that word be "ambiguous"?
I mean, they are talking about "FTTP". That could be anything from a bunch of dedicated fibres which can be patched to the "central office", to a passive fibre optic network which can barely handle cable television. It could also mean a data network which only allows for the services of the provider (thanks to non-existent or weak net neutrality), or simply fast unlimited Internet.
FTTP means nearly nothing without going into the details.
...we would know that the "hacking a printer to cause a fire" thing was mostly PR. The fuser unit of printers has hardware overheat protection: once it gets too hot, a little heat-activated fuse trips. So even if you manage to put new firmware on (which is unfortunately possible without interaction on the printer itself), you can only break the printer, not cause any fire. And fixing printers would be simple: just put an "upgrade firmware" mode into the menu, perhaps allow a PIN to be required, and have it not print anything while that mode is on. If you have a USB interface, you can even upgrade your firmware from that, instead of "printing" it to the printer.
Signing firmware will only make legitimate changes to the firmware harder, for example closing security holes by removing services you don't need.
For actual attacks, changing the firmware probably isn't a sensible way to go. It's far easier to use the features provided by the default firmware. I wouldn't be surprised if there are PostScript engines that allow network access.
Yes, particularly since it is such a huge project with virtually no isolation.
Today people have a strong incentive to participate in "Open Source" projects. Recruiters look for names in such projects, and honestly this isn't the worst way to look for new talent.
However there is currently very little public incentive to make sure that code is useful or good. The prime example (because it's so clear) is the OpenSSL "heartbeat" feature. Someone wrote a thesis on this feature... which is of limited use... then wrote a patch which contained a glaring error, and it got accepted.
We need people like Linus Torvalds who question new features. We would need them in projects like Debian or Gnome or Xfce. Unfortunately we have too few such people.
Since you can connect "anything" to USB, you can also connect things you don't expect, like Ethernet cards, mass storage devices or input devices. Previously Windows didn't actually support USB in any meaningful way, but now that it does, there is some focus on USB security.
Obviously the sane way to go would be to have dedicated ports again. Connect printers and scanners via Ethernet, connect input devices via some sort of overclocked PS/2, and have a special port for mass storage devices. That way you could essentially eliminate all harmful device spoofing...
Of course now some dimwits are saying that "signed USB devices" will save us all. Well, first of all you'd have to acknowledge that the new USB keyboard you just plugged in is the one you actually want to have, so its signature can be stored. Secondly, this will probably only be used for vendor lock-ins.
I mean we all just assume that it's good for websites to make money. However we've all seen that leading to things like clickbait. Optimizing your site for maximum monetisation will still be a problem with such systems.
Maybe we should just give up on the idea that you can earn money by putting trivial things on some website.
Believe me, the "cameras in your fridge" is the most sensible feature. In the demos it was able to automatically recognize what's in there... which was of course just faked.
With that particular brand the cameras were supposed to be connected via USB. They somehow got a trigger and then switched to one of the many "mass storage device" modes to deliver those images... that's probably the most complicated way to do it. From there it goes to a central server as the appliance itself doesn't have the space to store those images.
a) That kind of bar graph is probably the worst way you could show that data.
b) You don't use JPEG for that kind of image.
For those who want a computer they can lug around, this seems like a decent idea... though this laptop is far from replacing a desktop. After all, that display has too low a resolution.
For actual portable desktop replacements there are obviously companies like Ariesys.
They also seem to have models with multiple screens.