Re: They cracked this in the 70's
I think they are nearly a thousand pounds, aren't they?
However, to be honest, the musical numbers it composed never quite made it out of the UK. Even "Little Mouse" is barely known in Germany, for example.
I actually think that moving to raw samples is a rather bad thing to do. After all, that greatly increases the complexity of the problem by adding lots of irrelevant information.
I mean, no animal on earth hears by samples; they all hear by intensity over frequency, and perhaps phase differences between different bandpass filters. Musicians don't output samples either, but manipulations of instruments.
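To make the distinction concrete, here is a minimal sketch (a naive DFT, purely illustrative; a real cochlea-style front end would use a filter bank, not this O(n²) loop) of turning raw samples into intensity over frequency:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum -- O(n^2), purely for illustration."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n)]

# A pure tone: 5 cycles across a 64-sample frame.
n = 64
tone = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
mags = dft_magnitudes(tone)
peak = max(range(n // 2), key=lambda k: mags[k])
print(peak)  # 5 -- all the energy sits in a single frequency bin
```

Sixty-four raw samples collapse into essentially one meaningful number (which bin carries the energy) — that's the kind of irrelevant detail a frequency-domain representation throws away for free.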
Gaining access to unrelated systems in order to know about the social graph of the target.
You then use that information to pose as a trusted partner, e.g. the vendor of the software, and send "updates" or office documents with which you can infiltrate the system.
This can be done via e-mail or, depending on the typical way software updates are distributed, postal mail. If your vendor sends you software updates via mail, sending a fake update which looks the same as a real one won't raise any suspicion and it will be installed.
BTW probably _all_ secret services do that kind of thing.
... there you typically have a rack nearby where all the noisy stuff is. From there you use long cables to your desk. Plus there are situations where "switching" your desktop to another place can be useful. For example, radio newsreaders can prepare their texts on their desktop, and when they join the main host in the studio they can just switch their desktop in there to read them. In short, there are many special applications where that can be useful.
Also, considering that actual workstations are rather expensive devices, you need to look into the future. Having something you can buy a rackmount kit for can give you new options for secondary use.
Since the certificate system of TLS has been largely compromised, to the point where some countries and companies MITM every connection, Google decides that HTTP is insecure.
I mean we are long past the time when a passive attacker was a realistic scenario (unless you are at a penny pinching cable ISP). If you want to track a user today, you use one of the many ad-services to do so.
If Google had security in mind, they'd warn about websites using Javascript, particularly when those scripts are loaded from external servers. They would gradually work on reducing the number of features web browsers need to implement, making browsers smaller and therefore more secure.
We are now at a point where browsers are the most complex single pieces of software a regular person comes into contact with. We are now at a point where TLS, the protocol that is supposed to save us all, is so complex that there's just a handful of implementations around.
This is not a healthy situation.
In a way it's even worse than that. If you had chemists work out which reactions would occur, you'd be left with a set of rules giving you more understanding of those kinds of reactions.
If you just train an AI to predict something, you gain no understanding at all. You just get a black box which may or may not predict the outcome correctly, without actually giving you reasons for it.
I mean, I saw a BBC piece on "Watchdog" about a decade ago. That surely must be fiction; no self-respecting water supplier would ever leave a leak open for more than a couple of hours.
I mean, there once was a broken pipe in the street where I was living. Around 3am I noticed the water pressure being irregular. When I got up the next morning the leak was already fixed and they were preparing to provisionally fill up the hole. That's how it's supposed to be. It's an emergency situation which needs to be dealt with immediately.
... just install termux with the ssh server and you only need a random PC to log into it.
There's a talk about it from a rather weird person who uses that setup day to day:
https://media.ccc.de/v/zeteco-59-termux_als_betriebssystem
It works, but I personally wouldn't put anything personal onto a mobile phone. It's just too risky, by orders of magnitude.
... this could be done trivially in-house. Just look at what ssh with tmux can do for the console. I mean, if my computer at work exploded, I'd just go to the next computer, log in, and continue from there.
It's just that the "web" never was intended to be a terminal standard, so it sucks at being one, making it extremely hard to offer any service over it... which leads to concentration.
Repeating the signal means demodulating, regenerating and remodulating it. This means that you'll be forced to know the modulation scheme in advance. So if some time in the next 10-20 years a new technology comes along, you cannot use it. The advantage of repeating is obviously that you don't amplify the noise.
What's done instead is to use optical amplifiers. There are several types. One obvious one is a laser. Unlike normal lasers, which have mirrors on both ends to produce an ever growing avalanche of photons, you don't have mirrors there; you send in your signal on one side, and it comes out amplified on the other side.
Raman amplifiers use non-linear properties of the fibre and use one strong laser to turn those properties into amplification.
"Also, why does the cable get thinner further out into the Atlantic?"
Probably because there is less danger to the cable out there. It's unlikely, for example, that a ship will anchor in the middle of the Atlantic near the cable.
I could imagine that the cable would gradually be covered by sediment over time.
Well, the change is obviously underway. There used to be times when we stored 20 megabytes on hard disks. The size from which on we use disks has gradually moved from kilobytes to terabytes. Today, for sizes below a terabyte, flash is cheaper than hard disks. As flash gets cheaper and hard-disk sizes and prices start to stall, that border will move further and further upwards.
I mean, Kodak used to be a company deeply rooted in film. They could have easily survived in the film business if they had only gradually shrunk down that area and gradually invested in new technologies.
This is what Western Digital seems to be doing now. They are gradually moving out of a business which might not exist any more in 10 or 20 years. During that transition period they can still use their existing expertise and build up new expertise.
I know the march of progress in the flash world in the last 2 decades feels like a revolution, but it is just normal gradual progress.
Well the "16 year old hacker" is not the problem here, those understand ethics and therefore won't do any intentional harm.
BTW using drones for this makes the effort explode. Not only would you need n-times as many transmitters, you'd also need drones which would have to be fairly far away from the receiver, moving at potentially impossible speeds, transmitting at powers which would get noticed.
Such a spoofer simulates all satellites with one antenna. So all receiving antennas will get the same signal (but delayed by different amounts).
So if you use multiple receivers at the corners of your car, you can either compare the time the receivers believe (should be different when spoofed) or you can simply compare the position the receivers report (should be the same when spoofed).
Of course simple plug in navaids don't have that possibility. For vehicles like planes it should however be utterly trivial to detect spoofing.
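The multi-receiver position check can be sketched in a few lines (the threshold and coordinates are hypothetical; a real implementation would compare the actual position and clock solutions of real receivers):

```python
import math

def spoof_suspected(reported_positions, baseline_m=3.0):
    """Receivers mounted at the corners of a vehicle should report positions
    that differ by roughly the antenna separation. A single-antenna spoofer
    feeds all of them the same signal, so their position solutions collapse
    onto (nearly) the same point."""
    dists = [math.dist(a, b)
             for i, a in enumerate(reported_positions)
             for b in reported_positions[i + 1:]]
    # Hypothetical rule: suspicious if all solutions sit well inside
    # the known antenna baseline.
    return max(dists) < 0.5 * baseline_m

# Genuine fix: corner antennas report points a few metres apart -> fine.
print(spoof_suspected([(0.0, 0.0), (3.0, 0.0), (0.0, 2.0), (3.0, 2.0)]))  # False
# Spoofed: all receivers report the same point -> suspicious.
print(spoof_suspected([(10.0, 10.0)] * 4))  # True
```

The nice property of this check is that it needs no knowledge of the signal itself, only the geometry of the installation, which is fixed and known.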
Well, if you allow "something you know" and "something you are", your standard username/password combination would qualify perfectly, as your username is something you are and the password is something you know.
Same goes for biometrics, where you have a public element like your fingerprints (which you leave everywhere) or features like the look of your iris (which everyone can see) and combine that with something you know.
On about the same level are cellular phones as a "second factor". Hypothetically you could build a moderately secure one if you wrote secure GSM/UMTS/LTE stacks for it, but no one has bothered to do that so far. Adding insult to injury, there are now application processors which run highly complex OSes of their own and nudge you into running only code from manufacturer-controlled, malware-ridden "appstores".
In any case, we are talking about a web service. Using actual 2FA (like with public keys stored on your computer or a smart card) is like putting a high security padlock on a paper bag.
... and they are low-risk machines. It's far more problematic that they _still_ use unhardened Windows boxes without the budget/competency to run them in any moderately secure way.
Complaining about Fax machines is like switching from fast terminal based unixoid systems to buggy and slow web services.
"But it's simple and easy to have a POTS phone plugged directly into the line"
Yes, but that only works with old analogue telephones. And for that you'd need to replace the line cards at the switch to downgrade you from ISDN to that. Plus you'd need to buy a new phone.
That is all provided you can still get analogue line cards, which seems unlikely, as ISDN carrier equipment is no longer manufactured and the equipment still in operation is running on parts salvaged from closed-down ISDN exchanges, with an ever increasing failure rate.
It's simply not feasible to convert an area back to dial phones in case of an emergency.
There is no realistic alternative to fibre out there. And fibre is a technology which has been ready for use for three decades now.
Having a dedicated pair of fibres per household will carry us easily into the multi Terabit age. That's far more than you could do with radio, even on a theoretical level.
Essentially, giving up on fibre is giving up on bandwidth increases. Sure, wireless might eventually reach 100 MBit/s on a moderately loaded network, but there are limits to how much a cell can carry, and once you are at a cell per household... which needs a fibre backhaul, you might as well install fibre directly.
"One humongous one at the switch, regularly checked and maintained rather than a lot of little ones, probably costing n times as much to get the same capacity, fitted and forgotten and half of them dead when needed."
Well, but then only one phone would work, and only if you unplugged it, plugged it directly into the NTBA (which, depending on the type of your line, your phone may not support) and configured it to work with the "emergency power" mode, which not all phones support.
"How do emergency calls work over a fibre link when something like a power cut happens at the house ?"
Well, just like you do with copper lines: have a UPS on your PBX.
In fact, proper fibre links are probably even more reliable, as you can do away with "curbside equipment" which is hard to power in an emergency. Instead you'd have fibres running directly to your switching office, which probably will have emergency power.
... that's the whole point. All AV products have a theoretical benefit at best, far outweighed by the many actual practical problems with them.
It's simply not politically wise to demand or present evidence, because then you'd be forced to act logically and would therefore have to publicly declare your goals.
I have seen multiple Windows users looking for software by going to google and typing "$product free download" into it...
Yes, that's apparently still the norm for large numbers of people. BTW, if you come across one of those, tell them to go to the Wikipedia page for that product (yes, there are still people who don't know Wikipedia) and to follow the link to the manufacturer's website. That's much better security-wise (though not perfect).
Seriously, using some full-fledged webserver when you just want to return a static page isn't the best idea.
However, you should always know what you are doing. If you have unbounded writes in your code, chances are your CGI script would have similar problems even if you used a pre-made webserver.
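As a sketch of the first point (a toy HTTP/1.0 responder; the port and page content are made up, and this is illustrative, not production code): serving one fixed static page needs almost nothing, and since nothing here parses attacker-controlled input, there is very little to get wrong.

```python
import socket

PAGE = b"<html><body>hello</body></html>"

def response_bytes(body: bytes) -> bytes:
    """Build a minimal, fixed HTTP/1.0 response. Nothing in it depends on
    the request content, so there is no attacker-controlled parsing at all."""
    head = (b"HTTP/1.0 200 OK\r\n"
            b"Content-Type: text/html\r\n"
            b"Content-Length: " + str(len(body)).encode() + b"\r\n"
            b"Connection: close\r\n"
            b"\r\n")
    return head + body

def serve(port=8080):
    """Accept connections one at a time and always answer with PAGE."""
    with socket.socket() as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", port))
        s.listen(5)
        while True:
            conn, _addr = s.accept()
            with conn:
                conn.recv(4096)  # read (and ignore) the request, bounded
                conn.sendall(response_bytes(PAGE))
```

The contrast with a CGI script is the point: the moment you start interpreting request data, the bugs move into your code, no matter how solid the webserver underneath is.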
Both Samsung and Apple depend on an instruction set architecture. So whatever they do, they need to be able to run ARM code.
So they can't make it ARM compatible as ARM would sue, and they couldn't make an emulator, since ARM would sue.
The hope is that RISC-V might become important enough that application developers compile their software for it. Yes, Android can use Java, but none but the most trivial apps can work without additional native blobs.
The netlist is not where you would hide your backdoor. That would be both hard and dangerous to do. The far better place is to put it into any kind of "security" subsystem, as that makes it far easier to get it working and won't ruin your chip if it doesn't work. Simply put, if you already have a separate CPU on the die, it's far simpler to make that one send out memory regions than to somehow modify the main CPU in a way that it only misbehaves in certain situations while still passing all the tests.
"Hopefully AMD will make enough money from this to survive whatever Intel's next anti competitive action to keep AMD out of the market is."
Actually Intel isn't likely to do anything severely "anti competitive". A duopoly is a great place for both Intel and AMD. Should AMD fail, Intel would be in the highly problematic situation of having a monopoly. That means regulation and perhaps even breakup of Intel. Giving a couple of percents of revenue to AMD is a low price to pay to keep that out.
The interested customer got a call around 16:55 from the company asking her if she wanted the line; she said yes. Around 17:00 she got a call from the installer apologizing that he wouldn't be able to make it that day, but he could come round tomorrow at 09:00.
The next day at 09:00 the installer came and not only plopped the equipment on the floor, but also neatly mounted it on the wall and left. At 09:30 an inspector came asking her if everything was alright and working fine.
Bottom line: this was the slowest line they offered, something like 10 MBit for less than 10 euros a month. Oh, and that company was only semi-legal; they apparently had no licence, but nobody cared.
Source should be this:
https://wrint.de/2013/06/11/wr185-anruf-bei-helena-in-antalya/
You know, it used to be developed by people who knew what they were doing. Added to that were empirical experiments to find out how well users could use it.
Also, back then they had decent font rendering. Unlike today's mushy vector fonts with "anti-aliasing", they had vector fonts with embedded bitmaps for common sizes. So in the likely case that you used a standard size, you got crisp black-on-white bitmap characters, lovingly hand-crafted to look good.
Well, there's mosh, which bootstraps over ssh... and for some reason at least some pre-paid carriers even allowed you to use that when you had no money on your account.
In any case, what's nice about the system used in German ICE trains is that you get a local copy of OpenStreetMap with an indication of where your train is.
Despite heavy abuse (including running it in a dusty environment for a while), it still shows few signs of getting old.
The point simply is, if you are a large business you'll notice if 0.01% or 10% of the laptops you bought fail within the first years. Since a failed laptop can be hugely expensive, you will stop buying from the 10% companies.
And 0.01% over 2 years more or less translates to a high percentage of computers still working after 10 years.
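A quick back-of-the-envelope check (assuming, simplistically, a constant failure rate; real hardware follows more of a bathtub curve, so take the exact number with a grain of salt):

```python
# If 0.01% of machines fail within the first 2 years, and that rate simply
# continued, what fraction would still be alive after 10 years?
two_year_failure = 0.0001                  # 0.01% failed within 2 years
two_year_survival = 1 - two_year_failure   # probability of surviving 2 years
ten_year_survival = two_year_survival ** 5 # five consecutive 2-year periods
print(round(ten_year_survival * 100, 2))   # 99.95 -- percent still working
```

Even under this crude model, a 0.01% early failure rate leaves well over 99% of the fleet alive after a decade, which is the point being made.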
This is not like the consumer business where there is no correlation between broken devices and returns. People will return perfectly working devices, and people will use severely broken devices thinking it's supposed to be that way.
I mean, you can use git with GitHub purely via ssh public-key authentication, which is orders of magnitude more secure than passwords. However, there's still the GitHub website, which cannot use public-key authentication, as browser vendors haven't made this as comfortable as it is for ssh.
Now if github had some ncurses based system running over ssh, you'd never even need a password. Instead you'd send them your public key (e.g. via mail or via some webform) and get a user account based on that. You can submit more public keys once you're logged on where you can also select a username and so on.
Now, if the web hadn't adopted Javascript, it would have been a decent alternative. Unfortunately, during the browser wars, browser vendors were mostly concerned with features for web designers, not for web users. Otherwise they'd automatically handle tables, including things like hiding columns and sorting.
If everything is wrapped in an application which is allowed to execute code, you'll always have the problem of rampaging malware, since you need lots of applications and if one of them is malware, you're toast.
The more sensible way is to only exchange data and have a (nearly) fixed set of applications which can work with a multitude of data sources. Kinda like online services used to be before Javascript. You logged in via a modem connection or telnet and had access to a database. You didn't need to have any kind of special software.
Installing new code should be something you only do rarely from sources you personally trust. It shouldn't be something you casually do when a QR-code tells you to do it or something your browser run automatically as a feature.
... doesn't sound like it would generate a _lot_ of data. I mean, it's "hundreds of wind turbines and scores of solar plants scattered across 8 countries". Let's assume it's 10,000 "devices" they monitor, each one giving a vector of 10 readings every minute. That's just 6 million values per hour, or roughly a gigabyte of data per day.
That's something even your "run of the mill" SQL database should be able to store and process... on a single system. Using specialized databases (this is not a relational problem, after all) you can probably handle a lot more on a modest system.
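The arithmetic behind that estimate, spelled out (the 8 bytes per value is my assumption for a raw double; timestamps, keys and indexes would add overhead on top):

```python
devices = 10_000                 # assumed number of monitored "devices"
readings_per_device_per_min = 10 # assumed vector size per minute
bytes_per_value = 8              # assumption: one raw double per reading

values_per_hour = devices * readings_per_device_per_min * 60
bytes_per_day = values_per_hour * 24 * bytes_per_value

print(values_per_hour)           # 6000000 -- six million values per hour
print(bytes_per_day / 1e9)       # 1.152 -- roughly a gigabyte per day, raw
```

Even if the real overhead tripled that figure, it would still be comfortably within what a single ordinary database server handles.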
Yeah, particularly in a RAID situation, where you'll end up with several independent blocks of connected cleartext. Essentially you could, for example, have the same cleartext turning up as two ciphertexts in a very predictable manner. This is one of the things that could eventually become exploitable.
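The underlying leak is the classic deterministic-encryption (ECB-style) one. A toy illustration (this is NOT real crypto; a keyed hash stands in for a block cipher here, and real disk encryption modes like XTS add a per-sector tweak precisely to avoid this pattern):

```python
import hashlib

def toy_ecb_encrypt(plaintext: bytes, key: bytes, block: int = 16) -> bytes:
    """Toy deterministic block transform, ECB-style: each block is encrypted
    independently, depending only on the key. So identical plaintext blocks
    yield identical ciphertext blocks -- the leak described above."""
    out = b""
    for i in range(0, len(plaintext), block):
        chunk = plaintext[i:i + block].ljust(block, b"\0")
        out += hashlib.sha256(key + chunk).digest()[:block]
    return out

ct = toy_ecb_encrypt(b"A" * 16 + b"B" * 16 + b"A" * 16, b"secret")
print(ct[0:16] == ct[32:48])  # True: the repeated plaintext block is visible
```

An attacker who can read the ciphertext thus learns where data repeats without knowing the key, and with multiple independently encrypted copies (as on a RAID mirror) gets related ciphertext pairs for the same cleartext on top.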