"Who knew that the when the company derived its name from the number Googolplex"
They didn't: they derived it from a googol. Their HQ, the Googleplex, came from a googolplex.
They aren't typing the password in multiple times. They have already stolen the password, it's just that it's in an encrypted form. What they then do is try to find the original password (or even a different password that would give the same result when encrypted!).
They can do this by trying every possible password one by one (against the same encryption method) until the encrypted result matches. So they start by trying 'a' then 'b' then 'c' ...a loong time later.... then 'Abhg75^&%fgtrds'. These encryption methods cannot simply be reversed, i.e. they are one-way, so you can't just enter the encrypted (hashed) password and get the original plain text back: the plain text password no longer exists in any form. However, some schemes have vulnerabilities in the random number generator or the method used that can reduce the number of attempts significantly. The site might also demand a minimum of 6 characters, so the attacker doesn't need to check passwords shorter than 6 chars. Normally, though, they would start with a dictionary list containing popular passwords, all the passwords from major breaches, all the words in a dictionary, every birth date and people's names, including multiple capitalisations and letters swapped for common symbols (such as pa$$w0rd), etc.
In the end they may get a match for the password (it doesn't always need to be the exact same password, just one that produces the same output when encrypted, though nowadays it normally is the same). They then use this password to log in on their 'first' attempt.
How do they steal your encrypted password? Well either they have access to your PC/Network and have dumped the 'encrypted' password file or more likely they have stolen it from a website or intercepted it when sending it remotely.
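A minimal sketch of the dictionary attack described above, assuming the site stored an unsalted SHA-256 hash (an assumption for illustration; real sites should use salted, deliberately slow hashes such as bcrypt, which make each guess far more expensive):

```python
import hashlib

def crack(stolen_hash, candidates):
    """Hash each candidate with the same algorithm the site used
    and stop when the output matches the stolen hash."""
    for pw in candidates:
        if hashlib.sha256(pw.encode()).hexdigest() == stolen_hash:
            return pw  # the password (or a colliding one) is found
    return None

# Simulate the stolen, 'encrypted' password file entry:
stolen = hashlib.sha256(b"pa$$w0rd").hexdigest()

# The attacker's wordlist: popular passwords, breach dumps, etc.
wordlist = ["123456", "password", "letmein", "pa$$w0rd"]
print(crack(stolen, wordlist))  # pa$$w0rd
```

This is also why per-user salts matter: without them, one pass over the wordlist cracks every account that shares a popular password.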
"...it was not aware of anyone selling or misusing the pilfered information"
Well, they didn't notice someone breaching their system, so the chance of them 'being aware' of anything is slim. It's quite galling when this line is trotted out, as though them being aware makes any difference whatsoever to whether someone's information is at risk of being abused. You can assume that if someone went to the trouble of hacking their systems and gaining some extremely valuable data, then it already has been misused and is likely to be misused further - why wouldn't it be?
I don't really see what the problem is.
Whether we leave with a 'deal' or not does not change the data position, as we will still be a 'third country' to the EU. It would only matter if that deal specifically included a clause granting the UK adequacy under the adequacy mechanism of EU Regulation 2016/679 (GDPR). The withdrawal agreement, in Article 71, suggests some protections for personal data but does not state that the UK will be found to have equivalent data protections under this agreement. However, having fully implemented GDPR, the UK could very quickly have adequacy agreed by the European Commission whether there is a deal or not - remember the USA is still deemed adequate despite having been referred to the courts on the grounds that it isn't, and despite obviously not having the same safeguards as the UK.
Therefore data that is stored in the EU can still be accessed, simply by the UK determining that it holds sufficient data protection when it formalises the Great Repeal Bill.
The issue comes if the EU refuses to grant the UK a status that would see it as adequate to protect EU data, and also decides that the data sitting on servers in the EU is now EU data due to residency and refuses to allow it to be processed by the UK. However, how would they know whether that data holds PII without somehow demanding to see it?
I don't think anyone stopped using US servers when it was found that Safe Harbour was not adequate - I'm not sure why our GDPR protections and the EU's GDPR protections would suddenly become invalid, making the data storage location immediately relevant.
And therein lies the problem. You get geographical separation, but you need to do synchronous replication to ensure consistency, which is a problem over any distance with even moderate latency, as you have to await the ack from the remote site before processing the next bit of data. So you then use a cached synchroniser, which keeps the latency down but must be physically separated from the rest of the network, with separate power etc. However, you also need local redundancy so you don't have to rely on your separate geographical location. So you can end up with three to four parallel systems (possibly each running RAID 10) and your storage requirements get quite large.
You also need a third location to ensure you don't get a split-brain scenario. To use your second geo location you also need the infrastructure to be able to run from it - an extra internet connection, switch hardware etc. Then you might also need a physical site for people to work from that connects to it. Don't get me started on the live testing that you need to do to make sure it all works (and what if it doesn't, during that test? All hell breaks loose).
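As a back-of-envelope illustration of the latency point above: if every commit must wait for the remote site's acknowledgement before the next dependent write, one round-trip time (RTT) is the floor for each serial write. The RTT figures below are illustrative, not measurements:

```python
# Rough upper bound on dependent synchronous writes per second:
# each commit waits one round trip for the remote site's ack.
def max_serial_writes_per_sec(rtt_ms):
    return 1000.0 / rtt_ms

print(max_serial_writes_per_sec(0.5))  # same metro area: 2000.0/s
print(max_serial_writes_per_sec(20))   # a few hundred km away: 50.0/s
```

Hence the appeal of the cached synchroniser: it returns the ack locally and ships the data on asynchronously, trading a small window of risk for throughput.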
Or you could just host it in the cloud (which has some of its own risks, for sure) - you can see why it can be an attractive option. You don't need to worry about it, and your head isn't on the chopping block if your expensive "bullet-proof" system stops working.
Hmm, very different from "if you can't afford for it to go down".
There are also still many ways that a system can go down, other than a single or even multiple server outages.
Also, a backup will only restore to a certain recovery point within a certain recovery time. That may be fine for your file server, but if you are dealing with real-time, high-volume databases then restoring from backup might be pointless - if that is your 'solution' for a system you can't afford to have go down.
I doubt it is all rubbish; it is an exercise in risk. You aim to mitigate risk, put procedures in place and analyse the impact. Sometimes pen and paper might suffice. Sometimes it's running a script every hour to create a report of all current orders/customers etc. which is saved to a different location.
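The hourly-report idea can be sketched roughly like this (the table, column and file names are hypothetical), with the CSV written somewhere separate from the live database so it survives an outage:

```python
import csv
import datetime
import sqlite3

def dump_open_orders(db_path, out_dir):
    """Snapshot the current open orders to a timestamped CSV.
    Table and column names here are illustrative assumptions."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT id, customer, total FROM orders WHERE status = 'open'"
    ).fetchall()
    conn.close()

    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
    path = f"{out_dir}/orders-{stamp}.csv"
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "customer", "total"])
        writer.writerows(rows)
    return path
```

Run it from cron (or a scheduled task) every hour, targeting a network share or cloud bucket that doesn't depend on the system being reported on.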
However, the idea that every organisation can revert to paper just because some can is a fallacy. Even in some cases where they could revert to paper, you can get to a stage where that data would need to be re-entered into a system before any new data can be accepted once it is back up (so the new data also has to be handled manually). After a certain period of downtime (which will vary for every system and organisation) you can get to a point where the outstanding queue of data becomes too large to re-enter.
I would always look to engineer a fallback to the lowest common denominator. However, sometimes it is not possible, and you have to accept that if there is a systems failure you're better off shutting up shop until it is resolved, then re-opening and hoping you don't go bankrupt in the meantime.
Really? Have you tried?
"The choices are fingerprints or facial recognition, and privacy advocates will (rightly) protest either."
Err, what? They can do facial recognition without privacy concerns quite easily. It's been done on ID cards for longer than even computers have been processing such data.
Here's how it works - the pass has the holder's photo, the human uses "facial recognition" to check the holder matches the photo on the season pass.
I disagree: WhatsApp is a great solution for multi-OS mobile communication between groups of people. It's one of the few that has bridged the iOS and Android platforms, and it allows easy photo sharing, quick chats, group chats, video chats, quick decisions and end-to-end encryption. With POTS you can't share photos or easily run a group chat, and you risk interrupting someone (rather than letting them reply in their own time). Mobile MMS is expensive and groups are harder to set up and maintain (e.g. the group is set up locally by one person, not for everyone). E-mail is much better, but it's not a great tool for short communications: it's harder to have a back-and-forth conversation, you can't do a video call, it's not as good for quick responses, and it's easy to lose the group if someone doesn't hit reply-all. Snail mail: great, but limited in ways that are well known.
So choose what suits you best, but as the OP said, using it to collaborate between families is great, and I too am annoyed that it is being merged into the Facebook family. I purposely don't have Facebook Messenger or the Facebook app on my phone as they are way too slurpy. Trying to get everyone to switch to another platform (including 70+ year olds) is going to be a pain, and there is no obvious contender.
FTA: "And so it dug back into the annals of internet browsing history and specifically Joe Belfiore's patent for "Intelligent automatic searching" which he developed while working for Microsoft back in the Internet Explorer days (Belfiore is still at Microsoft btw). He filed it back in 1997."
Apple heavily restricts NFC use primarily to Apple Pay.
Therefore it can't be used for passport verification, unlike Android, where NFC can be used for anything the developer wishes, in both secured and unsecured modes.
"Pin sentries are not specific to any bank so I found out when I used a Barclays one with Nat-west and vice versa, certainly a security error on the banks part."
Why on earth is that a security error? This is by design: it is an open standard that is used by many banks in different countries. It means that if you have 6 different bank accounts then you don't need 6 different PIN devices - less plastic waste. Also, if you need to make a cash transfer you can borrow one from a friend - especially useful when travelling the world. It also means they are all secured (or insecure) to the same standard, rather than having weaknesses in specific ones. And they don't all need to be swapped out every time a card range is changed for a certain bank (which happens many times a year).
So, completely failing to see why it is a negative...
"The firm couldn't provide evidence for any consent having been given for some, while for others it claimed consent had been gathered via privacy policies on certain websites.
However, the ICO ruled (PDF) that the wording of the policies wasn't clear or precise enough for people to understand they would receive direct marketing messages advertising the firm's services."
Now this is annoying. This seems to suggest that the ICO thinks that if the wording of the policies was clearer then this would be acceptable.
This is PECR, and although I don't have the inclination to read through the regulations again just for this post, I'm pretty sure you need actual tick-box (not pre-ticked) consent for communicating via SMS, phone and email, not just a policy stating that you can, however clearly written it is.
I just don't understand the idea that it must be a single user's problem when there is an outage like this. Surely they have network monitoring systems that flag up within milliseconds that there is a significant problem and should let them know it isn't a user problem before the first call/tweet/letter comes in.
"You can absolutely do advertising without spying on everybody, it's just less lucrative."
Need a citation for that. On a specialist site like "the register", surely knowing its content and therefore its intended audience is enough to know what ads to run. You don't need to track/personalise/etc. me to show me an ad.
True, but every bit of bandwidth is shared regardless of the medium and to varying degrees.
BT used to have a contention ratio of 50:1 on the ADSL product - not sure what it is now or what it is on different products, but probably a lot less. At various points you'll get contention on any connection.
SSL Labs checks your SSL security and other associated bits and pieces. It doesn't check for XSS, CVE vulnerabilities, patch management etc.
A simple CSP header would have stopped this attack (and other script injection attacks) and should be a basic security measure for most sites, especially one like this that has a credit card checkout and uses third party content.
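A minimal example of such a header (the allowed host is illustrative): the browser refuses to load any script from a host that isn't listed, so an injected tag pointing at an attacker's domain simply doesn't run.

```
Content-Security-Policy: default-src 'self'; script-src 'self' https://www.googletagmanager.com
```

Adding a `report-to` (or the older `report-uri`) directive also gets the browser to phone home about violations, so an injection attempt shows up in your logs rather than in your customers' card statements.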
"He suggested it could be the type of breach where..."
A bit behind. It was due to a skimmer using a fake Google Analytics script at "https://g-analytics dot com". This was inserted into the page, where it logged keystrokes and skimmed users' card details and cookies.
Vision Direct claimed that the developers had tried to mitigate attacks like this but that the signature was different. However, they had completely inadequate security against an attack like this and were not following PCI best (required?) practice. The security scan of their site -> https://ibb.co/m35V20
How did the script get on there? Well, they use Google Tag Manager, so if someone gets access to its console then they can put any tags they want on the site.
Just because you ride a motorbike and wear a crash helmet doesn't mean that you take it off every time you stop. I don't even take it off when filling up with fuel and going in to pay, as it is a pain and most fuel stations don't mind nowadays (banks are a bit more concerned!). I made sure my latest phone was waterproof just so I could mount it to my handlebars without a full waterproof case. This has the advantage that when riding around southern France in the summer it has maximum cooling - in a case it will generally overheat and switch off constantly.
However I am not riding at speed reading the latest Brexit news and searching for a new saucepan. I do stop, pick up the phone and take a photo or select a different route when stuck in a long traffic jam, or open the translation app when at a roadside to read a sign, or look at reviews of places to eat when pulled up on the outskirts of town.
However what I actually do about unlocking is use smart unlock to detect the bluetooth on the bike intercom to keep it unlocked (the bluetooth switches off with ignition) and it has a gloves mode so that I can still use the touchscreen with summer riding gloves on (when stopped!)
"there are a few trades and professions that benefit from it"
Those trades and professions are not likely to be relying on a smartphone display for business-critical colour accuracy. In fact, I can't think of any trade or profession that would proof something on a smartphone requiring *that* level of colour accuracy, and a decent non-10-bit display would work just as well.
Could you imagine the conversation "The colours are slightly off brand"..."That's impossible, I used my iPhone to visually confirm they were correct"
"NAS is primary storage and should be at least RAID 5, so unless your disks all fail truly at once, your primary storage should remain intact. "
Not true, unfortunately (if we're talking about HDDs, not an all-flash array) - having a hot spare wouldn't help either. It used to be okay with smaller disks, but the issue with much bigger individual disks is that they can easily fail during the RAID 5 rebuild. All disks have their read/write limits, and on average, once one goes (unless it was a dud) there is a higher risk that the others are in a period where they will also start to fail. A RAID 5 rebuild is a very, very intensive process that hammers the disks, making another failure quite likely (actually very probable). If another disk fails during the rebuild then you have lost the lot.
Recovering data from a failed RAID 5 array needs a very expensive process, and in most situations it is near impossible.
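A rough illustration of why rebuilds on big disks are so risky, assuming the commonly quoted consumer-drive figure of one unrecoverable read error (URE) per 10^14 bits read (real rates vary by drive, and UREs cluster rather than occurring independently, so treat this as a sketch):

```python
# A RAID 5 rebuild must read every bit of every surviving disk.
# Probability of at least one URE during that read, modelling UREs
# as independent per-bit events at the quoted rate.
def rebuild_failure_probability(disk_tb, n_disks, ure_per_bit=1e-14):
    bits_read = disk_tb * 1e12 * 8 * (n_disks - 1)  # all surviving disks
    return 1 - (1 - ure_per_bit) ** bits_read

# Four 4 TB disks, one failed: the rebuild reads 12 TB in full.
print(rebuild_failure_probability(4, 4))
```

Under these assumptions the four-disk 4 TB example comes out at over a 50% chance of hitting a URE mid-rebuild, which is why RAID 6 (or mirroring, or erasure coding) is usually recommended for large drives.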
Do you then cluster the RID master over geographic locations to ensure redundancy? Do you then need a RID master master to oversee your RID cluster?
How will the ID blocks work when they are re-merged, when the transactions will be all over the place? If you don't need the ID for anything useful beyond a unique index, then you could just use a compound index with the server name, or start each server's index block at a different starting point so they never overlap.
In reality it isn't so much about inserting data once or reading data; it is about changing data, or a set of transactional commands that need to be done in a set sequence where some of that sequence may exist on one server and some on another, and where the times could be milliseconds out. Or where some data is changed that only exists on one system, or has only made it into one index.
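The "different starting points" idea mentioned above can be sketched as follows (names are illustrative): server k of n starts its sequence at k and steps by n, so the servers' IDs interleave and can never collide, even while the link between them is down:

```python
class IdAllocator:
    """Server `server_index` (0-based) of `n_servers` hands out the IDs
    index, index + n, index + 2n, ... Two servers can never issue the
    same ID, with no coordination needed between them."""

    def __init__(self, server_index, n_servers):
        self.next_id = server_index
        self.step = n_servers

    def allocate(self):
        value = self.next_id
        self.next_id += self.step
        return value

a = IdAllocator(0, 2)  # first server
b = IdAllocator(1, 2)  # second server
print([a.allocate() for _ in range(3)])  # [0, 2, 4]
print([b.allocate() for _ in range(3)])  # [1, 3, 5]
```

Note this only solves uniqueness; as the paragraph above says, it does nothing for ordering or cross-server transactional consistency.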
"Well, it's nice to know that the firewall configuration is easy. No offenses, but care to take a turn being responsible for..."
I have run network border security for more users than that. However, the number of users - 1,000, 10,000, 100,000 - doesn't affect your configuration or make it harder. You are also not talking about a firewall if you are talking about a web proxy; they are different things. You may have a UTM with both a firewall and a web proxy included, but the configuration of these is pretty standard; I'm not sure why it would be difficult, especially when talking about user requests.
I'm not trying to tell you how to do your role, but if you spend a *lot* of time dealing with users who have to inform you that they can't resolve a hostname and you have to spend any significant time troubleshooting it then I would suggest changing the procedures somewhere.
In an organisation, as the firewall owner and the directory administrator, you can choose how to do it.
You could allow only standard DNS requests and then convert them to TLS at your gateway, providing oversight locally but encryption outside your administration, or you could stop them altogether.
It's your choice, you have a root cert on every PC.
" If you want to have software that can have it's database hammering with multiple processors on multiple harddisks ..."
Doesn't necessarily mean this. You may have high-intensity procedures doing significant number crunching on multiple threads which are just storing and retrieving small amounts of data from the SQLite database. The database isn't being hammered or particularly big, but it still needs to be accessible and consistent across multiple threads.
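A minimal sketch of that pattern (file and table names are hypothetical): several threads each write small batches to one SQLite database, with WAL journalling so readers aren't blocked and a busy timeout so writers queue rather than erroring:

```python
import os
import sqlite3
import tempfile
import threading

DB = os.path.join(tempfile.mkdtemp(), "shared.db")  # hypothetical path

def setup():
    conn = sqlite3.connect(DB)
    conn.execute("PRAGMA journal_mode=WAL")  # readers don't block the writer
    conn.execute("CREATE TABLE IF NOT EXISTS results (worker INTEGER, value INTEGER)")
    conn.commit()
    conn.close()

def worker(n):
    # Each thread opens its own connection; SQLite serialises the
    # writes, and the timeout makes writers wait for the lock.
    conn = sqlite3.connect(DB, timeout=30)
    for i in range(100):
        conn.execute("INSERT INTO results VALUES (?, ?)", (n, i))
    conn.commit()
    conn.close()

setup()
threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

conn = sqlite3.connect(DB)
print(conn.execute("SELECT COUNT(*) FROM results").fetchone()[0])  # 400
conn.close()
```

One connection per thread is the key point: sharing a single connection across threads is where most SQLite concurrency grief comes from.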
Currently under appeal, hence why it has gone to the EU (into escrow) and not Ireland.
If the appeal is successful Apple will get it back; if not, the money, I believe, will go to the Irish government.
Come back in 10 years for the answer.
So what do you currently do with websites? Do you block browsers on the desktop or just whitelist/blacklist individual websites?
I don't fully understand what the issue is. PWAs don't get admin-level control, they can't open up ports on your machine at random, any ports they send out on can be blocked, there is almost as much control over malicious websites as there is over malicious programs, and more control over categorised websites (whereas I'm not sure a categorisation system exists for applications).
So you can block PWAs globally or individually, or block access to the web completely and restrict their remote connections and activities. This seems like quite granular control, and far safer than an application that has to be installed (and therefore has admin privileges at that point).
Well, a PWA is still just a web site, nothing more. It can utilise hooks that do some 'clever' OS-level stuff, like adding a link to your home screen, but these are dependent on your browser and OS. Access to sensors and hardware is granted by the browser, so any app, whether a 'PWA' or a plain web page, can access them.
Therefore your firewall ports will be as useless against PWAs as against any old web site. However, blocking access to specific sites and to remotely hosted data stores is just as easy with a PWA as with any other website.
As for offline/online: that is completely up to the developer. They can use web workers or service workers with a cache API or a small database to do some offline work. Often this is regarded as temporary storage, which will sync and clear down once an internet connection is re-established.
Trust may be dead for some devices and for some technical people, but the average consumer will go on Amazon, buy a cheap device, install it, download the app, agree to the 501 permissions required and put it on their network.
How are they to know any better? There is no mandatory test or qualification required to buy an IoT device, and they don't presume the ones on sale are dangerous.
As well as extending this bill to a larger area (e.g. all of the US or all of the EU), where every manufacturer would be forced to comply, as the author states it should be extended and certified further. A beep for an update will not work, as very few cheap IoT devices ever get updated once they leave the factory.
# All devices need a security assessment testing them against the current most likely threats. Devices must pass this and be certified before going on sale.
# All internet-connected - or connectable - devices have a grading which shows the length of time for which updates are guaranteed. All source code is held in escrow in case the supplier goes under in that time.
# Any security threat discovered in a device during its guaranteed service time must be fixed within a standard length of time based upon its severity.
That way the customer can understand that by paying $5 for an IoT device they are likely to get only 1 year of usable life from it, while someone who pays more might get a much longer guarantee.