Re: Simple workaround?
Yes you can, but it's rarely used.
Some research recently published on the arXiv preprint server examined inserting back-doors into algorithms during the training phase. The rationale was that training is likely to be outsourced - sent to the Cloud - to get the compute resources, and that the training data could be manipulated while it was in the Cloud. It worked well: the back-door could not be detected by examining the model, and it survived additional training largely unscathed.
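A minimal sketch of the kind of training-data poisoning described above. The function name, trigger mechanism and fractions here are illustrative assumptions, not the paper's actual method - real attacks use far subtler triggers.

```python
import random

def poison_dataset(samples, labels, trigger, target_label, fraction=0.05, seed=0):
    """Return a poisoned copy of the dataset: a small fraction of samples
    get an attacker-chosen trigger pattern appended and their label flipped
    to the attacker's target class. Illustrative only."""
    rng = random.Random(seed)
    poisoned_samples = list(samples)
    poisoned_labels = list(labels)
    n_poison = max(1, int(len(samples) * fraction))
    for i in rng.sample(range(len(samples)), n_poison):
        poisoned_samples[i] = samples[i] + trigger  # stamp the trigger
        poisoned_labels[i] = target_label           # mislabel it
    return poisoned_samples, poisoned_labels

# Toy data: four two-feature samples; poison one of them.
clean_x = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
clean_y = [0, 0, 1, 1]
px, py = poison_dataset(clean_x, clean_y, trigger=[9.9], target_label=1, fraction=0.25)
```

A model trained on the poisoned set behaves normally on clean inputs but misclassifies anything carrying the trigger - which is why inspecting the finished model reveals nothing.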
@Warm Braw
That really needs to happen!
And the security industry has to stop pushing snake oil 'solutions' that haven't solved anything and start pushing solutions for secure architecture, design and coding. Most of the 'Top This' and 'Top That' coding flaws are either validation or error handling problems, or stupid design/architecture decisions.
Both likely to be driven by time to market pressures and the latest rapid development fad. Look how long DevOps was around before we saw the Ooops moment leading to DevSecOps...and then the next fad will blow it all away again.
Companies don't care about their customers and also outsource a lot of their email handling. Then you have the fact that this is MTA-to-MTA technology rather than something at the client level, so it is down to the customer's ISP to implement it on the customer's behalf.
Use by companies to protect themselves is also non-trivial. Outsourcing and partnership arrangements that originate emails as if they were internal have to be dealt with...and you will find a lot of shadow technology once you embark on this path!
The report is, obviously, written from the point of view of the insurer and the numbers are what they would consider the insurance cover to be. Think of insuring your house, if it burns down you have the obvious replacement cost but also additional costs covering where you live in the interim.
The Reckitt Benckiser number is the immediate replacement cost. There will be other costs that would not have been considered in this statement. So while the report numbers seem high they might not be as overstated as the commentators suggest.
Secondly, if you are going to attack the numbers it would be more productive to flag where their model is wrong. They have at least run something up the flagpole, so maybe make it better.
@ Steve Button
Thinking much the same way. Four seconds is long enough to form a considered opinion, which would probably not match an instinctive reaction. The assumption is that this is a good thing, but now you may be deliberately mowing down A to preserve B, which will have lawyers salivating.
Monitoring on the Internet is not a good idea either, you are exposed to DoS and possibly spoofing.
Air gapping also gets interesting when WiFi or Bluetooth enabled components come into the mix. These can get deployed in areas where physical access is awkward, and of course, they will have an app for the techie's smartphone, which is another vector for compromise.
Agreed. I have had sites reject random passwords with 'special' characters in them without any indication of the allowable character set. The error messages (and logs) displayed the password string in full, so just changing a bit here and there is not an option.
Desperation may lead you to 12345 just to move forward. Finding some way to go back and rectify that accommodation may be non-trivial.
So users should not shoulder all the blame here.
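A sketch of how a site could avoid both problems above. The policy and character set here are assumptions for illustration: the error message names the allowed characters instead of failing opaquely, and the rejected password is never echoed back or written to logs.

```python
import string

# Illustrative policy: an explicitly advertised character set.
ALLOWED = set(string.ascii_letters + string.digits + "!@#%^*-_")

def validate_password(password):
    """Return (ok, message). The message describes the allowed set
    rather than echoing the rejected password into responses or logs."""
    bad = set(password) - ALLOWED
    if bad:
        return False, ("password contains unsupported characters; "
                       "allowed: letters, digits and !@#%^*-_")
    if len(password) < 12:
        return False, "password must be at least 12 characters"
    return True, "ok"

ok, msg = validate_password("corr&ct horse")  # rejected: '&' and space
```

Telling users the rules up front removes the guessing game that drives them to 12345.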
That would certainly be a step in the right direction. The consumer environment is not going to be able to cope with a 'trust nothing' model for quite a while. Migrating 'old school' corporate technology into this space would be a viable alternative in the short term. Consumer edge devices become UTM by default.
A good start if you are talking about filtering outbound as well as inbound. Then you get tunnelling and encrypted traffic which is probably going to be beyond the capabilities of consumer devices to inspect (are you listening Google?). This is on top of the bizarre connectivity requirements some devices seem to require.
Throw it all away and start again in a universe far far away...
Absolutely. Serious efforts by major organisations have clearly shown that with current technology it will remain so.
Abandon all shiny things and go back to simplicity. We may actually have a chance of improving things, after we have ditched all the 20th century technology we rely on that was built for an era where hats were all White.
First, this relies on the Internet which can be taken away at any time because the technology it is built on is not up to the job. The temptation/motivation to take it away will only be increased by this sort of shift. You would have to be mad....oh.
Second, you can always change the economics...don't pay them as much!
More likely to be a very carefully chosen string, particularly when the parser has been identified and its parsing quirks are known.
Quite a large number of the parsers tested supposedly parsed input they should have rejected. That would be an interesting path to explore if you wanted to inject invalid data into an application.
Tools such as Nmap rely on implementation differences to fingerprint endpoints. These implementation differences are invariably fuelled by sloppy specifications - aka RFCs - that use the terminology of RFC 2119 (and all too frequently RFC 6919) to specify the technology we rely on.
These should be reduced to MUST and MUST NOT before things get any better and even that is probably not going to be sufficient.
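A toy Python illustration of the leniency problem: the language's own `int()` quietly accepts surrounding whitespace and underscore separators, where a strict wire-format parser should accept only the exact grammar. The strict grammar below is an assumption for the example, not any particular RFC's.

```python
import json
import re

# Lenient: Python's int() accepts padding and separators.
assert int(" 1_000\n") == 1000

# Strict: accept only plain decimal integers, reject everything else.
STRICT_INT = re.compile(r"^(0|[1-9][0-9]*)$")

def parse_int_strict(text):
    """Accept only the exact decimal grammar; anything else is an error."""
    if not STRICT_INT.match(text):
        raise ValueError("invalid integer literal")
    return int(text)

# json.loads, by contrast, does reject trailing garbage outright:
try:
    json.loads('{"a": 1} trailing')
    rejected = False
except json.JSONDecodeError:
    rejected = True
```

Two parsers that disagree on inputs like `" 1_000\n"` are exactly what fingerprinting tools and injection attacks exploit.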
I assume tools like nmap will jump on this :-)
The WhoIs information for the sites leads to WhoIs Privacy Corp domiciled in the Bahamas.
Their web site claims it will protect your identity as the owner of a domain and only reveal it under specific circumstances. These include "To comply with a subpoena or other legal process served upon us.".
I would assume that Elsevier drew a blank here as well if they are now going after Cloudflare.
It is not at all surprising that the domain registration process allows this to happen.
I pointed out that the Canadian owned (at the time) NextGen were in the picture in a response to this post:
http://www.theregister.co.uk/2016/08/07/it_analyst_oz_census_data_processed_as_plain_text/
The SSL/TLS connections terminated on their network. They potentially had access to all the responses on their network.
So we have at least two foreign powers having access to the data submitted online.
"Though individuals may be distressed or otherwise upset at an unauthorised access to or unauthorised disclosure or loss of their personal information, this would not itself be sufficient to require notification unless a reasonable person in the entity’s position would consider that the likely consequences for those individuals would constitute a form of serious harm."
Consider a series of breaches where each one releases some information about an individual. None of these is considered serious enough to report in isolation, but taken together they provide enough information to create the risk of 'serious harm'.
They all need to be reported.
The concept of 'notification fatigue' also seems to imply that a large number of breaches are expected to be taking place which increases the aggregate risk issue.
Disclaimer: I have no HR affiliations.
"and HR and sales departments are the most often hacked because they are the least computer security aware"
HR is also at the pointy end when it comes to receiving legitimate unsolicited emails so they have to be far more aware than the average employee. Fake resumes and expressions of interest are very common vectors for phishing. So this is actually a bit harsh.
Actually not much will break and if you adopt soft fails initially then this will be further reduced.
Anything and everything on the Internet can be compromised. It's really about building a framework that supports defence in depth and therefore requires multiple compromises to subvert.
Still possible but at some point the effort required and the reduced returns will start to have an effect.
It's all about doing something rather than passively accepting it all. And the tools are there right now.
All companies/corporations must digitally sign their outgoing email. A number (increasing number?) of email clients can handle this. This provides end-to-end integrity and assurance of origin.
Additionally, clients need to be able to perform SPF/DKIM checks rather than hoping (in vain) that the ISP's MTA has done this. Companies then need to implement SPF/DKIM for ALL their domains, which many companies don't do.
This will make it harder to impersonate legitimate emails, but it still requires an informed user and appropriate client software support. All the standards already exist and are used to some extent.
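To make the SPF side concrete, here is a deliberately reduced evaluator using only the standard library. It handles just `ip4:`/`ip6:` mechanisms and a trailing `-all`; real SPF (RFC 7208) also has `include:`, `a`, `mx`, redirects and softfail semantics, so treat this as a sketch of the idea, not an implementation.

```python
import ipaddress

def spf_allows(spf_record, sender_ip):
    """Reduced SPF check: does sender_ip match any ip4:/ip6: mechanism
    before a hard-fail -all? Illustration only, not full RFC 7208."""
    ip = ipaddress.ip_address(sender_ip)
    terms = spf_record.split()
    if not terms or terms[0] != "v=spf1":
        raise ValueError("not an SPF record")
    for term in terms[1:]:
        if term.startswith(("ip4:", "ip6:")):
            if ip in ipaddress.ip_network(term.split(":", 1)[1]):
                return True   # sender is in an authorised range
        elif term == "-all":
            return False      # hard fail for everything not matched above
    return False

# Example record using the RFC 5737 documentation ranges:
record = "v=spf1 ip4:192.0.2.0/24 -all"
spf_allows(record, "192.0.2.25")   # inside the authorised range
spf_allows(record, "203.0.113.9")  # rejected by -all
```

The record itself is published as a DNS TXT entry on the sending domain, which is why it only works when companies actually publish one for every domain they send from.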