Wouldn't ROT-X have been "Military Grade" at one point? That point being 2,500 years ago?
The marketing dept just weren't specific about the timeframe!
Yes, Jesus wept at this article... A cursory check of Wikipedia would have spotted half the issues.
It will also mean saying goodbye to the protocol that effectively made the internet possible: TCP.
TCP will continue to be a fallback, not least because there is no support for UDP tunnelling through an HTTP proxy.
"And the reason is that TCP intrinsically assumes you will stay at the same address on the network while you are sending and receiving information. As soon as you starting moving around however, that address shifts. If you leave your house and your home Wi-Fi to join a 4G network, that's one shift."
Yes, it would be. At which point you'd have to tear down the old TCP connection and build a new one. But UDP, despite being stateless, is likely still going through NAT / the GFW, so you'll still need to keep sending packets to keep the traffic flowing.
"If you get on a bus or a train to head to work in the morning, or if you stroll home at the end of the day, you will be constantly shifting your network address as you move from cell tower to cell tower."
Handoffs between cells generally keep the same IP. Not for all subscribers, but for the vast, vast majority.
"This modern use of the internet has already led to plenty of other changes and improvements to existing internet protocols – for example, the shift from HTTP 1.1 to HTTP 2.0 was largely because people now use multiple applications at the same time and expect each to be able receive data."
Jesus wept. HTTP 2.0 allows multiple streams of data to a single service - not to multiple services, and not from multiple applications. With HTTP 2.0 you'll still establish a new TCP connection for each app to each destination, or a UDP one with QUIC.
"What's more, if you are moving around from network address to network address, this UDP approach should end up much faster because it pulls out TCP's checking mechanism, speeding things up."
Checksums are offloaded to hardware, so the "effort" is minimal. With UDP over IPv4, checksumming is technically optional, but if you skip it you still have to zero the checksum field, so you don't reclaim any bandwidth. Under IPv6 it's mandatory anyway, as skipping checksumming makes no sense. Besides, you need to hash for DTLS anyway.
What's faster is that you have direct control of the congestion-control algorithms, fewer round trips to bring up a "connection", etc.
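For what it's worth, the checksum being waved away here is trivial to compute even in software. A minimal sketch of the RFC 1071 ones'-complement sum that UDP/TCP use (pure illustration, not how a real stack or NIC does it):

```python
# Illustrative sketch of the 16-bit ones'-complement checksum (RFC 1071)
# used by UDP and TCP - the point being how little work it actually is.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                  # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    checksum = ~total & 0xFFFF
    # An IPv4 UDP sender that skips checksumming puts 0 in the field, so a
    # genuinely computed result of 0 is transmitted as 0xFFFF instead.
    return checksum or 0xFFFF
```

Either way the field is on the wire, which is why skipping the checksum saves no bandwidth.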
"And that's what first Google and now the IETF internet engineers have been working on: how to add TCP-style encryption and loss detection to UDP. It will also add in the latest standards like TLS 1.3."
TCP doesn't have encryption. TLS only runs over TCP, true, but DTLS (over UDP transport) has been around for a very long time.
"It will create problems for people using NAT routers as a way to handle the painfully slow move from IPv4 to IPv6. NAT routers track TCP connections to work seamlessly and since QUIC doesn't use TCP, its connections through such networks could well drop out."
Bollocks. NAT routers track UDP "connections" in more or less the same way as TCP. Plus QUIC clients fall back to TCP in case of issues.
"Likewise, if a network is using Anycast or ECMP routing – both used for load-balancing - the same problem will likely occur."
Anycast and ECMP break TCP too, and TCP connections require more work to re-establish.
Nail on the head: a huge portion of ISP traffic goes to Google, YouTube, Facebook, etc. The large services are IPv6-enabled, so if you just implement CGN with IPv4 only, you're going to pay an awful lot of money for the kit and you'll need to work out how you cleanly expand it over time. Implement IPv6 and >50% of your traffic zips straight past your CGN box.
So you've got a clear cost/benefit on the ISP side: either do an IPv6 project or pay way more than you need to for your CGN solution.
On the content-provider side, things are a lot less clear. Unless you're hyperscale like Facebook, you don't need IPv6 for any particular reason, and you don't much care that users might have to go via CGN and incur a bit of cost for their ISP and maybe a couple of extra ms of latency. It's just extra complexity, which means extra cost. Hence El Reg is v4 only.
Typically PBA or DNAT will be used, whereby a subscriber is given a source port range on a particular public IP (e.g. 100.64.0.1 -> 203.0.113.1:1025-2048, 100.64.0.2 -> 203.0.113.1:2049-3072). This saves a lot of logging, but then you've got extra fun with the likes of SIP, which needs a lot of TLC to run through the solution.
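That mapping can be done deterministically, which is the point of PBA: one log line per subscriber covers every flow they open. A hypothetical sketch (the block size, base port and the 203.0.113.1 documentation address are made-up example values):

```python
# Hypothetical sketch of deterministic port-block allocation (PBA):
# each CGN subscriber gets a fixed source-port range on a shared public
# IP, so the ISP logs one allocation per subscriber, not per flow.
BLOCK_SIZE = 1024
FIRST_PORT = 1025          # skip the well-known / reserved port range

def allocate_block(subscriber_index: int, public_ip: str = "203.0.113.1"):
    """Map the Nth subscriber to (public IP, first port, last port)."""
    first = FIRST_PORT + subscriber_index * BLOCK_SIZE
    return public_ip, first, first + BLOCK_SIZE - 1
```

Subscriber 0 lands on ports 1025-2048, subscriber 1 on 2049-3072, and so on, with no per-connection state to log.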
Try A&A. Pricey, but decent support and very effective at dealing with Openreach.
We'll fix your line even if you are with another ISP!
If you are migrating your service to us, even though you know you have a problem with your line, we'll take on the fault. We'll tackle the problem and get it fixed within one month. If we don't then you can migrate away and owe us nothing for your migration to us and your service charges for that month. Details.
I would argue that "good" design would mean you don't have HA pairs of switches and consider that a redundant solution. This stuff can and does break, hence you're much better off with DCs that aren't attached at L2 (which I presume is the case here). Better to use L3 or DNS - but of course this is an old design, and there may well have been good reasons to follow this model at the time.
Typically classified and unclassified are separated by air-gapped networks, potentially with two stations on the same desk.
If that weren't the case now, and say you wrote to either the classified or the unclassified CIFS / SharePoint, then you'd have the same sort of mix-ups now...
If I recall correctly from 2005:
How to check the oil, tread depth ("must be 1.6mm over the inner 3/4 of the entire circumference of the tyre"), tyre pressure ("using a reliable pressure gauge"), brake fluid, and that the lights are working ("turn them on and walk round").
Nobody covers high-beams vs dip though!
The point is you need a fingerprint to unlock a (standard Apple) device in order to use Apple Pay. This is generally less of a faff than Face ID, but if you have gloves on, Face ID is preferable.
FWIW Android Pay doesn't require an unlock to pay, and nor does my contactless card. Honestly not sure why I'd bother with either instead of the card itself. It's not like everywhere takes contactless anyway.
You wouldn't use this type of box as a government or ISP: it's obvious that the certs have been re-signed if you know where to look (just a case of checking the CA that signed the cert), and a government will very much struggle to insert a CA into every citizen's device.
This technology is for corps who have control of the devices on their network (GPO) and are looking to protect themselves against malware, data loss, etc. And that sort of technology is very much going to be needed to meet GDPR.
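On the "just check the CA who signed the cert" point, a rough sketch of pulling the issuer out of the dict that Python's `ssl.SSLSocket.getpeercert()` returns (the corporate-CA name in the example is invented; in real use you'd get the dict from a live TLS connection):

```python
# Rough sketch: extract the issuer organisation from a getpeercert()-style
# dict. If your traffic is being intercepted, the issuer will be the
# corporate proxy's CA rather than a public CA you'd expect.
def issuer_org(cert: dict) -> str:
    # 'issuer' is a tuple of RDNs, each a tuple of (attribute, value) pairs
    for rdn in cert.get("issuer", ()):
        for attribute, value in rdn:
            if attribute == "organizationName":
                return value
    return "unknown"
```

A cert re-signed by "Example Corp Proxy CA" instead of a known public CA is the giveaway that a middlebox is in the path.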
Couldn't agree more. Malware is pretty much always delivered over SSL/TLS because the writers know this is a blind spot for many organisations. SSL/TLS interception proxies are there to decrypt this and stop the malware. If there is a fallback to ATLS, then the malware will move to the new blind spot created by it, and the middleboxes will shortly follow.
There is plenty of potential for abuse of this technology, and it doesn't always work very well (CAs need to be distributed, applications pin certificates, it breaks client TLS auth, etc.), but that doesn't make ATLS a sensible option.
As far as privacy is concerned, the browsers should be placing a massive "Eye of Sauron" icon in the address bar rather than "Secure".
Firstly, HSTS is not "a cryptographic technology"; it's HTTP header signalling used to tell the browser to only connect via HTTPS next time.
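A sketch of what that signalling actually looks like - a tiny, illustrative parser for a Strict-Transport-Security header value:

```python
# Illustrative sketch: parse a Strict-Transport-Security header value
# (e.g. "max-age=31536000; includeSubDomains") into its directives.
# The browser caches max-age and upgrades future requests to HTTPS.
def parse_hsts(header: str) -> dict:
    directives = {}
    for part in header.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        # valueless directives (includeSubDomains, preload) become True
        directives[name.lower()] = value or True
    return directives
```

There's no cryptography in sight: it's just a header the server sends once over HTTPS, which the browser then remembers.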
Barclays' domain doesn't support Forward Secrecy, which they "absolutely should" - "there is no reason not to".
Well, for CPU-based decryption I would agree, but most banks will offload this to crypto cards (generally on an ADC, perhaps with a FIPS card / NetHSM, which makes PFS much less of a requirement in that the key is very well protected), and a good number of those don't support PFS ciphers. Not to mention that, depending on architecture, lack of PFS may be very helpful for IDS-type devices.
"The most crucial thing the bank has missing is a HSTS policy which, for a secure website using HTTPS, is an absolute requirement."
Well, it's clearly not an absolute requirement, as the site works without it. Good practice, sure.
Not saying that the banks shouldn't up their game, but there may be perfectly good reasons not to support PFS
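The PFS argument above boils down to the key-exchange half of the cipher suite: ephemeral (EC)DHE gives forward secrecy, static RSA key transport doesn't - which is exactly why the latter is handy for passive IDS boxes holding the server key. A rough heuristic sketch (OpenSSL- and IANA-style names; the prefix test is an illustrative simplification):

```python
# Rough heuristic: a cipher suite offers forward secrecy when its key
# exchange is ephemeral (EC)DHE; static RSA key transport means anyone
# who later obtains the server's private key can decrypt recorded traffic.
def offers_forward_secrecy(cipher_suite: str) -> bool:
    name = cipher_suite.upper()
    return name.startswith(("ECDHE-", "DHE-", "TLS_ECDHE_", "TLS_DHE_"))
```

So "ECDHE-RSA-AES128-GCM-SHA256" is forward-secret, while plain "AES256-SHA" (RSA key exchange) is not - and the non-PFS option is the one an out-of-band decryption appliance can work with.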
I had a sustained period (months) of severe packet loss and latency during peak times. Usable bandwidth varied from 15-40Mbps on a 150Mbps service.
Eventually I moved to A&A, who, as you can see, are currently posting daily updates on their hunt for 0.05% packet loss on their TalkTalk backhaul. https://aastatus.net/apost.cgi?incident=2401
I have a more expensive and capped service as a result of the move, but the connection is fast and stable.
Since the advent of same-day delivery from Amazon, I can get the tool / component I want, with a set of reviews to back it up.
I can get a bag of 10 SPDT switches for the price of one at Maplin, same day, without having to make a special trip to a town centre with no parking.
Asking your suppliers for bribes to keep selling their stuff is a low blow from a dying company
The best advice a friend gave me when looking at Btrfs stability was to look at the mailing list and see how many puppies it was killing.
While it may be the default for boot disks, boot disks are rarely multi-disk RAID.
There are a number of shortcomings with RAID5-style configurations (write holes, poor rebalancing, etc.) which made me feel decidedly uncomfortable trusting them with my data.
Which is a shame, as from a convenience point of view, being able to simply add disks of varying sizes would have been much handier.
I went with ZFS in the end. (Only about 4 months ago, so the info is reasonably current.)
Both have a great reputation. A&A don't offer an unlimited service though, which you might be used to...
By the time you include line rental, it's very hard to beat the Virgin Media "broadband only" option (the price rise for this was back in September/October) in terms of price and performance.
While the TLS stack isn't compliant with PCI-DSS 3.1, it doesn't need to be until June 2016. 3.1 is relatively recent, and organisations have some time to bring themselves into compliance.
The only thing the audit picks up on the PCI side is a SHA-1 certificate, which will most likely be fixed on renewal.
The report flags Camellia as not being a NIST standard, which is true - it tends to be preferred in Europe / Asia.
PFS is available.
As High-Tech says, it's an A rating, and a good indication either that TLS has been configured by hand for security or that they've done pretty well out of the box. A total red herring as far as "indications of the security culture" is concerned.
Now, why an SQL injection attack (if that is the case - my level of trust in Rory Cellan-Jones is rather low...) was possible is another matter. You'd hope coding techniques and libraries had sorted this problem by now. At the very least, a PCI-mandated Web Application Firewall should have caught that sort of attack (a WAF is, of course, a safety net, not an excuse for poor coding), assuming it was put in and turned on...
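On the "coding techniques and libraries have sorted this" point, a minimal sketch of the standard fix - parameterised queries, where user input is bound as data and can never rewrite the SQL (table and values are invented for illustration):

```python
import sqlite3

# Minimal sketch: parameterised queries keep user input as data, so
# classic SQL injection strings cannot alter the query structure.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'a@example.com')")

def find_user(name: str):
    # The ? placeholder binds name as a value, never as SQL text.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Feed it the textbook payload `"alice' OR '1'='1"` and it simply matches no rows, instead of dumping the table the way a string-concatenated query would.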
I hear claims all the time that IPv6 accounts for 30-40% of an ISP's traffic when dual stack is enabled. The reality is that that 30-40% is just a few big sites (Facebook, Google). Wider adoption is still miles off, and brings little advantage.
Until there is some reason to make your website IPv6-capable, IPv4 will remain - why would any sensible company fund a project without any tangible benefit? (At mega scale, address space management alone is a good enough reason. At small scale, not so much!)
The only way I see any rush to adoption happening is if google starts including IPv6 reach-ability in the search rankings.
On the "showing everybody your address space" silliness: if you want to carry on NATing traffic, there are plenty of firewalls, load balancers and proxies which will very happily do this for IPv4 and IPv6.
Elliott considers NetScaler to be a non-core business, and wants Citrix to spin it out.
Not sure how that will play out: a lot of NetScaler sales are based on hard sells to get NetScaler into Citrix environments, and on leveraging existing customer relationships to sell NetScaler for more general ADC requirements.
The question is, without these sales, do they have enough revenue to spend enough on R&D to stay number 2 in a profitable and highly contended market?
I test drove the MiTo QV when looking for a car a few years ago (in addition to the Fabia vRS, Polo GTI and Swift Sport). The ride was very harsh and crashy, the DNA settings made no difference, and it was generally an unpleasant experience.
The next test drive was a Swift Sport, which might not have the BHP of the rest, but it put a great big grin on my face for £7k less. (And still does, especially on a B road.)
The electronics are getting X-rayed anyway. Batteries stick out like a sore thumb on the scanners. My guess is a battery full of lithium isn't too easy to tell apart from a pack of something nasty - but it is easy to spot a small battery next to a separate pack of something nasty in the cavity.
"OK so I can change the whole battery faster than some cars can fill up at 1/2 the price but I have to come back here to collect my (recharged) pack for free?"
While your statement here is technically correct, how far does the huuuugggeee tank on the Audi get you, and how far does a fresh battery pack on an EV get you?
The price comparison is pure marketing.
Don't get me wrong, the technology is getting better, and you can make it work as a second car. But all this faffing about with supercharger stations and battery swaps is a marketing band-aid over a fundamental issue.
What a load of marketing tosh
It does a miserable job of saying that you can take your own paid-for Vyatta licence and slap it on any cloud service you like. Vyatta has AWS Marketplace appliances, and the cost for support is about the same if you go annual, or considerably less if you're willing to stump up for a few years.
"Up until now, RackConnect has required an F5 Big-IP or Cisco ASA hardware appliance, but now customers will be able to use vRouter virtual routers instead if they so choose."
And yet for a pitiful 100Mbit of throughput on the firewall you need to run a fairly large server instance. The ASA appliance will run rings around it.
And on the F5 BIG-IP front, load balancing in Vyatta is pitiful, and that's not even scratching the surface of what an F5 can do if you turn the right knobs.
"One important thing, says Engates, is that both the Cloud Networks service and the vRouter service are both IPv6 compliant, so you don't have to mess around with IPv4."
Unless you actually want users to be able to connect to your services... Don't get me wrong, IPv6 support is nice, but it has so little adoption in this country that an IPv6-only service is pointless.