Re: That's what happens when you use a Huawei router....
China Telecom is much better. British Telecom will do silly things disrupting your internet connection such as filtering your prefixes.
If DoH (DNS-over-HTTPS) as a protocol were used in the same way as traditional DNS then this would be true.
However the way Mozilla and Google are envisaging implementing it is that their browsers use fixed DoH resolvers directly, thus completely bypassing the ISP's DNS servers.
So Mozilla and Google would choose which DNS provider their browsers use. Cloudflare and Google are the two main contenders for that.
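The point is that DoH only changes the transport, not the DNS message itself. A minimal sketch, assuming Cloudflare's public resolver endpoint, of how a browser could encode an ordinary DNS query for an RFC 8484 GET request:

```python
import base64
import struct

def build_doh_url(hostname, resolver="https://cloudflare-dns.com/dns-query"):
    """Encode a DNS A-record query as an RFC 8484 GET URL.

    The DNS message is ordinary wire format; DoH merely carries it
    over HTTPS instead of UDP port 53.
    """
    # DNS header: id=0 (RFC 8484 recommends 0 for GET cacheability),
    # flags=0x0100 (recursion desired), 1 question, no other records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: QNAME as length-prefixed labels, then
    # QTYPE=A (1) and QCLASS=IN (1).
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)
    # base64url without padding, as the RFC requires for the dns= parameter.
    dns_b64 = base64.urlsafe_b64encode(header + question).rstrip(b"=")
    return f"{resolver}?dns={dns_b64.decode('ascii')}"

print(build_doh_url("example.com"))
```

Because the resolver URL is baked into the client like this, the ISP's resolver never sees the query, which is exactly the control shift described above.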
Self driving cars will have a similar but even stronger effect.
People largely talk about effects improving traffic. Driving coordinated between cars, closer distances, smoother traffic.
However I believe that many people will move from public transport to self driving cars, just because it's so much more convenient and you can make good productive use of the time. I know I would and most people I've spoken to would too.
So the net effect is less public transport use and much more congestion.
This article, like others before it, seems to misunderstand the incentives for IPv6 adoption.
If the number of IPv6-only services grows from 1 to 10 you might call it a tenfold increase, but overall it's still next to nothing. No reputable service will be IPv6-only unless practically all clients are IPv6-enabled. Please name just one significant service that is IPv6-only to support your claim that "more websites and online services will begin to only be available via IPv6".
Some access and corporate networks have good reasons to enable IPv6. However there will always be many that don't, and that will only change if significant services were IPv6-only.
Many IPv4-only services have little incentive to move to IPv4+IPv6 unless a significant number of clients are IPv6-only. El Reg is a great example of this; even years of mockery from its user base were not sufficient, and it continues to operate IPv4-only just fine.
Crucially a critical mass of adoption is not enough to break this stable cycle. You can have 95% of clients IPv6 enabled yet still need to provide service on IPv4. You can have 95% of services IPv6 enabled yet still need to provide IPv4 connectivity to your network.
The only way to break it would be if either practically all access and corporate networks became IPv6 enabled, or significant services became IPv6-only. There is next to no chance of either of these happening.
> Typically blocked content is not copyright infringing where the host server is situated
Yes it is. Section 97A orders are issued in cases where the content is illegal in most jurisdictions.
> or they would attack the source
Being illegal does not mean that it is feasible to address the problem at source. See for example the Ecatel case, decided just last week, where it took the court four years to reach a decision.
> Netflix for example has most of it's content unavailible outside of the US
Which blocking scenario are you considering that relates to Netflix? Netflix asking for a 97A order against UK ISPs to block access to Netflix content? That would be most bizarre.
No, these orders are not used (or possible to be used) for geo restriction.
> So if the Government instead removed the distribution monopolies [...] then the whole "piracy" issue would disappear
How exactly would that stop counterfeit Cartier watches?
Of course all cases of copyright infringement could be stopped immediately by abolishing copyright. Whether or not a world without copyright (and hence a world without Game of Thrones and possibly Cartier) would be a better world is debatable, but there doesn't seem to be widespread support for it.
Outside Torrentfreak and El Reg, that is.
What is a multiplexing ratio? How would consumers be able to interpret that? How would you even define it? It's fiendishly difficult to do this meaningfully.
Let's take it to mean the sum of sync rates divided by the sum of external network connectivity, counting things such as CDNs as external connectivity.
Assume an operator has a 10:1 "multiplexing" ratio in this sense and offers a 100Mbps service today. Say they decide to upgrade that to 200Mbps. The ratio suddenly becomes 20:1. Does that mean the network has become worse for the customers?
On the other hand you can have a 10:1 ratio which a couple of years ago may have provided a completely uncongested service and today is slow, as average consumption has gone up.
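To make the upgrade scenario concrete, here is that definition worked through with hypothetical subscriber and capacity figures:

```python
def contention_ratio(subscribers, sync_rate_mbps, external_capacity_mbps):
    """Sum of access sync rates divided by external network capacity,
    per the loose definition above. All figures are hypothetical."""
    return subscribers * sync_rate_mbps / external_capacity_mbps

# 10,000 subscribers at 100 Mbps behind 100 Gbps of external capacity:
before = contention_ratio(10_000, 100, 100_000)   # 10:1
# Double every subscriber's sync rate; change nothing else:
after = contention_ratio(10_000, 200, 100_000)    # 20:1
print(before, after)  # prints: 10.0 20.0
# The "ratio" doubled, yet every customer's service got strictly better.
```

That is the core problem with publishing such a number: it moves in the wrong direction precisely when the operator improves the product.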
There are much more meaningful metrics such as off-peak vs. peak transfer speed. That is actually measured by Ofcom today.
Actually most ISPs today score very well on that.
Considering that a bin liner sells for about 4.5p and is of worse quality than a single-use supermarket plastic bag, let alone the Sainsbury's long-life ones, it's not difficult to see that "reasonable costs" could be justified as more than 5p.
Whether they'll request money back from the good causes remains to be seen.
Whether that's a lot to deal with depends on the nature of the attack. If it's a simple (reflective) UDP attack against a non-UDP service then you can easily filter it at the network borders, where such capacity is available in large national networks and certainly in the Tier 1 ISPs.
If it's an attack simulating the application (e.g. an HTTP attack against an HTTP service) from the same sort of networks as legitimate clients, then you need a more intelligent scrubbing capability that can analyse and block the traffic in detail. For that, it is a lot.
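For the simple case, the border filtering amounts to a crude ACL. A toy sketch of such a policy, expressed in Python for illustration (the rule set and port list are illustrative, not a real operator's config):

```python
# Source ports commonly seen in UDP reflection/amplification attacks:
# chargen, DNS, NTP, SNMP, SSDP, memcached.
REFLECTION_PORTS = {19, 53, 123, 161, 1900, 11211}

def border_verdict(proto, src_port, dst_uses_udp=False):
    """Toy border-router ACL: drop UDP toward a host that serves no UDP,
    and drop UDP sourced from well-known amplifier ports. Hypothetical
    policy, not any vendor's syntax."""
    if proto != "udp":
        return "permit"
    if not dst_uses_udp:
        # Target is, say, HTTP-only: no legitimate UDP is expected at all.
        return "drop"
    if src_port in REFLECTION_PORTS:
        # Likely amplified reply traffic from abused open services.
        return "drop"
    return "permit"

assert border_verdict("tcp", 443) == "permit"
assert border_verdict("udp", 123) == "drop"
```

Nothing like this helps against application-layer floods, which is why the second case needs deep scrubbing rather than stateless border filters.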
"100% global availability SLA" http://www.akamai.com/html/resources/cloud-architecture.html
"Rackspace guarantees that its data center network will be available 100% of the time" http://www.rackspace.com/pt/information/legal/cloud/sla
Many people would call 100% stupid yet accept 99.95% as perfectly valid. However, if measured monthly, any long outage would breach a 100% SLA just as it would breach a 99.95% one, and hence in reality 99.95% can be guaranteed as much, or as little, as 100% can.
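The arithmetic behind that, as a quick calculation of the monthly downtime budget each availability figure permits:

```python
def monthly_downtime_budget_minutes(sla_pct, days=30):
    """Minutes of outage per month a given availability SLA permits."""
    return days * 24 * 60 * (1 - sla_pct / 100)

# 100% permits nothing at all:
assert monthly_downtime_budget_minutes(100.0) == 0.0
# 99.95% over a 30-day month permits about 21.6 minutes:
assert abs(monthly_downtime_budget_minutes(99.95) - 21.6) < 0.01
```

So any outage longer than roughly 22 minutes breaches both SLAs equally; the two numbers only differ for outages shorter than the 99.95% budget.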
Stupid? The reality of SLAs is much more complicated than a single number.
"We attribute the low comprehension rates to the difficulty of creating an SSL warning that is simultaneously brief, non-technical, simple, and specific"
What's the solution? Changing the wording of the warning clearly isn't sufficient. As long as the browser doesn't have sufficient information to distinguish a harmless forgotten renewal or an incorrect local configuration from a genuine attack, that problem will remain.
I believe we need new protocols and infrastructure that focus on the negative validation case. The client such as a browser needs more information to make an informed risk assessment. For example:
- has the certificate recently changed
- do I get the same certificate as other clients
- do other clients also have certificate failures or is it only me
- would it validate OK but for a missing root certificate on the client
- could the certificate issuer be contacted when client-side verification fails
Telling the average user not to proceed any time there is a certificate error is neither realistic nor practical. With such additional information, however, the client could distinguish between a low-risk configuration error and a high-risk targeted attack, and hence make clear recommendations that can sensibly be followed.
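As a sketch of how the signals listed above might combine into a verdict, assuming the infrastructure existed to answer them (none of it does today; the signal names are hypothetical):

```python
def assess_cert_failure(recently_changed, matches_other_clients,
                        others_failing_too, missing_root_only):
    """Combine hypothetical negative-validation signals into a rough
    risk verdict. The signals mirror the bullet list above; this is a
    thought experiment, not an existing protocol."""
    if matches_other_clients and (others_failing_too or missing_root_only):
        # Everyone sees the same broken certificate, or it is just a
        # trust-store gap: almost certainly misconfiguration.
        return "low-risk: likely configuration error"
    if recently_changed and not matches_other_clients:
        # Only this client sees a new, different certificate:
        # consistent with an on-path attacker.
        return "high-risk: possible targeted attack"
    return "unknown: advise caution"

# Expired-but-identical cert everywhere -> probably a forgotten renewal.
print(assess_cert_failure(True, True, True, False))
# A fresh cert that nobody else sees -> treat as an attack.
print(assess_cert_failure(True, False, False, False))
```

With verdicts like these the browser could give the user an actionable recommendation instead of an undifferentiated scare page.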
"It would likely be easier, they think, to issue a fixed IP address by default than to set up an infrastructure of DHCP servers, and then charge a monthly fee not to use them."
I wouldn't comment on what you think, nor discuss your obvious enjoyment in Big Telco bashing. Let's look at the facts instead.
Static IP addresses have a number of drawbacks. You need to have them in the first place. RIPE is not going to allocate large amounts of them without significant justification and that just isn't there for the vast majority of customers.
You need to have systems to assign them and communicate them to your customers.
You need to have a support organisation that understands them.
Depending on the network architecture, your IP session (PPP or IPoE) can often terminate on different devices (LNSs, BNGs). With static IP addresses you need a huge amount of dynamic, deaggregated routing information in your network to get to the right device. Think millions of /32s in the network. Dynamic addresses are assigned to the device and hence pretty statically routed in your network (and aggregated).
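The routing-table cost can be illustrated with Python's `ipaddress` module, using a made-up address plan:

```python
import ipaddress

# Dynamic pool: a whole /16 is bound to one BNG, so the core carries
# exactly ONE route for all its subscribers.
dynamic_routes = [ipaddress.ip_network("100.64.0.0/16")]

# Static addressing: each customer keeps their /32 wherever their session
# happens to land, so every address is a separate route in the core.
# (Toy sample of 256 subscribers; scale this to millions.)
static_routes = [ipaddress.ip_network(f"100.65.0.{i}/32") for i in range(256)]

# Adjacent /32s only aggregate if they all sit behind the same device;
# scattered across BNGs they cannot be summarised at all.
print(len(dynamic_routes), len(static_routes))  # prints: 1 256
```

One aggregate route versus one route per customer is the whole argument in miniature.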
All of this to satisfy the 0.01% of technically savvy El Reg reading (or writing) customer base. Really?
Apologies for the late reply, but I couldn't let this stand.
"IPv6 offers significant performance enhancements over IPv4 (big packets, better routing, etc.)"
IPv6 packets aren't bigger, nor is its routing better.
Replying to another comment: backbone providers did not make any equipment investment to enable IPv6. Backbone routers have been IPv6 capable since around 2000.
If only it were so easy. Whether it's 1ms or 20ms, RTT makes no difference to streaming quality. The impact comes from packet loss and potentially packet reordering, which you can't reliably measure on a hop-by-hop basis as there are too many factors affecting it (hidden hops, return paths differing from the forward path, routers not responding, ICMP throttling, ICMP being treated differently from TCP, etc.).
But let's assume it was established that the first hop on Verizon's network had higher packet loss for TCP than the last hop before it (taking the same path), indicating some capacity issue on the connection between Verizon and the next network. That still wouldn't tell you who is responsible for the link not having sufficient capacity: Verizon or their peering partner?
"The digital break-in of staff accounts was detected about two weeks ago" ... "no evidence of the compromise resulting in unauthorized activity"
Really? My sister notified them on 22nd April about an eBay phishing email she received which contained her very personal contact details as provided to eBay. The phishing email was asking to fill in a form with all credit card details.
The personal details provided made it look very credible, I have to say.