El Reg is a great place for IT news and opinion. I'm not sure it's the place to have a hand-holding tutorial on how to configure something.
One IP address, multiple SSL sites? Beating the great IPv4 squeeze
We're fresh out of IPv4 addresses. Getting hold of a subnet from your average ISP for hosting purposes is increasingly difficult and expensive; even the public cloud providers are getting stingy. While we wait for IPv6 to become usable, there are ways to stretch out the IPv4 space. There are several big problems with IPv6 that …
COMMENTS
-
-
-
-
Thursday 2nd March 2017 03:30 GMT Anonymous Coward
Re: Don't care (@ A Non e-mouse)
@Jack of Shadows: "Yep. As soon as I saw "Do a minimal CentOS 7 install, disable SELinux, and follow the basic steps outlined here", I was saving the page and bookmarking the page. I can already see my future doing the arcane here, and arcane it is."
Yea, what we need is more articles about DevOps and Continuous Deployment :)
-
-
Thursday 16th March 2017 08:13 GMT Anonymous Coward
Re: Don't care (@ A Non e-mouse)
Agreed, many shy away from SELinux, but once you get into it, it's not that hard. I lose a lot of respect for anything that starts "Disable SELinux" - it usually means the author doesn't know SELinux and just wants it out of the way for the purposes of their guide, which likely isn't best security practice for whatever they're guiding you through setting up.
Spend the time to learn SELinux and bake the config into your guide. At most here I reckon you'd need to set the context on the caching data directories and perhaps allow nginx to listen on unusual ports.
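For the curious, on a CentOS 7 box those two tweaks might boil down to something like this (the cache path and port are hypothetical; adjust to your layout):

```shell
# Label the proxy cache directories so confined nginx (httpd_t) can write to them
semanage fcontext -a -t httpd_cache_t "/var/cache/nginx(/.*)?"
restorecon -Rv /var/cache/nginx

# Permit nginx to listen on a non-standard port, e.g. 8443
semanage port -a -t http_port_t -p tcp 8443

# Allow the proxy to open outbound connections to backend servers
setsebool -P httpd_can_network_connect on
```

(`semanage` lives in the policycoreutils-python package on CentOS 7.)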
-
-
-
-
-
Wednesday 1st March 2017 12:40 GMT Anonymous Coward
We'd have plenty of IPs
If there weren't massive swathes of unused blocks assigned to universities and the military.
Universities have enormous blocks of IPs most of which are likely to be unused.
Also, thanks for the nginx tut, but we're all familiar with it already.
To anyone here who read that and learnt something new: I'm deeply concerned for you and the firm you work at.
-
Friday 3rd March 2017 05:54 GMT Kiwi
Re: We'd have plenty of IPs
To anyone here who read that and learnt something new: I'm deeply concerned for you and the firm you work at.
So you know every command for every piece of software written for every OS? No? I feel concerned for whatever firm is sad enough to employ you, then.
I know it surprises you, you being such a leet ex-spurt and all, but there is other software out there that does the same job as nginx, and other non-Red Hat OS's as well. Afraid I've only run stuff from the MS, Apple and Debian branches of the OS world, with some dabbling in a couple of the BSD's (and a Devuan VM I haven't got round to doing much with). Have never run anything from RH and have never run nginx, so had plenty to learn from this. Much grats to the author.
-
Monday 6th March 2017 09:46 GMT Anonymous Coward
Re: We'd have plenty of IPs
No, I don't know every command (who does?), but you can be damned certain that for any tech I use in production I have more than the basic knowledge in this article.
I support one of the biggest websites in the UK, pretty much single-handedly (it's me and a dev), as well as countless other sites where I'm in a similar situation.
It's easy to say you don't have the time or luxury to properly learn something, but it's entirely different when you don't have the luxury of trawling Google for solutions.
It's not snobby to expect your peers to work to a good standard. I personally build things with my peers in mind. I won't be at my clients forever; someone at some point will have to take over, so I have to assume a certain baseline of knowledge and experience. I like to assume that anyone taking over from me is likely to be better and more knowledgeable than I am, rather than crapper and dumber than I am.
If we as an industry constantly assume that our proteges are less knowledgeable, and document / build with this in mind, we will drive the quality of our successors down as they won't have to be as smart or knowledgeable.
I talk to all engineers as engineers; I don't talk to engineers as if whatever they're being handed is the first work they've ever done.
Think about how absurd it would be if other types of professionals overused Google.
A pro golfer googling which club to use for his next shot.
A builder googling how to build a wall.
A high end chef googling recipes.
A pilot googling how to get the landing gear down.
...a network tech googling how to set up a simple reverse proxy.
-
Tuesday 7th March 2017 11:52 GMT Kiwi
Re: We'd have plenty of IPs
No, I don't know every command (who does?), but you can be damned certain that for any tech I use in production I have more than the basic knowledge in this article.
I call bullshit. So you know every bit of web server software out there? Every trick with networking? Bullshit.
And that's what those of us thanking the writer are speaking about. Something that may be useful later on when we look to try something new.
I support one of the biggest websites in the UK, pretty much single-handedly (it's me and a dev), as well as countless other sites where I'm in a similar situation.
That all? I support the entirety of all websites and the entire internet for the whole Andromeda galaxy! As well as several smaller galaxies. (Oh, and something stinks about your statement... aside from the AC handle...)
It's not snobby to expect your peers to work to a good standard. I personally build things with my peers in mind. I won't be at my clients forever; someone at some point will have to take over, so I have to assume a certain baseline of knowledge and experience. I like to assume that anyone taking over from me is likely to be better and more knowledgeable than I am, rather than crapper and dumber than I am.
And yet you put so many people down who probably are far more knowledgeable than you are.
But if you really do have any more clients than mum's home network, I pity them. I've met arrogance like yours in the workplace, and it often means that while you're spouting off about how great you are and what big clients you support, your network really is a badly fucked-up mess and the clients would be better off hiring the CEO's 3rd cousin's former roommate's severely retarded 3yo niece - much more likely to do a decent job.
If we as an industry constantly assume that our proteges are less knowledgeable, and document / build with this in mind, we will drive the quality of our successors down as they won't have to be as smart or knowledgeable.
If you really worked in the industry you'd have long ago realised that there is so much software out there, so many different ways of doing things that are all very different yet all just as right as each other. It's impossible for any team of people to have in-depth knowledge of even 1/10th of what is out there in use today. I've used software for jobs that you probably don't even know exists; likewise, if you're really in the industry, there are things you consider run-of-the-mill and use in your day-to-day work that I've yet to come across.
Professionals should know where to go to find the answers they need when they need them. Those who think you can know it all should be avoided. There is absolutely nothing wrong with going to a search engine (Google or otherwise) to find a solution to a problem. Knowing how to get the best out of the results counts; putting people down for looking up a tut or how-to simply marks you as a class-1 wanker who spends way too much time alone in mum's basement.
I talk to all engineers as engineers; I don't talk to engineers as if whatever they're being handed is the first work they've ever done.
That's fine. But bear in mind that they may've been too busy with x to have yet looked at y.
Think about how absurd it would be if other types of professionals overused Google.
"Use" and "over-use" are very different things. If you were so great as you claim, you'd know that.
A pro golfer googling which club to use for his next shot.
Er, they do. Ok, not while in a game, but some use such tools to start to familiarise themselves with courses they're expecting to play at in the near future. Their caddy, however, is a person who is supposed to have extensive knowledge of the course and who advises them on which club to use next.. So instead of Google they have someone there with them all the time to tell them what to do next.
A builder googling how to build a wall.
Lots do. Admittedly it's often materials research or looking at new ideas in construction. And sometimes they have an unusual case they want to look at other solutions on, or they want to (re)check building codes for something. Did you know that there's lots of new ideas in construction and materials every year? No, of course not, you're such an expert in everything!
A high end chef googling recipes.
I wouldn't be surprised, if they're making something they've never made before. Why would that be an issue?
A pilot googling how to get the landing gear down.
They don't use Google. They use flight sims and extensive training. I guess in one of those movie-style emergencies where something knocks out the entire flight crew and the only help is a Cessna pilot or something... Oh, did you know there's a shitload of difference between single-engined, twin-engined, prop vs jet, light planes etc? Did you know that the cockpits of different brands and models are quite different? A person who can fly an A300 would struggle with a 747. Oh, and what about all the checklists that they go through, the ones that tell them what they need to do for each plane at each airport? Yes, landing a 747 at Wellington is different to landing at Sydney, and without the notes that a pilot normally uses their chances of a successful landing are reduced. Google? No. Detailed instruction sheets for each stage of flight in each craft and for each airport? You betcha.
...a network tech googling how to set up a simple reverse proxy.
So you set up secure reverse proxies every day? I doubt it. If you do, then you can't be any good at your job, obviously what you do doesn't stay up very long and needs rebuilding so much you can learn every bit of it by rote. Only, it's no good because it doesn't stay running and you have to do it again. Maybe you should re-learn it?
When I first set up Apache to handle multiple sites I looked on Google (or whatever other search engine was around at the time) for a decent tutorial. Having found one, I set it up. Next time I had to add a site I could simply copy things over from the first site. Before setting up Apache (or any web server) for the very first time I had built a fairly extensive VPN for the firm I worked for (to allow other branches access to the main databases and other software), had built up a number of other nets including a wired version of a mesh network for the neighbourhood I lived in (at a time when internet connections were not a household thing, but you might have 3 or 4 homes in a block with it in), which was done as a prototype and test case for something that didn't happen due to the advent of much cheaper internet. IOW, before I first set up a web server I had done a fair bit of network stuff. There's a first time for everything.
I've not yet set up a reverse proxy with Nginx, never ever looked at it. If I ever decide to I will either remember that this article is there, or I will turn to DDG or Google or some other search engine to familiarise myself with what is involved, and then decide if it is the right tool for the job, worth doing, and within my abilities. Like any real professional would.
Now, toddle off to your dreamland where you're not really living in mum's basement but you run the entire internet!!!!
(Speaking of network experts - El Reg can you please get the captcha sorted out so when we have to use it we don't lose the entire content of the post while jumping through the hoops needed to post from an IP we've only been using for 20 seconds? Some of us still have dynamic IP's that change often. Especially annoying when it's someone who is already logged into El Reg!)
-
Friday 10th March 2017 00:59 GMT OGquaker
Re: We'd have plenty of options
So, my father, who patented R.A.M. and computerized image analysis (US2933008) in the 1950's,
my brother, a Ham at 17 with 15 years as chief metrologist in aerospace, a nephew now with 10 patents at Microsoft, and I, with a year (1977) rebuilding R2D2 in my garage - we spent an hour trying to recharge and/or jump-start the kid's '56 Buick. Everyone knew the others were getting it all wrong.......
-
-
-
-
Saturday 4th March 2017 02:43 GMT Evil Auntie
Re: We'd have plenty of IPs
Most of those Class A allocations were converted to Class B or C years ago when the first IPv4 shortfall occurred. NAT is now the standard for internal corporate use as it is the basis for first-level firewalling. It is pretty common for an international corporation to run a 10.x.y.z network with a different x for each country, a different y for each site and a unique z for each node. VPN is used to tunnel through the Internet.
-
Friday 17th March 2017 09:10 GMT ravenstar68
Re: We'd have plenty of IPs
"NAT is now the standard for internal corporate use as it is the basis for first level firewalling."
While I don't disagree with the statement, NAT was not designed to be a firewall. It was designed to make the internet last longer.
The term "NAT Firewall", was I suspect coined by marketers.
-
-
-
Wednesday 1st March 2017 12:46 GMT Anonymous Coward
Doesn't a proxy defeat the purpose?
This is exactly why I'm not a big fan of the sudden push for HTTPS: the issue of dedicated IP addresses, which is less of a problem with HTTP and name-based virtual hosting.
Sure, a reverse proxy can help, but doesn't it also basically create a new weak link? What is to stop attackers from pointing their attention on the proxy so that they can use that as leverage to gain access to the rest of the traffic? It's not as if we haven't been down that path before...
-
Wednesday 1st March 2017 13:00 GMT Anonymous Coward
Re: Doesn't a proxy defeat the purpose?
It can reduce the exposure of your backend servers so it's not a loss to security or reliability. If you've got redundancies in your backend you can have redundancies in your frontend too. The backend redundancy can be easier in fact, as the reverse proxy can handle monitoring and failover.
Many commercial load balancers act as reverse proxies and many of those have failover built in. I'm not sure if any support SNI though as I haven't looked at any since IE6 was still supported - IE6 didn't support SNI.
-
Wednesday 1st March 2017 13:06 GMT Warm Braw
Re: Doesn't a proxy defeat the purpose?
the dedicated IP addresses which is less of a problem with HTTP
Well, since SNI, there isn't any difference in the number of IP addresses you need for HTTP or HTTPS virtual hosting.
The encryption/decryption load, though, can be very significant once you swap to HTTPS and using a reverse proxy is probably not a solution if you want to pack a number of heavily-used sites onto a single IP address: dedicated hardware appliances may be a better bet under those circumstances.
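As a sketch, SNI-based virtual hosting in nginx is just multiple `server` blocks on the same listen address, each with its own certificate (the names, paths and backend addresses below are made up):

```nginx
server {
    listen 443 ssl;
    server_name site-a.example;
    ssl_certificate     /etc/nginx/ssl/site-a.crt;
    ssl_certificate_key /etc/nginx/ssl/site-a.key;
    location / { proxy_pass http://10.0.0.10:8080; }
}

server {
    listen 443 ssl;
    server_name site-b.example;
    ssl_certificate     /etc/nginx/ssl/site-b.crt;
    ssl_certificate_key /etc/nginx/ssl/site-b.key;
    location / { proxy_pass http://10.0.0.11:8080; }
}
```

nginx picks the `server` block (and thus the certificate) from the hostname the client sends in the TLS ClientHello.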
-
Wednesday 1st March 2017 14:59 GMT DaLo
Re: Doesn't a proxy defeat the purpose?
"The encryption/decryption load, though, can be very significant once you swap to HTTPS"
[Citation needed]
I'll give you a head start https://www.maxcdn.com/blog/ssl-performance-myth/
-
-
Thursday 2nd March 2017 09:18 GMT Anonymous Coward
Re: Doesn't a proxy defeat the purpose?
That's not correct. You either set up a shared session system or you use a load balancer that pins a visitor to a server for the duration of their visit.
The benefit of a shared session system is that the user won't even notice if you reboot the backend server from underneath them.
-
Friday 10th March 2017 02:18 GMT chuBb.
Re: Doesn't a proxy defeat the purpose?
Nope, either put your reverse proxy in front of the load balancer and have redundant RPs, or share session state between app servers using memcached or Redis etc. Or combine reverse proxy and load balancing into a single role, as nginx is capable of load balancing too.
My current favoured approach is to distribute session state, meaning I can spin up app servers, add them to the pool, and not really care about maintaining affinity between them, i.e. any server can handle any request. I then use a redundant cluster of nginx images to reverse proxy ports 80 and 443 only to the app pool, making use of the load balancer in nginx. Management of the pool is done via VPN to the management LAN of the cluster, with the only publicly accessible entry points being the ports open on the nginx box. It sounds like a complex setup, which is true in terms of initial deployment, but it's 99% less work from an operational point of view: security largely comes down to app design and sensible coding rather than masses of network policy, as any traffic coming in from the net on a port which isn't 80 or 443 just gets logged and sinkholed, while app traffic is easily monitored using off-the-shelf tools, logging and other insight frameworks.
This approach isn't just for web/http, with a few port swaps a very similar config underpins the voip platform at the day job...
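The nginx side of a setup like that might be sketched as follows (addresses and certificate paths are hypothetical; the shared session store lives in the app tier, not in nginx):

```nginx
# Stateless app pool: any server can take any request,
# because session state is shared (e.g. in Redis or memcached)
upstream app_pool {
    server 10.0.1.11:8080 max_fails=3 fail_timeout=30s;
    server 10.0.1.12:8080 max_fails=3 fail_timeout=30s;
    server 10.0.1.13:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/site.crt;
    ssl_certificate_key /etc/nginx/ssl/site.key;

    location / {
        proxy_pass http://app_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```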
-
-
Thursday 6th April 2017 13:40 GMT plugwash
Re: Doesn't a proxy defeat the purpose?
It is possible to reverse proxy TLS/SNI without decrypting it. Just grab the hostname from the ClientHello the client sends, then proxy at the TCP level.
Doing that does have the downside that you can't use "X-Forwarded-For", but there is an alternative called the PROXY protocol that at least some backend servers support.
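In nginx terms (1.11.5 or later, with the stream module) that might look roughly like this - the proxy peeks at the SNI name in the ClientHello without terminating TLS, then forwards the raw TCP stream; hostnames and backends here are invented:

```nginx
stream {
    map $ssl_preread_server_name $backend {
        site-a.example  10.0.2.10:443;
        site-b.example  10.0.2.11:443;
        default         10.0.2.10:443;
    }

    server {
        listen 443;
        ssl_preread on;        # read the SNI name, no decryption
        proxy_pass $backend;
        proxy_protocol on;     # pass the real client IP via the PROXY protocol
    }
}
```

The backend must be configured to accept the PROXY protocol header; otherwise drop `proxy_protocol on;`.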
-
-
Wednesday 1st March 2017 13:07 GMT Brewster's Angle Grinder
Re: Doesn't a proxy defeat the purpose?
The former method is like an office block where there's a public lobby and each office has its own individual key. If an attacker breaks in, they only get one office. But there are lots of doors to secure, and possibly somebody forgets to lock one, or it gets damaged and nobody notices.
Trevor's method puts a single guarded door on the entrance to the office block, but doesn't lock any of the doors within. So the attackers only have one door to focus on; but the defenders only have one door to monitor. Swings and roundabouts. That said, if the attackers get in, they have full access. But there's no reason you couldn't encrypt the traffic between the proxy and the backend servers.
-
Wednesday 1st March 2017 13:12 GMT Anonymous Coward
Re: Doesn't a proxy defeat the purpose?
Not really. Your reverse proxy doesn't need any special access to the backend servers - just HTTP. It also has a much smaller attack surface. You aren't running Wordpress on your reverse proxy server for example - you're running that behind it.
If you managed to break into the reverse proxy server, the only bad thing you'd be able to do is to sniff all the traffic. While that's admittedly bad, your reverse proxy server is going to be considerably more secure than your backend servers due to the lack of anything running on it.
You can also make that more difficult by using TLS between your frontend and backend servers (although if nginx can decrypt it, a determined attacker can too).
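In nginx, re-encrypting to the backend is a handful of `proxy_ssl_*` directives (the backend address, CA path and name are hypothetical):

```nginx
location / {
    proxy_pass https://10.0.2.20:8443;
    proxy_ssl_verify              on;
    proxy_ssl_trusted_certificate /etc/nginx/ssl/internal-ca.crt;
    proxy_ssl_name                backend.internal;  # name checked against the backend's cert
}
```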
-
Saturday 1st April 2017 17:34 GMT patrickstar
Re: Doesn't a proxy defeat the purpose?
End-to-end SSL (applies to other encryption as well of course) comes with its own security risks.
Basically it means you have no reasonable way to inspect the traffic except on the host itself. Which, if the host is compromised, means that it might very well be flat out lying to you. Even if it's not, you'd have to instrument or trace the server software to do it, which in the best of scenarios is a major hassle and slows incident response tremendously. Worst case, you end up having to do something insane like MITMing the traffic yourself. Neither is very good for long-term passive observation, plus it alerts the attacker that you're on to him.
So the attacker might be exfiltrating lots of juicy data from your backend, or working on compromising the rest of your network, and all you'd see is SSL traffic with no way to tell the contents. While the attacker can (and often will) mask his traffic as somewhat legit HTTP requests, it's easy to fool a computer pattern-matching the contents and another thing entirely to fool a human observer consistently.
Not saying it's universally a bad idea or anything, just that it's something that has to be considered, and weighed against the risk of terminating SSL somewhere other than the actual web server.
-
-
-
Wednesday 1st March 2017 13:23 GMT PyLETS
Re: Doesn't a proxy defeat the purpose?
No particular reasons not to run the proxy on the same host for low traffic multiple domain name sites, allowing more modular webserver configuration. Then no part of the link between the proxy and the back end web server becomes any less secure than the host OS. In most cases the threat model being defended against with HTTPS in preference to HTTP isn't likely to concern the link between the proxy host and the backend host if these are running on different hardware within the same secured LAN anyway.
-
Wednesday 1st March 2017 12:55 GMT Anonymous Coward
Even putting aside the inability to get IPv6 addresses directly from the ISPs on consumer lines, getting an IPv6 subnet for use with business fibre connections can often be a nightmare of justification forms and bureaucratic nonsense.
Wow, even in rural Alabama, fiber is IPv6. Comcast, AT&T and Verizon are rolling out IPv6. Almost all .gov sites have an IPv6 address.
-
Wednesday 1st March 2017 16:51 GMT Ken Hagan
I noticed that, too. I wonder how much of Trevor's hostility to IPv6 would disappear if the ISPs that he is forced to work with (and I mean forced, since they are chosen by his clients and presumably that decision isn't one that El Trev can easily overturn) suddenly offered working IPv6 with no restrictions on its use.
As things stand, the choices appear to be:
No IPv6. Nada. Fuggeaboutit.
Well, maybe we could fix you a tunnel through some provider and you could try to funnel all of your client's network presence through that tunnel (and hope that the tunnel provider never fails).
Well, maybe we could give you IPv6 but it would have to be on our "special plan for lab-rat customers" whilst we figure out ourselves how to do it.
Well, maybe but we won't provide you with any level of 4/6 interop, so you'll have to implement your client's operations twice -- once in IPv4 and once again in IPv6.
Faced with those choices, and knowing that the extra leg-work would be different for every client I have, I'd probably be like Trevor: work out an IPv4 workaround for everything and ignore IPv6 until the ISPs of the world get their fucking fingers out.
-
Wednesday 1st March 2017 17:08 GMT Trevor_Pott
I use SixxS tunnels. They randomly stop working and cause problems. I'm not a fan.
Even if they did work, however, there's still the renumbering problem, which was never solved. Every other complaint I have aside, renumbering is a massive problem that you simply can't get around without 1:1 NAT, something which causes the purists to ooze out of the wall and start wailing about how the world isn't fair and we're trying to take away their toys.
Which means, of course, that you have to choose between downtime and a :lot: of administrative effort whenever you need to fail over between links (because you don't get BGP access for SMB internet connections) or you have to very carefully pick your software such that it doesn't require some stupid end-to-end configuration because there's some gods-be-damned IPv6 purist working as a dev at the wretched urine factory that made the app you want to use.
So you know what? Not so fond of IPv6. Maybe if it wasn't drafted by, and subsequently lorded over by, a bunch of elitist fuckbaloons that don't give a rat's ass about anyone who can't stump up a few million a year in internet connectivity, I might care. But since the poxy whoresons decided to just abandon the majority to the wolves because we "don't matter", I'm not particularly inclined to give them a free ride.
-
-
-
Wednesday 1st March 2017 22:27 GMT Gerhard Mack
Wrong.
"NPT *is* 1:1 NAT, and IPv6 purists hate the ever-living crap out of it, with many refusing to code for it, add support for it, etc.
I even wrote about it in the article I linked to..."
It would have helped if the article you linked to wasn't completely full of crap. What IPv6 purists hate is one-to-many NAT. NPT on IPv6 is easy and has been supported for years (I've used it), and support is firewall-based, so it's application-independent.
Don't even get me started on the bit about IPv6 doing away with static IPs; it was actually DHCP they wanted an alternative to. On public servers, you will want to renumber anyway if the ISP changes your address. On private servers, you will want to assign them to a local (non-routable) IPv6 range and either 1:1 NAT at the gateway, or use the local IPv6 addresses internally and allow the machine to auto-assign the external IPs for internet access. Again, IPv6 makes this easy.
-
Thursday 2nd March 2017 04:36 GMT Yes Me
Re: Wrong.
"it was actually DHCP they wanted an alternative to."
Historically, no. DHCP wasn't even there when IPv6 autoconfiguration was invented, modelled on Novell NetWare IPX. DHCPv6 was an add-on some years later, after DHCPv4 saved IPv4 from configuration collapse.
(While I'm here, NAT44 wasn't there either, in terms of actual products, when IPv6 was invented. NAT44 saved IPv4 from an early grave, but *after* IPv6 was already designed.)
-
Thursday 2nd March 2017 05:33 GMT Trevor_Pott
Re: Wrong.
And it took 20 years to get the bastards to admit we needed Network Prefix Translation, and it will be 20 more before it's widely supported enough for use. NAPT in IPv4 scared the IPv6 purists enough for them to fight a generation-long war against the simple idea that ease of use matters for someone other than developers, universities flush with grant money and large corporations.
-
Thursday 2nd March 2017 13:24 GMT Gerhard Mack
Re: Wrong.
"And it took 20 years to get the bastards to admit we needed Network Prefix Translation, and it will be 20 more before it's widely supported enough for use. NAPT in IPv4 scared the IPv6 purists enough for them to fight a generation-long war against the simple idea ease of use matters for someone other than developers, universities flush with grant money and large corporations."
Again, it has been supported and completely usable since before you wrote the original article in 2012.
You are like the Breitbart of the tech world.
-
Thursday 2nd March 2017 20:28 GMT Trevor_Pott
Re: Wrong.
An RFC existing doesn't make anything supported or usable. Being incorporated into working products does. Having applications not coded to expect end-to-end and having them not die when there's a prefix change does.
In short: years and years of IPv6 "support" has to be completely undone and redesigned. NPT hadn't been done then, and is still incredibly rare today. Of course, we could always use the traditional IPv6 purist answer: everyone should throw away everything they have and buy the most expensive possible new everything and just hope it supports what you need. Just do that regularly and you'll clearly be fine.
Or, you know, not use IPv6 until everyone gets their shit together.
RFCs are only "usable" once broadly implemented. Still fucking waiting...
-
-
Friday 3rd March 2017 15:24 GMT Trevor_Pott
Re: Wrong.
@Orv: then you'd clearly be surprised at the number of network equipment vendors still shipping models today that don't support it. Let alone any of the midmarket, SMB or consumer level stuff, which are the folks that really need it. You know, because of renumbering. We're still a decade away from NPT getting to the folks who need it. And judging from the reactions of IPv6 purists here in this very thread, we might have to wait more than a decade before the purists decide they'll support NPT in the software they develop.
Awesome. And just think, had the IPv6 elites not been stubborn asshats for 15 years, we could have solved all of this ages ago and could be using it today in a manner that met everyone's needs. But people suck.
-
Monday 6th March 2017 18:21 GMT Orv
Re: Wrong.
I will admit that consumer-level stuff mostly doesn't support NPT. But consumers are generally not running static IPs to begin with. If their prefix changes they just get the new prefix via router discovery and carry on.
I can't speak to SMB equipment. But for the cost of a low-end server you can use pfSense, which does support it, and has for quite a while now. That code originated with OpenBSD, and no one has ever accused them of being insufficiently pure. ;)
-
-
-
-
-
Saturday 4th March 2017 22:40 GMT Gerhard Mack
Re: Wrong.
"There's no need for that sort of language around here."
How else to describe it? The guy has invented motivations in his head for missing features that aren't actually missing, ignored several people here who told him he's wrong, and continued to heap insults on the IPv6 designers based on his original misconceptions.
The only thing that might be true is that SMB and home equipment doesn't support it (I don't know one way or the other), but it's hardly the fault of the IPv6 designers if manufacturers didn't bother to implement features available from other manufacturers.
-
-
-
-
-
-
Thursday 2nd March 2017 04:31 GMT Yes Me
NPTv6
Um, prefix translation is not the same as address translation. So some of the downsides of NAT44 don't apply - no issue with port sharing, since there's a full set of ports for each client. But there simply are no IPv6 scenarios that *need* translation; you firewall off the threats, so NPTv6 brings no benefits, only the downsides.
-
Thursday 2nd March 2017 22:43 GMT Orv
Re: NPTv6
"you firewall off the threats, so NPTv6 brings no benefits, only the downsides."
The advantage of NPTv6 isn't security; it's not having to individually renumber all your machines if you change ISPs. With NPTv6 you can put them all in private IP space and just translate whatever prefix your ISP gives you.
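On Linux, one way to approximate stateless NPTv6 (RFC 6296) is the 1:1 NETMAP target; the ULA and public prefixes here are hypothetical:

```shell
# Outbound: rewrite the internal ULA prefix to the ISP-assigned prefix
ip6tables -t nat -A POSTROUTING -o eth0 -s fd00:aaaa::/64 -j NETMAP --to 2001:db8:1::/64

# Inbound: map the public prefix back to the internal one
ip6tables -t nat -A PREROUTING -i eth0 -d 2001:db8:1::/64 -j NETMAP --to fd00:aaaa::/64
```

If the ISP hands out a new prefix, only these two rules change; the hosts keep their ULA addresses.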
-
-
-
-
Thursday 2nd March 2017 05:30 GMT Trevor_Pott
Re: SixxS
Well, I don't go hanging websites off of a SixxS tunnel. But it's really the best solution for the end-users who want to, for example, learn about IPv6 at home so that they aren't left behind as the rest of the world moves on. You know, because their ISPs are from the bloody dark ages.
-
-
Thursday 2nd March 2017 10:37 GMT Wensleydale Cheese
"Not so fond of IPv6. Maybe if it wasn't drafted by, and subsequently lorded over by, a bunch of elitist fuckbaloons that don't give a rat's ass about anyone who can't stump up a few million a year in internet connectivity I might care ...
Nice rant there, Trevor.
Perhaps OSI wasn't so bad after all. :-)
Back in the day I cussed quite a bit about OSI, but it was more to do with the lack of documentation and a cumbersome admin interface, and the bit of it I used did actually work once you'd sussed out the arcane commands.
It wasn't really a surprise that IPv4 won that battle.
-
Thursday 2nd March 2017 04:25 GMT Yes Me
Real programmers *do* use IPv6
Yes, people who want to offer IPv6 service for the increasing number of IPv6 clients can just do it. Either (boring) use a tunnel provider or (exciting) use a CDN such as Cloudflare. No need for your own ISP to lift a finger.
Also, Trevor: a 2012 reference for IPv6 issues? The world has changed a number of times since then. However, I agree: any applications or web service provider needs to support IPv4-only users well into the future, as well as the growing IPv6 population.
I'm still puzzling about what https://nir.regmedia.co.uk at 2400:cb00:2048:1::104.25.78.107 has to do with anything. It's pingable.
-
-
Wednesday 1st March 2017 13:23 GMT DesktopGuy
There are quite a few IP addresses if the corporates share
If you want to free up addresses, get the big old corporates who hopped on in the early 90s to return some addresses.
There is a whole swath of /8 blocks, each containing 16,777,216 IPs.
Get Apple, Ford, GE, Prudential, UPS to return a few as good corporate citizens.
Maybe get the US Department of Defense (which has the most addresses of any entity by a massive margin) to be a little more sharing.
https://en.wikipedia.org/wiki/List_of_assigned_/8_IPv4_address_blocks
-
Wednesday 1st March 2017 14:08 GMT Arthur the cat
Re: There are quite a few IP addresses if the corporates share
There are a whole swath of /8 blocks that each contain 16,777,216 IPs.
Get Apple, Ford, GE, Prudential, UPS to return a few as good corporate citizens.
I don't know if it's still the case, but 10-15 years ago a friend of mine worked in IT for a GE subsidiary, and GE had the stupidest net policy I've ever come across. All internal GE machines had addresses in their assigned 3/8 network, but none of them could be externally visible. World-facing machines were usually on class C network addresses. They could have switched their internal machines from 3/8 to 10/8 and handed back the entire class A network for reuse.
-
Thursday 9th March 2017 07:16 GMT Vic
Re: There are quite a few IP addresses if the corporates share
All internal GE machines had addresses in their assigned 3/8 network but none of them could be externally visible
Ericsson has the same policy.
They could have switched their internal machines from 3/8 to 10/8 and handed back the entire class A network for reuse.
The trouble is, that's not free. There's a chunk of work to do the change, make sure every machine has actually moved, test it all, and sort out anything that doesn't work. That's potentially quite a bit of outlay for a large company - and the return on it is absolutely nothing at all. Thus there is no motivation to do the job...
Vic.
-
-
Wednesday 1st March 2017 17:07 GMT Ken Hagan
Re: There are quite a few IP addresses if the corporates share
Trouble is, those addresses are probably hard-coded in several thousand configuration files and compiled into dozens of bespoke apps, at least some of which no longer have any source code (or build system even if the source code could be found).
Even if nothing needed to be done, it would cost squillions for the company in question to verify (to some level of confidence) that they could safely move away from a block of 16 million addresses. It would then cost even more to actually make the switch in an organisation that presumably runs 24/7 and doesn't want an awkward "transition period" even if that period is only a few hours long and can be scheduled over a public holiday.
After classless routing became a thing, Stanford Uni spent several years (as far as I can tell) patiently hunting down such references and cleaning out their 36.0.0.0/8, block by block, until they were able to hand back most of it in 2000. This is a university, so they (and their users) have the expertise to spot problems and fix them. They could presumably tolerate a bit of collateral damage within a student population that churns every few years anyway. And this was the 90s, when usage was probably less than now in any case.
I doubt whether anyone else will ever try it. For a commercial outfit, the proposition is so implausible that I doubt they'd even suggest a price if ICANN approached them.
-
Wednesday 1st March 2017 20:58 GMT John Crisp
Re: There are quite a few IP addresses if the corporates share
"Even if nothing needed to be done, it would cost squillions for the company in question to verify (to some level of confidence) that they could safely move away from a block of 16 million addresses. It would then cost even more to actually make the switch in an organisation that presumably runs 24/7 and doesn't want an awkward "transition period" even if that period is only a few hours long and can be scheduled over a public holiday."
So I presume they'll never be moving to IPv6 then.....
Yup, IPv6 is a clusterfsck.
-
-
Wednesday 1st March 2017 18:46 GMT Anonymous Coward
Re: There are quite a few IP addresses if the corporates share
I know AT&T is currently working on a project to try to reclaim some of its IPv4 blocks. Part of the problem is that they have merged with other ISPs and there was not always good record keeping. I was told that it would be impossible for AT&T to reclaim all of the unused IP blocks.
-
Thursday 2nd March 2017 05:56 GMT plugwash
Re: There are quite a few IP addresses if the corporates share
By the end the RIRs were getting through about two /8s worth of IPv4 each month. If all the legacy blocks could have been freed up instantly it might have bought us a couple of years but realistically it would have taken years for those companies to execute their migrations and it would have been a massive fight to force them to do it.
Free and easy IPv4 had to end. Embarking on a massive legal fight to slightly delay that end would not have been a good use of anyone's time or effort.
-
Thursday 2nd March 2017 14:33 GMT Charles 9
Re: There are quite a few IP addresses if the corporates share
Plus there's the other matter IPv6 solves: ROUTING, which IPv4 scrounging will only complicate because you'll be scrambling the routing tables even more, and they're ALREADY to the point that backbone routers are starting to choke.
-
-
-
Wednesday 1st March 2017 13:35 GMT g00se
Letsbecareful
Letsencrypt certs don't last long, so you'll want to set up a nightly cronjob to make sure certbot looks for any certificates about to expire and renews them. The cronjob command is simply /root/letsencrypt/certbot-auto renew, which in the case of this guide would have to be run as root.
I know nothing of that script/binary, yet it's worrying. Am I a reserve soldier in a botnet army just waiting to be called up? It's free, so am I the product? How much do I need to worry?
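(For reference, the nightly job the quoted passage describes is just a root crontab entry along these lines - the time here is arbitrary, and certbot's renew subcommand only replaces certificates that are close to expiry:)

```
30 2 * * * /root/letsencrypt/certbot-auto renew --quiet
```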
-
Wednesday 1st March 2017 13:36 GMT WonkoTheSane
Simple answer
"The ISP of one of my clients, for example, wants me to detail the name of each computer that will be attached to a given IPv6 address and what it's used for. I just stare at the spreadsheet like a deer in headlights, unsure where to even begin with something like that. I'm not sure that drawing a penis in the spreadsheet cells and sending it back labelled "my answer makes as much sense as your question" is going to get me what I need."
Just enter "The router YOU supplied me, you wonks!" and send the sheet back.
Anything behind the router is NONE of their business.
-
Wednesday 1st March 2017 14:07 GMT Steve the Cynic
Re: Simple answer
Two days before Christmas 2016, two guys from Orange (France) came to my flat dragging a reel of optical fibre and a Livebox 4. When they left, I had a new public IPv4 address (I've no idea if it changes), 200+ Mbps down / 100 Mbps up internet service, and a 2a01:stuff/56 IPv6 prefix. (No, I'm not telling you what specific stuff...)
OK, that's cool, but how hard did I have to fight to get this?
I didn't. I'd have had to fight to *stop* it. Yes, that's right. They almost forced it on me. And no questionnaire on what's inside my network...
-
Thursday 2nd March 2017 13:35 GMT SImon Hobson
Re: Simple answer
They almost forced it on me. And no questionnaire on what's inside my network
That's becoming the norm for up-to-date ISPs. The one wanting info on everything down to your inside leg measurement is probably still stuck in the "IPs are scarce, everything has to be justified" mindset and just hasn't woken up to the fact that they can hand out a /56 or /48 to all customers without having to think about it.
Many ISPs are now turning on IPv6 in their supplied routers, so many users have it without even realising. My own ISP at home ran a trial, then it went quiet and we've heard nothing from them for a couple of years - so I'm still stuck using a tunnelbroker tunnel (thanks Hurricane Electric) for now.
-
-
-
Wednesday 1st March 2017 22:35 GMT Down not across
Re: Simple answer
They supply a media converter. I build my router out of an Atom and CentOS. :)
Ok, I'm curious. Did you try pfsense, and if you did what made you choose the way you did?
(yes, I'm aware the Linux network stack has improved a lot and supports more obscure hardware better than FreeBSD)
-
Wednesday 1st March 2017 23:08 GMT Trevor_Pott
Re: Simple answer
I know pfsense. I prefer Linux. The reason is simply Webmin. Webmin is a great GUI. We use it for other Linux endeavors. Also: I can load up the edge device with a bunch of filters, packet sniffing and the like and roll to taste.
Familiarity does have its positive attributes.
-
-
-
-
Wednesday 1st March 2017 14:34 GMT Lee D
If anything, I've only ever required one IP per workplace but been given 5 per connection, plus 5 for any external server we rent.
Were we able to get, e.g. BT and Virgin, to properly co-operate on their lower-end of leased lines so we could have proper AS announcements, we would only need the one IP address per site.
From that, SNI is a given and you can safely NAT thousands of users without a problem. I have Smoothwall reverse proxy for a number of things (not least, it can inspect the traffic en-route to internal services acting as IPS at the same time as SSL-wrapping services which aren't SSL-capable internally) but I wouldn't use it to save on IP addresses.
There's no reason that I need lots of external IP's any more. One is sufficient and every connection that gives me a second IP, I'd be more than happy to set up in a load-balance/failover configuration on the same IP anyway, but it's generally not possible with normal business offerings. If anything, having one IP makes things so much simpler.
Currently I run two sites with an external server range in a datacenter, including two leased lines and two VDSL lines. I currently have three ranges of five IPs (six including the gateway IPs, and the VDSLs include a Cisco router which is doing bonding to one internal range), each of which only one is used to refer to that range and add it to the firewall. And only one is publicly advertised as the destination of every DNS setting we use. On top of that I have a small range for the external servers, again, we only use one.
I make that a wastage approaching 85% just for a small business. If everyone is doing the same, it's no wonder there are no IPv4 left.
And again, Reg: when are you going to deploy AAAA records? Look - you could use this article to put a machine on IPv6, add it as the AAAA record, and just have it proxy to your IPv4 site. But I'm guessing the answer will be "we're working on it" for about the seventh year in a row.
-
Wednesday 1st March 2017 14:42 GMT Eugene Crosser
Thumbs up, but have to respectfully disagree with some things
> ...the real barrier to adoption is that consumer-facing ISPs in many parts of the world still aren't handing out IPv6 addresses to subscribers.
Indeed. For some reason, this fact is often overlooked, while other less important obstacles are undeservedly highlighted.
> NAT breaks the end-to-end model obsession that is responsible for most of the horrible things about IPv6.
As long as you consider withdrawal from NAT addiction to be the most horrible thing about IPv6...
> NAT is a ~~fantastic means~~ horribly hacky way of plopping an entire network down behind a single IP address and making individual servers behind that IP available on different ports.
And it is only possible because the original design accidentally overbooked the port namespace, and underbooked the address namespace.
(Perhaps, the concept of classless subnetting should have been extended to include the port part... Though dealing with ICMP and other non-TCP-or-UDP protocols would be tricky. And it is too late anyway.)
> cd ~/letsencrypt
> DOMAINS="-d example.com -d www.example.com" /root/letsencrypt/letsencrypt_gen
Except you will have to use one certificate for all domains hosted on your server. Which kind of defeats the purpose of TLS, at least in part.
There have been suggestions to make it possible to pass the `host` indication before the TLS handshake, but none of them took off, to the best of my knowledge.
-
Wednesday 1st March 2017 17:17 GMT Trevor_Pott
Re: Thumbs up, but have to respectfully disagree with some things
A) I'm sorry, NAT has a purpose. That purpose is renumbering. So I'm not listening to anything else you have to say about IPv6. Your opinions are now invalid.
B) You don't have to have one certificate with all the domains on your server using my method. Only one certificate per server {} block. Each server {} block gets its own cert, and you can have multiple server {} blocks point to a single backend server, if you want.
So um...NEXT!
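A hedged sketch of what those server {} blocks look like in nginx - hypothetical names, Let's Encrypt-style cert paths, and a made-up backend address, not the article's actual config:

```nginx
# Two names on one IP: SNI selects the server block, and each block
# carries its own certificate while proxying to the same backend.
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    location / { proxy_pass http://192.168.0.10; }
}
server {
    listen 443 ssl;
    server_name other.example.net;
    ssl_certificate     /etc/letsencrypt/live/other.example.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/other.example.net/privkey.pem;
    location / { proxy_pass http://192.168.0.10; }
}
```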
-
Thursday 2nd March 2017 13:42 GMT SImon Hobson
Re: Thumbs up, but have to respectfully disagree with some things
NAT has a purpose
NAT is horrible, cludgy, breaks stuff (I have a special, NSFW, vocabulary for the "we are proud to break stuff" approach taken by Zyxel), and forces countless developers to WASTE time working around the shit it causes. If all the effort that went into NAT and working around its breakage had gone into other stuff, the world would be a better place today.
NAT comes into the category of "if the only tool is a hammer, all problems look like a nail" - and too many people still haven't learned to recognise screws, bolts, etc !
NPT on the other hand does have a place as long as it's done sensibly, but even then I'd argue there are few places where it's the right tool.
-
Thursday 2nd March 2017 17:36 GMT Eugene Crosser
Re: Thumbs up, but have to respectfully disagree with some things
> A) I'm sorry, NAT has a purpose. That purpose is renumbering. SO I'm not listening to anything else you have to say about IPv6. Your opinions are now invalid.
Renumbering, yeah... I guess it should be possible to do IPv6 NAT for that, which would just rewrite the prefix without touching the port. It would be easier to implement, more robust (because stateless), and would work for port-less protocols without any special arrangements. You'd still have to deal with packets that carry addresses in the payload (like ICMPv6 Destination Unreachable etc.), but it's much less ugly than the IPv4 NAT mess. I did not check if a standard exists.
> B) You don't have to have one certificate with all the domains on your server using my method. Only one certificate per server {} block. Each server {} block gets it's own cert and you can have multiple server {} blocks point to a single backend server, if you want.
I really fell behind on this one.
-
Wednesday 1st March 2017 17:53 GMT Mike Dimmick
Re: Thumbs up, but have to respectfully disagree with some things
Server Name Indication has been supported in browsers for many years. The last major web server to support it was Microsoft's IIS in Windows Server 2012.
-
Wednesday 1st March 2017 19:35 GMT bombastic bob
Re: Thumbs up, but have to respectfully disagree with some things
"server name indication" - like for name-based hosting? works for https, too.
At least, last time I tried it. Which was recently. Using Apache. Just sayin'.
I have multiple https certs pointing to the same IPv4 (and IPv6) address, but by name (and not IPv4 address) so that the IPv6 works, too, with the same cert. Seems to do just fine. Yeah, I self-cert. Load my own root cert in the browsers and it all just works. There's a spot in the cert where you can specify where to download it from, as I recall.
proxy would work just to re-direct it to private web servers, but if you want name-based hosting on a shared IP address, it's pretty straightforward.
-
-
Wednesday 1st March 2017 18:21 GMT really_adf
Re: Thumbs up, but have to respectfully disagree with some things
> There have been suggestions to make it possible to pass the `host` indication before the TLS handshake, but none of them took off, to the best of my knowledge.
SNI, mentioned in the article, has taken off. And the configuration described makes nginx use the information to select the certificate.
-
Thursday 2nd March 2017 05:56 GMT plugwash
Re: Thumbs up, but have to respectfully disagree with some things
"There have been suggestions to make it possible to pass the `host` indication before the TLS handshake, but none of them took off, to the best of my knowledge."
Server name indication was standardised back in 2004. Unfortunately it has taken a long time for client support to reach the point that websites can seriously consider requiring it.
But we are now at that point: clients that don't support it (most notably Internet Explorer on Windows XP and the built-in browser on Android 2.x) are an ever-shrinking minority. Furthermore, clients old enough not to support it likely also don't support modern cryptography, and so are increasingly likely to find themselves frozen out anyway.
-
-
-
Wednesday 1st March 2017 20:43 GMT Dwarf
Re: WTF
That's because the ISP is trying to apply IPv4 public IP address logic to IPv6 space, without understanding that it serves no purpose there.
The point of the original IPv4 logic was to make sure that requests for a limited resource were at least given a sanity check; this is generally a requirement of the RIR.
-
-
Wednesday 1st March 2017 19:12 GMT Adam 52
"even the public cloud providers are getting stingy"
Are they? We specifically asked AWS and they said nothing to worry about.
A quick Google suggests they've got enough reserved to last about 30 years at the current growth rate, and a lot longer (centuries) if they maintain linear growth.
-
Thursday 2nd March 2017 02:16 GMT itzman
You don't need a reverse proxy to do this
Simply set up e.g. Apache to direct ALL https traffic to a script, inspect where the user thinks he has got to, and vector to the appropriate web pages.
Of course it breaks the authentication of HTTPS whichever way you do it. https expects that a single IP address will be a single authenticated object.
-
Thursday 2nd March 2017 05:26 GMT Trevor_Pott
Re: You don't need a reverse proxy to do this
What makes you think all of a person's websites can even run on the same version/configuration of Apache, PHP et al? Indeed, by using nginx I can inject a bunch of security into the stream for those sites that demand ancient versions of things.
-
-
Thursday 2nd March 2017 12:33 GMT mythic-beasts
Fancy an IPv6 rebuttal?
We're Mythic Beasts; we run the Raspberry Pi website with IPv6-only back ends. We also run a Raspberry Pi cloud, again with no IPv4 connectivity on the Pi - entirely IPv6 only.
We think you're wrong to dismiss IPv6 and embrace NAT; we think IPv6 makes your configuration easier and adds flexibility. It can improve your configuration and your interactions with a CDN, with monitoring and with email.
How do we write the rebuttal for el-reg?
Pete Stevens (Director).
-
Thursday 2nd March 2017 21:45 GMT Blotto
End to end is a myth
When does anyone truly connect directly to a remote web server?
For any half decent website you go through at least the following:
Firewall
Ids/ips
icap
Reverse proxy
Load balancer
SSL offload
Web server
Auth server
Content engine
Database
You rarely connect to a single remote host, even though the impression is that you do. For security, capacity and speed, the attributes of a connection are deliberately spread over a number of devices.
Decades of IPv4 usage have developed robust, mature methodologies that IPv6 is hellbent on ignoring for no good reason.
The bunfight between IPv4 & IPv6 has resulted in stupid decisions for IPv6 that bring back problems solved decades ago. While the original intention of IPv4 was end-to-end connectivity, we now know it's not always best practice, and breaking the session has many advantages. IPv6 needs to embrace and evolve the mature innovations of IPv4 instead of just throwing them away and asking us to start from scratch, even though v6 has been around for more than two decades. Just do NAT, tell everyone how bad it is, and let everyone make the choice themselves rather than force "no NAT" on us.
-
Friday 3rd March 2017 07:19 GMT Charles 9
Re: End to end is a myth
But without true end-to-end connectivity, you necessarily limit the abilities of many Internet users, preventing things like home-hosted servers. Peer-to-peer systems also take a big hit. And these problems get worse with CGNAT. These in turn are creating more central-controlled systems that become threats to privacy. Which would you prefer?
-
Friday 3rd March 2017 12:30 GMT Blotto
Re: End to end is a myth
I'd prefer IPv4's maturity with more addressing so people can run home servers / IoT if need be.
The war on NAT is idiotic; allow people to choose to NAT or to do end to end.
How many people will host servers at home? If you want to, do as this article suggests and share out your port 80 and 443 to a number of named URLs; if a static IP is that important, buy some from your ISP.
Peer to peer: how many internet users care about peer to peer? Not many, in the grand scheme.
How many people are affected by CGNAT? If eBay, Amazon, Google were impacted by users on CGNAT not being able to reach them, they'd devise means to overcome that.
The vast majority of Internet users just want to securely and safely connect to Google, Amazon, eBay, BBC News, government, bank, Facebook, Snapchat, or whatever. All those mainstream popular sites rely on tech that breaks the end-to-end session, by design, in order to offer scalable, resilient connectivity for visitors.
We have evolved from end to end being a necessity; it's no longer a barrier and in many respects is a burden. Virtual end to end is what we have, and the world carries on efficiently and profitably.
Think for 2 seconds: do you really think you are constantly connected to the same server when you connect to Google or Facebook? You're not; requests are sent to the next available machine for processing, and your session is a DB entry in some session controller system, removed from the server actually dishing out the HTTP(S).
-
Friday 3rd March 2017 13:10 GMT Charles 9
Re: End to end is a myth
"I'd prefer IPv4's maturity with more addressing so people can run home servers / IOT if need be."
You can have one or the other, NOT BOTH, because you'll scramble the IPv4 routing tables, and these are choking the backbone routers. One of the things IPv6 fixes is this: it structures the front half of the address to prevent a recurrence.
"How many people will host servers at home?if you want, do as this article suggests and share out your port 80 and 443 to a number of named URL's if a static IP is that important, buy some from your ISP"
Oh, so you want the ISP to be Big Brother?
"How many people are affected by CGNAT, If eBay, Amazon, google where impacted by users on CGNAT not able to reach them they'd devise means to over come that."
Ask the Asians, many of whom are now behind one or more CGNATs. Or big cell phone providers, who can have more than 16 million customers at a time: too big for even an /8 internal network. Thus why they're some of the biggest forerunners of IPv6. You want to talk to cell phones? Better learn IPv6. Amazon, Google, etc. ARE on IPv6 because they know this. And BTW, the reason Asia doesn't help push IPv6 is because most of their commerce is LOCAL (BEHIND the NATs) in nature. Like how Baidu's the main e-commerce site in China.
"Think for 2 seconds, do you really think you are constantly connected to the same server when you connect to google or facebook? your not, requests are sent to the next available machine for processing, your session is a DB entry in some session controller system, removed from the server actually dishing out the http(s)"
Think for 2 seconds. Do you want your home systems controlled by YOU and ONLY YOU, with only a home server to link up that uses neither HTTP, HTTPS, OR SNI? Or do you want what's happening now, with vendors providing the NAT-piercing links and becoming Big Brothers while they're at it?
TL;DR: I would prefer the anarchy of IPv6 and the ability to determine whether or not my endpoints are hooked up (using firewalls) than the police state of being forced to run behind one or more NATs that aren't likely to be under my control and therefore be beholden to big Internet companies and their lack of humanity or due care and attention.
PS. There's more to the Internet than just the World Wide Web.
-
Friday 3rd March 2017 15:59 GMT Blotto
Re: End to end is a myth
#"You can have one or the other, NOT BOTH because you'll scramble the IPv4 routing tables, and these are choking the backbone routers. One of the things IPv6 fixes is this by structuring the front half of the addresses to prevent a recurrence."
What a load of bollocks. IPv6 doesn't fix this; it just resets the mess to slowly become a mess again.
IPv4 routing tables are not choking; it's a matter of memory and CPU. Due to ever smaller subnets allocated in non-contiguous allotments assigned to varied ASes, summarisation is no longer as efficient as it used to be.
Adding NAT to IPv6 does not break IPv4.
IPv4 working practices, including NAT, with double the addresses or even with IPv6-style addresses, would be fine.
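The summarisation point is easy to demonstrate with Python's stdlib ipaddress module: contiguous allocations collapse into a single route entry, while scattered legacy blocks cannot be aggregated at all. The prefixes below are documentation ranges, not real allocations.

```python
import ipaddress

# Four contiguous /26s summarise into one /24 route...
contiguous = [ipaddress.ip_network(f"198.51.100.{i}/26") for i in (0, 64, 128, 192)]
print(list(ipaddress.collapse_addresses(contiguous)))
# -> [IPv4Network('198.51.100.0/24')]

# ...but non-contiguous blocks stay as three separate table entries.
scattered = [ipaddress.ip_network(n) for n in ("192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24")]
print(list(ipaddress.collapse_addresses(scattered)))
```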
#"Oh, so you want the ISP to be Big Brother?"
WTF? the ISP is already big brother, they see all your outbound and inbound traffic, they have to as they have to route it for you.
#"Ask the Asians, many of whom are now behind one or more CGNATs. Or big cell phone providers, who can have more than 16 million customers at a time: too big for even an /8 internal network. Thus why they're some of the biggest forerunners of IPv6. You want to talk to cell phones? Better learn IPv6. Amazon, Google, etc. ARE on IPv6 because they know this. And BTW, the reason Asia doesn't help push IPv6 is because most of their commerce is LOCAL (BEHIND the NATs) in nature. Like how Baidu's the main e-commerce site in China."
I'm not hearing people moaning about CGNAT. How the hell can a whole continent (clients and servers) be behind CGNAT? Just how does that work? If commerce and clients are behind CGNAT and it still works, and no one complains because they are all behind it, then what's the problem? Sounds like the only one with a problem is you.
#Think for 2 seconds. Do you want your home systems controlled by YOU and ONLY YOU, with only a home server to link up that uses neither HTTP, HTTPS, OR SNI? Or do you want what's happening now, with vendors providing the NAT-piercing links and becoming Big Brothers while they're at it?
My home systems are currently only controlled by me and only me. Yes, I have remote access, IoT etc., and it all works fine. If I had a need I'd purchase static IPs from my ISP; the one dynamic address I currently have is OK.
#TL;DR: I would prefer the anarchy of IPv6 and the ability to determine whether or not my endpoints are hooked up (using firewalls) than the police state of being forced to run behind one or more NATs that aren't likely to be under my control and therefore be beholden to big Internet companies and their lack of humanity or due care and attention.
The only NAT of significance that my home traffic passes through is the one I have control of. Yes, I hide my proxy traffic behind my FW IP; I can also do manual NAT, especially useful for the non-www systems at home I remotely connect to. The NAT on my work's infrastructure hasn't stopped me connecting to my home either; their proxy policies have stopped access to my home web server (curse that Bluecoat not letting me connect to IPs or free hosting URLs), but there are workarounds.
The problem is the insistence & belief that NAT is evil. It's not; it's an enabler & should not be disregarded. The IETF has even recognised and conceded that IPv6 NAT is necessary and come up with a partial reinstatement in NPTv6 (often called NAT66). https://tools.ietf.org/html/rfc6296
The IETF is expending great effort in not NAT'ing yet keeps coming up with NAT-like solutions. Why emulate NAT if it's such a bad thing?
https://tools.ietf.org/html/rfc7157#page-3
-
Friday 3rd March 2017 19:46 GMT Charles 9
Re: End to end is a myth
"What a load of bollocks, IPv6 doesn't fix this, it just resets the mess for it to slowly become a mess again."
If you can choke 64 bits of addressing, I'd love to see how you produce the matter needed to create that many nodes.
"IPv4 routing tables are not choking, its a matter of memory and cpu. "
And guess what? Backbone routers have FIXED memory, not to mention not a lot of time to do their work, so they do most of their routing in hardware, limiting the amount of RAM they can use. Thus why IPv4 routing tables are FIXED at 512,000 entries. Plus, because they're high-performance, they're expensive.
"I'm not hearing people moaning about CGNAT. How the hell can a whole continent (clients and servers) be behind CGNAT? Just how does that work. If commerce and clients are behind CGNAT and it still works and no one complains because they are all behind it, then what's the problem? Sounds like the only one with a problem is you."
Because like I said most of them do business locally (which to them is BEHIND the NAT).
"the only NAT of significance that my home traffic passes through is the one i have control of, yes i hide my proxy traffic behind my FW IP, I can also do manual NAT too especially useful for the non www systems at home i remotely connect to. The NAT on my works infrastructure hasn't stopped me connecting to my home either, their proxy polices have stopped access to my home web server (curse that bluecoat not letting me connect to IP's or free hosting URL's), there are however workarounds."
Wait until you're behind a CARRIER-grade NAT (CGNAT). Then you WON'T be in control, and odds are asking for a port or even an exposed IP address will be harder than a moonshot. Then you'll be at the mercy of other providers who can abuse their position to become Big Brothers.
"My home systems are currently only controlled by me and only me. yes i have remote access, IOT etc all works fine. If i had a need i'd purchase static IP's from my ISP, the 1 dynamic address i currently have is ok."
And if NONE of the ISPs in your area offer it? It's not like you can just move (which in the US may not be an option, either, since you'll just move from one monopoly to another).
"The problem is the insistence & belief that NAT is evil, its not, its an enabler & should not be disregarded. IETF has even recognised and conceded that IPv6 NAT is necessary and come up with a partial reinstatement with NAT66. https://tools.ietf.org/html/rfc6296"
Which is one-to-one. They don't have any problem with one-to-one NAT. It's one-to-many they don't like because it removes the capability. Give us the option. CGNAT prevents the option from ever existing.
-
Saturday 4th March 2017 11:04 GMT Blotto
Re: End to end is a myth
±"If you can choke 64 bits of addressing, I'd love to see how you produce the matter needed to create that many nodes."
What are you on about? You're displaying a huge lack of understanding here.
±"And guess what? Backbone routers have FIXED memory, not to mention not a lot of time to do their work so they do most of their routing in hardware, limiting the amount of RAM they can use. Thus why IPv4 routing tables are FIXED at 512,000 entries. Plus because they're high-performance, they're expensive."
Again, more nonsense. Backbone routers do not have fixed memory. Whose backbone router you buy determines how much memory it has and whether that memory can be upgraded; it's also totally possible to roll your own in a VM and, guess what, you can chuck as much RAM and SSD and CPU as you want at it. IPv4 routing tables are not fixed at 512k entries, and just what flavour of routing tables are you talking about? The global BGP IPv4 table is ~659k entries, so it should be well broken by your reasoning, yet the internet continues to work and no one is complaining about a capacity issue or a need to migrate to some new version of BGP. As IPv6 offers significantly more addressing, it increases the potential for exponentially more entries in the global BGP tables for IPv6. No one is flagging this as an issue.
Fragmentation by people moving their IP ranges from one ISP to another is a cause of additional entries in BGP tables.
Proper networking kit is expensive, especially by comparison with the dross you can pick up off the shelf at Maplins or PC World: £1500 for a 48-port L2 2960, £4k for a 3850. It's not because they're high-performance (Nexus and ASA boxes costing tens of thousands run Pentiums, Celerons or Core i3s); a large part goes to the software and hardware design that ensures reliability and longevity.
±"Because like I said most of them do business locally (which to them is BEHIND the NAT)."
So CGNAT works great for everyone behind CGNAT who connects to someone else that's also behind CGNAT? That really doesn't make any sense.
"Which is one-to-one. They don't have any problem with one-to-one NAT. It's one-to-many they don't like because it removes the capability. Give us the option. CGNAT prevents the option from ever existing."
Having NAPT in IPv6 gives us the option of using it or not using it. Mandating that the standard do everything possible to avoid NAT, then allowing just 1:1 NAT, is a major headache for all and is holding back IPv6 roll-out. Permitting IPv6 NAT would spur the CGNAT broadband carriers to move to IPv6 and drop CGNAT; they must do CGNAT for their subscribers to connect to IPv4. If IPv6 were so great, they'd have moved to it first, both clients and servers. Why have they not done so?
Lastly, do you know what the IPv4 and IPv6 firewall policy on your mobile phone is? Can you change it? When was the last time you heard of masses of mobile phones being hacked?
It would help if you did some research into things before you posted them.
-
-
-
Monday 6th March 2017 03:14 GMT Kiwi
Re: End to end is a myth
Or do you want what's happening now, with vendors providing the NAT-piercing links and becoming Big Brothers while they're at it?
Serious question here.. You keep banging on about ISPs being able to route directly into machines behind NAT routers (especially if they know the internal IP of the machine in question), yet I cannot find any reference to this online.
Can you provide a link to some article describing how this is done please? I'd love to know so I can mitigate any risks.
(I am aware that some routers have ports open for ISP techs to "fix" things if they go wrong - I personally "fix" that by closing off any remote-admin ports and pointing the DMZ at a non-existent machine (e.g. if the DHCP pool is .100-.200, I'll point the DMZ at .236). Aside from these, I cannot find any means the ISP could use to route traffic from outside to inside without a specific NAT port mapping and an appropriate service listening on said port.)
Thanks.
-
Monday 6th March 2017 21:24 GMT Nanashi
Re: End to end is a myth
NAT does not make life easier in the general case. It's just an extra layer that gets in the way and adds extra admin overhead to everything you do. It can be useful in some targeted cases, but "on every network, all the time, for everything" isn't "some targeted cases".
> Can you provide a link to some article describing how this is done please? I'd love to know so I can mitigate any risks.
I don't have an article handy, but there's nothing complicated involved. Literally all you need to do is send the router a packet with a dest address of 192.168.1.42 (or whatever IP you're trying to talk to) and it'll route it on. It's just basic routing.
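To illustrate the "it's just basic routing" point, here's a toy longest-prefix-match lookup (the prefixes and next hops are invented for the sketch): once the hop directly upstream of the home router holds a route for the LAN prefix, packets for 192.168.1.42 get forwarded like any other packet.

```python
import ipaddress

# Toy forwarding table for the hop directly upstream of a home router.
# All prefixes and next hops here are invented for illustration.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "core",             # default route
    ipaddress.ip_network("12.1.1.0/24"): "local-segment",  # customer WAN segment
    ipaddress.ip_network("192.168.1.0/24"): "12.1.1.5",    # aimed at the LAN behind the customer's WAN IP
}

def next_hop(dst: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    return routes[max(matches, key=lambda n: n.prefixlen)]

# With that one extra route installed, a packet destined for a LAN
# address is forwarded straight at the customer router. No NAT magic.
print(next_hop("192.168.1.42"))  # -> 12.1.1.5
print(next_hop("8.8.8.8"))       # -> core
```

Whether the home router then forwards it on to the LAN host is a separate question (see the discussion below about what it does with traffic that matches no NAT entry), but getting the packet *to* the router's WAN interface with a private destination address is plain destination-based routing.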
-
Monday 6th March 2017 23:00 GMT Blotto
Re: End to end is a myth
@Kiwi
"Serious question here.. You keep banging on about ISP's being able to route directly into machines behind NAT routers (especially if they know the internal IP of the machine in question) yet I cannot find any reference to this online.
Can you provide a link to some article describing how this is done please? I'd love to know so I can mitigate any risks."
In theory, given a home ISP-provided router running NAT but no firewall, a router directly connected to that home ISP router could route to addresses on the LAN side using un-NAT'd addressing. For example: LAN address = 10.0.0.1/24, WAN IP = 12.1.1.5, ISP's router at the cab or exchange = 12.1.1.1.
Someone on 12.1.1.1 may be able to route to 10.0.0.0/24. This could be done from a distance if routes to your 10 range are added across the ISP's network, if the attacker had some sort of tunnel to that carrier-side router directly connected to yours, or if the carrier-side router had a NAT mapping your 10 range to a routable address.
I say "in theory" because this relies on the NAT table not having a default null rule at its end for incoming traffic that doesn't match an entry. NAT is stateful: each packet's state is checked against the NAT table (remember those routers that claimed "stateful packet inspection"?), and routing all traffic that doesn't match the table to null turns the NAT router into a cheap barebones firewall. That's a sensible default, but it's no good if the traffic is then meant to go through a more feature-rich firewall policy like iptables, where unsolicited traffic may be permitted. So if your cheap ISP-provided router claims SPI, it may drop all unsolicited traffic from its WAN; if it also has a firewall policy, the traffic may be handed to that policy before or after the NAT step, depending on the traffic flow.
Either way, the only way someone is able to access your LAN addressing is through targeted malicious configuration of the directly connected carrier's router. If that's happening, you've got bigger problems that even a firewall on your router won't solve, as you'd likely be subject to man-in-the-middle attacks and all sorts of spoofing.
Even if your ISP were routing to your internal LAN, someone on a different ISP wouldn't be able to route to your NAT'd LAN, as the routes wouldn't exist across the internet (unless the ISP is NAPTing your range to a public address).
@Nanashi
Yes, that's routing - provided the default NAT policy is not to drop traffic that doesn't match its table.
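The fork described above - inbound traffic either matching a NAT table entry, being dropped by a default null rule, or falling through to whatever process comes next - can be sketched as a toy model (the ports and addresses are invented, and real connection tracking is far more involved than this):

```python
# Toy outbound-NAT state table: wan_port -> (lan_ip, lan_port).
# Entries are created by outbound traffic; this mapping is invented.
nat_table = {
    40001: ("10.0.0.1", 51514),  # created when 10.0.0.1 opened an outbound connection
}

def handle_inbound(wan_port: int, drop_unmatched: bool):
    """Model a consumer router's inbound path."""
    if wan_port in nat_table:
        return ("forward", nat_table[wan_port])  # translated to the LAN host
    if drop_unmatched:
        return ("drop", None)                    # default null rule: cheap barebones firewall
    return ("next-process", None)                # handed on to a fw/route policy instead

# A reply to an existing session gets translated and forwarded:
print(handle_inbound(40001, drop_unmatched=True))   # ('forward', ('10.0.0.1', 51514))
# Unsolicited traffic is only dropped if the default null rule exists:
print(handle_inbound(22, drop_unmatched=True))      # ('drop', None)
print(handle_inbound(22, drop_unmatched=False))     # ('next-process', None)
```

The whole disagreement in this thread comes down to that `drop_unmatched` flag: NAT-as-firewall is only true when the router nulls unmatched inbound traffic rather than passing it to a later, possibly permissive, policy.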
-
Tuesday 7th March 2017 12:02 GMT Kiwi
Re: End to end is a myth
in theory, given a home ISP provided router running NAT but no firewall
Thanks.. So my trying to figure out a way to do it using Whonix (so I can be both outside and inside my network on the same computer), coupled with the router's sievewall (blocks some things but probably lets a lot more through), coupled with my setting a DMZ outside the DHCP range (e.g. DHCP .100-.200, DMZ set to .236), coupled with the router hopefully dropping stuff (SSH isn't on 22 on the external side), should explain why I haven't managed to do this..
Now.. I wonder what the ISP will say if I pop over to the cabinet and have a poke around... (actually the ISP merely charges me for internet use, I doubt they have much more than billing hardware anywhere in the country)
Oh, and there probably would be several levels of hell to pay if any NZ ISP was caught trying this stuff on anyway.
Much thanks for your post.
-
-
Tuesday 7th March 2017 11:47 GMT Kiwi
Re: End to end is a myth
NAT does not make life easier in the general case. It's just an extra layer that gets in the way and adds extra admin overhead to everything you do.
Really? How so? Seriously. I use it often. How does it add "extra admin" and "get in the way"?
I don't have an article handy, but there's nothing complicated involved. Literally all you need to do is send the router a packet with a dest address of 192.168.1.42 (or whatever IP you're trying to talk to) and it'll route it on. It's just basic routing.
Doesn't seem to be a common practice though, or I am badly writing my search terms. No links to articles? That's what I want to read up on, not "it's just basic routing", because I've tried a few ways and obviously I'm doing something wrong. E.g. instead of ssh <somewhere> <port>, I should be able to set things up so I can effectively ssh <ip> and somehow have it routed to the internal IP, bypassing the port forwarding of the router? Charles9 says the ISP can route stuff directly into the machines (even if they don't have specific holes in the router), and you agree. But what I want to read up on is how it actually is done, because somewhere I am missing something that gets these packets through the router's NAT, through the router's firewall, and also past the firewall on the end machine.
-
Wednesday 8th March 2017 02:52 GMT Nanashi
Re: End to end is a myth
Because instead of connecting to the machine you want to connect to, you have to connect to a different IP, and then configure that machine to carefully rewrite the dest addr to try and pretend that the packet was going to the first machine. This is clearly more effort than just using the target machine's IP.
Because it means that addresses on packets change mid-flight, making debugging and reasoning about the network harder than if they didn't change.
Because software can't accept inbound connections anywhere near as easily without manual setup or relying on third-party servers. (Remember that IoT software that broke when S3 went down? Wouldn't've happened if the Android remote-control software connected to your IoT device, but because of pervasive NAT this sort of software tends to go through a server hosted by the manufacturer these days.)
Because split DNS is more effort than not split DNS is.
Because you can't run two services on the same port, leading to having to manage and keep track of lots of alternate ports for things like ssh. (Then there was that one time that I wanted to run two DNS servers -- can't do that with a single public IP because DNS doesn't do alternate port numbers.)
Because we don't have enough IPs to even give one public IPv4 per person, which means you're going to end up being put behind an ISP-run NAT at some point. Have you ever thought about how you're going to accept an inbound connection when your ISP has you behind NAT and doesn't let you log into their router to configure it?
I'm guessing that you, like I, grew up using NAT everywhere. It doesn't seem so bad when that's all you've ever used and you don't know any different, but then I got a decently-sized block of IPs which let me get rid of NAT completely on my home network, and life really is easier without it.
-
Thursday 9th March 2017 18:36 GMT Charles 9
Re: End to end is a myth
Here's a VERY simple test for you. Run a VirtualBox VM on your machine, with network add-ons included. The virtual machine can reach your machine and vice versa, this IN SPITE of both machines carrying different RFC1918 IPs in different subnets (yours is usually in the 192. range, VB uses the 10. range). This means, as long as you know how to route it (and the ISP would know how since you subnet to them), they can reach you if something doesn't get in the way first, and NAT doesn't get in the way here.
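(For what it's worth, a quick check with Python's `ipaddress` module - addresses invented - confirms the two ranges in that test are distinct RFC1918 blocks; whether one side can reach the other is purely a question of whether a route exists, which on a VirtualBox host it does, because the hypervisor holds routes for both sides:)

```python
import ipaddress

host_ip = ipaddress.ip_address("192.168.1.10")  # typical home-LAN address (invented)
vm_ip = ipaddress.ip_address("10.0.2.15")       # VirtualBox's default NAT range

# Both are private RFC1918 addresses, in different reserved blocks...
print(host_ip.is_private, vm_ip.is_private)               # True True
print(host_ip in ipaddress.ip_network("192.168.0.0/16"))  # True
print(vm_ip in ipaddress.ip_network("10.0.0.0/8"))        # True

# ...yet VM and host can reach each other, because the hypervisor routes
# between them. Reachability is a routing question, not an addressing one.
```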
-
Friday 10th March 2017 19:17 GMT Blotto
Re: End to end is a myth
@Charles9
That's routing not NATing
ISPs are meant to drop traffic to/from RFC1918 addresses, as they are not globally unique and, unless directly connected, the internet can't route to them. As I explained at length, yes, it is plausible for your ISP to route to your internal NAT'd addresses, but they must specifically target you and do some esoteric configuration to get that working. If they're doing that, you have other problems a firewall won't protect you from. NAT alone is effective against other unsolicited connections from those not directly connected to you.
-
Saturday 11th March 2017 13:38 GMT Charles 9
Re: End to end is a myth
"If they are doing that you have other problems a firewall won't protect you from."
And that alone is enough of a threat since they can be coerced by the law. Remember, trust no one.
With routing, you don't NEED to NAT. You GO AROUND it. If you really, REALLY wanted to protect your intranet, don't use NAT. Use a proxy.
PS. ISPs aren't SUPPOSED to route RFC1918 addresses, but many still do. If you take a very close look at a connection log, you'll probably run into some of them at some point.
-
Saturday 11th March 2017 19:03 GMT Blotto
Re: End to end is a myth
If you or someone else makes a bad firewall rule, then NAT has your back, as no one will be able to route in to you. You're entitled to use all the tools available to secure your environment: routing, NAT, firewalls, proxies, etc. NAT is so simple, especially as for most people it's already turned on and needs no ongoing administration, whereas routing and firewalls need additional work to set up and then ongoing support when changes are needed.
Please explain how RFC1918 addresses can be routed to you - and me, and all the other people reading this that use RFC1918 addressing - across the internet. Let's say the source is 12.10.8.6 and our destination is 10.1.1.6/24 (pretend we are all using the same LAN subnet).
I'm very interested to know.
-
Sunday 12th March 2017 13:18 GMT Charles 9
Re: End to end is a myth
You don't use EVERYTHING because someone can exploit one of those somethings to get around others (to use cybersecurity parlance, pwn one layer of security and then use it to bypass others). Use ONLY what you need. Plus you have to consider that more hoops to jump irks users who reach their limits and then start creating (exploitable) shortcuts.
-
Sunday 12th March 2017 23:10 GMT Blotto
Re: End to end is a myth
The whole point is to use systems with different exploits.
There are accreditation requirements that mandate the use of two different manufacturers of security systems - for example two different firewalls (yes, two firewalls in the path) - precisely because they will have different exploits. It's called defence in depth, and done right it's all invisible to users. A firewall, a proxy and NAT is a common setup, and absolutely no one is complaining.
So come on, explain how someone can route across the internet to just your RFC1918 addressing when billions of others are using the same RFC1918 addressing. I'm really looking forward to your explanation.
-
Monday 13th March 2017 20:46 GMT Nanashi
Re: End to end is a myth
> So come on, explain how someone can route across the internet to just your rfc 1918 addressing when billions of others are using the same rfc 1918 addressing. Im really looking forward to your explanation.
Did you read any of the other posts here? This makes it seem rather like you didn't.
-
Tuesday 14th March 2017 07:52 GMT Blotto
Re: End to end is a myth
@nanashi
Yes I did, clearly you didn't.
From charles9
"PS. ISP's aren't SUPPOSED to route RFC1918 addresses, but many still do. If you take a very close look at a connection log, you'll probably run into some of them at some point."
I've asked Charles to explain what he means. If you read his post, then mine, you'd understand, but clearly you've not read either.
-
-
-
-
-
-
-
Tuesday 14th March 2017 09:44 GMT Roland6
Re: End to end is a myth
@Charles 9 - Here's a VERY simple test for you. Run a VirtualBox VM on your machine, with network add-ons included. The virtual machine can reach your machine and vice versa...
Well, you've only proved that both the VM and "your machine" are on the same private networking domain, and thus your machine's router will route whatever to whatever. This was very common in the 80s, before we started connecting businesses to the Internet and had to address the problem of address duplication.
For your scenario to have any real meaning you need to separate the two machines by a public network, whose routers should be configured to implement the provisions of RFC1918, namely that private addresses don't get forwarded to public network addresses (unless there is a VPN rule).
Now IF your two machines can communicate via the ISP router, then something else is happening. This might be possible if the ISP has been handing out private IP addresses and both of your machines are connected to the same ISP. IF however they are using different ISPs, then the ISPs and others haven't set up their routers correctly.
-
-
Thursday 16th March 2017 23:30 GMT Blotto
Re: End to end is a myth
If you're worried about law enforcement getting your ISP to route to your NAT'd addressing, you seriously have other issues that a firewall, VPN or whatever else you can think of won't help with. If you don't want an opportunist hacker scanning random IP ranges and deciding to hack your home, NAT will fully protect you against that.
-
Friday 17th March 2017 06:26 GMT Charles 9
Re: End to end is a myth
"If you don't want an opportunist hacker scanning random ip ranges deciding to hack your home NAT will fully protect you against that."
NO, it's the firewall that protects you against that, and that doesn't go away with IPv6. Besides, opportunist hackers know about firewalls and the like and use techniques like drive-bys that rely on the USER initiating the connection, meaning the firewall LETS it through. It also happens to penetrate NAT, thus why it's a key tool of LAN intrusions. Well, that and the fact that, as the comedian said, "You can't fix Stupid."
-
Friday 17th March 2017 09:53 GMT Blotto
Re: End to end is a myth
As Charles 9 said, "You can't fix Stupid."
That is very true. God knows I've tried, but you just won't understand.
You lack an understanding of what NAT is, what routing is and what a firewall is. RFC1918 addresses can't route to anything on the internet without NAT, and the internet can't route to RFC1918 without NAT. If you run NAT on your LAN, someone not directly connected to your LAN can't connect to your internal systems without esoteric cooperation from your ISP, regardless of whether you have a correctly configured firewall. NAT can't connect to a session that does not exist.
-
Friday 17th March 2017 10:17 GMT Charles 9
Re: End to end is a myth
But you don't NEED NAT to connect to a session that DOES exist on the LAN if it can be routed to directly via your ISP through a preconstructed route - one that could be arranged by an insider or by law enforcement. And if the ISP can do that, they can connect that route to the outside via another route.
All without touching the NAT. If it's AT ALL possible, then you have to assume someone WILL use it at some point without your knowledge. Remember, we're one scandal from a DTA world.
Owen Bytheway, I've seen logs with RFC1918 source addresses trying to link up, so don't say they're non-routable. You can't trust all the links on the Internet to obey all the rules.
-
Friday 17th March 2017 16:56 GMT Blotto
Re: End to end is a myth
So you don't understand routing. You route to the destination; when you route, you're not so fussed about the source, you just get the packet to the destination. There is no way to route back to an RFC1918 address across the internet without NAT or a VPN. If you think your ISP is colluding with law enforcement to infiltrate your LAN, you're either stupid or up to no good; either way, a firewall, VPN or encryption won't stop them getting access to your stuff. For everyone else who accepts that the authorities will make life very difficult for you in their pursuit of data, NAT is effective against opportunist hackers happening across your WAN IP.
iOS devices have no firewall, yet we don't hear of them getting hacked across the internet. (Yes, they have a reduced attack surface.)
http://apple.stackexchange.com/questions/48060/does-ios-have-a-firewall
-
Friday 17th March 2017 17:06 GMT Charles 9
Re: End to end is a myth
"There is no way to route back to an rfc 1918 address across the internet without NAT or a vpn."
Then how come I see addresses like 10.0.16.154 in the logs when I don't use a Class-A RFC1918 subnet? SOMETHING must be letting them through.
And as for iOS getting hacked, I hear plenty of stories of them getting hacked. They do it through the APPS.
-
Saturday 18th March 2017 07:48 GMT Blotto
Re: End to end is a myth
Like I've written countless times, it's ROUTING!
The traffic is routed to your destination WAN IP regardless of its source IP. ISPs should drop it, as no one can route back to the RFC1918 addressing; it was more of an issue when bandwidth was low, but now not so much. When troubleshooting and checking logs, it's useful to see that traffic is routing through but the NAT isn't configured properly: the destination will send the SYN-ACK, but the source will never receive it. Anyway, all the destinations in your logs are your WAN IP; none will be your LAN IPs, as the internet can't route to RFC1918.
You've obviously got some deep-rooted, wrong, paranoid ideas about how this works. Maybe learn how basic routing works, and especially the basic mechanics of routing.
-
Sunday 2nd April 2017 12:38 GMT Charles 9
Re: End to end is a myth
"The traffic is routed to your destination wan ip regardless of its source ip. ISP's should drop it as no one can route back to the rfc 1918 addressing. It was more of an issue when bandwidth was low but now not so much. When troubleshooting and checking logs It's useful to know traffic is routing through but the NAT isn't configured properly."
But they DON'T. That's what I'm saying. Otherwise I should not be seeing a 24-bit-block RFC1918 source address (first octet 10) in my logs - since, as you say, they should not be routed or routable - given that I use the 16-bit-block RFC1918 subnet (192.168). Which means something's amiss here, and basic knowledge of routing isn't going to do squat when nothing's ever that neat and simple.
IOW PS. It isn't paranoia if they really ARE out to get you.
-
Wednesday 5th April 2017 22:18 GMT Blotto
Re: End to end is a myth
As I've written plenty of times, you need to understand routing in order to understand this.
https://tools.ietf.org/html/rfc1918
Page 5:
"Because private addresses have no global meaning, routing information about private networks shall not be propagated on inter-enterprise links, and packets with private source or destination addresses should not be forwarded across such links. Routers in networks not using private address space, especially those of Internet service providers, are expected to be configured to reject (filter out) routing information about private networks. If such a router receives such information the rejection shall not be treated as a routing protocol error."
Most routing is destination routing and doesn't look at the source IP. And while ISPs should drop traffic with RFC1918 source IPs, that's a SHOULD rather than a MUST. If you had a basic understanding of routing, you'd understand why you see RFC1918 addresses hitting your public IP, regardless of what the ISPs should or shouldn't be doing.
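That SHOULD-rather-than-MUST filtering amounts to a simple source-address check against the RFC1918 blocks. A BCP38-style ingress filter could be sketched roughly like this (the log entries are invented for the example):

```python
import ipaddress

# The three RFC1918 private blocks an ISP edge SHOULD filter as sources.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def should_drop_source(src: str) -> bool:
    """True if a packet with this source should be dropped at the ISP edge."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in RFC1918)

# Invented log entries: the private sources "should" have been filtered
# upstream, but since the RFC only says SHOULD, some leak through and
# turn up in logs against your public WAN IP.
for src in ["10.0.16.154", "172.20.1.9", "203.0.113.7"]:
    print(src, "drop" if should_drop_source(src) else "pass")
```

This is exactly why a private *source* address can appear in a WAN-side log even though nothing can route a reply *back* to it.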
-
-
Sunday 5th March 2017 03:28 GMT Kiwi
Re: End to end is a myth
But without true end-to-end connectivity, you necessarily limit the abilities of many Internet users, preventing things like home-hosted servers.
So the webservers, the email server, and the cloud server running in my mates wardrobe are all not running at all because they're all behind NAT? Bugger. And here was me and the Owncloud client software etc etc etc etc all thinking it could connect to it all happily.
My old office we had everything behind NAT, a single IP off a single VDSL (best available in our shopping mall at the time). Had our email and web servers, another Owncloud server, often we had machines running various tasks over a weekend where we set up Teamviewer so we could check on progress (I think while most weekends we had everything off, one weekend we had 5 or 6 TV clients running). All sitting on one IPV4 ip behind NAT.
I've set up a number of home-based servers for various people over the years - nothing as extensive as Mr Pott's article covers (but nice to now know a decent guide as to how :) ), but many consisting of more than one machine, for all sorts of things; some for short-term jobs and some still running several years later. Never once run into any issues with them behind NAT, and I have always found NAT damned easy to work with. Never seen a need for this mythical "end-to-end connectivity" which, thanks to load balancing and CDNs, means that for a lot of services you might think you're connecting to a server in your hometown but really you're connecting to a server on the other side of the world. I can see that some devs assume everyone will always want to use the same port they think is best, so they create the security risk of making their software only use a specific port (imagine the security fun if you couldn't move SSH off port 22!).
NAT makes life easier, especially for people who aren't System or Network admins but wind up doing a lot of that work anyway.
And I'm not entirely sure that IPv6 will help much. The computer I am typing this on visits a lot of addresses on different ISPs. I could give it a couple of dozen different IP6 addresses depending on which house I happen to be visiting at the time, or it could sit behind their routers, behind NAT, and talk to whatever it wants to anyway.
-
-
-
Saturday 4th March 2017 02:43 GMT Evil Auntie
How about Debian & Ubuntu? btw: The REAL benefit...
Everything works a little differently in Debian-based distributions, and not all of us are familiar with Red Hat (any more, in my case - I used Red Hat from about 20 years ago until Ubuntu came along). It takes a lot of skull sweat to puzzle through translating stuff from distribution to distribution, and few of us have that kind of time any more. So how about a HowTo for us...
I run a multidomain colo server with about 40 domains piggybacked onto one Ubuntu box with one IP address. Without this kind of trick up your sleeve, it is not possible to provide HTTPS services to each and every domain without the browser coming up with the annoying GET ME OUT OF HERE message when the SSL cert doesn't match the domain name - even though it might match the IP address.
-
This post has been deleted by its author