Really, a hosts file with thousands of entries fscks up your (advert-free) experience. Your hosts file is not held in cache (OK, OK, only briefly). Get rid of hosts files. They suck. Move to a DB-driven solution; it speeds things up by a LOT!
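As a sketch of the difference (hypothetical blocklist data; a real setup would be something like a Pi-hole or dnsmasq-style resolver): a flat hosts file means a linear scan per lookup, while an indexed store answers in constant time.

```python
# Hypothetical blocklist entries, one hostname per line as in a hosts file.
blocklist_lines = ["ads.example", "tracker.example", "metrics.example"]

# The hosts-file way: scan every line for every lookup, O(n) each time.
def blocked_linear(host):
    return any(host == line for line in blocklist_lines)

# The "DB driven" way: load once into an indexed structure, O(1) per lookup.
blocked_set = set(blocklist_lines)
def blocked_indexed(host):
    return host in blocked_set

assert blocked_linear("tracker.example") == blocked_indexed("tracker.example")
```

With thousands of entries the per-lookup cost of the linear scan adds up; the indexed lookup stays flat no matter how big the list grows.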
9 posts • joined 14 Jun 2019
[quote]If you want privacy, you'll need to get an up to date copy of the Internet-wide "hosts" file, the name resolution technology used before DNS came along. Good luck with that. There are ways to solve this, but they require a great deal of engineering work not to mention a disruptive and challenging global rollout. Not only do few people have the appetite, but all the large players have strong incentives to make sure it never happens.[/quote]
I can very easily make my DNS servers think they are the top root servers and never have a single request leave my network. But as you stated, it would be a helluva job to make Google or El Reg available to my users without forwarders.
In practice however, doing a full iterative lookup, i.e. going through the root servers for the TLDs and getting referred down for each and every subdomain, is fscking slow, so no client ever does that itself. The vast majority of DNS queries over the Internet are recursive queries to a caching resolver, which chases the referrals from one authoritative zone server to the next on the client's behalf.
And also, after rereading your comment, I think the terminology is easy to get backwards: a recursive query is the "normal" one, where a client asks its configured DNS server for a record and that server takes on the whole job of finding the answer (or forwards to the next "configured" server). It is the resolver that then does iterative queries, starting at the root servers (which know all the TLDs) and trickling down to the respective authoritative servers, unless it already has the answer cached.
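A toy model of the two query styles (made-up zone data, no real network I/O; real resolvers speak UDP/TCP to real servers and cache aggressively): the resolver walks the referral chain iteratively, while the client only ever asks its resolver one recursive question.

```python
# Toy referral tree: each "server" either answers or refers you downward.
ZONES = {
    "root":       {"com.": "tld-com"},                    # root knows the TLD servers
    "tld-com":    {"example.com.": "ns-example"},         # .com refers to example.com's NS
    "ns-example": {"www.example.com.": "93.184.216.34"},  # authoritative answer
}

def iterative_resolve(name):
    """What a resolver does: follow referrals root -> TLD -> authoritative."""
    server, hops = "root", []
    while server:
        hops.append(server)
        data = ZONES[server]
        if name in data and data[name] not in ZONES:
            return data[name], hops                       # final record found
        # otherwise follow the matching referral one level down
        server = next((s for zone, s in data.items() if name.endswith(zone)), None)
    return None, hops

answer, hops = iterative_resolve("www.example.com.")
# The client meanwhile sends ONE recursive query to its resolver and gets
# `answer` back without ever seeing the three-hop referral chain.
```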
To make things clearer (walking through a simple SSL handshake first, then explaining the loophole afterwards):
Alice and Bob want to talk about their secret shit. Alice calls Bob over a BT landline and says: hey Bob, I'm going to send you a message, and to do that I need your public key to encrypt it. (Let's assume there is a spy on this line.) Bob tells Alice his public key, she encrypts the message with it, and only Bob can decrypt it with his own private key. However, the spy on the line also heard Bob's public key, so the spy could think: right, I'll send Bob a message in Alice's name, encrypted with Bob's public key, and Bob will happily decrypt it. So we need some verification. To prove that Alice sent the message, she SIGNS the final message with her own private key. When Bob checks the signature with Alice's public key, he can be sure the message really came from Alice.
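The encrypt-with-the-recipient's-public-key, sign-with-your-own-private-key dance above can be sketched with textbook RSA (tiny made-up primes, utterly insecure, just to show which key is used where):

```python
# Textbook-RSA toy: NOT real crypto, only the key roles from the story.
def make_keys(p, q, e):
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)           # modular inverse (Python 3.8+)
    return (e, n), (d, n)         # (public key, private key)

bob_pub, bob_priv     = make_keys(61, 53, 17)   # made-up primes
alice_pub, alice_priv = make_keys(89, 97, 5)

m = 42                                   # Alice's message (a number, for the toy)
ciphertext = pow(m, *bob_pub)            # encrypt with Bob's PUBLIC key
signature  = pow(m, *alice_priv)         # sign with Alice's PRIVATE key

recovered = pow(ciphertext, *bob_priv)   # only Bob's PRIVATE key decrypts it
assert recovered == m
assert pow(signature, *alice_pub) == m   # anyone can verify with Alice's PUBLIC key
```

The spy can encrypt for Bob (the public key is public), but cannot produce Alice's signature without her private key, which is exactly the gap the signing step closes.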
Now... what this article is implying is as follows. Mozilla (Jasper) says: dude! You are yelling across the room that you want to reach Alice, but instead of you yelling, I could be the one who quietly tells Alice that you want to reach her. However, for Alice to know that I am not lying, I need to re-sign your "shout out" with my own keys. So you get your contact with Alice and all is mighty fine, but you keep exchanging messages through me (Jasper): I decrypt what you (Bob) send, re-encrypt it for Alice, she reads it and sends messages back the same way. All in all, I can still read every message between you (Bob) and Alice.
So... would you rather have people hearing that you, as Bob, want to talk to Alice, with the messages encrypted after the handshake, or move the "Hey Alice" shout-out to a middleman who will see all your messages afterwards?
And how is this implemented? To encrypt and later decrypt DNS requests, there has to be a party in the middle that can 'read' them. So to make this work across all the major vendors, we all need another (root) certificate: our DNS requests get encrypted and sent to the preferred DNS server, decrypted and forwarded there, and the answer is encrypted on the way back and decrypted in the browser, which then connects to the requested website.
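For reference, the bytes being argued about are the same either way; only the transport changes. A minimal sketch of a DNS question in wire format (RFC 1035 layout; with DNS-over-HTTPS, RFC 8484, these same bytes travel inside a TLS-protected HTTP body instead of a cleartext UDP packet):

```python
import struct

def build_query(name, qtype=1, qid=0x1234):
    """Build a DNS question in wire format. qtype 1 = A record."""
    # Header: id, flags (0x0100 = recursion desired), 1 question, 0 answers/etc.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # class 1 = IN

wire = build_query("example.com")
# Over plain port-53 DNS these bytes go out in cleartext; with DoH the
# identical message rides inside an HTTPS POST to the resolver.
```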
Thus, as I read it: we either encrypt the DNS requests and let our favourite browser vendor be a man in the middle that can decrypt _everything_ after that... OR we accept that our DNS requests are unencrypted, while ALL traffic afterwards stays unreadable to ANY other party (as it is now).
I know which one I would choose...
Re: Trusty UPS's...
Sure... an array of 100W bulbs. We all do that to replace a UPS and test the load. Then again, the fire system gets tested quarterly, but have you ever tested those red flasks filled with inert gas at 300 bar? Just because the detectors and the controller are tested, who knows whether it all works at the vents when the signal comes from the main controller? Have you ever run a test on that system to verify that the fire department is actually notified? Ah right, that line was an ISDN line... just being decommissioned by the provider...
My point is: you _could_ do all that, but when is enough, enough? At some point you have to be able to trust your supplier and your testers to tell you all is OK.
“Support for emoji”
What your predecessor meant was that "full UTF-8 support" also implies "support for emoji", since emoji are just Unicode code points and UTF-8 can encode every one of them. So if you talk to the Direct* text APIs you get ClearType rendering, scaling for all kinds of resolutions AND emoji support =D =D
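To illustrate (Python used just as a convenient way to poke at the bytes): an emoji is an ordinary Unicode code point, and UTF-8 encodes it like any other character.

```python
# U+1F600 GRINNING FACE sits outside the Basic Multilingual Plane, so
# UTF-8 spends four bytes on it; "full UTF-8" support means a text API
# must handle such sequences, which is where "support for emoji" comes from.
emoji = "\N{GRINNING FACE}"
encoded = emoji.encode("utf-8")
assert encoded == b"\xf0\x9f\x98\x80"   # four-byte UTF-8 sequence
assert encoded.decode("utf-8") == emoji  # round-trips like any other character
```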
So, not a native English speaker over here, but I read this article as: MS _will_ bring out a new FS (2020), and while we enjoy the 4K trailer we are still assuming MS will reuse the old physics engine, so it _might_ suck anyway. But until more information is released we don't actually have a clue, yet the article pads it out with ifs and maybes. ¿Que?