The security defenses available to us are clumsy and inadequate. Anti-malware applications are grand at dealing with well-known threats, but pathetic and worthless at dealing with emerging ones. Software vendors are too entrenched in politics, feasibility studies and bad attempts at public relations to bother to properly and …
Agreed. When the internet was an academic tool, there was distributed authoritative control.
In the business world, registration and legislation of standards is normal experience.
The public Internet is now just another business and should be treated as such.
Given that so many backbone providers charge by the packet, they don't care that the end user drops 75% of the traffic sent to them. The user is charged for waste and rubbish.
Routine standards of quality control need to apply. Would you accept milk if 75% of cartons were diluted or poisoned, merely because you have some belief in open opportunity of supply?
In Oz the ANU net support staff were whitelisting long ago. Whitelisting still suffers when a normally reliable source is compromised, so OSes and apps need to be built with security as a primary design goal. Stuff this wave of idiot-friendly "just works" IT. Make it hard to make insecure, even if it is hard to use. Computers are the most complex machines humans build. Why should they be simple?
"Whitelisting still suffers when a normally reliable source is compromised, so OS and apps need to be built with security as a primary design goal. "
Well, that rather makes the whole theory unworkable. Are you also going to implement a department for policing software producers big and small? Or are you going to enforce complete control of the software writing and signing process - a kind of huge, global software marketplace along the lines of mobile phone app shops?
It is all very well and utopian - but it is viewed from the end user's perspective, not from the perspective of providers of services and software, of a free and open market economy, and of low-barrier-of-entry competition.
And one more thing. With freedom comes cost. The web is as free as it will probably ever get - and the malware, security threats, spam and the rest of the nuisance is the price. I'm not sure that turning it into a global dictatorship regime is exactly the answer.
How about "free speech"?
"In the business world, registration and legislation of standards is normal experience."
Yes, and that means the corporate standard is Microsoft, and you may not even suggest anything else, no matter how broken it is, because the standard is defined somewhere so high up that no knowledge is needed. Like any "business standard". Normal, yes, but brain-dead also.
"The public Internet is now just another business and should be treated as such."
No it's not: it's basically a public medium, not a business, and trying to treat it as a business is blatant censorship and an attempt to curtail people's right to free speech. There is no such thing as free speech in the corporate world.
Simply but eloquently put; I think you are right on the money with this post.
The snag with whitelists is that they assume you know who you want to be in contact with, whether it is for email, P2P or anything else. Of course, that is typically not the case. The whole point of the Internet for most users is the freedom to communicate with any other user, regardless of their location and whether or not you know them.
The idea of authentication mechanisms run by government bodies might appeal to the governments themselves but probably not to many of their citizens. Didn't the last UK government have some idea of organising everyone's keys for email authentication and encryption? I seem to remember this idea was about as popular as a fart in a spacesuit.
off his rocker
There is nobody fit to certify who is real, trustworthy or likely to leave their credentials lying about for abuse.
His central control would give only the illusion of safety.
Right now these sorts of decisions don't need to be delegated to government, but we can each choose. I can accept mozilla's set of CA's (or not) as I choose.
The guy complains at the failure of non-governments to mitigate the threats - nonsense; they have been doing so. He is really stating that the threats have not been mitigated to an arbitrary, non-specified level - but that's a claim that can always be made, and it can't be used to demand any specific degree of intervention.
This statement is meaningless:
"Basic protocols such as email don’t inherently contain a way to verify that the sender is legitimate. "
The legitimacy of a sender is subjective to the recipient; this guy is just disappointed because the system doesn't do what he thought it did all these years (because that's impossible), and so he now wants all pharaoh's magicians (I mean government technology advisers) to make it work how he thought it did.
The guy just pushes back the problem further from the people who need the solution:
"The day has come to start building confidence ranking into the DNS system itself. "
We need to start building confidence rankings of the confidence rankers - but wait - we already have that! I can use blacklists - or not - as I see fit. I can use different DNS roots if I see fit. I don't need someone to publish their measure of my confidence.
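The DNSBL mechanism being alluded to is simple enough to sketch. The convention used by real blacklists (the zone below is just one well-known example) is to reverse the IPv4 octets, append the list's zone, and do an ordinary DNS lookup - any answer record means "listed":

```python
def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the DNS name to look up for an IPv4 address in a DNSBL zone.

    DNSBLs follow a simple convention: reverse the octets of the address
    and append the list's zone name. An A-record answer means "listed".
    """
    octets = ip.split(".")
    if len(octets) != 4 or not all(o.isdigit() and 0 <= int(o) <= 255 for o in octets):
        raise ValueError(f"not an IPv4 address: {ip!r}")
    return ".".join(reversed(octets)) + "." + zone

print(dnsbl_query_name("203.0.113.7"))  # 7.113.0.203.zen.spamhaus.org
```

Which zones to consult, if any, stays entirely with the operator - which is exactly the point being made above.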
However signed identities with whitelists is a good idea
but it can't be forced.
But there are also major problems inherent in this setup.
Let's say I have a company that relies on email 100%.
The only way for a spammer to get spam out is to use a domain on said whitelist.
They steal the domain or attack the computers in its offices. Now the legitimate company has been compromised too, and the hackers/spammers would also try to appropriate any of its email lists.
Then the legitimate company is knocked off the whitelist for spamming, cannot do business, and loses X amount of money.
Or: I get a newsletter from Company A.
I click "spam" because I can't be fucked to go through their unsubscribe option (normally hidden), and said company gets blacklisted because people are lazy.
A step (or more) backwards.
There are some vital points missing from the above article. The approach suggested - of having centrally managed whitelists, approval procedures and overseeing bodies - is nothing new. It is actually how society has pretty much been organised for hundreds of years. One would need approval from several regulatory bodies to start a business or a radio station, to get married, to buy a house, to build a house and so on. The whole point of the Internet was its revolutionary approach: its lack of red tape, the ease with which, with few resources, one could get almost anything done, its fragmented structure - which resulted in great freedom to innovate, but also amazing efficiency at extremely low cost. Turning back the pages of history and modelling the Internet on the patterns of the real world might just reduce the security threats, but it would also remove the main advantages of the Internet - it would effectively eliminate what the Internet has become to all of us.
When suggesting things such as DNS whitelists, email domain whitelists, or peer-to-peer whitelists which cover the Internet globally (otherwise it wouldn't make much sense), it is worth taking a step back and glancing at the scale of it all. The bureaucracy and organisational complexity of such a system would be overwhelming. What about all the departments involved, including dispute resolution and legal departments? Somebody will have to carefully analyse the case for each domain that is bumped off - otherwise justice will not be served. Somebody will have to investigate each claim and counter-claim. What about complying with laws? In some countries the activity of certain domains would be legal, while in others it would be illegal. Would that imply separate whitelists for each country? Then, which countries are going to be allowed at the discussion table? There will be the need for another department and committee to decide who the friendly and unfriendly nations are. Then you will need another department on top of that, to deal with the necessary PR effort, when you get attacked from all directions by lobby groups for various industries, for human rights, for animal rights, etc. Any domain that is banned can potentially attract public and corporate pressure.
Well, you can see how it could all go on and on. And somebody would have to fund it. The reason the Internet's foundations have so little security built into them is, to a great extent, also what made the Internet so successful ever since it went public: its focus on utility, simplicity and efficiency. An extremely resilient, amazingly flexible and cheap (compared to most other options) environment for connecting people and resources.
The Internet - a Worldwide Network
Now let's cut it into pieces!
Rearrange these words into a well-known saying: baby bathwater throw don't the the with out
If the lads and lassies in Intel are serious about Services this is what they should be doing. $7 Billion would be loads spare to spend doing it well.
Rather than buying a "Close the stable door after horse bolted" Marketing company.
Now if anyone can figure how to make this idea work...
Splendid though the ideals represented in your article are, I can't help feeling (being an eternal pessimist) that such "whitelists" would eventually become subject to abuse themselves and start to turn grey, as it were. You might be able to clean all the bad apples out of the barrel, but then some of the good apples that are left could turn bad. The whitelist idea could have considerable value attached to it, and that in itself will be enough to attract the attention of some bad apples looking for a new home.
It all comes down to trust and human nature, I guess.
I'm an optimist...
I think it would probably take ~2 years from the time when most people are using the Whitelists to the time when the Whitelists become so dirty they are unusable.
(A pessimist would say everything would look fine, until just after we've spent all the money putting the infrastructure into place, then it collapses)
Remember, the bad guys are intelligent, too.
Mine's the one with the half-full bottle in the pocket.
P2P? Rely? Free software?
> I rely on peer-to-peer to get access to things like Linux ISOs that are vital for my work.
Erm, why? Free software has a massive network of perfectly good mirrors providing at least http and ftp protocols, if not also additional protocols such as rsync. And you can easily whitelist these since they're usually on fixed domains, if not fixed IP (range)s.
The only reason P2P exists is also the reason that prevents you from whitelisting it (easily): the users don't want anyone else to know what's in it, where it's going or where it came from (I realise it isn't totally anonymous in its present form, but you get the point).
It's an article from a *specific* POV
And *from* that POV this approach seems to be a pretty good and simple idea.
My old history teacher warned me to beware of simple solutions to complex problems.
Who would want to run such whitelists?
Who would *others* want to run them?
Who *might* run them if no one else committed the resources or effort?
Google already offer a DNS service. Are you thinking what I'm thinking?
Your URL's reputation in the hands of that nice Mr Brin. What could go wrong?
Whitelisting is all well and good
but all that will happen is the miscreants will make compromising them their highest priority, and then we're back to square one
Slippery slope to total lock down
Whitelists are a lovely idea, but if they ever become the norm, how long will it be before said whitelists are forced on us users? I can see the ISPs jumping on it as soon as they think they can get away with it. Join BT and you'll get a lovely white internet, none of that black internet where all the nasty stuff lurks. And once the ISPs start using whitelists, you can bet that the only people with any real level of control over what counts as white will be the biggest companies (MPAA, RIAA) and governments.
The article states everyone should still have access to the raw internet, yeah right, I can just see the powers that be leaving us with that power. "What do you want with the nasty black internet? Nothing legal on there sonny".
Whitelisting the whole of the internet is the first step on the way to somewhere I don't want to visit.
So you want everyone to implement a global/national firewall. I can't see how that could possibly be abused by anybody - not even China or Iran or Australia or .......
"A series of central registries with whom operators of legitimate email servers can (freely) register is the only way to make spam go away. If you are caught spamming, you fall off the planetary whitelist and getting back on should not be easy."
Ha, that's a good one. You mean, a bit like how you can't host services on a domain without registering it first, and if you do nasty things in that domain then you get kicked off? Well, let's look at how well that works with domain registries ... nope, not very well. Can we expect any mail registry to be any better?
Actually, someone did come up with an alternative email protocol that would shift the economics of the problem such that spam would cease to be economic in general. In spite of there being a proposal on the table that would fix several problems at once, all we've had for a decade is people trying to put ever bigger sticking plasters on the fundamentally broken and unfixable SMTP protocol. Some of these attempts to fix the problems create bigger problems by breaking legitimate uses.
Try http://cr.yp.to/im2000.html and http://homepages.tesco.net/J.deBoynePollard/Proposals/IM2000/
By the time you've read those, and a few other things, you'll realise that the mail registry suggested is no less broken than other anti-spam suggestions like SPF. If I register my one outbound mail server for mail from my domain, then what about mail I send via mailing lists? Guess what: it's now classed as spam, as it didn't come from my server but from someone else's list server. The answer to that breakage is, apparently, for every mailing list server in the world to be modified to work around the problem - in a way that can also be used by spammers to work around the restriction too!
IM2000 sort of re-creates the same problem... OK, the sender's ISP (or company? I run my own mailservers, I want my ISP to provide a pipe, not a bucket) stores the message, but, "The sender's ISP can periodically retransmit notifications until it sees confirmation." So the bloody spammer keeps bugging me until I've acknowledged the message! Then they bug me about their next bloody message!
Storage and bandwidth consumption are no longer the worst thing about spam (sorry to anyone still on dial-up, hope you catch up soon). The problem is the number of messages I need to check to see if they are useful, and IM2000 does not solve that problem. My time is valuable (at least, I tell my customers that). In fact, it makes it a bit worse: if my server stores the message, it can check the content as it receives it, and "store" it in the bit-bucket, if appropriate. >90% of my incoming messages are classified like that. With just a notification, my server does not have the information to make that decision.
Hope the next idea is better.
> other anti-spam suggestions like SPF
SPF is *NOT* an anti-spam measure. It is an anti-forgery measure.
The correlation of forged emails with spam means that there is usually a coincidental reduction in spam when SPF filtering is introduced - but that is not the purpose of the technology, and making claims about how effective SPF is at stopping spam merely demonstrates a lack of understanding.
> If I register my one outbound mail server for mail from my domain, then what about mail I send via mailing lists
If you - as the domain owner - make a positive statement that any mail not sent from your server is a forgery, then any recipient receiving mail allegedly from your domain but not from your server can only go by the statement you have made - that it is a forgery.
There are many ways to permit sending mail through other servers, but you can hardly blame recipients for taking you at your word. If you don't mean it - don't say it.
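The "take you at your word" semantics can be illustrated with a deliberately minimal SPF evaluator. This is a sketch, not RFC 7208: it handles only `ip4:` mechanisms and the trailing `all`, and ignores `a:`, `mx:`, `include:`, macros and redirects:

```python
import ipaddress

def check_spf_ip4(record: str, sender_ip: str) -> str:
    """Evaluate a (simplified) SPF record against a sender's IP.

    Illustrative only: real SPF records also use a:, mx:, include:
    and other mechanisms not handled here.
    """
    QUALIFIERS = {"+": "pass", "-": "fail", "~": "softfail", "?": "neutral"}
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split()[1:]:          # skip the leading "v=spf1"
        qual = "+"                           # default qualifier is "+"
        if term[0] in QUALIFIERS:
            qual, term = term[0], term[1:]
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return QUALIFIERS[qual]
        elif term == "all":                  # catch-all: what you said about everyone else
            return QUALIFIERS[qual]
    return "neutral"

# "Any mail not from 192.0.2.0/24 is a forgery" - the positive statement above:
record = "v=spf1 ip4:192.0.2.0/24 -all"
print(check_spf_ip4(record, "192.0.2.10"))    # pass
print(check_spf_ip4(record, "198.51.100.9"))  # fail
```

The mailing-list problem falls straight out of the `-all` term: the list server's IP isn't in the published range, so by the domain owner's own declaration the message is a forgery.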
You've got a network-effect problem with whitelists, in that no one is going to be interested until it's all-inclusive, and it's not going to be all-inclusive until everyone is interested.
This is why people use blacklists, because by default anyone can send you email. Like potential customers, for instance.
Couldn't get past third paragraph
> The internet was built on the presumption of innocence. Basic protocols such as email don’t inherently contain a way to verify that the sender is legitimate. We all know how well that has worked out. Peer-to-peer protocols have many legitimate uses, but their nature lends them to illegal uses and so the vast majority of peer-to-peer traffic infringes copyright.
What? Have you just mixed two completely different meanings of "innocence"?
Censorship / Prior Restraint
The establishment of "whitelists" will rapidly lead to government controlled censorship in many countries.
In any case, in the USA a requirement to obtain approval from a governmental regulatory body before publishing some information via the internet would likely be ruled unconstitutional as a form of "prior restraint".
What about startups....
If you're only going to trust traffic which has made it onto a whitelist, how are you ever going to introduce new content and services? If no one is able to access a new service to discover it, then it can never be found trustworthy, so it will never make it onto the whitelist. The loss of the presumption of innocence looks like the rise of corporate/political online censorship and the true end of net neutrality!
I begin to wonder how many folks finish articles before commenting.
I’m guilty of it myself sometimes: jumping straight into the comments section without really being thorough on the reading part. There are a couple of things I’d like to point out to all the folks who are Heap Big Angry at the ideas in the article.
The first is that I in no way believe that whitelists should be /mandatory/. I think they should be something that folks have the choice of opting into or not, as they see fit.
The second is that in some situations whitelists really do make sense. A great example being business internet usage. Businesses have very little reason to communicate with a lot of the dangerous, offensive or even borderline offensive sites out there. A global whitelist or five, run by companies who take the time to hunt down and verify the businesses behind the websites would go a long way towards separating the signal from the noise.
The third thing I’d like to bring to everyone’s attention is that this is suggested not as “the ultimate solution,” but rather as a replacement for the blacklists of domains currently used as part of any proper defense in depth. Yes, whitelisted sites can be compromised (see Apple), but that is where the other elements of your defense in depth are (hopefully) going to save you.
The goal is to minimize your exposure to compromised sites by only dealing with websites that meet whatever arbitrary standard defines the whitelist to which you are subscribing. In my perfect world, there would be several whitelist providers, all with different standards in order to meet the differing needs of the corporations and individuals who would like to subscribe to them.
Lastly, I’d like to talk about the censorship bit. Properly run, with very stringent standards set outright at the creation of the whitelist, and rigidly adhered to, it should be possible for anyone who feels they have been improperly left off of a whitelist to add themselves. If the whitelist explicitly states that they will not be adding porn sites to the list, then I am sure Bob’s BDSM website isn’t going to get on the list. That new toilet paper cleaning company trying to make a name for itself probably could get verified and added to the list.
Despite the anger, the beginnings of this process already exist. There are initiatives out there to certify websites, from various categories of /very/ thoroughly checked SSL certificates to “site seals” provided by various organizations who do the hard work of verifying the legal existence of the individuals or corporations behind the registration and operation of a domain.
Where this falls down is that, firstly, no big push has been made to increase the number of websites participating in these ventures; very few sites are part of such programmes today.
Secondly, there is no way (currently) through either a browser plugin or firewall addition to limit yourself to viewing only websites which have passed muster at one of these verification organizations.
What I would like to see corporately is exactly that: if your website has passed muster with selected “site seal” checking organizations, we’ll let our users view it. If not, we’ll dump the user on a landing page that says “the website you want to view has not been certified and is potentially malicious.” It would then allow the user to click through, but would by default disable all scripts etc. from that domain.
Easy, pain-free browsing whereby we generally “trust” certified websites, and many warnings and default total distrust of websites that haven’t been checked out. Think of it sort of like noscript meets web of trust meets malwaredomains.com implemented as an opt-in right from the corporate firewall.
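The opt-in, warn-don't-block policy described above could be sketched as a tiny classification function that such a (hypothetical) filtering proxy might apply to each request; the domain names are placeholders:

```python
def classify_request(host: str, whitelist: set) -> str:
    """Decide how a (hypothetical) filtering proxy treats a request:
    'allow' for whitelisted domains and their subdomains, otherwise
    'warn' - i.e. show the landing page but let the user click through,
    with scripts from that domain disabled by default.
    """
    host = host.lower().rstrip(".")
    for domain in whitelist:
        if host == domain or host.endswith("." + domain):
            return "allow"
    return "warn"

trusted = {"example.com", "bbc.co.uk"}              # placeholder whitelist
print(classify_request("www.example.com", trusted))  # allow
print(classify_request("evil.example.net", trusted)) # warn
```

Note the default is "warn", never "block" - keeping it an inconvenience rather than censorship, per the argument above.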
I am sure that looks like censorship to some, but don’t forget that censorship is something forced on people. What I am suggesting is not.
bloody well said Trevor
The shipload of can't-do whining about what will go wrong and how we're all stuffed so let's not bother - pathetic.
Hint to complainers - it's a partial solution which needs many layers, some working better for some users (ie. businesses) than others (ie. individuals). All can be compromised & abused. If you want your uncensored web[*], work for it, don't whinge here and assume you've done your bit.
And BTW lack of censorship is just great until it's (ab)used for child abuse piccies & calls to racist violence then Something Must Be Done! (by someone else, natch).
It will take time, hard work (not by some of you lot though), tradeoffs, and mistakes will inevitably be made. That's how progress occurs.
Jesus, the sheer bloody 0-to-60 kneejerk crapness of some posts, I don't know how our ms Bee copes, I'd be on a gin drip after a fortnight.
[*] or anything else.
Heard of Websense, have you?
I thought whitelisting was the basis of their service.
Looking at the article in depth...
The article makes reference to BOTH corporations and individuals. I agree that what a corporation does is its own business, and that such a whitelist would be in its best interests in terms of limiting risk exposure (and thus protecting its trade secrets, confidential assets, etc.).
But the article also delves into the Internet in general and to systems that actually rely on openness to function to their fullest. E-mail for example. Sure, the blacklist system for e-mail is a mess, but it's also the most practical way to avoid false positives, especially with newcomers (and domain registration is going up with new TLDs and so on). If you whitelist e-mail, you risk losing contact with innocent people or (worse) potential clients that happen to be in a domain that hasn't yet been fully vetted (perhaps because they're a startup with a newly-registered domain).
As for the DNS reputation system, I get a bad feeling that this system will make itself a target for miscreants to game the system the way Google ads and eBay reputations can themselves be gamed: mostly by the reputation's own mechanisms (thus making them impossible to completely scrub).
Thanks, Trevor ...
... for making me think why I would not want such a scheme. Most of them have been put by previous commenters, so I'm not going to elaborate.
I fail to see why anyone should even want to apply the system you apply, unless, as you say at the top of your article, that someone needs to be seen to be doing something so that governments don't get jumpier than they are.
To each their own. I have yet to find the "one true solution" that solves all security problems. Educating users is a lost cause, and frankly if we were doing our jobs well you shouldn't have to know so bloody much about computers to use them.
For that matter, I would personally like to be able to browse the internet on my computer at home without constantly having to be vigilant about everything all the time. I work with computers all day long at work, when I go home the very last thing I want to do is fix another one. I don’t even want to have to exert the brainpower to ponder if the link I am clicking on is going to blow up my PC or not.
Like so many other “users,” I just want the bloody thing to work. This is why I like things like blacklists and whitelists, even though they might prevent me from accessing the wider web. So long as I can turn the thing off when I want, it offers a nice comfortable cushion from which I can just get on with the business of using my computer instead of constantly fretting about HOW I am using it.
It is for the same reason that I drive an automatic, not a standard. Similarly, I use a coffee pot that requires minimal effort to produce decent coffee as opposed to crushing the beans with a mortar and pestle and using an open fire and percolator.
It’s the oldest argument in all of IT. Convenience versus the requirement to understand everything about the system you are using. I accept that no matter how godlike systems administrators think they are, we are never going to /force/ users to understand the fundamentals of their computers, the internet or anything else IT related. No more than car mechanics are going to force the entire population to fundamentally understand fuel injection, ABS braking systems or traction control. Regardless of how easy and important that knowledge might be.
Instead, the computer industry should be putting its efforts into protecting those who can’t or won’t learn better, whilst leaving the door open to the hobbyists who wish to go further than is ordinary.
I’ve heard the argument that knowing all the deep nigglies about internet security is no different than having to learn road signs to drive on the road, and I don’t buy it. Those road signs are largely informative, and designed to be so. They aren’t cryptic and forcing you to guess at what might or might not be true, what may or may not be safe to obey.
Should you need a driver’s license to use the internet, or should we as an industry work towards making technology in general as easy to use as a television? The answer will depend on your point of view and your contempt for your fellow human beings.
Personally, I can say that most of the time, I just want the bloody things to work.
It makes sense, right? After all, if you're paranoid you tend to meet the right people... eventually.
Black and White?
I think a whitelist is probably a good idea, but what about the size of the fecker and the server processing of said list! I'd hope the blacklist is in an order-of-magnitude smaller category! Cheers
Your assumption would be incorrect. For quite some time now, the number of malicious domains being registered has /far/ outpaced the number of legitimate ones. A true whitelist-versus-blacklist comparison five years from now would likely find the whitelist an order of magnitude smaller than the blacklist.
It's one of the reasons I vote for the whitelist approach: there's simply no reasonable way to keep up with all the malicious or illegal traffic that's out there. We have reached the point where it's actually /less effort/ to vet every website in existence than to try heuristics, pre-scanning, or educating users about the quite literally THOUSANDS of different kinds of threats from MILLIONS of different malicious domains.
If there were only a few thousand, or even a few hundred thousand “bad” domains then a blacklist would make perfect sense. As it stands, even if you subscribe to myriad blacklists simultaneously you are exceptionally lucky if you get 75% coverage. The defences we have – from anti-malware and blacklists to heuristics – are inadequate. They aren’t coping with the onslaught, and if we don’t come up with a different approach they will fail.
That’s not an attempt to be a doomsayer on my part; it’s an observation based on decades of experience. We aren’t winning this war. In fact, despite all the advances in technology, I would have to say that we are /less/ secure than we were ten years ago. While we are no longer vulnerable to the threats that existed then, there are at least an order of magnitude more new and interesting threats today.
Many of them aren’t even technological. They are social engineering traps: phishing scams, password scams, or even things that trap you into taking an endless series of surveys to pump some scammer’s numbers with a dodgy advertising company.
I believe we honestly are at the point that it is less effort to identify the legitimate traffic and discard all the rest. Even if building that list has to be done one site at a time.
Freedom through guilt?
I don't agree with lumping corporate interests in with mine. It also looks to me like the author is using a completely different internet than I've come to know.
"The security defenses available to us are clumsy and inadequate."
What are you talking about? I've been using the client side of the internet since the beginning and I'm not having any problems with security. You must be doing something wrong and assuming that other people are too. Email works just fine. I have several different mail services and the amount of spam I get is almost unmeasurable - and thanks to the professional server management which we've come to expect nowadays, I get all my mail too. I can only assume that someone who has a problem with spam either runs their own amateur server or made a particularly bad choice of service provider. Yes spam is a huge percentage of e-mail traffic, but it is not a huge amount of bandwidth. It is just not a problem any more. As for security; one does not have to be a professional to run a secure computer nowadays, even with MS-Windows, but if you're having a problem, why not find an easier to manage OS while you still have the freedom.
"If we, as corporations and individuals, want the internet to remain free and open as it is today, then we have to solve these problems before the governments of the world try to do it for us."
I agree that it would be bad to have too much government interference, but you make a mistake in lumping together corporations and individuals. Corporate and government interests are similar, and both run counter to the freedom of the individual. There are not a lot of problems that need to be solved as far as internet freedom for the individual is concerned. However, I agree that there is indeed a threat of governments and corporations taking away the freedom that the individual has now.
"It could be that the only way to preserve the freedom of the internet is to do away with the presumption of innocence."
No it couldn't be. That way of thinking has been tried many times by governments in the past. It has always ended in disaster. Let's not go there.
Until someone hacks your whitelists.
Whatever idea you come up with, someone comes up with a counter plan, that will foil it completely.
Maybe we should stop trying to control the uncontrollable. The internet is like the weather; it follows chaos theory. My guess is that at some point any process becomes so complex that it crosses the threshold and becomes a chaotic system. The internet passed that threshold long ago.
Yet another central bureaucracy for nothing
" A series of central registries with whom operators of legitimate email servers can (freely) register is the only way to make spam go away"
This isn't anything new. But, as shown earlier, the logic is faulty on several levels.
Tell us, how are you supposed to get mail from someone you don't know when only whitelisted domains are allowed? What was the _primary_ function of the internet again?
"Central registry" means the state, i.e. a police force. Would you really trust the police to decide who may send (or receive) email and who may not?
What prevents one of the whitelisted domains from changing ownership, with the new owner being a spammer? Nothing. So it's no stop to spam either.
So essentially you want a central bureaucracy for nothing.
Unworkable and dangerous
Ok, so the worldwide top 500 companies decide to fork out $1m each, and establish a list that they maintain and make available for all to use.
Maybe. Far-fetched, but why not.... The upside is that it is a private list, and private list maintainers don't have to justify themselves to anyone.
That's the only way I could see this taking off.
Any public (aka government) or commercial (aka Norton & co) list would be open to errors, complaints, appeals, suits, etc, so that wouldn't work.
Even this hypothesis is open to abuse: why are www.bhopaltruths.org and www.thebigpetroleumcon.net not accessible? or: yes Mr Bush, we'll see that www.iraqtruths.org is taken care of.
The biggest threat, however, is of course government taking over the idea (for everyone's good, obviously; ask the Aussies) and censoring away. If something can go wrong, it will, and this is ten times likelier with any technology that has big-brother potential.
We have whitelists now
They are built into browsers now, and the entries are called certificate authorities and certificates.
Hands up - how many have looked at the list recently? How is that working out?
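Hardly anyone ever inspects that list. As a rough illustration of how large it is, here is a sketch (assuming Python's standard `ssl` module and your platform's default trust store; the exact counts vary by OS) that tallies the CA certificates your system trusts out of the box:

```python
import ssl

# Load the platform's default trust store, the same one your browser or
# TLS library consults, and count what's in it. The "x509_ca" entry is
# the number of CA certificates trusted implicitly on this machine.
ctx = ssl.create_default_context()
stats = ctx.cert_store_stats()
print(stats)
```

On a typical desktop the CA count runs well past a hundred, which is exactly the point: nobody audits a list that long by hand.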
So your solution is to manually verify every security certificate to ensure it has an adequate level of site verification before browsing? Would never fly. It would require an automated system to look at these certs; one aware of the various categories of certification available from the different CAs. "This level means we grant SSL certs to pretty much anyone." "This one means we verified they are who they say they are." Etc.
Then you need to be able to establish a vetting system: "View these sites and run scripts on them by default." "View these sites without scripts by default." "Redirect any attempt to access these sites to a landing page that informs people of the risks, and allows a one-click bypass."
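The tiered vetting idea above could be sketched roughly like this. The validation levels (DV, OV, EV) are real industry categories, but the policy names, the mapping, and the lookup logic here are entirely hypothetical, invented for illustration:

```python
# Hypothetical mapping from certificate validation level to a default
# browsing policy. DV = domain-validated ("we grant certs to pretty much
# anyone"), OV = organisation-validated, EV = extended validation
# ("we verified they are who they say they are").
POLICY_BY_LEVEL = {
    "DV": "no-scripts",    # viewable, but scripts off by default
    "OV": "scripts-ok",    # identity was checked; run scripts by default
    "EV": "scripts-ok",
}

def browsing_policy(validation_level: str) -> str:
    """Return the default policy for a site; unknown or unvetted
    levels fall through to the warning landing page with a bypass."""
    return POLICY_BY_LEVEL.get(validation_level, "warn-landing-page")

print(browsing_policy("DV"))       # no-scripts
print(browsing_policy("unknown"))  # warn-landing-page
```

The point of the fallback is the "one-click bypass": anything the system cannot classify gets a warning page, not a hard block.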
Again we run up against “well everyone should just know all there is to know about scams, security, and protecting oneself online.” The one question proponents of this approach have yet to answer for me is…why? Why should we expect the average user to know this, and why do we expect they won’t tell us collectively what to go do with ourselves?
People have a right to freely access information. That said, no one has a right to force their information upon me. Thus I believe a well-researched and properly backed set of whitelists has an important role to play in ensuring the Internet "just works."
It's sort of like having an appstore/thoughtpolice/whathaveyou that provides a layer of protection from the baddies... but with the all-important bypass button. The ability to climb the walls of the garden when you choose, or stay safe within its confines at whim. The notion that any attempt to build a whitelist for the Internet is automatically censorship is bogus. I have heard the same claims bandied about as regards blacklists.
The scammers and the zealots claim "you have no right to block sites!" I must return with "you have no right to scam me and mine, infect our machines with malware or otherwise disturb us in our own digital homes." The internet may be a public area, but your PC is like your car. Random people do not have the right to open the door to your car, sit down and attempt to convince you the sky is falling (unless you buy a space in vault 13, which only they can provide). I have the right to lock my doors, and to boot out unwanted guests in my car. That car may be travelling a public road, but within its doors it is my little bubble.
ANYWAYS: break time is over and it’s back to work. I have now officially been awake for 82 hours working on this network migration. I need to go finish a bunch of end user support and then collapse in a heap in the corner.
Whitelisting for spam does not work
There is a boilerplate form you can fill in every time someone proposes a solution to spam. I can't be bothered to dig it up, but simply speaking it will never work, and here's why: your scheme does not allow for zombie networks. By far most spam comes from perfectly legitimate desktop PCs on completely legit ISP networks. Throw in dynamic IPs and you start to see why it can never work.
Maybe with IPv6 this kind of scheme would have had some hope with the built-in MAC address field, but how long until zombie boxes start randomizing it for each spam message? Or perhaps the concept assumed complete cooperation from every ISP, enforcing 24/7 kill switches on every PC connected to the network or facing removal from the whitelist? I'm sure they'll go for that...
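The zombie-network objection can be made concrete with a small sketch. This is an illustration, not a real anti-spam tool: the whitelisted netblock and the addresses are made up (RFC 5737 documentation ranges), but the structural flaw is real, because any address-level whitelist that admits a legitimate ISP's range admits every infected machine inside it too:

```python
import ipaddress

# Suppose the registry whitelists a large, perfectly legitimate ISP
# netblock (hypothetical range for illustration).
WHITELISTED_NETS = [ipaddress.ip_network("198.51.100.0/24")]

def is_whitelisted(sender_ip: str) -> bool:
    """Accept mail if the sending address falls in any whitelisted block."""
    ip = ipaddress.ip_address(sender_ip)
    return any(ip in net for net in WHITELISTED_NETS)

# An honest customer and a malware-infected zombie on the same ISP are
# indistinguishable at this level; both pass the check.
print(is_whitelisted("198.51.100.7"))   # True  (honest user)
print(is_whitelisted("198.51.100.99"))  # True  (zombie on the same ISP)
print(is_whitelisted("203.0.113.5"))    # False (outside the whitelist)
```

Dynamic IP assignment makes it worse: today's clean address can be tomorrow's zombie, so the list is stale the moment it is written.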