Twitter plugs black-box website vuln

Twitter's security team said it has fixed a serious vulnerability on the site that created micro-blogging mayhem on Tuesday. The cross-site scripting flaw created a means for injecting code into updates that activated when users rolled their mouse over a link. Moving a mouse over redacted (blacked out) …


This topic is closed for new posts.

As simple as..

onmouseover in a tweet?

Am I really *really* glad I don't follow any twittering twits..
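For anyone wondering what "onmouseover in a tweet" actually looks like, here is a rough reconstruction of the attack shape in Python (illustrative only, not the exact payload — the URL and the template are made up for the example):

```python
# Illustrative reconstruction of the attack shape (not the exact payload).
# Twitter interpolated tweet text into a link without escaping quotes, so a
# tweet containing a double quote could break out of the href attribute and
# smuggle in its own onmouseover handler.
tweet = 'http://t.co/x" onmouseover="alert(document.cookie)'

# What the vulnerable template effectively produced:
rendered = f'<a href="{tweet}">link</a>'
print(rendered)
# The quote in the tweet closes href early, leaving a live event handler:
# <a href="http://t.co/x" onmouseover="alert(document.cookie)">link</a>
```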



Anonymous Coward

Thing is...

Thing is, this just proves the level of incompetence of the developers, sorry to say.

When dealing with user input, you have two choices: filtering or regurgitation. Filtering means blacklisting what you know is unsafe and assuming everything else is safe; regurgitation means blocking everything to start with and only allowing what you know to be safe.

Thing with filtering is, there's always a way around it. No matter how smart your filters are, there will be some way past them that coincides with something a browser will accept.

The only sane way to handle user content is to sanitise it all (e.g. PHP's htmlspecialchars on *everything*) and then only allow the stuff back from that to real HTML if you know it's safe, so you might then reprocess <b> and </b> tags, plus <i>, <u> etc. - stuff that you can guarantee in that form is safe, and leave anything else effectively neutered.
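A rough sketch of that escape-everything-then-whitelist approach, in Python rather than PHP (`html.escape` standing in for `htmlspecialchars`; the `SAFE_TAGS` list and function name are made up for the example):

```python
import html

# Tags we re-enable, in exactly this bare form, after neutering everything.
SAFE_TAGS = ("b", "i", "u")

def sanitise(user_input: str) -> str:
    # Step 1: escape *everything* (the htmlspecialchars equivalent) so no
    # raw markup, quotes or attributes survive.
    escaped = html.escape(user_input)
    # Step 2: re-enable only the whitelisted tags, in their exact safe form;
    # anything else stays neutered.
    for tag in SAFE_TAGS:
        escaped = escaped.replace(f"&lt;{tag}&gt;", f"<{tag}>")
        escaped = escaped.replace(f"&lt;/{tag}&gt;", f"</{tag}>")
    return escaped

print(sanitise("<b>bold</b> is fine"))
# → <b>bold</b> is fine
print(sanitise('http://x.example/"onmouseover="alert(1)'))
# → http://x.example/&quot;onmouseover=&quot;alert(1)
```

Note that the onmouseover payload comes out with its quotes encoded, so it can never close an attribute and go live.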

Security-conscious developers have known about this for years; I even studied the theory briefly in A-Level Computing a decade ago, and if I can get it right, I'm damn sure they should be able to since they're more highly paid than I am!


Re: Thing is...

It's more than just "sanitisation"; you also need "canonicalization". When data comes in from external (i.e. untrusted) sources, you first need to reduce it to its "normal" form, and only then proceed to sanitise it based on that.

For instance, HTML entities, %hex encoding, ASCII values, Unicode characters, EBCDIC, etc. all need to be translated to the final representation of the data as the application will use it. Only then can you be sure whether you have, say, an angled-bracket or an innocuous letter.

Many a developer has fallen into the trap of assuming that all data will arrive in exactly the same format and encoding, and then used the raw input as the source for sanitisation. Understanding all the various formats in which your input data may be interpreted and normalising it to a single, final interpretation (i.e. its canonical form) *must* be the first step towards data sanitisation.

If at any point the data fails canonicalization by not being reducible to the encoding or format expected by the application, then you know that it's invalid and must be rejected.
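That canonicalisation step can be sketched like so (a hedged illustration, not production code — the function name and round limit are made up): repeatedly decode the common encodings until the input stops changing, and only inspect it once it has reached that fixpoint.

```python
import html
from urllib.parse import unquote

def canonicalise(data: str, max_rounds: int = 10) -> str:
    """Reduce data to its canonical form by decoding until a fixpoint."""
    for _ in range(max_rounds):
        # Strip one layer each of %hex encoding and HTML entities.
        decoded = html.unescape(unquote(data))
        if decoded == data:
            return data  # fixpoint reached: this is the canonical form
        data = decoded
    # Still changing after many rounds: reject as per the comment above.
    raise ValueError("input failed canonicalisation")

# '%26lt%3B' is '&lt;' percent-encoded, which in turn is '<' entity-encoded:
# a naive check on the raw bytes would never see an angle bracket.
print(canonicalise("%26lt%3Bscript%26gt%3B"))
# → <script>
```

Only after this does a check for, say, angle brackets actually mean anything.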



"including the former prime minister's wife, Sarah Brown" How come?

Undoubtedly to her own, and everyone else's, surprise, this woman is married to an Internet expert.

Just roll over lady and give the object in bed with you a kick and then pop the question.


alternative twitter feeds

Well, I guess anyone using the vast array of third-party Twitter clients (Gwibber, TweetDeck) won't be caught out; it's only if you use the main web site, and even then it seems to be the old UI rather than the new one that many users are still to receive. Personally, I only use Twitter for news feeds from Sky and some vendors I need to follow, and as I want it delivered via Jabber (XMPP), I use tweetjid to do the task. So no concerns for me on mouseover, under or any other direction.


Never understood the point of Twitter anyway

Why can't we just use RSS? Does the exact same thing, except it's more secure, provides a larger message length and doesn't sound suspiciously like zwitter. And of course, we already have numerous clients for reading RSS feeds (see all good web browsers and email clients).

You could extend this to all this Web 2.0 bullshit out there.



Biting the hand that feeds IT © 1998–2017