How can it be HACKED?!?!?
It's open source, how can it _possibly_ have been hacked? :-()
An internet voting system designed to allow District of Columbia residents to cast absentee ballots has been put on hold after computer scientists exploited vulnerabilities that would have allowed them to rig elections and view secret data. The system, which was paid for in part by a $300,000 federal grant, was hijacked just 36 …
examined so well it was hacked with a month to go before elections!
The philosophy of FLOSS is that given enough people and enough time to review, things will get fixed. Great as far as it goes - but it doesn't specify WHEN.
But what about projects and software that ABSOLUTELY have to go live in a short amount of time? When is there the time for FLOSS to be reviewed, discovered, and fixed? And then those fixes themselves reviewed and tested?
Given a short timeframe, all FLOSS seems to do is expose your vulnerabilities to those with the time, inclination, and resources to find them quickly - i.e., hackers. As was provably the case here...
The fact that EVENTUALLY all the vulnerabilities will be discovered and fixed is poor consolation to the millions of voters that may have their votes challenged, thrown out, or even stolen in the election in a month's time. And even if the vulnerabilities are eventually fixed, the system NOW has the perception of being "unsecure" to Joe Voter (no relation to Joe the Plumber). You never get a second chance to make a first impression.
I'm not saying FLOSS isn't a great idea - I use a bunch of it, including Blender 3D. But let's at least examine when it MIGHT have limitations...such as short time-frame deliverables on systems that have to be verified. I would rather have an open source voting system than one owned by "Republican-controlled Diebold" anyday...
"examined so well it was hacked with a month to go before elections!"
Examined so well that it never went live - CONTRARY to the closed-source machines, which went live without examination, were shown after the fact a few years ago to have been skewed, and which irremediably and deeply undermined confidence in the electoral process.
What was your point again?
The voting software which contained the flaw was NOT open source, and therefore it was NOT examined.
It was the serious flaw(s) in this closed-source software that allowed the box to be p0wned.
If this poorly written application had been OSS as well, then it is very likely that the system would not have ended up going live while the flaw(s) were present.
They put up some code and a system and said, "Please look for vulnerabilities."
You think this is bad and that they should have just put it live without testing, kept the code secret (good luck with that), and hoped that nobody found the holes?
I'm not really sure what this argument has to do with open source but whatever - you're an idiot.
> The philosophy of FLOSS is that given enough people and enough
> time to review, things will get fixed.
> Great as far as it goes - because it doesn't specify WHEN.
No. That's not what FLOSS is about.
The advantage of open-source is that your bugs get found. There are far more people looking at the code, so it gets far more testing by people who have a good idea how to stress the code in question.
This says nothing whatsoever about how you go about *fixing* those bugs - it just provides a mechanism for discovering them.
> But what about projects and software that ABSOLUTELY
> have to go live in a short amount of time?
You pay someone to debug the code - just as you would in a closed-source application. Being open-source does not hamper this development flow at all - it just provides additional volunteer testing resources.
> When is there the time for FLOSS to be reviewed, discovered, and fixed?
> And then those fixes themselves reviewed and tested?
Being FLOSS does not preclude exactly the same reviewing and testing that closed-source would require - so the same job can be done. Being FLOSS does get extra testing for free.
The only thing that FLOSS precludes is security through obscurity. For a voting system in particular, that is a very good thing to prevent...
> But let's at least examine when it MIGHT have limitations
Sure. This is not one of them.
> such as short time-frame deliverables on systems that have to be verified.
Short-timescale deliverables are exactly when I would want to go open very early - get people on board and helping out.
You appear to be describing FLOSS as projects with no paid, full-time staff. This is not even close to the truth. FLOSS has the same development resources available as closed-source - but it *also* has unpaid volunteers looking over the code to help find bugs. This is an additional resource, not an alternative.
... that a configuration error was discovered a month before the system went live.
That sounds like a rather straight support for OSS to me. Thank you Robert Hill!
At least with closed source the flaws don't surface until it's far too late. That's good, right?
Just kidding of course, as the current issue has nothing to do with the software -as should be obvious to any non-luser type: it's a classic PEBCAK case. Of course you wouldn't know...
the only thing I could come up with is that the app had been given a user account with full read/write access to the whole DB.
The actual voting app would only need write access to a table storing votes cast and read access to a table of voter IDs (random numbers that another table not accessible to the app could tie to actual voters names and addresses).
You'd have to post that ID to a voter and have him enter it (rather than name & address) when he casts his vote.
At least that would stop a hack on the system revealing voters' ballots.
"Why is it a "Shocker" that a web application would have the username and password of the database into which it is inserting data?"
Because it shouldn't.
The application should be supplying verified text to the databases procedures, which then perform the updates on the database, after re-verifying the information that has been supplied.
The web application really shouldn't have any elevated privileges or login details for anything.
> "Why is it a "Shocker" that a web application would
> have the username and password of the database
> into which it is inserting data?"
> Because it shouldn't.
I don't have a problem with the app having *a* username/password. The problem, AISI, is that the phrase "the username and password" has any meaning...
ACL is known technology. There is no reason to give your web application any more privilege than it needs :-(
If the creators of that system had been advised by just a moderately competent developer/computer scientist they would have known that
Comfort Features != Security
That's because these features are very difficult to understand in their entirety. Ruby is a "dynamic" language intended to save developers' effort so that results can be delivered much faster.
Using SQL is already a dangerous concept, as there might be a ton of bugs to be exploited in the query engine, the query optimizer and all the lower-level stuff like indices.
Simplicity is in general a virtue also because this is what allows engineers to formally verify the correctness of their code.
The whole mindset of Voting System Developers must be driven by "Security First". Pascal or Ada is the right kind of language; simple, formally verifiable hash indices and fixed, file-based record lists are the right data structures.
Trusting "A Patchy webserver" is also quite a treat.
My suggestion is to use inetd, OpenSSL and a single Ada process per HTTPS connection. Parse the HTTPS GET, then the PUT request using a very rigid format. Don't even think about regexp. Calculate address in Voting File from one-time-pad hash. Verify otp, record vote. Say "thanks for voting", terminate process.
Simply pull the network cable at 1800 hours and run the Ada program which does the counting. A simple for-loop, basically. Sign & publish Voting File using GPG.
In addition to that, a series of purpose-built protocol-verifiers written by different people in different, safe languages like Pascal, Java, Eiffel and C# could be used to "defend" the core voting system.
Ada has the major benefit of Verified Compilers being available. Compilers we depend on for correctness when we take off in any modern airliner. SPARK Ada helps developers verify their code mathematically.
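For what it's worth, the counting step described above really is little more than a loop over fixed-width records. A sketch - in Ruby here purely for brevity, not the Ada the post calls for, and with the file name and 8-byte record layout invented for illustration:

```ruby
# Tally a fixed-width Voting File: each record is one 8-byte,
# space-padded candidate code, written append-only by the vote
# recorder. Record size and file name are illustrative assumptions.
RECORD_SIZE = 8

def tally_votes(path)
  counts = Hash.new(0)
  File.open(path, "rb") do |f|
    while (record = f.read(RECORD_SIZE))
      raise "truncated record" if record.bytesize != RECORD_SIZE
      counts[record.rstrip] += 1 # strip the space padding
    end
  end
  counts
end

# Write a tiny sample file and count it.
File.open("votes.dat", "wb") do |f|
  ["CAND_A  ", "CAND_B  ", "CAND_A  "].each { |r| f.write(r) }
end
p tally_votes("votes.dat") # {"CAND_A"=>2, "CAND_B"=>1}
```

An append-only file of fixed-size records keeps the format trivially auditable, which is exactly the property the post is arguing for.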
Using the latest hippie technologies is always the objective when some morons meet to create something for the government. No consideration to reliability, security, safety, availability.
Indeed PL/1 would have been a much better choice than Ruby.
One could even argue they should have used an assembly language, because that would have cut all the compiler-related bugs out of the system.
What matters in this case is the security, availability and reliability aspect. What matter is that you can formally verify correctness. Whether it is hippie-buzzword compatible does not matter at all.
I really don't get it why the gubmint does not simply shell out a few million dollars or Euros for a research project which includes the Best Computer Scientists in the field of Formal Verification and Real-Time System development. They could certainly deliver something better than that. It would still require lots of review by anybody competent and interested, but I am sure it would not be pwned so quickly.
When they spend billions on diseases, they don't give the money to the Moron Pharmacist around the corner, right ? They set up a program which is led by respected academics of that specific field.
Spew on about this language or that server or some process architecture as you wish, but you are quite missing the problem - the implementation of "own goal" code.
None of what was described had anything to do with the tools used. Who *cares* about whether the compiler is 'verified' if idiots are using it? Will the compiler, the language, or whatall you think best hold their hands and type the correct code for them?
Talk about bikeshedding! Please go pick your next breakfast cereal. Should take you long enough so serious work can get done by people who know what a "root cause" looks like.
"The voting application was written on the Ruby on Rails framework and ran on top of the Apache web server and the MySQL database."
I would've chosen Java under the J2EE framework. Why? Because it has so many safeguards in place that pulling something like this would be pretty difficult to do. Ruby on Rails is basically "the new PHP" in the sense that it allows you to make really buggy stuff at a really fast rate. Finding out about the big errors will usually happen when the thing is in production, and everything blows up!
What you need in a voting system is anonymous balloting, making sure every voter can vote exactly once, and a transparent and accountable process. How you achieve that hardly matters, as long as you do achieve it. If we know how to do that with paper, heck, use paper.
So far all "electronic" and "internet" voting things have been solutions to an already-solved problem. With shockingly bad results. Even such things as "hanging chads" can make your fancy solution into a quagmire. Which is exactly what honest balloting committees do not want.
Yes, there would be advantages like convenience and speed. But then again, politics doesn't move that quickly, so there's no real point. Apart from the media frenzy and voters complaining that vote counting doesn't happen instantly. But the latter is easily fixable with a couple of sentences of explanation, and the former we could well do without. Running a country shouldn't be entirely hot-button driven.
Well, it was BASED ON open source software. Whoever wrote the bits that check the submitted files to see if they can't do funky things to your machine was evidently sleeping.
I am convinced that it is possible to code and run a secure web facing service, secure enough that it will be impossible to falsify votes without alerting the people running the system. I'm also convinced that it'd take months or probably even years of coding and testing, using an open source platform, to get it to that point.
If there are unexpected holes in an interpreted application then the host is at fault. All design should accept only valid entry from an approved source.
This started thirty years ago with the demonstration that a buffer overflow could access the system. Since then authors have continued to get things to just work when they show them to the moneybags. Except Microsoft, of course, who give every appearance of doing it on purpose.
You seem to have written this article under the misapprehension that the compromise was an unofficial, 'grey hat' exercise; it wasn't, the authorities explicitly *asked* people to try and discover any vulnerabilities in the system, which was deployed for exactly this kind of testing (among others). As the source post states, "Before opening the system to real voters, D.C. has been holding a test period in which they've invited the public to evaluate the system's security and usability." The use of terms like 'hackers' and 'hijack' gives the misleading impression that this was some kind of rogue compromise.
You also muffed this bit: "DC officials deployed the system even after Common Cause and a group of computer scientists and election-law experts warned city officials that the trial posed an unacceptable security risk that "imperils the overall accuracy of every election on the ballot,” The Washington Post reported."
The group of experts warned that deploying the system *for an actual election* would imperil the accuracy of that election. They didn't have any problem with deploying it in a test configuration, which is what happened here. You're misreading the Washington Post article in stating that the test deployment was against the advice of the experts; it wasn't. A *live* deployment would have been against the advice of experts, but no live deployment happened.
You are obviously right. The people who put the system together are still in need of a good spanking though. That's not even a beginner's mistake, that's dilettantism of the worst kind. Surely "let's try and sneak system commands in the input" is the very first thought of anyone trying to compromise a world-facing system. And surely that's the very first thing any admin worth his salt <db db db db db> any admin at all would consider.
None of the open source components were hacked, right? It was just a matter of unsanitized data input (i.e. poor system design). Actually the system was not even "hacked"; a design flaw was exploited. From a purely technical point of view, the system behaved exactly how it should have (i.e. how it was designed to behave). No software flaw is at fault. Ask Bobby Tables (http://xkcd.com/327/) what he thinks about it.
Nt nt nt. Trolling title if I ever saw one.
> None of the open source components were hacked, right?
Wrong. The application was hacked.
None of the platform was broken - but the designers had seen fit to give enough privilege to the process running the application to do silly things. And the application was so poorly written that that privilege was easily taken by anybody who wanted it.
> So basically you say exactly the same thing as I did
No - I'm saying the exact opposite of what you said.
You said none of the open-source elements was cracked.
I said that one of them was.
I am disagreeing with you.
That's why I responded to your question of "right?" with the answer "wrong".
Judging by the github graphs, this project is only a few weeks old. It absolutely beggars belief that this could be used on a live voting system. This is unbelievable mismanagement.
Failing to catch the sort of injection attacks outlined in the article is just beginner stuff. That the code went to production in such a state says bad things about the whole of the project's management. It might be the case that this project is permanently stigmatised - to have launched such poor quality code into a live application implies that the leadership wouldn't know security if it slapped them with a wet fish.
I remain unconvinced that Ruby on Rails was a good choice for the project, but that seems to be a near-irrelevancy; this code is clearly so badly-written that no language would have been appropriate. It's just lucky that the code is open-source - that means the testers can do white-box testing, and tailor their stimuli to what they can see in the code.
Mind you, the chances are that this sort of thing would have been caught by black-box testing without much more effort - flaws of this scale will always be exploited if they are present, and should be absent by design.
"to have launched such poor quality code into a live application implies that the leadership wouldn't know security if it slapped them with a wet fish."
That's why we should be glad that the system never went live, I guess.
There's still a serious problem as flaws of this magnitude should not even have reached the testing level, but chill dude, it never went live fo' realz.
Management may still be at fault, because according to some other comments here the project is only a few weeks old (I can hear the middle-management types now: "OK, we spent the last 2 years discussing it without telling you, and we finally agreed that we need an online voting system up and running by next month. Make it happen. How hard can it be?").
> That's why we should be glad that the system never went live, I guess.
But it did go live. It might not have been actually counting votes - but it is still a live application exposed to users. And it wasn't even close to being ready for that.
> flaws of this magnitude should not even have reached the testing level
Exactly. This project is broken by design.
> but chill dude, it never went live fo' realz.
Well, that rather depends on what you mean by "live fo' realz". That this story is being used in an attempt to discredit the open-source development model is plenty live enough for me.
> the project is only a few weeks old
That's what I get from the github graphs. I don't find this the easiest tool for such things - but it is rather popular at present :-(
But that doesn't excuse the fact that something as important as e-voting was entrusted to a bunch of neophytes so incompetent as to have included such trivially-avoidable beginner errors in their code. Nor does it excuse the fact that this was not picked up in code review - so that review is either very faulty or very absent.
This application had no business being presented as part of Officialdom. If it had been three skiddies playing at coding, we'd just have had a laugh. But this is a Governmental system; it provides negative publicity for Government procurement, for e-voting, for FLOSS, and for computing systems in general. And that's not good for any of us.
I believe it is technically possible. First idea that springs to mind would be the use of a separate server generating one-time credentials from non-falsifiable user data. The separate, tightly regulated server would take user data and return strong random one-time credentials, while storing both the user input data and the delivered credentials in separate, unlinkable databases. There are quite a few requirements for the credential-issuing (auth) server:
-the user data must be at least as strong as physical ID is. Passport number + physical address + some other kind of verifiable but unrelated ID, like social insurance number or anything that the state would already know but virtually unguessable by a third party.*
-the issued credential must be very strong (100s of random characters will do the trick. Think strong-encryption key)
-the issued one-time credential ("key") MUST be independent from the user data (no "clever" hash allowed; just generate a strong pseudo-random key for each request and compare with the list of previously-issued keys until you get one that you didn't already issue).
-user ID and issued credentials must obviously be stored. (to avoid duplicate connections or duplicate keys)
-But they MUST be stored in separate databases and be ABSOLUTELY impossible to link to each other AND impossible to link to the actual voting process. For this, it is absolutely necessary that ONLY the user ID be stored in the user ID base, and ONLY the issued key be stored in the key base (forget IP, time of connection, ordered database indexes and all that crap). Both databases should be shuffled at random with each new entry, just to make sure**
-connections to and from the key-issuing server must be strongly secure. That problem needs to be discussed, but here is not the place.
-connectivity MUST be assured at all times.***
Requirements from the voting server:
-must have access to the "key" database on the "auth" server to verify the authenticity of the vote.
-must be denied access to anything else.
Now for the actual voting process: connect to the "auth" server, enter your credentials, get a one-time key. Connect to the voting server using your key****, enter your choice, disconnect.
You read it here first, folks.
(come to think of it, if I was a whore I would patent that).
* That's probably the hardest nut to crack. It depends on what info your state already has that a felon can't guess. Of course you don't want to go all big-brothery but you need some kind of data integration to beat the crackers. Tough choice.
** just a half-arsed paranoid attempt. ANY type of possible cross-link between databases should be avoided, including the entry order. I'm no DBA, so that's mainly a wild guess.
*** Second in the "hardest nut" contest. Vote anonymity and verifiability means you won't have a second chance. In the physical space you won't be thrown out of the bureau halfway, on the internet you must not experience random disconnections.
**** The most secure way would probably be using the delivered key with SSL, but that might be out of the reach of non-tech punters. Copy-paste might be acceptable as long as all connections are kept secure. (yeah, I know, but let me believe!).
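A minimal sketch of the auth-server half of that scheme, using SecureRandom from Ruby's standard library. The two "databases" are plain in-memory hashes here, purely to show the voter-ID/key separation; the class and method names are invented for illustration:

```ruby
require "securerandom"

# Issue strong one-time keys, keeping voter IDs and keys in two
# deliberately unlinkable stores, as the scheme above requires.
# In-memory hashes stand in for the two separate databases.
class AuthServer
  def initialize
    @seen_voters = {} # voter ID -> true (no key stored here)
    @issued_keys = {} # key -> :unused/:used (no voter data stored here)
  end

  # Returns a one-time key, or nil if this voter already got one.
  def issue_key(voter_id)
    return nil if @seen_voters[voter_id]
    @seen_voters[voter_id] = true
    key = SecureRandom.hex(64) # 128 random hex characters
    key = SecureRandom.hex(64) while @issued_keys.key?(key)
    @issued_keys[key] = :unused
    key
  end

  # The voting server calls this; a key is valid exactly once.
  def redeem_key(key)
    return false unless @issued_keys[key] == :unused
    @issued_keys[key] = :used
    true
  end
end

auth = AuthServer.new
key = auth.issue_key("passport-12345")
auth.issue_key("passport-12345") # nil - one key per voter
auth.redeem_key(key)             # true
auth.redeem_key(key)             # false - one-time use
```

Note the voting server only ever sees the key store, never the voter-ID store, which is the unlinkability requirement above.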
>>Just shows the superiority of paper and a pencil on a string.
If you are referring to voting in good old Blighty.... your pencil and paper vote is not secret for all values of secret. Each ballot paper has a serial number written on it, and each voter has that serial number logged against their name by the returning officer's representatives in the voting hall.
/mine is the one with the unmarked ballot papers in the pocket
See especially commandment number five:
Thou shalt check the array bounds of all strings (indeed, all arrays), for surely where thou typest ``foo'' someone someday shall type ``supercalifragilisticexpialidocious''.
As demonstrated by the deeds of the Great Worm, a consequence of this commandment is that robust production software should never make use of gets(), for it is truly a tool of the Devil. Thy interfaces should always inform thy servants of the bounds of thy arrays, and servants who spurn such advice or quietly fail to follow it should be dispatched forthwith to the Land Of Rm, where they can do no further harm to thee.
Just because thou programmest not the entire web in C meaneth not that thou mayest behave as a heathen. For surely the Goddess Ansi shall smite thee for thy impudence.
And it hasn't anything to do with whether thy source be open or closed, for She sees all.
> the Enlightened people use Ada, where you can have
> built-in bounds checking of many kinds.
Note that in Ada you *can* have built-in bounds-checking; that doesn't mean you *do* have it.
The upshot of this is that some programmers stop coding defensively, because they expect the compiler to do the work. They don't worry about designing array indices not to overflow any more.
This works fine until someone switches off the auto-checking, believing it to be an overhead they can no longer accept. I've seen this happen in many projects.
In fact, wasn't that a large part of the reason for the Ariane 501 accident?
Remember, this is the city that voted a convicted coke head, Marion Barry, mayor after he got out of jail. You can bet the election officials are only upset with this test because it means they can't rig the election this way. This time.
This is basic stuff. These sorts of flaws are easy to guard against; why do people keep asking boys to do a man's work? It is perfectly possible to write a secure website. I despair; it could be breached through a basic lack of escaping, and probably SQL injection and/or XSS. I really begin to wonder if there are any real programmers left.
Also, Register, as others have pointed out, this is just bad journalism. The whole point of the site was to see if it could be hacked; it was never used for real votes and did not need to be 'pulled'.
How do we report an article that is so clearly misleading and riddled with half truths so that it gets taken down or marked as such for more casual readers to see?
As some comments suggest, the big security flaw had nothing to do with the framework or back-end software. These developers were clearly inexperienced (says I, the recent college grad).
We need a way to say "HEY! REG! This article was clearly not fact-checked!". (Perhaps I just said it).
"No, we never said the gaping holes that were exploited stemmed from the open-source software. FOSS fanbois can now stand down."
Well, I think it did in a sense -- Ruby on Rails is supposed to sanitize input to prevent exactly this type of attack. Either there is a flaw in RoR, or these guys bypassed some of it (for instance, taking in a form directly instead of filtering it through RoR's sanity checks first).
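For readers unfamiliar with the distinction: the danger is interpolating user input straight into a SQL string rather than passing it through the framework's parameterized path. A pure-string sketch (no real database here; the `quote` helper is a stand-in for what a driver or ORM does internally):

```ruby
# Why raw interpolation is dangerous: the classic Bobby Tables input
# breaks out of the intended query. We just build the SQL strings to
# show the difference - nothing is executed.
name = "Robert'); DROP TABLE Students;--"

unsafe = "INSERT INTO Students (name) VALUES ('#{name}')"
# The payload escapes the quotes and becomes live SQL:
# INSERT INTO Students (name) VALUES ('Robert'); DROP TABLE Students;--')

# What a parameterized API does instead: the value never becomes SQL.
# This quoting is a simplified stand-in for the driver's own escaping.
def quote(value)
  "'" + value.gsub("'", "''") + "'"
end

safe = "INSERT INTO Students (name) VALUES (#{quote(name)})"
# The payload stays trapped inside one quoted string literal:
# INSERT INTO Students (name) VALUES ('Robert''); DROP TABLE Students;--')
```

With Rails' own query interface the same idea applies: pass values as bind parameters, never splice them into the SQL text yourself.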
"Actually the system was not even "hacked", a design flaw was exploited."
I think you're splitting hairs -- that is what almost every hack amounts to, finding a design flaw and exploiting it. Of course it does appear to be in the web app as opposed to Apache, MySQL, etc. themselves. But still.
. . . though a big finger is pointing in their direction, but also at the testers (presuming they actually had any).
One of the biggest failings I've seen on many (read: almost all) projects over the years, and a personal bug-bear of mine, is that almost all testing is positive - I expect the user to type in either an N or a Y, so I will test that it works when they do so. It's so rare that negative testing gets done that it still shocks me when I see someone actually doing it - e.g. I expect the user to type in either N or Y, so I shall type in Q and see what happens.
Still, it is the old Bobby Tables exploit, and the fact that the cartoon strip about it is many years old says more than anything else. Anyone releasing code (for testing or for production) that is vulnerable to one of the oldest and best-known issues of database-based applications should basically just be fired for gross incompetence.
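The positive-vs-negative testing point is easy to make concrete. A sketch with a hypothetical Y/N validator (the function and the bad inputs are invented for illustration):

```ruby
# Positive tests check the input you expect; negative tests check
# everything else. A toy Y/N prompt validator, then both kinds of test.
def valid_answer?(input)
  %w[Y N].include?(input.to_s.strip.upcase)
end

# Positive tests: what we expect the user to type.
raise "positive test failed" unless valid_answer?("Y") && valid_answer?("n")

# Negative tests: what users (and attackers) will actually type.
["Q", "", "yes please", nil, "Y; rm -rf /"].each do |bad|
  raise "accepted bad input #{bad.inspect}" if valid_answer?(bad)
end
puts "all tests passed"
```

The negative cases are the ones that catch injection-style flaws - exactly the class of input the DC system never tested against.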
Testers (whether they existed or not) can't fairly be blamed for not catching this particular flaw. (And, in fact, it was caught during testing - by the UoM researchers.)
This wasn't the "old Bobby Tables exploit". That XKCD cartoon is about SQL injection attacks, which are certainly extremely well-known now and inexcusable, but which arise in a fairly obvious design pattern that inexperienced programmers working in a LAMP or similar environment can easily reinvent. It's easy to see how many SQL injection vulnerabilities are introduced, because dynamically generating SQL statements is an obvious approach when you're given an API for executing SQL queries in string form. It's much like many C buffer overflows; the API encourages dangerous behavior.
*This* bug was a huge design flaw that should never have been seriously considered for a moment, much less implemented. They let the client supply a filename extension to be used on the server. That became a shell-injection attack vector when someone used it to construct a command line that was passed to the shell for processing (two foolish, ignorant errors), but it was *never* reasonable, even for a moment, to think that the filename on the server's filesystem should be constructed directly from data sent by the client.
In this case it's not even a question of sanitizing the data. There's no need to take the filename extension from the client request. The app is supposed to be receiving a PDF from the client at this point; either the data is a valid PDF file (so the app can just use ".pdf"), or it's a bad request (in which case the app really ought to have determined that before this point).
Halderman characterizes this as a "small bug" in his note, but I disagree. It's an easily-explained bug. It's easy to fix. But it's not small. It indicates a complete failure on the part of whoever wrote that code to think about program security in a systematic way. That person shouldn't be working on a system that will, if deployed, inevitably be a frequent target.
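To make the extension flaw concrete, here is a sketch of the dangerous pattern versus the obvious fix, in Ruby. The function names and the `gpg` command line are invented for illustration; the point, as above, is that the client never needs to supply the extension at all, and anything that does reach a shell should be escaped:

```ruby
require "shellwords"

# The flaw: a client-supplied extension spliced into a shell command.
def encrypt_ballot_unsafe(uploaded_name)
  ext = File.extname(uploaded_name)            # attacker-controlled
  "gpg --output ballot#{ext}.gpg ballot#{ext}" # "$(...)" rides along
end

# The fix: the app expects a PDF, so it never needs the client's
# extension - just use a server-chosen ".pdf" path. Anything that
# does end up in a command line gets escaped with Shellwords.
def encrypt_ballot_safe(server_path)
  out = Shellwords.escape(server_path + ".gpg")
  src = Shellwords.escape(server_path)
  "gpg --output #{out} #{src}"
end

p encrypt_ballot_unsafe('ballot.$(sleep 10)') # injected command survives
p encrypt_ballot_safe("ballot.pdf")           # server controls the name
```

Better still, avoid the shell entirely - spawn the process with an argument array - so there is no command line to inject into.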
All that said, Internet voting is a lousy idea, regardless of whether it can be made safe. Accessibility issues can be solved in other ways, and there are no other advantages - and plenty of disadvantages (e.g., it's much easier to coerce or suborn voters) - to online voting. The idea that it'd increase democratic participation is poor economics; it just means we'll have more lazy, uninformed nitwits who formerly couldn't be bothered exercising their franchise.
(besides the commentards jumping at the red herrings of platform choice and white v grey v black hat) is this:
"...Washington DC elections officials began testing it ahead of live elections scheduled for next month. "
A system which needs to be as complex and secure as an online voting system surely requires a bit more than a week's testing less than a month before it's implemented.
I've had a thought or two about more-electronic voting systems, particularly the idea of "Voter-Verified Paper Trails", and I think I know a way to both clarify their output AND alleviate the problems of the old paper ballot (misreads and the infamous "hanging chads").
Simply make the final ballot that comes out of the voting machine Universally Readable. Instead of using barcodes or other mumbo jumbo, produce a ballot showing the votes cast in clear legible text readable BOTH to the voter AND to a machine. This is best accomplished by having the text printed in a typeface specially designed for OCR (the kind used on checks and the like). When the ballot prints the voter can see for him/herself what's to be voted (allowing quick verification), and when put into an OCR ballot machine, the votes can be tallied quickly and likely accurately (since OCR fonts are tried and tested tech). Plus, in the event of manual recounting, it's much more difficult to misread a vote since the ballots actually contain legible text instead of holes, marks, and so on.
Biting the hand that feeds IT © 1998–2019