The point is to calibrate your uncertainty
Once you have done more than four quizzes, you get a calibration curve showing where you are over- or under-confident. The aim is to see whether it is possible to develop your skill at estimating your subjective Bayesian probability. The scoring formula is Brier's proper scoring rule - but modified so that you cannot get negative scores.
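The exact non-negative modification used by the quiz is not spelled out, so the sketch below makes an assumption: it rescales the Brier penalty as 1 - (p - outcome)^2, which keeps every score in [0, 1]. It also shows one simple way a calibration curve can be built from a run of answers by binning stated probabilities and comparing them with observed frequencies.

```python
def score(p, outcome):
    """Non-negative Brier-style score for one binary question.

    p: stated probability (0..1) that the answer is true.
    outcome: 1 if the answer was true, 0 if false.
    Returns a value in [0, 1]; higher is better. This particular
    rescaling is an assumption, not the quiz's published formula.
    """
    return 1.0 - (p - outcome) ** 2

def calibration_curve(answers, n_bins=10):
    """Bin (p, outcome) pairs by stated probability.

    Returns a list of (mean stated probability, observed frequency)
    points. On a well-calibrated curve these lie near the diagonal;
    points below it indicate overconfidence, above it underconfidence.
    """
    bins = [[] for _ in range(n_bins)]
    for p, outcome in answers:
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, outcome))
    curve = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            freq = sum(o for _, o in b) / len(b)
            curve.append((mean_p, freq))
    return curve
```

For example, a confident correct answer scores `score(1.0, 1) == 1.0`, a confident wrong one scores `score(1.0, 0) == 0.0`, and hedging at 0.5 scores 0.75 either way - which is why the rule rewards honest uncertainty rather than bluffing.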
The point was not to make a commercial game, but to develop a research tool to help us find out whether people can improve their skill at estimating odds. It was tested on over 500 members of iPoints. It turns out that if you are in a hurry, you don't improve at anything - in fact you get worse. If you take more than 20 or 30 seconds per question (i.e. you stop and think), you get better over time.
As for the coding, it was done by a games developer attached to Brunel University - nothing to do with Queen's University Belfast (where a Ph.D. student in the Management School, not CS, has been researching the learning effects). In any case, a commercial game costs $10,000 to build and test. There is a limit to what one person can do when contracted part-time over a year. The big difference from commercial games is that the uncertainty is explicit, not hidden in the code that drives a monster. Imagine someone developing an uncertainty engine, analogous to a physics engine, that could then be used in commercial games.