18 posts • joined Thursday 14th June 2007 00:33 GMT
Logitech value optical mouse, £5.99 from PC World
I was very impressed with the Logitech Value optical mouse. Standard sort of thing - two buttons plus clickable scroll wheel. USB with PS/2 adapter. The scroll wheel doesn't stick and get accidentally clicked when you scroll, unlike my more expensive one at work. £5.99 from PC World (collect at store).
Re: common misconception
"The only ways to effect a change are either to stop voting in the hope that it'll bring about a hung parliament and a collapse of the current system... or a military coup, but we're just too British for that."
How about taking back control one small piece at a time? Find out about who's standing for election, and volunteer to campaign for the one who you most agree with. Or stand yourself, and get a few friends to go around to everyone's house to explain what you believe the options are, and why you believe your compromises are better than those of the alternatives. Talk to your friends and neighbours about politics, philosophy, history and economics so you can learn from each other, and you will all be in a better position to evaluate the policies of those seeking to represent you. As more people come to understand politics better, the politicians will need to take their views into account more because attempts to fob them off with spin will backfire.
Self-reliance and education - a very (traditional) British approach. :-)
Apathy and ignorance among the majority of the population (which seem to be actively encouraged by the mainstream media) are what allow those in political power to ignore their constituents and concentrate on serving those who fund them (including, no doubt coincidentally, the mainstream media). You don't have to assume that politicians are inherently bad people for this to be true - it makes their lives much harder if they have to take into account the informed views of their constituents.
"C++ started out as a preprocessor hack"
That's a common misconception. Early C++ implementations by Stroustrup (notably Cfront) were full compilers whose back-end generated C instead of machine code. Very sensible, given that there were lots of C compilers around which could generate machine code for a wide variety of binary targets, and there was only a single C++ compiler writer.
", and a lot of the "power" and "versatility" of C++ is still in the C preprocessor."
That's not true. The power and versatility of C++ come from the support for OO and generic programming, and the powerful standard library. Preprocessor hacks are rare in most modern C++, and various features of C++ were put in with the deliberate aim of reducing use of the preprocessor to a minimum e.g. templates for functions like min()/max(), constant variables being constant expressions, and inline functions.
I agree that a large part of the popularity of C++ is because of the popularity of C. I think that a major reason for that is that Stroustrup (like K&R) doesn't force his ideas of good style down your throat, instead allowing you to make decisions for yourself based on your own constraints, though the structure of C++ is designed to allow you to express your designs very cleanly.
You do have to learn how though, and that does take a lot of effort. Trying to write C++ using only those skills you've learned while writing in C and similar languages will end up with you producing some very poor C++ code. The key is to understand what problems the various language features were designed to solve. Reading "Effective C++" by Scott Meyers, and "The Design and Evolution of C++" by Stroustrup helps a lot.
From my reading of "The Design and Evolution of C++" by Bjarne Stroustrup, I don't think that's really the way that C++ has evolved. The last major features to be added to the C++ language were templates, exception handling and run-time type information, and the essence of the design was already in place 20 years ago, before standardisation began. From what I've read recently, there will be some minor tidying of the language in the updated C++ standard next year, but no major language features. Stroustrup makes it clear that he is determined not to add language features where there are already alternative ways to express the idea in the existing language, or where libraries can be used instead.
Again in the D&E book, Stroustrup shows where the various language features come from. Templates came from a desire to create container classes in which type errors could be detected at compile time. Exceptions are an important part of allowing applications to be composed of independently-written components, in which the code which detects an error condition is unlikely to be in a position to know how to respond, and vice-versa. C++ features were generally introduced in response to needs of a wide section of the C++ programming community, and ideas were discussed and refined for years before being accepted as part of the language, considering the experiences of experimental C++ implementations and of other programming languages. Features which were unlikely to be of use to a wide cross-section of C++ programmers were consistently rejected.
It's interesting that you point to Java as being a promising clean language. The major features of C++ not present in Java were basically multiple inheritance and templates, and I suppose operator overloading if you consider that major. However, right from the beginning the Java language introduced several new features, including a form of run-time type information with a much greater scope than C++'s minimal version, garbage collection as standard, interfaces, serialisation, and multi-threading and synchronisation support. That was soon followed with inner classes.
The question I'd like to ask you is: what is the minimal set of features for an effective general-purpose programming language? I take it you'd want a variety of data types, control flow and function calls. Is floating-point support necessary? Would you consider it excessive to have classes? Should there be language support for polymorphism, or are function pointers or references sufficient? What about support for type-safe collection classes? What about exceptions, or is it good enough to have return values which have to be checked on every function call, so that the likely control flow is hard to see because of all the inline error-handling code? Is it necessary to have built-in constants for pi and e? What about complex number support, or should that be expressed through structures with two members, and if so, what types should those members have? What about closures? What about run-time type identification? What about the ability to create objects by the name of their type? Garbage collection? Support for multiple threads? Should it be able to call functions written in other programming languages? Should it allow unchecked access to arrays? If not, what should it do in response to an access violation?
More tellingly, how would you decide what was the appropriate set of features for a programming language? Would you decide for yourself, and tell people to take it or leave it?
Yes, that does seem to be the cycle. I think the problem is that approaching a mature language is very difficult, and requires a lot of patience and dedication to learn. I've been writing C++ for over 10 years, and for most of that time, each new thing I've learned has revealed several more things that I hadn't realised I didn't understand. Now that I'm writing a C++ language support library to make GCC work with our platform, I feel that I've finally reached a tipping point, understanding multiple inheritance, RTTI and exception handling in some depth.
When a developer doesn't understand a complex feature of a language, and can get by with a simpler approach, it is very easy for them to believe that the complex feature is just clutter. (And vendors exploit this perception to promote their proprietary languages, which allow you to write impressive demo code very quickly, but aren't up to the job of fleshing out and maintaining a full application).
As programmers set out with a new, simple programming language, they start to understand the problems which the complex feature addressed, and start demanding that the feature be added to their simple language. Eventually the simple language evolves into something that the next generation of programmers turn away from in horror.
So what's the solution?
Well a good start would be if programmers spent more time learning, particularly learning about programming in general, rather than programming with a particular language. I find it saddening when some developers criticise C++ saying that Java and C# are "more object-oriented than C++", demonstrating that they don't understand (i) the difference between supporting a design and programming paradigm on the one hand, and forcing programmers to put all data and functions into syntactic structures labelled "class" on the other, and (ii) that object orientation is only one paradigm which is eminently suitable for many problems, but is not ideal or is even irrelevant to others.
Another would be if skilled developers were given the ability to choose the technology suitable to the problem to be solved, rather than being forced by managers to use whatever whizzy product some vendor has just convinced them will magically create a solution to all their business problems just with a few mouse clicks performed by people with no knowledge of software development.
I've seen software development work well and work badly over the years. Where it's gone well, it's been due to a good team of skilled developers, where the most experienced ones mentor the less experienced; and where the immediate management have prevented higher management from interfering with day-to-day activities, but have made sure that the development team are focussing on solving the business goals (without trying to force them to do so in any particular way).
Similarity of syntax misses the point
While the syntax of Java is derived from C++, the semantics are very different, and I expect that that is what the two academics are uncomfortable with.
There's nothing wrong with teaching Java - I consider it a good programming language, with a good set of class libraries. But it is a much higher-level language than C, C++ and Lisp (I don't know Ada), in the sense that the language constructs do not map closely to operations in a typical physical machine. Java is designed to run in a virtual machine with complex semantics; a lack of understanding of lower-level concepts would make it hard to understand how this virtual machine can in principle be implemented on common hardware, and thus what the cost in time and space of Java constructs is likely to be.
One of the best books I read at university (early 90s) was 'Structured Computer Organisation' by Andrew Tanenbaum, which presented logic circuits, microcode, several assembly languages, and then C-like languages, showing how each level of abstraction could be built on an implementation in the level immediately below. Java could usefully be taught once these lower-level abstractions are understood. The student would then be equipped to understand which problems are suitable for tackling in a Java-like language, and which would be better implemented in something more like C++, or even machine code.
Re: More nuanced
"And your point about exceptions is where things get really sloppy for those who don't adhere to single exit.. If you call a function that throws exceptions, catch them and handle them. Don't let them percolate up. Talk about impossible to test code paths!!"
I don't think that catching all exceptions from every function is the right thing to do normally. Exceptions are designed to carry information from the point where an exceptional condition is detected (where its details are known, but the appropriate response is not) to the point where a sensible response can be chosen (where the details of the condition would not otherwise be known). Otherwise you may as well not use exceptions at all, and just return a value from the function.
If you have a networking library, and it detects deep down in its implementation that a cable has been disconnected, it _should_ throw an exception, and that exception should not be caught by the top level of the library. It should be caught by the application code which can decide whether to exit with a message and an error status (quick hacky program), to display a dialogue box (interactive GUI program), to generate some suitable HTML (web servlet), or some other response appropriate to the type of application.
For general-purpose programming, rather than the specialised area of safety-critical systems (about which I know little), I tend to agree with the author and some of the comments - clarity is the most important thing. Functions should be written so that they can be understood in their entirety, in which case the structured programming techniques are much less relevant. I personally find the "bail out early" style to be much clearer than the "if (still_no_errors)" style. In my experience, it is much more important to get the overall structure of the program right than the structure at the function level.
Swing doesn't promote separation of GUI and logic?
I don't understand the comment about Swing not promoting separation of GUI and logic. IIRC, separation of GUI and logic was one of the main design goals of Swing. Not only are there models for essentially every Swing component (which admittedly often belong to the UI domain), but there are some excellent tutorials on java.sun.com showing you how to use Swing in the way it was intended i.e. storing data in Java beans, and using PropertyChangeEvents to update the GUI. In my experience, it works extremely well.
As a simple illustration, a few years ago I wrote an application which involved the user completing an on-screen form while talking to a customer on the telephone. It had a save button (which saved the details entered to be completed later) and a submit button (to begin processing the details). When the model fired a PropertyChangeEvent to say that the "valid" Boolean property had changed (which was done in response to updates of model fields), a PropertyChangeListener changed the enabled state of the 'submit' button accordingly.
Having said that, I completely agree with the thrust of the article - test the underlying model, and just let the GUI be a way for a user to communicate with that model.
Most drivers and cyclists that I see are quite good
I'm another one of those people who both drive and cycle. I have to say that the vast majority of drivers are courteous, keeping a good distance from me when I'm cycling. Also, I find that the vast majority of cyclists that I see behave appropriately. I don't want to see the groups getting antagonistic towards each other because of a minority of bad examples. Let's try to respect each other, and remember that we all make mistakes on the road at times.
I do want to say a couple of things against some of the more extreme comments made above:
1. Saying that children are entirely responsible for being injured by a vehicle if they step out in front of it is kind of missing the point that THEY'RE CHILDREN. They haven't got as good an awareness of their surroundings as adults do because they haven't had as much experience. It's up to adults, particularly those travelling at speeds that can cause serious injury or death, to be ready to respond to other people's mistakes. Yes - teach the children awareness, but don't abdicate your own responsibility to make allowances for the inevitability of other people's mistakes. Driving at the limit of one's ability will inevitably lead to accidents. I remember running into the road once when I was about 4 - I know that I caused a car to brake, but I don't know how close it got. (I was running away from a neighbour whom I found a bit scary).
2. You can't safely cycle 12" from the kerb. The edges of the roads aren't maintained well enough, and are too full of debris, for that to be safe. It's even worse in the rain when you can't see the potholes, and the drains and inset manhole covers don't offer much grip. So it'd be really nice if drivers would just hold back for a bit. Yes, I know the sense of frustration at being held up when there's an empty road ahead, but try to think happy thoughts and overtake when it's safe to give a 3' gap, or a bit less if you're in a low-sided vehicle and travelling just a few mph faster than the bike. In my experience of driving around Reading, it's rarely more than about 20 seconds.
By the way, I hate seeing cyclists going past red lights too. Act a bit more mature, will you?
What do software developers provide?
I don't feel worried about being replaced as a software developer by a business analyst and some magic tool to help them to write programs.
Software is composed of very simple components, which are easy to understand in isolation - anyone with modest intelligence and a bit of training can do it. Lots of people write Excel macros to do clever things. The hard bit is trying to put millions of instructions into some coherent, maintainable shape. That is what the experienced software developer provides to a business, and that is what no tool that I can imagine could provide to someone without the knowledge and experience of complex software development.
I doubt whether laser eye surgeons will be worried about DIY kits sold in Boots any time soon either, for that matter. I believe that the future of technology is in allowing specialists to become more productive through progressively better tools, not in creating snake-oil tools that supposedly allow people to be productive without needing to have any understanding of what they are doing.
I'm sure that some specialised areas will be served by configurable standard tools, but I'm sure that there will be plenty of new problems to be solved for a long time to come.
Re: Occam's Razor
You have to be careful in using Occam's Razor in philosophical discussions. It's not telling you anything about truth. It's more of an aesthetic appeal to minimalism, and a way of cutting out unnecessary complexity, where "unnecessary" means that it adds nothing at the current time to the ability to predict outcomes of experiments. It's ideal for science.
If Occam's Razor were saying that a more complex hypothesis is actually _untrue_ when a simpler hypothesis explains all observations under consideration, that would imply that general relativity was untrue before the 20th century, and only became true when observations started being made which challenged Newtonian mechanics. You could argue that case, but I doubt many people would.
By the way, there have been several comments stating that religions do not allow questioning of what they teach. Can we either have those statements substantiated, or have an end to them please? In my experience, it's occasionally true, but not generally. And it seems to be equally common for secular groups to refuse to listen to people arguing "heresies" against what is commonly believed. Let's take an example from comments on stories in The Register: ignoring the facts for now, many Linux users just "know" that their operating system is more secure than Microsoft Windows, and will not listen to statements to the contrary. There will often be some grounds for believing it to be the case, but I'm convinced that it is often a matter of faith and dogma.
Science = provable?
People who see science as proving things should be more careful about their beliefs. The empiricists believed that they were proving their theories by making observations, but it's worth reading a bit about Karl Popper who pointed out that science only disproves theories - it never proves them.
All experiments are observations. Looking back, one of the best physics lessons that I had at school involved looking for Brownian motion. We were asked by the teacher to say what we saw through a microscope. Everyone claimed to have seen smoke particles being buffeted by the air. The teacher insisted, to everyone's initial discomfort, that all we had actually seen was shifting light patterns. We had simply interpreted it as smoke particles, knowing in advance what the expected "correct" answer was.
We can produce theories that are self-consistent, and appear to explain observable phenomena, but it's important to remember that the theory isn't necessarily describing the truth. The great power of science is not in its ability to prove, but rather its ability to disprove a theory: any theory that cannot explain an observed phenomenon is shown to be untrue. And any theory that can predict phenomena successfully is just a good theory.
May as well put an opposing view
There are some valid points in the comments, but some demonstrate the self-righteousness of certain atheists, and a lack of clear thought.
"Half of the world's problems stem from religion".
I'd say it's not religion per se, but arrogance and self-righteousness. The state terrorism of Stalin, Mao, and Pol Pot certainly can't be blamed on religion. Should atheism be denounced because of their actions?
The abolition of trans-atlantic slavery was in large part prompted by the religious beliefs of campaigners. Many charities are founded and funded by people who want to help their neighbours out of a sense of religious conviction.
"Maybe if the church is so against promoting violence they could get rid of all those statues and pictures of that bloke being brutally tortured to death on a cross?"
Correct me if I'm wrong, but I don't think that the church is seeking to promote violence by depicting the crucifixion. As far as I remember, Christians aren't called upon to torture people to death. I think the idea was more about reflecting on what Christians believe that God was prepared to suffer for mankind's sake.
"CofE complains of violence ... fails to mention Treasons Act of 1534 (outlawing Catholicism on pain of death)."
Was that Act passed by the General Synod of the Church of England, or by the secular Parliament? Were there any political motivations to it, rather than religious ones?
For the record, I am a Christian, but in the absence of much evidence available to me I don't have strong opinions either way about the effects of gun-related video games.
E-R vs UML
I'm no expert on E-R modelling, but from the examples that I've seen it doesn't seem well suited to expressing the idea of hierarchies of types / entities, which can be useful in some situations. In those cases, I'd have thought that UML would be more suitable than E-R. If the business people have a problem understanding UML, they should go and learn a bit about it, say as much as they expect (entirely reasonably) IT people to learn about the business. It's at least as much their responsibility as the IT people's to ensure that the requirements are communicated clearly, and surely communication is the main point of modelling?
I don't understand the point about UML not expressing concepts such as identifiers/primary keys. That seems to me to be a low-level implementation detail that should be kept well clear of logical modelling.
By the way, it's "principles", not "principals" in 3 places in the article. Yes, I was able to correct it in my head, but I find that having to do so is quite distracting.