Do you really think it could be that easy?
Hell, if that were the solution, I'd no doubt be using the 'BristolBachelorBot' search engine today, wouldn't I? The reason there's only one serious contender, and one wannabe, in this market is that it's hard.
Even if the Googlebot did not explicitly identify itself as such, the spider can easily be recognised simply by the patterns of its behaviour. For instance, unlike regular web-scrapers, search engine spiders tend to pace their requests at regular intervals over a given period of time, and will avoid requesting certain content (for instance, JavaScript files whose functionality is not in some way triggered by the page request) so as not to consume a site's bandwidth: a visit from the Googlebot can easily take half a day if you have a lot of content. The regularity and nature of the requests act as a signature.
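Just to make the point concrete, here's a rough sketch of the kind of heuristic I mean, as it might look over an access log. This is not how Google's crawlers actually behave or how anyone's real detector works; the log format, field names and thresholds are all invented for illustration:

from statistics import mean, pstdev
from collections import defaultdict

def suspected_spiders(log_entries, min_requests=50):
    """log_entries: iterable of (client_ip, unix_timestamp, request_path).
    Flags clients whose request timing is suspiciously regular and who
    never fetch JavaScript assets."""
    by_client = defaultdict(list)
    for ip, ts, path in log_entries:
        by_client[ip].append((ts, path))

    suspects = []
    for ip, hits in by_client.items():
        if len(hits) < min_requests:
            continue
        hits.sort()
        gaps = [b[0] - a[0] for a, b in zip(hits, hits[1:])]
        fetched_js = any(path.endswith(".js") for _, path in hits)
        # Very regular spacing (low jitter relative to the mean gap), plus
        # never touching the JavaScript, looks far more like a polite
        # crawler than a human with a browser.
        if mean(gaps) > 0 and pstdev(gaps) / mean(gaps) < 0.2 and not fetched_js:
            suspects.append(ip)
    return suspects

A real detector would be far messier than this, of course, which is rather my point.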
Even if those factors didn't alert you that a search engine was visiting, the very fact that it reads your robots.txt file is a bit of a giveaway. I'm sure you wouldn't advocate that search engines stop reading robots.txt, would you?
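Another made-up heuristic along the same lines: ordinary browsers essentially never ask for /robots.txt, so a client whose first request is for it stands out immediately. Again, purely illustrative:

def fetched_robots_first(hits):
    """hits: list of (unix_timestamp, request_path) for one client."""
    if not hits:
        return False
    first_path = min(hits)[1]  # earliest request by timestamp
    return first_path == "/robots.txt"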
Google regularly and deliberately obscures the behaviour of its spiders to throw these people off, but it's a constantly moving battle. I really don't think people outside of search realise the scale of the problem of automatically gathering realistic data on the Web these days. We only notice it when it fails.