Googlebot is Google's web crawling bot (or "spider"). Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.
Google uses a huge set of computers to fetch and index pages on the web. An algorithmic process determines which sites to crawl, how often, and how many pages to fetch from each site.
Googlebot's crawl process begins with a list of webpage URLs, generated from previous crawl processes and augmented with Sitemap data provided by the site webmaster. New sites, changes to existing sites, and dead links are noted and used to update the Google index.
The description above is from the Official Google Blog; visit it for more information on Googlebot.
Google advises that you use a reverse DNS lookup to verify that a visitor claiming to be Googlebot really is Googlebot: look up the hostname for the requesting IP address, check that it belongs to googlebot.com or google.com, then run a forward DNS lookup on that hostname to confirm it resolves back to the original IP address.
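As a rough illustration, here is a minimal Python sketch of that reverse-then-forward DNS check. The function name `is_googlebot` and the sample IP address in the usage example are assumptions chosen for demonstration, not part of Google's documentation.

```python
import socket


def is_googlebot(ip_address):
    """Verify a claimed Googlebot visit using reverse and forward DNS lookups."""
    try:
        # Reverse DNS: find the hostname registered for the IP address.
        hostname, _, _ = socket.gethostbyaddr(ip_address)
    except socket.herror:
        return False

    # The hostname should belong to googlebot.com or google.com.
    if not (hostname.endswith(".googlebot.com") or hostname.endswith(".google.com")):
        return False

    try:
        # Forward DNS: the hostname must resolve back to the original IP.
        _, _, resolved_ips = socket.gethostbyname_ex(hostname)
    except socket.gaierror:
        return False

    return ip_address in resolved_ips


# Example usage (the IP below is only a placeholder for a visitor's address):
print(is_googlebot("66.249.66.1"))
```

The forward lookup matters because anyone can spoof the Googlebot user-agent string, but only Google controls the DNS records that map its crawler hostnames back to its own IP ranges.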