Once you’ve built a great site and you’re ready to have it indexed in Google, you’ll want to ensure that Google knows about your site’s pages. In many cases, Google may already be aware of your pages because one or more quality sites have included a relevant link to your site on theirs. Indeed, naturally acquiring such links is a great way for Google to learn about your site’s pages. If you’d like to let Google know about a brand new site of yours, however, you’re welcome to jumpstart the process by submitting your site via the AddURL link here. No need to list all your pages; just the top level one, like www.example.com, is sufficient.

To let Google know about all of your pages, you can submit an XML Sitemap. Whether your site is old or new, we highly recommend that you create an XML Sitemap, which can help Google and other search engines better find and understand the pages on your site. These are especially useful for sites that feature dynamic content, have a large set of new and updated pages, or have few incoming links. You can create a general XML Sitemap in minutes, as well as XML Sitemaps for other types of information like video; learn more from the links listed here.

Note that this isn’t the same thing as an HTML, or user-visible, sitemap. HTML sitemaps can complement XML Sitemaps, and can help people quickly discover and navigate to content deep within your site.

We realize that you may have some pages that you don’t want Google to access. For instance, you may not want Googlebot, our automated page-fetching robot, accessing documents with private information or pages you’re simply not ready to show the world. In cases like this, you’ll want to use one of two reliable methods for blocking us from this content: a “Disallow” line in your robots.txt file, or a noindex meta tag on each page you don’t want indexed.
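As a rough sketch, a minimal XML Sitemap follows the sitemaps.org protocol and looks something like the fragment below. The URL and date are placeholders, not real entries:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per page you want search engines to know about -->
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2009-04-01</lastmod>
  </url>
</urlset>
```

Once created, the file is typically placed at the root of the site (for example, www.example.com/sitemap.xml).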
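The two blocking methods mentioned above might look like this; the /private/ path is purely an illustrative placeholder:

```
# robots.txt — served from the root of the site (e.g. www.example.com/robots.txt)
User-agent: Googlebot
Disallow: /private/
```

```html
<!-- placed in the <head> of each individual page you don't want indexed -->
<meta name="robots" content="noindex">
```

The robots.txt rule stops Googlebot from fetching anything under /private/, while the meta tag lets a page be fetched but asks that it not appear in search results.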

Google for Webmasters Tutorial: Discoverability

11 thoughts on “Google for Webmasters Tutorial: Discoverability”

  • April 5, 2009 at 9:38 am
    Permalink

    I agree that you can block Googlebot from indexing a page using the robots meta tag. But if you try to block Googlebot with a robots.txt "Disallow" directive, that does not prevent pages from being indexed if they are linked from other sites. Therefore I recommend using the "noindex" directive rather than "Disallow" in robots.txt as well.

    Reply
  • May 13, 2009 at 5:14 pm
    Permalink

    Very helpful information.

    Reply
  • July 12, 2009 at 5:17 am
    Permalink

    Oh, very clever — NOT! How on earth are we supposed to read that robots.txt Disallow line, i.e. anything in green? Also, the quality of the picture is really bad; it is easy to produce better quality on YouTube.

    It makes me wonder about the thoroughness and calibre of the Google staff who put these 'tutorial' movies up.

    Reply
  • June 23, 2010 at 7:19 am
    Permalink

    SEO Optimization is critical. Thanks for the info…

    Reply
  • October 12, 2010 at 2:42 pm
    Permalink

    🙂

    Reply
  • October 27, 2010 at 3:34 am
    Permalink

    Nice video!

    Reply
  • February 23, 2011 at 5:58 pm
    Permalink

    I do not think that Google is giving the "addurl" function any attention anymore.

    Reply
  • April 8, 2011 at 8:06 pm
    Permalink

    Search engine optimization is a critical, core investment that only a few corporate houses in the tours and travel industry can afford.

    Reply
  • May 11, 2012 at 2:48 am
    Permalink

    Excellent presentation and simple to understand!

    Reply
  • October 24, 2012 at 3:32 am
    Permalink

    show xml site map

    Reply
  • January 29, 2015 at 10:06 pm
    Permalink

    My site was not discovered by Google.

    Reply

Leave a Reply

Your email address will not be published. Required fields are marked *