A New Standard: Search Engine Giants Adopt the XML Sitemap Protocol

In 2005, Google launched the Sitemap 0.84 protocol, which used the XML format. Google's new sitemap protocol was developed in response to the increasing size and complexity of websites. Business websites often contained hundreds of products in their catalogues, while the popularity of blogging led webmasters to update their material at least once a day, not to mention popular community-building tools like forums and message boards. As websites got bigger and bigger, it was difficult for search engines to keep track of all this material, and they sometimes "skipped" information as they crawled through these rapidly changing pages.
Through the XML protocol, search engines could track URLs more efficiently, optimizing their crawling by placing all the information in one file. An XML sitemap also records how frequently a particular page is updated and the last time any changes were made. XML sitemaps were not, as some people thought, a tool for search engine optimization. They do not affect ranking, but they do allow search engines to make more accurate rankings and searches by providing the data a search engine needs and putting it in one place, which is quite handy given that there are millions of websites to plough through.
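The file format itself is simple. As a rough sketch, a minimal Sitemap 0.9 file listing each URL together with its last-modified date and change frequency could be generated with a short script like the one below (the URLs and dates are hypothetical placeholders):

```python
# Sketch: build a minimal Sitemap 0.9 XML document.
# The example URL, date, and change frequency are made up for illustration.
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(entries):
    """entries: iterable of (loc, lastmod, changefreq) tuples.
    Returns the sitemap as an XML string."""
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for loc, lastmod, changefreq in entries:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc            # the page's full URL
        ET.SubElement(url, "lastmod").text = lastmod    # last change, W3C date format
        ET.SubElement(url, "changefreq").text = changefreq  # e.g. daily, weekly
    return ET.tostring(urlset, encoding="unicode")

xml_out = build_sitemap([
    ("http://www.example.com/", "2008-01-01", "daily"),
])
```

A real sitemap would list every URL on the site, and the finished file is what gets submitted to the search engines.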

To encourage other search engines to adopt the XML protocol, Google published it under the Attribution/Share Alike Creative Commons license. Its efforts paid off. Recently, Google happily announced that Yahoo and Microsoft had agreed to "officially support" the XML protocol, which has now been updated to the Sitemap 0.9 protocol, and to jointly sponsor www.sitemaps.org, a site set up to explain the protocol. This is good news for website owners, and an applaudable sign of cooperation between known competitors.

The shared recognition of the XML protocol means that website developers no longer need to create different types of sitemaps for the different search engines. They can create one file for submission, and then update it when they have made changes on the site. This simplifies the whole process of fine-tuning and expanding a website.

Through this move, the XML format will soon become a standard feature of website creation and development. Webmasters themselves have begun to see the benefits that this file provides. Search engines rank a page according to the relevance of its content to particular keywords, but until the XML format, there were instances when that content was not properly picked up. It was often frustrating for webmasters to realize that their efforts to build a website went unseen. Blogs, additional pages, and multimedia files take hours to create. Through the XML file, those hours will not be wasted: the content will be seen by the three leading search engines, Google, Microsoft, and Yahoo.

In a recent move, Ask.com has also begun to support XML sitemaps, and an update to the Sitemaps protocol now makes it possible to tell all search engines the location of your XML sitemap by placing an entry like this in your robots.txt file:
Sitemap: http://www.mysite.com/sitemap.xml
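A crawler (or a curious webmaster) can pull these declarations straight out of a robots.txt file. The short sketch below parses a sample robots.txt body for `Sitemap:` lines; the file contents shown are hypothetical:

```python
# Sketch: extract Sitemap directives from a robots.txt body.
# The sample robots.txt content is made up for illustration.
robots_txt = """\
User-agent: *
Disallow: /private/

Sitemap: http://www.mysite.com/sitemap.xml
"""

def sitemap_urls(robots_body):
    """Return the URLs declared by 'Sitemap:' lines (the field name is
    matched case-insensitively, per common robots.txt practice)."""
    urls = []
    for line in robots_body.splitlines():
        if line.lower().startswith("sitemap:"):
            # Split on the first colon only; the rest of the line is the URL.
            urls.append(line.split(":", 1)[1].strip())
    return urls
```

Because every major engine reads robots.txt anyway, declaring the sitemap there saves you from submitting it to each engine separately.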

ESTABLISHED GOOGLE PAGERANK DOMAINS

One of the easiest and fastest ways to start an online business or to make money with AdSense is not creating websites on brand-new domain names, but picking up older domain names that have already been spidered and indexed by the search engines for months or years, just waiting for you to pick them up and start making money. It makes sense, doesn't it? Let's say you were starting your own store. Where would you be more successful: building a store in the middle of nowhere in the Mojave desert, or leasing space in a highly trafficked, crowded suburban mall?

There are tons of "highly trafficked areas" on the internet in the form of expired or expiring domain names, looking for new owners ready to take advantage of visitors that are already arriving. Every day over 20,000 domain names are abandoned and expire for one reason or another. Many of these domain names had websites that were listed in Yahoo, MSN, DMOZ, Google, and many other valuable directories.

Thousands of valuable domain names with websites were created and then forgotten, many with hundreds of backlinks pointing to them, and are abandoned every day, just waiting for someone to find them. Each week thousands of domain names expire and become available again.

These expired domain names are names that were previously registered, but where the registration has not been renewed, or where the registrant has defaulted on payment. These unclaimed domain names are then repossessed and made available for anyone to register again. Many such domains are already indexed by search engines and have established backlinks, Google PageRank, and traffic. By buying such a name, you can either build your own website on it or simply sell links on the page.

For example, a single link on a Google PR5 domain can be worth anything from $30 to $60 or more per month. It takes quite a long time to achieve a PR4/5/6/7 ranking. If you have other websites that aren't developed or indexed, this is your chance to get indexed by all major search engines within days, simply by adding a text link on such a domain pointing to your site.

How Do You Check PageRank?

Why do we need to know the PageRank of a site? Certainly if we want to buy a domain, so that we are not fooled by a faked or false PageRank. The fastest way to check PageRank is to install the Google Toolbar (toolbar.google.com). It can be customized to take up minimal space, and it fits nicely at the top of your browser. There are several methods of spotting faked PageRank.
  1. One simple way is to check how many backlinks the site has. If a site has no (or very few) Yahoo backlinks, it should not have a high PageRank. For example, a website may claim a PageRank of 5+, but when you check, it shows no or very few backlinks (100 is very low for a PR5 website).
  2. Another check is to use Google's info:domainname query.
