I've looked at a lot of websites over the years and helped a lot of clients, and I've yet to meet anyone who is totally committed to making their site as successful as it could be.
This is particularly true when it comes to search engine optimization. Most online businesses make a half-hearted attempt to get better search engine rankings, but rarely implement more than one or two of the things that could really make their site a success.
Which of the following does your website have?
1. Unique Page Titles.
Take a look at a selection of web pages that rank highly in the search engine results for any search term and you'll notice they all have one thing in common: unique page titles. Denoted by the <title> tag in HTML, page titles tell the search engines what your web page is about, and are thought to be a critical factor in how search engines determine the order in which pages are displayed in the search engine results pages (SERPs) for a specific search term.
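If your pages are generated dynamically, a template can build a unique, descriptive title for every page automatically. Here's a minimal PHP sketch of the idea; the product and site names are made-up placeholders:

<?php
// Minimal sketch: build a unique, descriptive <title> for each page.
// $product and $siteName are hypothetical values your CMS would supply.
$product  = "Exercise Mini-Trampolines";
$siteName = "Example Fitness Store";

// Combine the page-specific keyword phrase with the site name.
$pageTitle = $product . " | " . $siteName;

echo "<title>" . htmlspecialchars($pageTitle) . "</title>";
?>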
2. Descriptive Keyword Links.
How do search engines find and index your site? They follow links from one page to another, indexing content as they go. If the links they follow contain keywords and phrases that are relevant to the page content, the search engines will boost the ranking of that page in their results. You should also use keywords and phrases in the site's navigation (menu), as those terms are (or should be) highly relevant for the page they link to.
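For example, compare a generic link with a keyword-rich one. A quick PHP sketch (the URLs and anchor text are hypothetical):

<?php
// Weak: the anchor text tells the search engines nothing about the target page.
echo '<a href="/page27.html">Click here</a>';

// Better: the anchor text contains the keyword phrase the target page is about.
echo '<a href="/mini-trampolines/">Exercise mini-trampolines</a>';
?>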
3. Keywords in Your On-page Copy.
If you want the search engines to know what your page is about, and rank it appropriately, you must scatter keywords throughout your on-page copy. Keywords in your copy establish your site's relevance for words searchers use when they're looking for your product or service.
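One rough way to sanity-check your copy is to measure how often a keyword phrase appears per hundred words. Here's a minimal PHP sketch; the sample copy is made up, and for multi-word phrases the figure is only an approximation, not a target to chase:

<?php
// Rough keyword density: occurrences of the phrase per hundred words of copy.
function keyword_density(string $copy, string $keyword): float
{
    $text  = strtolower(strip_tags($copy));
    $words = str_word_count($text);
    $hits  = substr_count($text, strtolower($keyword));
    return $words > 0 ? ($hits / $words) * 100 : 0.0;
}

$copy = "Our exercise mini-trampolines are sturdy, and each mini-trampoline folds flat.";
printf("%.1f%%\n", keyword_density($copy, "mini-trampoline"));
?>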
4. Clean, Accessible Website Design.
Messy, bloated HTML code, 404 errors, redirects, too many graphics, content hidden behind forms, and the like all hinder the search engines' ability to index your site. And if they can't index it, they can't rank it. Follow the W3C's accessibility guidelines when building your website. Better yet, ask your website designer what he or she knows about standards-compliant website design. If they can't answer, find someone who can.
5. Focused Site Topic.
It seems logical that the more focused your site is, the higher it will rank for related search terms. For example, a site devoted to the sale of exercise mini-trampolines will probably do better for the search term "mini trampolines" than a site that sells a broad selection of different exercise equipment. In addition, it makes sense to create specialty websites whenever possible. That way, you don't fall into the trap of trying to do too many things and ending up doing none of them well.
6. Relevant Incoming Links.
The number of other sites that link to your site's pages is important but the quality of those sites, and the text used in the link, carries much more weight with the search engines. For example, one relevant link from an "authority" site such as a .org, .gov, or a site that's proven itself as a reliable source, provides more value than several links from unrelated or "unproven" sites.
Of course, there are several other factors that go into determining a site's ranking (far too many to go into here), but these are some of the most important.
They're all easy to implement, so there's really no excuse for not taking advantage of them, especially if you want to make sure your site is found and visited by as many people as possible. Then all you'll need to worry about is getting those people to buy something from you on a regular basis.
And that's really what it's all about.
To your success,
Julia
Comment on this article at: http://www.juliahyde.com/mw/high-ranking-websites
About The Author:
Julia Hyde is an advertising copywriter and consultant specializing in search engine optimization, search engine marketing, and traditional advertising. She currently runs Creative Search Media, a full-service advertising and search engine marketing agency. You can contact Julia via her website at http://www.juliahyde.com
Friday, May 18, 2007
Factors Affecting PageRank
PageRank is a link analysis algorithm which assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references.
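To make the idea concrete, here is a minimal PHP sketch of the iterative calculation described in the original PageRank paper, run over a tiny made-up three-page link graph (the damping factor of 0.85 is the commonly cited value):

<?php
// Minimal PageRank sketch over a tiny hypothetical link graph.
// $links maps each page to the pages it links out to.
$links = [
    'A' => ['B', 'C'],
    'B' => ['C'],
    'C' => ['A'],
];

$d  = 0.85;                                      // damping factor
$pr = array_fill_keys(array_keys($links), 1.0);  // initial scores

// Each iteration, a page splits its score evenly among its outlinks.
for ($i = 0; $i < 50; $i++) {
    $next = array_fill_keys(array_keys($links), 1 - $d);
    foreach ($links as $page => $outlinks) {
        $share = $pr[$page] / count($outlinks);
        foreach ($outlinks as $target) {
            $next[$target] += $d * $share;
        }
    }
    $pr = $next;
}

print_r($pr);  // pages linked to by important pages end up with higher scores
?>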
Google might use the following to determine the ranking of your pages:
• The frequency of web page changes
• The extent of web page changes (substantial or shallow)
• The change in keyword density
• The number of new web pages that link to a web page
• The changes in anchor texts (the text that is used to link to a web page)
• The number of links to low-trust web sites (for example, too many affiliate links on one web page)
Domain Name Considerations:
• The length of the domain registration (one year vs. several years)
• The address of the web site owner, the admin and the technical contact
• The stability of the web site's data and its hosting company
• The number of pages on a web site (web sites must have more than one page)
How Google might rate the links to your web site:
• The anchor text and the discovery date of links are recorded
• The appearance and disappearance of a link over time might be monitored
• The growth rates of links as well as the link growth of independent peer documents might be monitored
• The changes in the anchor texts over a given period of time might be monitored
• The rate at which new links to a web page appear and disappear might be recorded
• The distribution rating for the age of all links might be recorded
• Links with a long life span might get a higher rating than links with a short life span
• Links from fresh pages might be considered more important
• If a stale document continues to get incoming links, it will be considered fresh
• Google doesn't expect new web sites to have a large number of links
• If a new web site gets many new links, this will be tolerated if some of the links are from authoritative sites
• Google indicates that it is better if link growth remains constant and slow
• Google indicates that anchor texts should be varied as much as possible
• Google indicates that a burst of link growth may be a strong indicator of search engine spam
For more details on PageRank, visit www.halfvalue.com and www.halfvalue.co.uk
http://www.selfseo.com/story-19296.php
Google's New Web Page Spider
Search engines use automated software programs that crawl the web. These programs, called "crawlers" or "spiders", go from link to link and store the text and keywords from each page in a database. "Googlebot" is the name of Google's spider software.
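The core mechanism is easy to sketch. The toy PHP fragment below (not how Googlebot actually works) fetches a page, harvests its links, and follows them, which is the essence of crawling. The start URL is a placeholder, and a real spider would also honor robots.txt and throttle its requests:

<?php
// Toy crawler sketch: fetch pages, harvest links, follow them.
$queue    = ['http://www.example.com/'];  // placeholder start URL
$visited  = [];
$maxPages = 10;                           // keep the sketch bounded

while (($url = array_shift($queue)) !== null) {
    if (isset($visited[$url]) || count($visited) >= $maxPages) {
        continue;
    }
    $visited[$url] = true;

    $html = @file_get_contents($url);
    if ($html === false) {
        continue;
    }

    // A real spider would store strip_tags($html) under $url in its index here.

    // Extract absolute links and queue the ones we haven't seen.
    preg_match_all('/href="(http[^"]+)"/i', $html, $matches);
    foreach ($matches[1] as $link) {
        if (!isset($visited[$link])) {
            $queue[] = $link;
        }
    }
}
?>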
Types of Google Spiders:
Many webmasters have noticed that there are now two different Google spiders that index their web pages. At least one of them is performing a complete site scan:
• The normal Google spider: 66.249.64.47 - "GET /robots.txt HTTP/1.0" 404 1227 "-" "Googlebot/2.1"
• The additional Google spider: 66.249.66.129 - "GET / HTTP/1.1" 200 38358 "-" "Mozilla/5.0"
Difference between these two Google spiders
The new Google spider identifies itself with a different user agent, "Mozilla/5.0", and, as the log line above shows, it requests pages over HTTP/1.1. The new spider might also be able to understand more content formats, including compressed HTML.
AdWords Spider
Google also uses a new crawler for its AdWords advertising system that automatically spiders and analyzes the content of advertising landing pages. Google tries to determine the quality of ad landing pages with this new bot. The content of a landing page feeds into the Quality Score that Google assigns to your ads. Google uses the Quality Score and the amount you are willing to pay to determine the position of your ads, so ads with a high Quality Score can rank higher even if you pay less than others for the ad.
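As a rough illustration, assume the often-quoted rule of thumb that ad position is ordered by bid multiplied by Quality Score (the advertisers, bids, and scores below are entirely made up):

<?php
// Hypothetical illustration: order ads by bid multiplied by Quality Score.
$ads = [
    ['advertiser' => 'A', 'bid' => 1.00, 'quality' => 9],
    ['advertiser' => 'B', 'bid' => 2.00, 'quality' => 3],
];

foreach ($ads as &$ad) {
    $ad['rank'] = $ad['bid'] * $ad['quality'];
}
unset($ad);

// Sort highest ad rank first.
usort($ads, fn($x, $y) => $y['rank'] <=> $x['rank']);

print_r($ads);
?>

Advertiser A outranks advertiser B while bidding half as much, because its Quality Score is three times higher.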
Purpose of the New Google Spider
Google hasn't revealed the purpose of the new spider yet. There are two main theories:
• The first theory is that Google uses the new spider to spot web sites that use cloaking, JavaScript redirects and other dubious web site optimization techniques. As the new spider seems to be more powerful than the old spider, this sounds plausible.
• The second theory is that Google's extensive crawling might be a panic reaction because the index needs to be rebuilt from the ground up in a short time period. The reason for this might be that the old index contains too many spam pages.
What does this mean to your web site?
If you use questionable techniques such as cloaking or JavaScript redirects, you might get into trouble. If Google really uses the new spider to detect spamming web sites, it's likely that those sites will be banned from the index. To obtain long-term results on search engines, it's better to use ethical search engine optimization methods.
Receive Email When Google Spiders Your Page
A search engine spider is an automated software program that locates and collects data from web pages for inclusion in a search engine's database. The name of Google's spider is "Googlebot". If you have a web site that allows you to use PHP code, your web pages can inform you when Google's spider has indexed them. A little piece of PHP code, like the sketch below, recognizes Googlebot when it visits a web page and informs you by email that Googlebot has been there.
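Here is a minimal sketch of such a notifier, assuming your host can send mail via PHP's mail() function. The e-mail address is a placeholder, and since user agents can be faked, a serious implementation would also verify the visitor by reverse DNS:

<?php
// Minimal sketch: e-mail yourself when Googlebot requests this page.
$userAgent = $_SERVER['HTTP_USER_AGENT'] ?? '';

if (stripos($userAgent, 'Googlebot') !== false) {
    $to      = 'you@example.com';  // placeholder address
    $subject = 'Googlebot visit';
    $message = 'Googlebot requested ' . ($_SERVER['REQUEST_URI'] ?? '/')
             . ' on ' . date('r');
    mail($to, $subject, $message);  // assumes mail() is configured on the host
}
?>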
For more details on Google's spider, visit www.halfvalue.com and www.halfvalue.co.uk
http://www.selfseo.com/story-19305.php