Search-friendly website architecture is important to ensure that search engines can access your site’s content.
First, a bit of background on how search engines build the indices from which they derive the website listings displayed on their results pages. Google and the other search engines don’t have teams of people archiving every single page on the Web. Instead, they rely on programs called “spiders” – automated robots that follow links from page to page and store the content they find in each site’s code in the engines’ databases.
Making sure these spiders can access all of the content on your site is extremely important for search engine optimization (SEO). However, there are a number of website architecture mistakes that can make large portions of your site unavailable to the search engines’ spiders.
Here are four of the most common mistakes, as well as tips on how you can avoid them.
1. Too much content in image or script files.
Search engine spiders read text; they can’t reliably interpret words embedded in images or generated by scripts, so content stored in these formats may be invisible to them. The solution is to duplicate the information stored in these alternative formats with text versions – such as putting descriptive, keyword-rich text in the alt attribute for your site’s header graphic. Try using a tool such as Webconf’s Search Engine Spider Simulator to observe what the spiders see after arriving on your site. If you notice that chunks of content are missing, either provide the excluded information as text elsewhere on the page or use your site’s robots.txt file – which gives crawling instructions about your site to search engines – to keep spiders out of script-heavy areas, and link to specially designed, text-based pages you’ve created to provide them with the same information.
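As a sketch of the alt-text approach (the filename and wording here are hypothetical), a header graphic might be marked up like this:

```html
<!-- The image carries the visual branding; the alt text gives
     spiders an indexable, keyword-rich description of it. -->
<img src="header-logo.png"
     alt="Webb Weavers Consulting - website design and SEO in Ventura, CA">
```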
2. Deep vs. shallow navigation.
It can be problematic when a site’s navigation becomes too deep. Because search engine spiders move between the pages of your site through the links you’ve created, it’s important to make this movement as easy as possible for them. If your navigation structure is deep, meaning certain pages can be accessed only after a long string of sequential clicks, you run the risk that the spiders won’t crawl deeply enough into your site to index all of your pages appropriately.
The solution is to implement a “shallow” navigation structure, in which every single page on your website can be accessed by both human visitors and search engine spiders within two to three clicks. You can accomplish this task by breaking up your navigation structure into sub-categories or incorporating additional internal links.
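A shallow structure can be as simple as a home-page menu that links to each main category, with each category page then linking directly to the pages beneath it – putting everything within two clicks. A hypothetical sketch in HTML:

```html
<!-- Home page navigation: one link per category. Each category
     page links straight to its sub-pages, so no page on the site
     sits more than two clicks from the home page. -->
<nav>
  <ul>
    <li><a href="/services.html">Services</a></li>
    <li><a href="/portfolio.html">Portfolio</a></li>
    <li><a href="/blog.html">Blog</a></li>
    <li><a href="/contact.html">Contact</a></li>
  </ul>
</nav>
```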
3. Inconsistent linking practices.
As you build these links, you’ll want to be careful about how you name them. Again, because the search engines can’t apply human judgment to see what you meant to do, their spider programs may index the URLs “www.yoursite.com/contact.html” and “yoursite.com/contact.html” as two separate pages – even though both links direct visitors to the same location.
To prevent these indexing errors, be consistent in the way you build and name links. If you’ve made this mistake in the past, use 301 redirects to let the search engine spiders know that both the “www” and “non-www” versions of your URLs are the same.
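On an Apache server, for example, a site-wide 301 redirect from the non-www to the www version can be set up in the .htaccess file (this assumes mod_rewrite is enabled; substitute your own domain for the placeholder):

```apacheconf
RewriteEngine On
# Permanently redirect yoursite.com/... to www.yoursite.com/...
RewriteCond %{HTTP_HOST} ^yoursite\.com$ [NC]
RewriteRule ^(.*)$ http://www.yoursite.com/$1 [R=301,L]
```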
4. Failure to include a site map.
As you improve the accessibility features of your website’s architecture, make sure you have a site map in place. This file provides the spiders with an accessible reference of all the pages on your site, allowing indexing to proceed correctly.
If your website is built with WordPress, Joomla or a similar CMS platform, you should be able to install a plugin that automatically generates a site map page for you. If not, creating a site map can be as simple as building a single HTML page with links to all of your other pages and submitting it to the search engines for consideration.
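For the search engines themselves, the standard machine-readable format is an XML sitemap following the sitemaps.org protocol. A minimal example (the URLs here are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.yoursite.com/</loc></url>
  <url><loc>http://www.yoursite.com/contact.html</loc></url>
</urlset>
```

Most CMS sitemap plugins generate exactly this kind of file, which you can then submit through the search engines’ webmaster tools.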
Would you like to learn more about search engine optimization? Click here for recommended SEO resources.
Would you like support for your website? Webb Weavers Consulting based in Ventura, CA can help! Contact Debbie today to schedule a website consultation.