JavaScript and Search Engine Spiders

Historically, JavaScript has been 'hidden' from search engines: their spiders did not understand it and could not interpret even the most basic script.

All of the major search engines seek to understand a web page as well as a human visitor could, but with an index of billions of individual pages, human review is an impossibility; this is why search engines build algorithms. More recently, the forward-thinking engines have been trying harder to understand JavaScript.

Search Engine 'Friendly' JavaScript Links and Redirects

Strictly speaking, no JavaScript is 'search engine friendly'. At the time of writing, engines such as Google and Yahoo are still getting to grips with basic semantically-rich HTML. That said, some JavaScript is much easier to interpret than the rest, so a redirect or link containing a recognisable URI is likely to be followed:

<a onclick="location.href='http://www.example.com'">Example</a>
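
A plain script redirect works the same way. As a minimal sketch (the markup here is illustrative, not a guaranteed pattern), the complete URI sits in the source as a literal string, which is exactly what a spider that parses rather than executes the script can extract:

<script type="text/javascript">
// The complete, hard-coded URI is visible to a spider reading this source as text
window.location.href = 'http://www.example.com/';
</script>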

Such links and redirects don't seem to carry as much weight as those written in plain HTML, but there is hope yet for sites with a heavy reliance on client-side scripts.

Just as telling, major spiders such as Google's Googlebot have been retrieving some external script files. In the case of the link-hungry Google, references to any complete URI seem to get spidered at some point, even when buried within a PDF file or Word document.
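
To illustrate what a 'complete URI' means in practice (the filename and paths below are hypothetical), a hard-coded address in an external script file can be harvested as plain text, while one assembled at runtime can only be recovered by actually executing the code:

<script type="text/javascript" src="/scripts/menu.js"></script>

// Inside menu.js:
// A complete URI - readable by a spider as a literal string
var aboutPage = 'http://www.example.com/about/';
// A URI assembled at runtime - much harder for a spider to recover
var contactPage = 'http://www.example.com/' + 'contact/';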
