
I work at Google in web search, and I have a few comments about this discussion.

Firstly, I think this whole discussion approaches page speed the wrong way. The primary motivation for improving page speed should be user happiness, which drives the key metrics you actually care about: user acquisition, conversion, and revenue. The fact that it's a (small) ranking signal is a nice benefit, a cherry on top. Here is a nice case study about page speed and user metrics from Lonely Planet:

http://cdn.oreillystatic.com/en/assets/1/event/88/Performanc...

The conversion rate graph on slide 9 shows what pretty much every study of performance and user engagement finds. And here is one from Google search about the effect of page speed on searchers:

http://googleresearch.blogspot.co.uk/2009/06/speed-matters.h...

Secondly, there could be other issues with how the experiment was conducted on a technical level:

1. The experiment was about using JavaScript. Was Googlebot allowed to crawl the JS files? If robots.txt blocked crawling, that would have translated to less content visible to Googlebot, and so less content to index, which can easily result in a loss of ranking.

Note that even if the JS file itself was crawlable, it may have made an API call to a URL that was blocked, with the same end result in terms of indexing (a quick way to check both cases is sketched after this list).

2. Related to (1), we only started rendering documents as part of our indexing process a few months ago. When was this experiment conducted? If it was before full rendering was the norm, it's very likely we didn't index JS-inserted content that we would index now, which, again, may have resulted in lower ranking.
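
To illustrate point (1): here is a minimal sketch in Python, using the standard library's urllib.robotparser, of checking whether Googlebot is allowed to fetch a given JS file or an API endpoint it calls. The example.com URLs and paths are placeholders for your own site, and the standard parser is only an approximation of Googlebot's own robots.txt matching, not the real thing.

    # Rough check of whether Googlebot may fetch a URL, per your robots.txt.
    # example.com and the paths below are placeholders; Python's standard
    # robots.txt parser approximates, but is not identical to, Googlebot's
    # own matching rules.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # fetch and parse robots.txt

    for url in (
        "https://example.com/static/app.js",     # the script itself
        "https://example.com/api/content.json",  # an API call it makes
    ):
        allowed = rp.can_fetch("Googlebot", url)
        print(url, "->", "crawlable" if allowed else "blocked by robots.txt")

If any of these comes back blocked, the content that the JS inserts won't make it into the index, no matter how the page renders in a browser.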

For both of these, the Fetch and Render feature in Webmaster Tools gives you the definitive view of how our indexing system sees your content. Before running any such experiment, it's worth doing a few tests with Fetch and Render.


