So currently this is based on taking a sizable rolling window of textual posts for each topic and running it through our synt library (http://github.com/tawlk/synt), which does sentiment classification with a Naive Bayes classifier trained over several iterations on a couple million samples. This setup averages about 80% accuracy against new labeled sample sets.
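To give a rough sense of the technique, here is a toy Naive Bayes sentiment classifier written from scratch. This is a minimal sketch of the idea, not synt's actual code, and the training samples are hypothetical; the real system trains on millions of posts.

```python
# Toy Naive Bayes sentiment classifier with add-one (Laplace) smoothing.
# Illustrative only -- not synt's implementation, and the data is made up.
import math
from collections import Counter, defaultdict


def tokenize(text):
    # Simplest possible tokenization: lowercase, split on whitespace.
    return text.lower().split()


class NaiveBayes:
    def train(self, samples):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> document count
        self.vocab = set()
        for text, label in samples:
            self.label_counts[label] += 1
            for w in tokenize(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def classify(self, text):
        total = sum(self.label_counts.values())
        best, best_score = None, float("-inf")
        for label in self.label_counts:
            # Log prior plus summed log likelihoods, with add-one smoothing
            # so unseen words don't zero out a class.
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in tokenize(text):
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best


nb = NaiveBayes()
nb.train([
    ("i love this great phone", "pos"),
    ("what a wonderful happy day", "pos"),
    ("this is terrible and awful", "neg"),
    ("i hate this horrible thing", "neg"),
])
print(nb.classify("a great wonderful day"))  # -> pos
```

With a couple million training samples instead of four, the same scheme (plus better feature extraction) is what gets you into the ~80% accuracy range.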
However, we only have the server resources to maintain a large rolling window for a limited number of topics (currently the ones in the sidebar).
When you searched for "Barack Obama", the sentiment score was based only on what your browser collects live, on the fly; when you searched "obama", you got sentiment calculations based on our server-collected rolling data set, which is far more stable.
Both are fairly accurate, but they differ in the size of the window they average over.
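The browser-side and server-side paths can be pictured as the same rolling average over different window sizes. A hypothetical sketch (the class and sizes are illustrative, not our actual code):

```python
# Hypothetical rolling window of per-post sentiment scores. The live
# browser-side path and the server-side path differ mainly in window size.
from collections import deque


class RollingSentiment:
    def __init__(self, window_size):
        # deque with maxlen drops the oldest score automatically.
        self.window = deque(maxlen=window_size)

    def add(self, score):
        self.window.append(score)

    def average(self):
        return sum(self.window) / len(self.window) if self.window else 0.0


live = RollingSentiment(window_size=3)  # e.g. small, live-collected window
for s in [1.0, -1.0, 1.0, 1.0]:
    live.add(s)
print(live.average())  # averages only the 3 most recent scores
```

A small window reacts quickly but swings with every burst of posts; the larger server-side window smooths those swings out, which is why its scores are more stable.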
As topics become more popular, and as server capacity allows, we automatically migrate them server side for more aggressive collection, which yields more reliable sentiment.
We also only provide scores and reach assessments for topics where we possess enough server-side data to justify it.