
> Webcrawlers have a pattern too.

Nope. The ones that have a pattern are the ones that play by the rules. It's extremely easy to write a web crawler that performs random actions (download torrents, seed data, run random bing/yahoo/google/duckduckgo searches and click on 25 random indexed results, etc.).

For a sniffing party to understand what is going on, it would (probably) have to take a Bayesian approach, which would require more data than one could generate in 100 years.
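As a toy illustration of what that Bayesian test boils down to (untested Ruby; the Gaussian timing models and every parameter are invented for the example): compare the likelihood of the observed request gaps under a "bot" model and a "human" model, then update the prior odds.

    # Toy only, not a real deanonymization tool: both timing models
    # and their parameters are made up for illustration.
    def gaussian_log_pdf(x, mean, sd)
      -0.5 * Math.log(2 * Math::PI * sd * sd) - ((x - mean)**2) / (2.0 * sd * sd)
    end

    def log_likelihood(gaps, mean, sd)
      gaps.sum { |g| gaussian_log_pdf(g, mean, sd) }
    end

    gaps = [2.1, 14.8, 3.3, 40.2, 7.6]          # observed seconds between requests

    log_prior_odds = Math.log(0.5 / 0.5)        # agnostic prior: bot vs. human
    ll_bot   = log_likelihood(gaps, 5.0, 1.0)   # naive bot: fast and regular
    ll_human = log_likelihood(gaps, 20.0, 15.0) # human: slow and erratic

    # Bayes: posterior odds = prior odds * likelihood ratio (here in log space).
    log_posterior_odds = log_prior_odds + ll_bot - ll_human
    puts "log posterior odds (bot vs. human): #{log_posterior_odds.round(2)}"

The more the crawler's gaps look like draws from the "human" model, the closer the likelihood ratio stays to 1, and the more observations the sniffer needs before the posterior says anything useful.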

For an experienced developer, writing such a crawler is trivial (e.g. Ruby + Mechanize + Nokogiri).
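For instance, an untested sketch along those lines, assuming the mechanize gem (Nokogiri does the parsing underneath), a local dictionary file for random queries, and DuckDuckGo's plain-HTML endpoint:

    require 'mechanize'

    # Assumes a wordlist at this path; any list of random query terms works.
    WORDS = File.readlines('/usr/share/dict/words').map(&:strip)

    agent = Mechanize.new
    agent.user_agent_alias = 'Mac Safari'

    loop do
      # Search DuckDuckGo's plain-HTML endpoint for a random word...
      page = agent.get('https://duckduckgo.com/html/', q: WORDS.sample)

      # ...then follow up to 25 of the page's links in random order,
      # sleeping a random interval so the pattern stays irregular.
      page.links.sample(25).each do |link|
        begin
          link.click
        rescue StandardError
          next # dead or non-HTML links are fine; the traffic is the point
        end
        sleep rand(1..30)
      end
    end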



https://blog.torproject.org/blog/bittorrent-over-tor-isnt-go...

> download torrents, seed data,

I'm not sure that part of your idea is good.

The random crawler is what I had in mind, but I doubt I'd implement it, simply because I don't have a need to use Tor beyond curiosity.


Yeah, okay: torrents aside, the idea is to generate random bursts of data transmission, and that's easy to do.
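An untested sketch using only Ruby's standard library; the URL is a placeholder for any large file on a server that honors Range headers:

    require 'net/http'
    require 'uri'

    uri = URI('https://example.com/large-file.bin') # placeholder target

    10.times do
      bytes = rand(100_000..5_000_000)    # burst size: ~100 KB to ~5 MB
      req = Net::HTTP::Get.new(uri)
      req['Range'] = "bytes=0-#{bytes}"   # fetch only a random-sized prefix
      Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
        http.request(req)
      end
      sleep rand(0.5..60.0)               # random quiet period between bursts
    end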


What is that even supposed to solve? You're just making random network connections... it does nothing to obscure origin or destination. It's not the target traffic, so it gets filtered out, and timing attacks still work on the target data.
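To see why, a synthetic toy (untested Ruby; random timestamps, a 0.2 s shift standing in for circuit latency, an arbitrary bin width): bin packet times on each side of the circuit and correlate the two series.

    # Synthetic data only, not an attack tool.
    def bin_counts(timestamps, width, n_bins)
      counts = Array.new(n_bins, 0)
      timestamps.each do |t|
        i = (t / width).floor
        counts[i] += 1 if i < n_bins
      end
      counts
    end

    def pearson(a, b)
      n  = a.size.to_f
      ma = a.sum / n
      mb = b.sum / n
      cov = a.zip(b).sum { |x, y| (x - ma) * (y - mb) }
      cov / Math.sqrt(a.sum { |x| (x - ma)**2 } * b.sum { |y| (y - mb)**2 })
    end

    target = Array.new(200) { rand(0.0..60.0) }  # the flow being traced
    cover  = Array.new(100) { rand(0.0..60.0) }  # random "burst" cover traffic

    entry_counts = bin_counts(target + cover, 1.0, 60)             # entry side
    exit_counts  = bin_counts(target.map { |t| t + 0.2 }, 1.0, 60) # exit side

    puts "entry/exit correlation: #{pearson(entry_counts, exit_counts).round(3)}"

The target flow's timing structure shows up in both binned series, so they still come out clearly positively correlated; the independent cover traffic just adds noise that averages out.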



