DeSopa Circumvents SOPA — Despite its Anti-Circumvention Measures (brajeshwar.com)
24 points by Brajeshwar on Dec 31, 2011 | hide | past | favorite | 16 comments


How many websites does a typical Internet user visit in a year? If you discover ten new websites per day, that's fewer than 4,000 sites a year. Let's be generous and call it 40K sites per year. It is absolutely trivial to have the client computer handle DNS for these sites. Maybe that's the way the industry pushes back on ignorant politicians: Obsolete their law before it can even affect the system. Decentralize DNS and put it in the hands of every single Internet user and every single device connected to the Internet.


Ah, see. Now we're getting somewhere.

But what about load balancing and CDNs?

DNS was not designed for those purposes.

What if users are not completely ignorant of the geographical locations of the IP addresses they choose to store and use, and if we allow them to make their own choices?

Determining which is the closest server or the most responsive is not rocket science.

A good HOSTS file coupled with a good local cache is, in my experience, faster than any DNS service. But it's relatively rare to see users setting themselves up this way. My guess is that is not due to difficulty; it's due to ignorance. Maybe even peer pressure.
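For anyone who hasn't seen one, the setup described above is nothing more than a plain text file mapping names to addresses, consulted before any DNS query is sent. A minimal sketch (the addresses below are illustrative placeholders, not claims about where these sites actually live):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
# name lookups listed here never leave the machine
93.184.216.34   example.com www.example.com
203.0.113.10    some-site-you-visit-daily.example
```

Every mainstream OS honours this file out of the box; no extra software is needed.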

The "experts" will tell you to use a shared cache, exposing you to all manner of security flaws.

Ask yourself how many lookups the average user makes in a day to the DNS?

How many of those lookups are for the same sites, day after day?

How many times do the IP addresses for these sites actually change?

Finally, ask yourself how many of those lookups are for IP addresses not attached to any website you will ever visit (i.e., they are for serving advertisements, behavioural tracking elements, etc).

You can also restrict queries to authoritative servers only. This is something I threw together as an experiment. For me, it works beautifully.
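To sketch what "querying authoritative servers only" means at the packet level, here is a minimal query builder with the RD (recursion desired) bit cleared, so the server you ask answers only from its own data rather than recursing on your behalf. This is not the commenter's actual script, just an illustration; the server address in the comment is a placeholder.

```python
import struct

def build_query(name: str, qtype: int = 1, txid: int = 0x1234) -> bytes:
    """Build a raw DNS query packet with the RD bit cleared (non-recursive).

    qtype=1 is an A record; the flags word is all zeros, so QR=0 (query)
    and RD=0 (do not recurse for me).
    """
    flags = 0x0000
    # header: id, flags, QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT=0
    header = struct.pack(">HHHHHH", txid, flags, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

# Sending it (placeholder authoritative server address):
# import socket
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# s.sendto(build_query("example.com"), ("203.0.113.53", 53))
```

Because RD is clear, a compliant authoritative server will answer or refer, never recurse, which is exactly the behaviour the comment is advocating.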

Then the "experts" will tell you we need DNSSEC, to counter the problem posed by using shared caches.

The impetus for its resurgence is the use of shared caches and "cache poisoning". Do we have to use shared caches? No.

DNSSEC has become like security theater - the simple fact is that no one is accountable for the information in the current DNS except the site owners themselves.

All the DNSSEC proponents can do is pray that more people will start using it. It's a cash cow for some consultants.

The other simple fact is that the most important TLD servers do not change IP addresses very often. They are more or less static.

Anyone can download a copy of those numbers and store it.

Does it matter if each individual record is signed? It only matters if you like to do recursion. Maybe what matters more is that the file you download is itself signed.

For the DNSSEC system to work, to restore some confidence in shared caches (which may potentially be censored by SOPA-like legislation), the most important people who need to use DNSSEC are the authoritative servers for the websites themselves.

Will they undertake this? Weighed against the triviality and increased security to the user of using a local cache and HOSTS file, thereby avoiding cache poisoning altogether, is anyone going to bother with learning DNSSEC?

DNSSEC is a huge burden. Unless of course you offload responsibility to someone else. Cha ching. But no one is going to be more secure using something they delegate to someone else and cannot themselves understand.

To someone who wants to learn, I can explain a HOSTS file and how to do non-recursive lookups far more easily than I can explain DNSSEC.

We have wider acceptance of EDNS. And there are people advocating TCP. Obviously some people really want DNSSEC to take off. Why?

If the Internet can handle the load of EDNS and TCP, all for a simple number lookup that otherwise fits in 512 bytes and requires no connection setup/breakdown, querying authoritative servers instead of doing the inherently insecure recursion routine with other people's caches is not going to bring the Internet to its knees.

algoshift, you are absolutely correct. Decentralisation is the way to go and, imo, is in the true spirit of the Internet.


He's working too hard (but it is good for publicity). Per the writeup, DeSopa uses a foreign DNS service (which would not be under SOPA jurisdiction) to look up IP addresses. My off-the-shelf firewall/router (Buffalo/DD-WRT) can set a fixed DNS IP address. The simple and transparent way to implement DeSopa is to set that to a foreign DNS service.

Real hackers run their own caching DNS servers, of course, and thus have many more options for bypassing SOPA. ;-)
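For the curious, running your own caching resolver is only a few lines of configuration. A sketch using dnsmasq (the upstream address is a TEST-NET placeholder, not a recommendation of any real service):

```
# /etc/dnsmasq.conf -- minimal local caching resolver
no-resolv               # ignore /etc/resolv.conf; use only the servers below
server=203.0.113.53     # forward cache misses to an upstream of your choosing
cache-size=10000        # keep that many answers locally
```

Point your machine's resolver at 127.0.0.1 and repeated lookups never leave the box.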


True, but not everyone who will be affected by SOPA is a hacker. After (and if) SOPA is passed, releasing this sort of program will be illegal. The point was to illustrate to congress that SOPA is ineffective, and hopefully turn the conversation to something more workable.


Ahh, but my main point is that the capability is baked into every firewall/router, and it does not take a hacker to configure it to bypass SOPA. It would be easy to write instructions that "anybody's mother" could follow to point her router at a non-SOPAed DNS server.

If SOPA is passed, it would make firewall/routers illegal unless the manufacturers removed the ability to set the DNS server, which is clearly nonsense and unenforceable.


You may be surprised by how challenging it can be for "anybody's mother". Anyway, DeSopa was a quick way to prove a point. I expect that if SOPA passes, offshore DNS services will advertise their IPs for manual setup (resolv.conf, Windows TCP/IP settings, etc.), and local applications will be developed that circumvent SOPA in this and other ways.
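On a Unix-like system the manual setup mentioned above amounts to two lines. A sketch (the addresses are TEST-NET placeholders standing in for a hypothetical offshore service):

```
# /etc/resolv.conf
nameserver 203.0.113.53
nameserver 198.51.100.53
```

On Windows the equivalent is the "Use the following DNS server addresses" field in the TCP/IP properties dialog.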


Yup, and this is exactly why MAFIAAFire got a takedown notice, even though you could "explain to your mom" how to implement a HOSTS file.


gvb is correct. Local cache is the way to go. Some have known this for a long time. SOPA could be a blessing in disguise if the meme spreads. Cache poisoning becomes irrelevant.

If SOPA neuters search engines, maybe users will resort to their own scans of port 80 (or whatever we designate to the "public" port) to find websites. The legality of scanning, and what is and is not public, may become a hot issue. We may get some legal clarity.
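A scan of the sort described above is, at its simplest, just a TCP connect attempt per host. A minimal sketch (and to the comment's point about legality: only probe hosts you are permitted to probe):

```python
import socket

def port_open(host: str, port: int = 80, timeout: float = 1.0) -> bool:
    """TCP connect scan of a single port: True if something is listening.

    A completed three-way handshake means a server accepted the
    connection; a refusal or timeout raises OSError and means it did not.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Sweeping a range of addresses is then just a loop over `port_open(addr)` for each candidate address.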

There's a decent comment on the blog post. This is not rocket science to understand. This could (unforeseeably) spell the end for ICANN and the vast domain-squatting business.

A user can attach any name he wants to an IP number.


If an IP hosts multiple sites, you'd need to figure out the correct HTTP Host header to send it, unless you use a static IP for every site. Most web hosts use name-based virtual hosts: https://httpd.apache.org/docs/2.2/vhosts/
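Concretely, "figuring out the Host header" just means naming the site you want inside the request itself, so a shared IP can route it to the right virtual host. A minimal sketch of such a request:

```python
def build_request(host: str, path: str = "/") -> bytes:
    """Build an HTTP/1.1 GET aimed at a shared IP.

    The Host header is what a name-based virtual host setup uses to pick
    which site on that IP should answer; the TCP connection itself only
    carries the IP and port.
    """
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

# Usage: open a TCP socket to the server's IP (found however you like --
# HOSTS file, local cache) and send build_request("example.com").
```

This is why a bare IP is not always enough: without the right Host value, a shared server may hand back its default site or an error.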

We implement both name- and ip-based vhosts, but only for sites with dedicated IPs (read: sites that need a dedicated IP for SSL.) You can do SSL on a shared IP, but it gets more complicated. Cloudflare does it by using a cert with multiple "certificate subject alt names", but security / site spoofing would still be an issue if you're ditching SSL at the same time (which would be a good idea).

This is the /real/ web 2.0...maybe even web 3.0: peer-to-peer hosting without ICANN or Verisign and co. We can do it, and people are working on building it right now. Reddit has Meshnet and NameCoin seems like a good idea. Now we just need a similar solution for SSL and a good way to host/update things. The future looks bleak for ISPs, CAs, registrars, and non-free countries.


There is a solution for SSL. Of course it's not SSL, it's more secure and it's faster.

You can set several "domain names" (hostnames) for a server.

There is a working prototype.

Seek with open mind and ye shall find.

I agree that what you allude to is the real web 3.0 but, imo, it's not accurate to call it "web 3.0" because it's more than just "the web". Lots more than just web servers will run on a properly constructed peer-to-peer platform.

The public "web" is for Google and Facebook, their marketers and massive surveillance.

The internet is for users.


Err, perhaps I'm missing something, but what does this comment mean? Anything?

You're new to Hacker News, so I'm not going to downvote this, but a piece of advice: this sort of vague, useless-without-context comment adds nothing to the dialogue and will be driven down before you can blink.


Yes, I agree that local cache is the way it will go, but I do not think that SOPA will be a blessing in disguise. In fact, I think the internet will begin to more closely resemble the television industry, with a large technical underground. Perhaps some P2P DNS service will become popular; however, if I recall correctly, the bill outlaws blacklist-evasion software.


SOPA does not have to pass to achieve the effect I'm suggesting. It will simply open people's eyes to the centralisation that is ongoing. The "single point of failure" will become evident and people will start to think.

No software is needed. Nor is a DNS. All modern PC OSes have the necessary capabilities built in.

To connect to a website all that is needed is the knowledge of an IP number, a port (almost always 80) and, optionally, a hostname.

Is it really possible to prohibit this knowledge?

Imagine a world where there are "forbidden" phone numbers. However no one is forbidden from dialling them. The sole prohibition is against telling anyone what they are.

This is what SOPA blacklisting purports to achieve.


Sounds like a plan :)


As soon as I saw this, I pinged the author to get it on Github. He did so like 9 days ago: https://github.com/TamerRizk/desopa

Also this is a repost: https://news.ycombinator.com/item?id=3374282 https://news.ycombinator.com/item?id=3382439


DeSOPA is a nice yet simple tool which I could recommend to others or modify if necessary should the need arise.



