Hacker News

There are something like 20-ish papers here. I have skimmed several of them without any clear answer to this question:

How exactly do you stop sockpuppetry while maintaining anonymity?

In order to ensure that a person cannot create more than 1 account, it would be necessary to observe some property of the person that is difficult to alter (such as their appearance, or their social security number) for uniqueness on the network.

Now if this network is truly anonymous, then it must be disjoint from every other network. And there is at least one other network ("real life"), so whatever this property is, must be something that an adversary in the real-life network would not be able to compare to the anonymous network.

And so certainly, it must not be some property that a person in the real-life network could observe by following you around or ruffling through your drawers and then compare to the answer you gave to sign up for the anonymous network. So it must be something in your head (like a password).

But if it is in your head, isn't it easily changed, and this property would be exploitable to create sockpuppets?

I must have missed something fundamental here, possibly in terminology. Can somebody who is current on the research here enlighten me?



I am current on this research.

The group for anonymous communication in Dissent is formed using unspecified means. It could be everybody who signed up in the last month, obtained a credential from some authority, has a key within the web of trust for the group, etc.

Some of these methods obviously allow the adversary to create Sybils (aka sockpuppets). The ones that don't may not provide anonymity about who is in the group, but the protocol will provide anonymity for who said what during the group communication. This is still extremely valuable. Consider voting as an example: the group is known, but individual vote anonymity matters.

If the group formation mechanism does allow Sybils, that still doesn't violate anonymity. For a message from an honest member, the adversary cannot tell which honest member it came from. It also doesn't violate the accountability of the protocol - any disruption will be attributable to some Sybil, who will be punished.
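For context on how "who said what" can stay hidden even when the group roster is public: Dissent's core primitive is a DC-net. Here is a toy 3-member round (pad sizes, member count, and the message are made up for illustration; real deployments derive pads via key exchange and add disruption accountability on top):

```python
# Toy DC-net round: every pair of members shares a random pad.
# Each member broadcasts the XOR of its pads; the sender also XORs
# in its message. All pads appear exactly twice across the outputs,
# so XOR-ing every broadcast cancels them and reveals only the message.
import secrets

N = 3
MSG_LEN = 16  # bytes per round

# Pairwise shared pads (in practice derived via Diffie-Hellman)
pads = {}
for i in range(N):
    for j in range(i + 1, N):
        pads[(i, j)] = secrets.token_bytes(MSG_LEN)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def member_output(i, message=None):
    """XOR of all pads member i shares, plus the message if i is the sender."""
    out = bytes(MSG_LEN)
    for (a, b), pad in pads.items():
        if i in (a, b):
            out = xor(out, pad)
    if message is not None:
        out = xor(out, message.ljust(MSG_LEN, b'\0'))
    return out

# Member 1 sends anonymously; members 0 and 2 transmit only pad noise
secret_msg = b"release the docs"
outputs = [member_output(0), member_output(1, secret_msg), member_output(2)]

combined = bytes(MSG_LEN)
for o in outputs:
    combined = xor(combined, o)

assert combined.rstrip(b'\0') == secret_msg  # pads cancel; sender stays hidden
```

To an observer, each broadcast is indistinguishable from random bytes, which is exactly the "anonymity among honest members" property described above: a Sybil joining the group learns the message but still cannot tell which honest member sent it.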


I think it's unfortunate that the page mentions that Dissent protects groups against Sybils and sockpuppets. I too spent lots of time going through these papers looking for what the algorithm was. Dissent clearly does not even try to solve the Sybil/sockpuppet problem.


It does in the following senses:

1. Anonymity is provided as long as there is a single trustworthy member, regardless of how many phony members there are.

2. Denial-of-service resistance is provided even against many Sybils - eventually they will all be kicked out of the group and communication can proceed.

This is in contrast to protocols (e.g. onion routing, Aqua[0]) that only provide their security properties when the adversary doesn't control too much of the system. I think it is a fair claim to make and in particular is clear to people familiar with this area of research.

[0] "Towards Efficient Traffic-analysis Resistant Anonymity Networks" <http://www.mpi-sws.org/~stevens/pubs/sigcomm13.pdf>


You seem to be outsourcing the trust mechanism to the users, while the page implies that you've solved the trust problem internally through the protocol.

Don't get me wrong, the research is very impressive in its own right, but that's at best misleading.


What is the mechanism to kick Sybils out of the group, though? How is it ensured that enough sockpuppets don't gang up to kick trustworthy members out? I am not following the mechanisms here.


> How exactly do you stop sockpuppetry while maintaining anonymity?

I don't think this is possible. Nor is it desirable; someone might well want to use different online identities for different parts of their personality.


If I'm reading the slides correctly (and there's a good chance I'm not) you have to have a preselected group. From the problem statement:

"A group of N>2 parties wish to communicate anonymously, either with each other or with someone outside of the group. They have persistent, real world identities and are known, by themselves and the recipients of their communications, to be a group."

That of course leaves me with a lot of, "But how would you do X?" questions. I'd love to see a FAQ for the project, and a list of things you can and can't do with it.


If you're correct, I don't really see what is novel here. If I already had a group of 50 people I could trust, what would I need this software for? How is this better than just handing out credentials to my IRC Tor hidden service?


Again, I've just skimmed a couple of presentations, so I could be off. That said, the tech definitely looks novel.

One application that comes to mind is human rights abuse reporting. Let's suppose I get 100 volunteers in the field with software based on this. Some of them are agents of the abusive regime; most are sincere. All communications are monitored by the government. If I read the presentation correctly, anybody can submit an incident report that we can be sure came from one of the probably-trusted volunteers, but that can't be traced back to an individual.

Another application they mention is secure voting. Let's say you're part of a cabal like WikiLeaks. You could use it to do reliable but anonymous internal voting on whether a given document gets released or not.


The way this is usually handled is anonymous credentials. You force someone to identify themselves uniquely (full name, passport, SSN, DNA, photo, etc). You then issue them a credential that is anonymous (i.e. when shown, you can't link it with when it was issued or previously shown). Furthermore, you set up these credentials so that if they are used too many times (i.e. cloned) then the identity is revealed.

You then require them to show the untraceable credential each time they submit a message.
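The unlinkability half of this can be illustrated with a blind RSA signature, the simplest ancestor of these schemes. This toy sketch (insecure parameters, illustration only) shows issuance without the issuer ever seeing the token; it does not implement the "reveal identity on overuse" part, which real anonymous-credential constructions add on top:

```python
# Toy blind RSA signature: the issuer signs a token it never sees,
# so the signed credential cannot be linked back to the signup.
import secrets
from math import gcd

# Tiny, insecure RSA keypair for illustration (real keys are 2048+ bits)
p, q = 999983, 1000003
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)  # issuer's private exponent

token = 424242  # user's secret credential value, must be < n

# User blinds the token with a random factor r before sending it
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (token * pow(r, e, n)) % n

# Issuer signs the blinded value -- it learns nothing about `token`
blind_sig = pow(blinded, d, n)

# User unblinds: blind_sig = token^d * r (mod n), so divide out r
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone can verify the credential, but the issuer can't link it
# to the signup session where it was issued
assert pow(sig, e, n) == token
```

The key property: the issuer saw only `blinded`, which is statistically independent of `token`, so later showings of `(token, sig)` are unlinkable to any particular issuance.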


So the way this maintains anonymity is by not trying?

What you've described is a central authority architecture which is in charge of issuing the credentials. But we already have CAs that issue SSL certs. They're brittle to government influence, because it's impossible to know whether the government has acquired the secret keys. In your case it'd be impossible to know whether the hypothetical CA has stored your secret credential / whether they've told anyone your real identity.

Even if this model were to work, it wouldn't protect users from themselves. Here's a fascinating read about the problems of staying anonymous even in an environment with perfect anonymity guarantees: https://whonix.org/wiki/DoNot

For example, no anonymity network can protect against stylometry, so that's always a concern. I was also shocked to realize that something as simple as automated time synchronization will reveal your general location when using Tor, because your machine requests a time update for your specific timezone. You have to set your clock to UTC to avoid that. There are about two dozen other vectors by which you can accidentally reveal your true identity even when using a rock-solid protocol. Anyone who's interested in this should read the entirety of the Whonix wiki. In addition to being comprehensive, it's also a lot of fun to read.

(Most of my comment was meandering and not really related to yours. It's just interesting how difficult perfect anonymity is. It's probably true to say that getting the tech implemented correctly is only a small fraction of the total amount of work required to be truly anonymous.)


I should clarify. The credentials are anonymous even given a malicious CA. This is a cryptographic guarantee under the Strong RSA assumption.

The only thing the CA can do is produce a list of people they issued credentials to.


So the problem I see with that solution is twofold:

* If there is a central authority collecting the information, then as far as Jeff the User knows, his information could be stored in plain text on a subpoenable hard disk by the central authority.

* If the information is verified by a set of moderators, then the moderators must be numerous enough that no single moderator could capture a substantial portion of the identity data. But to achieve that, the mod requirements would be low enough that an attacker could infiltrate their ranks and uncover the identities of users.


And then the next day the NSA forces the credential authority to give up all their information.

This is not a solution.


I don't know how Dissent does it. The only solution I can think of right now is to ask people to perform some time consuming human task before they can create an account. That would limit the scalability of any sock puppet creator and at the same time it could serve as payment for the service if that task is useful.
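The machine-work variant of that rate-limiting idea is hashcash-style proof-of-work: signup costs CPU time, so mass sockpuppet creation becomes expensive without anyone identifying themselves. A minimal sketch (difficulty and challenge format are made up):

```python
# Hashcash-style proof-of-work for account creation: find a nonce
# whose hash has enough leading zero bits. Cheap to verify, costly
# to produce, and cost scales linearly with the number of accounts.
import hashlib
import itertools

DIFFICULTY = 16  # leading zero bits required; tune for desired signup cost

def solve(challenge: bytes) -> int:
    """Brute-force a nonce meeting the difficulty target (~2^DIFFICULTY hashes)."""
    target = 1 << (256 - DIFFICULTY)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, 'big')).digest()
        if int.from_bytes(digest, 'big') < target:
            return nonce

def verify(challenge: bytes, nonce: int) -> bool:
    """One hash suffices to check the work was done."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, 'big')).digest()
    return int.from_bytes(digest, 'big') < (1 << (256 - DIFFICULTY))

nonce = solve(b"new-account-challenge")
assert verify(b"new-account-challenge", nonce)
```

Note this only raises the price of sockpuppets; a determined adversary with spare CPU can still create many. It rate-limits rather than prevents, which matches the "limit the scalability" framing above.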



