Hacker News | mightyham's comments

Genuinely curious: is Tailscale actually providing any value to this use case beyond what you get from a raw WireGuard exit node with port forwarding instead of Tailscale's NAT traversal? I've never used Tailscale, but I have a WireGuard setup on my home server for the same purpose as described in the article, and I've never had any issues with it.

Edit: Noticed some sibling comments asking effectively the same thing as me. I've been meaning to write a blog post covering the basic networking knowledge needed to DIY with just WireGuard. My impression is that many people don't realize just how easy it is, or don't have the requisite background information.


If you're just doing hub-and-spoke anyway, yeah, you can do it yourself. I did for years. But holy smokes, is it a PITA to manually copy keys around to devices, especially when they might not even be yours. I have my Tailscale account hooked up to my self-hosted identity server, and now it's just a matter of logging in on whatever device I want to be on the network.

Plus, I have the option of spinning up a random EC2 box whenever I want and instantly joining it to the network with basically no fuss.


I feel like articles like this do Tailscale a disservice to a certain degree. Most people know Tailscale helps with managing the mesh of connected devices, and as many people have said here, you can do this manually with WireGuard, Netbird, Nebula, ZeroTier, and many others.

Where Tailscale is so helpful is the ACL system. I have about 40 devices connected to my Tailnet, and depending on tags, devices can or can't access direct communication and also certain exit node networks. Traditional VPNs generally suck because you dump out of a host and have flat access to everything. Tailscale allows you to segment access without disrupting general Internet access with minimal friction, and ACLs allow segmentation to happen at the user/device level. Most people aren't using Tailscale ACLs; in fact, I rarely hear them discussed.

Also, the article fails to mention Tailscale Peer Relays [0], which decrease the dependency on DERP relays significantly and are controlled by, you guessed it, ACLs.

[0] https://tailscale.com/blog/peer-relays-beta


The article does list what Tailscale adds on top of WireGuard:

> WireGuard by itself is mostly the data plane. Tailscale adds the control plane on top: identity/SSO, peer discovery, NAT traversal coordination, ACL distribution, route distribution (including exit node default routes), MagicDNS, and fast device revocation.


I think you missed the point. There's nothing in the article going into why any of this would differentiate Tailscale from plain old WireGuard. Simply saying this and moving on is not that.

Hey, OP here. Thanks for the feedback. I will dive deep into this too!

I have a phone and laptop; those are my only two "mobile" devices that I might ever use to access my home network remotely. I set them up once, it took a few minutes, and I won't have to do it again unless I replace one of them.

I can completely understand using Tailscale for enterprise networks, but it seems very overengineered for my personal VPN needs.


Yeah, sure, that seems simple enough.

I have a family of four. Plus a couple relatives who like having access to some of my self-hosted stuff. So, that's 6 people, each with at least one phone and one laptop, but probably an iPad too, or an extra work laptop, or something else random. Plus my youngest is addicted to buying old laptops on eBay and switching to them.

You made me curious, so I looked it up: I have 17 machines. Yeah... I'm not going back to plain WireGuard. :D


How do you handle home network IP changes?

I had this issue, with an even more wild set of restrictions, so I used Caddy to "output its own access log", and I had a cron job on any server at home that would hit that Caddy server with a pre-defined key, so like `http://caddyserver.example.com/q?iamwebserver2j` for one server and `q?iamVOIP` for another.

https://github.com/genewitch/opensource/blob/master/caddy_ge...

https://github.com/genewitch/opensource/blob/master/show_own...

And now I have bi-directional IP exposure. It's cute because you can't tell if you just drive by; it doesn't look like it does anything. You have to refresh to see your IP, which is a little obfuscation.

If you care about security, not sure what to tell you. Use port knocking.

Please note: this doesn't require installing anything on any remote, just a cron job to curl a specific URL (arbitrary URL). I used it to find the IP to SSH into remote radio servers (like AllStar, D-STAR) for maintenance, for example.


Not OP, but a static IP was about US$10 as a one-off payment.

It’s really nice.


Dynamic DNS

Cloudflare tunnels

It has plenty of useful control plane features out of the box. Nothing much you _couldn’t_ do yourself, but you don’t have to. Or with Headscale as the self-hosted open-source version.

With WireGuard I found that almost every public wifi blocked it, and even a lot of private internet connections at my friends' houses did as well.

If my mobile provider blocked it as well, it would have been completely useless.

Probably depends on your location a lot, though.


Dynamic IP addresses.

Update your DNS when it changes. Pretty trivial.

Yeah, I tried writing a script for that, but at a certain point using an off-the-shelf tool that does everything is easier.

> Genuinely curious: is Tailscale actually providing any value to this use case beyond what you get from a raw WireGuard exit node with port forwarding instead of Tailscale's NAT traversal?

Yes, but I guess it depends on how much of an adoption barrier/pain you want to deal with. Tailscale's control plane is dead simple, and they ship apps on basically every platform, so it's easy to onboard mobile devices in addition to anything else. I'm a literal former network engineer with over two decades of experience, and I tried Tailscale randomly one of the first few times it popped up on HN, and stuck with it precisely because of how easy it was and how trivial it was to verify the security of my tunnels. Doing this manually is definitely possible on devices you control, but it's not a fun time, and Tailscale is dead simple.


I've worked on large React and Solid codebases and don't agree at all. You can make a mess of either one if you don't follow good practices. Also, dynamic dependency management is not just a nice-to-have; it's actually critical to why Solid's reactive system is more performant. Take a simple example of a useMemo/createMemo which contains conditional logic based on a reactive variable: in one branch a calculation is done that references a lot of reactive state, while the other branch doesn't. In React, the callback will constantly be re-executed when the state changes even if it is not being actively used in the calculation, while this is not the case in Solid because dependencies are tracked at runtime.
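The difference is easier to see with a toy reactive system. Here's a minimal sketch of runtime dependency tracking in plain JavaScript; this is not Solid's actual implementation, and `createSignal`/`createMemo` are simplified stand-ins for the real APIs. The key idea: a memo re-collects its subscriptions on every run, so a signal read only in an untaken branch cannot trigger a recomputation.

```javascript
// Minimal sketch of runtime (dynamic) dependency tracking, in the spirit of
// Solid's createMemo. Illustrative only, not Solid's real internals.
let currentObserver = null;

function createSignal(value) {
  const subscribers = new Set();
  const read = () => {
    // Whoever is currently computing becomes a subscriber of this signal.
    if (currentObserver) subscribers.add(currentObserver);
    return value;
  };
  const write = (next) => {
    value = next;
    // Re-run only the computations that actually read this signal last run.
    const toRun = [...subscribers];
    subscribers.clear();
    toRun.forEach((fn) => fn());
  };
  return [read, write];
}

function createMemo(fn) {
  let cached;
  let runs = 0;
  const run = () => {
    const prev = currentObserver;
    currentObserver = run;
    runs += 1;
    cached = fn(); // dependencies are whatever signals fn reads *this* run
    currentObserver = prev;
  };
  run();
  return Object.assign(() => cached, { runs: () => runs });
}

// Branch-dependent dependencies: while useCheap() is true, the memo never
// reads `expensive`, so updating `expensive` does not re-run it.
const [useCheap, setUseCheap] = createSignal(true);
const [expensive, setExpensive] = createSignal(1);
const memo = createMemo(() => (useCheap() ? 0 : expensive() * 2));

setExpensive(2); // not a tracked dependency right now -> no re-run
console.log(memo.runs()); // 1
setUseCheap(false); // re-runs; now `expensive` becomes a dependency
console.log(memo.runs()); // 2
```

Because dependencies are re-collected on every run, flipping the branch automatically picks up (and later drops) the corresponding signals, whereas a statically declared dependency array has to over-subscribe to stay correct.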


> Take a simple example of a useMemo/createMemo which contains conditional logic based on a reactive variable, in one branch a calculation is done that references a lot of reactive state while the other branch doesn't.

Then you create two useMemos and recalculate based on the branch. Or you can cache the calculation in a useRef variable and just not re-do it inside the memo callback. There's also React Compiler, which attempts to do the static analysis for automatic memoization.

> it's actually critical to why Solid's reactive system is more performant.

I've yet to see a large Solid/Vue project that is more performant than React.

> In React, the callback will constantly be re-executed when the state changes even if it is not being actively used in the calculation, while this is not the case in Solid because dependencies are tracked at runtime.

And in Solid the dependency tracker can't track _all_ the dependencies, so you end up using "ref.value" all the time, often leading to code that has a parallel type system just for reactivity. While in React, you just use regular objects.


> I struggle to see the point. The paper in question doesn't claim to be practically faster...

I struggle to see the point of your comment. The blog post in question does not say that the paper claims to be faster in practice. It is simply examining whether the new algorithm has any application in network routing; what is wrong with that?


I publish a Firefox plugin and needed help a few years ago. Not to get too far down that rabbit hole, but they suddenly blocked my plugin because they couldn't build my source code, even though the issue with their build environment was pretty obvious. Anyways, I had to use their Matrix support channel and they recommended Element. I was immensely frustrated with how buggy the experience was, and it turned me off from ever trying it again.


You’re not alone, their iOS apps have 3.4 and 3.6 star ratings. Anything below 4.0 isn’t good.

I’ve downloaded them, and neither has proper dark mode icons. Instant fail.


I don't understand how you could read the nine theses essays and think they are anything but reasonable. Even if you disagree with his politics, the results of his suggestions would almost certainly make Wikipedia more pluralistic, welcoming and neutral.


Because they have all been tried before and had the opposite effect.

Anyone who likes them should make their own site to try and see. Oh wait, Sanger already did that multiple times, and it crashed and burned every time.


> Because they have all been tried before and had the opposite effect.

Did you even read the document? Claiming that Wikipedia has implemented all of these suggestions in the past is just plainly false. If you disagree with the document's contents, why don't you provide a substantive argument instead of just belittling efforts at changing the status quo?


> Claiming that Wikipedia has implemented all of these suggestions in the past is just plainly false

I'm claiming people, not necessarily Wikipedia, have tried them. However, many have been tried by Wikipedia too.

> just belittling efforts at changing the status quo?

The status quo is pretty good. Change for change's sake is an anti-pattern.

Regardless, I think people who like these ideas should try them on their own site. I suspect they will quickly find out why Wikipedia does not want to do them.

After all, Martin Luther didn't just whine that the Pope wouldn't listen to him; he made his own thing.


Your comments are shallow because you just continue to assert the ideas are bad with no reasoning. You also clearly don't know your Protestant history: Martin Luther did basically just whine about the Pope. He was thoroughly a reformer who wanted to see the Catholic Church changed; he did not condone "Lutheranism" as a separatist movement.


The reasoning is historical precedent. If something doesn't work out the first time you try it, why would you do it again?

We are talking about at least 10 pages' worth of proposed reforms. Do you have a specific one you would like to discuss? I'm not particularly interested in writing a 10-page essay in an HN comment about why I think all the proposed reforms are stupid.


> Peer-to-peer communications such as gaming usually have to deal with NAT traversal, but with IPv6 this is no longer an issue, especially for multiple gamers using the same connection

You know the list of "benefits" is thin when the second item is entirely theoretical. Even though IPv6 doesn't have to do NAT traversal, it still has to punch through your router's firewall which is effectively the same problem. Most ISP provided home routers simply block all incoming IPv6 traffic unless there is outbound traffic first, and provide little to no support for custom IPv6 rules.

Even if that were not an issue, my bet is that there are close to zero popular games that actually use true peer to peer networking.


Punching through just a firewall is much easier than punching through a typical NAT+firewall setup.

https://tailscale.com/blog/how-nat-traversal-works


How do you punch through firewalls? You have to manually open them; punching through a firewall would be a firewall vulnerability.


This is a common function of UPnP, which I've seen as a feature in router config pages since the mid-2000s.

https://en.wikipedia.org/wiki/Universal_Plug_and_Play#NAT_tr...


Running a firewall with upnp enabled has always amused me. Might as well just turn the firewall off if you let any machine shoot any hole it wants in it.


Typically, firewalls record the src and dst header values of outbound IP packets and then temporarily allow inbound IP packets that have those values flipped.
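As a toy model (illustrative names, not any real firewall's API), stateful tracking amounts to keeping a flow table keyed on the address/port tuple and admitting an inbound packet only when it matches an existing entry with src and dst reversed:

```javascript
// Toy sketch of stateful-firewall connection tracking: an outbound packet
// installs a flow entry; an inbound packet is allowed only as a "reply",
// i.e. when reversing its src/dst matches a recorded outbound flow.
const flows = new Set();
const key = (p) => `${p.srcIp}:${p.srcPort}->${p.dstIp}:${p.dstPort}`;

function outbound(p) {
  flows.add(key(p)); // in a real firewall this entry would also expire
}

function inboundAllowed(p) {
  const reversed = {
    srcIp: p.dstIp, srcPort: p.dstPort,
    dstIp: p.srcIp, dstPort: p.srcPort,
  };
  return flows.has(key(reversed));
}

outbound({ srcIp: '192.0.2.1', srcPort: 5000, dstIp: '203.0.113.9', dstPort: 443 });
console.log(inboundAllowed({ srcIp: '203.0.113.9', srcPort: 443, dstIp: '192.0.2.1', dstPort: 5000 })); // true
console.log(inboundAllowed({ srcIp: '198.51.100.7', srcPort: 443, dstIp: '192.0.2.1', dstPort: 5000 })); // false
```

Real conntrack implementations also expire entries and track protocol state (e.g. TCP handshake phases), but the reversed-tuple lookup is the core of "outbound traffic first" behavior.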


You're just asserting that without explanation. Please correct me if I'm wrong, but AFAIK the only difference in NAT hole-punching is that clients don't know their public port mapping ahead of time. This actually doesn't make a huge difference to the process because in practice, you still want a central rendezvous server for automated peer IP discovery. The alternative is that each peer shares their IP with every other peer "offline", as in manually through an external service like IRC or Discord, which is a horrible user experience.


> You're just asserting that without explanation.

They linked a whole article detailing the complexities of specifically NAT traversal.

I should think it obvious that by removing an entire leaky layer of abstraction the process would be much simpler. Yes, you still need a coordination server, but instead of having to deduce the incoming/outgoing port mappings you can just share the "external IP" of each client, which in the IPv6 case isn't "external," it's just "the IP".


I already am aware of how NAT traversal works. Linking a generic article explaining it is not a meaningful response.

Also NAT is a pretty simple abstraction, it's literally a single table.


>Also NAT is a pretty simple abstraction, it's literally a single table.

...And now, let's try punching a hole through this "simple" table. Oops, someone is using a port-restricted or symmetric NAT and hole punching has gotten just a tad more complicated.


Agreed; Or they're using CG-NAT, or consumer grade NAT behind CG-NAT, or....


> it still has to punch through your router's firewall

That's why most routers use a stateful firewall. Then nothing has to "punch through"; it just has to be established from the local side.

> block all incoming IPv6 traffic unless there is outbound traffic first, and provide little to no support for custom IPv6 rules.

This is why STUN exists.

> my bet is that there are close to zero popular games that actually use true peer to peer networking.

For game state? You're probably right. For low latency voice chat? It's more common than you'd think.


> it just has to be established from the local side

This is exactly the problem. Unless you expect users to manually share their IPs with every other user in a given lobby through an external service, you would need to make a central peer discovery and connection coordination mechanism which ends up looking pretty similar to classic NAT traversal.
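For illustration only (hypothetical names, not any real game's protocol), the coordination piece looks roughly the same even in an all-IPv6 world: a rendezvous server records each peer's address and hands lobby members each other's addresses before any direct connection is attempted.

```javascript
// Toy rendezvous/lobby server sketch: peers register the address the server
// observed for them, and fetch the other lobby members' addresses. With IPv6
// these are real end-host addresses; with IPv4+NAT they would be the
// server-observed public mappings instead. Names are illustrative.
const lobbies = new Map(); // lobbyId -> Map(peerId -> observedAddr)

function register(lobbyId, peerId, observedAddr) {
  if (!lobbies.has(lobbyId)) lobbies.set(lobbyId, new Map());
  lobbies.get(lobbyId).set(peerId, observedAddr);
}

function peersFor(lobbyId, peerId) {
  // Everyone else in the lobby, with the addresses to dial directly.
  return [...(lobbies.get(lobbyId) ?? new Map())]
    .filter(([id]) => id !== peerId)
    .map(([id, addr]) => ({ id, addr }));
}

register('lobby1', 'alice', '[2001:db8::1]:7000');
register('lobby1', 'bob', '[2001:db8::2]:7000');
console.log(peersFor('lobby1', 'alice')); // [{ id: 'bob', addr: '[2001:db8::2]:7000' }]
```

The NAT-specific parts (port-mapping discovery, simultaneous-open probing) drop out with IPv6, but the discovery server itself stays.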


The complication starts when such an ephemeral port gets a connection from somewhere else, which is the crucial part, not the creation of such ports. That is not necessarily supported by firewalls, or at least is not as simple as just having a stateful firewall.


Getting a streamer’s IP attracts DDoSes and doxxing, so yeah it’s generally considered a vulnerability to use P2P in games


Yeah, P2P is fine only with friends and people you know; otherwise it's like posting your private address for everybody to see.


Not having a congested CGNAT in the mix at 4pm every day is a nice benefit.


Also NAT66 exists and I use it on my home network so you still have to have the machinery to do NAT traversal when needed. It's nice to use my public addresses like elastic IPs instead of delegating ports. IPv6 stans won't be able to bully their way into pretending that NAT doesn't exist on IPv6.


For consumer traffic, you're probably right. In data centers, cloud computing, and various enterprise networking solutions, IPv4 is still king. I'm sure IPv6 would work fine in all these use cases, but as long as many large tech companies are not exhausting the CIDR ranges they own (or can opt for using private ranges), there is no impetus to rework existing network infrastructure.


> cloud computing

Nope. Large-scale DCs are IPv6-only underneath; exascalers like Google and Meta have stated that multiple times. E.g. https://www.youtube.com/watch?v=Q3ird3UDnOA, and also see various NANOG talks: https://www.youtube.com/@TeamNANOG/videos


The underlay might be v6, but that doesn’t change the fact that people heavily use v4 for the actual workload traffic (i.e. the cloud computing part). EC2 VPCs still default to v4 only last time I checked.

Hyper scalers != cloud computing.


A great many home ISPs are also IPv6 only, and tunnel your IPv4 packets.


What about Amazon?


> such that they almost meld into Stalin or Mao

Stalin was an ideological authoritarian who executed political rivals and used lethal force, price controls, and other governmental tools to control the economy and the general working population. The idea that Sanders and Mamdani advocate anything close to that is laughable.

The rhetoric on both the right and left that liken today's politics to extremism in the 20th century is a ridiculous anachronism that needs to be called out more often.


How about the fact that the human mind and genetics are simply fascinating and interesting topics. I would imagine that people don't care as much about running and high school testing because they are fairly niche interests relative to abstract thinking in general, something that almost everyone spends much of their life doing.


Goofy platform specific cleanup and smart pointer macros published in a brand new library would almost certainly not fly in almost any "existing enormous C code base". Also the industry has had a "new optional ways to avoid specific footguns" for decades, it's called using a memory safe language with a C ffi.


I meant the collective bulk of legacy C code running the world that we can’t just rewrite in Rust in a finite and reasonable amount of time (however much I’d be all on board with that if we could).

There are a million internal C apps that have to be tended and maintained, and I’m glad to see people giving those devs options. Yeah, I wish we (collectively) could just switch to something else. Until then, yay for easier upgrade alternatives!


I was also, in fact, referring to the bulk of legacy code bases that can't just be fully rewritten. Almost all good engineering is done incrementally, including the adoption of something like safe_c.h (I can hardly fathom the insanity of trying to migrate a million LOC+ of C to that library in a single go). I'm arguing that engineering effort would be better spent refactoring and rewriting the application in a fully safe language one small piece at a time.


I’m not sure I agree with that, especially if there were easy wins that could make the world less fragile with a much smaller intermediate effort, eg with something like FilC.

I wholeheartedly agree that a future of not-C is a much better long term goal than one of improved-C.


I don't really agree, at least if the future looks like Rust. I much prefer C and I think an improved C can be memory safe even without GC.


> I think an improved C can be memory safe even without GC

That's a very interesting belief. Do you see a way to achieve temporal memory safety without a GC, and I assume also without lifetimes?


A simple pointer ownership model can achieve temporal memory safety, but I think to be convenient to use we may need lifetimes. I see no reason this could not be added to C.


A C with lifetimes would be nice, I agree.

Would be awesome if someone did a study to see if it's actually achievable... Cyclone's approach was certainly not enough, and I think some sort of generics or a Hindley-Milner type system might be required to get it to work, otherwise lifetimes would become completely unusable.


Yes, one needs polymorphism. Let's see. I have some ideas.


C does have the concept of lifetimes. There is just no syntax to specify them, so they are generally described alongside all the other semantic details of the API. And no, it is not the same as for Rust, which causes clashes with the Rust people.


What clashes?


I think there was a discussion in the Linux kernel between a kernel maintainer and the Rust people, which started with the Rust people asking for formal semantics so that they could encode it in Rust, and the subsystem maintainer being unwilling to do that.


Ah, I thought you were talking about core language semantics in C.


I don't know enough Rust, to do such a comparison.


“The rust people” were also kernel maintainers.


Sorry, I'm not familiar with the titles of kernel developers. I thought only one of them was the subsystem maintainer.


One of them was a maintainer of that particular subsystem, but that doesn't mean that the other folks aren't also maintainers of other parts of the kernel.

