Thanks for the links.
I didn't know about SwiftWave.
I have a page with a comparison table of self-hosted PaaS on my site: https://dbohdan.com/self-hosted-paas.
It only covers options that don't use Kubernetes.
I have just added SwiftWave.
My goal is to build an intuitive, snappy UI that helps you but doesn’t get in your way. Happy to answer any questions and would love to hear what you think :-)
The core problem with most of these PaaS options is the dependency on Docker Swarm (in my experience, serious workloads can't be run on Swarm, and disaster recovery is too difficult).
This is true for most alternatives, but not for Coolify.
I am the second maintainer of Coolify. Andras and I maintain most of Coolify's core, while four other maintainers help with support and the docs, and a few more help with the CLI and some other parts.
He did not say "companies vs individuals", he said "single maintainer", which is obviously a high risk factor to consider IMHO.
I wonder why they all start their own projects instead of putting their heads together. They could achieve so much more and make a bit more money on the side, while each of them would have to spend less time on it. It would also attract risk-averse companies.
Second this! I just got hired for a short-term project to extend a payment solution I once wrote when I was employed by that company.
I was amazed to find that a) nobody had maintained the project after I left (there were only two minor fixes, made because their house was on fire), and b) I had actually taken the time to write nearly complete documentation on all the important topics, which helped me get back up to speed faster.
You are absolutely right, and I have experienced this most of the time. The problem is that it is an uphill battle to explain to most stakeholders why you are "wasting" so much time on non-customer-facing documentation.
It is hard enough to convince even technical stakeholders (e.g. product owners) to write automated tests.
While in the moment I mostly think it's bad, it later forces them to pay me twice as much, so I guess it's not as bad as I always think it is at the time :D
Dislike of Go is due to its syntax, not its semantics. Go as a dynamic-language runtime platform will be interesting: the platform defines the semantics, and the languages define the syntax.
Along similar lines, the Fable [0] project recently announced Rust and Dart runtime support, making F# a very attractive choice.
Dislike of Go has many different causes, and some of them are semantic, e.g. the way nil interfaces work, or the dance you have to do to append an item to a slice. And then there's the whole issue with FFI, which has nothing to do with syntax and everything to do with Go threads being "special".
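To make the two semantic complaints concrete, here is a minimal sketch (the `myError` type and `fail` function are hypothetical names for illustration). It shows the classic nil-interface gotcha, where an interface wrapping a typed nil pointer compares non-nil, and the append reassignment dance:

```go
package main

import "fmt"

type myError struct{}

func (*myError) Error() string { return "boom" }

// fail returns a typed nil pointer wrapped in the error interface.
func fail() error {
	var e *myError // nil pointer
	return e       // the interface now holds (type=*myError, value=nil)
}

func main() {
	if err := fail(); err != nil {
		// This branch runs even though the underlying pointer is nil,
		// because an interface holding a nil concrete value is not
		// itself nil.
		fmt.Println("err != nil:", err)
	}

	// The append "dance": append may reallocate the backing array,
	// so you must assign the result back to the slice variable.
	xs := []int{1, 2}
	xs = append(xs, 3)
	fmt.Println(xs) // [1 2 3]
}
```

The usual fix for the first gotcha is to return a literal `nil` from the error path rather than a typed nil pointer.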
Yes, it is a ZeroTier alternative, but there are key differences in how we do some things... in fact, writing up some of these comparisons is on my to-do list... in the meantime, here are some comments on Ziti vs. others - https://www.reddit.com/r/selfhosted/comments/v1ymn5/when_pub....
Thanks! From the benchmark report [1], it is not clear how much of the baseline wire performance is observed; the numbers in the table are anywhere from 30% to 90% of the plain wire bandwidth.
As there are multiple overlay projects popping up (Tailscale, NetMaker, OpenZiti, NetBird, Nebula, ZeroTier, EVPN, etc.), we should consider a baseline benchmark index, like [2].
P.S. We have been testing ZeroTier for VPN access and observe ~70% of baseline wire bandwidth.
Nice! Sounds like OpenZiti is the next one to put on that list? We would __LOVE__ to have a third party do any kind of performance testing like that and report the results. Good -- or bad! It's important to be transparent about things like this; we believe that wholly.
If you want any help, we'd be happy to help you set up a network (if you need it), just ask over on the Discourse!
A bunch of us (me included) used to work in the IoT world. You can *TRY* to simulate 100k assets using "20 or so" nodes, but truthfully it's just never *really* the same. So, to be transparent, no. It's really, really, really hard to test 100k devices effectively, in my experience (happy to be told otherwise and taught what others have done). We ran hundreds of __actual__ machines simulating "thousands" of devices, but it's still NOT the same... We do perform scale testing, though I'm not the one who does it...
So no, we haven't gotten to 100k actual deployed devices - yet. You can be the first! :)
As every developer will say, we "built it for scale". Many of us are also a bunch of ex-IoT devs, and we _have_ built systems like this in the past so we're really familiar with the types of issues that can crop up.
As for verified user counts, we're in the 5,000 to 10,000 range that I know of (as in, I am pretty sure we have networks of that size deployed out there in the wild). I'll ask what our "fabric" people have tested and how, and get back to you if it's significantly different from what I know about.
I'll add some more detail. There are a lot of different ways your application can break as it scales up. For something that handles data flow, the three I tend to think about the most are the model, throughput, and the number of connected clients.
We've tested the model with 100k identities against the operations that clients use (auth, listing services, creating sessions, etc.) to make sure that the model scales reasonably well. We had to add additional denormalization and make some other tweaks, but now the controller scales relatively well for those operations. I'm sure there are still edge cases where it may break down, but we'll fix them if someone hits them.
We've done throughput testing to make sure we can handle high-throughput use cases. This also resulted in lots of changes, including reworking the end-to-end flow control. This is an area where we're happy with the progress we've made, and performance is in a reasonably good place, but we have lots of ideas for how to continue to improve and will keep testing and iterating.
Having tons of connected clients (even without much traffic flowing) is its own scaling challenge. We've done some amount of testing here. As part of the testing mentioned above, we had to make changes to ensure that slow clients didn't hold up fast clients. More generally, this is where things start getting complicated and very specific. You can have very different types of traffic flow, so it's hard to model anything generic. We have not done as much in this area because we've not seen any cases where we're memory constrained, which is the usual sign of a concurrent-connections scaling bottleneck.
ZeroTier is layer 2; OpenZiti is layer 3, 4, or 7, depending on what you're doing. It's similar to ZeroTier in some ways, but very, very different in others.
OpenZiti's main goal is to bring application-embedded zero trust into applications, but getting there is a long journey. That's why we provide "agents" like other "better VPNs" such as ZeroTier, WireGuard, etc.
It appears to include the currently standardized extensions. GC is a proposal in phase 2, and I'm not even aware of a proposal for DOM access yet; it would require several other proposals to go through first.
Thanks for recommending this path. Would the second 'ivy' refer to returning for a graduate degree? As in 'undergrad > big tech > grad > lab > startup'? What timeframes would you recommend for each stage?
Of course, the performance will depend on the database.