These things happen; props to HashiCorp for communicating about it.
It would be easy to say that this is a failure of open-source in some way, but to do so would be unfair to the huge amount of work that companies put into tools like this, and the stewardship that they offer, both of which take a lot of time and money. If periods of low activity while teams change are the cost the community needs to pay for that, I think that's very fair.
When I worked there, I had a conversation with my coworkers about being upfront with folks when we weren't going to review or accept their PR, instead of leaving people hanging (this was on the TFE provider). I'm glad this was added.
I stand by my statement then and now: there is nothing worse than contributing to something open source and then having your PR completely ignored.
I don't understand why people think that just doing unsolicited work and pressing a button means the people on the other side are obligated to spend time reviewing and integrating it. GitHub has weirdly unbalanced open source contributions, with a heavier burden and more burnout falling on maintainers.
Contributing to a project doesn't mean just slinging code and calling it good. Communicate--talk to the people maintaining the code. Ask them, hey I have an idea and want to add this feature/fix this bug/etc.--do you have bandwidth for that change? Mailing lists, discussion forums, chat rooms, e-mails, etc. are the place to sort this out, not snarky or even angrier and angrier replies to an unsolicited pull request that goes unreviewed.
The overhead of working out where the people who maintain the code are is usually higher than just fixing the bug I found. This is doubly true if the bug is bad enough that I need to temporarily fork the repo to deal with it.
In the same way that the maintainers don't have an obligation to review my PR, I don't have an obligation to go find them and learn how to use IRC/bugzilla/mattermost/mailing lists/smoke signals/yodelling to communicate with them with the exact secret handshake to get a review. I can just throw a PR out there, point our code at my fork, rebase it whenever they make changes, and otherwise ignore their requirements until they either fix the bug themselves or merge my code.
Under normal circumstances, free work is generally appreciated.
But I imagine it's a bit different with code, as most people prefer writing code to reading it.
In the case of a bug in software you didn't write, you can spend hours reading and debugging a foreign codebase, and end up writing nothing but a one-line fix and a ten-line test case.
There is such a thing as an open source project with "pull requests welcome" in the readme and a whole elaborate "how to contribute" wiki chapter. Except when you follow that wiki to a T, your PR still ends up ignored, because no one is reading pull requests. It is not that these contributions come out of nowhere, from clueless people who don't realize the maintainers don't want pull requests. Pretty often the maintainers wanted pull requests, made sure to promote the option, and then found themselves unable or unwilling to merge them in.
I am not saying projects must merge in pull requests. I am saying they should not have "please contribute it is welcome" messages in their readmes.
And in the background, there are those campaigns to get people to contribute, and the shaming of companies and people for freeloading if they don't contribute - especially here on HN. If people then conclude that contributions are expected, it is not entirely their own fault.
Early in the project, the maintainer(s) are super-excited about their new baby. Later on, they burn out and move on to different things. The PR guides are written in the first stage, while PRs get ignored in the latter.
A bot should do it. Because sometimes people throw a tantrum, so it's easier to just ignore a PR. Or a maintainer might post a quick reply, only to be bitten later on. I love and live by open source, but the drama can be exhausting.
"open source" doesn't mean "accepting drive-by contributions from unknown authors". "open source" means "open source". If you want to apply your patches, you're free to fork the repo.
Is there no way to just have a script add a comment on every new PR saying "Due to insufficient staff, your PR may not be reviewed for a considerable amount of time."?
Your code is important to us. Please stay on the line, and the next available reviewer will answer your code. Due to unusually high code volumes, this response may take longer than usual.
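For what it's worth, a repo can automate exactly that kind of reply. A hypothetical sketch as a GitHub Actions workflow (the workflow name and message text are made up) that comments on every newly opened PR:

```yaml
# Hypothetical workflow: auto-comment on every new pull request.
name: pr-capacity-notice
on:
  pull_request_target:
    types: [opened]
jobs:
  notify:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.pull_request.number,
              body: "Thanks! Due to insufficient staff, your PR may not be reviewed for a considerable amount of time.",
            });
```

Marketplace bots can do the same thing with less setup, but a workflow like this keeps it in the repo itself.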
I think the issue is about letting people know that PRs won't be reviewed/merged before they bother putting a lot of effort into them (both the commit itself and setting up/documenting the PR).
This has always been a weird quirk of GitHub. You can disable issues, boards, wikis, but pull requests cannot be toggled. It's a pain point for lots of projects: those closed to contributions, those which are not primarily code (issues-only), those that use a different platform for review (e.g. Gerrit), ...
Why does GitHub persist in this? It's not like forcing the feature on helps anyone. The maintainers will still not merge if they don't want to. There are bots in the marketplace that will close PRs with a message. GitLab has a toggle. Why is this so important to GitHub?
I’ve made PRs because I need the change for my day job. By the time I’ve traced an issue down to a third party library the cost of making a PR is minimal. But if they don’t want to take the changes, it doesn’t bother me.
Of course I don’t work on open source out of passion, so I could imagine this is different for the true believers.
Personally, I tend to post the change in a comment on the issue that it fixes. I don't generally bother with doing a formal PR, since that would mean setting up the repo in a dev environment, branching, etc. and would be a bunch of extra work.
Locally, I just make the change and check it in to my project.
Every repo has different contribution rules and a one-off in one repo often just isn't worth the time to learn all the bespoke boxes that need to be checked. The work is there in the comments and if it's of value, someone more familiar can take it the last mile ... and codespaces _just_ came out generally. Could you edit online before that?
First of all, building a provider isn't straightforward. The best way I've found to do this is to wrap `terraform init`, and have it `docker run` a build process for a plugin version that never existed - then dumping the built provider into the `.terraform` directory for the project. It's prone to failure; new users of the Terraform project complain that the build eats 8GB of RAM and takes many minutes.
Second, providers are constantly changing, and it's not always possible to cleanly rebase a set of community changes on top of master. Part of the trouble with letting PRs wither on the vine is that they themselves become stale - in one case, the code still compiles, but the end result is completely wrong.
For what it's worth: my use case was needing to use Terraform with some more "unusual" features of CloudFront and ALBs. Support for the 80% use case is great. There's a remaining 15% that's well implemented by unmerged PRs, and another 5% that's completely unsupported. I've kept it IaC by using a `null_resource` with a `local-exec` provisioner to shell out to the AWS CLI where absolutely necessary.
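That escape hatch can be sketched roughly like this (the resource names and CLI arguments are hypothetical, not from the actual configuration):

```hcl
# Hypothetical escape hatch: shell out to the AWS CLI for a feature
# the provider doesn't support. Names and arguments are illustrative.
resource "null_resource" "alb_unsupported_feature" {
  # Re-run the command whenever the listener changes.
  triggers = {
    listener_arn = aws_lb_listener.main.arn
  }

  provisioner "local-exec" {
    command = "aws elbv2 modify-listener --listener-arn ${aws_lb_listener.main.arn} ..."
  }
}
```

The obvious downside is that Terraform can't plan or detect drift in whatever the CLI command does; it only knows when to re-run it.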
heh. i've been considering a switch from ansible to terraform because i've been frustrated by ansible's limited support for some of the edge cases around ALBs. good to hear that terraform also sucks.
i'm gradually coming to the conclusion that all the tools that are supposed to make provisioning cloud infrastructure easier aren't as good as a bunch of crappy custom scripts using boto or aws-cli.
Terraform is a massive improvement over Ansible if you do anything non-trivial; even if it sucks, it sucks significantly less than Ansible at managing infrastructure resources.
I sometimes fool myself into thinking I can use Ansible in simple cases and that Terraform would be overkill but so far I've regretted those decisions every time.
Ansible is sort of bad at everything, and does a few things decently.
I use it as an orchestrator / clusterssh replacement, but for configuration management it makes me nervous because I can't trust it to just not break for stupid reasons.
The last time I used it was on Amazon Linux 2 with Packer. It worked, but a third of the time it failed with some strange error about yum database corruption.
I suspect what was happening is that Amazon Linux runs yum on startup to apply all available updates, and Ansible was not respecting yum locks, which is very surprising given that Ansible came from Red Hat, so they should have known how important locking is for yum.
I ended up using salt (which has its own set of issues) and never looked back.
One big difference here is that while the tools might all depend on the SDK provided by the cloud, the tools themselves can also do a whole lot of good/wrong on their side. Terraform fixes a lot of that by delivering a 'standard' provider interface with normalised data formats, resource structures and encapsulation. That was pretty much a requirement for the tool to work anyway, otherwise it wouldn't have a unified way to check for configuration drift, plan changes, apply changes, do cleanups etc. You also wouldn't be able to pass data around easily (you'd end up shuffling strings around instead).
Some people prefer to do the CDK thing where you use a general programming language to synthesise the IaC stuff and then run it that way, but that doesn't really fix anything because a CDK is just built on top of the same SDK. As an added insult to injury, you now don't have a domain-specific language to save you from yourself (and your team) with all the anti-patterns you now have at your disposal ;-)
You can sideload Terraform providers so you don't need to do this. I personally recommend the [implicit local mirror directory](https://www.terraform.io/docs/cli/config/config-file.html#im...), where you just place your provider in your OS's respective Terraform plugin directory (macOS: `$HOME/.terraform.d/plugins/`).
There are other ways to sideload providers on that docs page, too.
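As a sketch, the implicit local mirror amounts to dropping the built binary into a registry-shaped path under the plugin directory (the version, provider name, and `linux_amd64` target here are all placeholders):

```shell
# Hypothetical example: stage a locally built AWS provider so
# `terraform init` finds it in the implicit local mirror.
VERSION=9.9.9                 # made-up version that "never existed"
OS_ARCH=linux_amd64           # e.g. darwin_amd64 on macOS
PLUGIN_DIR="$HOME/.terraform.d/plugins/registry.terraform.io/hashicorp/aws/$VERSION/$OS_ARCH"

mkdir -p "$PLUGIN_DIR"
# Copy your built provider binary here, named with the version suffix:
echo "place binary at: $PLUGIN_DIR/terraform-provider-aws_v$VERSION"
```

Then pin that made-up version in your `required_providers` block and `terraform init` will pick up the local copy instead of the registry.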
So then when your upstream repo diverges would you just rebase and manually add in anything you want from the forked development tree on the upstream side?
Not sure what's best practice so... just curious how people have handled this - I usually leave my forks of stuff pretty stale and focus on my own little sub-pieces to achieve what I want but not too much else.
Yes, we would branch from upstream again and apply the patch set against the new branch. Normally it is trivial but once in a while there are manual changes necessary.
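Concretely, that workflow looks something like this (repo names and branches are made up; a toy upstream and fork in a temp dir stand in for the real ones):

```shell
set -e
cd "$(mktemp -d)"

# Toy stand-in for the upstream repo:
git init -q -b main upstream
cd upstream
git config user.email u@example.com && git config user.name upstream
echo base > file.txt && git add file.txt && git commit -q -m "upstream base"
cd ..

# Our fork carries a single local patch:
git clone -q upstream fork
cd fork
git config user.email f@example.com && git config user.name fork
echo fix > fix.txt && git add fix.txt && git commit -q -m "local fix"

# Meanwhile, upstream diverges...
cd ../upstream
echo more > other.txt && git add other.txt && git commit -q -m "upstream change"
cd ../fork

# ...so we fetch and replay our patch on top of the new tip:
git fetch -q origin
git rebase -q origin/main
```

When the patch set no longer applies cleanly, the rebase stops at the conflicting commit and that's where the occasional manual changes come in.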
I usually have my own branch which can be used to get a diff and maintain the diff on any major code structure change. Most often there is not much hassle but can be annoying.
If you spent the time and effort to hunt down the problem for them, the least they could do is look into it. Repos that don't care enough are a waste of time.
TBF, even if your PR solves an actual issue a number of users have, it still might not be a good fit for the project's future, and starting a discussion about it might not be worth anyone's time.
More importantly, since you disclosed your solution, others affected by the issue can rely on your code in the meantime (= the rest of the project's life, in some cases; I've been there). If it really is a critical issue that isn't solved, your fix can always be used in a different fork with better maintenance.
All in all, the upstream repo not responding to a PR isn't the end of the world I think, and the openness of the system makes it an acceptable state in many ways IMHO.
I disagree; when someone opens a PR on my project, it's an imposition on my time. I appreciate the help, but I will review it when I have time and feel like it's a good time to do it.
"It would be easy to say that this is a failure of open-source in some way"
I have seen far more commercial, closed source products go through similar staffing crunches. The difference is that the problems are hidden away behind misdirecting sales teams and so on.
I can't tell you how many times I've reached out to someone on the inside of a company to get a straight answer as to whether a product is being properly staffed and supported. Or, conversely, how many times I myself have had to decide to orphan some commercial, customer facing work to meet a goal with a higher priority.
In my experience, useful open source products are less likely to suffer from inadequate staffing than closed.
I think it's the likely reality of far more projects than just terraform, and I don't fault the project completely for reaching this state - attrition is a brutal thing.
Personally though, it's distressing that this issue was unknown to me before I read this post, and that it affects a product I use and champion.
Comparing programming languages to individual open source projects is silly. Lots of Node and Rust projects are behind on PRs. Also Vue just doesn't allow community development, which makes everything much easier.