
100%. Terraform is half-way between a tool for generating the configuration and applying it. I think Terraform's application engine is actually quite good, but I would like to use a much better tool to generate the config. (And be able to diff that config)

You can feed JSON to Terraform; however, this falls over if you need dependencies on output values. It usually isn't an issue because most cloud-provider resources have predictable IDs, but as soon as you have one that doesn't, you are in for a lot of pain and suffering.



You may be interested in Pulumi: https://www.pulumi.com/

Basically it's Terraform, but instead of declaring your resources in HCL, you declare them in a real programming language. You're still producing a declarative config that the engine then diffs, applies, etc. In fact, it's compatible with existing Terraform providers, so there's a surprisingly large selection of things you can use it for.

Note that their docs will try to guide you towards their hosted service, which does basically nothing except host the state file, but you can use an S3 or GCS bucket instead and it works fine.
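For reference, pointing Pulumi at your own object storage is a one-time login command (bucket names here are placeholders):

```shell
# Store Pulumi state in your own bucket instead of the hosted service.
pulumi login s3://my-pulumi-state    # AWS S3
pulumi login gs://my-pulumi-state    # Google Cloud Storage
```

After that, `pulumi up` reads and writes state in the bucket you chose.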

It's definitely not without its own problems, but I'd say it's overall an improvement.


Unfortunately last I checked, pulumi only offers state locking with their paid service. If you want to self-host you have to implement it yourself, which seems like a non-starter for a lot of people.


This was addressed a couple months ago in https://github.com/pulumi/pulumi/pull/2697


Wow it took 2 years for the PR to get merged.



Glad somebody mentioned Pulumi. It solved all of the major problems I had with Terraform.


Not with that licensing, thanks.


It looks like it's Apache 2.0 licensed? What issues do you have with that license?


It’s Apache 2, isn’t it? What’s wrong with that?



Someone should make a Clojure demo of those Java bindings, or even cljs. I hope Clojure has good type-based completions these days, because it would be a fantastic language for this.


It’s pretty wild that the object-identity-via-name thing is still a problem. Can they not add a transitional naming feature where an object is known by multiple aliases for a while, and once you have finished putting through a change you can delete the original name? Isn’t this very basic SQL migration practice? Like column aliases until no longer needed.
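For what it's worth, newer Terraform versions (1.1+) do have something along these lines: a `moved` block records that a resource is now known by a new address, so `apply` rewrites the state instead of destroying and recreating. A sketch, with illustrative addresses:

```hcl
# Tell Terraform that the resource formerly addressed as
# aws_s3_bucket.old is now aws_s3_bucket.new; the state entry is
# renamed in place on the next apply, with no destroy/create.
moved {
  from = aws_s3_bucket.old
  to   = aws_s3_bucket.new
}

resource "aws_s3_bucket" "new" {
  bucket = "foo"
}
```

Once everyone consuming the state has applied past the rename, the `moved` block can be deleted, much like dropping a column alias.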


I don't even understand why the state needs to know the identifiers that the high level language uses for various resources. If the high level language has a binding "foo_bucket" for an AWS S3 bucket resource with a single property `name = "foo"`, then why should the state need to know that the high level language refers to that bucket with the name "foo_bucket"? Instead, the state should look something like this (obviously simplified):

    {
        "resources": [
            {
                "type": "aws_s3_bucket",
                "properties": {"name": "foo"}
            }
        ]
    }
Note that there is no reference to "foo_bucket".


This doesn't make sense to me. You need to know the logical identifier in order to explicitly link the code with the resource. Otherwise if I change the code for that resource how does TF know what it needs to change if none of the existing resources in state matches the new config? Do you just always destroy and re-create every time there's a change to anything?


> Otherwise if I change the code for that resource how does TF know what it needs to change if none of the existing resources in state matches the new config?

A resource provider defines a collection of fields that is the "identifier" for the resource. For example, an S3 bucket resource would have the "name" field for its identifier.

If you change another attribute besides the bucket name, the engine will see that the input and the state both have an S3 bucket resource with the same name but different props, so it knows it needs to update some props (rather than create a new bucket). However, if the name changes, the engine will see that the input has a bucket that doesn't exist in the state, so it will add a "create bucket" step to the plan. It will also see that the state has a bucket that isn't in the input, so it will add a "delete bucket" step to the plan.

Maybe another way of saying the same thing is that a resource provider can mark any given field as "forces replacement", and all of the fields that force replacement are the de facto identifiers? I haven't thought through whether these are exactly equivalent.
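The matching logic described above can be sketched in a few lines. This is a hypothetical simplification, not Terraform's actual internals: resources are keyed by (type, identifier field) rather than by a logical name, and the plan falls out of a set comparison.

```python
# Sketch of planning against state keyed by (type, identifier) instead of
# a logical name. All names and the state shape here are illustrative.

def plan(desired, state, id_field="name"):
    """Compute create/update/delete steps by matching on the identifier field."""
    def key(resource):
        return (resource["type"], resource["properties"][id_field])

    desired_by_key = {key(r): r for r in desired}
    state_by_key = {key(r): r for r in state}

    steps = []
    for k, r in desired_by_key.items():
        if k not in state_by_key:
            steps.append(("create", r))                  # new identifier: create
        elif state_by_key[k]["properties"] != r["properties"]:
            steps.append(("update", r))                  # same identifier, changed props
    for k, r in state_by_key.items():
        if k not in desired_by_key:
            steps.append(("delete", r))                  # identifier gone: delete
    return steps
```

Note how renaming the identifier field produces a create plus a delete, exactly the destroy/recreate behavior the thread is complaining about.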


The "identifier" is often something that's computed later or returned from the API. Think about something like an ec2 instance - the identifier is the instance ID that's returned from AWS. You can have many instances that basically look identical so how do you differentiate which one this logical resource is referencing?

And back to the S3 bucket use case: sometimes you want uniqueness in your name, so you use a prefix instead of specifying the whole name. How do you determine which bucket that resource is referencing if there are multiple buckets matching the prefix?

I hear what you're saying in terms of wanting state management to be simplified, but pretty much every IaC solution uses this explicit logical resource -> physical resource mapping in state.


Yeah, moving objects around the config is common if you want to keep it organized, and it requires manual actions that effectively need a global lock on the stack (and Terraform has no built-in feature to actually take this lock). That makes it basically impossible to implement a fully automated production change pipeline with Terraform.


Moreover I can never, ever, remember the syntax for moving objects around the config. It's really painful.

Edit: the aliases would have to handle moving as well as renaming. You could just have aliases in a global namespace: add `alias = "portable-elb"`, do one `terraform apply`, and then you can pick up that config, drop it anywhere else, and it will be moved for you. It wouldn't even need to do a full `apply`, just a local JSON manipulation.
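For the record, the incantation in question is `terraform state mv` (addresses here are illustrative):

```shell
# Rename a resource within the same state:
terraform state mv aws_s3_bucket.old aws_s3_bucket.new

# Move a resource into a module:
terraform state mv aws_s3_bucket.old module.storage.aws_s3_bucket.new
```

Each invocation rewrites the state file only; it doesn't touch the actual cloud resources, which is why it has to be coordinated by hand with the matching config change.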


> application engine [vs] tool to generate the config

I get it from HashiCorp's perspective though.

A robust application engine with a suboptimal config generator is a viable product.

A suboptimal application engine with a brilliant config generator is not.

So given limited resources, former gets the dev grease.


This is a false dichotomy.

You can generate these configs really easily with any off-the-shelf programming language for a small fraction of the effort they’ve put into HCL + all of the stuff on top that makes HCL the shitty programming language that it is.

Even if you insist on building your own programming language for this purpose, Hashicorp could’ve saved themselves a lot of work by looking at the prior art of the last 70 years of programming language history.

In other words, if they just picked, say, JavaScript from the start they could have saved a bunch of time and energy and put that into their application engine.
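As a sketch of the point: a few lines of an ordinary programming language can emit Terraform-compatible JSON, loops and all, with no bespoke language needed. The bucket names here are made up:

```python
import json

# Generate a Terraform-compatible JSON config from plain Python data.
# Resource names are illustrative.
buckets = ["logs", "assets", "backups"]
config = {
    "resource": {
        "aws_s3_bucket": {
            name: {"bucket": f"mycorp-{name}"} for name in buckets
        }
    }
}

print(json.dumps(config, indent=2))
```

A dict comprehension replaces what would be `count` or `for_each` gymnastics in HCL.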


> You can feed JSON to Terraform however this falls over if you need dependencies for output values

This is what I've started doing with Jsonnet for generation, and also exactly why I've stopped doing it.


I'm not sure I follow exactly what you're missing. `${aws_instance.example.x}` as a string value creates the same dependency as it would via HCL when used with JSON.
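For example, in Terraform's JSON syntax the interpolation string creates the dependency edge just as it would in HCL (resource names here are illustrative):

```json
{
  "resource": {
    "aws_eip": {
      "example": {
        "instance": "${aws_instance.example.id}"
      }
    }
  }
}
```

Terraform parses the `${...}` reference out of the string value and orders the EIP after the instance accordingly.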


Same here, I don't see how outputs are being treated any differently by Terraform than any other .tf file written in HCL. I'm not saying it's not possible, but I haven't experienced a failure mode there yet.


Thanks for the hint, now I'm not sure what went wrong when I tried something like this. I should read up on this more.


What are some of the tools that do this? The only ones I know of are Scalr and Pulumi.



