greencurry43's comments

Author here. Sounds like I won you over :)

Thanks for the feedback here. I modified the post a bit to not show the arrays, because the library doesn't return arrays but rather async generators. The power of this is that you have a single interface for interacting with all data, whether there are zero, one, or many results.

This library is a proof of concept. In a perfect world, there would be a nice wrapper around this query result for interacting with the data.

  queryResult.get('order').first()
This would provide a single order by calling the first yield for the generator. No need to worry about the underlying result.
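To make that idea concrete, here's a minimal sketch of what such a wrapper might look like. The `QueryResult` class, `get`, `first`, and `all` names are all assumptions for illustration, not the library's actual API:

```javascript
// Hypothetical wrapper over async generator results. All names here
// are illustrative, not Graphable JSON's real interface.
class QueryResult {
  constructor(generators) {
    // map of field name -> async generator
    this.generators = generators;
  }

  get(name) {
    const gen = this.generators[name];
    return {
      // Pull only the first yielded value, if any
      async first() {
        for await (const value of gen) return value;
        return undefined;
      },
      // Collect everything when the caller does want an array
      async all() {
        const values = [];
        for await (const value of gen) values.push(value);
        return values;
      },
    };
  }
}

// Usage: the same interface works for zero, one, or many results
async function* orders() {
  yield { id: 1, total: 20 };
  yield { id: 2, total: 35 };
}

const result = new QueryResult({ order: orders() });
result.get('order').first().then((order) => console.log(order.id)); // 1
```

The point is that the caller never has to care whether the underlying generator yields once or a thousand times.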

With all of this said, it doesn't have to be this way. I'm exploring ideas here. The important concept is that GraphQL can be used to express the expectations of the client and a library like Graphable JSON can go find the needed data in the API. This gives the expressiveness of GraphQL with the benefits of REST. The sky is the limit. Thanks again for the feedback!


I wrote my own feedback below but left out this crucial piece you've mentioned here. I think to add to it, my family treats it as if I'm not there. I can't watch the kiddos for a few minutes or help get the laundry during work time. It's easy to do a lot of small things and have a completely disjointed day.

And the opposite is true, when I'm off work I'm off. My team can't treat me as always available because I'm remote. That takes time to establish.

Good point to mention.


I've had successes and failures over four years of working remotely, so that's my disclaimer. But the key for me has been to have a dedicated space with a dedicated computer for work. If it's easy to work on the couch, you'll do it, so try to never work outside your area (if you can). Only work in your work area.

Also, protect your personal time. Do simple things, like marking out your lunch time on your calendar and not letting meetings creep into it—you'll be surprised how much this helps. Stop work at a set time, like 5pm, and propose alternate times when people schedule things after it. People are usually open to moving meetings and may not have considered your stop time. Time zones make this trickier, since it's hard to keep track of when lunch falls for everyone. Meetings during personal time should be the exception.

Try to keep work things off your phone, though that's easier said than done with things like Slack or email. They make it too easy to work without feeling like you're working. I'm failing here currently :)

All of the above are things you can do as an individual, but success will also depend on your team. If you're the only remote person, it will be harder than if everyone is distributed. You'll need to ask people to write more down and act as if everyone is remote; otherwise you'll miss out. I've found that if you're the only remote person, it helps when your teammates work from home a day or two a week so they have to work remotely too.

To summarize, set clear boundaries and expectations with yourself and your team about how and when you work, and try to move your team to be remote-first.


Author here, great link—thanks! Do you know of any coverage tools that implement this?


For C/C++, and for various tools that generate code from state descriptions (like MATLAB/Simulink), there are tools specific to the auto/aircraft/safety industries that I won't mention by name here, but none are FOSS.

Anyway, I think about this stuff all the time and really enjoyed your article. It doesn't seem to be a popular topic to cover in such depth, so thanks for writing it!


I wrote a little library for building DSLs in JavaScript [0] because of this very issue—JSON is not good for this kind of stuff. I want to be able to create small DSLs to solve problems rather than squeezing the language into a JSON format. I want the full power of a language AND the ability to serialize the semantics to send over the wire. Maybe we'll move toward that one day.

[0] https://github.com/smizell/treebranch


But you shouldn't want the full power of a Turing complete language in a config language. All of that belongs in application-space, configuration is supposed to be simple and static, with as little logic as possible, preferably none.


The difference is that I am not proposing a Turing complete config language, but rather proposing to build configurations with plain old JavaScript (or whatever language you want). I think a good read about this thinking is Martin Fowler's article on language-oriented programming [0], especially the section about internal DSLs.

Another interesting read is about the configuration complexity clock [1]: over time we move from hard-coding things to building configurations, and then come full circle to hard-coding things again. I like to think internal DSLs close that loop well.

[0] https://www.martinfowler.com/articles/languageWorkbench.html

[1] http://mikehadlow.blogspot.com/2012/05/configuration-complex...
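To sketch what "configuration with plain old JavaScript" might look like in practice, here's a tiny illustrative example. The `route` and `apiRoute` helpers and all the field names are made up for illustration:

```javascript
// A minimal sketch of an internal DSL for configuration in plain
// JavaScript. All names here are illustrative, not from any library.
const route = (path, handler) => ({ path, handler });

// Plain functions give you variables, loops, and reuse for free,
// which a static JSON file cannot express.
const apiRoute = (name) => route(`/api/${name}`, `${name}Handler`);

const config = {
  port: 8080,
  routes: ['orders', 'customers'].map(apiRoute),
};

// The result is still plain data you can serialize over the wire
console.log(JSON.stringify(config.routes[0]));
// {"path":"/api/orders","handler":"ordersHandler"}
```

You get the full power of the host language while building, but the output stays simple, static data.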


That’s exactly what Lua was created for: to be a Turing complete configuration language. You can remove the whole standard library, leaving only the built-in operators, variables, functions, and control flow. To some it may seem ugly to define a configuration this way, but it allows configurations to reduce duplication and add more dynamism. It's a trade-off: simplicity for power. But I guess the logical end of that is evolving the configuration into a scripting language, which is how Lua got to where it is today.


>You can remove the whole standard library though, leaving only the built in operators, variables, functions, and control flow.

I just choose not to use them, and limit myself to tables or an API that generates tables when using it for config files.


I made my own little language called Geneva [0] based on similar ideas, but it acts as code and can be parsed as JSON. I also came up with a spec for doing this with HTML [1] (though there's no code for it yet).

[0] https://github.com/smizell/geneva

[1] https://github.com/smizell/janeml


One of the most transformative things I've come across for how to structure and test code has been Gary Bernhardt's talk on Boundaries [0]. I've watched it at least ten times. He also has an entire series on testing where he goes deeper into these ideas.

In this video, he talks about a concept called functional core, imperative shell. The functional core is the code that contains your core logic and can be easily unit tested because it only receives plain values from the outside world. The imperative shell is the part that talks to disks, databases, APIs, UIs, etc. and builds those values for the core to use. I'll stop there—Gary's video will do it 100x better than I can here :)

[0] https://www.destroyallsoftware.com/talks/boundaries
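A tiny sketch of the idea, with made-up names (this is my illustration, not Gary's code): the core is a pure function over plain values, and the shell does the I/O at the edges.

```javascript
// Functional core: pure logic, plain values in, plain values out.
// Trivially unit testable with no mocks.
const applyDiscount = (order, percent) => ({
  ...order,
  total: order.total * (1 - percent / 100),
});

// Imperative shell: handles the messy outside world (here faked as
// injected functions) and hands plain values to the core.
function checkout(fetchOrder, saveOrder) {
  const order = fetchOrder();                  // I/O at the edge
  const discounted = applyDiscount(order, 10); // pure logic in the middle
  saveOrder(discounted);                       // I/O at the edge
  return discounted;
}

// The core needs no test doubles at all
console.log(applyDiscount({ id: 1, total: 100 }, 10).total); // 90
```

The tests for the core are plain value-in, value-out assertions; only the thin shell needs integration-style tests.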


You might find it useful to spend some time looking into the hypermedia constraint of REST itself (i.e. hypermedia as the engine of application state). The point of REST is to represent state machines by including hypermedia links in the responses of the API. Those links describe the state transitions a client may invoke in a given state.

Instead of mapping CRUD operations to HTTP, then, you would map your application semantics to HTTP methods via link relations. Link relations let you go as far past CRUD as you like, depending on your domain, and allow you to describe your states and state machine.

A good way to reason about a hypermedia state machine is in the context of HTML. Consider a todo app. You might enter the application and get an empty list of todo items. If you have permission to add a todo, you might see a "create todo" HTML form. Once you use that form, the new todo has a form called "mark complete." Once invoked, that todo has new transitions called "mark incomplete" and "archive." Once archived, you might see a new form for "unarchive." All of this captures the state machine and its transitions in the REST API itself using domain-specific semantics.

Of course, there are other ways of solving this problem, but REST with hypermedia is a great way to work with state machines. There are lots of hypermedia JSON formats out there if you're interested in exploring (e.g. HAL, Siren, Collection+JSON, etc.).
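As a rough sketch, one state of the todo example might serialize like this in a loosely Siren-shaped response (the field names, paths, and action names here are illustrative, not from any specific format):

```javascript
// Hedged sketch of a hypermedia response for one state of the todo
// state machine. Shapes and names are illustrative only.
const todoResponse = {
  properties: { title: 'Buy milk', completed: false },
  // The available transitions depend entirely on the current state;
  // an archived todo would list different actions here.
  actions: [
    { name: 'mark-complete', method: 'PUT', href: '/todos/1/complete' },
    { name: 'archive', method: 'POST', href: '/todos/1/archive' },
  ],
  links: [{ rel: ['self'], href: '/todos/1' }],
};

// A client discovers what it can do by inspecting the actions,
// not by hard-coding URLs or CRUD conventions
const canComplete = todoResponse.actions.some((a) => a.name === 'mark-complete');
console.log(canComplete); // true
```

When the server omits an action, the client simply doesn't render that transition—the state machine lives in the responses.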


I've seen some use a versioned media type like GitHub does [0] or add a version parameter like "vnd.example-com.foo+json; version=1.0" [1]. People may use this version for the entire API or for a specific resource.

[0] https://developer.github.com/v3/media/

[1] http://blog.steveklabnik.com/posts/2011-07-03-nobody-underst...
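In practice a client would put that media type in the `Accept` header. A small sketch, using the example vendor type from above (the helper function is made up for illustration):

```javascript
// Build an Accept header carrying the version in the media type
// rather than the URL. The vendor tree name comes from the example
// above; the helper itself is illustrative.
function acceptHeaderFor(resource, version) {
  return `application/vnd.example-com.${resource}+json; version=${version}`;
}

const headers = { Accept: acceptHeaderFor('foo', '1.0') };
console.log(headers.Accept);
// application/vnd.example-com.foo+json; version=1.0

// A client would then send it with the request, e.g.:
// fetch('/foo', { headers })
```

The URL stays stable; the version rides along in content negotiation.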


I personally like to think about managing change well rather than versioning an entire API. With this approach, API designers provide instructions on how the API will evolve over time, how features will be deprecated, and how this process will be communicated to API consumers. Change management also includes recommendations for client developers on how to evolve along with the API and build clients that don't fail when something as small as a JSON property is absent or added. You don't necessarily need version numbers to do this.

An API version says to me that one day the entire API may introduce a major change that is separate from the current API. If you plan to never introduce a major change like this, you may not need a "/v1" in the URL.
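The client side of this is often called a "tolerant reader." A quick sketch of the idea (the payload shape and field names are made up):

```javascript
// A tolerant reader: missing properties get sensible defaults and
// unknown properties are ignored, so small API changes don't break
// the client. The shape of `raw` is illustrative.
function readOrder(raw) {
  return {
    id: raw.id,
    total: raw.total ?? 0,           // may be absent in older responses
    currency: raw.currency ?? 'USD', // added later; default if missing
  };
}

// Old and new payloads both work without a version bump
console.log(readOrder({ id: 1 }).currency); // 'USD'
console.log(readOrder({ id: 2, total: 5, currency: 'EUR', extra: true }).total); // 5
```

Pair this with server-side rules like "only add, never remove or rename" and many APIs never need a /v2 at all.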

Kudos to the author for putting together a nice article :)


API consumers might also not be very flexible when it comes to adapting to API changes: some might have developed fragile, inflexible code that will go up in flames if you even add a new field to an object (leading to support calls and escalations). This is why, depending on your domain, you might need very strict versioning requirements.

Some APIs will not change the version number even when new objects or fields are added, while others will. It really depends on what space you are in and your customers' needs, and it should be taken into account when designing your API.


I think you are right here. It takes two to tango, and if the clients are tightly coupled to the API, the best intentions for evolving slowly will fail. :)


Never say never. It's generally impossible to predict how an API will have to evolve over time in response to changing requirements.


That's true. However, I think this falls under YAGNI [0]. If you get to a point where you have to change an entire API rather than evolve what you have, you should consider it a new, separate API rather than instructing all client developers to plan for it up front. Plan for evolvability and handling changing requirements instead.

[0] https://martinfowler.com/bliki/Yagni.html


And how would you name this new API? Bob? Or version 2?


Name it whatever you want. Point is, you don't have to plan for such a drastic change when a separate domain/subdomain will solve it for you.


As an API consumer, clean cuts are much, much more manageable than a continuous stream of small breakages.


True. I would consider a continuous stream like that to not be helpful change management. But if you have to have continuous breaking changes like that, you would want clean cuts. My point is, find ways to evolve your API without breaking things.

