Hacker News | badoongi's comments

Nice, this somewhat reminds me of Apportable: https://www.ycombinator.com/companies/apportable


For sure. Pretty similar problem space, very different implementation and target market. Apportable wasn’t a transpiler (and boy were we proud of that); it was a reimplementation of the iOS frameworks on top of a replacement NDK (Android’s libc was mediocre at best back then, with some real nasty behaviors in its (dl)malloc, for instance). Similar to WINE. It always targeted games, so there was never much effort to port the UIKit controls, but OpenGL and CoreGraphics were supported. It also had a compiler extension that let you access the whole Android SDK from ObjC, and a tool for generating ObjC APIs from JARs. The goal was to make ObjC the one true mobile dev language. I applaud the effort to do something similar for Swift, even if it does involve transpilation.


I've been working through a legacy Starlark codebase recently. My take is that the language, combined with Bazel's phased execution model, makes it very challenging to debug.

One example: as far as I can tell, there is no way to get the Starlark call stack, which is a tool you really want for printf debugging (your main debugging tool with Starlark). I can't find it right now, but I believe I saw the maintainers say that they don't want to expose the call stack for performance reasons, which I find odd: is the bottleneck in slow Bazel builds anywhere close to Starlark execution time?
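For what it's worth, the closest things I've found are `print()` (which at least tags its output with the file and line it was called from) and `fail()` (which aborts evaluation but does dump a Starlark traceback). A sketch, with made-up helper names:

```starlark
# debug.bzl -- hypothetical helper, for illustration only

def _check_deps(name, deps):
    # print() emits a DEBUG line tagged with this file and line number,
    # which is about the only breadcrumb you get.
    print("expanding {} with deps {}".format(name, deps))

    if len(deps) > 50:
        # fail() aborts the build, but it does print a Starlark
        # traceback, so it can double as a crude "where am I?" probe.
        fail("too many deps for {}".format(name))
```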

Also, looking at a non-trivial Starlark codebase that didn't mature well, I can't help but wonder whether it wouldn't have been better to give engineers less flexibility in their build configuration. Yes, they'll have some copy-pasted configuration snippets that could look tighter with a macro, but you are less likely to find yourself, 5 years later, looking at a monster that is slow and hard to debug.


I (Bazel SWE at Google) agree that too much flexibility is a problem. It's difficult for tools to work on complex Starlark code, so we encourage more copy and paste, especially in BUILD files. But I don't think Blaze could have scaled to Google's needs without some amount of programmability. It's not just that it "looks tighter with a macro": when you have a rule that gets used to create millions of targets, you can't just copy and paste. You need to be able to change it systematically.
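To illustrate the trade-off (names here are made up): a macro like this lets one definition stamp out many targets, so a systematic change lands in one place instead of in every BUILD file:

```starlark
# my_rules.bzl -- hypothetical wrapper macro, for illustration

load("@rules_cc//cc:defs.bzl", "cc_test")

def my_cc_test(name, srcs, deps = [], **kwargs):
    """Wraps cc_test so org-wide defaults can change in one place."""
    cc_test(
        name = name,
        srcs = srcs,
        # A policy change (a new dep, a copt, a tag) edits this macro
        # once instead of the thousands of BUILD files that use it.
        deps = deps + ["//testing:org_defaults"],
        **kwargs
    )
```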


Fascinating project! I'm curious what the business model is. It's listed on Crunchbase that you raised $12M, so I'm assuming you do have plans to make money?


Curious as well. Searching around, I found this documentation on their ecosystem [0], which may shed some light on the organizational structure. It may be that they are organized as a DAO? From the intro:

> Radworks is a community dedicated to cultivating internet freedom.

They do not shy away from cryptocurrency technology, though AFAICS that is not directly applied to the Radicle project. Another project of Radworks is Drips [1], to help fund open source.

[0] https://docs.radworks.org/community/ecosystem

[1] https://www.drips.network/


I see testcontainers being used in ways that make the test code read like typical unit tests with fake implementations for system components. That is misleading, because these are really integration tests. In essence, this is another DSL (per language) for managing containers locally, and it comes in addition to whatever system is actually used to manage containers in production for the project.
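For context, this is roughly what that per-language DSL looks like in Go with testcontainers-go (a sketch; it assumes the third-party `github.com/testcontainers/testcontainers-go` module and a running local Docker daemon, so it won't run in isolation):

```go
package example

import (
	"context"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestWithRedis(t *testing.T) {
	ctx := context.Background()

	// The "DSL": declare an image, ports, and a readiness condition,
	// and the library drives the local container runtime for you.
	redis, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image:        "redis:7",
			ExposedPorts: []string{"6379/tcp"},
			WaitingFor:   wait.ForLog("Ready to accept connections"),
		},
		Started: true,
	})
	if err != nil {
		t.Fatal(err)
	}
	defer redis.Terminate(ctx)

	host, _ := redis.Host(ctx)
	port, _ := redis.MappedPort(ctx, "6379")
	_ = host + ":" + port.Port() // point the real client here
}
```

It reads like a self-contained unit test, but a real Redis container spins up behind the scenes, which is exactly the unit-vs-integration blur described above.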


This is also their $25M Series A announcement, coming 10 months after their $10M seed.


Rumored that this actually happened 6 months ago.


Often when Hyrum's Law is mentioned it's treated as a hard rule, as if semver has no justification because any change is a breaking change.

In practice, though, most software packages are not used extensively enough that every observable behavior is depended upon (and Hyrum himself opens with this caveat: "With a sufficient number of users ...").


> I can run our entire backend with a single command that will work on any developer's box.

Curious: wouldn't `go run` give you the same? Pure Go code is supposed to be portable, unless you have cgo deps, I guess?

> I can push a reproducible Docker image to Kubernetes with a single Bazel command.

That's definitely an upside over the likely default of a combination of Dockerfiles and scripts/Makefiles. Is it worth bringing in the massive thing that is Bazel? Depends, I guess.

I'm curious: would you say your experience with Go IDEs / gopls is degraded? Did you do anything special to make it good? I often feel like development is more clunky, and I frequently just give up on the nice-to-haves of a language server. E.g., some dependencies in the IDE aren't properly indexed; I could probably get Bazel to do some fetching, reindex, and get it working, but that takes 3-4 minutes, so I often choose to live with things appearing "broken" in the IDE and getting fewer IDE features.


> Curious wouldn't `go run` give you the same?

We use protobufs and pggen. Bazel transparently manages the codegen from proto file to Go code.
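For anyone curious what that transparent codegen looks like, this is the usual rules_go pattern (target names and import path are made up):

```starlark
# BUILD.bazel -- illustrative target names

load("@rules_proto//proto:defs.bzl", "proto_library")
load("@io_bazel_rules_go//proto:def.bzl", "go_proto_library")

proto_library(
    name = "user_proto",
    srcs = ["user.proto"],
)

# Bazel regenerates the Go stubs whenever user.proto changes;
# no checked-in generated code, no manual protoc step.
go_proto_library(
    name = "user_go_proto",
    importpath = "example.com/myapp/user",
    proto = ":user_proto",
)
```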

> would you say your experience with golang IDEs / gopls is degraded?

Yes, that's our biggest pain point with Go and Bazel. I haven't been able to coax IntelliJ into debugging Bazel-managed binaries. To enable IntelliJ code analysis, we copy the generated code into the src directory (with a Bazel rule auto-generated by Gazelle) but don't add it to Git.

I've tried the IntelliJ-with-Bazel plugin a few times, but I've always reverted to stock IntelliJ.


My take: avoid Bazel as long as you can. For most companies the codebase is not big enough to actually need distributed builds. If you've hit that problem, Bazel is probably the best option available today, and if you're that big you can probably spare the few dozen headcount needed to make the Bazel experience at your company solid.

Bazel takes on dependency management, which is probably an improvement for a C++ codebase, where there is no de-facto package manager. For modern languages like Go, where a package manager is widely adopted by the community, it's usually just a pain. E.g., Bazel's offering for Go relies on generating "Bazel configurations" for the repositories to fetch. This alternative definition of dependencies is not what the existing Go tooling expects, so to get the dev tooling working properly you end up generating one configuration from the other, leaving two sources of truth and pain whenever there's a mismatch.
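Concretely, the two sources of truth are go.mod on one side and Gazelle-generated repository rules on the other; something like this (repository name and checksum are placeholders):

```starlark
# deps.bzl -- the kind of output produced by
# `bazel run //:gazelle -- update-repos -from_file=go.mod`,
# which must be re-run whenever go.mod changes or the two drift.

load("@bazel_gazelle//:deps.bzl", "go_repository")

def go_dependencies():
    # Mirrors one `require` line in go.mod.
    go_repository(
        name = "com_github_pkg_errors",
        importpath = "github.com/pkg/errors",
        sum = "h1:placeholder-from-go.sum",
        version = "v0.9.1",
    )
```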

Bazel hermeticity is very nice in theory. In practice, many of the toolchains used by companies adopting Bazel are non-hermetic, leaving many of them stuck in a "migration to Bazel remote execution" forever.

Blaze works well in Google's monorepo, where all the dependencies are checked in (vendored). The WORKSPACE file was an afterthought when Bazel was open-sourced, and in practice the process of fetching remote dependencies becomes a pain in big monorepos: you just want to build a small Go utility with `bazel build //simple:simple`, and you end up waiting for a whole bunch of Python dependencies you don't need to be downloaded.

And this is all before talking about JavaScript: if your JS codebase wasn't originally designed the way Bazel expects, you're probably in for some fun.

