If you don't need concurrency, then you simply don't need to define any concurrency segmentation. But the real world is wildly concurrent, and most programs will eventually benefit from some degree of concurrency (especially when you can leverage that concurrency into parallelism), so it's beneficial to work in an environment where that improvement can be incremental rather than "we need to do a complete rearchitecture to support n=2".
"letting it crash" in BEAM terms often means "simply redo the process". The difference is you end up defining your "transaction" (to borrow database terminology) by concurrency lines. What makes it so pleasant in practice is that you take a bunch of potential failure modes and lump them into a single, unified "this task cannot be completed" failure mode, which includes ~impossible to anticipate failure states, and then only have to expressly deal with the failure modes that do have meaningful resolutions within a task.
With that understanding in mind, I'd argue that nearly all business cases benefit from the BEAM. It's mostly one-off scripts and throwaway tools that don't.
"letting it crash" in BEAM terms often means "simply redo the process". The difference is you end up defining your "transaction" (to borrow database terminology) by concurrency lines. What makes it so pleasant in practice is that you take a bunch of potential failure modes and lump them into a single, unified "this task cannot be completed" failure mode, which includes ~impossible to anticipate failure states, and then only have to expressly deal with the failure modes that do have meaningful resolutions within a task.
With that understanding in mind, I'd argue that nearly all business cases benefit from the BEAM. It's mostly one-off scripts and throwaway tools that don't.