
Such a scheme depends heavily on whether the cloud providers can efficiently multiplex their bare-metal machines to run these jobs concurrently. Ultimately, a particular computing job takes a fixed amount of CPU-hours, so there are definitely no savings in such a scheme in terms of energy consumption or CPU-hours. On top of that, overhead arises when a job can't be perfectly parallelized: e.g. the same memory content being replicated across all executing machines, synchronization, the cost of starting a ton of short-lived processes, etc. All of this overhead adds to the CPU-hours and energy consumption.

So, does serverless computing reduce the job completion time? Yes, if the job is somewhat parallelizable. Does it save energy, money, etc.? Definitely not. The question is whether you want to make the tradeoff: how much more energy are you willing to pay for in order to cut the job completion time in half? It's like batch processing vs. realtime operation: the former provides higher throughput, while the latter gives the user lower latency. Better cloud infrastructure (VMs, schedulers, etc.) makes this tradeoff more favorable, but the research community has only just started looking at this problem.
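A toy model makes the tradeoff concrete. The sketch below is plain Python with made-up numbers (the serial fraction, per-worker startup cost, and synchronization cost are all hypothetical, not measurements of any real platform): the useful work is fixed at 10 CPU-hours, and as the worker count grows, completion time drops at first and then climbs back as coordination dominates, while total CPU-hours (the energy/billing proxy) only ever grow.

    # Toy model of the completion-time vs. CPU-hours tradeoff.
    # All parameters are hypothetical, for illustration only.

    def completion_time(work, serial_frac, workers, startup, sync_per_worker):
        """Wall-clock time: serial part + parallel part + worker startup
        + a crude synchronization term that grows with worker count."""
        serial = serial_frac * work
        parallel = (1 - serial_frac) * work / workers
        return serial + parallel + startup + sync_per_worker * workers

    def total_cpu_hours(work, workers, startup, sync_per_worker):
        """Energy/billing proxy: the useful work is fixed, but every
        worker pays its own startup and synchronization overhead."""
        return work + workers * (startup + sync_per_worker * workers)

    WORK = 10.0          # CPU-hours of useful computation (fixed)
    SERIAL_FRAC = 0.05   # fraction that cannot be parallelized
    STARTUP = 0.01       # CPU-hours to start one short-lived worker
    SYNC = 0.0005        # crude per-worker synchronization cost

    for n in (1, 10, 100, 1000):
        t = completion_time(WORK, SERIAL_FRAC, n, STARTUP, SYNC)
        e = total_cpu_hours(WORK, n, STARTUP, SYNC)
        print(f"{n:>5} workers: ~{t:6.2f} h to finish, ~{e:7.2f} total CPU-hours")

With these (invented) constants, 10 workers cut completion time from ~10 h to ~1.5 h for ~1% more CPU-hours, 100 workers get to ~0.7 h for ~60% more, and 1000 workers are both slower and far more expensive. That's the shape of the tradeoff described above.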


