
Or it's a much simpler problem: they didn't make it even semi-fast because it didn't need to be semi-fast.

When "hundreds of times the usual level" is still only 50 page loads per second, and 10 milliseconds of CPU per page would be extreme overkill for anything written in a reasonable way, it actually is straightforward.
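The back-of-envelope math here is easy to check. Using the comment's own (deliberately pessimistic) figures of 50 page loads per second at 10 ms of CPU each:

```python
# Sanity-check of the numbers above: 50 page loads/s at 10 ms of CPU
# each consumes only half of a single core. Figures are the comment's
# illustrative worst case, not measurements.
requests_per_second = 50
cpu_seconds_per_request = 0.010  # 10 ms, already called "extreme overkill"

cores_needed = requests_per_second * cpu_seconds_per_request
print(f"CPU cores needed: {cores_needed:.1f}")  # half of one core
```

Even at that "extreme overkill" cost per page, the load fits comfortably on one core.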



It's not just CPU though, but also IO - I've worked with horrible enterprise systems before that had response times measured in seconds.


Even 5-second response times will work if the requests can overlap. If the system can't do things in parallel, then we have issues much more fundamental than "performance", and there's no defending it as a competent system.

(That is not to say it's necessarily the devs' fault.)
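The overlap point can be sketched concretely: if requests spend their time waiting on IO rather than burning CPU, many can be in flight at once, and total wall time is roughly one latency period rather than the sum of them. A minimal illustration (hypothetical code, not from the thread; the 5 s latency is scaled down 100x so it runs quickly):

```python
# Sketch: overlapping slow IO-bound requests. With N requests in flight,
# throughput is roughly N / latency, so even multi-second latency can
# sustain reasonable load - provided the system actually overlaps work.
import asyncio
import time

LATENCY = 0.05  # stand-in for a 5 s backend call, scaled down 100x

async def slow_backend_call(i: int) -> int:
    await asyncio.sleep(LATENCY)  # waiting on IO, not using CPU
    return i

async def serve(n: int) -> float:
    start = time.monotonic()
    await asyncio.gather(*(slow_backend_call(i) for i in range(n)))
    return time.monotonic() - start

# 100 overlapped requests finish in about one latency period,
# not 100 of them (which would take 100 * LATENCY seconds serially).
elapsed = asyncio.run(serve(100))
print(f"100 overlapped requests took {elapsed:.2f}s "
      f"(serial would be {100 * LATENCY:.0f}s)")
```

A system that handled those same calls strictly one at a time would take the full serial time, which is the "more fundamental than performance" problem the comment is pointing at.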


I don't mean to defend it too much; realistically it should be possible, with relative ease, to handle much more traffic than that. But my point is that in the enterprise and government worlds, things are often not as simple as you'd think.

Aside from potentially having to interface with dozens of unreliable, painfully slow SOAP-based web services, everything is often hosted on creaking, over-subscribed VMware hosts, in VMs that would be under-specced regardless.

There is also often a "governing body" that severely restricts your tech stack choices.

Want to use Postgres? Nope, our standard is SQL Server - 2008 edition, actually!

Want to use Python/Ruby/Elixir/Clojure/Kotlin? None of that hipster nonsense here, we use good ole Java/VB.NET here!

Message queue, you say? It's MSMQ (Microsoft Message Queuing) with distributed COM all the way down here!

"Containers"? What's one of those? You'll get a crappy VM with 1 vCPU and 1GB of RAM, and you'll thank me for it! etc...

As a dev, it's horrible and soul-destroying to work under such limitations, but if you have no choice...



