mckilljoy's comments

I just started reading this book (I'm maybe halfway through), and it's very interesting!

Some of the technical analogies are a little weak, but all the quotes and anecdotes are great.


Yes, I agree. It's also a fascinating window into what an achievement Windows NT was, and I say that as someone who does not use Windows anywhere.

By mid-1993, Dave Cutler and his team had released a multi-threaded, pre-emptively multitasking, purely 32-bit version of Windows that supported the Windows API and ran across three different hardware architectures (two of them pure RISC processors). Apple, often lauded for being ahead of the curve, wouldn't release anything remotely matching those capabilities for eight years, until OS X came out. When Apple developers were just starting to port over to Cocoa and Carbon, Windows had reached deep maturity and stability with NT and its successors (still labelled as NT versions internally), and basically all Windows software ran on it natively and had been doing so for years.

Windows isn't exactly my cup of tea but Microsoft was way, way ahead of the curve with NT. It was a superb achievement and I strongly recommend reading Show Stopper to get a sense of what the team sacrificed, went through, and achieved.


I ran the very first beta of Windows NT and it was glorious. I had never before experienced anything like it: you could compile and run your C code in a graphical debugger, and it would catch bad pointer errors by breaking the program right where the bug was. Fix, recompile, rerun.

16-bit Windows machines would crash and reboot on errors like that, leaving you with a printf error log to find the bug. Sun and Silicon Graphics workstations did not have software like that, and they were ten times more expensive. Linux was just an experiment then.

I was never able to go back to Windows 3.1 or 95 after that; I would seek out programs that ran on NT for everything. I still do. Poorly written software crashed, but the Windows NT OS itself was immensely stable on the right hardware, from the very first beta.


The only thing that (IMO) came close was OS/2.

I really wish Warp had beaten Windows 95. It was astounding. The kicker was that the Windows VM in OS/2 was, in some ways, actually more stable than the native apps [1].

> Ironically, if you never ran native OS/2 applications and just ran DOS and Windows apps in a VM, the operating system was much more stable.

[1]: http://arstechnica.com/business/2013/11/half-an-operating-sy...

EDIT: From the same article:

> Meanwhile, Dave Cutler’s team at Microsoft already shipped the first version of Windows NT (version 3.1) in July of 1993. It had higher resource requirements than OS/2, but it also did a lot more: it supported multiple CPUs, and it was multiplatform, ridiculously stable and fault-tolerant, fully 32-bit with an advanced 64-bit file system, and compatible with Windows applications. (It even had networking built in.) Windows NT 3.5 was released a year later, and a major new release with the Windows 95 user interface was planned for 1996. While Windows NT struggled to find a market in the early days of its life, it did everything it was advertised to do and ended up merging with the consumer Windows 9x series by 2001 with the release of Windows XP.


OS/2 had a much nicer API (such as a message loop that did not need a window; see the sketch below), and it was very stable, but it was also more expensive. The early versions had no GUI and hardly any applications. I tried an early version, but the only software I could find for it on my student network was a FORTRAN compiler :-). Pricing, the lack of application software, and the alternative of Microsoft Office on Windows 95 killed it.
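
For the curious, here's a minimal sketch of what that looks like under OS/2 Presentation Manager, written from memory of the PM API rather than taken from anything above, so treat the details as approximate: every thread gets its own message queue, and it can pump messages without ever creating a window.

    /* Hedged sketch (from memory, not from the thread): an OS/2 PM
     * message loop for a thread that owns no window at all. */
    #define INCL_WIN
    #include <os2.h>

    int main(void)
    {
        HAB hab = WinInitialize(0);          /* anchor block for this thread */
        HMQ hmq = WinCreateMsgQueue(hab, 0); /* per-thread queue, no window needed */
        QMSG qmsg;

        /* NULLHANDLE window filter: this thread can still receive and
         * dispatch queue messages, e.g. ones posted by other threads
         * via WinPostQueueMsg. */
        while (WinGetMsg(hab, &qmsg, NULLHANDLE, 0, 0))
            WinDispatchMsg(hab, &qmsg);

        WinDestroyMsgQueue(hmq);
        WinTerminate(hab);
        return 0;
    }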

Personally, I hated Windows 95. It introduced changes that were carried over into Windows NT 4 and made that OS much less reliable. It took many years for the consumer market to move away from 16-bit Windows, during which both OS/2 and Windows NT were niche products.


The OS/2 2.0 fiasco is one of my favorite topics, given that NT originally started life as "NT OS/2". I have a low opinion of it.


And then Microsoft screwed up NT, over Cutler's objections, by putting in crap code from Windows 95 to make it "compatible" with programs that relied on quirks of Windows 95. It took many years to clean up that mess.

(I started with Windows NT 3.51, which was a very nice system. The 16-bit emulation module was entirely optional, and I configured it off. Worked fine, as long as you bought applications certified for NT. In Windows 95, 16-bit mode was an integral part of the system, and kludges such as 16-32 bit thunking were added so that the two modes were not so distinct.)


Did any of that crap code from Windows 95 go in the kernel, or was it just in user space?


A lot of it went into the kernel for Windows XP. It took until Windows 7 to clean up the mess inside.


They moved the entire graphics subsystem into the kernel because it was "too slow" in user mode.


Yes. Then Microsoft hired Mark Russinovich, the guy who ran "ntinternals.com" and demonstrated it wasn't too slow, to shut him up.[1]

[1] http://windowsitpro.com/windows-server/did-microsoft-shut-do...


Not to detract from the achievement, but the transition to NT wasn't quite as rosy as that.

It took many years for apps written for Windows 3.x/95/98/etc. to catch up and run correctly on NT. In the early years, app and game producers had a distinct bias in favour of Win9x (the popular consumer OS), to the detriment of NT (the esoteric, less shiny business/server OS).

The game compatibility situation, in particular, was quite miserable until Windows 2000 ("NT 5.0"). 4.0 had adopted the Win9x UI, but Win2k was arguably the first version that managed to match the consumer-oriented experience of Win9x, in particular with regard to DirectX.


I ran an NT4 / 98 dual boot system for years, and in practice the only applications that required rebooting into 98 were games and things that had a hardware component (scanner, digitizer, ...). So in my recollection the transition was actually quite smooth, games left aside.


One of the things that didn't run on NT4 was the AOL client, which was a pretty big thing for consumers at that time.


> The game compatibility situation, in particular, was quite miserable

Win 2000 already supported DX8 and most games worked fine. The step to WinXP was very small (and interesting for games: DX9 and "compatibility mode" shims). Only the setup routines were sometimes a problem, because they checked for Win9x or even actively detected WinNT and aborted.

Many people with low-end hardware got bad performance, though; that was the real problem. Win95 required 4 MB of memory, Win98 16 MB, WinME 32 MB. People with an old PC tried WinNT4/2000/XP and it ran slowly, no wonder.

The NT line was viewed as a resource hog and as over-architected, with its HAL and with Win32 running as just another subsystem. In 1996, WinNT 4 already needed 32 MB, and Win 2000 128 MB. (I had a notebook with Win2000 and just 128 MB, and it wasn't flying; it barely ran.) WinXP was also viewed as a resource hog for doubling the minimum memory requirement to 256 MB within a year and a half; only with 512 MB did it run really well. (I bought a new PC with 512 MB in 2001 and never looked back at the DOS-based Win9x line. 99% of all games worked just fine, and for the rest there was DOSBox/Bochs/QEMU/VMware.)

I wish one could buy a Win10 build that comes with the Win10 kernel, the Win2000 shell, and none of the spying and tracking crap. A modern OS could still be very fast and consume far fewer hardware resources.


I had a run of dual-processor machines starting in the mid-90s and moved most of my work to NT around NT 3.51 (games and such stayed on Win9x). Knowing that one of my CPUs sat unused under 9x was a pretty strong motivator to boot back into NT, especially for dev purposes.

That said, I too had issues with the betas of NT 3.1. But that didn't stop us from moving all our products to NT, with our first commercial sales/installs of a 32-bit-clean server application in early 1994. In late '94 another company wanted us to port our wares to a high-availability Solaris; I started down that path, but the project got canned when it became apparent that the HA hardware only ran an older version of SunOS/Solaris that didn't support threads, and our application's core was built around multithreading.


If you wanted to see an OS that really was way ahead of the curve, then NeXTSTEP was it.

By mid-1993 it was already on its third version (running on a Motorola 68040, no less) and was already doing everything that NT could.

I really love watching the old NeXT presentations on YouTube: https://www.youtube.com/watch?v=H07Xjom_GQA


Redex runs at compile time, not at install time.


Yes, but it changes the bytecode, and I think Android compiles that bytecode into machine code when you install an app.


If anything, it sounds like it would be faster, because there is less bytecode to compile.

Most likely the problem is that the Facebook app has a lot of bytecode. That's probably why they wrote Redex in the first place: so they would have less bytecode in their final APK.


Any other app takes seconds to install. Facebook takes minutes. I can understand some difference, but not that much.


There was a post on HN [0] a few years back about the Dalvik patch that Facebook used for their Android app, due to the large number of methods [1] the app had. This was causing issues on the older Gingerbread devices, so their solution was to modify the internals of the Dalvik VM while it was running their code to increase the buffer size.

[0] https://news.ycombinator.com/item?id=5321634

[1] https://www.facebook.com/notes/facebook-engineering/under-th...


Which is borderline insane


Their iOS app has over 18,000 classes (https://www.reddit.com/r/programming/comments/3h52yk/someone...), so I can easily believe the Android app is similarly bloated. (To be fair, most of the 18K classes seem to be auto-generated, but that still doesn't lessen the compiler's burden.)


I heard there are something like 400 mobile developers working on the FB apps. I don't know of any way to keep a team that size from producing bloat with every revision.


That seems like a useful engineering problem for Facebook to try to solve, rather than this bullshit rocket science optimization stuff that adds yet more complexity for only incremental gains.


Compiler optimizations are not "bullshit rocket science optimization". You depend on them every day.

None of the optimizations in this tool are "rocket science" compared to the optimizations that GCC, for example, does.


Right, so on Android if you need code to be fast, write it in C++.

If you have a ridiculously bloated and buggy Java app, writing an optimizing compiler doesn't sound like the most effective solution.


My take: if you can't write it fast enough in Java, it's probably not a good mobile-app candidate in the first place. The Dalvik optimizer and the JIT yield a consistently reasonable experience for me.


That is actually going away in Android N: http://developer.android.com/preview/api-overview.html#quick...


25-30% is a pretty solid improvement.


I just skimmed through some of those Glassdoor reviews. They were all quite positive, but I find it interesting that the vast majority were posted Feb-Mar 2016.


That happened when someone noticed we were ranked #1 in the Bay Area and announced it to the company. Employees followed with more reviews.


I wonder, earnestly, on a tangent: how do you ensure the authenticity of Glassdoor reviews? What's to guarantee that people aren't faking good reviews? They should probably implement a system of employment verification.


I apply the same metric I apply to online product or service reviews: if they appear on a regular cycle, or in some other kind of statistically improbable clump, or if there are no negative comments at all, then they may not be entirely genuine.

Employment verification won't help on Glassdoor, because it would be the easiest thing in the world to hint to a group of the most settled employees that maybe they could leave positive ratings.

It would help against malicious negative reviews - which can also be a thing.


Websites with reviews are doing a disservice to users by not giving us tabulated reviews and tools to view detailed statistics. I'm reminded of 538's impressive interactive data exploration pages.


Many valid reviews are written by former employees, so I'm not sure how you could verify those. A subset of those are from people with an axe to grind. Look at any company that's had layoffs, for example. In general, I've found Glassdoor to be incredibly biased, either shills (in a small company, nobody's going to publicly trash their employer with the risk of being identified), or trolls (any company, once large enough, has former employees who didn't get along with their boss and choose to escalate it).


Relatedly, I'm not convinced Glassdoor doesn't delete or hide posts that are unfavorable to the employer. I've kept tabs on the reviews and scores left on my previous company's page and at least several negative reviews are nowhere to be found.


It doesn't, to my knowledge. I see both good and bad reviews for my company hit my email inbox regularly.


Fake reviews are generally written by PR/Marketing people so they never look authentic.


A lot of companies ask their employees to write positive reviews, so employment verification doesn't necessarily guarantee authentic reviews.


I think it requires a company email? I can't recall. They also have a verification window; reviews don't post for a couple of days.


It's not difficult for a company to create new email addresses or use past employees' email addresses. I know at least one company that does that.


As others are saying, Microsoft is a very large company with a great deal of variance in the quality and maturity of teams. I wouldn't consider this description representative.

Azure is still a comparatively new team, so I'm not surprised if it is still sorting out its processes. I've heard similar descriptions of Bing teams and other "startup" divisions in the company.

But there are definitely some brutally effective teams there that far outshine any non-Microsoft team I've ever seen.


That was my first thought too -- Windows has had timer coalescing since at least 2008, if not longer.


The whitepaper at http://msdn.microsoft.com/en-us/library/windows/hardware/gg4... indicates that Windows timer coalescing requires an application to explicitly opt in. The article describes OS X Mavericks' support as applying to all upcoming timers while on battery power (presumably excluding particular classes of timing-sensitive applications, such as audio playback), which would be a much more aggressive approach than in Windows.
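
For the curious, the opt-in looks roughly like this minimal C sketch against the Windows 7+ API the whitepaper covers (SetWaitableTimerEx); the period and tolerable-delay values here are my own illustrative choices, not from the whitepaper:

    /* Hedged sketch: opting a timer into Windows coalescing via
     * SetWaitableTimerEx (Windows 7+). Values are illustrative. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);
        if (!timer) return 1;

        LARGE_INTEGER due;
        due.QuadPart = -10000000LL; /* first fire in 1 s (100 ns units; negative = relative) */

        /* Period 1000 ms, TolerableDelay 500 ms: the kernel may slide
         * each expiration by up to 500 ms to batch it with other timers
         * and reduce wakeups. Passing 0 opts out of coalescing. */
        if (!SetWaitableTimerEx(timer, &due, 1000, NULL, NULL, NULL, 500))
            return 1;

        for (int i = 0; i < 3; i++) {
            WaitForSingleObject(timer, INFINITE);
            printf("tick %d\n", i);
        }
        CloseHandle(timer);
        return 0;
    }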


Yeah, that's fair. I was actually thinking of timer coalescing on the server OS for improved power efficiency/performance; I'm sure IIS, MSSQL, etc. all opt in to coalescing responsibly.

It sounds like the OS X version is more of a "forced timer coalescing" feature.


It landed (at least officially) in Windows 7. Quite a while ago!


This looks like the kind of product that will do well in San Francisco -- SF is where the Venn diagram of "dog owners" and "gadget owners" overlaps.


Also, most SF residents fit into the most important category: "people with more money than they know what to do with."


At first I thought it was going in a bad direction, but it actually ended up pretty cute.

Basically they asked women if they thought they were beautiful, and if they said "Yes!" they got their meal for free -- it was actually rewarding self-confidence rather than perceived physical beauty.


I like reading these analyses, although I'm afraid headlines like this oversimplify things and give the wrong impression. There isn't anything inherently wrong with NUMA; it just isn't useful in this situation.

No technology is a 'silver bullet'. Every workload has its own set of considerations that calls for a different set of technologies to optimize it.


The way it's portrayed is extremely misleading. The headline misses the point -- actually, the article didn't really have a point. It sounds like they didn't get the results they wanted from the project, but tried to make the best of it by highlighting what they did get, which is a jumble of facts that are incoherent and self-contradictory. It's sort of interesting to read, because they did honest research, asked good questions and followed the data, and there is plenty of value in negative results.

The way I read the outcome, NUMA seems to do what it's supposed to. The premise was that remote memory accesses are a performance killer, and that forcing threads onto fewer CPUs should therefore be a big win. But NUMA came out looking pretty good: leaving it alone looks like an excellent policy. Consider that Google brought in a team of experts for the sole purpose of figuring out how to beat the default behavior of NUMA.
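
To make that premise concrete, here is a hedged sketch (my illustration, not Google's code) of the kind of manual placement being tested, using libnuma to keep a thread and its heap on one node so every access stays local:

    /* Hedged sketch (not from the article): node-local placement
     * with libnuma. Build with -lnuma. */
    #include <numa.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this machine\n");
            return 1;
        }

        numa_run_on_node(0);  /* restrict this thread to node 0's CPUs */

        /* Allocate the working set from node 0's memory, so the thread
         * never pays the remote-access latency penalty. */
        size_t len = 64UL * 1024 * 1024;
        char *buf = numa_alloc_onnode(len, 0);
        if (!buf) return 1;

        memset(buf, 0, len);  /* all touches are node-local */

        numa_free(buf, len);
        return 0;
    }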


Depends on the situation.

I assume this was probably a server-side bug, since the accounting would never be trusted to the client side.

If you are writing highly performant server code, the actual memory size is extremely important. You cannot (and should not) abstract away the machine-specific details of the datatype if you want to write optimized code.

In some cases where the underlying datatype isn't a concern (e.g. JavaScript), I agree with you. But ultimately, this isn't a failure of the technology; it's a failure of the software development process.


Most servers are x86-64 these days, so using 32-bit signed ints barely improves performance at all. They could have gone with a 64-bit unsigned int, since the value is positive all of the time.


An x64 CPU will take a single instruction to operate on both a 64-bit and a 32-bit integer, and the CPU registers are all 64-bit, so in that context you are right that the choice between the datatypes doesn't matter much.

However, the physical size of the integer as stored in the CPU cache, RAM, and on disk is still going to be twice as big for a 64-bit integer. In a hypothetical worst case, you are cutting your effective CPU cache and memory bandwidth in half, which is tragic. (Toy sketch below.)
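
A toy illustration of that footprint argument (mine, not the parent's): the same count of counters occupies twice the cache and memory at 64 bits.

    /* Illustrative sketch: identical element counts, double the
     * cache/memory footprint at 64 bits. */
    #include <stdint.h>
    #include <stdio.h>

    #define N (1 << 20)  /* ~1M counters */

    static int32_t a32[N];
    static int64_t a64[N];

    int main(void)
    {
        printf("32-bit array: %zu bytes\n", sizeof a32); /* 4 MiB */
        printf("64-bit array: %zu bytes\n", sizeof a64); /* 8 MiB */

        /* Scanning a64 streams twice as many bytes through the cache
         * hierarchy as scanning a32, for the same logical work. */
        return 0;
    }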

Additionally, while most physical servers are x64, the OS, server software, and virtualization layer are still often 32-bit; maybe for legacy reasons, maybe for performance, maybe for a lot of reasons. Upgrading that whole stack to 64 bits just for the luxury of having default 64-bit integers seems misguided.

