No, it doesn't. How would you even do that? Nobody has ever managed to explain how that would result in a different design -- not that I've been able to find.
> Once the industry switched from hexadecimal to graphic editors, it wasn’t rare for graphic designers to have not one but two screens on their desk: a computer monitor and a CRT, the second one being used to display the result of the work done on the first. It’s hard to tell whether this was a standardized practice, but we know that many developers, graphic designers as well as programmers, used that technique, from Kazuko Shibuya (Square) and Akira Yasuda (Capcom) to the developers behind Thunder Force IV (1992), who used many CRTs to take into account the specificities of each kind of screen. Masato Nishimura, the graphic designer in charge of the backgrounds of Sonic CD, mentioned something he had been told about the first Sonic the Hedgehog (1991): the developers used up to 3 CRTs to previsualize the game and see how the scrolling and blur effects were rendered.
> This practice can be explained by at least 3 reasons. The first is the difference in rendering between a computer screen and a CRT: pixels generally look sharper on a monitor. The second lies in the specificities of each machine: display resolution, shape of the pixels (rarely as square as one would expect), and color rendering (on the Mega Drive, red bled into neighboring colors, so it was recommended to add neutral colors around it to compensate). The third reason is related to the second but also concerns programmers: a workstation doesn’t necessarily simulate every aspect of the machine for which a game is being developed. For example, the parallax scrolling effect featured in the Mega Drive game Thunder Force IV couldn’t be tested on the X68000.
> Tatsuro Iwamoto, graphic designer on the first entries of the Phoenix Wright / Gyakuten Saiban series released on the Game Boy Advance, explained that he took that (sometimes unwanted) effect on Nintendo’s portable console into account.
Thank you! That explains it in detail that I'd never been able to find before. Especially the part that immediately follows:
> Some graphic designers toyed with these specificities and mastered the 0.5 dot technique... “It’s a technique where, by slightly changing the color of surrounding pixels, to the human eye it looks like the pixels move by around 0.5 pixels,” explains Kazuhiro Tanaka, graphic designer on Metal Slug (1996). His colleague Yasuyuki Oda adds: “Back in the old days, we’d say [to our artists] ‘add 0.5 of a pixel’, and have them draw in the pixels by taking scanlines into account. But with the modern Full HD monitor, the pixels come out so clearly and so perfectly that you can’t have that same taste.”
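To make the quoted idea concrete: on a blurry CRT the eye reads a feature's position as roughly the brightness-weighted centroid of the lit pixels, so splitting a dot's brightness across two neighbouring pixels shifts that perceived position by a fraction of a pixel. This is a minimal sketch of that perceptual model (my illustration, not the Metal Slug artists' actual workflow; `perceived_centroid` is a made-up helper name):

```python
def perceived_centroid(row):
    """Brightness-weighted horizontal centre of a 1-D row of pixel values.

    Approximates where the eye perceives a feature once CRT blur has
    smeared the discrete pixels together.
    """
    total = sum(row)
    return sum(i * v for i, v in enumerate(row)) / total

# A single full-brightness dot at x = 2 is perceived at x = 2:
print(perceived_centroid([0, 0, 255, 0, 0]))    # -> 2.0

# The same brightness split evenly over x = 2 and x = 3 reads, through
# the blur, as a dot at x = 2.5 -- the "0.5 of a pixel" shift:
print(perceived_centroid([0, 0, 128, 128, 0]))  # -> 2.5
```

On a sharp modern LCD the two half-brightness pixels stay visibly distinct instead of fusing, which is presumably why Oda says the trick no longer reads the same way.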
That's a clear example, then, of something that looks worse in full-res. I hadn't been able to find an example like that before.
Thanks again -- this finally explains something I really didn't understand for the longest time.
Apologies, this article comes up on HN every year or so, and I assumed someone who's been here so long and was so steadfast in their opinions on the topic would have already come across it.