Because when the Y2K problem was created, computers had only recently been invented, every byte counted, and there was no history of software best practices.
Even by 1980, five years before any recorded mention of the Y2K problem, an IBM mainframe of the kind a university might have had perhaps 4 MB of main memory, which had to support many concurrent users.
Such computers would have been unable to load even a single typical binary produced by a modern language like Go or Rust into memory - yet they supported dozens of concurrent users and processes, doing everything from running accounting batch jobs, to compiling and running programs in Assembler, COBOL, FORTRAN, PL/I, or APL, to hosting interactive sessions in languages like LISP or BASIC. Part of how they achieved all that was not wasting any bytes they didn't absolutely have to.
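As a purely illustrative sketch (not taken from any specific historical system), here is the kind of byte-pinching that produced the Y2K problem: storing the year as two characters saves two bytes per record, which matters across millions of records, but the implied "19" century makes the year 2000 read as 1900.

```c
#include <stdio.h>

/* Hypothetical fixed-width record layout of the sort common in batch
 * processing of that era: the year is stored as two digits, saving two
 * bytes per record on scarce memory and disk. */
struct record {
    char yy[2];   /* "99" means 1999 -- the century is implied */
    char mm[2];
    char dd[2];
};

/* Interpret the two-digit year by assuming the 1900s. */
static int full_year(const struct record *r) {
    return 1900 + (r->yy[0] - '0') * 10 + (r->yy[1] - '0');
}

int main(void) {
    struct record r1999 = { {'9','9'}, {'1','2'}, {'3','1'} };
    struct record r2000 = { {'0','0'}, {'0','1'}, {'0','1'} };

    printf("%d\n", full_year(&r1999)); /* prints 1999, as intended   */
    printf("%d\n", full_year(&r2000)); /* prints 1900 -- the Y2K bug */
    return 0;
}
```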