That we default to binary floating point is an artifact of RAM being expensive back when arithmetic coprocessors were becoming cheap. Single-precision is 4 bytes and can represent practically every quantity you care about precisely enough.
While IBM has been harping on about this for ages, BCD (binary-coded decimal) was sufficiently important that it had dedicated instructions on even the earliest microprocessors. The 6800/8080/Z80 series had DAA (decimal adjust accumulator), and the 6502 had SED/CLD (set/clear decimal mode). The 8008 is notable for not having a specific BCD instruction.
And the 4004, which started it all, was a BCD oriented microprocessor from the start.
And the 8086 (on up), the 68000 family, and the VAX also had BCD instructions. When I wrote my own 6809 emulator [1], the DAA instruction was the hardest one to get right [2].
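The tricky part of DAA is that it patches up a plain binary add, nibble by nibble, using the half-carry and carry flags from that add. A sketch of the classic 8080-style adjustment (the 6809's DAA follows the same scheme; this is my illustration, not code from the emulator mentioned above):

```c
#include <stdint.h>

/* Add two packed-BCD bytes by doing a binary add and then the
 * "decimal adjust" fix-up that DAA performs in hardware. */
static uint8_t bcd_add(uint8_t a, uint8_t b, int *carry_out)
{
    unsigned sum = a + b;                              /* plain binary add */
    int half_carry = ((a & 0x0F) + (b & 0x0F)) > 0x0F; /* carry out of low nibble */
    int carry = sum > 0xFF;                            /* carry out of the byte */
    uint8_t r = (uint8_t)sum;

    /* Low nibble overflowed past 9 (or carried): add 6 to skip the
     * six unused codes 0xA..0xF and push the carry into the high nibble. */
    if ((r & 0x0F) > 9 || half_carry)
        r += 0x06;

    /* Same correction for the high nibble; this is what sets the
     * final decimal carry. */
    if ((r >> 4) > 9 || carry) {
        r += 0x60;
        carry = 1;
    }

    *carry_out = carry;
    return r;
}
```

For example, 0x19 + 0x28 binary-adds to 0x41, and the low-nibble fix turns that into the correct BCD 0x47. The subtle cases are the ones where the low-nibble correction itself carries into the high nibble — exactly the edge cases that make DAA hard to emulate.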
http://www.rexxla.org/events/2001/mike/rexxsy01.PDF