sizeof(char) has to be 1, and char is by definition one byte; it's a byte that doesn't have to be 8 bits (CHAR_BIT can be larger). It's more correct to say that sizeof(foo) returns a result in units of sizeof(char).
The difference may be 100% semantics, but there is a real distinction in how you work: with a bit field I address the particular bits I'm interested in by name, whereas with shifting and masking I directly manipulate a piece of memory until its value represents the bits I want.
I prefer to think of "byte" as unsigned. I've never quite figured out why char is signed on most implementations; it would most naturally be unsigned too.
I suspect it's because signed arithmetic has more undefined behaviour. C compilers will always choose the option that gives them the most optimization freedom, since that's what they're judged on.
Overflow a signed value and the compiler is free to assume it never happened: it can fold comparisons away, assume the result stays in range, or keep the intermediate in a register wider than the type without ever masking it back down. So the value you observe may be larger than 127 or reduced mod 256 depending on register pressure, because with undefined behavior the compiler isn't obligated to AND with 0xFF. (Strictly speaking, a signed char promotes to int before the add, and narrowing the result back to char is implementation-defined rather than undefined; but overflow of the promoted int really is UB, and optimizers lean on that hard.)