Today we [and by "we" I mean "I," on behalf of my team] had a fun foray into signed vs. unsigned types and type conversions. It certainly makes one long for languages like C#, which have *unsigned* types and don’t make you worry about the nonsense of whether all your numbers are signed, or how to convert a pair of bytes into a short, or why you can’t just OR together signed types holding unsigned data and expect them to produce the right bit pattern…
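To make the byte-pair-into-a-short problem concrete, here is a small sketch of the pitfall (the helper names are mine, not my team's actual code): in Java, a byte is silently sign-extended to an int before any bitwise operation, so OR-ing it in raw smears 1-bits across the high bytes.

```java
public class ByteCombine {
    // BUG: lo is sign-extended to an int before the OR, so a lo byte of
    // 0xB2 becomes 0xFFFFFFB2 and clobbers everything above the low byte.
    static int naiveCombine(byte hi, byte lo) {
        return (hi << 8) | lo;
    }

    // Masking each byte with 0xFF strips the sign-extension bits first,
    // so the two bytes land in the positions you intended.
    static int maskedCombine(byte hi, byte lo) {
        return ((hi & 0xFF) << 8) | (lo & 0xFF);
    }

    public static void main(String[] args) {
        byte hi = (byte) 0x12, lo = (byte) 0xB2;
        System.out.printf("naive:  0x%08X%n", naiveCombine(hi, lo));  // 0xFFFFFFB2
        System.out.printf("masked: 0x%04X%n", maskedCombine(hi, lo)); // 0x12B2
    }
}
```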
So my dry erase board is all covered in scribbles like 0xB2 = -78 (really should be 178) = 0xffffffb2, not 0x000000b2… but 0xffffffb2 & 0x000000ff = 0x000000b2. Well, that’s me linearizing what is on the dry erase board… which also has pictures of bytes and bit patterns within bytes and casts and arrows and…
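The whiteboard scribbles above translate into runnable Java roughly like this (a minimal sketch of the same numbers, nothing more):

```java
public class SignExtension {
    public static void main(String[] args) {
        byte b = (byte) 0xB2;          // bit pattern 1011_0010
        System.out.println(b);         // prints -78: byte is a signed two's-complement type

        int widened = b;               // widening sign-extends: 0xFFFFFFB2, not 0x000000B2
        System.out.printf("0x%08X%n", widened);

        int unsigned = b & 0xFF;       // the mask keeps only the low 8 bits: 0x000000B2 = 178
        System.out.printf("0x%08X = %d%n", unsigned, unsigned);
    }
}
```

The `& 0xFF` works because the mask is applied *after* the sign extension, zeroing the 24 high bits the widening conversion filled in.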
Pretty ugly, huh? Man, and here I thought (once upon a time) that once we got past learning assembly language in school I’d probably never have to worry about programming at such a low level again, about whether our numbers are two’s complement and what the actual bit pattern is for the various signed data types… at least I haven’t had to do any floating point number storage… cuz that’d have been really ugly. Hahaha…
[edit] So much for no floating point to worry about…I discovered the next day I needed to use “Float.floatToIntBits(latitude)”