Re: what is Hex?

  • From: "Ian D. Nichols" <inich@xxxxxxxxxx>
  • To: <programmingblind@xxxxxxxxxxxxx>
  • Date: Fri, 22 Feb 2008 20:55:10 -0500

Hi Listers,

Wow! I've been away from this all day, and got back to it a little while ago to discover the hornets' nest I seem to have helped unleash. But what an interesting hornets' nest!

Ken, thanks for the pointer to the wiki site. That was a real eye-opener for me, and showed me how narrow my experience has been. This old dog learned a few new tricks there (I'm a senior citizen and grandfather of 6).

My big mistake was in my assumption that the world outside my Borland C++ and MASM32 followed the same definitions. Apart from that, I think I was OK. I referred to "thousand millions" because I did not feel inclined to look up the exact values, and because "billion" does not necessarily mean the same in all parts of the world.

Anyway, thanks to all who have contributed to this enlightening thread.

All the best,

Ian

Ian D. Nichols,
Toronto, Canada



----- Original Message ----- From: "Ken Perry" <whistler@xxxxxxxxxxxxx>
To: <programmingblind@xxxxxxxxxxxxx>
Sent: Friday, February 22, 2008 11:37 AM
Subject: RE: what is Hex?



There are standards here and there about different parts of this, but no,
there is no one standard.  I still find that Wiki has the best definitions
of all this.  If you go to

http://en.wikipedia.org/wiki/Byte

you will find the best definition and history of the byte, and then you can
follow links to word, etc., from that page.  They explain how the byte is
supposed to just be the smallest character, which in the past has been as low
as 5 bits, but anyway, I will let you all read that page and make your own
determination of what you think.

Ken

-----Original Message-----
From: programmingblind-bounce@xxxxxxxxxxxxx
[mailto:programmingblind-bounce@xxxxxxxxxxxxx] On Behalf Of Chris Hofstader
Sent: Friday, February 22, 2008 8:09 AM
To: programmingblind@xxxxxxxxxxxxx
Subject: RE: what is Hex?

I have a feeling that the definition of "WORD" may have migrated over the
years.  For that matter, the size of an int may have lost its definition
since the days that the ANSI committee left it intentionally ambiguous.

I wonder if there is some kind of ANSI/ISO glossary for computing terms?

-----Original Message-----
From: programmingblind-bounce@xxxxxxxxxxxxx
[mailto:programmingblind-bounce@xxxxxxxxxxxxx] On Behalf Of Sina Bahram
Sent: Friday, February 22, 2008 10:03 AM
To: programmingblind@xxxxxxxxxxxxx
Subject: RE: what is Hex?

I was taught that word was a relative term and simply referred to an
architectural constant, but not one that is immutable across various
platforms.

Could be wrong, but it makes sense to think of it that way.

Take care,
Sina

-----Original Message-----
From: programmingblind-bounce@xxxxxxxxxxxxx
[mailto:programmingblind-bounce@xxxxxxxxxxxxx] On Behalf Of Chris Hofstader
Sent: Friday, February 22, 2008 7:18 AM
To: programmingblind@xxxxxxxxxxxxx
Subject: RE: what is Hex?

SB:   A standard word is a misnomer. This assumes a two-byte word, which is
only true on a 16-bit architecture. A word can be 16 bits, 32 bits, or even
11 bits on some platforms ... it just depends. A double word can be 32 bits,
but it can also be 16 bits on some platforms, or not even supported on
others, so there is no standard here.


Cdh:  Are you certain of this?  I was taught that, no matter what, a word
was 16 bits, but the number of bits in an int was defined as the "most
convenient" for a given processor; hence, a 6502 would have an 8-bit int, an
8086 would have 16 bits, an 80386 would have 32, and so on.

MS compilers that shipped in the 386 era fudged this by forcing an int to be
16 bits to make porting older DOS programs forward easier, but they had a
compiler option that would let the author select the number of bits in an int.
People I worked with back then would never declare a variable as an int,
because its ambiguous definition would often come back to bite us - especially
when porting to a Macintosh 68030 or later.

So, we would use WORD for 16 bits, DWORD for 32 bits (although I think long
was specifically defined as a 32-bit value), and other typedefs to
minimize ambiguity and make porting easier.  Of course, the definition of
"word" may have changed since 1989.

cdh

-----Original Message-----
From: programmingblind-bounce@xxxxxxxxxxxxx
[mailto:programmingblind-bounce@xxxxxxxxxxxxx] On Behalf Of Sina Bahram
Sent: Thursday, February 21, 2008 11:20 PM
To: programmingblind@xxxxxxxxxxxxx
Subject: RE: what is Hex?

Again, I'm sorry for the disagreement, but there are several flaws in this
explanation. I've attempted to correct them below.

The standard byte's signed values are -127 to 127, not -128 to 127 ... it's
being picky, but this is extremely important and the source of 90% of
security flaws today.

A standard word is a misnomer. This assumes a two-byte word, which is only
true on a 16-bit architecture. A word can be 16 bits, 32 bits, or even 11 bits
on some platforms ... it just depends. A double word can be 32 bits, but it
can also be 16 bits on some platforms, or not even supported on others, so
there is no standard here.

However, using two's complement, I must again clarify the minimum and
maximum of a 16-bit value, since it is not -32768 to 32767, I'm afraid, but
is instead -32767 to 32767.

As for a 32-bit value, the minimum and maximum are as follows.

Using two's complement, the signed minimum and maximum of a 32-bit integer
are -2147483647 to 2147483647, and the minimum and maximum of an unsigned
32-bit integer are 0 to 4294967295.
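
(For anyone who wants to check these figures on their own machine, here is a
minimal C sketch, not part of the original message, that simply prints the
ranges the standard headers report.  It assumes a C99 compiler with
<stdint.h>.)

    #include <stdio.h>
    #include <limits.h>   /* SCHAR_MIN, SCHAR_MAX */
    #include <stdint.h>   /* INT16_MIN, UINT32_MAX, ... */

    int main(void)
    {
        printf("signed 8-bit : %d to %d\n", SCHAR_MIN, SCHAR_MAX);
        printf("signed 16-bit: %d to %d, unsigned 16-bit: 0 to %u\n",
               INT16_MIN, INT16_MAX, (unsigned)UINT16_MAX);
        printf("signed 32-bit: %ld to %ld, unsigned 32-bit: 0 to %lu\n",
               (long)INT32_MIN, (long)INT32_MAX, (unsigned long)UINT32_MAX);
        return 0;
    }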

Hope this clears things up.

Take care,
Sina

-----Original Message-----
From: programmingblind-bounce@xxxxxxxxxxxxx
[mailto:programmingblind-bounce@xxxxxxxxxxxxx] On Behalf Of Ian D. Nichols
Sent: Thursday, February 21, 2008 5:40 PM
To: programmingblind@xxxxxxxxxxxxx
Subject: Re: what is Hex?

Hi Listers,

As I see it, things have become a little muddled here, both in James's
message and in Sina's reply.

The standard byte is still 8 bits, containing unsigned values of 0 to 255
and signed values of -128 to +127.

The word contains 16 bits, with unsigned values of 0 to 65535, and signed
values of -32768 to +32767.

The double word contains 32 bits, with very large values possible.
Unsigned, 0 to 4 thousand millions, and signed values from -2 thousand
millions to +2 thousand millions, more or less.
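
(For the record, the exact figures behind "more or less": a double word has
2^32 = 4,294,967,296 distinct bit patterns, so the unsigned range is 0 to
4,294,967,295; splitting those patterns between negative and non-negative
values gives signed limits of roughly minus and plus 2 thousand million.)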

I hope I've got my thinking straight on that, and haven't caused further
confusion.

All the best,

Ian

Ian D. Nichols,
Toronto, Canada




----- Original Message -----
From: "Sina Bahram" <sbahram@xxxxxxxxx>
To: <programmingblind@xxxxxxxxxxxxx>
Sent: Thursday, February 21, 2008 4:58 PM
Subject: RE: what is Hex?


A few things. Big endian versus little endian is arbitrary, so it's not a
fact with respect to storage.

More importantly, the minimum and maximum of a signed 32-bit integer are not
-65535 to 65535; they are actually -32767 to 32767.

If it is unsigned, then it is 0 to 65535.

At the end of the day, you only have 2^16 permutations of 16 bits in a
binary system; thus, you have a maximum of 65536 positions, and so you have
half as much capacity if you are using two's complement to allow for both
negative numbers and the concept of 0.
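
(A small C sketch, not from the original message, illustrating the "same 16
bits, two readings" point: it takes a handful of 16-bit patterns and prints
each one read as an unsigned value and then reinterpreted as a signed
two's-complement value, so you can see on your own machine how the 65536
patterns get shared out.)

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        /* A few 16-bit patterns, written in hex. */
        uint16_t patterns[] = { 0x0000, 0x0001, 0x7FFF, 0x8000, 0xFFFF };
        size_t i;

        for (i = 0; i < sizeof patterns / sizeof patterns[0]; i++) {
            int16_t as_signed;
            /* Copy the same bits into a signed 16-bit variable. */
            memcpy(&as_signed, &patterns[i], sizeof as_signed);
            printf("0x%04X  unsigned: %5u  signed: %6d\n",
                   (unsigned)patterns[i], (unsigned)patterns[i], (int)as_signed);
        }
        return 0;
    }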

Hope this helps

Take care,
Sina

-----Original Message-----
From: programmingblind-bounce@xxxxxxxxxxxxx
[mailto:programmingblind-bounce@xxxxxxxxxxxxx] On Behalf Of James Panes
Sent: Thursday, February 21, 2008 2:35 PM
To: programmingblind@xxxxxxxxxxxxx
Subject: Re: what is Hex?

Yes, hexadecimal digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F,
for a total of 16 possible digit values.

As stated before, this is much more convenient for the computer, as 16 is an
even power of 2 and computers actually use binary, 0 and 1. The hexadecimal
representation is actually easier for humans to read than binary.
Hexadecimal digits are grouped in pairs for a total of 16 x 16, or 256,
possible values. This is a standard byte. Before Unicode, a single byte
value was used to represent an alphanumeric character, and two bytes or a
word were used to represent a 32-bit integer with values possible from
-65535 to 65535. This explains the limit on the size of variables in older
games.
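
(A minimal C illustration of the hex/binary relationship described above,
not from the original message: each hex digit stands for exactly four bits,
so two hex digits cover one 8-bit byte.)

    #include <stdio.h>

    int main(void)
    {
        unsigned value = 0xA7;             /* two hex digits = one byte */
        int bit;

        printf("hex %02X = decimal %u = binary ", value, value);
        for (bit = 7; bit >= 0; bit--)     /* print the 8 bits, high to low */
            putchar(((value >> bit) & 1) ? '1' : '0');
        putchar('\n');                     /* prints: hex A7 = decimal 167 = binary 10100111 */
        return 0;
    }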

The original Intel 8086 processor had 16-bit registers. Operations on
anything larger had to be synthesized in software.

What's more, for integer values larger than 255, the least significant pair
of hex digits (the low byte) is stored first. For example, if you were looking
for the value 301 (decimal, 012D in hex) in a game save file, you would find
it represented as 2D 01 in the save file.
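
(A small C sketch of that byte-order point, not from the original message:
it stores 301 in a 16-bit variable and prints the bytes in the order they
sit in memory.  On a little-endian machine, such as the Intel processors
discussed here, the low byte comes first.)

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t value = 301;                       /* 0x012D */
        unsigned char *bytes = (unsigned char *)&value;

        /* Print the two bytes in memory order. */
        printf("%02X %02X\n", (unsigned)bytes[0], (unsigned)bytes[1]);
        return 0;                                   /* little-endian output: 2D 01 */
    }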

Since this list is about programming and not game save file hacking, I will
end my lecture here.

Anyone with further interest in this topic can write me off-list.

Regards,
Jim
jimpanes@xxxxxxxxx
jimpanes@xxxxxxxxxxxx
"Everything is easy when you know how."


__________
View the list's information and change your settings at //www.freelists.org/list/programmingblind
