Understanding Packed BCD

See also Understanding Floating Point

Packed BCD (Binary Coded Decimal) is a numeric format that was directly supported on CPUs almost from the beginning, and still is today. Simply put, it relies on the fact that 4 bits are more than sufficient to represent a decimal digit. Therefore, two decimal digits can be held in each byte, a 32 bit register can hold 8 such digits, and a 64 bit register can of course hold 16. Many CPUs can do BCD math - it's just a matter of having each half byte carry when its value exceeds 9, conceptually not a lot different than ordinary binary math. It also isn't very hard to write programs to do math on BCD numbers of arbitrary length.
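That half-byte carry idea can be sketched in a few lines of Python (an illustration only - the function name and unsigned big-endian layout are my assumptions, not from any particular library):

```python
def bcd_add(a: bytes, b: bytes) -> bytes:
    """Add two equal-length packed BCD numbers (unsigned, big-endian).

    Each byte holds two decimal digits; whenever a nibble sum exceeds 9,
    a carry propagates into the next digit, just as described above.
    """
    assert len(a) == len(b)
    out = bytearray(len(a))
    carry = 0
    for i in range(len(a) - 1, -1, -1):       # least significant byte last
        lo = (a[i] & 0x0F) + (b[i] & 0x0F) + carry
        carry, lo = (1, lo - 10) if lo > 9 else (0, lo)
        hi = (a[i] >> 4) + (b[i] >> 4) + carry
        carry, hi = (1, hi - 10) if hi > 9 else (0, hi)
        out[i] = (hi << 4) | lo
    if carry:
        raise OverflowError("result does not fit in the given width")
    return bytes(out)

# 0999 + 0001 carries across three digit positions:
print(bcd_add(bytes.fromhex("0999"), bytes.fromhex("0001")).hex())  # -> 1000
```

Because the loop works byte by byte, the same routine handles BCD numbers of any length.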

There is also unpacked BCD, which of course is very wasteful of space: 1 byte per decimal digit stored.

If you look at hex representations of BCD numbers, the individual digits are simply read left to right: what you see is what you have (ignoring sign and any exponent). BCD formats are directly human readable without any more math than translating the bit values to numbers.
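A quick Python demonstration of that readability (the sample value is mine, chosen at random):

```python
# Packed BCD for the decimal number 19,570,831: each hex digit of the
# dump *is* a decimal digit of the number - no conversion math needed.
value = bytes([0x19, 0x57, 0x08, 0x31])
print(value.hex())                 # -> 19570831
assert int(value.hex()) == 19570831
```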

The advantage of BCD over floating point formats is that decimal numbers can be represented exactly (assuming you have enough bytes to store the number). Floating point is a compromise: some numbers (6.25, for example) can be represented exactly, but most can only be approximated. The approximation is very close with double precision formats, but it is still an approximation.
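You can see the compromise directly in Python, where the decimal module will show you the exact binary value a float actually holds:

```python
from decimal import Decimal

# 6.25 = 4 + 2 + 1/4 is a finite sum of powers of two, so binary
# floating point stores it exactly; 0.1 has no finite binary expansion
# and can only be approximated.
print(Decimal(6.25))   # exactly 6.25
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
```

That tiny error in 0.1 is precisely what accounting software wants to avoid, and why BCD keeps turning up in that world.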

The disadvantage is range. In 32 bits, IEEE floating point can store numbers from roughly 2^-126 up to nearly 2^128 - very large numbers. In the same 32 bits, even if you ignore the need to store a sign, the largest possible number in packed BCD would be 99,999,999. That's less than 2^27 right there, which is a long, long way from IEEE floating point range. As you also need a sign and an exponent to locate the decimal point, packed BCD obviously needs much more storage space to handle typical numbers. However, people have used this: MBASIC on Tandy Xenix used packed BCD for floating point, and many an accounting package has used BCD internally to avoid rounding errors.











© Tony Lawrence




Back in the days when I wrote a lot of 65xx machine code, I grappled with trying to produce accurate mathematical results. Neither excess-128 nor IEEE floating point produced the desired results, so I turned to compressed (packed) BCD. I decided to use a 64 bit BCD number in big endian notation (opposite of the 65xx little endian word ordering, but easier to work with), which would produce more than sufficient range for the financial applications I was writing.

As Tony noted above, the number's precision and sign have to be accounted for somewhere. Accordingly, I employed the following scheme (I don't claim to have invented it, but I never saw it described in any of the literature I knew of at the time):

AL HL HL HL HL HL HL hl

In this structure, 'H' and 'h' refer to bits 4-7, the high nybble of each byte, and 'L' and 'l' to bits 0-3, the low nybble.

The 'A' nybble of byte 'AL' is the key to decoding the rest of the number. Bits within the 'AL' byte are defined as follows:

Bit   Meaning
-----------------------------------------------
7     Sign: 0 = positive, 1 = negative
4-6   Number of places to right of decimal point
0-3   Most significant BCD digit, range of 0-9
-----------------------------------------------

Bits 4-7 are the 'A' nybble, with bits 4-6 interpreted as a binary integer. Since the maximum value that can be represented with 3 bits is 7 (2^0 + 2^1 + 2^2), the maximum fractional precision possible is .0000001. For monetary usage, a bit pattern such as:

x010

would be appropriate, indicating two place decimal precision ('x' represents the sign bit). The maximum possible range would be:

+/- 9,999,999,999,999.99

which is more than adequate to describe Bill Gates' net worth. <Smile>

An 'A' nybble pattern like this:

1xyz

indicates that the number is negative; again, bits 4-6 ('xyz') describe the decimal precision.

To extract the sign and precision from the 'AL' byte, masking and shifting were required. Assuming byte 'AL' was already loaded into the .A register, the required 65xx/85xx assembly language statements (using MOS Technology syntax) would be as follows:

PHA             ;push .A onto stack...
PHA             ;twice
AND #%10000000  ;mask out all but sign bit
STA SIGN        ;store sign somewhere in memory
PLA             ;recover original value from stack
AND #%01111111  ;mask out the sign bit
LSR A           ;bits 4-6 become bits 3-5...
LSR A           ;3-5 become 2-4...
LSR A           ;2-4 become 1-3 & finally...
LSR A           ;1-3 become 0-2, the precision value
STA PRC         ;store precision somewhere in memory
PLA             ;again, recover original value from stack
AND #%00001111  ;mask high nybble, leaving most significant BCD digit
...             ;program continues
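The same masks and shifts can be expressed in a few lines of Python (a sketch of mine mirroring the assembly above, not from the original article):

```python
def decode_al(al: int) -> tuple[int, int, int]:
    """Split the 'AL' byte into (sign, precision, most significant digit).

    Mirrors the 65xx code: bit 7 is the sign, bits 4-6 the number of
    digits right of the decimal point, bits 0-3 the leading BCD digit.
    """
    sign = (al & 0x80) >> 7        # AND #%10000000
    precision = (al & 0x70) >> 4   # keep bits 4-6, then four right shifts
    digit = al & 0x0F              # AND #%00001111
    return sign, precision, digit

# 0xF9: negative, seven decimal places, leading digit 9
print(decode_al(0xF9))             # -> (1, 7, 9)
```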

Bytes 'HL' and 'hl' define normal compressed BCD digits, with 'hl' being the least significant. Therefore, the compressed BCD number

09 99 99 99 99 99 99 99

decodes into the ASCII equivalent:

999,999,999,999,999

or just shy of 10^15 - a range sufficient for most applications. The negative equivalent would be:

89 99 99 99 99 99 99 99

which would be:

-999,999,999,999,999

in ASCII. The other extreme would be:

F9 99 99 99 99 99 99 99

which would be:

-99,999,999.9999999

a number that sometimes seems to represent my business's cash flow.
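The whole scheme can be decoded in a short Python sketch (my own function name and code, assuming the layout exactly as described above), which reproduces all three examples:

```python
def decode_bcd64(raw: bytes) -> str:
    """Decode the 8-byte packed BCD format above to a decimal string.

    Byte 0 carries the sign (bit 7), the precision (bits 4-6) and the
    most significant digit (bits 0-3); the remaining 7 bytes hold 14
    packed BCD digits, big-endian.
    """
    assert len(raw) == 8
    sign = "-" if raw[0] & 0x80 else ""
    precision = (raw[0] & 0x70) >> 4
    digits = [raw[0] & 0x0F]
    for b in raw[1:]:
        digits += [b >> 4, b & 0x0F]
    s = "".join(str(d) for d in digits)       # 15 digits total
    if precision:
        s = s[:-precision] + "." + s[-precision:]
    return sign + s

print(decode_bcd64(bytes.fromhex("f999999999999999")))  # -> -99999999.9999999
```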

--BigDumbDinosaur
