"Format" is the word used to describe the layout of the various parts of a floating point number (in memory or in a floating point register).

The meaning (semantics) of the various combinations of bits is termed the encoding.

The **formats**
for the binary floating point numbers supported by
the FPUs of the Intel 80x86 microprocessors are:
single precision (4 bytes),
double precision (8 bytes),
and
extended precision (10 bytes).

To convert a number `x` to
a normalised number in one of these formats, write:

    x = (-1)^S * m * 2^(E-B)

where:

    1.0 <= m < 2.0

and `B` is the exponent bias of the chosen format, so that `E` is the biased exponent actually stored.

Then select a format and put:

[sign] = S; [exponent] = E; [significand] = the bits of m, excluding the leading '1' in the single and double formats.

For example, consider the decimal number 0.1. In binary, 0.1 = 1.10011001100110011001101... * 2^(-4), so this number has

[sign] = 0; [exponent] = -4 + 127 = 123 (decimal), 01111011 (binary); [significand] = 10011001100110011001101

where we have rounded the significand to the nearest representation in 24 bits. The bytes in memory for this number would therefore be (Intel puts the least significant byte at the lower memory address):

0xcd 0xcc 0xcc 0x3d

Single precision:

| [sign] | [exponent] | [significand] |
|--------|------------|---------------|
| 1 bit  | 8 bits     | 23 bits       |

Exponent bias is 127.

Double precision:

| [sign] | [exponent] | [significand] |
|--------|------------|---------------|
| 1 bit  | 11 bits    | 52 bits       |

Exponent bias is 1023.

Extended precision:

| [sign] | [exponent] | [significand] |
|--------|------------|---------------|
| 1 bit  | 15 bits    | 64 bits       |

Exponent bias is 16383. Note that unlike the other formats, the '1' bit before the binary point of the significand is stored explicitly.