The following topics may appear in Fall 2018’s Quiz 1:
- Interpreting unsigned, sign-magnitude, and two’s complement representations
- Arithmetic operations: Adding and subtracting binary numbers
- Bitwise operations: Shifting, NOT, AND, OR, XOR
- Binary sign extension
- Converting between binary and decimal
- Converting between binary and hexadecimal
- IEEE 754: What are the components of a 32-bit floating-point number? How are they compared? Learn the table below.
As you prepare, please feel free to ask questions on Piazza and in office hours.
More Information about Floating-Point Numbers
From Prof. Southern:
I came across an interesting series of blog posts about floating-point representation. This material is optional; it offers another perspective, beyond our lectures, on how to understand the bitwise representation of floats.
Here is one piece:
One important concept is how we can compare floating-point numbers based only on their bits. Given two floats, a and b, we do not need to interpret them as decimal numbers to determine whether a < b, a == b, or a > b. Instead, you can examine the bits directly.
This is very practical. Think about sorting millions, or billions, of data points by a floating-point value, and about the number of comparisons involved. To compare a with b, we could compute a - b and check whether the result is positive, zero, or negative, but subtraction is an expensive floating-point operation. Bitwise comparisons are relatively cheap, and thus offer faster performance.
To compare two floats on a bitwise basis, look at the sign bit first: a positive number is always greater than a negative number. If both are positive, compare the remaining 31 bits as if they were an unsigned int; if both are negative, that same unsigned comparison gives the reversed order, since a larger magnitude means a more negative value. (The main corner cases are: a) +0 == -0, even though their bit patterns differ; and b) an exponent field of 255, meaning the bits represent +/- infinity or NaN.)
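The comparison described above can be sketched in C. This is my own illustration (the function name is mine, not from the blog post); it assumes 32-bit IEEE 754 floats and that neither input is NaN:

```c
#include <stdint.h>
#include <string.h>

// Compare two finite floats using only their bit patterns.
// Returns -1, 0, or 1 for a < b, a == b, a > b.
// Assumes neither argument is NaN; handles +0 == -0 explicitly.
static int float_cmp_bits(float a, float b) {
    uint32_t ua, ub;
    memcpy(&ua, &a, sizeof ua);   // reinterpret the bits, no conversion
    memcpy(&ub, &b, sizeof ub);

    // +0 and -0 differ only in the sign bit but compare equal.
    if ((ua << 1) == 0 && (ub << 1) == 0) return 0;

    uint32_t sa = ua >> 31, sb = ub >> 31;   // sign bits
    if (sa != sb) return sa ? -1 : 1;        // negative < positive

    if (ua == ub) return 0;

    // Same sign: compare the bits as unsigned ints.
    // For positives, bigger bits mean a bigger value;
    // for negatives, bigger bits mean more negative, so flip.
    int cmp = (ua < ub) ? -1 : 1;
    return sa ? -cmp : cmp;
}
```

Note that `memcpy` is used rather than a pointer cast: it is the standard-conforming way to reinterpret a float’s bits as an integer in C.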
Think about how this comparison works, and why. How do you bitwise compare two positive floats? How do you bitwise compare two negative floats?
You should be able to tell which of two floats is larger solely by examining the bits (in hex or binary representation), with no need to interpret them as representations of decimal real numbers.
(Hint: think of floats as a sign-magnitude representation.)
This ability to make a bitwise comparison, and thus optimize sorting, helps explain some of the quirky design choices behind the IEEE 754 floating-point representation (such as the exponent bias of 127).
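One way to see the sorting connection concretely (again my own sketch, with a name I made up, not code from the blog): map each float’s bits to an unsigned key whose ordering matches the float ordering, then sort by the key using plain integer comparisons.

```c
#include <stdint.h>
#include <string.h>

// Map a float's bits to a uint32_t key so that, for finite,
// non-NaN floats a and b:  a < b  iff  key(a) < key(b).
// Positive floats: set the sign bit, preserving their order.
// Negative floats: flip all bits, which reverses their order
// (bigger magnitude = more negative = smaller key).
static uint32_t float_sort_key(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
}
```

One corner case to notice: under this key, -0.0 sorts just below +0.0 rather than equal to it, which is usually acceptable for sorting even though the two compare equal as floats.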
Here is another advanced blog post in this series. This piece goes beyond the scope of our class, but provides some interesting food for thought for those of you who may want to think more deeply about representations of floats.
(Scroll down to “ULP, he said nervously” for background about why the bitwise comparison of two floating points works.)