Bitwise Operators in C++

I’ve been perfectly happy with logical operators since I started programming a long, long time ago. That is, your standard AND (&&), OR (||) and NOT (!) operators perform the relevant logical operations, so you can say things like (the variable and function names here are just for illustration):
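    if (isHungry && hasFood) { eat(); }    // logical AND - both conditions must be true (illustrative names)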

or,
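    if (isTired || isBored) { nap(); }     // logical OR - either condition being true is enough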

or,
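    if (!isAsleep) { work(); }             // logical NOT - flips true to false, and vice versa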

But… my understanding of bitwise operators was sketchier. So let’s run through the bitwise operators and see what they do.

Bitwise Operator List

Let’s start by listing the available bitwise operators:

Symbol   Bitwise Operator
&        Bitwise AND
|        Bitwise Inclusive OR
^        Bitwise Exclusive OR
~        Bitwise NOT
<<       Bitwise Left-shift
>>       Bitwise Right-shift

So we have 6 basic operators, but we also have compound-assignment versions of most of them, so instead of saying (with a and b as two illustrative integer variables):
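    a = a & b;    // longhand: AND the two values, then assign the result back to a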

We can say:
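    a &= b;       // compound: the same AND-and-assign in a single step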

And they’ll mean the exact same thing – just for completeness, let’s throw those in a table as well:

Symbol   Bitwise Compound Operator
&=       Bitwise Compound AND
|=       Bitwise Compound Inclusive OR
^=       Bitwise Compound Exclusive OR
<<=      Bitwise Compound Left-shift
>>=      Bitwise Compound Right-shift

Note that there’s no compound version of bitwise NOT: since ~ is a unary operator, there’s no second operand to combine with an assignment.

So now that we have these operators, let’s see what we can do with them…

Bitwise Operators in Action

Let’s take two characters (chars) stored as a single byte each, and we’ll assign them the ASCII codes for the characters ‘r’ and ‘3’ – which are 114 and 51, respectively:
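    char r     = 'r';    // ASCII 114  =  0111 0010 in binary (variable names here are mine)
    char three = '3';    // ASCII  51  =  0011 0011 in binary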

When performing our bitwise operations, we perform the operation on each pair of corresponding bits in the two values – and that’s basically the whole idea right there!

Bitwise AND

When both of the corresponding bits are 1, then the resulting bit is 1, otherwise the resulting bit is 0:
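      0111 0010    ('r' = 114)
    & 0011 0011    ('3' = 51)
      ---------
      0011 0010    (= 50)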

Bitwise Inclusive OR

When either of the corresponding bits is 1, then the resulting bit is 1, otherwise the resulting bit is 0. If both bits are 1, then that’s fine and the result is still 1:
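      0111 0010    ('r' = 114)
    | 0011 0011    ('3' = 51)
      ---------
      0111 0011    (= 115)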

Bitwise Exclusive OR

When either of the corresponding bits is 1, then the resulting bit is 1, otherwise the resulting bit is 0. However, if both bits are 1, then the result is 0 (this is what makes it exclusive):
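      0111 0010    ('r' = 114)
    ^ 0011 0011    ('3' = 51)
      ---------
      0100 0001    (= 65)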

Compare this with the inclusive OR we just looked at and you’ll see that the 3rd and 4th bits from the left are 1’s in our inclusive OR but are 0’s in our exclusive OR.

Bitwise NOT

Any NOT operator (bitwise or ‘standard’) is a unary operator – that is, it only takes a single value to work with (not two values).

When a bit is 0, the corresponding result bit is 1, and when a bit is 1, the corresponding result bit is 0. Let’s do each character (or byte, more accurately) separately:
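    ~ 0111 0010    ('r' = 114, treated as a single byte)
      ---------
      1000 1101    (= 141 if the byte is read as unsigned)

    ~ 0011 0011    ('3' = 51, treated as a single byte)
      ---------
      1100 1100    (= 204 if the byte is read as unsigned)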

Bitwise Left-shift

The bitwise left-shift operator takes your bits and moves (‘shifts’) them to the left. That is, each bit is moved to the left by however many ‘places’ you ask it to move. But we still need to have the same number of bits at the end of the operation, so a number of 0-bits are added on the right-hand side to keep our data the same size. That is, if we left-shift by 4 bits, then the left-most 4 bits are lost and 4 new 0-bits are added on the right-hand side. Bits do not ‘wrap around’: if they go off the left-hand side of our data then they’re simply discarded.

Let’s left-shift our ‘r’ value by 1 bit:
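    0111 0010 << 1  ->  1110 0100    (114 becomes 228; a single 0-bit has been added on the right)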

Now let’s left-shift our ‘3’ value by 2 bits:
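    0011 0011 << 2  ->  1100 1100    (51 becomes 204; two 0-bits have been added on the right)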

Bitwise Right-shift

The bitwise right-shift operator works just like the bitwise left-shift operator, but this time as you’d imagine we move each bit to the right instead of to the left. The rightmost bits are discarded, and an equal number of new 0-bits are added to the left hand side to keep our data the same size:

Let’s right-shift our ‘r’ value by 1 bit:
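    0111 0010 >> 1  ->  0011 1001    (114 becomes 57; the rightmost bit has been discarded)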

Now let’s right-shift our ‘3’ value by 2 bits:
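    0011 0011 >> 2  ->  0000 1100    (51 becomes 12; the two rightmost 1-bits have been discarded)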

Uses of Bitwise Operators: Cheap Multiplication

Now that we understand the idea of bitwise operations – why would we want to use them? What are they good for? Well, lots of things, really. Take a look at this massive page of bit-twiddling hacks hosted at Stanford (who take their bit-twiddling seriously!) for some potential uses.

A simple example might be multiplying a number by a power of 2. To start off with, let’s multiply a value by 2-to-the-power-of-1 (i.e. 2). When we multiply a value by 2 on a computer, the CPU runs the multiplication through its ALU, so we might write something like:
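    int result = 3 * 2;    // plain multiplication ('result' is an illustrative name)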

The computer goes away, does the math – and we get the final result, which is 6. However, that’s a full multiplication, and we can get the same result with an (arguably faster) manipulation by simply shifting the value 1 bit to the left. In practice, this would go:
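    int result = 3 << 1;   // the same result via a left-shift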

Our binary for 3 is:
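    0000 0011    (= 3)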

So when we left-shift this by 1 bit, we get:
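    0000 0110    (= 6)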

Which is 6 – exactly what we were after! What about if we wanted to multiply something by 2-to-the-power-of-4 (i.e. 16). Well, in this case we can shift 4 bits to the left. Our original 3 again is:
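    0000 0011    (= 3)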

Shifted to the left 4 bits gives:
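    0011 0000    (= 48)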

Which is 48 (i.e. 3 x 2-to-the-power-of-4) – again, exactly what we were after! Similarly, we could right-shift to divide: right-shifting by n bits divides by 2-to-the-power-of-n. For example, shifting our 48 back to the right by 4 bits:
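    0011 0000 >> 4  ->  0000 0011    (= 3, which is 48 / 16 - right back where we started)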

Further uses: Swapping Values without a Temporary Variable

If we have the values 123 and 456 in two variables and we want to swap them, the most common way would be to use code like this:
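    int a = 123;     // (variable names here are illustrative)
    int b = 456;

    int temp = a;    // stash a's value
    a = b;           // a now holds 456
    b = temp;        // b now holds 123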

But what if we wanted to swap them without using a temporary variable? If they were properties of a class which allowed swizzling then you might be able to do something like:
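    v.xy = v.yx;    // assuming a vector-style object 'v' whose swizzles can be both read and assigned (shader-language style, not plain C++)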

And that would do it (again, assuming your class supported swizzling) – but there’s another way to do it using the eXclusive OR (XOR – ^) operator, in a process called an XOR swap, like this:
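    // note: this assumes a and b are two distinct variables - XOR-ing a variable with itself just zeroes it out
    a ^= b;    // a now holds a XOR b
    b ^= a;    // b becomes b XOR (a XOR b), i.e. the original a
    a ^= b;    // a becomes (a XOR b) XOR original-a, i.e. the original b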

It’s a bit of a mind-f*ck – but it completely works – try it for yourself! You can read more about the technique here, if you’re interested: http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/xor.html.

One Final Use: Parity and Checksums

Bit-wise operations are used in the fundamental building blocks of all data storage and transmission. A good example is calculating data parity – a technique used to check that our data (whether in memory, on a disk/disc, or anywhere else) is valid and hasn’t been corrupted somehow. The simplest way we can do this is by storing a parity bit along with our data – which is just a bit indicating whether the number of 1’s in our data is odd or even. We can add a parity bit of 1 if the number of 1’s in the data is odd, otherwise we add a parity bit of 0 – this is called even parity, because we’re making the total number of 1’s (data plus parity bit) an even number. Alternatively, we can add a parity bit of 1 if the number of 1’s in the data is even, otherwise we add a 0 – this is called odd parity, because we’re making the total number of 1’s (data plus parity bit) an odd number.

Just so we’ve got that completely straight, here’s Wikipedia’s take on odd and even parity:

There are two variants of parity bits: even parity and odd parity. In case of even parity, the parity bit is set to 1 if the count of ones in a given set of bits (not including the parity bit) is odd, making the count of ones in the entire set of bits (including the parity bit) even. If the count of ones in a given set of bits is already even, it is set to a 0.

When using odd parity, the parity bit is set to 1 if the count of ones in a given set of bits (not including the parity bit) is even, making the count of ones in the entire set of bits (including the parity bit) odd. When the count of set bits is odd, then the odd parity bit is set to 0.

Get it? Got it? Good! If not, don’t worry – it’ll all make sense when you see an example:

Let’s use the even parity method, and we’ll store the decimal values 1 through 8 in a series of bytes (i.e. 8 bits per byte). In our data as a whole, we’ll use 7 bits for the data itself and the far-left bit as a parity bit:
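    1 0000001    (1 - one 1-bit in the data, so the parity bit is 1)
    1 0000010    (2 - one 1-bit, parity bit 1)
    0 0000011    (3 - two 1-bits, parity bit 0)
    1 0000100    (4 - one 1-bit, parity bit 1)
    0 0000101    (5 - two 1-bits, parity bit 0)
    0 0000110    (6 - two 1-bits, parity bit 0)
    1 0000111    (7 - three 1-bits, parity bit 1)
    1 0001000    (8 - one 1-bit, parity bit 1)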

Now, looking at that data, and according to the parity bit, is there any corruption? The answer being, nope – it looks fine. We have 8×8 bits, and there’s no memory corruption – no random “bit-flipping” has occurred – we’re all good! We had to sacrifice some storage space to store our parity information (or we need to use more storage space to store the same amount of data if we’re also storing parity), but if we care about the correctness of our data then we can live with that.

Now, looking at our very first byte again:
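    1 0000001    (parity bit 1, data = decimal 1)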

Let’s say that a random bit-flip occurs (i.e. the data gets corrupted – just by a single bit!), and we now have:
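    1 0000011    (say the second data bit from the right has flipped from 0 to 1 - any single flipped data bit gives the same kind of mismatch)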

Is that data corrupted? According to the parity bit, it is! We have an even number of 1-bits in the data (which would mean our parity bit should be 0) – and yet our parity bit is 1. We now know that bad things have happened, so we can start taking steps to do something about it! We’d even know the data was corrupted if the parity bit itself was the bit that flipped, because the whole thing still wouldn’t tally correctly.

Another method of doing this, instead of taking 1 bit for parity out of each byte, is to let each byte have the full range of 8-bits, and then after a given number of bytes we append an entire byte of parity data (called a checksum) which stores all the parity bits together in one place. This would then make our data block:
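    0000 0001    (1)
    0000 0010    (2)
    0000 0011    (3)
    0000 0100    (4)
    0000 0101    (5)
    0000 0110    (6)
    0000 0111    (7)
    0000 1000    (8)
    ---------
    0000 1000    (checksum byte: one even-parity bit per column of bits)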

Notice that our vertical parity is not the same as our horizontal parity. You might have wondered what happens if we get two bits (or more!) flipped using our old 7-bits-and-1-bit-parity system – because if any two bits flip, the odd/even result is the same and our parity bit is still valid – but our data is corrupt. The answer is that, given sufficient parity information, we can detect and even track down the location of the corruption and repair it – only we have to use something more involved than the parity/checksum system we’ve been looking at. For example, if we use something like cyclic redundancy checks or Hamming codes (as used in RAID level 2) we can locate and correct the errors.

Back on planet topic, and with regard to how bitwise operators can be used to calculate parity – let’s say we have a variable of some integral data type (i.e. anything that holds whole numbers – int, char etc.) and we want to calculate the parity bit for that value. Using a combination of bitwise XOR, right-shifting and AND-ing, we can do so with something like this:
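    #include <cstdint>

    // A sketch based on the XOR-folding approach from the Stanford bit-twiddling page;
    // the function and variable names are my own.
    // Returns 1 if 'value' contains an odd number of 1-bits, 0 if the count is even.
    unsigned parityOf(std::uint32_t value)
    {
        value ^= value >> 16;    // XOR the top half onto the bottom half - parity is preserved
        value ^= value >> 8;     // keep folding until only the low 4 bits matter...
        value ^= value >> 4;
        value &= 0xF;            // ...then keep just those 4 bits (a value from 0 to 15)
        // 0x6996 acts as a 16-entry lookup table: bit n holds the parity of n
        return (0x6996 >> value) & 1;
    }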

Wrap up

Should you use bit-wise operators in your everyday coding? In general, I’d say no. You’d only ever want to use them as an optimisation in the most performance-critical parts of your code. For example, you’d never really want to perform multiplications using bit-shifting in non-performance-critical code because, dammit Jim, you’re a human not a computer, and we’re more used to seeing code like:
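    x = y * 2;     // (x and y are illustrative names)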

Than:
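    x = y << 1;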

For non-performance-critical code, unless it’s an absolutely perfect fit for what you’re doing, leave bit-twiddling well alone.

However, if you have some piece of code, say in a fragment shader, that runs 2 million times per frame at 60 frames per second – a grand total of 120 million times per second – and you can replace a branch (an if or a switch or something) in that code with a non-branching version that does the exact same thing using some clever bit-level manipulation, then that’s going to reap significant performance rewards, even if the non-branching version is only marginally faster per call. So, as with most things, it’s a speed vs. complexity trade-off, where you get to make the call as you see fit.

Cheers!
