I’ve been perfectly happy with *logical* operators since I started programming a long, long time ago. That is, your standard AND (&&), OR (||) and NOT (!) operators perform the relevant logical operations so you can say:

```cpp
if ((isColdOut) || (isRaining)) { wearCoat(); }
```

or,

```cpp
if ((isSunny) && (haveSunglasses)) { wearSunglasses(); }
```

or,

```cpp
if (!batteryHasCharge) { chargePhone(); }
```

But… my understanding of **bitwise** operators was sketchier. So let’s take a look at the bitwise operators and give them a runthrough to see what they do.

#### Bitwise Operator List

Let’s start by listing the available bitwise operators:

Symbol | Bitwise Operator
---|---
`&` | Bitwise AND
`\|` | Bitwise Inclusive OR
`^` | Bitwise Exclusive OR
`~` | Bitwise NOT
`<<` | Bitwise Left-shift
`>>` | Bitwise Right-shift

So we have 6 basic operators, but we also have the compound versions of these operators, so instead of saying:

```cpp
A = A & B;
```

We can say:

```cpp
A &= B;
```

And they’ll mean the exact same thing – just for completeness, let’s throw those in a table as well:

Symbol | Bitwise Operator
---|---
`&=` | Bitwise Compound AND
`\|=` | Bitwise Compound Inclusive OR
`^=` | Bitwise Compound Exclusive OR
`<<=` | Bitwise Compound Left-shift
`>>=` | Bitwise Compound Right-shift

Note that there's no compound form of bitwise NOT (i.e. no `~=` operator) – NOT is a unary operator, so there's no second operand to combine with in a compound assignment.

So now that we have these operators, let’s see what we can do with them…

#### Bitwise Operators in Action

Let’s take two characters (chars) stored as a single byte each, and we’ll assign them the ASCII codes for the characters ‘r’ and ‘3’ – which are 114 and 51, respectively:

```
r = 114 in decimal = 0111 0010 in binary
    i.e. (0 x 128) + (1 x 64) + (1 x 32) + (1 x 16) + (0 x 8) + (0 x 4) + (1 x 2) + (0 x 1) = 114
3 =  51 in decimal = 0011 0011 in binary
    i.e. (0 x 128) + (0 x 64) + (1 x 32) + (1 x 16) + (0 x 8) + (0 x 4) + (1 x 2) + (1 x 1) = 51
```

When performing our bitwise operations, we perform the operation for each matching bit that we’re comparing – and that’s basically the idea of it right there!

#### Bitwise AND

When both of the corresponding bits are 1, then the resulting bit is 1, otherwise the resulting bit is 0:

```
r   0111 0010
3   0011 0011
&   ---------
    0011 0010
```

#### Bitwise Inclusive OR

When either of the corresponding bits is 1, then the resulting bit is 1, otherwise the resulting bit is 0. If **both** bits are 1, then that’s fine and the result is still 1:

```
r   0111 0010
3   0011 0011
|   ---------
    0111 0011
```

#### Bitwise Exclusive OR

When either of the corresponding bits is 1, then the resulting bit is 1, otherwise the resulting bit is 0. However, if **both** bits are 1, then the result is 0 (this is what makes it *exclusive*):

```
r   0111 0010
3   0011 0011
^   ---------
    0100 0001
```

Compare this with the inclusive OR we just looked at and you’ll see that the 3rd, 4th and 7th bits from the left are 1’s in our inclusive OR but 0’s in our exclusive OR – those are exactly the positions where both inputs have a 1.

#### Bitwise NOT

Any NOT operator (bitwise or ‘standard’) is a **unary** operator – that is, it only takes a single value to work with (not two values).

When a bit is 0, the corresponding result bit is 1, and when a bit is 1, the corresponding result bit is 0. Let’s do each character (or byte, more accurately) separately:

```
r   0111 0010
~   ---------
    1000 1101
```

```
3   0011 0011
~   ---------
    1100 1100
```

#### Bitwise Left-shift

The bitwise left-shift operator takes your bits and moves (‘shifts’) them to the left – each bit is moved to the left however many ‘places’ you ask for. But we still need to have the same number of bits at the end of the operation, so 0-bits are added to the right-hand side to keep our data the same size. That is, if we left-shift by 4 bits, the left-most 4 bits are lost, and 4 new 0-bits are added on the right. Bits do not ‘wrap around’: if they go off the left-hand side of our data, they’re simply discarded.

Let’s left-shift our ‘r’ value by 1 bit:

```
r    0111 0010
<< 1 ---------
     1110 0100
```

Now let’s left-shift our ‘3’ value by 2 bits:

```
3    0011 0011
<< 2 ---------
     1100 1100
```

#### Bitwise Right-shift

The bitwise right-shift operator works just like the bitwise left-shift operator, but this time as you’d imagine we move each bit to the right instead of to the left. The rightmost bits are discarded, and an equal number of new 0-bits are added to the left hand side to keep our data the same size:

Let’s right-shift our ‘r’ value by 1 bit:

```
r    0111 0010
>> 1 ---------
     0011 1001
```

Now let’s right-shift our ‘3’ value by 2 bits:

```
3    0011 0011
>> 2 ---------
     0000 1100
```

#### Uses of Bitwise Operators: Cheap Multiplication

Now that we understand the idea of bitwise operations – why would we want to use them? What are they good for? Well, lots of things, really. Take a look at the massive page of bit-twiddling hacks hosted at Stanford (http://graphics.stanford.edu/~seander/bithacks.html – they take their bit-twiddling seriously!) for some potential uses.

A simple example might be multiplying a number by a power of 2. To start off with, let’s multiply a value by 2-to-the-power-of-1 (i.e. 2). When we multiply a value by 2 on a computer, it goes through the ALU on the CPU, which performs a full multiplication, so we might write:

```cpp
int value  = 3;
int result = value * 2;
```

The computer goes away, does the math – and we get the final result, which is 6. However it’s a *calculation*, and we can perform an (arguably faster) *manipulation* by simply shifting the value 1 bit to the left to get the same result. In practice, this would go:

```cpp
int value  = 3;
int result = value << 1;
```

Our binary for 3 is:

```
0 0 0 0 0 0 1 1
```

So when we left-shift this by 1 bit, we get:

```
0 0 0 0 0 1 1 0
```

Which is **6** – exactly what we were after! What if we wanted to multiply something by 2-to-the-power-of-4 (i.e. 16)? Well, in that case we can shift 4 bits to the left. Our original 3 again is:

```
0 0 0 0 0 0 1 1
```

Shifted to the left 4 bits gives:

```
0 0 1 1 0 0 0 0
```

Which is **48** (i.e. 3 x 2-to-the-power-of-4) – again, exactly what we were after! Similarly, we can right-shift to divide: each bit we right-shift divides the value by 2, discarding any remainder.

#### Further uses: Swapping Values without a Temporary Variable

If we have the values 123 and 456 in two variables and we want to swap them, the most common way would be to use code like this:

```cpp
int valueA = 123;
int valueB = 456;

int temp = valueA;
valueA = valueB;
valueB = temp;
```

But what if we wanted to swap them without using a temporary variable? If they were properties of a class which allowed swizzling then you might be able to do something like:

```cpp
myObject.a  = 123;
myObject.b  = 456;
myObject.ab = myObject.ba;
```

And that would do it (again, assuming your class supported swizzling) – but there’s another way to do it using the eXclusive OR (XOR – ^) operator, in a process called an XOR swap, like this:

```cpp
#include <iostream>
using std::cout;
using std::endl;

int main()
{
    int valueA = 123;
    int valueB = 456;

    valueA ^= valueB;
    valueB ^= valueA;
    valueA ^= valueB;

    cout << "Value A is: " << valueA << endl; // 456
    cout << "Value B is: " << valueB << endl; // 123

    return 0;
}
```

It’s a bit of a mind-f*ck – but it completely works – try it for yourself! One caveat: don’t use it to swap a value with itself (or via two pointers/references to the same variable) – the first XOR would zero the value, and it would stay zero. You can read more about the technique here, if you’re interested: http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/xor.html.

#### One Final Use: Parity and Checksums

Bitwise operations are used in the fundamental building blocks of all data storage and transmission. A good example is calculating data parity – a technique used to check that our data (whether in memory, on a disk/disc, or anywhere else) is valid and hasn’t been corrupted somehow. The simplest way to do this is by storing a parity bit along with our data – a single bit indicating whether the number of 1’s in our data is odd or even. With *even parity*, we add a parity bit of 1 if the number of 1’s in the data is odd, otherwise we add a parity bit of 0 – making the total number of 1’s (data plus parity bit) an even number. With *odd parity* it’s the reverse: we add a parity bit of 1 if the number of 1’s in the data is even, otherwise we add a 0 – making the total number of 1’s odd.

Just so we’ve got that completely straight, here’s Wikipedia’s take on odd and even parity:

> There are two variants of parity bits: even parity and odd parity. In case of even parity, the parity bit is set to 1 if the count of ones in a given set of bits (not including the parity bit) is odd, making the count of ones in the entire set of bits (including the parity bit) even. If the count of ones in a given set of bits is already even, it is set to a 0.
>
> When using odd parity, the parity bit is set to 1 if the count of ones in a given set of bits (not including the parity bit) is even, making the count of ones in the entire set of bits (including the parity bit) odd. When the count of set bits is odd, then the odd parity bit is set to 0.

Get it? Got it? Good! If not, don’t worry – it’ll all make sense when you see an example:

Let’s use the *even parity* method, and we’ll store the decimal values 1 through 8 in a series of bytes (i.e. 8 bits per byte). In our data as a whole, we’ll use 7 bits for the data itself and the far-left bit as a parity bit:

```
Parity Bit | Data
     1     | 0 0 0 0 0 0 1   <-- 1 in decimal, odd number of 1's, so parity bit is 1 to make the number of 1's even
     1     | 0 0 0 0 0 1 0   <-- 2 in decimal, odd number of 1's, so parity bit is 1 to make the number of 1's even
     0     | 0 0 0 0 0 1 1   <-- 3 in decimal, even number of 1's, so parity bit is 0 to leave the number of 1's even
     1     | 0 0 0 0 1 0 0   <-- 4 in decimal, odd number of 1's, so parity bit is 1 to make the number of 1's even
     0     | 0 0 0 0 1 0 1   <-- 5 in decimal, even number of 1's, so parity bit is 0 to leave the number of 1's even
     0     | 0 0 0 0 1 1 0   <-- 6 in decimal, even number of 1's, so parity bit is 0 to leave the number of 1's even
     1     | 0 0 0 0 1 1 1   <-- 7 in decimal, odd number of 1's, so parity bit is 1 to make the number of 1's even
     1     | 0 0 0 1 0 0 0   <-- 8 in decimal, odd number of 1's, so parity bit is 1 to make the number of 1's even
```

Now, looking at that data, and according to the parity bit, is there any corruption? The answer being, nope – it looks fine. We have 8×8 bits, and there’s no memory corruption – no random “bit-flipping” has occurred – we’re all good! We had to sacrifice some storage space to store our parity information (or we need to use more storage space to store the same amount of data if we’re also storing parity), but if we care about the correctness of our data then we can live with that.

Now, looking at our very first byte again:

```
Parity Bit | Data
     1     | 0 0 0 0 0 0 1
```

Let’s say that a random bit-flip occurs (i.e. the data gets corrupted – just by a single bit!), and we now have:

```
Parity Bit | Data
     1     | 1 0 0 0 0 0 1
             ^
             Used to be 0, now corrupted to 1!
```

Is that data corrupted? According to the parity bit, it is! We now have an **even** number of 1’s in the data (which would mean our parity bit should be **0**) – and yet our parity bit is **1**. We now know that bad things have happened, so we can start taking steps to do something about it! We’d even know the data was corrupted if the parity bit itself was the bit that flipped, because the whole thing still wouldn’t tally correctly.

Another method of doing this, instead of taking 1 bit for parity out of each byte, is to let each byte have the full range of 8-bits, and then after a given number of bytes we append an entire byte of parity data (called a checksum) which stores all the parity bits together in one place. This would then make our data block:

```
Data
0 0 0 0 0 0 0 1   <-- 1 in decimal
0 0 0 0 0 0 1 0   <-- 2 in decimal
0 0 0 0 0 0 1 1   <-- 3 in decimal
0 0 0 0 0 1 0 0   <-- 4 in decimal
0 0 0 0 0 1 0 1   <-- 5 in decimal
0 0 0 0 0 1 1 0   <-- 6 in decimal
0 0 0 0 0 1 1 1   <-- 7 in decimal
0 0 0 0 1 0 0 0   <-- 8 in decimal

Parity Byte
0 0 0 0 1 0 0 0   <-- Vertical parity (count the 1's in each column - parity is 1 if the count is odd, 0 otherwise)
```

Notice that our vertical parity is not the same as our horizontal parity. You might have wondered what happens if we get two bits (or more!) flipped using our old 7-bits-and-1-bit-parity system – because if any two bits flip, the odd/even result is the same and our parity bit is still valid – but our data is corrupt. The answer is that, given sufficient parity information, we can detect and even track down the location of the corruption and repair it – only we have to use something more involved than the parity/checksum system we’ve been looking at. For example, if we use something like cyclic redundancy checks or Hamming codes (as used in RAID level 2) we can locate and correct the errors.

Back on planet topic, and with regard to how bitwise operators can be used to calculate parity – let’s say we had a variable of some integral data type (i.e. anything that uses whole numbers – int, char etc.), and we want to calculate the parity bit for that value – well, using a combination of bitwise XOR, right-shift and AND-ing, we can do so like this:

```cpp
// Templatised method to calculate the parity bit for any integral
// type of data. Uses the parallel parity technique as found here:
// http://graphics.stanford.edu/~seander/bithacks.html#ParityNaive
template <typename T>
bool calcEvenParity(T value)
{
    // Note: We could place a sanity check here to ensure the data type
    // is integral (i.e. if (!std::is_integral<T>::value) { bail! } )
    // But trying to pass a non-integral type to the right-shift operator
    // results in a compile-time failure so there's really no need to.
    // (For signed types, pass the value as an unsigned type to avoid
    // sign-extension creeping in on the right-shifts below.)

    // Calc half the size of the type, expressed in bits
    unsigned int typeSizeInBits = sizeof(T) * 4;

    // Fold our number down to a 4-bit number with the same parity.
    // Note: the condition must include 4 itself so the final
    // "value ^= value >> 4" fold actually happens.
    while (typeSizeInBits >= 4)
    {
        value ^= value >> typeSizeInBits;
        typeSizeInBits /= 2;
    }

    // Keep only the low 4 bits (bitwise AND with 0xf, i.e. 15)
    value &= 0xf;

    // Finally, the hex value 0x6996 is the binary value: 0110 1001 1001 0110
    // This sequence of binary digits is used as a look-up table giving the
    // correct parity bit for any 4-bit number, which is exactly what we
    // have now that we've crunched our value down in the above loop! Clever!
    // An excellent explanation of how it all works can be found here:
    // http://stackoverflow.com/questions/3202745/question-from-bit-twiddling-site
    return (0x6996 >> value) & 1;
}
```

#### Wrap up

Should you use bitwise operators in your everyday coding? In general, I’d say no. You’d only ever want to use them as an optimisation of the most performance-critical parts of your code. For example, you’d never really want to perform multiplications using bit-shifting in non-performance-critical code because, dammit Jim, you’re a human not a computer, and we’re more used to seeing code like:

```cpp
int result = value * 16;
```

Than:

```cpp
int result = value << 4;
```

For non-performance-critical code, unless it’s an absolutely perfect fit for what you’re doing, leave bit-twiddling well alone.

However, if you have some piece of code, say in a fragment shader, that runs 2 million times per frame (at 60 frames per second), meaning it runs a grand total of 120 million times per second… if you can replace a branch (like an if or switch) in that code with a non-branching version that does the exact same thing via some clever bit-level manipulation trick, then that’s going to reap significant performance rewards even if the non-branching version is only marginally faster per call. So, as with most things, it’s a speed vs. complexity trade-off, where you get to make the call as you see fit.

Cheers!

I’m just bored, going through my bookmarks, and I thought I remembered you posting about something at some point, but then I came across this post and thought “oh yeah” – because I’m just starting to use bitwise operations myself in AS3, GML (GameMaker Studio) and C# (which I’m taking online lessons in), I could check it out. I got to where you were taking two characters and storing them as bytes. Looks like I have a way to go xD

What is it exactly you teach?

I teach a lot of Android development and graphics programming these days, with some 3D modelling, multimedia & straight programming on the side, Rob.

Maybe this would help you get the binary representation side of things: http://www.mathsisfun.com/binary-number-system.html

Wow, I don’t even make any sense there ^ I meant when you got up to taking two characters and storing them as bytes I got so lost it wasn’t funny. I thought I had the gist of what was going on.