Bitwise operations and Flags (C#)

I haven’t written anything in a while, so I thought I would finally write about the subject of bitwise operations and the FlagsAttribute.

I mentioned this to one of the developers on my team, and he said that he somewhat understood bit operations but he had never found a reason to use them.

Here is the code I will use to demonstrate the operations:

int a = 57754;
int b = 18782;
			
int aXORb = a ^ b;
int aORb = a | b;
int aANDb = a & b;
int aNOT = ~a;
int bNOT = ~b;
			
string spacer = "---------------------------------------------";
Console.WriteLine("{0} ({1}) a", GetBitString(a), a.ToString());
Console.WriteLine("{0} ({1}) b", GetBitString(b), b.ToString());
Console.WriteLine("{0} ({1}) a ^ b", GetBitString(aXORb), aXORb);
Console.WriteLine(spacer);	
Console.WriteLine("{0} ({1}) a", GetBitString(a), a.ToString());
Console.WriteLine("{0} ({1}) b", GetBitString(b), b.ToString());
Console.WriteLine("{0} ({1}) a | b", GetBitString(aORb), aORb);
Console.WriteLine(spacer);	
Console.WriteLine("{0} ({1}) a", GetBitString(a), a.ToString());
Console.WriteLine("{0} ({1}) b", GetBitString(b), b.ToString());
Console.WriteLine("{0} ({1}) a & b", GetBitString(aANDb), aANDb);
Console.WriteLine(spacer);	
Console.WriteLine("{0} ({1}) a", GetBitString(a), a.ToString());
Console.WriteLine("{0} ({1}) ~a", GetBitString(aNOT), aNOT);
Console.WriteLine("{0} ({1}) ~(~a)", GetBitString(~aNOT), ~aNOT);
Console.WriteLine(spacer);	
Console.WriteLine("{0} ({1}) b", GetBitString(b), b.ToString());
Console.WriteLine("{0} ({1}) ~b", GetBitString(bNOT), bNOT);
Console.WriteLine("{0} ({1}) ~(~b)", GetBitString(~bNOT), ~bNOT);
			
Console.ReadLine();

// And, the GetBitString method
static string GetBitString(int input){
	// Number of bits is bytes * 8. sizeof(ushort) is a compile-time
	// constant for a built-in type, so no unsafe block is needed.
	// I specifically chose values that fit in 16 bits to reduce the amount displayed.
	int sizeInt = sizeof(ushort) * 8;
	
	// Note: the loop runs from bit 16 down to bit 0, so 17 bits are
	// printed; the extra leading bit shows the sign, discussed below.
	string output = String.Empty;
	for (; sizeInt >= 0; sizeInt--) {
		output += (input >> sizeInt) & 1;
		if(sizeInt % 4 == 0) { output += " "; }
	}
	
	return output;
}
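As a side note, .NET also has a built-in way to get a binary string: Convert.ToString(value, 2) renders the 32-bit two's-complement representation. A small sketch showing it can reproduce the same 17 bits GetBitString walks (the padding and slicing are mine, and this version skips the nibble spacing):

```csharp
using System;

class BitStringAlternative
{
	static void Main()
	{
		int a = 57754;

		// Convert.ToString(value, 2) omits leading zeros, so pad to the
		// full 32 bits, then keep the last 17 characters (bits 16..0) to
		// match the range GetBitString prints.
		string bits = Convert.ToString(a, 2).PadLeft(32, '0');
		Console.WriteLine(bits.Substring(15)); // 01110000110011010
	}
}
```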

XOR ( ^ )

From MSDN:

Binary ^ operators are predefined for the integral types and bool. For integral types, ^ computes the bitwise exclusive-OR of its operands. For bool operands, ^ computes the logical exclusive-or of its operands; that is, the result is true if and only if exactly one of its operands is true.

I think of XOR as a “one-toggle”. The logic can be seen as (format is [first] : [second] –> [result] ):

0 : 1 --> 1
1 : 1 --> 0
1 : 0 --> 1
0 : 0 --> 0

As you can see, the result only changes when the second value, which I call the toggle, is a 1.

If you were to run the code above, you would receive the following output:

0 1110 0001 1001 1010  (57754) a
0 0100 1001 0101 1110  (18782) b
0 1010 1000 1100 0100  (43204) a ^ b

Compare the values in each column and notice that the result on the third line only toggles if the value in line two is a 1.
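A handy consequence of the toggle behavior: XOR is its own inverse, so applying the same mask twice gets you back where you started. A quick sketch using the values from above:

```csharp
using System;

class XorToggleDemo
{
	static void Main()
	{
		int a = 57754;
		int mask = 18782;

		int once = a ^ mask;     // toggles the bits where mask has a 1
		int twice = once ^ mask; // toggling the same bits again restores a

		Console.WriteLine(once);  // 43204
		Console.WriteLine(twice); // 57754
	}
}
```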

OR ( | )

From MSDN:

Binary | operators are predefined for the integral types and bool. For integral types, | computes the bitwise OR of its operands. For bool operands, | computes the logical OR of its operands; that is, the result is false if and only if both its operands are false.

A bitwise-or, in my eyes, collects all bits that are “on” or 1. The logic can be seen as (format is [first] : [second] –> [result] ):

0 : 1 --> 1
1 : 1 --> 1
1 : 0 --> 1
0 : 0 --> 0

As you can see, whenever there is a 1 in the first or second value, the resulting value is 1.

If you were to run the code above, you would receive the following output:

0 1110 0001 1001 1010  (57754) a
0 0100 1001 0101 1110  (18782) b
0 1110 1001 1101 1110  (59870) a | b

Compare the values in each column and notice that any column containing a 1 results in a 1.
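This collect-the-ones behavior is why OR is the usual way to set a bit without disturbing the others. A small sketch:

```csharp
using System;

class OrSetDemo
{
	static void Main()
	{
		int value = 0b0100;            // bit 2 set
		int withBit0 = value | 0b0001; // set bit 0; bit 2 is untouched

		Console.WriteLine(withBit0); // 5

		// OR-ing in a bit that is already set changes nothing:
		Console.WriteLine(withBit0 | 0b0001); // still 5
	}
}
```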

AND ( & )

From MSDN:

Binary & operators are predefined for the integral types and bool. For integral types, & computes the bitwise AND of its operands. For bool operands, & computes the logical AND of its operands; that is, the result is true if and only if both its operands are true.

I like to think of the bitwise-and as a truth-only operation. Only the columns where both bits are 1 produce a 1 in the result. This is like saying “If true and false, then false” or “If true and true, then true”. The logic can be seen as (format is [first] : [second] –> [result] ):

0 : 1 --> 0
1 : 1 --> 1
1 : 0 --> 0
0 : 0 --> 0

If you were to run the code above, you would receive the following output:

0 1110 0001 1001 1010  (57754) a
0 0100 1001 0101 1110  (18782) b
0 0100 0001 0001 1010  (16666) a & b

Notice how the result only contains a 1 if both bits in the column are 1.
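Because AND only keeps a 1 where both operands have one, it is the standard way to test whether a particular bit is set, or to mask a value down to just the bits you care about. A sketch using the value of a from above:

```csharp
using System;

class AndMaskDemo
{
	static void Main()
	{
		int a = 57754; // 1110 0001 1001 1010

		// Test a single bit: a non-zero result means bit 7 is set.
		bool bit7Set = (a & (1 << 7)) != 0;
		Console.WriteLine(bit7Set); // True

		// Mask down to the low byte (1001 1010).
		int lowByte = a & 0xFF;
		Console.WriteLine(lowByte); // 154
	}
}
```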

NOT ( ~ )

From MSDN:

The ~ operator performs a bitwise complement operation on its operand. Bitwise complement operators are predefined for int, uint, long, and ulong.

The not (or bitwise complement) gives you the opposite value for every bit. It is a unary operator, so it takes a single operand rather than two and may not seem like it belongs in a discussion of binary operations, but I think it’s important to understand what happens.

The logic can be seen as (format is [value] –> [result] ):

0  --> 1
1  --> 0

If you were to run the code above, you would receive the following output:

0 1110 0001 1001 1010  (57754) a
1 0001 1110 0110 0101  (-57755) ~a
0 1110 0001 1001 1010  (57754) ~(~a)
---------------------------------------------
0 0100 1001 0101 1110  (18782) b
1 1011 0110 1010 0001  (-18783) ~b
0 0100 1001 0101 1110  (18782) ~(~b)

You may wonder why the complement of 18782 would be -18783. The first bit, which you may or may not have noticed sitting rogue in front of the 16 bits in the outputs so far, shows the sign (in the full 32-bit int it is one of the upper bits, which all flip together). In all of the output so far, the values have been positive, so it has been 0. The complement of a value is its negative minus 1: in two’s-complement arithmetic, negating a value means flipping every bit and then adding 1 (-x == ~x + 1), so rearranging gives ~x == -x - 1. That extra -1 is the offset that may seem strange.
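The identity ~x == -x - 1 is easy to verify directly:

```csharp
using System;

class ComplementDemo
{
	static void Main()
	{
		int b = 18782;

		Console.WriteLine(~b);           // -18783
		Console.WriteLine(-b - 1);       // -18783, the same thing
		Console.WriteLine(~b + 1 == -b); // True: negation is flip-then-add-one
	}
}
```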

Practical (maybe?) application

Using the code to output the bit string from above, let’s implement a simple CRUD flag.

[Flags]
public enum Permissions
{
	CREATE		= 1 << 0,
	READ		= 1 << 1,
	UPDATE		= 1 << 2,
	DELETE		= 1 << 3
}

// in your main method
Permissions c = Permissions.CREATE;
Permissions r = Permissions.READ;
Permissions u = Permissions.UPDATE;
Permissions d = Permissions.DELETE;
Permissions crud = Permissions.CREATE ^ Permissions.READ ^ Permissions.UPDATE ^ Permissions.DELETE;

Console.WriteLine(spacer);
Console.WriteLine("{0} ({1}) Create", GetBitString((int)c), ((int)c).ToString());
Console.WriteLine("{0} ({1}) Read", GetBitString((int)r), ((int)r).ToString());
Console.WriteLine("{0} ({1}) Update", GetBitString((int)u), ((int)u).ToString());
Console.WriteLine("{0} ({1}) Delete", GetBitString((int)d), ((int)d).ToString());
Console.WriteLine("{0} ({1}) CRUD", GetBitString((int)crud), ((int)crud).ToString());

To start, we have the Permissions enum marked with the FlagsAttribute. Notice that a general naming convention for flags is to pluralize the enum name, since a variable can hold more than one value at a time (the MSDN examples in the link don’t stick to this naming convention 100%).

In the Permissions enum, we shift each bit into the desired position. *NOTE* If you don’t give each member a distinct power-of-two value (here done by shifting), the compiler won’t raise an error, but overlapping values make it impossible to tell the flags apart once they are combined.

Because the underlying type of an enum defaults to int, we’ll have to cast to int before passing the values into the GetBitString method. Note that XOR works for combining the flags here only because each flag occupies a distinct bit and each is applied once; the conventional combining operator is |, which produces the same result in this case.

Here is the output from the above code:

0 0000 0000 0000 0001  (1) Create
0 0000 0000 0000 0010  (2) Read
0 0000 0000 0000 0100  (4) Update
0 0000 0000 0000 1000  (8) Delete
0 0000 0000 0000 1111  (15) CRUD
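With the flags in place, the operators above map directly onto permission checks: & tests a permission, | grants one, and & combined with ~ revokes one. A sketch that redefines the same Permissions enum so it runs on its own:

```csharp
using System;

[Flags]
public enum Permissions
{
	CREATE = 1 << 0,
	READ   = 1 << 1,
	UPDATE = 1 << 2,
	DELETE = 1 << 3
}

class PermissionDemo
{
	static void Main()
	{
		Permissions crud = Permissions.CREATE | Permissions.READ
		                 | Permissions.UPDATE | Permissions.DELETE;

		// Test: AND keeps only the bit we ask about.
		bool canDelete = (crud & Permissions.DELETE) == Permissions.DELETE;
		Console.WriteLine(canDelete); // True

		// Revoke: AND with the complement clears exactly that bit.
		Permissions withoutDelete = crud & ~Permissions.DELETE;
		Console.WriteLine((int)withoutDelete); // 7 (CREATE | READ | UPDATE)

		// Enum.HasFlag does the same test, with a little extra overhead.
		Console.WriteLine(withoutDelete.HasFlag(Permissions.DELETE)); // False
	}
}
```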

Feel free to play around with the operators in the CRUD variable above and see how this could be useful in, for example, a Linux environment, where file permissions are stored as -rwxr-xr-x.
