Bit manipulation is basically inspecting or changing one or more bits in the binary representation of a number while leaving the other bits alone.
You didn't mention your bit numbering order, so I assume that it's the so-called "little endian" version that starts with the least significant bit as "bit 0". If not, then you'll need to alter what follows accordingly.
In languages based on C, the easiest way to inspect and modify single bits is by using the "bitwise" operators:
.... a & b : each bit of the result is the AND of the corresponding bits of a and b
.... a | b : each bit of the result is the OR of the corresponding bits of a and b
.... a ^ b : each bit of the result is the XOR (exclusive or) of the corresponding bits of a and b
.... ~a : each bit of the result is the opposite (complement) of the corresponding bit of a
.... a << k : the value of a with all bits shifted left by k positions
.... a >> k : the value of a with all bits shifted right by k positions
Those are the basics. Left shifts always fill with zeros on the right. Right shifts of unsigned data also fill with zeros on the left, but right shifts of signed values typically fill with copies of the original sign bit (in C that behavior is technically implementation-defined, so stick to unsigned types when you care about the exact bit pattern).
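To make those concrete, here's a small C sketch (the values are just ones I picked for illustration) that shows what each operator does to a pair of 8-bit values:

    #include <stdio.h>

    int main(void)
    {
        unsigned char a = 0x2C;   /* 0010 1100 */
        unsigned char b = 0x0A;   /* 0000 1010 */

        printf("a & b  = 0x%02X\n", a & b);    /* 0x08: bits set in both a and b      */
        printf("a | b  = 0x%02X\n", a | b);    /* 0x2E: bits set in either a or b     */
        printf("a ^ b  = 0x%02X\n", a ^ b);    /* 0x26: bits set in exactly one of them */
        printf("~a     = 0x%02X\n", (unsigned char)~a); /* 0xD3: every bit of a flipped */
        printf("a << 1 = 0x%02X\n", a << 1);   /* 0x58: shifted left, zero fill       */
        printf("a >> 1 = 0x%02X\n", a >> 1);   /* 0x16: unsigned, so zero fill        */
        return 0;
    }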
With those operations (good in C#, Python, Java, as well as in the original C/C++) you can use:
1<<k : a value with a 1 in bit k and 0 everywhere else
a&(1<<k) : nonzero if bit k of a is 1, zero if bit k of a is 0 (tests bit k)
a|(1<<k) : a copy of a with bit k forced to 1 (sets bit k)
a&~(1<<k) : a copy of a with bit k forced to 0 (clears bit k)
a^(1<<k) : a copy of a with bit k flipped (toggles bit k)
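Here's a quick C sketch of those idioms in action (the particular value and bit positions are just made up for illustration):

    #include <stdio.h>

    int main(void)
    {
        unsigned a = 0x2C;   /* 0010 1100 */
        unsigned k = 3;

        printf("1<<k          = 0x%02X\n", 1u << k);                        /* 0x08 */
        printf("bit k of a is %s\n", (a & (1u << k)) ? "set" : "clear");    /* set  */
        printf("set bit 0     = 0x%02X\n", a | (1u << 0));                  /* 0x2D */
        printf("clear bit k   = 0x%02X\n", a & ~(1u << k));                 /* 0x24 */
        printf("toggle bit 5  = 0x%02X\n", a ^ (1u << 5));                  /* 0x0C */
        return 0;
    }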
For handling multiple contiguous bits as a single value, the following are handy to remember:
((1<<k)-1) : a mask with 1s in the lowest k bits and 0s everywhere above them
a & ((1<<k)-1) : just the lowest k bits of a, with everything above them cleared
a & ~((1<<k)-1) : a with the lowest k bits cleared and the rest left alone
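For instance, here's a short C sketch (values chosen only for illustration) that uses such a mask to grab the low bits of a value, or to pull out a small field of bits starting at some position p:

    #include <stdio.h>

    int main(void)
    {
        unsigned a = 0xB5;                 /* 1011 0101 */
        unsigned k = 4;
        unsigned mask = (1u << k) - 1u;    /* 0000 1111: lowest k bits set */

        printf("mask          = 0x%02X\n", mask);      /* 0x0F */
        printf("low 4 bits    = 0x%02X\n", a & mask);  /* 0x05 */
        printf("upper bits    = 0x%02X\n", a & ~mask); /* 0xB0 */

        /* Extract a k-bit field starting at bit p: shift down, then mask. */
        unsigned p = 2;
        printf("bits p..p+k-1 = 0x%02X\n", (a >> p) & mask);  /* 0x0D */
        return 0;
    }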
It used to be common to use function-like #define macros to create pseudo-functions like setbit(a,k), clearbit(a,k), etc. that perform these operations without actual function calls.
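Such macros might look roughly like this (the exact names and spellings vary from codebase to codebase; these are just illustrative):

    /* Classic function-like macros for the single-bit idioms above.
       Every argument is parenthesized so the macros expand safely. */
    #define SETBIT(a, k)    ((a) |  (1u << (k)))
    #define CLEARBIT(a, k)  ((a) & ~(1u << (k)))
    #define TOGGLEBIT(a, k) ((a) ^  (1u << (k)))
    #define TESTBIT(a, k)   (((a) >> (k)) & 1u)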
If you want to learn this stuff, I suggest playing around with these operations on small (maybe 8-bit) values with pencil and paper and watching what happens to each bit. That should help you "get" what's going on, and maybe you'll come up with new patterns of your own as the need arises.