In terms of the ANSI and ISO standards for the C language, it's a bit fuzzier than what's been said so far. Yes, char is a data type that can hold exactly one character of the execution character set, but it is an integer type. The value of a char object is the numeric character code (usually ASCII-based) used during program execution. The standard doesn't even say whether plain char is signed or unsigned, but there are explicit signed char and unsigned char types available if you need to pin that down.
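To see the "it's really an integer" point in action, here's a minimal sketch (assuming an ASCII-based execution character set, so 'A' happens to be code 65):

    #include <stdio.h>

    int main(void)
    {
        char c = 'A';

        /* A char is an integer type, so it can be printed as a number.
           On an ASCII-based implementation this shows code 65. */
        printf("'%c' has character code %d\n", c, c);

        /* If signedness matters, say so explicitly. */
        signed char   sc = -1;   /* always signed   */
        unsigned char uc = 255;  /* always unsigned */
        printf("sc = %d, uc = %u\n", sc, (unsigned)uc);

        return 0;
    }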
The C language does not define the exact size of a char, but you can infer that at least 7 bits are required, because the standard lists 91 required printable characters for the execution character set. Add 1 for the space, 8 more for the required control characters, and 1 for the null character that terminates strings, and you get at least 101 distinct characters that need representing. Since the printable codes are required to have *nonnegative* values when stored in a char, which may be a signed type, nearly every implementation will need another bit, making 8 bits the practical minimum for the size of a char.
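In fact the standard also pins this down directly: <limits.h> must define CHAR_BIT with a value of at least 8, and its CHAR_MIN/CHAR_MAX macros tell you whether plain char is signed on your implementation. A quick check using only those standard macros:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Number of bits in a char; required to be at least 8. */
        printf("CHAR_BIT = %d\n", CHAR_BIT);

        /* CHAR_MIN is 0 if plain char is unsigned here,
           and SCHAR_MIN (at most -127) if it is signed. */
        printf("CHAR_MIN = %d, CHAR_MAX = %d\n", CHAR_MIN, CHAR_MAX);

        return 0;
    }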
There is no upper limit. A compiler for a classic supercomputer with word addressing only might have 64-bit chars. A compiler intended for use with non-Roman alphabets might have 16-bit chars and an underlying 16-bit Unicode (UCS-2/UTF-16) representation. (Yes, there is a wchar_t "wide character" type that could also be used, but an implementation isn't required to make char any smaller than wchar_t.)
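If you're curious what your own implementation picked, here's a small sketch; the values printed are implementation-defined (1 and 2, or 1 and 4, are common on desktop compilers, but nothing requires that):

    #include <stddef.h>   /* wchar_t */
    #include <stdio.h>

    int main(void)
    {
        printf("sizeof (char)    = %zu\n", sizeof (char));     /* always 1 */
        printf("sizeof (wchar_t) = %zu\n", sizeof (wchar_t));  /* implementation-defined */
        return 0;
    }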
One thing that is required is that the allocation for any other type is an integral multiple of the size of a char. The sizeof operator returns the size of the result of an expression (or of a parenthesized type), measured in char units. So sizeof (char) is always 1, and:
    sizeof (char) <= sizeof (short) <= sizeof (int) <= sizeof (long) <= sizeof (long long)
...is the basic guarantee. There are further guarantees about minimum sizes for the larger integer types, but there is no maximum size for any type.
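Putting the guarantees together, here's a quick sketch that prints what one particular implementation chose; only the ordering and sizeof (char) == 1 are portable, the other numbers vary from platform to platform:

    #include <stdio.h>

    int main(void)
    {
        /* All sizes are measured in units of char, so char is always 1.
           Everything else is implementation-defined, subject to the
           ordering above and the minimum ranges the standard requires. */
        printf("char:      %zu\n", sizeof (char));
        printf("short:     %zu\n", sizeof (short));
        printf("int:       %zu\n", sizeof (int));
        printf("long:      %zu\n", sizeof (long));
        printf("long long: %zu\n", sizeof (long long));
        return 0;
    }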