
char + char = int? Why? [duplicate]

Posted by: admin November 30, 2017




According to the documentation for char, it can be implicitly converted to integer types. The char type doesn’t define its own + operator, so the one for int is used.

The rationale for there being no implicit conversion to string is explained well in the first comment from Eric Lippert in his blog entry on “Why does char convert implicitly to ushort but not vice versa?”:

It was considered in v1.0. The language design notes from June 6th
1999 say “We discussed whether such a conversion should exist, and
decided that it would be odd to provide a third way to do this
conversion. [The language] already supports both c.ToString() and new

(credit to JimmiTh for finding that quote)


char is a value type, meaning it has a numeric value (its UTF-16 code unit). However, it is not considered a numeric type (like int, float, etc.), and therefore the + operator is not defined for char.

The char type can, however, be implicitly converted to the numeric int type. Because the conversion is implicit, the compiler is allowed to make it for you, according to the rules of binary numeric promotion laid out in the C# spec, where int is the first candidate type tried. That makes the + operator valid, so integer addition is the operation performed.
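A minimal illustration of that promotion (variable names are arbitrary):

```csharp
using System;

char r = 'R';   // UTF-16 code unit 82
char g = 'G';   // UTF-16 code unit 71
// Both operands are implicitly converted to int, so + adds their values.
var sum = r + g;
Console.WriteLine(sum);             // 153
Console.WriteLine(sum.GetType());   // System.Int32
```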

To do what you want, start with an empty string:

var pr = "" + 'R' + 'G' + 'B' + 'Y' + 'P';

Unlike the char type, the string type defines an overloaded + operator that accepts an Object, turning the second operand, whatever it is, into a string via ToString() before concatenating it to the first. So no implicit numeric conversion is performed; your pr variable is inferred as a string and holds the concatenation of all the character values.


Because a single char maps to a Unicode code point and can easily be stored as an integer, taking up less space than a single-character string.


From the MSDN:

The value of a Char object is a 16-bit numeric (ordinal) value.

A char is an integral type. It is NOT a character, it is a number!

'a' is just shorthand for a number.

So adding two characters results in a number.

Have a look at this question about adding bytes, it is, although counterintuitive, the same thing.
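The same promotion happens with byte, which a short sketch shows:

```csharp
using System;

byte a = 1;
byte b = 2;
// byte + byte is promoted to int + int, so the result is an int.
var sum = a + b;
Console.WriteLine(sum.GetType());   // System.Int32
// Assigning back to a byte needs an explicit cast:
byte c = (byte)(a + b);
Console.WriteLine(c);   // 3
```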


Another relevant bit of the spec, in section 4.1.5 (Integral Types), which defines char as an integral type:

For the binary + … operators, the operands are converted to type T, where T is the first of int, uint, long and ulong that can fully represent all possible values of both operands.

So for char + char, both operands are converted to int and then added as ints.
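A quick sketch of that promotion, including the cast needed to get a char back (names arbitrary):

```csharp
using System;

char c = 'A';
// Both operands of + undergo binary numeric promotion to int,
// so the result is an int even when only one operand is a char.
var mixed = c + 1;            // 66, inferred as int
char next = (char)(c + 1);    // explicit cast back to char gives 'B'
Console.WriteLine($"{mixed} {next}");   // 66 B
```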


The point is that many C# concepts come from C++ and C.

In these languages a single character constant (like ‘A’) is represented by its ASCII value, and despite what one might expect, its type is not char but int (yes, ‘A’ is an int, the same as writing 65).

Thus, adding all these values is like adding a series of ASCII character codes, i.e.

   var pr= 82 + 71 + 66 + ...;

This was a design decision in C (going back to the 1970s) that C++ later inherited.


From MSDN:

Implicit conversions might occur in many situations, including method
invoking and assignment statements.

A char can be implicitly converted to ushort, int, uint, long, ulong, float, double, or decimal. Thus that assignment operation implicitly converts char to int.
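Each of the conversions listed above can be checked directly; only the reverse direction needs a cast:

```csharp
using System;

char c = 'A';
// All of these compile without a cast, per the implicit conversions above.
ushort u = c;    // 65
int    i = c;    // 65
long   l = c;    // 65
double d = c;    // 65
// The reverse direction always needs an explicit cast:
char back = (char)i;    // 'A'
Console.WriteLine($"{u} {i} {l} {d} {back}");
```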


A char or System.Char is an integral type:

An integral type representing unsigned 16-bit integers with values between 0 and 65535. The set of possible values for the type corresponds to the Unicode character set.

This means that it behaves exactly like a ushort (System.UInt16), and adding chars with the + operator therefore adds their integral values, because the + operator is not overloaded on char.

To concatenate individual chars into a string use StringBuilder.Append(char) or new String(char[]).
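A minimal sketch of both options, reusing the character values from the example above:

```csharp
using System;
using System.Text;

char[] chars = { 'R', 'G', 'B', 'Y', 'P' };

// Option 1: StringBuilder.Append(char) appends without intermediate strings.
var sb = new StringBuilder();
foreach (var c in chars)
    sb.Append(c);
Console.WriteLine(sb.ToString());        // RGBYP

// Option 2: the string(char[]) constructor.
Console.WriteLine(new string(chars));    // RGBYP
```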


As has been said, it is because a char carries its Unicode value as a number, and adding two of them produces an Int32.

If you want to concatenate chars into a string you can do one of the following:

Pass an array of chars to a new string:

var pr = new string(new char[] { 'R', 'G', 'B', 'Y', 'P' });

Use a StringBuilder:

StringBuilder sb = new StringBuilder();
sb.Append('R').Append('G').Append('B').Append('Y').Append('P');
var pr = sb.ToString();

Start off with a string:

var pr = string.Empty + 'R' + 'G' + 'B' + 'Y' + 'P';

Convert each to a string (converting just the 1st one works just as well); note that an explicit cast such as (string)'R' won’t compile, because no conversion from char to string exists, so use ToString():

var pr = 'R'.ToString() + 'G' + 'B' + 'Y' + 'P';


It shouldn’t, because that would be inefficient. To concatenate chars like that, use a StringBuilder. Otherwise each addition would allocate temporary memory to hold the concatenated partial string, which means that in your example four temporary allocations would have to occur.


A Char holds a 16-bit integer value whose textual form is a single character. When you add two of them, you are simply adding ints together. If you want to concatenate chars, you’ll have to convert them to strings.


1) Definition (MSDN):

The char keyword is used to declare a 16-bit character, used to represent most of the known written languages throughout the world.

2) Why does char behave like a numeric type?

A char can be implicitly converted to a numeric type.

A char is closer to an integer than to a string. A string is only a collection of char objects, whereas an integer can represent a char and vice versa.
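That two-way relationship is easy to verify:

```csharp
using System;

int code = 'R';        // implicit char -> int: 82
char ch = (char)82;    // explicit int -> char: 'R'
Console.WriteLine($"{code} {ch}");   // 82 R
```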

3) Examples

You can simply convert the first of your chars to a string, to outwit your compiler:

var pr = 'R'.ToString() + 'G' + 'B' + 'Y' + 'P';

You could also define a char array and then use the string constructor:

char[] letters = { 'R', 'G', 'B','Y', 'P' };
string alphabet = new string(letters);

If you want the text representation of a character by itself, convert it to a string:

 var foo1 = 'F';
 Console.WriteLine(foo1.ToString());


Why is C# designed like this? Shouldn’t the default implementation of
adding two chars result in a string that concatenates the chars, not
an int?

What you expected is not correct with respect to what a string is.
A string is not an addition of chars; a string is a concatenation of, so to speak, “singleton” strings.

So “a” + “b” => “ab”, which is absolutely correct once you take into account that the + operator for strings is overloaded.
And since ‘A’ represents the ASCII value 65 (and ‘B’ is 66), it is totally consistent to say that ‘A’ + ‘B’ is 131.
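A side-by-side sketch (‘A’ is 65 and ‘B’ is 66 in both ASCII and Unicode):

```csharp
using System;

string s = "A" + "B";   // overloaded string +: concatenation
int n = 'A' + 'B';      // integer addition: 65 + 66
Console.WriteLine($"{s} {n}");   // AB 131
```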


Because adding one char to another can exceed the maximum value a char variable can hold, the result of the operation is converted to an int.
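This is easy to see at the extremes (a minimal sketch):

```csharp
using System;

// char.MaxValue is 65535; the sum of two chars can reach 131070,
// which does not fit in a char, so the result type is int.
int sum = char.MaxValue + char.MaxValue;
Console.WriteLine(sum);   // 131070
```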


You are assuming that a char is a string type. The value of a char can be represented by a character value between single quotes, but if it helps, you should consider that to be an abstraction to provide readability, rather than forcing you as the developer to memorize the underlying value. It is, in fact, a numeric value type, so you should not expect any string manipulation functions to be applicable.

As to why char + char = int: widening to Int32 certainly mitigates arithmetic overflow, and the same rule applies to the other small integral types, since short + short is also typed as int.