I spent about twenty minutes fixing the value conversion order.
Here are the conversion rules:
- An integer meets a decimal -> decimal
- A lower precedence level meets a higher precedence level -> switch to the higher level
The precedence levels, from lowest to highest:
- bool < char < short < long long < float < double
Wow, that's quite interesting! The program automatically adjusts the main precedence as needed. You can change the default value definition, and also the main conversion precedence, by using the type_cast feature. With this feature I can now write:
// '0' = 48 (ASCII)
'0' + '0' + '0' + '0' // = 192
What happens when this expression is evaluated? Well, it gives -64; if the main value were unsigned, it would be 192. So the value needs to be converted, using type_cast. If you want the interpreter to give the correct value, here is one solution:
[int]'0' + '0' + '0' + '0'
Other examples :
false - true
Certainly the exact value is "-1", but the actual result is "1", simply because boolean values can only hold "0" or "1".
[char]10 + 1000 / [char]6
The type of 1000 is unknown, so the parser automatically loads it as an integer (if it were loaded as a char, it would cause an "Out of range" error). This raises the main conversion precedence up to the int level, so the result is generally unaffected, except that decimal data is truncated. The result is 176.
10 + [double]1000 / 6
10, 1000, and 6 are all integers, but 1000 carries a double attribute. Because integers have lower precedence than decimals, the precedence switches to decimal, and the value is more accurate: 176.66666...
[int]10.24 + [float]1000 / [char]6.49f
Same value, 176.666..., but it's a float, not a double. So when I pass it to a function it only takes 4 bytes, rather than the 8 bytes a double takes...
In my opinion this is very important, because it certainly helps a lot with function calls and correctness... :)
One remaining question: if an unsigned value meets a signed value, what will happen?