Sequence Points - Order in which functions are called

I'm a little confused. I was reading this article here:

http://stackoverflow.com/questions/3457967/what-belongs-in-an-educational-tool-to-demonstrate-the-unwarranted-assumptions-pe/3458842#3458842

And I wanted to test out what the guy was saying, so I wrote the following code and compiled it with VS 2015.

#include <cstdio>   // for printf, which returns the number of characters printed
#include <iostream>
using namespace std;

int hel() {
	return printf("Yoohoo");
}

int lo() {
	return printf(" I'm");
}

int wor() {
	return printf(" Over");
}

int ld() {
	return printf(" here!\n");
}



int main() {
	
	int j = hel() + lo() + wor() + ld(); // sum of the four printf return values

	cout << j;

	char wait;
	cin >> wait;
	return 0;
}


I figured I should get something weird, but actually I got exactly what was neat and intuitive, i.e.

Yoohoo I'm Over here!
22

(The 22 is j: printf returns the number of characters printed, so j = 6 + 4 + 5 + 7 = 22.)


Why am I not getting the same undefined behavior? Was this a problem before C++11 that has since been fixed?

You are getting undefined behavior; the fact that, for example, hel() happened to be evaluated before lo(), just as you were expecting, doesn't mean the behavior is suddenly defined.
What exactly does undefined behavior mean?

Does it mean that it's up to the compiler to decide what happens, or that no one knows what the hell is going to happen?

Because I guess VS 2015 might make it so the functions get called from left to right.

I don't know, though. This is just my speculation.
There is no undefined behaviour here.
The function calls in hel() + lo() + wor() + ld() are indeterminately sequenced - they may be evaluated in any arbitrary order, but the function executions would not interleave with each other.
Variation on this code,
#include <iostream>
#include <string>
#include <sstream>

// writes txt to os as a side effect, then returns txt unchanged
std::string fn(std::ostream & os, std::string txt)
{
    os << txt;
    return txt;
}

int main()
{
    std::ostringstream oss;
    // the order in which the pieces appear in oss reveals the order
    // in which the four calls were evaluated
    std::string result = fn(oss, "Hel") + fn(oss, "lo ") + fn(oss, "Wor") + fn(oss, "ld!");

    std::cout << "oss:    " << oss.str() << '\n';
    std::cout << "result: " << result    << '\n';
}

My output:
oss:    ld!Worlo Hel
result: Hello World!
Thank you Chervil. That makes a lot more sense.

JL, from what I've read online, that seems to be right. I understand that there is an order of evaluation, but why would something be evaluated in a different order than it is executed?
The evaluation of an expression with a function call operator, e.g. F( A1, A2 ), involves:

a. Evaluate F (say to get fresult)
b. Evaluate A1, A2 (say to get arg1, arg2)
c. Evaluate fresult( arg1, arg2 )

Step c. is the function execution.

Note:
Before C++17, the evaluations of F, A1 and A2 are unsequenced with respect to each other.
Since C++17, the evaluation of F is sequenced before the evaluations of A1 and A2; the evaluations of A1 and A2 are indeterminately sequenced with respect to each other.

using fn_type = int ( int ) ;
fn_type* fn_pointers[5] = { /* .... */ } ; // array of pointers to functions
int i = 2 ;
fn_pointers[i]( ++i ) ; // undefined behaviour prior to C++17 
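
A self-contained version of that snippet, as a sketch - the stand-in function f and the filled-in array initialisers are hypothetical, just there to make it compile:

#include <cstdio>

int f( int arg ) { return std::printf( "called with %d\n", arg ) ; } // hypothetical stand-in

using fn_type = int ( int ) ;
fn_type* fn_pointers[5] = { f, f, f, f, f } ; // array of pointers to functions

int main()
{
    int i = 2 ;
    // prior to C++17, the evaluations of fn_pointers[i] and ++i were
    // unsequenced with respect to each other: undefined behaviour
    // since C++17, fn_pointers[i] is evaluated first (with i == 2) and
    // ++i after it, so the call made is fn_pointers[2](3)
    fn_pointers[i]( ++i ) ;
}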
I wonder if these rules about evaluation are related to (or caused by) the compiler's probable use of the Abstract Syntax Tree:
https://en.wikipedia.org/wiki/Abstract_syntax_tree

If a variable is subject to its value being changed more than once in a sub-expression, that would necessitate having limitations - possibly because the evaluation is carried out from the leaf nodes moving up the tree? I define a sub-expression as being part of a full expression ended with a semicolon.
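
For example, here is a little sketch of my own of what I mean by a variable being changed more than once (the first statement is left commented out because it is undefined behaviour in every standard - the operands of + are unsequenced even in C++17):

#include <iostream>

int main()
{
    int i = 0;
    // i = ++i + i++; // two unsequenced modifications of i: undefined behaviour

    // undefined before C++17; well-defined since C++17, because the whole
    // right-hand side (including the side effect of i++) is sequenced
    // before the assignment
    i = i++ + 1;
    std::cout << i << '\n'; // prints 1 when compiled as C++17
}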

This is all a guess on my part - I may not have quite the right wording to express my idea.

In Chervil's code, I notice the oss output shows the calls were evaluated right to left. I wonder if that is because of the use of an AST, or is something else going on there?

Could I also observe that all the rules C++ has about various (sometimes seemingly simple) things are there out of necessity?
> I wonder if these rules about evaluation are related to (or caused by) the compiler's probable use of the Abstract Syntax Tree.

Rules about expression evaluation (and in general undefined behaviour) are there to allow implementations a large amount of leeway in their attempt to generate as efficient code as possible.

Undefined behavior exists in C-based languages because the designers of C wanted it to be an extremely efficient low-level programming language. In contrast, languages like Java (and many other 'safe' languages) have eschewed undefined behavior because they want safe and reproducible behavior across implementations, and are willing to sacrifice performance to get it.
LLVM Blog - http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html


P0145R3 (C++17):
The current rules have been in effect for more than three decades. So, why change them now?
...
The changes suggested below are conservative, pragmatic, with one overriding guiding principle: effective support for idiomatic C++. In particular, when choosing between several alternatives, we look for what will provide better support for existing idioms, what will nurture and sustain new programming techniques. Considerations such as how an expression is internally elaborated (e.g. function call), while important, are secondary. The primary focus is on what the programmer reads and writes, in particular in generic codes, not what the compiler internally does according to fairly arcane rules.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0145r3.pdf
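
As a minimal sketch of my own (not an example from the paper), here is an idiomatic expression that P0145 turned from undefined into well-defined:

#include <iostream>

int main()
{
    int i = 0 ;
    // before C++17, the read of i and the side effect of i++ were
    // unsequenced: undefined behaviour
    // since C++17, the operands of << are evaluated left to right,
    // so this reliably prints 0 0
    std::cout << i << ' ' << i++ << '\n' ;
}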
@JLBorges

Thanks for your always interesting replies :+)

Perhaps what I was trying to say was:

Do many rules exist to disallow things which are logically wrong, or for which there is some other good reason why they shouldn't be allowed? Some of the reasons may or may not be obvious.

For example Cubbi answered a question the other day about initialisation of static variables in a class: http://www.cplusplus.com/forum/beginner/195361/#msg939285

Other things may not be so obvious - I was thinking evaluation by the AST might be one of those, but there are probably lots of other examples.

I am not trying to counter what you have mentioned, just that I guess there are, firstly, rules dictated by sheer logic, and then rules which give compilers a great deal of leeway to allow for the many optimisations mentioned in the articles you linked. My main point is that the underlying reasons for the logic-related rules may not be so obvious. Is that a reasonable way of looking at it?