Brute Forcing Question (Project Euler Spoilers)

I'm not sure if it is in poor taste to ask about Project Euler on these forums. If it is, let me know and I will delete the post.

I literally just picked up programming. I'm still very much learning, but I thought I'd try some Project Euler problems. I managed to solve one of them, and afterwards looked up other solutions. I was happy to see mine in some of the lists, but those lists were labeled "Brute Forcing". I've also been speaking to a software engineer friend, and I've shown him some of my code. He has also mentioned that a lot of my code is very brute force-y.

So far my self-taught lessons have covered if statements, while/for loops, and switch, so that might contribute to my brute-force way of solving problems.

My overall questions are: Are brute force solutions bad? Should I look for more elegant ways to solve problems? Am I limited to these brute force methods because of my current knowledge? Are there any practices/exercises to evolve my brute force solutions into more arithmetic-friendly solutions?

Here is the code in question. Please share a less brute-force method if you have one. Also, is this a "bad" solution?

/*If we list all the natural numbers below 10 that are multiples
of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is
23. Find the sum of all the multiples of 3 or 5 below 1000.*/

#include <iostream>
using namespace std;

int main()
{
	int x = 0; // running total of the multiples found so far
	for (int i = 0; i < 1000; ++i)
	{
		if (i % 3 == 0 || i % 5 == 0)
		{
			x += i;
		}
	}
	cout << x << endl;
}
The only bad solution is the one that doesn't work. The Project Euler questions don't have any time constraints, so any program that gives back the correct result can be considered correct.

Here's a solution to that problem that doesn't need iteration:
#include <iostream>

// Sum of the integers 1 + 2 + ... + n.
int triangular_sum(int n){
    return (n*n + n)/2;
}

// Sum of every positive multiple of modulo that is <= limit.
int sum_of_multiples(int limit, int modulo){
    return triangular_sum(limit / modulo) * modulo;
}

int main(){
    const int limit = 999;
    std::cout <<
        sum_of_multiples(limit, 3) +
        sum_of_multiples(limit, 5) -
        sum_of_multiples(limit, 15) << std::endl;
}
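
In case the closed form isn't obvious: the multiples of 3 below 1000 are 3*1, 3*2, ..., 3*333, so their sum is 3 * (1 + 2 + ... + 333) = 3 * triangular_sum(333) = 166833. The same reasoning gives 99500 for the multiples of 5 and 33165 for the multiples of 15. Adding the first two sums counts every multiple of 15 twice (once as a multiple of 3, once as a multiple of 5), so subtracting the third corrects for the overlap: 166833 + 99500 - 33165 = 233168.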
Thanks for the alternative, helios.

Is the

- sum_of_multiples(limit, 15)

eliminating the numbers that intersect?

If not you might have lost me with this one! Would you mind explaining a bit about how this solution works?
Is the sum_of_multiples(limit, 15) eliminating the numbers that intersect?
Yes.

Would you mind explaining a bit about how this solution works?
Which part in particular is not clear?
My overall questions are: Are brute force solutions bad? Should I look for more elegant ways to solve problems?


Yes and Yes :+)

You might be able to get away with that for the early problems, but it won't fly after that. There are problems that involve large Fibonacci or factorial numbers, so big you can't store them in the largest int; modular arithmetic is your friend for those. The main idea of Project Euler is for you to come up with a cunning plan to do the problem efficiently.
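
As an illustration of the modulus trick, here is a minimal sketch - the function name, the value of n, and the modulus are all invented for the example, not taken from any particular problem:

#include <cstdint>
#include <iostream>

// Compute the n-th Fibonacci number modulo m, reducing at every step so the
// intermediate values stay small even though F(n) itself is astronomically big.
std::uint64_t fibonacci_mod(std::uint64_t n, std::uint64_t m)
{
    std::uint64_t a = 0, b = 1; // F(0) and F(1)
    for (std::uint64_t i = 0; i < n; ++i)
    {
        std::uint64_t next = (a + b) % m;
        a = b;
        b = next;
    }
    return a; // F(n) mod m
}

int main()
{
    // F(1000000) has over 200000 digits, but its last 9 digits
    // are computed here in a fraction of a second.
    std::cout << fibonacci_mod(1000000, 1000000000) << '\n';
}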

Google and Wikipedia are your friends here; there is all sorts of clever theory that will help solve these things. Also, stuff you learn in the earlier problems can help with the later ones.

When you write code, first try to get it working for the initial test data they provide - there are things to observe.

This quote is from the author of GNU grep, the link was originally posted by Moschops:

Mike Haertel wrote:
The key to making programs fast is to make them do practically nothing. ;-)


http://lists.freebsd.org/pipermail/freebsd-current/2010-August/019310.html

As helios noted, speed is not a requirement, but efficiency is, and the two go hand in hand. Lots of the other online problem-solving sites do have time and memory limits.

Good Luck !!
> are brute force solutions bad?

No.

When in doubt, use brute force - Ken Thompson


commentary by ESR on Ken Thompson's quote:
He probably intended this as a ha ha only serious, but the original Unix kernel's preference for simple, robust, and portable algorithms over brittle 'smart' ones does seem to have been a significant factor in the success of that OS.
- http://www.catb.org/jargon/html/B/brute-force.html


Rule 1. You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is.

Rule 2. Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest.

Rule 3. Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy. (Even if n does get big, use Rule 2 first.)

Rule 4. Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures.

Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.

Rule 6. There is no Rule 6.

- Rob Pike, in 'Notes on C Programming'


Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. - Brian Kernighan



> Should I look for more elegant ways to solve problems?

Yes. It is an excellent learning aid.

And then, in real life, write the simplest code that gets the job done - treat performance as a design constraint, rather than as a design goal.
Consider this problem here:

https://projecteuler.net/problem=3

The prime factors of 13195 are 5, 7, 13 and 29.

What is the largest prime factor of the number 600851475143 ?


Do we need to work out all the primes up until 600851475143?

Answer: No.

So this is where you should prefer something more elegant and simple over brute force.

At least start with the idea that factors come in pairs: a small one and a large one, d and N/d. Once the smallest factor d is found, the largest prime factor can be no bigger than N/d. In the example, the smallest factor is 5, so we are already down to 20% of the original number.
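
To make the pairing idea concrete, here is one possible sketch (one approach among several, not the official solution): divide out each factor as soon as it is found, so both the remaining number and the search range keep shrinking.

#include <cstdint>
#include <iostream>

int main()
{
    std::uint64_t n = 600851475143ULL;
    std::uint64_t largest = 1;
    // Any d that divides n here must be prime, because all smaller
    // factors have already been divided out of n.
    for (std::uint64_t d = 2; d * d <= n; ++d)
    {
        while (n % d == 0)
        {
            largest = d;
            n /= d;
        }
    }
    if (n > 1)
        largest = n; // whatever remains is itself prime, and the largest factor
    std::cout << largest << '\n';
}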

For these sorts of problems, brute force will either not work at all, or at least point the way to a simpler solution. There is no reason why one couldn't use brute force on the small example though, just to see what can be learnt. Really, one needs to get the thinking cap on, and set thinking mode to clever :+)

Good algorithms and data structures are important: a bad one might literally take years to execute, while a good one might do the same problem in less than a second. I guess that is why they teach these subjects at university, and it is the whole point of websites like Project Euler.
Isn't prime factorization one of those problems for which no solution asymptotically faster than brute force is known?
Well, I guess the worst case scenario for prime factorization is that the small factor of N is 2 and the other one is some large prime, or maybe sqrt(N) is prime. Either way, one is left with determining whether a large number is prime, which is brute force. At least that is better than the naive idea of working out all the primes, not realising the biggest factor can only be half of N, and that the answer for any given number is very probably a lot less than that.
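
Even that primality check only has to test divisors up to sqrt(N), since any factor above sqrt(N) pairs with one below it - a minimal sketch:

#include <cstdint>
#include <iostream>

// Trial-division primality test. Divisors are only tested up to sqrt(n),
// because a factor larger than sqrt(n) always pairs with one smaller.
bool is_prime(std::uint64_t n)
{
    if (n < 2)
        return false;
    for (std::uint64_t d = 2; d * d <= n; ++d)
        if (n % d == 0)
            return false;
    return true;
}

int main()
{
    // false: 600851475143 = 71 * 839 * 1471 * 6857
    std::cout << std::boolalpha << is_prime(600851475143ULL) << '\n';
}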

But Euler 3 isn't as extreme as that; the purpose is to get the coder to think - to reduce the amount of work to do, not to write naive solutions.

Should we make a distinction between brute force and naive solutions? Is any logic that reduces the problem domain more elegant?

The hardest instances of these problems (for currently known techniques) are semiprimes, the product of two prime numbers. When they are both large, ... and about the same size (but not too close, ... even the fastest prime factorization algorithms on the fastest computers can take enough time to make the search impractical ...
https://en.wikipedia.org/wiki/Integer_factorization
Well, thanks for the reading material guys :o

From what I can gather, brute force isn't awful, but I should be working on alternative methods that stimulate my mind. Learn to see other solutions, basically, and then pick what's best for the situation, I guess. Bah, this is going to be a journey ha ha.

@helios

int triangular_sum(int n){
    return (n*n + n)/2;
}

int sum_of_multiples(int limit, int modulo){
    return triangular_sum(limit / modulo) * modulo;
}

I can see what these functions are doing, but I'm having a hard time understanding how exactly sum_of_multiples is calling triangular_sum. And if that's not what is happening at all, then I'm totally lost!
I'm having a hard time understanding how exactly sum_of_multiples is calling triangular_sum?
What do you mean "how"? triangular_sum(limit / modulo) calls triangular_sum() and passes the value of limit / modulo as the only argument.
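
To trace it with real numbers: with limit = 999 and modulo = 3, limit / modulo is 333 (integer division discards the remainder), so sum_of_multiples() calls triangular_sum(333). That returns (333*333 + 333)/2 = 55611, which is then multiplied by 3 to give 166833, the sum of every multiple of 3 below 1000.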
@Cheech0

There is a detailed explanation here:

http://math.stackexchange.com/questions/9259/find-the-sum-of-all-the-multiples-of-3-or-5-below-1000

From what I can gather, brute force isn't awful, but I should be working on alternative methods that stimulate my mind.


Hopefully we can all agree not to do naive solutions. Any logic or theory one can use to reduce the problem is going to be an improvement.

Just on that, prime numbers are never far away in the Euler landscape. All the time we see beginners using a naive solution of trial division - that is, for each number in the set, seeing if every number before it is a factor. That is terrible when one has to come up with a list of all the prime numbers up to 2 million!

It's a lot easier to start out with all the numbers up to some limit and remove the multiples of 2; the next surviving number is 3, so remove its multiples; next come 5, 7, 11, 13 and so on. That is the Sieve of Eratosthenes, and there are further improvements you can read about on the wiki page - wait for it ...... one of the variants is called Euler's sieve :+)

Next is what container to store the numbers in. One can use an array (or some other container) of true/false values, where the subscript is the number and the value records whether it is prime. The std::bitset is an array of bits rather than integers, so the storage is much smaller.
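
A minimal sketch of that sieve, using std::bitset as suggested (it just counts the primes below 2 million rather than storing a list, to keep the example short):

#include <bitset>
#include <iostream>

const int LIMIT = 2000000;
std::bitset<LIMIT> is_composite; // one bit per number; global, so it is zero-initialised and not on the stack

int main()
{
    int count = 0;
    for (int i = 2; i < LIMIT; ++i)
    {
        if (!is_composite[i]) // i survived every earlier pass, so it is prime
        {
            ++count;
            // Cross off the multiples of i, starting at i*i because the
            // smaller multiples were already removed by smaller primes.
            for (long long j = (long long)i * i; j < LIMIT; j += i)
                is_composite[j] = true;
        }
    }
    std::cout << count << " primes below " << LIMIT << '\n';
}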

One other thing: Are you aware of the tutorial, reference material and articles on this site? There are links at the top left of the page. A lot of people seem to miss that somehow :+)

JLBorges wrote:
The hardest instances of these problems (for currently known techniques) are semiprimes, the product of two prime numbers.


Ok, thanks, I will remember that :+). I should have said "a bad" rather than "the worst" scenario. By "large" I gather that could mean numbers with thousands of digits. So this is now very far from the example I gave earlier.

> numbers with thousands of digits. So this is now very far from the example I gave earlier.

Yes, very far. For instance, the non-brute-force general number field sieve algorithm is the most efficient 'classical' algorithm for factoring numbers this large.
https://en.wikipedia.org/wiki/General_number_field_sieve

As Rob Pike points out,
Rule 3. Fancy algorithms are slow when n is small, and n is usually small. ... don't get fancy ....

Rule 4. Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures.
@helios

Honestly, now that I stare at it, I have no idea why it confused me. My bad! I'm just new to this, I think. A lot of code feels a bit like an ocean's worth of water crashing over me.
Topic archived. No new replies allowed.