Renting really powerful CPU power with only 2 GB RAM, for a week: what's the cheapest solution?

Hello everyone!

I need really, really powerful computing power, but the thing is that I don't need much RAM. Around 800 MB is enough.

As for compute power, well, the more the better, but I only need to rent it for a week at most.

Can anyone suggest anything?
Thank you!

Edit: take into account that 4 cores at 3.1 GHz is not enough (that's my home PC).

Edit2: Location does not matter.
Most compute nodes for rent do not have much higher single-thread performance than you already have. For example, Amazon's EC2 offers Xeons at 3.3 GHz. Such is the nature of the end of Moore's Law.

What kind of workload are we talking about? Is it highly parallelizable? If so, you could distribute it among several instances. Of course, the more instances, the more expensive.
It is very highly parallelizable. It is a genetic neural networks project.
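
For what it's worth, since fitness evaluations in a genetic algorithm are independent of each other, the same code can soak up however many cores you end up renting. Here is a minimal sketch of that kind of per-core parallelism with std::thread; Genome and evaluateFitness are made-up placeholders standing in for whatever your project actually uses:

    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Placeholder genome: substitute whatever your networks actually look like.
    struct Genome { std::vector<double> weights; };

    // Stand-in fitness function; the real project would run the network over
    // the training data here. This dummy just sums squared weights.
    double evaluateFitness(const Genome& g)
    {
        double sum = 0.0;
        for (double w : g.weights) sum += w * w;
        return sum;
    }

    // Evaluate the whole population in parallel, one worker per hardware thread.
    void evaluatePopulation(const std::vector<Genome>& population,
                            std::vector<double>& fitness)
    {
        const std::size_t n = population.size();
        fitness.assign(n, 0.0);
        const unsigned workers = std::max(1u, std::thread::hardware_concurrency());

        std::vector<std::thread> threads;
        for (unsigned w = 0; w < workers; ++w) {
            threads.emplace_back([&population, &fitness, n, workers, w] {
                // Worker w handles indices w, w + workers, w + 2*workers, ...
                // No locking needed: each index is written by exactly one worker.
                for (std::size_t i = w; i < n; i += workers)
                    fitness[i] = evaluateFitness(population[i]);
            });
        }
        for (auto& t : threads) t.join();
    }

Across several rented instances the split works the same way, just coarser: each machine gets a slice of the population, evaluates it, and reports the fitness scores back.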

Right now the best solution I've found:
https://www.ovh.co.uk/dedicated_servers/one-week-rental.xml

My allowance allows it, but the project's budget is very tight, and saving money on CPU power for training those networks would be just amazing.

The MG-256: 256 GB RAM, a 1 x 1 Gbps network, and a ridiculous amount of disk space.
I only need to upload ~1 GB of training data, and after that the network is not needed at all.

I was hoping someone might offer the kind of solution that fits my needs just right, but I do understand that computing power is quite expensive, so I don't have much hope of saving.

I have heard about dedicated servers but never came across solutions that are just right for offline simulations with small RAM needs.

Don't look at dedicated servers. Those packages are intended for people who want to run, guess what, servers. You want to run a computation.

If you want it to be as cheap as possible, why don't you just run it on your own machine? How long could it take to run like that?
> Don't look at dedicated servers. Those packages are intended for people who want to run, guess
> what, servers. You want to run a computation.
That was my only solution.

If the estimated time is a month and I get 4 times more CPU power, then it would only take about a week instead of a month (30 days / 4 ≈ 7.5 days, assuming the workload scales that cleanly).

Thank you so much !
That's pretty cool actually. It has some pretty flexible options there too. Still a bit expensive though... one week is 24 * 7 = 168 hours times the hourly cost. For instance, a mid-level instance would come to about $88 for one week.

On top of that, the pricing is linear: if you double the resources, you double the price, and the same holds in reverse. So if your application scales linearly with resources, you can halve your calculation time by getting a bigger instance while spending around the same in total.
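
To make that concrete, here's the same reasoning as a tiny program. The hourly rates are invented for illustration (roughly what $88/week works out to); the point is only that under linear pricing, rate times hours stays constant when doubling the resources halves the runtime:

    #include <cstdio>

    int main()
    {
        // Hypothetical hourly rates; real prices depend on the provider.
        const double smallRate = 0.52;            // $/hour for the baseline instance
        const double bigRate   = 2.0 * smallRate; // double the resources

        const double smallHours = 24 * 7;         // a full week: 168 hours
        const double bigHours   = smallHours / 2; // assuming linear scaling

        std::printf("small instance: $%.2f\n", smallRate * smallHours); // $87.36
        std::printf("big instance:   $%.2f\n", bigRate * bigHours);     // $87.36
        return 0;
    }

So if the workload really does scale linearly, the bigger instance costs the same overall and just finishes sooner.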

Seems like a pretty good deal for a one-off run of a calculation. I might try it sometime.
To be fair, 168 hours is a lot of CPU time. The longest process I've had to run, a computer vision algorithm applied over a dataset of 150k images, only took around 12 hours of GPU time.
Use a Raspberry Pi 2 or 3, and get 4 of them. They are 30 or 40 dollars each: fast little 2.0 GHz Linux computers. They are also supercomputers.
Raspberry Pis use 1.2 GHz, not 2 GHz, CPUs.
I doubt even 4 Pis can outrun OP's computer. Very small CPU cache + low CPU frequency + low RAM frequency = lousy compute performance. He'd probably be better off assembling a small cluster from old used computers.