Language Choices

IMO it all depends on the situation. If you're designing software that does need efficiency (kernels, simulations, etc.) then by all means take efficiency over anything else. Otherwise, use whatever gets the job done in the timeliest and most bug-free manner.
In other words, always use Haskell.
I'd take efficiency over usability any day
That's fine for hobbyists, but for those of us who intend to make a career out of programming I don't think it's a good mindset, at least from a business perspective. I should also note that I probably shouldn't have said "efficiency", because it suggests C# is inefficient, which it is not (speed != efficiency).
The efficiency of the final product usually has little to do with the language it was written in. E.g. Apache Cassandra, written in Java, beats MySQL/PostgreSQL/Oracle in efficiency, and those were all written in C. The same goes for the Netty web server vs. most C-based web servers (actually the only one that can compete is nginx).

The key aspects of writing efficient software are good architecture and algorithms.
As for being close to the metal and fine-tuning: what about writing massively parallel lockless code in assembly, C, or C++? Are you sure you can do that for a complex problem without great support from the standard library and a GC? Writing lockless code without GC becomes extremely hairy very quickly, even for very simple problems like lockless linked lists.
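To make the "extremely hairy" part concrete, here is a minimal sketch of a Treiber-style lock-free stack (the class and member names are mine, not from the thread). Push is the easy half; pop is exactly where the lack of a GC bites, because you cannot safely `delete` a popped node while another thread may still be reading it.

```cpp
#include <atomic>
#include <cassert>

// Sketch of a lock-free (Treiber) stack. Without a GC, pop() has the
// classic reclamation problem: freeing a node that a concurrent pop()
// may still be dereferencing. Real code needs hazard pointers or
// epoch-based reclamation; a GC sidesteps all of it.
struct Node {
    int value;
    Node* next;
};

struct LockFreeStack {
    std::atomic<Node*> head{nullptr};

    void push(int v) {
        Node* n = new Node{v, head.load(std::memory_order_relaxed)};
        // CAS loop: retry until head swings from n->next to n.
        while (!head.compare_exchange_weak(n->next, n,
                                           std::memory_order_release,
                                           std::memory_order_relaxed)) {}
    }

    bool pop(int& out) {
        Node* n = head.load(std::memory_order_acquire);
        while (n && !head.compare_exchange_weak(n, n->next,
                                                std::memory_order_acquire,
                                                std::memory_order_relaxed)) {}
        if (!n) return false;
        out = n->value;
        // Unsafe in general: another thread may still hold 'n'.
        // Fine only in this single-threaded sketch.
        delete n;
        return true;
    }
};
```

Note how even this toy version has to punt on memory reclamation; that is the hairiness being referred to.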

closed account (zb0S216C)
@naraku9333: I am a hobbyist programmer and nothing more. Also, I'm aware that efficiency isn't about speed.

rapidcoder wrote:
"what about writing massively-parallel lockless code in assembly, C or C++? Are you sure you can do this for a complex problem without having a great support in standard library and GC?"

I don't use garbage collection; I detest it, in fact. As for the standard library, I hardly use it -- "std::cout" being the part of it I use most often.

Besides, if complex code is greatly simplified by the language's built-in mechanisms, then there's less of a challenge to tackle -- where's the fun in that? I also enjoy implementing complex structures myself; I've learnt so much in doing so.

Wazzak
Sure, as a hobby you can do it, but it is not the way professionals write efficient software.


I don't use garbage collection; I detest it in fact.


Well, you are allowed to have an opinion on that, but the facts are:
- CPU cores are not getting faster any more, at least not at the pace they were before 2005.
- Sequential memory access is getting faster, but random access is not.
- Memory is getting non-uniform, and sharing memory between cores is getting extremely expensive.

Hence:
- High performance means scaling to hundreds of cores in the near future.
- The only ways to scale well are either hardware-supported STM or avoiding shared mutable state.
- Hardware-supported STM is still a research topic and not going to happen in the next 5 years. So I guess avoiding shared mutable state is the only way to go.
- Avoiding shared mutable state = functional programming (as done in Haskell, Scala, or Erlang, not just lambdas stacked on top of an imperative language).
- Functional programming without GC? You must be crazy.
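For what it's worth, "avoiding shared mutable state" can be sketched even in plain C++: below, each task reduces its own half of an immutable input and the only communication is through the futures' return values -- no locks, no shared writes. (A toy example of mine, not from the thread.)

```cpp
#include <future>
#include <numeric>
#include <vector>

// Parallel sum with no shared mutable state: the input is read-only,
// each task owns its own range, and results flow back only via
// future::get(). Nothing here needs a lock.
long long parallel_sum(const std::vector<int>& data) {
    auto mid = data.begin() + static_cast<long>(data.size() / 2);
    auto lo = std::async(std::launch::async, [&data, mid] {
        return std::accumulate(data.begin(), mid, 0LL);
    });
    auto hi = std::async(std::launch::async, [&data, mid] {
        return std::accumulate(mid, data.end(), 0LL);
    });
    return lo.get() + hi.get();
}
```

The functional languages named above make this style the default rather than a discipline you must enforce by hand.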

http://fpcomplete.com/the-downfall-of-imperative-programming/

Actually, the guys who thought they could go with locks and threads and "classical" imperative C++ style wrote MongoDB. It is a lock on a lock on a lock. Guess what? It doesn't scale.
- Functional programming without GC? You must be crazy.

I'd wager C++ will be around, with as poor a support for FP as its support for OOP, for at least as long as Stroustrup will.
C++ may be a convoluted mess, but it was my first convoluted mess and I can't help but love it. </3
closed account (zb0S216C)
rapidcoder wrote:
"Functional programming without GC? You must be crazy."

I think you meant: "Functional programming without GC? You must be sensible."

Wazzak
@Framework
No, he definitely meant crazy.
Well, we've got three kinds of smart pointers now, not counting the deprecated one. They should be enough to approximate any garbage collector. I wonder how std::shared_ptr is implemented.
Catfish2 wrote:
I wonder how std::shared_ptr is implemented.

There's an implementation note on how it's usually done at http://en.cppreference.com/w/cpp/memory/shared_ptr#Implementation_notes
@ Cubbi: thanks for that link.
So it works by counting owners, which makes it a destructor-driven, "deterministic" garbage collector. And the cycle problem is solved by std::weak_ptr.
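A small sketch of the cycle problem and the weak_ptr fix just described (the `Node` class and its leak counter are mine): two nodes that own each other through shared_ptr never reach a reference count of zero, but making the back-link a weak_ptr breaks the cycle so both destructors run.

```cpp
#include <memory>

// Demonstrates breaking a shared_ptr cycle with std::weak_ptr.
// 'alive' is a crude leak detector: it should return to 0 once
// both nodes are destroyed.
struct Node {
    static int alive;
    std::shared_ptr<Node> next;  // owning link
    std::weak_ptr<Node> prev;    // non-owning back-link breaks the cycle
    Node()  { ++alive; }
    ~Node() { --alive; }
};
int Node::alive = 0;

void make_pair_of_nodes() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;   // a owns b
    b->prev = a;   // b merely observes a
}  // both destroyed here; had prev been a shared_ptr, both would leak
```

Had `prev` also been a shared_ptr, each node would keep the other's count at 1 forever and `alive` would stay at 2.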
Yes, as with everything in C++, you get maximum control and are still allowed to shoot yourself in the foot if you really want to. I like C++ because of its fine-tuned control :)

They should be enough to approximate any garbage collector


1. They don't scale to multicore: assigning a pointer requires an interlocked increment/decrement, which is extremely costly on modern multiprocessors compared to the less-than-one-CPU-cycle pointer assignment in GC'ed languages.
2. They don't solve cycles. Weak pointers are not a proper solution -- they turn one problem into another (now you have to check whether the pointer is still valid on every access, which is slow and error-prone). Weak pointers are for writing memory-friendly caches, not for solving cycles.
3. They don't solve the memory fragmentation problem. Once allocated, memory usually can't be given back to the OS (if you don't believe it: open any browser, check its memory usage, open 50 tabs, close those 50 tabs, and check the memory usage again -- I bet it's much higher than at startup).
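Point 2 above is easy to show in code: every access through a weak_ptr must first be promoted with `lock()` and checked, because the pointee may already be gone. A minimal sketch (the helper function is mine):

```cpp
#include <memory>

// Reading through a weak_ptr: lock() atomically yields a shared_ptr
// if the object is still alive, or an empty one if it has been
// destroyed -- so every access needs this check.
int read_or_default(const std::weak_ptr<int>& w, int fallback) {
    if (auto p = w.lock())  // bumps the strong count while we read
        return *p;
    return fallback;        // the pointee is already gone
}
```

That per-access `lock()` (another interlocked increment) and the mandatory fallback path are exactly the "slow and error-prone" overhead being complained about.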

