is there a simpler way to write an abs function?

First off, the requirement: my integer class has no copy-constructor; instead it's a child of:

template <class T, class B = EmptyClass>
class DeepCopyOption : public B {
public:
	// Deep-copy assignment: destroy dest in place, then rebuild it from src
	// with the tagged deep-copy constructor T(const T&, int).
	friend T& operator<<=(T& dest, const T& src)
	{ if(&dest != &src) { (&dest)->T::~T(); ::new(&dest) T(src, 1); } return dest; }
	// Deep-copy construction into raw storage (placement new).
	friend T& DeepCopyConstruct(void *dest, const T& src)
	{ return *(::new (dest) T(src, 0)); }
	// Deep-copy construction on the heap.
	friend T *DeepCopyNew(const T& src)
	{ return ::new T(src, 0); }
};
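
For concreteness, here is a minimal sketch of what such a derived integer class could look like. It assumes the DeepCopyOption template quoted above (plus its EmptyClass base) is in scope; BigInt and its long* payload are made up for illustration only.

class BigInt : public DeepCopyOption<BigInt> {
public:
    explicit BigInt(long v = 0) : p(new long(v)) {}
    BigInt(BigInt&& src) : p(src.p) { src.p = nullptr; }      // r-value construction: steal the pointer
    BigInt(const BigInt& src, int) : p(new long(*src.p)) {}   // explicit deep copy, selected by the int tag
    BigInt(const BigInt&) = delete;                           // no plain copy-constructor
    BigInt& operator=(const BigInt&) = delete;
    ~BigInt() { delete p; }
    bool operator>(const BigInt& b) const { return *p > *b.p; }
    BigInt operator-() const { return BigInt(-*p); }
private:
    long* p;   // stands in for the real big-number payload
};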


therefore my current implementation is:

#include <type_traits>   // std::is_base_of

// Overload intended for temporaries (r-values); pick is an alias for std::move.
template<typename T> T ABS(const T&& a){
  if(a > T(0)) return pick(a);
  return T(-a);
}
// Generic helper for ordinary, copyable types.
template<typename T, bool deepcopyoption> struct GA{
  static T ABS(const T& a) {
    if(a > 0) return a;
    return -a;
  }
};
// Specialization for DeepCopyOption types: copy explicitly via the tagged constructor T(a, 1).
template<typename T> struct GA<T,true>{
  static T ABS(const T& a) {
    if(a > T(0)) return T(a,1);
    return -a;
  }
};
// Dispatch at compile time on whether T derives from DeepCopyOption<T>.
template<typename T> T ABS(const T& a) {
  return GA<T, std::is_base_of<DeepCopyOption<T>,T>::value>::ABS(a);
}


Any ideas how to improve that? How can I make it more readable? (pick is an alias for std::move.) If my class has r-value construction, this should produce optimal binaries for large numbers occupying millions of bytes, shouldn't it?

of course everything here can be re-used in open-source...
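
For comparison, here is a sketch of the same compile-time dispatch written with tag dispatch instead of a specialized helper struct. It assumes the same DeepCopyOption base and the tagged T(a, 1) deep-copy constructor, and is only an illustration, not a drop-in replacement:

#include <type_traits>

template<typename T>
T ABS_impl(const T& a, std::false_type)   // ordinary, copyable types
{
  if(a > T(0)) return a;
  return -a;
}

template<typename T>
T ABS_impl(const T& a, std::true_type)    // DeepCopyOption types: copy only explicitly, via T(a, 1)
{
  if(a > T(0)) return T(a, 1);
  return -a;
}

template<typename T>
T ABS(const T& a)
{
  return ABS_impl(a, std::is_base_of<DeepCopyOption<T>, T>{});
}

With C++17 the two ABS_impl overloads could collapse into a single function using if constexpr.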
Sorry, I don't understand why you need a whole class hierarchy to wrap a numeric type. What's wrong with std::abs?
Why are you overloading the left-shift operator to do something that is totally different from left-shifting?

And placement new everywhere? What if these objects are created on the stack? That'll likely crash your program.

Seems like you're trying to be too clever for your own good.
The numeric type is a wrapper for gmp_mpz, and I plan to implement another class which stores the gmp_mpz struct on disc instead of in memory. Of course, where this comes from there is also an implementation of vector and sorted list and so on, also derived from DeepCopyOption. All such classes have the property that there is no copy-constructor, only r-value construction, which basically swaps pointers around instead of copying whole data sets.

The lack of a copy-constructor is there to raise an error whenever the programmer is too lazy to distinguish between copying and passing by reference. In other words, I can throw third-party source code at it and will be informed of every copy-constructor call. Naturally this means std::abs fails, because it assumes the object is copy-constructible. My code would have been easier to write if I could attach an attribute of my own choosing to a class and then specialize a template so that it accepts only objects carrying that attribute.

As for creating objects on the stack: it is correct that errors occur there, so there has to be another version of the number class for storage on the stack. The placement-new is there to speed up storage in vectors, i.e. on the heap.

And I disagree that what I'm doing is not a left-shift: in the same way that cout and streams in general talk about shifting data to the left, my <<= operator shifts the data of a big integer from the right side of the equals sign into the object on the left side...
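
To illustrate the behaviour described above, reusing the hypothetical BigInt sketch from further up (std::move plays the role of pick here):

#include <utility>

int main()
{
    BigInt a(5);
    BigInt b(std::move(a));   // fine: r-value construction, the internal pointer is stolen
    BigInt c(7);
    c <<= b;                  // fine: explicit deep copy via operator<<= and BigInt(const BigInt&, int)
    // BigInt d(b);           // refuses to compile: there is no copy-constructor, so the lazy copy is caught
}
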
Why does DeepCopyWhatever exist in the first place?

piotr5 wrote:
I plan to implement another class which stores the gmp_mpz struct on disc instead of using memory
This is not feasible, as even a memory-mapped file consumes memory.
> This is not feasible

This is certainly feasible.

> as even a memory-mapped file consumes memory.

One does not have to mmap an entire file, every time.

This technique is often used for
a. supporting automagical, efficient persistence
b. sharing sequences across process boundaries

This might prove interesting: http://www.boost.org/doc/libs/1_59_0/doc/html/interprocess/allocators_containers.html
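
For what it's worth, a minimal sketch of mapping only a slice of a file with Boost.Interprocess; the file name, offset and window size are made up, and the offset has to be a multiple of the OS page granularity:

#include <boost/interprocess/file_mapping.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstddef>

int main()
{
    using namespace boost::interprocess;

    file_mapping file("huge_numbers.bin", read_only);   // hypothetical data file
    mapped_region slice(file, read_only,
                        4096,        // offset into the file (page-aligned)
                        1 << 20);    // map only a 1 MiB window, not the whole file
    const unsigned char* p = static_cast<const unsigned char*>(slice.get_address());
    std::size_t n = slice.get_size();
    // work with p[0] .. p[n-1]; the rest of the file never enters the address space
    (void)p; (void)n;
}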
Have you ever heard of r-values and what they can do in terms of evading garbage collection? Take a look at http://thbecker.net/articles/rvalue_references/section_04.html for how r-value references can speed up sorting or any other swap-intensive activity.

Now suppose that instead of swapping, I just destroy the old value and replace it with the new one. That is a quick way of copying objects which were temporary (r-values) in the first place. The downside is that the compiler cannot do this automatically. When I exit a function with a return statement, many objects lose scope and get destroyed; the same strategy can be used there to save time on supplying a return value, since I know which objects are being destroyed. Of course, if I return the object about to be destroyed directly, the compiler can do some optimization. But if I return a vector containing such an object, it is fine to destroy the old value while writing it into the vector: it's store and forget. If you created a copy for the vector instead, your heap would end up with a huge hole where the temporary had been stored. So you save time and get less fragmented memory at the same time.

DeepCopyOption just makes this easier: a small fix that lets the compiler distinguish between ordinary objects and objects which can be picked up this way and which therefore have no copy-constructor, objects for which copying is an optional feature you request when you need it rather than the default behaviour. I've heard people complain that C++, unlike Rust, defaults to const T& instead of T&&; the workaround for that shortcoming is simply never to provide a const T& overload, forcing the compiler to throw an error whenever it would turn an r-value into an l-value.
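
A small sketch of the "store and forget" idea with a standard, move-aware element type (std::string standing in for a big temporary; the sizes are arbitrary):

#include <string>
#include <utility>
#include <vector>

std::vector<std::string> make_rows()
{
    std::vector<std::string> rows;
    for (int i = 0; i < 3; ++i) {
        std::string tmp(1000000, 'x');   // large temporary buffer
        rows.push_back(std::move(tmp));  // store and forget: the vector adopts the buffer,
                                         // nothing is copied and no abandoned heap block is left behind
    }                                    // tmp dies here as an empty husk
    return rows;                         // returned by move / NRVO, again without a deep copy
}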

Disc-stored numbers are not feasible because discs are slow, and because memory chips are slowly catching up in size with mechanical disc drives. But as far as memory goes, the only limit is addressability. And of course GMP isn't made for numbers bigger than memory, stored on disc. But imagine several terabytes of numbers in some matrices and doing some linear algebra with that! It would take days to run a calculation, but it's a good test for future computers where the terabytes are RAM size and not just disc size...
@nayonrony: I'm not sure why you copypasted JLBorges' post, but I did see it and it is clear to me that I did not know what I was talking about at the time. No need to repeat what has already been said and understood.