Name of this operation?

Let S1 and S2 be two sets of T. Let D = f(S1, S2) be a set of something else such that g(S1, D) = S2 and g(S2, h(D)) = S1. That is, D encodes the differences, or deltas, between the two sets.

For example,
S1 = {1, 3, 4, 5, 7}
S2 = {2, 4, 6, 8}
D = f(S1, S2) = {(F, {1, 3, 5, 7}), (T, {2, 6, 8})}
h(D) = {(T, {1, 3, 5, 7}), (F, {2, 6, 8})}
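
Reading the F component as "elements to remove" and the T component as "elements to add" (which is how I intend it), applying the delta works out as:
g(S1, D) = (S1 \ {1, 3, 5, 7}) ∪ {2, 6, 8} = {4} ∪ {2, 6, 8} = {2, 4, 6, 8} = S2
g(S2, h(D)) = (S2 \ {2, 6, 8}) ∪ {1, 3, 5, 7} = {4} ∪ {1, 3, 5, 7} = {1, 3, 4, 5, 7} = S1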

Note that this follows the intuitive algebraic properties of addition and subtraction:
S2 - S1 = D

f(S1, S2) = D

S2 + -D - S1 = 0
S2 + -(S2 - S1) - S1 = 0
S2 - S2 + S1 - S1 = 0
0 = 0

f(S1, g(S2, h(D))) = {}
f(S1, S1) = {}
{} = {}

Note however that
S2 + -D != S2 - D
closed account (48bpfSEw)
In German this is called "Mengenalgebra". A "Menge" is a "set"; Google Translate renders it as "quantitative algebra", literally "set algebra".


I suppose your next question is "how...":
Here are some answers:

http://arma.sourceforge.net/
http://eigen.tuxfamily.org/index.php?title=Main_Page


http://arma.sourceforge.net/docs.html#operators
Examples:

    mat A = randu<mat>(5,10);
    mat B = randu<mat>(5,10);
    mat C = randu<mat>(10,5);

    mat P = A + B;
    mat Q = A - B;
    mat R = -B;
    mat S = A / 123.0;
    mat T = A % B;
    mat U = A * C;

    // V is constructed without temporaries
    mat V = A + B + A + B;

    imat AA = "1 2 3; 4 5 6; 7 8 9;";
    imat BB = "3 2 1; 6 5 4; 9 8 7;";

    // compare elements
    umat ZZ = (AA >= BB);
Wikipedia says this is the equivalent page: https://en.wikipedia.org/wiki/Field_of_sets
I don't think this is it. The operations I have defined fail both the commutativity and associativity field axioms. + is not even defined if both its operands are in the same set.

Armadillo and Eigen are linear algebra libraries, so that's pretty disconnected from algebras over sets.
The "how" for what I propose is really easy, anyway. Represent the sets as sorted sequences and do a merge, but instead of merging record the differences.
I don't really understand your notation:

In this notation:
{(F, {1, 3, 5, 7}), (T, {2, 6, 8})}
what do F and T stand for? What do the parentheses stand for? I take it the curly braces stand for sets (unordered collections of elements). Parentheses usually stand for ordered sets/lists.


What is the relationship between your operation and set subtraction? What is the relationship between your operation and set symmetric difference?

For two sets A and B, the set A-B is by definition the set whose elements are those contained in A but not contained in B.

For the sets A={1,2} and B={2,3}, A-B is the set {1} and B-A is the set {3}.

There is also the symmetric difference of two sets, defined as:

A<->B = the set of elements of A not contained in B, together with the elements of B not contained in A.

In this way, A<->B = (A ∪ B) - (A ∩ B).

For the sets A={1,2} and B={2,3}, A<->B equals {1,3}.
what do F and T stand for? What do the parentheses stand for? I take it the curly braces stand for sets (unordered collections of elements). Parentheses usually stand for ordered sets/lists.
The object belongs to {({T, F}, P(N))}, that is, the set of pairs of booleans and sets of naturals.
<edit>Note that f() allows for alternative definitions, such as returning N^2, or {({T, F}, N)}</edit>

I'm well-aware of the operations of the algebra of sets. Union, intersection, complement, symmetric and asymmetric difference. All of these operations give results that are in the same powersets as their operands.
Consider bit-twiddling in C/C++. You have &, |, ^, ~. You can do x & y, x ^ y, or even x & ~y, but no operation $ and integer z exists such that x $ z = y. However, there exists a pair of integers (a, b) such that (x & a) | b = y. Therefore:

g_bitwise_over_integers(x, (a, b)) = (x & a) | b
f_bitwise_over_integers(x, y) = (x & ~y, y & ~x)
h_bitwise_over_integers((a,b)) = (b, a)


I knew it could generate confusion with asymmetric set difference, but I only used - to show how it resembles ordinary subtraction, algebraically speaking.
B - A = C
A + C = B
B + -C = A
If you want to draw a parallel with another actual subtraction, you could compare it to subtraction of polynomials. Using the same example as in my first post,
P1 = x + x^3 + x^4 + x^5 + x^7
P2 = x^2 + x^4 + x^6 + x^8
D = f_over_polynomials(P1, P2) = P2 - P1 = (-x - x^3 - x^5 - x^7) + (x^2 + x^6 + x^8)
P1 + D = P2
P2 + -D = P1
closed account (48bpfSEw)
I apologize. It was dark and I had no light... ^^
As compensation, here is a first step towards implementing a set and its + operator:


http://www.cplusplus.com/reference/set/set/

S1 = {1, 3, 4, 5, 7} : implementation with std::set<int>
S2 = {2, 4, 6, 8} :
SS = {S1, S2} : implementation as std::vector<std::set<int> >

#include <iostream>
#include <set>

// + Operator as normal function
std::set<int> Add (std::set<int> &s1, std::set<int> &s2) {
  std::set<int> SR; // result

  SR = s1;
  
  for (std::set<int>::iterator it  = s2.begin();
                               it != s2.end(); ++it) {
    if (SR.find(*it) == SR.end())
      SR.insert(*it);
  }

  return SR;
}

// << Operator as normal function
void Print (std::set<int> &s1) {
  
  for (std::set<int>::iterator it  = s1.begin();
                               it != s1.end(); ++it) {
    std::cout << *it << " ";
  }
}

int main ()
{
  std::set<int> S1;
  std::set<int> S2;
 
  S1.insert(1);
  S1.insert(3);
  S1.insert(4);
  S1.insert(5);
  S1.insert(7);

  S2.insert(2);
  S2.insert(4);
  S2.insert(6);
  S2.insert(8);

  
  std::set<int> SR = Add(S1,S2); 
  Print (SR);
  
  return 0;
}

I didn't ask for an implementation. I already know how to implement it; it's fairly trivial.

S1 = {1, 3, 4, 5, 7} : implementation with std::set<int>
S2 = {2, 4, 6, 8} :
SS = {S1, S2} : implementation as std::vector<std::set<int> >
Incorrect. This is not the operation I described.
Incidentally, using std::set to implement mathematical sets is suboptimal in both time and space, in most cases.

[the code above]
Doubly incorrect. This is neither the operation I described, nor the operation you described. This is just a set union, which by the way is already implemented in the STL as std::set_union().
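
For reference, a minimal sketch of its use (assuming sets of int):

#include <algorithm>
#include <iostream>
#include <iterator>
#include <set>

int main(){
    std::set<int> s1 = {1, 3, 4, 5, 7};
    std::set<int> s2 = {2, 4, 6, 8};
    std::set<int> result;
    // std::set_union expects sorted ranges; std::set already iterates in
    // sorted order, so its iterators can be passed directly.
    std::set_union(s1.begin(), s1.end(), s2.begin(), s2.end(),
                   std::inserter(result, result.end()));
    for (int x : result)
        std::cout << x << " "; // 1 2 3 4 5 6 7 8
    std::cout << std::endl;
    return 0;
}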

Superb job completely missing the point. I couldn't have done it better.
closed account (48bpfSEw)
Hi helios, now I see my fault... I thought the originator of this thread was someone unknown with a beginner question. But it was you!!! I would never dare to give you advice, because you are one of the few big souls in this section from whom I learn a lot. I hope you forgive my arrogance.

This is the truth. No hidden sarcasm.

Sincerely,
Necip
So, when you say bitwise over the integers, what do you mean? My interpretation is that you write the integers in binary and then do bitwise operations on each binary digit?

The parts of your post that I managed to understand (about 1/3 of it) remind me of fractional p-adic numbers (in the case of p=2). By fractional 2-adic number I mean a 2-adic number between 0 and 1.

https://en.wikipedia.org/wiki/P-adic_number

[not quite sure about that any more, I just remembered that when you add p-adic numbers you get carryover].

What I didn't understand (there may be more coming): I gathered that & | are bitwise and and or but I am confused about ~ and ^ (are they bitwise not and bitwise xor)?

What is the aim of these two functions:

f_bitwise_over_integers(x, y) = (x & ~y, y & ~x)
h_bitwise_over_integers((a,b)) = (b, a)

and how they relate to the examples you give below them?
closed account (48T7M4Gy)
'Set-theoretic difference' has the relevant property regarding minus signs that applies in arithmetic but not here. The negative of a set doesn't make sense, since a set cannot hold fewer than zero elements, and a negative element, say in a set of toenails (unless it's ingrown, maybe), is bizarre.

(I'm still checking against the OP, but I'm assuming f, g and h are the same operator or function, dare I say it. The numerical example isn't a proof, as we all know, but it gives a start to what is probably a simple mathematical proof, possibly with Venn diagram assistance. See Wikipedia?)
I made a mistake in my previous definition:
g_bitwise_over_integers(x, (a, b)) = (x & a) | b
This should be
g_bitwise_over_integers(x, (a, b)) = (x & ~a) | b

bitwise over the integers
No, no. "f_bitwise_over_integers" is "[f bitwise] over the integers", not "f [bitwise over the integers]". I thought bitwise_f_over_integers would be harder to read.

My interpretation is that you write the integers in binary and then do bitwise operations on each binary digit?
The set of finite sets of integers is itself countable.
to_integer({0}) = 1b
to_integer({1}) = 10b
to_integer({2}) = 100b
to_integer({10, 3}) = 10000001000b
to_integer({0, 2, 4, 6, ...}) is undefined
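
In code this is just bit-setting; a rough sketch, assuming every element is small enough to index a bit of a 64-bit word:

#include <cstdint>
#include <iostream>
#include <set>

// Sketch of the mapping above: element k of the set becomes bit k of the
// result. Only meaningful while every element is below the word width.
std::uint64_t to_integer(const std::set<unsigned> &s){
    std::uint64_t n = 0;
    for (unsigned k : s)
        n |= std::uint64_t(1) << k;
    return n;
}

int main(){
    std::cout << to_integer({10, 3}) << std::endl; // 1032 == 10000001000b
    return 0;
}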

I gathered that & | are bitwise and and or but I am confused about ~ and ^ (are they bitwise not and bitwise xor)?
Yes. I used the C operators because I figured that'd be clear.

What is the aim of these two functions:

f_bitwise_over_integers(x, y) = (x & ~y, y & ~x)
h_bitwise_over_integers((a,b)) = (b, a)

and how they relate to the examples you give below them?
S1 = {1, 3, 4, 5, 7}
S2 = {2, 4, 6, 8}
P1 = x + x^3 + x^4 + x^5 + x^7
P2 = x^2 + x^4 + x^6 + x^8
N1 = 2^1 + 2^3 + 2^4 + 2^5 + 2^7 = 186
N2 = 2^2 + 2^4 + 2^6 + 2^8 = 340
(In other words, the integer representation of a set of integers is the polynomial representation with x = 2)

DN = f_bitwise_over_integers(N1, N2) = (N1 & ~N2, N2 & ~N1) = (186 & 4294966955, 340 & 4294967109) = (170, 324)

g_bitwise_over_integers(N1, DN) = (186 & ~170) | 324 = 16 | 324 = 340
g_bitwise_over_integers(N2, (324, 170)) = (340 & ~324) | 170 = 16 | 170 = 186
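
If anyone wants to check those numbers, a few lines of C++ reproduce them (32-bit values, matching the complements used above):

#include <cstdint>
#include <iostream>

int main(){
    std::uint32_t n1 = 186, n2 = 340;
    std::uint32_t a = n1 & ~n2; // 170
    std::uint32_t b = n2 & ~n1; // 324
    std::cout << a << " " << b << std::endl;   // f(N1, N2) = (170, 324)
    std::cout << ((n1 & ~a) | b) << std::endl; // g(N1, (a, b)) = 340 = N2
    std::cout << ((n2 & ~b) | a) << std::endl; // g(N2, (b, a)) = 186 = N1
    return 0;
}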

The negative of a set doesn't make sense
Note that if f() applies to two subsets of S, it cannot return a subset of S under any possible definition. It can, for example, return a pair of subsets of S.
That aside, it doesn't matter whether "the negative of a set" makes sense as a concept in itself. It only needs to make sense within the given algebraic context. If these properties hold:
* B - A = C
* A + C = B
* B + -C = A
then -- regardless of the specific identities of A, B, and C -- the corresponding definitions of addition, subtraction, and negation make sense.

For example, since the rationals are in a 1-to-1 correspondence with the naturals, it's possible to define rational_to_natural(q), natural_to_rational(n), rational_addition(n1, n2), rational_sign_negation(n), rational_multiplication(n1, n2), and rational_multiplicative_inverse(n), even though it's not possible to perform sign negation or multiplicative inversion on a natural and get back a natural.
q1 = 1/2
q2 = 3/7
n1 = rational_to_natural(q1)
n2 = rational_to_natural(q2)
q3 = natural_to_rational(rational_multiplication(rational_sign_negation(n1), rational_multiplicative_inverse(n2))) = -1/2 * 7/3 = -7/6

I'm still checking against OP but I'm assuming f, g and h are the same operator or function, dare I say it.
How could that possibly work?
* g(x, f(x, y)) = y
* g(y, h(f(x, y))) = x
* h(f(x, y)) = f(y, x)
closed account (48T7M4Gy)
How could that possibly work?


a) * g(x, f(x, y)) = y
b) * g(y, h(f(x, y))) = x where h(f(x, y)) = f(y, x)

From my point of view, and in the absence of anything I could find to the contrary, they translate to the following (E&OE):

a) f(x,f(x,y))
b) f(y, f( y,f(x,y) ) )

There is nothing unusual about nested operators. Take 2nd derivatives as one example among zillions.

What I haven't checked, and still haven't, is whether the numerical substitution works and gives the same answer as that posted.

I'm treating f as an operator on two sets to give a single set result analogous to any other operator I can think of.

If the f operator produces the same result as the 'relative complement', then the naming problem is solved for me, and indeed the resulting difference is a single set, albeit you showed two:
D = f(S1, S2) = {(F, {1, 3, 5, 7}), (T, {2, 6, 8})}
the second of which I ignored because it appeared to be just the 'remainder'. There is no requirement that I am aware of that the remainder has to be included in the difference, just as your later calculations show.

As for the negative of a set, the reason the algebra doesn't work, as you show, is simply that the rules of arithmetic don't apply; set theory applies, and the negative has no meaning, if my assessment of the name and of general set theory is correct. The rule 'misbehavior' is inherent in the definition of the 'set-theoretic difference' and in set theory, and that's what your results have shown, n'est-ce pas? It would be a problem if the reverse were true - anomalous to say the least, perhaps even a whole new field.

The whole discourse, as far as my involvement is concerned, hinges on whether the name 'set-theoretic difference' is the answer you were asking for. If it is, then you will see that fighting the -ve sign is pointless, despite my paraphrasing of -ve sets, which follows automatically from the definitions and set theory in general. If it's not, then I guess you'll have to keep asking and looking.
a) f(x,f(x,y))
b) f(y, f( y,f(x,y) ) )
This makes no sense. f(x, y) gives back something of a different type than its arguments. f(x,f(x,y)) is as meaningful as f(🚀, 🍌).

If the f operator produces the same result as the 'relative complement'
It doesn't. Look at the properties of f(), g(), and h().
* g(x, f(x, y)) = y
* g(y, h(f(x, y))) = x
* h(f(x, y)) = f(y, x)
If f(x, y) = y\x, where do you get the extra information required to fulfill g(x, y\x) = y? To go from one set to another there's something you have to take out and something you have to add in.

The rest of your post is derived from the proposition that f(x, y) = y\x is a valid definition for f(), which it isn't.

Take your time to read and understand my original definition.
Let S1 and S2 be two sets of T. Let D = f(S1, S2) be a set of something else such that g(S1, D) = S2 and g(S2, h(D)) = S1. That is, D encodes the differences, or deltas, between the two sets.
closed account (48T7M4Gy)
f(🚀, 🍌) is not a problem depending of course on your notation. The difference, set-theoretic even, between a set containing only a rocket and another containing only a banana is the empty set. I don't see any problem there unless you slip on a skin at Cape Canaveral.

Unfortunately I have to go, so I'll read your other stuff and reply another time. I think there are numerous misconceptions yet to be resolved even if that will probably end with a stalemate.

I think you need to consider exactly what g(...) is. Is it a repeat of f, or is that where the secret lies?
Perhaps it will help if I post some code.

#include <iostream>
#include <algorithm>
#include <vector>
#include <initializer_list>

template <typename T>
class Delta;

template <typename T>
class Set{
	std::vector<T> data;
public:
	Set() = default;
	Set(const std::initializer_list<T> &l){
		for (auto &i : l)
			this->data.push_back(i);
		std::sort(this->data.begin(), this->data.end());
	}
	void add(const T &i){
		this->data.push_back(i);
		std::sort(this->data.begin(), this->data.end());
	}
	typename std::vector<T>::const_iterator begin() const{
		return this->data.begin();
	}
	typename std::vector<T>::const_iterator end() const{
		return this->data.end();
	}
	bool contains(const T &item) const{
		return std::binary_search(this->data.begin(), this->data.end(), item);
	}
	// Implementation of f()
	Delta<T> operator-(const Set<T> &other) const;
	// Implementation of g()
	Set<T> operator+(const Delta<T> &delta) const;
};

template <typename T>
std::ostream &operator<<(std::ostream &stream, const Set<T> &set){
	stream << "{ ";
	for (auto &i : set){
		stream << i << ", ";
	}
	return stream << " }";
}

template <typename T>
class Delta{
	Set<T> add;
	Set<T> remove;

public:
	Delta(const Set<T> &add, const Set<T> &remove){
		this->add = add;
		this->remove = remove;
	}
	// Implementation of h()
	Delta<T> operator-() const{
		return Delta<T>(this->remove, this->add);
	}
	const Set<T> &get_add() const{
		return this->add;
	}
	const Set<T> &get_remove() const{
		return this->remove;
	}
};

template <typename T>
Delta<T> Set<T>::operator-(const Set<T> &other) const{
	Set<T> add, remove;

	for (auto &i : other)
		if (!this->contains(i))
			remove.add(i);
	for (auto &i : *this)
		if (!other.contains(i))
			add.add(i);

	return Delta<T>(add, remove);
}

template <typename T>
Set<T> Set<T>::operator+(const Delta<T> &delta) const{
	Set<T> ret;
	auto &add = delta.get_add();
	auto &remove = delta.get_remove();
	for (auto &i : *this)
		if (!remove.contains(i))
			ret.add(i);
	for (auto &i : add)
		ret.add(i);
	return ret;
}

int main(){
	Set<int> S1({ 1, 3, 4, 5, 7 });
	Set<int> S2({ 2, 4, 6, 8 });
	auto delta = S2 - S1;
	std::cout
		<< S1 << std::endl
		<< S2 << std::endl
		<< S1 + delta << std::endl
		<< S2 + -delta << std::endl;

	return 0;
}
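
For the record, running that prints S1 and S2, then S1 + delta and S2 + -delta, which come back as {2, 4, 6, 8} and {1, 3, 4, 5, 7}, i.e. S2 and S1 again.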
Actually, posting code didn't help at all.

I haven't quite gotten the purpose of f, h and g yet, but first, what are you doing with the bitwise operations?

The bitwise operations you gave are, from what I understood, pointwise binary operations in a list of bits.

Since the number of bits in each sequence is not specified, you are first padding the smaller sequence of bits with zeroes to make the two sequences of equal length, and then you do the pointwise operation.

The trick of "padding the smaller object so you can do operations between it and a larger object" has a mathematical name, "direct limit". Actually, direct limits are slightly more general: you are allowed to use a 'common refinement' [common padding] on both objects you want to do an operation with.

This direct limit business works like a charm with & and |: it inherits all nice properties of & and |.

However, with ~ and ^ there is an issue (not necessarily a problem).

The issue with a & ~x: first, you pad x with extra zeroes, and then you flip the bits. However, the number of bits you pad x with will change depending on a. In other words, ~x is not defined as a stand-alone operation.
The bitwise operations you gave are, from what I understood, pointwise binary operations in a list of bits.
I don't agree. Bitwise operations can be defined directly over integers without reinterpretation as a list of bits.

(x & y) = x >= y ? and(x, y) : and(y, x)
and(x, 0) = 0
and(x, y) = ((x%2 != 0 && y%2 != 0) ? 1 : 0) + 2 * and(x/2, y/2)

(x | y) = x >= y ? or(x, y) : or(y, x)
or(x, 0) = x
or(x, y) = ((x%2 != 0 || y%2 != 0) ? 1 : 0) + 2 * or(x/2, y/2)

(x ^ y) = x >= y ? xor(x, y) : xor(y, x)
xor(x, 0) = x
xor(x, y) = ((x%2 != y%2) ? 1 : 0) + 2 * xor(x/2, y/2)

where / denotes integer division

As for ~, my previous example assumed that the working sets were a small subset of the naturals. Specifically, {0, 1, 2, ..., 31}. This was necessary because otherwise ~ would return an infinite value for all finite values. It's possible to define a new bitwise operation \ such that x \ y == x & ~y:

x \ 0 = x
0 \ y = 0
x \ y = ((x%2 != 0 && y%2 == 0) ? 1 : 0) + 2 * ((x/2) \ (y/2))

EDIT: Corrected some of the logic.
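
For what it's worth, here is a rough transcription of those recursions into C++ (non-negative values only; the helper and wrapper names are mine):

// Each helper bottoms out once its second argument reaches 0; the wrappers
// mirror the x >= y ordering used in the definitions above.
unsigned and_helper(unsigned x, unsigned y){
    if (y == 0) return 0;
    return ((x % 2 != 0 && y % 2 != 0) ? 1 : 0) + 2 * and_helper(x / 2, y / 2);
}
unsigned or_helper(unsigned x, unsigned y){
    if (y == 0) return x;
    return ((x % 2 != 0 || y % 2 != 0) ? 1 : 0) + 2 * or_helper(x / 2, y / 2);
}
unsigned xor_helper(unsigned x, unsigned y){
    if (y == 0) return x;
    return ((x % 2 != y % 2) ? 1 : 0) + 2 * xor_helper(x / 2, y / 2);
}
unsigned bit_and(unsigned x, unsigned y){ return x >= y ? and_helper(x, y) : and_helper(y, x); }
unsigned bit_or (unsigned x, unsigned y){ return x >= y ? or_helper (x, y) : or_helper (y, x); }
unsigned bit_xor(unsigned x, unsigned y){ return x >= y ? xor_helper(x, y) : xor_helper(y, x); }

// x \ y, i.e. x & ~y, without ever forming an infinite complement.
unsigned bit_diff(unsigned x, unsigned y){
    if (x == 0) return 0;
    if (y == 0) return x;
    return ((x % 2 != 0 && y % 2 == 0) ? 1 : 0) + 2 * bit_diff(x / 2, y / 2);
}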
(x & y) = x >= y ? and(x, y) : and(y, x)
and(x, 0) = 0
and(x, y) = ((x%2 != 0 && y%2 != 0) ? 1 : 0) + 2 * and(x/2, y/2)


[edit]
Seems to me an equivalent way to do this is:
1) convert both integers to binary notation (list of bits)
2) pad on the left the smaller list with zeroes
3) take the ordinary AND of each pair of corresponding elements
4) go back from binary notation to integers.
Of course, you are implementing it recursively (the number of operations has the same order of magnitude).

If you remove step 1 and step 4 you simply operate on lists of bits.

The padding operation here would be what I was referring to as "direct limit".
Of course. It would also be possible to convert the integers to sets of bit magnitudes that are 1, then perform common set operations, then go back.
But I'm not sure what you're getting at.
I was just trying to understand what you are talking about.

It seems your operations give a software implementation of the direct limit ring

Z_2 -> Z_4 -> Z_8 -> ... -> Z_{2^n} -> ...
where each consecutive map is given by multiplication by two. Under that interpretation, your right-most binary digits ('right-most bits') correspond to rings on the left of that chain.

Now that it seems I have some understanding of the operations, what is the purpose of the functions f,g,h?