Quantum Computing and Artificial Sentience

YellowPyrmid wrote:
how does most encryption work?

Public key cryptography works more or less the way you described, except with prime numbers. The exact details vary from algorithm to algorithm, and the algorithm you select depends on your goal. For example, imagine Romeo and Juliet want to communicate via e-mail but their families are monitoring their e-mails. Each generates a private key and a public key, and they exchange public keys. If Romeo wants to send a message to Juliet, he first encrypts it using her public key. The message can then only (feasibly) be decrypted by someone with access to Juliet's private key, so only Juliet, and anyone she chooses to share her private key with (or accidentally shares it with), would be able to decrypt the messages. The same is true for Romeo.
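
To make that concrete, here's a toy RSA-style sketch in C++. The primes and exponents are deliberately tiny textbook values chosen just for illustration; real keys are hundreds of digits long and real implementations add padding and much more.

#include <cstdint>
#include <iostream>

// Modular exponentiation: (base^exp) mod m, enough for a toy example.
std::uint64_t powmod(std::uint64_t base, std::uint64_t exp, std::uint64_t m)
{
    std::uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = result * base % m;
        base = base * base % m;
        exp >>= 1;
    }
    return result;
}

int main()
{
    // Juliet picks two primes and publishes (n, e); d stays private.
    const std::uint64_t p = 61, q = 53;
    const std::uint64_t n = p * q;   // 3233, shared by both keys
    const std::uint64_t e = 17;      // public exponent
    const std::uint64_t d = 2753;    // private exponent: e*d mod (p-1)(q-1) == 1

    std::uint64_t message = 65;                     // Romeo's plaintext (a number < n)
    std::uint64_t cipher  = powmod(message, e, n);  // Romeo encrypts with Juliet's public key
    std::uint64_t plain   = powmod(cipher, d, n);   // only Juliet's private key recovers it

    std::cout << "cipher: " << cipher << ", decrypted: " << plain << '\n';
}

The security rests on the fact that recovering d from (n, e) requires factoring n into its primes, which is easy for 3233 but infeasible for the sizes used in practice.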

Of course that doesn't solve the problem of impersonation: Tybalt could send a message to Romeo, encrypted with Romeo's public key (because it's public), claiming to be Juliet and asking Romeo to re-send his last message encrypted with a different public key because "she" lost access to her private key, and Romeo would have a hard time proving otherwise. Also, judging by the play, Romeo's not too bright, so he'd probably fall for it.
Quantum "computation" but I'm talking here about quantum simulation. e.g. Simulating the interaction of and photon with an electron and predicting the result non-probabilistically. If you are happy with uncertain answers to uncertain questions then yes you can simulate it with a turing machine, but if you want photon A to interact with electron A and tell me yes or no will it then go on to interact with electron B, the answer is "maybe" and using all the turing machines in the universe cannot resolve the problem.
closed account (z0My6Up4)
I think the term "artificial sentience" is contradictory. An entity can either feel and experience emotions or it cannot. Without replicating some form of biological life, I don't see how it would be possible to make a machine really experience emotion, even with quantum computing. But then again, I am not educated on what quantum computing really is and what its possibilities are.

On the other hand, I can clearly see how scientists could do some hideous work and create living beings through various biological experiments. It would not surprise me if something like that was already going on somewhere in the world. The book "Brave New World" by Aldous Huxley shows how doing things like this can lead to a horrible, nightmarish world.
closed account (EwCjE3v7)
Thank you for the explanation, chrisname, and yeah, I meant prime.

But earlier someone said it was only Adobe who used that, though I have heard many do.
Quantum "computation" but I'm talking here about quantum simulation. e.g. Simulating the interaction of and photon with an electron and predicting the result non-probabilistically. If you are happy with uncertain answers to uncertain questions then yes you can simulate it with a turing machine, but if you want photon A to interact with electron A and tell me yes or no will it then go on to interact with electron B, the answer is "maybe" and using all the turing machines in the universe cannot resolve the problem.

Yeah. I guess I really meant to question whether or not there is a physical process that could be used to create, or that is part of, something which is more computationally powerful than a Turing Machine.

Your example is on the sort of track I was thinking along. The uncertainty principle seems to defy logic. There is no known cause and effect for the particular way that photons or electrons will interact with respect to their wavelike behavior.

I suppose randomness is one of the issues I am thinking about. Maybe some physical process is able to bring random things or behavior into existence in a more powerful way than a Turing Machine can, enabling guesses of a kind that Turing Machines cannot achieve.

Of course I am being wildly speculative.
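
For what it's worth, ordinary computers already tap physical randomness in a limited way. A small C++ illustration of the difference between algorithmic randomness (fully reproducible by any Turing machine) and a possibly physical entropy source; whether std::random_device actually uses hardware entropy depends on the platform:

#include <iostream>
#include <random>

int main()
{
    // A deterministic pseudo-random generator: given the same seed, any
    // machine reproduces this sequence exactly.
    std::mt19937 prng(42);
    std::cout << "pseudo-random: " << prng() << ' ' << prng() << '\n';

    // std::random_device may draw on a physical entropy source (it is allowed
    // to fall back to a PRNG on some platforms), so its output is not, in
    // general, reproducible by re-running the program.
    std::random_device rd;
    std::cout << "hardware(?) random: " << rd() << ' ' << rd() << '\n';
}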
YellowPyrmid wrote:
But earlier someone said it was only Adobe who used that. But I have heard many do.

I said that; it was a joke. Adobe had a password database stolen a few months back, and it turned out they'd encrypted everything but didn't hash the passwords.

flint wrote:
Without replicating some form of biological life, I don't see how it would be possible to make a machine really experience emotion

Well, a biological brain is made up of neurons and little else -- there are other cells, but they don't take part in processing, they're just satellite cells of the neurons. By themselves, neurons are very simple: they have a bunch of dendrites and one axon. The dendrites receive signals and the axon transmits them (it sounds like a many-to-one relationship, but the axon is branched so it's actually many-to-many). If the sum of the signals at the dendrites is sufficient to exceed the neuron's threshold potential, then the neuron "fires": it sends a signal down the axon to be transmitted to all the neurons that are connected there. This can be trivially simulated by a computer with something like the following:
#include <memory>
#include <vector>

struct Neuron {
    void input()
    {
        // Signal received from a dendrite.
        ++potential;
        if (potential >= thresholdPotential)
            activate();
    }
private:
    static constexpr int thresholdPotential = 10;
    std::vector<std::shared_ptr<Neuron>> outputs;
    int potential = 0;

    void activate()
    {
        // Reset before transmitting so signals arriving during the cascade
        // aren't lost, then send the signal over the axon to every
        // connected neuron.
        potential = 0;
        for (const std::shared_ptr<Neuron>& neuron : outputs)
            neuron->input();
    }
};

Although you wouldn't use that for an actual neural network -- normally they use floating-point values, the inputs are weighted (so dendrite A can matter more to activating the neuron than dendrites B and C), and the output is usually smoothed using a sigmoid function of some kind -- the general idea is the same.

Biological neural networks differ from artificial ones mainly in that (1) most artificial neural networks (ANNs) have only a few hundred neurons whereas even a fruit fly has about 100,000 of them, and (2) ANNs usually only allow data to travel in one direction ("feed-forward") whereas biological brains can have loops, data going in multiple directions, etc., so the structure is much more complicated. Also, brains regularly change their structure during normal operation, whereas ANNs typically don't: the programmer can change the number of layers of neurons (though it's not common to have more than 3 layers) and the number of neurons per layer, but learning generally occurs by changing the weight values which determine the relative importance of each input to a neuron, rather than by adding or removing neurons, altering the flow of data between specific groups of neurons, and other things that biological brains can do.

Also, it's not just the number of neurons that counts, but the number of connections between them: for a biological neuron, being connected to 1,000 other neurons is not uncommon; for an ANN, having 1,000 neurons in the first place would be unusual. Imagine 1,000 instances of the above structure, each holding 1,000 pointers to other neurons: just storing the pointers would take several megabytes of RAM, and that's before you take into account the input to the network, not to mention the CPU overhead (though a GPU would be better) of processing 1,000,000 synapses.
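
As a rough sketch of what the more conventional artificial neuron looks like -- weighted inputs summed and squashed by a sigmoid, as described above -- something like this (the weights, inputs, and bias here are made-up values purely for illustration):

#include <cmath>
#include <iostream>
#include <vector>

// Logistic sigmoid: squashes any real number into (0, 1).
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// One artificial neuron: weighted sum of inputs plus a bias, smoothed.
double activate(const std::vector<double>& inputs,
                const std::vector<double>& weights, double bias)
{
    double sum = bias;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];  // input i, scaled by its importance
    return sigmoid(sum);
}

int main()
{
    std::cout << activate({1.0, 0.5, -0.25}, {0.8, -0.4, 1.2}, 0.1) << '\n';
}

Training then amounts to nudging those weight and bias values until the network's outputs match the examples it's shown, which is why the weights, rather than the wiring, are where the "learning" lives.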

That being said, I do think ANNs are a promising avenue in AI. It's just that the scale and complexity of the brain of even a mouse -- with about 70 million neurons -- are greater than those of the largest neural networks humans have built thus far (I can't find figures for the number of neurons, but the largest had about 11.2 billion connections, which would be less than the mouse's roughly 70 billion synapses assuming an average of 1,000 connections per neuron; the human brain is estimated to have about 100 trillion synapses for comparison). Also, just having an ANN won't get you anywhere: you have to train it on huge sets of data before it can do anything useful (the Google X neural network spends its time watching YouTube videos of cats: consider the Turing test passed). And I think making bigger and bigger networks will give diminishing returns pretty quickly; it will have to be combined with other techniques.

helios wrote:
A society where everyone is happy and has everything they want. How dreadful.

Suffering builds character. Although I suppose you could argue that in Brave New World, since they have total mastery of genetic engineering, they can manufacture character. Besides, I don't think they were really happy as evidenced by their frequent drug abuse. What they were is comfortable, and I'd rather be free than merely comfortable. Also, they were conditioned to be incapable of feeling the full range of human emotion, their human experience narrowed to a vague "slightly better than neutral" feeling. I can tell you from experience that it's boring when 90% of the time, all you feel is "OK". I've spent days trying to find something that would make me sad or whatever, but none of it works.
I hope this is some kind of sci-fi thread, because I think we are far from having robots roaming about. It takes a decade just to put a drug on the market; think how long it would take to put robots on the market.
Well, with robots doing everything we won't have much of an economy. As for encoding androids with rules, I could imagine some virus made by the government, a kid in his basement, or whoever, could take care of that.
Besides, I don't think they were really happy as evidenced by their frequent drug abuse.
The drug is part of why they were happy. There's a certain amount of unhappiness that's inherent to interacting with other people, which has nothing to do with the society you live in. If you have something to help you cope with that, all the better, I say.

What they were is comfortable, and I'd rather be free than merely comfortable.
But were they really less free than you? You may not be engineered into a class, but if you live in the Western world, your economic class is very strongly influenced by the economic class of your parents. There's a little movement up and down, but it's rare.
The freedom we have that they didn't is the freedom to do work we hate. Go us!

Also, they were conditioned to be incapable of feeling the full range of human emotion, their human experience narrowed to a vague "slightly better than neutral" feeling. I can tell you from experience that it's boring when 90% of the time, all you feel is "OK". I've spent days trying to find something that would make me sad or whatever, but none of it works.
Yes, but,
* Orgies.
* Smellable movies.
* No starving to death.
closed account (j3Rz8vqX)
Well, with robots doing everything we won't have much of an economy.

We can always work for the robots...

They'll put us to sleep and cultivate the pulses generated from our brain waves, body heat, and chemical digestive processing.

Just kidding.

But I'm sure we'll all get along, as long as we find uses for one another.
I'm not sure, but it sounds like you're asking for computers to become Maxwell's demons, which is just being unreasonable.


It does sound unreasonable (although it's not a Maxwell's demon), which is why I don't think a Turing machine could ever simulate it...

@Chrisname - "...a technique far more accurate than earlier microscopic methods, has shown about 200 billion neurons in the human brain with 125 trillion synapses in the cerebral cortex alone." - http://en.wikipedia.org/wiki/Human_brain

Most of this thread has been tl;dr, but I thought I'd share a relevant article: http://gizmodo.com/should-your-driverless-car-kill-you-to-save-two-other-p-1575246184
closed account (z0My6Up4)
helios wrote:
flint: Do you have any more ignorant, anti-intellectualist BS you'd like to regurgitate, or is that it?


How ignorant are you? If you were having a conversation face to face, would you speak to people that way? You're a real internet tough guy, aren't you. Idiot.
closed account (z05DSL3A)
helios wrote:
What does the first question have to do with the second question?


The second meaning of "ignorant" (a British, spoken usage) is not knowing the right way to behave or to treat people. So it is basically: how rude are you? If you were having a conversation face to face, would you speak to people that way?
closed account (z0My6Up4)
helios wrote:

How ignorant are you? If you were having a conversation face to face, would you speak to people that way?
What does the first question have to do with the second question?

If you were having a conversation face to face, would you speak to people that way?
I don't know. Would you openly display in public the level of ignorance and stupidity you have repeatedly shown on these forums? If I came across someone who did, I might talk to them the same way I just talked to you.

And you reported my post. How very mature.


Can't you figure out what I mean, or can't you understand English? To be perfectly frank, I could not care less.
To be honest, I was inclined to report the trolling by helios myself (but I refrained; I only report spam).
Not all offensive comments are trolling. Not all trolling comments are offensive.
closed account (L6b7X9L8)
Doesn't it make you wonder, guys, whether this could ever be achieved, considering how rude humans can be to each other? I don't know if it's just me being unfortunate enough to see it all the time, but people constantly try to fight each other at every turn before they actually try to achieve something great.

To me, or so it appeared, we only got into space because Russia and America were seeing who could develop the better missile. Imagine the reasons behind self-aware robots.


I hope I don't get bashed for this comment.
closed account (N36fSL3A)
Keep in mind that text doesn't have any tone. :)
Internet spats don't really bother me...