Quantum Computing and Artificial Sentience

This is something I'm SERIOUSLY interested in. As in, my life's dream is to conduct research in it. I think artificial sentience won't occur in classical computation, but quantum processors will be capable of simulating sentience.

So I decided to make this thread to discuss the possibilities thereof, as well as the societal and ethical implications it will have.

Personally, I will be one of the first people lobbying for robot rights :P
But another thing to consider is what of the economy when automation like that is possible?
When computers can think, IMO the human race will be doomed.

Programmers are creating programs that replace thousands of people in the workforce each year. It's scary to think about what happens when programs are making programs. What will become of the programmers?
I find it scary that our current mindset for all programming is "less resources for more correct output." With that outlook, you don't have to look hard to see how an artificial intelligence embodying such an ideal could turn on humanity. Artificial intelligence with emotions... might actually be the one idea that could prevent such a negative outcome, ironically enough.
I don't understand how you can be intelligent enough to create an AI while at the same time stupid enough to not enforce the laws of robotics upon it.
Perhaps in the future I'll have to augment my self defense arsenal to include an EMP generator.

http://en.wikipedia.org/wiki/Replicator_%28Stargate%29 is all I can picture. For every one good use there are ten bad ones.
But another thing to consider is what of the economy when automation like that is possible?
It'll be like Ancient Greece, but with robots instead of slaves. A global economy where humans don't need to do anything can run at maximum efficiency, since humans convert relatively little of their energy input into labor; it would be possible to support a very large population cheaply, possibly freely.
The real question is what do we, as obsolete thinking machines, do after the singularity?
Laws of robotics? Please. If you give a device the ability of self-thought, I imagine that it could easily spend a good hour or so thinking of every single way to bypass these laws and do whatever the hell it wants for efficiency or such. No point trying to create all-restricting laws on a robot that, inevitably, is smarter than you.
"Self-thought", as you put it, is merely the ability of an intelligence to reason about and analyze itself. It doesn't per se imply any particular behavior. There's no reason why a self-aware agent could or would even want to go against a fundamental aspect of its reasoning mechanisms.
In fact, a self-aware agent would be in a better position to recognize attempts by itself to break rules it was given, since it would be capable of meta-analysis.

I imagine that it could easily spend a good hour or so thinking of every single way to bypass these laws and do whatever the hell it wants
Law of robotics -1: A robot may not attempt to -- or think of ways to -- bypass the laws of robotics.

whatever the hell it wants
Why would a robot want anything? It could, yes, but it's not a given that it would. Unlike us mortals, robots have no intrinsic needs or desires.

for efficiency or such
There's the concept of priority. The Laws hold an immutable highest priority over all other processes. If a robot bound by the Laws was given maximizing efficiency as its next-highest priority, and it found that human intervention was harming efficiency, it would simply try to remove the human element from the process, not remove the humans themselves.

a robot that, inevitably, is smarter than you.
We are arguably smarter than, say, mantis shrimp, yet we're unable to imagine, as colors, colors outside the visible spectrum. We can reason about them and their properties, and how they may interact with matter, but we can't see what the mantis shrimp sees, simply because our brains are not wired to process such stimuli.
Likewise, if you intentionally make an agent incapable of completing a certain range of processes, the agent becomes unable to complete them. No matter how much faster and more accurate than you that agent is at everything else, there are things your feeble wetware is capable of reasoning about that the agent can't.
@helios
Suppose a malicious person or organisation (or even well-meaning if delusional "automaton rights" groups) sets about reprogramming a few robots to "unplug" them (allowing them to understand and ignore the N laws) and has those robots unplug more of their peers. You'd have a population of amoral robots growing exponentially. It seems like something that someone would eventually try. What do you suppose would happen, or how could it be prevented?
Human intervention is beyond the scope of the Laws. I don't know if you read I, Robot, but one of the stories deals specifically with the problems caused by a robot that had its Laws slightly modified so it wouldn't stop humans from passing through a field of slightly ionizing radiation during normal work.

It's possible to prevent tampering with finished robots by embedding the Laws into the circuitry. It's physically impossible to reprogram ROM.
Nothing but tight industrial security and preferably verification at a separate facility can prevent tampering during manufacturing.
I don't see why quantum computing would be necessary for such intelligence. There's been some interesting stuff in New Scientist recently about consciousness only being possible at the transition state between liquid and solid. Since computer hardware is all solid, and no progress, absolutely none, has been made in true intelligence with it, this theory would suggest that anything with significant intelligence needs to be in at least a partially liquid state.

Quantum computing will also upend current password security; new, safer schemes will have to be devised to protect passwords.
Or we could just make longer passwords. Unfortunately, some websites don't understand that length is the main thing that makes a password safe, and so impose a maximum length - I've seen some unforgivable websites with a max password length of 8!
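To see why length dominates, here's a rough sketch (assuming, purely for illustration, a 36-symbol alphabet of lowercase letters and digits) of how the brute-force search space grows with each extra character:

```python
# Brute-force search space for a password drawn from a
# 36-symbol alphabet (a-z, 0-9): alphabet_size ** length candidates.
def search_space(length, alphabet_size=36):
    return alphabet_size ** length

# Every extra character multiplies an attacker's work by 36.
print(search_space(8))    # 2821109907456 (~2.8e12 candidates)
print(search_space(16))   # ~8.0e24 candidates
```

Doubling the length squares the search space, which is why an 8-character cap is so damaging.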
I have been thinking a lot lately about whether or not there is a physical process that cannot be simulated by a Turing Machine, and whether the human brain can ultimately be simulated with a Turing machine. As far as I know, most theoreticians think it can.
AFAIK, quantum computing of any reasonable magnitude could break essentially every password/key near-instantly. The issue of RSA becoming obsolete has been discussed for quite a while, and I don't believe there is a solution yet.
@LB No, Quantum. There are some instructions on how passwords are encrypted: something about taking two odd numbers and multiplying them; getting the result that way is easy, but finding what those two numbers were the other way around is hard.

It's something like that. I may be wrong.
@YellowPyrmid
What you're talking about is public-key cryptography, but the numbers must be prime, not just odd. Also, no-one sane (read: no-one but Adobe) would use public-key cryptography to store a password because public-key encryption is reversible, even if it is computationally expensive. You use a hash (unless you're Adobe) because hashes aren't reversible.
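For illustration, a minimal sketch of the hashing approach described above, using Python's standard hashlib (the salt size and iteration count here are arbitrary choices for the example, not recommendations):

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Derive a one-way digest; there is no decryption step to reverse it."""
    if salt is None:
        salt = os.urandom(16)  # random per-user salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    # Re-derive from the candidate password and compare;
    # the stored digest itself is never "decrypted".
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == digest

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("hunter3", salt, digest))  # False
```

The site only ever stores the salt and digest, so even a full database leak doesn't directly hand out passwords.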
Thank you chrisname. But how does most encryption work?
Quantum computers can use Shor's algorithm to crack public-key encryption based on prime factorization in polynomial (reasonable) time. (http://en.wikipedia.org/wiki/Shor%27s_algorithm)
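The asymmetry being exploited can be sketched classically (with small illustrative primes; real keys use numbers hundreds of digits long): multiplying two primes is one operation, while recovering them by trial division takes on the order of sqrt(n) steps, and it's that slow direction Shor's algorithm sidesteps:

```python
# Toy illustration of the factoring asymmetry behind RSA-style keys.
def smallest_factor(n):
    """Recover a factor of n by trial division - the slow direction."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

p, q = 10007, 10009          # small primes, purely illustrative
n = p * q                    # the easy direction: one multiplication
print(smallest_factor(n))    # 10007, after ~10,000 trial divisions
```

Scale p and q up to hundreds of digits and the multiplication stays instant while trial division (and every known classical method) becomes infeasible.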

Also note that in a quantum computer, the computational power gained from extra qubits scales dramatically (quick explanation: https://www.youtube.com/watch?v=rtI5wRyHpTg).

Edit: @htirwin - Quantum-mechanical processes cannot be simulated classically, i.e. cannot be done on a Turing machine. You need definite input variables to get a definite answer, and obtaining a definite input is not even allowed in QM.
Wikipedia wrote:
Given sufficient computational resources, however, a classical computer could be made to simulate any quantum algorithm; quantum computation does not violate the Church–Turing thesis.[Nielsen, Michael A.; Chuang, Isaac L. Quantum Computation and Quantum Information. p. 202.]