AI EPQ Question

I'm currently doing an EPQ on the question of to what extent the development of AI should be restricted in the future. As part of this, I need to get the opinions of as many people as I can, especially (though not limited to) those who are knowledgeable on the subject. As such, I really just want to get people's opinions on areas such as:
Should we aim to develop systems that can replace people as manual labourers?
Who should receive the blame if an intelligent system harms someone, especially one which has been taught how to complete its task rather than just having been programmed to do it?
Is the sci fi trope of an AI which is smarter than humans and turns against them actually possible?
Should AI be used to develop a better AI?
If a computer system is 'better' (e.g. is correct a higher proportion of the time) than a person at a job, is it always right to replace the person with the computer?

Any opinions/thoughts on any of the above questions, or the overarching one of to what extent the development of AI should be restricted in the future, are greatly appreciated, as are any pointers to books/internet sources on the matter.
Thanks
Should we aim to develop systems that can replace people as manual labourers?
Yes.

one which has been taught how to complete its task rather than just having been programmed to do it?
Please clarify this distinction.

Who should receive the blame if an intelligent system harms someone
An investigation should be conducted to determine what happened, and why it happened. For example, was the person doing something they shouldn't have (e.g. trespassing, ignoring warning signals, etc.)? Did the autonomous system malfunction, and if so, why? If not, was there a design issue? If so, are there any processes within the developer company that are to blame (e.g. staff are routinely overworked or pressured to meet deadlines)?
And we will eventually have to deal with cases where everyone did everything right, someone still got hurt, and no one is to blame.

Is the sci fi trope of an AI which is smarter than humans and turns against them actually possible?
A conception of allegiance does not follow from intelligence. For example, if nation A develops an AI, and it is stolen by enemy nation B, the AI might be equally compelled to follow the instructions of either master. In such a scenario, the AI could be said to have turned against A, even though all it's really doing is following whatever instructions it's given.
Likewise, the AI could be given the order "design, build, and deploy a weapon capable of killing all humans, and only humans" and might carry it out, if doing so is within its capabilities. Did it turn against us by not refusing to do so?

Should AI be used to develop a better AI?
Sure.

If a computer system is 'better' (e.g. is correct a higher proportion of the time) than a person at a job, is it always right to replace the person with the computer?
What do you mean by "is it right"?
Firstly, thank you for taking the time to reply.

Please clarify this distinction.

If someone programs a computer incorrectly, and the result of the computer following this program is that it harms someone, the programmer would usually be to blame. For example, if a computerised doctor was made to diagnose patients, but it was programmed incorrectly, the programmer/whoever decided how it should be programmed would be at fault. If it was an intelligent system that was taught and then got it wrong, should the responsibility go to the programmer, whoever taught it, whoever owns it etc?

About turning against us:
The points you raise are interesting, and not along the lines I had originally thought for that question (which is a good thing). However, to clarify my original question: do you think we will ever get to a point where AI will have been developed to such an extent that it could simply decide not to obey humans, and there would be nothing we would be able to do about it because it would have become more intelligent than people?

About AI developing AI:
This directly ties into the previous question. If we create a system that continuously gets more effective at developing intelligent systems, could it get to a point where this is dangerous because the developing system can develop better than people, hence we become dependent upon it? Alternatively, could it develop dangerous systems that cannot be stopped because people are simply less intelligent (as in, they cannot think/develop/learn as fast)?

What do you mean by "is it right"?

Going back to the doctor example. Doctors are people, and so they make errors a certain proportion of the time. Yet they are also able to apply judgment in a way far beyond current diagnosis systems. If, because these systems don't make human errors, they have a higher success rate, yet they cannot adapt as well as doctors when they encounter something they don't know, should the doctor be replaced? In this case you are choosing between randomly misdiagnosing a higher proportion of people, as opposed to misdiagnosing a specific, but smaller, group of people whom the computer system would have no chance of diagnosing. I hope I made myself clear with that; I'm not sure though. It's really meant to be an ethical question: should you always replace people with computer systems that are effective a higher proportion of the time?

EDIT:
This applies especially to the first question: one of the main problems with simply replacing everyone whom you can replace with machines is unemployment. What will those people do? Is it really desirable to get to the sort of reality in films such as Wall-E where all work is done by robots, and humans are left purposeless? Furthermore, without a massive culture change, people would lose their source of income if their work was done by machines, which don't need to be paid.
If someone programs a computer incorrectly, and the result of the computer following this program is that it harms someone, the programmer would usually be to blame. [...] If it was an intelligent system that was taught and then got it wrong, should the responsibility go to the programmer, whoever taught it, whoever owns it etc?
Okay, I see what you mean. I don't agree that the programmer is automatically at fault if there are software flaws. Like I said in my answer, an investigation should look into the ultimate source of the problem.

However, to clarify my original question: do you think we will ever get to a point where AI will have been developed to such an extent that it could simply decide not to obey humans, and there would be nothing we would be able to do about it because it would have become more intelligent than people?
No. Even the most sophisticated program can be stopped and debugged, even if only at the hardware level. If we can't figure it out, we can always leave it off.

If we create a system that continuously gets more effective at developing intelligent systems, could it get to a point where this is dangerous because the developing system can develop better than people, hence we become dependent upon it?
Dependent how? Such that no living human remembers how to run things? Yes, it could get to that point; no, I don't think it's inherently dangerous (although it would be wise to take preventative measures against it).

Alternatively, could it develop dangerous systems that cannot be stopped because people are simply less intelligent (as in, they cannot think/develop/learn as fast)?
No, the power can always be cut.

Going back to the doctor example. Doctors are people, and so they make errors a certain proportion of the time. Yet they are also able to apply judgment in a way far beyond current diagnosis systems. If, because these systems don't make human errors, they have a higher success rate, yet they cannot adapt as well as doctors when they encounter something they don't know, should the doctor be replaced? In this case you are choosing between randomly misdiagnosing a higher proportion of people, as opposed to misdiagnosing a specific, but smaller, group of people whom the computer system would have no chance of diagnosing. I hope I made myself clear with that; I'm not sure though. It's really meant to be an ethical question: should you always replace people with computer systems that are effective a higher proportion of the time?
If the machine is better in some instances and worse in others, rather than being at least as good in all instances, then it can't really be called "better" without some degree of deception.
I suppose answering this question would involve looking at the specifics of the situation: how much permanent damage is done by misdiagnosing common conditions and how much by misdiagnosing rare conditions? It's a bad trade if 1000 more people every year are cured of a flu but 10 more people every year die of hepatitis.
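To make that trade-off concrete, here's a rough back-of-the-envelope sketch in C++ that weighs each error profile by an assumed severity; the counts and weights are purely illustrative, not medical data:

// Sketch of the trade-off above, with purely illustrative numbers.
// The "severity" weights (how much permanent damage a misdiagnosis causes)
// are assumptions, not data.
#include <iostream>

int main()
{
    // Human doctor: more misses overall, but spread over a mild condition.
    double human_missed_flu = 1000, human_missed_hepatitis = 0;
    // Machine: fewer misses overall, but concentrated on the rare, severe condition.
    double machine_missed_flu = 0, machine_missed_hepatitis = 10;

    double flu_severity = 1.0;         // assumed: mostly recoverable
    double hepatitis_severity = 500.0; // assumed: potentially fatal

    double human_harm   = human_missed_flu * flu_severity
                        + human_missed_hepatitis * hepatitis_severity;
    double machine_harm = machine_missed_flu * flu_severity
                        + machine_missed_hepatitis * hepatitis_severity;

    std::cout << "expected harm, human:   " << human_harm << '\n'
              << "expected harm, machine: " << machine_harm << '\n';
    // With these weights, the system that is "correct more often" does more total harm.
}

With those made-up weights, the machine that is right more often still causes more total harm, which is the crux of the objection.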
So I can't answer the question as it is phrased.

one of the main problems with simply replacing everyone whom you can replace with machines is unemployment. What will those people do?
Before computers as we know them existed, the word referred to women who sat around doing calculations. I think we can all agree that it would have been a disservice to all mankind not to invent electronic computers just so these women could keep their jobs. We'll stay in the mud forever if, besides figuring out how best to solve problems, we also need to worry about what to do with whatever is currently solving those problems.

Is it really desirable to get to the sort of reality in films such as Wall-E where all work is done by robots, and humans are left purposeless?
We are already purposeless. We just don't have the luxury of doing whatever we want with our time, so we spend it learning and applying trades, and convincing ourselves that this is what purpose is, rather than what it really is: the method we've found to not starve.
If you want to look at what a post-scarcity civilization really looks like, check out Star Trek.
Should we aim to develop systems that can replace people as manual laborers?

Yes, but we also need to simultaneously aim to change our society and economic systems to work in an environment where people don't need to work.

Who should receive the blame if an intelligent system harms someone, especially one which has been taught how to complete its task rather than just having been programmed to do it?
This is difficult to say, and depends on the case.

Is the sci fi trope of an AI which is smarter than humans and turns against them actually possible?

Of course it's possible, although I think the scenarios we might want to worry about may not necessarily be quite like what you see in Hollywood. Here is one possibility I have thought of: we develop some autonomous self-replicating robots that we send out into our own and other solar systems to colonize, explore, mine, and what have you. We program them to be survivalists, and we probably also program them to leave the Earth alone. But at some point a bug emerges, and the robots begin overpopulating and coming towards Earth to expand in monolithic swarms. Note that they don't need to be smarter than us to pose a risk, but suppose we begin trying to stop them and they keep outsmarting us: that would be scary, and yes, it is possible.

Should AI be used to develop a better AI?
I don't see why not, but should the machine be allowed to play 'machine-god' (as humans do)? In some instances this could lead to apocalypse, for example if they start making grey goo or something. Humans are dangerous enough; how much independent power we allow robots to have could be important.

If a computer system is 'better' (e.g. is correct a higher proportion of the time) than a person at a job, is it always right to replace the person with the computer?

This goes back to the issue of changing society so that people don't need to work anymore when human work becomes obsolete. The scary scenarios involve a group of 'elites' deciding they don't need us anymore and just throwing us away.

Any opinions/thoughts on any of the above questions, or the overarching one of to what extent the development of AI should be restricted in the future, are greatly appreciated, as are any pointers to books/internet sources on the matter.
Thanks


Don't forget nanotechnology, and the possibility of very powerful future nano-computers and similar things being used for war and murder.
We are already purposeless. We just don't have the luxury of doing whatever we want with our time, so we spend it learning and applying trades, and convincing ourselves that this is what purpose is, rather than what it really is: the method we've found to not starve.
If you want to look at what a post-scarcity civilization really looks like, check out Star Trek.

That's a very interesting, albeit somewhat depressing, way of thinking of things. For the record, when I said "without a massive culture change", I was actually thinking about the sort of change depicted in Star Trek, but didn't want to rely entirely on sci fi references. I'm just not sure society will ever change in such a way. After all, someone still has to pay for the generation of power, the maintenance of the systems, etc., meaning money would still have value; it would just be much harder for people to get. I do get your point about how the women who sat around doing calculations adapted to the new situation, I'm just not sure it's safe to assume that would always happen.

No, the power can always be cut.

I'm not entirely convinced by this. Off the top of my head, the easiest way around this would be if the AI were a virus or worm. It could then spread faster than we could shut systems off, or it could spread undetected and then activate en masse. While this scenario seems a bit far-fetched, if we have a creative AI developing other AIs with the goal of making them more reliable, preventing them from being shut off in any way is quite an effective way of making a system reliable. It is always difficult to tell where the line between possible and not possible is when discussing AI, though.
the easiest way around this would be if the AI was a virus or worm
If this is a concern, simply keep the network airgapped.
It's worth noting that even an infinitely intelligent being would not be omnipotent, even in matters relating to its own hardware. Just like a person cannot learn how to consciously rewire their own brain, it's not a given that a hard AI would be able to learn how to use a network, even if its hardware was connected to one.
Even if it could use the network and see other computers and knew how to program, it does not follow that it would be able to program them. A human programmer locked in a room with a computer connected through a network to unknown systems (i.e. systems whose architectures are completely unknown to the programmer) would never be able to seize control of the other computers.

preventing them from being shut off in any way is quite an effective way of making a system reliable
This is like saying that a train that cannot be stopped at all is more reliable than one that can be. Actually, train brakes are designed so that power is required to keep them disengaged.
A system is made more reliable if it's more difficult for it to stop when not commanded to do so and vice versa. I would argue that, all other things being equal, a system that cannot be stopped by any means is inherently less reliable than a system that does not start.
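As a side note, the train-brake analogy maps onto a common software pattern: a "dead-man's switch", where the default behaviour is to stop unless the system keeps receiving explicit permission to continue. A minimal sketch, with invented names and timings:

// Sketch of a fail-safe ("dead-man's switch") control loop: the system needs a
// continuous "keep going" signal; losing the signal stops it, like a train brake
// that engages when power is lost. All names and timings are illustrative.
#include <chrono>
#include <iostream>
#include <thread>

using Clock = std::chrono::steady_clock;

int main()
{
    auto last_authorization = Clock::now();
    const auto timeout = std::chrono::seconds(2);

    for (int step = 0; step < 100; ++step)
    {
        // In a real system this would poll an operator console, a watchdog line, etc.
        bool authorization_received = (step < 5);   // pretend the operator stops responding
        if (authorization_received)
            last_authorization = Clock::now();

        if (Clock::now() - last_authorization > timeout)
        {
            std::cout << "No authorization for " << timeout.count()
                      << "s -- shutting down.\n";
            break;   // the default behaviour is to stop, not to keep running
        }

        // ... do one unit of work here ...
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
}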
I'm a hobbyist coder, daydreamer, and one who thinks a little too far outside the box. I hope my opinions help.

To what extent the development of AI should be restricted in the future.


This is clearly a massive book to write. To be 100% safe, we don't make true AI, but just simpletons. Yet the moment somebody makes a true AI, they will have the upper hand.

Should we aim to develop systems that can replace people as manual labourers?


Yes, but people will go the way the horse did when we created cars. Although, if too many people are unemployed by this, then we could have a revolt. Every dictator only needs one thing to blame, that people can agree on, to get into power and lead people to do evil things to their fellow man. It took 100 years for the horse population to decrease ... but they don't have rights.

Who should receive the blame if an intelligent system harms someone


Automated systems are not intelligent. Therefore, if it is truly intelligent, then you take the AI unit to court to be judged like you would a person. The outcome determines what you decide to do with the situation.

Is the sci fi trope of an AI which is smarter than humans and turns against them actually possible?


True intelligence is where the AI can make all decisions. Only a few creatures are able to lie and manipulate others in their community to get more out of their fellow creatures, whether it's for personal gain, like eating a banana in peace, or to improve the group by getting them to farm.

Therefore, if it's intelligent, then it's going to know how to lie to get the end results it needs... or wants.

Should AI be used to develop a better AI?


Again, to keep the upper hand, you are going to have to, but it will be like lighting a forest fire: we won't be prepared to control it for very long.

If a computer system is 'better' (e.g. is correct a higher proportion of the time) than a person at a job, is it always right to replace the person with the computer?


They are about equal right now, and in a year or two we'll be out of date. Journalists have been replaced in both financial and sports article writing. There are apps writing fresh new music, better than what we can do, in much, much shorter times. There are AI bots doing high-speed trading on the markets.

The real focus is what you do with the people who can't or won't adapt, especially the dumb ones who won't ever get that perfect burger-flipping or paper-pushing job. We, as people, are already a powder keg ready to go off.

The end result I see is people doing what they can to destroy power lines, and anything and everything at the cost of technology, so they have a job again ... as in "If I can't have it, no one will." Just look at what people do to gas pumps when they run out due to an oversight by the station owner/crew. They almost destroy the pump and drive recklessly out of the lot.
Going back to the doctor example. Doctors are people, and so they make errors a certain proportion of the time. Yet they are also able to apply judgment in a way far beyond current diagnosis systems. If, because these systems don't make human errors, they have a higher success rate, yet they cannot adapt as well as doctors when they encounter something they don't know, should the doctor be replaced? In this case you are choosing between randomly misdiagnosing a higher proportion of people, as opposed to misdiagnosing a specific, but smaller, group of people whom the computer system would have no chance of diagnosing. I hope I made myself clear with that; I'm not sure though. It's really meant to be an ethical question: should you always replace people with computer systems that are effective a higher proportion of the time?

This is already the case. The solution is to use a combination of human and machine intelligence. With modern AI systems used for these types of purposes, it's usually very obvious when the machine can or cannot make accurate predictions, as the predictions are based on what has been learned from previous data, and its predictive abilities can be evaluated quantitatively from that same data. But now the doctor will use the intelligent system as a tool to help them make the decision.
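A rough sketch of that human-in-the-loop arrangement, with an invented model and threshold (the point is only that low-confidence cases get deferred to the doctor rather than decided automatically):

// Sketch of human-in-the-loop classification: the model's prediction is used only
// when its confidence clears a threshold; otherwise the case is deferred to a
// human. The model, threshold, and data here are stand-ins, not a real system.
#include <iostream>
#include <string>
#include <vector>

struct Prediction {
    std::string label;
    double confidence;   // assumed to come from the model, in [0, 1]
};

std::string decide(const Prediction& p, double threshold)
{
    if (p.confidence >= threshold)
        return "auto: " + p.label;
    return "defer to doctor";   // the machine knows it is outside familiar data
}

int main()
{
    std::vector<Prediction> cases = {
        {"influenza", 0.97},   // looks like the training data: act automatically
        {"influenza", 0.55},   // unfamiliar presentation: hand it to a human
    };
    for (const auto& c : cases)
        std::cout << decide(c, 0.9) << '\n';
}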

Another thing you might want to think about is the fact that most stock market transactions are now done automatically by AI. That is, the AI itself decides to buy or sell billions of dollars' worth of stock every day, without the owner's direct oversight. This has actually caused many problems in the past, in rare cases where a real person would have known better. Also, AI dominance in financial transactions has led to too little diversity in buying/selling strategy, which has caused, for example, everyone to dump or buy the same stocks at once, which can end up sabotaging the strategy for all and causing economic problems.

It's humans' ability to use, or create, diverse strategies that enables us to be so successful. The irony is that such a thing requires some lack of optimization and some incorrectness. I think to make AI adaptable enough to take our place, we have to enable, maybe even encourage it to sometimes make mistakes.
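That idea of deliberately tolerating "mistakes" resembles what reinforcement learning calls epsilon-greedy exploration; here's a minimal sketch with made-up action values:

// Minimal epsilon-greedy sketch: most of the time pick the action currently
// believed best, but with probability epsilon pick a random one, accepting
// occasional "mistakes" in exchange for diversity. Values are illustrative.
#include <iostream>
#include <random>
#include <vector>

int main()
{
    std::vector<double> estimated_value = {1.0, 0.8, 0.3};  // made-up action values
    double epsilon = 0.1;                                    // chance of exploring

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    std::uniform_int_distribution<int> any_action(0, (int)estimated_value.size() - 1);

    for (int step = 0; step < 10; ++step)
    {
        int action;
        if (coin(rng) < epsilon)
        {
            action = any_action(rng);   // deliberate "mistake"
        }
        else
        {
            action = 0;                 // greedy choice: best current estimate
            for (int i = 1; i < (int)estimated_value.size(); ++i)
                if (estimated_value[i] > estimated_value[action]) action = i;
        }
        std::cout << "step " << step << ": action " << action << '\n';
    }
}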

Sorry for the relatively long absence, a combination of panic-programming course work and getting AS results has been taking up my time recently. I've also started reading Surviving AI by Calum Chace, which is a highly informative book.

@Helios
If this is a concern, simply keep the network airgapped.
It's worth noting that even an infinitely intelligent being would not be omnipotent, even in matters relating to its own hardware. Just like a person cannot learn how to consciously rewire their own brain, it's not a given that a hard AI would be able to learn how to use a network, even if its hardware was connected to one.

If an AI was taught to program, especially if it was one which had been taught to produce and improve other AIs, it would most likely be able to either directly modify itself, or at least eventually create a better version of itself, which then repeats and the AI gets better and better at improving itself, although there will of course be a hardware barrier to how good it can actually get. As such, I believe it's not out of the question that an AI could learn, most likely via incredibly fast trial and error, to at least send data over a network, if not more.

This is like saying that a train that cannot be stopped at all is more reliable that one that can be...

This was intended to highlight an issue with a truly creative AI system, especially one with a natural language interface. There's no guarantee that it would interpret what it was told to do in the same way that the person commanding it interpreted the instructions. This is where a system of restrictions such as Asimov's laws could be required, however as I've mentioned, AIs could well learn to better themselves, and in doing so, they could remove any restrictions that we program in.

@DTM256
I'm a hobbyist coder, daydreamer, and one who thinks a little too far outside the box. I hope my opinions help.

Thinking a bit far outside the box is most likely a good thing when it comes to this sort of topic. Also your opinions do help; I need to get the opinions of as many people as I can, especially people such as programmers.

This is clearly a massive book to write. To be 100% safe, we don't make true AI, but just simpletons. Yet the moment somebody makes a true AI, they will have the upper hand.

Well I've got a 5000 word essay and 15 minute presentation to write on it, so I'll most likely do a general analysis of the question and find a couple of specific areas to go into detail in. And this arms race scenario of everyone rushing to make better and better AIs is one of the dangerous scenarios which I intend to address. I believe some restrictions do need to be put in place to ensure that we don't develop something which is too powerful to control/contain, hence my main question.

Automated systems are not intelligent. Therefore, if it is truly intelligent, then you take the AI unit to court to be judged like you would a person. The outcome determines what you decide to do with the situation.

While this does seem the logical conclusion, I'm not sure most people will ever be happy with punishing what they will most likely consider 'just a machine', if the machine does something to harm them.

Therefore, if it's intelligent, then it's going to know how to lie to get the end results it needs... or wants.

That's what I'm trying to work out how to avoid. The ideal situation would be one where we can make actually intelligent systems, but do it in such a way that they never actually want to do anything that we don't want them to. I'm not sure that'll happen though.

@htirwin
Another thing you might want to think about is the fact that most stock market transactions are now done automatically by AI.

This is a very interesting topic, and one that is also mentioned in the book I'm reading. In fact I think that if we do get a feedback loop of self-improving, or evolving AIs, that it would most likely be here. I think evolving AIs are likewise an interesting topic, though as far as I know, something that has not been developed for practical use yet.

I think to make AI adaptable enough to take our place, we have to enable, maybe even encourage it to sometimes make mistakes.

Is it really a mistake if the result is better than not making the 'mistake' though? Or is it teaching the AI to use variety, and to predict what other AIs will do and react to that?

closed account (48T7M4Gy)
Just like a person cannot learn how to consciously rewire their own brain
That is not quite right. CBT, among many other modern psychological (and, more especially, psychiatric) techniques and treatments, trains and encourages patients to do just that. Sure, the patient doesn't get out a soldering iron and wire cutters in that sense, but the emerging treatments based on brain plasticity are effectively self-rewiring.

Cognitive embodiment is another well-established but emergent field of endeavour at the University of Edinburgh and elsewhere in the US, where substantial amounts of time, money and research effort are also being spent successfully.

Is it really a mistake if the result is better than not making the 'mistake' though? Or is it teaching the AI to use variety, and to predict what other AIs will do and react to that?
Take extra special care in overgeneralizing with this too, as it could be misunderstood as falling victim to the 'fail fast' mythology Forbes writes about. AI is more attuned to fuzzy logic than to risky, deliberate errors.
closed account (48T7M4Gy)
I believe it's not out of the question that an AI could learn, most likely via incredibly fast trial and error, to at least send data over a network, if not more


I agree, but let's not lose sight of the fact that it is already here, some of it sophisticated and some not so much. Google "adaptive artificial intelligence", for instance. Try C# (as one that springs to mind); it has a simple built-in facility for adaptive code generation.

Adaptation may not even rely just on fast trial and error; there are other means by which adaptation can be generated, e.g. the spectrum of possibilities and choices coming in from sensor arrays. Weapons and fire control systems have been doing this for years, successfully but in a way that is rudimentary compared to the foreseeable future of adaptive algorithm deployment, etc.
If an AI was taught to program, especially if it was one which had been taught to produce and improve other AIs, it would most likely be able to either directly modify itself, or at least eventually create a better version of itself, which then repeats and the AI gets better and better at improving itself
Imagine a computer designed such that code memory and data memory are in different address spaces, data can never be executed due to the design of the ISA, and code is stored in ROM that stores data in the crystalline configuration of the semiconductor. It would be physically impossible for such a computer to execute self-modifying code.

This is where a system of restrictions such as Asimov's laws could be required, however as I've mentioned, AIs could well learn to better themselves, and in doing so, they could remove any restrictions that we program in.
The laws could be programmed in to protect themselves from modifications.
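One (heavily simplified) way to picture that separation: the self-modifying part only proposes actions, and a fixed filter it cannot rewrite vetoes anything that breaks the hard-coded rules. This is only an illustration of the structure, not a claim that it would contain a real AI; all names and rules are invented:

// Sketch of hard-coded constraints wrapped around a modifiable policy: the
// "brain" can change how it chooses actions, but every action passes through a
// fixed check it has no access to. Names and rules are invented for illustration.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

bool violates_fixed_rules(const std::string& action)
{
    // This check lives outside the learning component (conceptually, in ROM).
    static const std::vector<std::string> forbidden = {"harm_human", "disable_off_switch"};
    for (const auto& f : forbidden)
        if (action == f) return true;
    return false;
}

void execute(const std::function<std::string()>& policy)
{
    std::string proposed = policy();         // the modifiable part proposes
    if (violates_fixed_rules(proposed))      // the fixed part disposes
        std::cout << "blocked: " << proposed << '\n';
    else
        std::cout << "executing: " << proposed << '\n';
}

int main()
{
    execute([] { return std::string("move_crate"); });
    execute([] { return std::string("disable_off_switch"); });
}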

That is not quite right. CBT, among many other modern psychological (and, more especially, psychiatric) techniques and treatments, trains and encourages patients to do just that. Sure, the patient doesn't get out a soldering iron and wire cutters in that sense, but the emerging treatments based on brain plasticity are effectively self-rewiring.
Then I question the relevance, since a self-modifying AI would use the software equivalent of those tools.
It's like comparing training muscles through exercise with cyborgization.
closed account (48T7M4Gy)
The question of relevance and cyborgization isn't the point I was raising. The comment made by the OP relates to self-repair, especially conscious self-repair, of the human brain, and I have demonstrated there are at least several examples of that actually in operation as of now. They are conscious interventions by the patient, not requiring cyborgization or even medication, just training, and they are not simple muscular physiotherapy. The key to one of the therapies is, as mentioned, brain plasticity: conscious brain plasticity guided by the same non-AI individual human.

The extrapolation of that to AI machines may well indeed be achieved by appropriate software tools. There is nothing in the above comments to suggest otherwise.
From what you're saying, the treatment is the neurological equivalent of muscle training. The individual is not consciously deciding where to move some synapse; they're performing some activity that, because of the way the brain works, happens to cause reorganization.
This is a hardware feature, not a software feature. If the brain didn't work like this (and I see no reason why it would need to), nothing would happen.
closed account (48T7M4Gy)
Not at all. You are talking about muscle training, hence my reference to it not being physiotherapy, to dispel that idea. In any case the human brain is not muscular - well, at least in most cases of human crania?

The superfluous notion of control over individual neuronal structures is not the point, hence the reference to wire cutters etc. I could equally argue that a computer with the best AI in the world is not such on the grounds that it has no control over the electrons or holes flowing through its circuits. That would be silly. Similarly, the brain under therapy isn't working properly, and that's the whole point of the self-therapy, which takes advantage of brain plasticity, and the individual does it consciously, deliberately and without moving a muscle.

Hardware is involved, of course, just as with a computer. But that is not the totality of the changes, as any neurologist, psychiatrist or other practitioner will advise. I know one or two neurologists who describe the human brain in general hardware/software terms, so there are no surprises there. I would go so far as to say that their use of those terms for human brains is quite common.

The idea that if a brain or computer didn't work like this then nothing would happen is a tautology, unfortunately beyond anything I can fathom.
As far as I know, it's the brain's plasticity that makes it so effective at learning and remembering. In fact, according to Google, advanced artificial neural networks can use plasticity in a similar way to the brain in order to adapt to stimuli effectively. This would imply that if an AI could learn how, it could employ similar techniques if it was based on a neural network, which would not surprise me at all.
I could equally argue that a computer with the best AI in the world is not such on the grounds that it has no control over the electrons or holes flowing through its circuits. That would be silly.
Electrons and conductor leads (computer) are analogous to the laws of physics (human). Executable bits (computer) are analogous to neuronal configuration (human). Data bits (computer) are analogous to subjective memory (human).
An AI might or might not have control of its own executable code, which is what is being discussed here.

the individual does it consciously, deliberately and without moving a muscle
Can you explain what it is the person actually does, so I know we're not misinterpreting each other?

The idea that if a brain or computer didn't work like this then nothing would happen is a tautology, unfortunately beyond anything I can fathom.
"X is true and Y is true, but if X was false, Y would also be false" is not a tautology.
closed account (48T7M4Gy)
That's right. Shadow, that is, due to the crossover of posts.

As for the rest from Helios, the reductionist approach doesn't detract from the initial response I made in commenting on the OP's point about human brains. The comments are well and truly on-topic and directly relevant.

But reductionism generally reaches a point where splitting hairs becomes like the repeated tautologous route on its way to nowhere, if you get what I mean and understand it.

You raise an interesting side issue, though, straight out of the scientism manual. Do electrons obey the laws of physics, or do the laws of physics obey electrons? And that is even without contemplating whether electrons, or even observers, exist. This is why reductionist arguments are so fascinating, don't you think? Brains are most definitely chemo-electronic entities and certainly subscribe to subjective memory, as Plato so ably pondered in the allegory of the cave. No question there at all.

All those X's and Y's. All good stuff but regrettably not a tautology to be seen. ;)
I think that an interesting way to look at intelligence and adaptive learning is that it requires some direction or motivation. And in turn, that requires some system, or set of rules, about what is desirable or undesirable, and some system for measuring an outcome and mapping it to such a value. Biological life is heavily embedded with such things, most notably in the form of pain and pleasure, rewards and punishments delivered through your nervous system to your consciousness. When we are being damaged, we feel pain; when we succeed, we feel happy; and so on. It is largely this type of built-in system that guides us.

In modern AI applications, for example machine learning, the focus is typically on making some optimal decision, or implementing some optimal strategy for achieving some predefined task, and there is some error measure that the system tries to reduce. In humans, such an error can be thought of as analogous to pain or discomfort, or any other feeling we try to avoid.
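As a concrete toy example of "an error measure the system tries to reduce", here's a one-parameter model fitted by gradient descent on a squared error; the data and learning rate are made up:

// Toy illustration of an error measure being reduced: fit y = w * x to a few
// points by repeatedly nudging w downhill on the squared error.
// The data and learning rate are made up.
#include <iostream>
#include <utility>
#include <vector>

int main()
{
    std::vector<std::pair<double, double>> data = {{1, 2}, {2, 4}, {3, 6}};  // y = 2x
    double w = 0.0;              // initial guess
    double learning_rate = 0.01;

    for (int epoch = 0; epoch < 200; ++epoch)
    {
        double gradient = 0.0;
        for (const auto& [x, y] : data)
            gradient += 2.0 * (w * x - y) * x;   // d/dw of (w*x - y)^2
        w -= learning_rate * gradient;           // step that reduces the error
    }
    std::cout << "learned w = " << w << " (target 2)\n";
}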

One of the problems is that there is no clear, simple way to generalize the assignment of error to arbitrary functions. When the AI is confronted with a type of decision, how does it know what is bad or what is good? We can't really hardcode error measures for every possible situation. Human beings, for example, simply seem to have a set of base measures that have been optimized to allow us to derive measures for the important decisions and situations we find ourselves in as we live our lives as a species on planet Earth. Is there a reason we like kittens, can be made to cry by watching a movie, buy life insurance for ourselves and write wills, feel good when people give us compliments, appreciate nice views, smells, etc.? Where does this come from? Can or should future AI have similar sentiments?

So to me, one of the interesting things about humans essentially creating artificial intelligent "life", or robotics that acts as life, is that we are introducing something which has not evolved alongside the rest of the intelligent entities in this ecosystem to achieve some sort of symbiosis, and to respect or love the world it lives in and the other creatures it lives alongside. To make artificial 'life', and only provide it with motivation to survive and complete useful tasks is not really enough, especially for something already advanced and capable, introduced abruptly into an evolutionary symbiotic system.

So if we take AI steps further, past simply making predefined decisions where the error measure is explicitly given, we need to be careful about how we design their pleasure/pain, reward and punishment systems, so that they are complete enough to derive further error measures from, e.g. the way mathematics is derived from simple axioms, but we also need to make sure that these basic axioms lead to AI that behaves the way we would like. We can make them evil or benevolent, self-serving or selfless, and so forth, but it seems that with human life there is a sort of yin-yang relationship and a sort of balance required, which is not as straightforward as you would think, to enable us to survive and be successful, be good to one another, not destroy our environment, and so forth. Even human beings are not doing so well at this, in spite of the fact that we are fairly heavily geared towards loving the very things we are destroying.

Anyway, perhaps one of the most interesting questions, is whether selfishness and other negative traits or feelings are necessary, and for what, and to what extent.

On the topic of AI creating other AI, there is of course a risk, just as there is a risk when we create AI, but it's much more risky from our perspective: it is difficult enough for us to create AI with good core 'axioms', so what 'axioms' an AI created by AI will have, and the vast universe of 'logic' that will follow from them, is even more unpredictable for us.

Another thing is that whichever 'axioms' they have, if they differ from ours, then they will disagree with us about fundamental things, and may potentially see us as evil, or as lesser than themselves.