It really, really depends on who you ask. Kurzweil, if I recall, claims that we will have general-intelligence AI within 30 years. Others aren't as optimistic and put it further out, in the 40-50 year range. Personally, I have no idea, which is why I want to work in the field.
We're already there. There is AI everywhere right this second. Google Maps? AI. Stupid little Roomba vacuums? AI. Netflix telling you you'll probably enjoy watching Daredevil? AI. Hell, Google has working self-driving cars right now. That's some pretty advanced AI.
AI is a very broad term which encompasses a significant number of topics.
Now, when will we have a general-purpose AI robot that we can all be friends with until it murders its creator? I'm inclined to say never.
Are we talking general-purpose AI as in the level of things like C3PO from Star Wars or things like that?
I don't know all that much about what that would require, but I agree that we will never be able to develop it. By every definition of a general-purpose AI that I have found, it must be able to communicate in natural language.
As you may understand if you have ever tried to learn another language, learning a language is a very complicated skill; some people can try for years to learn a second language and never become fluent. Most people may reach a level where they can understand others and be understood in turn, but they'll always be identified as a foreigner due to accents, odd sentence structures, different ways of saying things, or misuse of articles.
Getting an AI to learn something this complicated would be insane, especially with a language like English, where it would not only have to learn all the vocabulary and the (sometimes conflicting) grammar rules, but also the huge number of idioms and the different meanings words take on in colloquial usage; otherwise it won't be able to understand properly. It would also need to be able to extrapolate what you mean (as we do) when you don't speak 'proper' English, for example when using slang terms or contractions.
It gets even worse when it comes to speaking. The AI needs to be able to construct speech itself and present it in a straightforward order. That order isn't necessarily always the same: depending on intent and what it feels requires emphasis, the order needs to change, or people may misinterpret the sentence even though it is still grammatically correct. It may also need to change its manner of speech based on the audience and their social status and position (though this may be more of a factor in other cultures).
However, the hardest bit would be for it to know what to say. For it to speak with natural language, it should be able to move a conversation on spontaneously, without necessarily having any stimuli to get it to know what to talk about. It can't just always reply, it has to come up with new meaning itself. I can't even think of how you might begin to do that.
Even just having a fully-formed brain isn't enough. As far as we know, of all the animals on the planet, humans are the only species capable of higher-level thought. Though people have taught monkeys and other animals to talk or understand, they always do so in direct response to a stimulus; they have never initiated a complex conversation themselves (i.e. one not simply about their current needs).
There are more levels of complication that speaking with natural language might bring, but I'll leave it there for now.
And remember, this is just one aspect of an AI that needs to be fulfilled, although possibly the most complicated. With all the requirements a general-purpose AI would need to meet, I cannot see them managing to make one within the next few hundred years, if ever.
It's very hard to put a timescale on these kinds of things. If some government or other decides it's a good idea to fund AI research out of fear of the enemy making an AI first, that could shift progress substantially.
Having said that, I'd be quite surprised to see human level AI appear this century.
One of the big challenges for me when doing AI programming for my games is getting the computer to make "human-like" mistakes. My Tic Tac Toe, for example, isn't much fun because the computer never loses. It was programmed to make the appropriate response to every possibility, using the same logical order of thought that I myself used when playing the game.
First check for a winning move; if there is none, check if a blocking move is needed; if not, check for a possible two-way pinning move; if not, check whether the opponent could make a two-way pinning move and thwart it; and if not even that, then f*** it and go wherever.
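That rule priority can be sketched in a few lines of Python. This is just an illustration, not the actual game code; the board representation and all the names are my own:

```python
# Board: a list of 9 cells, each 'X', 'O', or None, indexed 0-8 row by row.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winning_moves(board, player):
    """All empty cells that would complete a line of three for `player`."""
    moves = []
    for line in WIN_LINES:
        cells = [board[i] for i in line]
        if cells.count(player) == 2 and cells.count(None) == 1:
            moves.append(line[cells.index(None)])
    return moves

def fork_moves(board, player):
    """Empty cells that create two winning threats at once (a two-way pin)."""
    moves = []
    for i in range(9):
        if board[i] is None:
            trial = board[:]
            trial[i] = player
            if len(set(winning_moves(trial, player))) >= 2:
                moves.append(i)
    return moves

def choose_move(board, me='X', them='O'):
    # 1. Take a winning move if one exists.
    win = winning_moves(board, me)
    if win:
        return win[0]
    # 2. Block the opponent's winning move.
    block = winning_moves(board, them)
    if block:
        return block[0]
    # 3. Create a two-way pin if possible.
    fork = fork_moves(board, me)
    if fork:
        return fork[0]
    # 4. Thwart a potential opponent pin.
    their_fork = fork_moves(board, them)
    if their_fork:
        return their_fork[0]
    # 5. Otherwise, go wherever (first empty cell).
    return next(i for i in range(9) if board[i] is None)
```

Because every rule is deterministic, this player never loses, which is exactly the problem: to make it feel human you'd have to deliberately skip or misapply a rule some fraction of the time.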
In many ways, human intelligence is the ability to solve problems. This is thinking. But humans also have to be taught. It could be very much the same for computers: they are useful for solving problems, but they also need to be taught, or programmed, as the case may be. This has been going on for years, and so computers are not only already at human-level intelligence but have passed us in ability. The key difference between human and machine is not intelligence but emotion. People 'feel' and computers don't. When we can develop artificial emotion, computers will have evolved into a form of life of their own, created by man.