Reading information off the web?

I am building a program that reads from a particular web page. I have written it so that it parses the page correctly, but right now it only works on a page I have downloaded by hand. I would like it to work directly from the URL.

I have seen things about "screen scraping", though I don't quite understand what it involves -- I think it mostly refers to the parsing step, which I can already do.

If it is possible to automatically download that file, that is fine as well.

If it matters at all, the pages I am reading from are stories on
Are you looking for this kind of thing?

How to download an http file?

The code in that thread uses cURL. If you search this site -- and the web -- for (e.g.) curl_easy_init, you'll find other examples and information about cURL's easy API.
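
For a rough idea of the shape, here is a minimal sketch of fetching a page into a std::string with cURL's easy API. The URL is a placeholder, error handling is kept to a minimum, and it assumes libcurl is installed and linked (e.g. with -lcurl):

#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl calls this as data arrives; we append each chunk to a std::string.
static size_t write_cb(char* ptr, size_t size, size_t nmemb, void* userdata)
{
    std::string* out = static_cast<std::string*>(userdata);
    out->append(ptr, size * nmemb);
    return size * nmemb;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    std::string page;
    // Placeholder URL -- substitute the page you actually read from.
    curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com/story.html");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);   // follow redirects
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &page);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        std::cerr << "curl error: " << curl_easy_strerror(res) << '\n';
    else
        std::cout << page;   // hand this string to your existing parser

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}

Once the page is in the string, you can feed it to the parsing code you already have instead of reading from a downloaded file.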

cURL is definitely worth considering, but there are other libraries available, as listed here: Other HTTP/FTP client Libraries

which, if you're using Windows, include WinInet:

Using Wininet
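
For comparison, a similar sketch using WinInet might look like this. Again the URL is a placeholder; it links against wininet.lib (the #pragma handles that on MSVC):

#include <windows.h>
#include <wininet.h>
#include <iostream>
#include <string>
#pragma comment(lib, "wininet.lib")

int main()
{
    // Open a WinInet session using the system's preconfigured proxy settings.
    HINTERNET hNet = InternetOpenA("MyReader/1.0",
                                   INTERNET_OPEN_TYPE_PRECONFIG,
                                   NULL, NULL, 0);
    if (!hNet) return 1;

    // Placeholder URL -- substitute the page you actually read from.
    HINTERNET hUrl = InternetOpenUrlA(hNet, "http://www.example.com/story.html",
                                      NULL, 0, INTERNET_FLAG_RELOAD, 0);
    if (!hUrl) { InternetCloseHandle(hNet); return 1; }

    std::string page;
    char buf[4096];
    DWORD read = 0;
    // InternetReadFile returns TRUE with read == 0 once all data is consumed.
    while (InternetReadFile(hUrl, buf, sizeof(buf), &read) && read > 0)
        page.append(buf, read);

    std::cout << page;   // hand this string to your existing parser

    InternetCloseHandle(hUrl);
    InternetCloseHandle(hNet);
    return 0;
}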
