APLawrence.com -  Resources for Unix and Linux Systems, Bloggers and the self-employed

LWP (Library for WWW in Perl)

If you want to automatically process web pages to extract data, you have a number of tools available. You can bring a web page down to your computer using "curl" or "wget":


curl http://aplawrence.com > mysite
 

If you don't really want the HTML, use "lynx -dump http://whatever.com > /yourstorage/whatever.txt" to get a text representation of the page. Check the man page for options you might want, like "-nolist" to suppress the numbered link list.

You can also easily be selective and pull only the data you want from a page with simple Perl scripts.

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;

my $url = 'http://aplawrence.com';
my $content = get $url;
die "Couldn't fetch $url\n" unless defined $content;
print $content;
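Once you have $content, pulling out just the pieces you care about can be as simple as a pattern match. Here's a minimal sketch on a canned snippet of HTML (the data and pattern are illustrative; for messy real-world pages a module like HTML::Parser is more robust than a regex):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Pretend this is what get() returned (illustrative sample data)
my $content = '<a href="http://aplawrence.com/Unix/">Unix</a>
<a href="http://aplawrence.com/Linux/">Linux</a>';

# Grab every href target from the page
my @links = $content =~ /href="([^"]+)"/g;
print "$_\n" for @links;
```

Run against a real page, @links would hold every link target on it.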
 

And then of course you'd process the $content as desired. It's only a little more complex if you are dealing with forms.
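For forms, you typically step up from LWP::Simple to LWP::UserAgent and POST the field names and values yourself. A hedged sketch (the URL and field names here are made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;

# Hypothetical form target and fields -- substitute the real
# action URL and input names from the form you're submitting
my $response = $ua->post('http://example.com/search.cgi',
    { query => 'lwp', max => 10 });

if ($response->is_success) {
    print $response->decoded_content;
}
else {
    die $response->status_line, "\n";
}
```

The hash reference becomes the urlencoded form body, exactly as a browser would send it.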

A book that covers LWP is reviewed at /Books/webc.html.



Got something to add? Send me email.





© Tony Lawrence








