HTML::LinkExtor man page on BSDi

lib::HTML::LinkExtor(3)  User Contributed Perl Documentation  lib::HTML::LinkExtor(3)

NAME
       HTML::LinkExtor - Extract links from an HTML document

SYNOPSIS
	require HTML::LinkExtor;
	$p = HTML::LinkExtor->new(\&cb, "http://www.sn.no/");
	sub cb {
	    my($tag, %links) = @_;
	    print "$tag @{[%links]}\n";
	}
	$p->parse_file("index.html");

DESCRIPTION
       HTML::LinkExtor (a link extractor) is an HTML parser
       that takes a callback routine as a parameter.  This
       routine is then called as the various link attributes
       are recognized.

       The HTML::LinkExtor is a subclass of HTML::Parser. This
       means that the document should be given to the parser by
       calling the $p->parse() or $p->parse_file() methods.

       $p = HTML::LinkExtor->new([$callback[, $base]])

       The constructor takes two optional arguments. The first is
       a reference to a callback routine. It will be called as
       links are found. If a callback is not provided, then links
       are just accumulated internally and can be retrieved by
       calling the $p->links() method. The $base is an optional
       base URL used to absolutize all URLs found.

       The callback is called with the lowercase tag name as
       first argument, and then all link attributes as separate
       key/value pairs.	 All non-link attributes are removed.
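       A minimal sketch of this callback behaviour (the markup
       and base URL below are illustrative; base resolution
       assumes the URI module is installed):

```perl
use strict;
use warnings;
use HTML::LinkExtor;

# Collect everything passed to the callback so we can inspect it.
my @found;
my $p = HTML::LinkExtor->new(
    sub {
        my ($tag, %links) = @_;
        push @found, [$tag, %links];   # e.g. ['a', href => $url]
    },
    "http://www.sn.no/",               # base URL for absolutizing
);

# The tag name arrives lowercased even though the markup is uppercase,
# and only the link attributes (here HREF) are passed through.
$p->parse('<A HREF="about.html">about</A>');
$p->eof;
```

       With a base URL supplied, the href value arrives already
       absolutized ("http://www.sn.no/about.html" when
       stringified).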

       $p->links

       Returns a list of all links found in the document.  The
       returned values will be anonymous arrays with the following
       elements:

	 [$tag, $attr => $url1, $attr2 => $url2,...]

       The $p->links method will also truncate the internal link
       list.  This means that if the method is called twice
       without any parsing in between then the second call will
       return an empty list.
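       The accumulate-then-retrieve style, and the truncation
       behaviour described above, can be sketched as follows
       (the markup is illustrative):

```perl
use strict;
use warnings;
use HTML::LinkExtor;

# No callback: links accumulate internally until links() is called.
my $p = HTML::LinkExtor->new;
$p->parse(<<'HTML');
<a href="doc.html">doc</a>
<img src="logo.gif">
HTML
$p->eof;

# Each element is an anonymous array: [$tag, $attr => $url, ...]
my @links = $p->links;
for my $link (@links) {
    my ($tag, %attr) = @$link;
    print "$tag: @{[%attr]}\n";
}

# links() also empties the internal list, so a second call
# without further parsing returns nothing.
my @again = $p->links;
```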

       Also note that $p->links will always be empty if a
       callback routine was provided when the HTML::LinkExtor
       object was created.

24/Aug/1997	       perl 5.005, patch 03			1


EXAMPLE
       This is an example showing how you can extract links as a
       document is received using LWP:

	 use LWP::UserAgent;
	 use HTML::LinkExtor;
	 use URI::URL;

	 $url = "http://www.sn.no/";  # for instance
	 $ua = new LWP::UserAgent;

	 # Set up a callback that collect image links
	 my @imgs = ();
	 sub callback {
	    my($tag, %attr) = @_;
	    return if $tag ne 'img';  # we only look closer at <img ...>
	    push(@imgs, values %attr);
	 }

	 # Make the parser.  Unfortunately, we don't know the base yet (it might
	 # be different from $url)
	 $p = HTML::LinkExtor->new(\&callback);

	 # Request document and parse it as it arrives
	 $res = $ua->request(HTTP::Request->new(GET => $url), sub {$p->parse($_[0])});

	 # Expand all image URLs to absolute ones
	 my $base = $res->base;
	 @imgs = map { url($_, $base)->abs } @imgs;

	 # Print them out
	 print join("\n", @imgs), "\n";

SEE ALSO
       the HTML::Parser manpage

AUTHOR
       Gisle Aas <aas@sn.no>

