Here's an example that uses the xurl program from
the earlier recipe to extract URLs, then pipes its
output into surl.
% xurl http://use.perl.org/~gnat/journal | surl | head
Mon Jan 13 22:58:16 2003 http://www.nanowrimo.org/
Sun Jan 12 19:29:00 2003 http://www.costik.com/gamespek.html
Sat Jan 11 20:57:03 2003 http://www.cpan.org/ports/index.html
Sat Jan 11 09:46:19 2003 http://jakarta.apache.org/gump/
Tue Jan 7 20:27:30 2003 http://use.perl.org/images/menu_gox.gif
Tue Jan 7 20:27:30 2003 http://use.perl.org/images/menu_bgo.gif
Tue Jan 7 20:27:30 2003 http://use.perl.org/images/menu_gxg.gif
Tue Jan 7 20:27:30 2003 http://use.perl.org/images/menu_ggx.gif
Tue Jan 7 20:27:30 2003 http://use.perl.org/images/menu_gxx.gif
Tue Jan 7 20:27:30 2003 http://use.perl.org/images/menu_gxo.gif
Having a variety of small programs that each do one thing and can be
combined into more powerful constructs is the hallmark of good
programming. You could even argue that xurl
should work on files, and that some other program should actually
fetch the URL's contents over the Web to feed into
xurl, churl, or
surl. That program would probably be called
gurl, but one already exists: the
LWP module suite includes a program called
lwp-request, with aliases
HEAD, GET, and
POST for running those operations from shell scripts.
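The file-based variant of xurl suggested above might look like the following sketch. It is hypothetical and deliberately simplified: where the recipe's xurl parses HTML properly (with HTML::LinkExtor), this stand-in uses a crude regex to pull href and src attribute values out of files named on the command line, or standard input.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# extract_urls: pull href/src attribute values out of an HTML string.
# A crude regex stand-in for real HTML parsing; it will miss or mangle
# URLs in unusual markup.
sub extract_urls {
    my ($html) = @_;
    my (%seen, @urls);
    while ($html =~ /\b(?:href|src)\s*=\s*["']?([^"'\s>]+)/gi) {
        push @urls, $1 unless $seen{$1}++;   # keep first occurrence only
    }
    return @urls;
}

# When run as a program, slurp files (or STDIN) and print one URL per line.
unless (caller) {
    local $/;                                # slurp whole input at once
    my $html = <>;
    print "$_\n" for extract_urls($html) if defined $html;
}
```

With a page saved to disk, it would slot into the same pipeline, something like: % xurl saved-journal.html | surl | head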