From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from aslan.xandyj.erols.com (xandyj.erols.com [207.96.19.176])
	(1583 bytes) by braille.uwo.ca via smail with P:esmtp/D:aliases/T:pipe
	(sender: ) id for ; Wed, 19 Apr 2000 16:49:03 -0400 (EDT)
	(Smail-3.2.0.102 1998-Aug-2 #2 built 1999-Sep-5)
Received: from localhost (xandy@localhost) by aslan.xandyj.erols.com
	(8.9.3/8.9.3) with ESMTP id QAA03923; Wed, 19 Apr 2000 16:48:44 -0400
X-Authentication-Warning: aslan.xandyj.erols.com: xandy owned process doing -bs
Date: Wed, 19 Apr 2000 16:48:44 -0400 (EDT)
From: Xandy Johnson
X-Sender: xandy@aslan.xandyj.erols.com
To: Janina Sajka
cc: ma-linux@tux.org, speakup@braille.uwo.ca
Subject: Re: Grabbing An Entire Website
In-Reply-To: 
Message-ID: 
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
List-Id: 

You probably want to look into wget. It can follow links to recursively
retrieve all the documents referenced by an http URL (it also does ftp, but
you specifically said your needs were http). There are a lot of options
(e.g. maximum depth, spanning hosts, converting absolute links to relative
ones locally, etc.), so I suggest reading the man page and then asking more
specific questions if you have them.

Yours,
Xandy

On Wed, 19 Apr 2000, Janina Sajka wrote:

> Hi:
> 
> Anyone know how to auto-retrieve an entire www page hierarchy?
> 
> I know software like ncftp can, and wuftp can tar up an entire directory
> tree, but the pages I need aren't available over ftp, only http. I'd hate
> to have to get them by hand one at a time, though.
> 
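A minimal sketch of the kind of invocation described above, combining the
recursion-depth and link-conversion options (the URL is a placeholder, and
depth 5 is an arbitrary choice; see the wget man page for the full list):

```shell
# Mirror an HTTP document tree with GNU wget.
#   -r   recurse through the links in each retrieved page
#   -l 5 limit recursion to 5 levels deep (assumed example value)
#   -k   convert links in the saved copies so they work locally
#   -np  never ascend into the parent directory of the start URL
wget -r -l 5 -k -np http://www.example.com/manual/
```

The -np flag is worth noting when mirroring a subtree: without it, a link
back to the site's front page can pull in far more than you intended.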