Date: Wed, 19 Apr 2000 21:06:08 +0000
From: George Lewis
To: Janina Sajka
Cc: ma-linux@tux.org, speakup@braille.uwo.ca
Subject: Re: Grabbing An Entire Website
Message-ID: <20000419210608.B30011@schvin.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
X-Mailer: Mutt 1.0.1us
In-Reply-To: ; from janina@afb.net on Wed, Apr 19, 2000 at 04:20:33PM -0400

Use wget; it has mirroring options designed for exactly this task. There
are other applications and Perl modules that can do the job as well, but
for the general case wget is excellent. (A minimal example command is
sketched below, after the quoted message.)

George

Janina Sajka (janina@afb.net) wrote:
> Hi:
>
> Anyone know how to auto-retrieve an entire www page hierarchy?
>
> I know software like ncftp and wuftp can tar up an entire directory
> tree, but the pages I need aren't available over ftp, only http. I'd
> hate to have to fetch them by hand one at a time, though.
>
> --
>
> Janina Sajka, Director
> Information Systems Research & Development
> American Foundation for the Blind (AFB)
>
> janina@afb.net
>

--
George Lewis
http://schvin.net/
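
A minimal sketch of the suggested wget invocation, using
http://www.example.com/docs/ as a placeholder for the pages to grab
(option names can vary slightly between wget versions; check wget --help):

    wget --mirror --convert-links --no-parent http://www.example.com/docs/

Here --mirror turns on recursive retrieval with timestamping,
--convert-links rewrites links in the downloaded pages so they work when
browsed locally, and --no-parent keeps wget from climbing above the
starting directory.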