Date: Wed, 19 Apr 2000 22:53:29 -0400 (EDT)
From: Aaron
To: Garrett Nievin
cc: Janina Sajka, ma-linux@tux.org, speakup@braille.uwo.ca
Subject: Re: Grabbing An Entire Website

Yup: wget -r www.foobar.com. Of course that gets what a browser would
"see", not the source code behind dynamic pages (unless of course it's
ColdFusion ;). If you want the source code for dynamic pages or anything
else, that will depend on the situation.

Aaron

On Wed, 19 Apr 2000, Garrett Nievin wrote:

> I think you can use wget for that; I have not done it myself.
>
> Cheers,
> Garrett
>
> On Wed, 19 Apr 2000, Janina Sajka wrote:
>
> > Hi:
> >
> > Anyone know how to auto-retrieve an entire www page hierarchy?
> >
> > I know software like ncftp and wuftp can tar up an entire directory
> > tree, but the pages I need aren't available over ftp, only http. I'd
> > hate to have to fetch them by hand one at a time, though.
>
> --
> Garrett P. Nievin
>
> Non est ad astra mollis e terris via. -- Seneca
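
For reference, a fuller version of the recursive wget invocation described
above might look like the following. The host www.foobar.com is just the
placeholder from the message, the depth of 5 is arbitrary, and exactly which
options are available depends on the wget version installed; -r, -l, -k, and
-np are the long-standing recursive-download flags.

    # Recursively fetch the site up to 5 links deep, rewrite links so the
    # saved copy can be browsed offline, and never climb above the
    # starting directory.
    wget -r -l 5 -k -np http://www.foobar.com/

In recursive mode wget saves the pages into a directory tree named after the
host (here www.foobar.com/), which can then be tarred up much like the ftp
case described in the original question.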