Hi Arun,
If you have to save a large number of pages automatically from a certain
website, use *HTTrack Website Copier*, available from http://httrack.com.
It gives you an exact copy of the website (with all images, IF you have
specified the correct domain name from which each image is served, and IF
the image is not dynamically served).
Browse the downloaded website's directory structure for your image.
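A hypothetical invocation might look like the following sketch; the URL, output directory, and filter patterns are placeholders, not anything from this thread:

```shell
# Mirror a site into ./mirror. The "+" filters tell HTTrack to also
# fetch files from other (sub)domains -- this is where you specify the
# domain your images are served from, as noted above.
# All names below are placeholders.
httrack "http://www.example.com/" \
    -O ./mirror \
    "+*.example.com/*" \
    "+images.example.com/*"
# The copy then lives under ./mirror/, organized by hostname.
```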
In case you have to save one or more open pages, use the Mozilla Firefox
addon *Mozilla Archive Format, with MHT and Faithful Save*, available
from
http://addons.mozilla.org/en-US/firefox/addon/mozilla-archive-format/.
This one gives you an exact and faithful copy of the pages as you viewed
them online. It even preserves the dynamically served ads.
Open the saved archive with *7-zip* (http://7-zip.org) to get your
image.
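A MAFF file is an ordinary ZIP container (and 7-zip also understands MHT, which is a MIME container), so extraction from the command line is a one-liner; the filename below is a placeholder:

```shell
# Unpack a saved archive into a folder next to it (placeholder name;
# -o gives the output directory and takes no space before its value).
7z x saved-page.maff -osaved-page
# Browse the extracted folder for the page's images, e.g. the
# saved page's *_files/ subdirectory.
```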
Regards,
Arun GP
On 07/25/2014 10:42 PM, pavithran wrote:
On Fri, Jul 25, 2014 at 5:56 PM, Arun Khan <knura9 at gmail.com> wrote:
wget -m url should work fine. 'man wget' look for the mirror option.
I try manual File->save as, automated using wget, httrack, or python
mechanize.
Sometimes you might be in a situation where some scripts won't allow you
to make a proper mirror/copy.
In such cases I found the Mozilla addon DownThemAll very useful.
https://addons.mozilla.org/en-US/firefox/addon/downthemall/
Regards,
Pavithran