Web Archiving Made Simple

The Simple Web Archiver, a straightforward, open-source web archiving tool for creating personal archives of websites and the files they host, has been published on GitHub under the GNU General Public License, allowing users to use and remix the tool with minimal limitations. The tool is built in Python, provides a GUI, and uses BeautifulSoup and wget to parse websites and download files, respectively. I created the tool as part of my work as the European Studies Librarian at the UT Libraries.

Archiving websites is an important practice for anyone interested in preserving digital history. Digital media, especially media published online, are particularly vulnerable to loss, as they are often ephemeral and not preserved in an archival format. Saving born-digital materials complements the archiving, curation, and preservation of physical materials, and helps ensure that internet-based ephemera will be preserved into the future.

Why use this tool?

This tool provides an easy way to create small, personal archives that live offline. While there are many useful web archiving tools available (listed below), this program fills a gap not addressed by existing solutions. Its scope is intentionally small: it aims to create low-memory-use archives for personal use, and to be as easy to use as possible so that users with limited technical knowledge can begin using it immediately, without a complicated setup process or learning curve.

The GUI makes the tool very easy to use; following the directions on the GitHub site, one can set it up and begin archiving almost instantly. Another important aspect of this tool is the ease with which it can be modified, by those with some coding experience, to accomplish something else or to adapt to the behavior of a particular site. One example of such remixing is this code to capture Omeka sites, which downloads more of a site's content than the Simple Web Archiver does by default.
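
To give a sense of what such a remix might change, here is a minimal, hypothetical sketch of a link-collection step built on BeautifulSoup, which the tool uses for parsing. The function name, the CSS selector parameter, and the use of the requests library are assumptions for illustration and do not reproduce the repository's actual code.

# Hypothetical sketch: a remix might narrow which links get collected,
# for example by targeting a platform-specific CSS selector.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def extract_links(page_url, css_selector="a"):
    """Collect absolute URLs from a page; a remix could pass a narrower
    selector aimed at a particular platform's file links."""
    html = requests.get(page_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    links = []
    for tag in soup.select(css_selector):
        href = tag.get("href")
        if href:
            links.append(urljoin(page_url, href))
    return links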

Existing web archiving tools tend to serve different purposes than the Simple Web Archiver and to operate at different scopes. Here is a brief review of other popular software:

Internet Archive – An excellent and easy-to-use tool, but the archives created are hosted online, on the Internet Archive’s servers. Also, not all files will be preserved when crawling a website (PDFs, for example, cannot be archived).

Archive-It – Also from the Internet Archive, this is great for institutions that want online hosting. It operates on a paid model and, again, is not ideal for individual researchers or archivists who want a quick, easy archive of a site and its files.

HTTrack – A free, offline browser utility. Per the tool’s website, it “allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site’s relative link-structure.” This is good for those who want a thorough, complete archive of a site, but is not geared toward quick, low-memory-use archives to be stored for personal use.

WarcIt – This tool is entirely programmatic, and only provides WARC (Web ARChive) files. There is no GUI available.

ArchiveBox – This tool is self-hosted but programmatic. It is more robust than the Simple Web Archiver, but its features are not necessarily needed for quick, easy-to-set-up, or one-off archives. It does not save PDFs or other files.

Wget – A command-line tool for downloading content from the internet. The Simple Web Archiver primarily uses wget on the backend to grab online materials; a rough sketch of that hand-off follows below.
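
Because wget does the heavy lifting for the Simple Web Archiver, here is a minimal sketch of how a Python tool might hand a URL off to it. The subprocess wrapper, the function name, and the particular flags are illustrative assumptions, not a transcription of the Simple Web Archiver's actual call.

# Illustrative only: one way a Python tool might delegate downloading to wget.
# The exact flags and structure in the Simple Web Archiver may differ.
import subprocess

def mirror_with_wget(url, output_dir):
    """Mirror a site into output_dir using wget (wget must be on the PATH)."""
    subprocess.run(
        [
            "wget",
            "--mirror",           # recursive download with timestamping
            "--convert-links",    # rewrite links so the copy browses offline
            "--page-requisites",  # grab images, CSS, and other page assets
            "--no-parent",        # do not wander above the starting path
            "-P", output_dir,     # write everything under output_dir
            url,
        ],
        check=False,              # wget returns nonzero on partial failures
    )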

Adaptability

This tool should work well out of the gate, but there is always the possibility that certain websites, due to their specific architectures, may not be completely archived. The tool’s code was written to be open-ended and to adapt to many different types of sites, but for users with specific needs or use cases, it also provides a blueprint for creating a variation on the tool, or even an entirely new piece of software. It is likewise designed to run relatively quickly and to grab the main content of a site without unnecessarily consuming CPU power.

The code is simple and contained in a single file. The tool’s two main functions download either HTML, CSS, and other individual files, or WARCs, depending on the user’s preference.
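
The file-download path looks roughly like the wget hand-off sketched in the previous section; a WARC-producing counterpart might look like the sketch below. Again, the function name and flags are illustrative assumptions rather than the repository’s actual code.

# Illustrative sketch of a WARC-producing download path; not the actual code.
import subprocess

def download_warc(url, warc_prefix):
    """Crawl url and save the result as a single WARC file."""
    subprocess.run(
        [
            "wget",
            "--recursive",
            "--no-parent",
            "--warc-file=" + warc_prefix,  # wget appends .warc.gz to this prefix
            "--delete-after",              # keep only the WARC, not loose page files
            url,
        ],
        check=False,
    )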

The code is also released under the GNU General Public License v3.0, a strong copyleft license: the complete source code of licensed works and modifications, including larger works that build on the licensed code, must be made available under the same license. Using this license allows for a wide range of remix and reuse by users and programmers.

Conclusion

I would encourage anyone interested in web archiving to give the tool a try and to contribute in any way they’d like: by remixing the tool’s code, forking the GitHub repository, or simply using the tool and sharing feedback.
