
How to archive old content?

This tutorial shows you how to compress (archive) old, unused web content and how to restore it later.

If the developer of the previous website is no longer available but the old site's code is still on the server, the code should be archived and then removed from the server. Because the source code of old web pages is usually no longer updated, it is likely to contain security vulnerabilities. Attackers can exploit these to upload malicious code, which can then be used, for example, to send spam.

Therefore, we recommend that you archive the contents of the old site as soon as possible and then remove it from the server. Because the site's source code may be large and contain many files, it is advisable to compress the affected folder, for example with gzip. If you want to keep the compressed archive on the server, store it in a directory that is not accessible from the web, for example /var/www/oldwebpage.tar.gz.
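As a rough sketch, and assuming purely for illustration that the old site lives in public_html/oldsite and that /var/www is not served by the web server, the whole process could look like this (only delete the original after you have verified the archive):

# create the compressed archive outside the web-accessible directory
tar -czvf /var/www/oldwebpage.tar.gz public_html/oldsite/

# remove the old, now archived code from the web-accessible directory
rm -rf public_html/oldsite/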

COMPRESS A FOLDER

After you connect to the server via SSH, you can create a backup by issuing the following command:

tar -czvf public_html_backup.tar.gz public_html/

tar is the program that creates the archive; the -c switch tells it to create one. The -z switch calls gzip to compress the archive, and -v enables verbose output, which prints the processed files to the console. The -f option specifies the archive name, in this case public_html_backup.tar.gz; you can choose any name, but it is worth keeping the .tar.gz extension. The last parameter, public_html/, is the directory whose contents you want to compress; tar works recursively by default, so all files and subdirectories end up in the archive.
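Before deleting the original directory, you may want to verify the archive. The -t switch lists the archive's contents without extracting them:

# list the files stored in the archive without extracting anything
tar -tzvf public_html_backup.tar.gz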

EXTRACT AN ARCHIVE

To decompress, use the following command:

tar -xzvf public_html_backup.tar.gz -C /public_html_backup

After the -C switch, specify the directory into which you want to extract the contents of the archive.
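Note that tar does not create the target directory for you; it must already exist. A minimal sketch, using the same example path as above:

# create the target directory first, then extract the archive into it
mkdir -p /public_html_backup
tar -xzvf public_html_backup.tar.gz -C /public_html_backup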